A Neural Network Model for Taxonomic Responding with Realistic Visual Inputs

Abstract

We propose a neural network model that accounts for the emergence of the taxonomic constraint in early word learning. Our proposal builds on the neurocomputational model of the taxonomic constraint by Mayor and Plunkett (2010) and overcomes one of its limitations, namely its reliance on artificially constructed, simplified stimuli. Whereas the visual stimuli in the original model are random, sparse dot patterns, in our solution they are photographic images from the ImageNet database, in which the depicted objects vary in size, color, position within the picture, point of view, and so on. We show that, despite the increased complexity of the input, the proposed model compares favorably with that of Mayor and Plunkett (2010).
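The central change over the dot-pattern setup is the visual front end: photographic images must be reduced to fixed-size activation vectors before they can drive a visual map. The sketch below shows one plausible way to do this; it is a minimal illustration, and the retina size, grayscale conversion, and function name are assumptions of this sketch, not details taken from the paper.

```python
import numpy as np
from PIL import Image

# Illustrative retina resolution; Mayor and Plunkett (2010) used sparse
# binary dot patterns, whereas here each photograph is mapped onto a
# fixed-size vector regardless of object size, color, or position.
RETINA_SIZE = (32, 32)  # assumed value, not taken from the paper

def image_to_input_vector(path: str) -> np.ndarray:
    """Convert an ImageNet-style photograph into a normalized
    activation vector for the visual pathway (hypothetical helper)."""
    img = Image.open(path).convert("L")   # grayscale
    img = img.resize(RETINA_SIZE)         # fixed retina resolution
    vec = np.asarray(img, dtype=np.float64).ravel()
    return vec / 255.0                    # scale activations to [0, 1]

# Two photographs of the same category will yield different vectors,
# reflecting variation in viewpoint, lighting, and object location
# that the original dot-pattern stimuli did not exhibit.
# x = image_to_input_vector("dog_photo.JPEG")  # hypothetical file
```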

