A Bayesian Framework for Learning Words From Multiword Utterances


Current computational models of word learning exploit correspondences between words and observed referents, but cannot yet leverage, as human learners do, hypotheses about the meanings of other words in the lexicon. Here we develop a Bayesian framework for word learning that learns a lexicon from multiword utterances. In a set of three simulations we demonstrate the framework's functionality, its consistency with experimental work, and its superior performance on certain learning tasks relative to a Bayesian word learning model that infers the meaning of each word independently of all others. This framework represents a first step toward modeling the potential synergies between referential and distributional cues in word learning.
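The baseline class of model the abstract contrasts against, in which each word's meaning is inferred independently from word-referent co-occurrence, can be illustrated with a toy sketch. The scenes, referent names, and noise parameter below are hypothetical illustrations, not taken from the paper; the sketch assumes a uniform prior over candidate referents and a simple presence/absence likelihood:

```python
# Hypothetical toy sketch (not the paper's actual model): independent
# Bayesian cross-situational inference of a single word's referent.

# Each observation: the word "dax" is heard while these objects are visible.
observations = [
    {"ball", "dog"},
    {"ball", "cup"},
    {"ball", "dog", "cup"},
]
referents = {"ball", "dog", "cup"}

def posterior(observations, referents, noise=0.05):
    """P(meaning | scenes) ∝ P(meaning) * Π_scenes P(scene | meaning).

    Likelihood per scene: (1 - noise) if the hypothesized referent is
    present, `noise` otherwise. The uniform prior cancels on normalization.
    """
    scores = {}
    for m in referents:
        like = 1.0
        for scene in observations:
            like *= (1.0 - noise) if m in scene else noise
        scores[m] = like
    z = sum(scores.values())
    return {m: s / z for m, s in scores.items()}

post = posterior(observations, referents)
print(max(post, key=post.get))  # "ball": the only referent in every scene
```

Because each word is scored in isolation here, the model cannot use a hypothesis such as "'ball' already means ball" to rule out candidate referents for other words in the same utterance; exploiting such cross-word constraints is what the paper's multiword framework adds.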