A rational model of syntactic bootstrapping

Abstract

Children exploit regular links between the meanings of words and the syntactic structures in which they appear to learn about novel words. This phenomenon, known as syntactic bootstrapping, is thought to play a critical role in word learning, especially for words with more opaque meanings such as verbs. We present a computational word learning model which reproduces such syntactic bootstrapping phenomena after exposure to a naturalistic word learning dataset, even under substantial memory constraints. The model demonstrates how experimental syntactic bootstrapping effects constitute rational behavior given the nature of natural language input. The model unifies computational accounts of word learning and syntactic bootstrapping effects observed in the laboratory, and offers a path forward for demonstrating the broad power of the syntax–semantics link in language acquisition.
