Learning Grammar via Statistical Mechanism

Abstract

Adults' learning of grammatical dependencies was investigated with an artificial language consisting of shapes (e.g., circles and squares) and a novel prediction task. The grammar yielded simple sentences and complex sentences with an embedded clause, resulting in a non-adjacent agreement relation. Similar to Elman's (1993) Simple Recurrent Network (SRN), sentences were concatenated and presented one shape at a time to participants, who predicted the next shape while the preceding seven shapes remained visible. The sentences represented either a staged learning condition, with simple sentences occurring before complex ones, or a mixed condition with simple and complex sentences randomly ordered. Like the SRN, the participants' token predictions were frequently incorrect; accuracy was therefore assessed by whether each prediction was grammatical. Accuracy was above chance in both the staged and mixed conditions, demonstrating the beneficial effect of prediction errors, and it was significantly higher in the staged condition, where the stronger local contingencies facilitated category learning.
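The next-token prediction paradigm borrowed from Elman (1993) can be sketched as a minimal Simple Recurrent Network: a hidden layer whose previous state is copied back as context, trained one step at a time to predict the upcoming token. The sketch below is illustrative only; the token ids and toy sequence are hypothetical stand-ins for the study's shape stimuli, not its actual grammar, and training backpropagates a single step (treating the copied context as a fixed input), as in Elman's original setup.

```python
import numpy as np

# Minimal Elman-style SRN for next-token prediction (sketch, not the
# study's model). Tokens 0..4 are hypothetical stand-ins for shapes.
rng = np.random.default_rng(0)
V, H = 5, 16                      # vocabulary size, hidden units
Wxh = rng.normal(0, 0.1, (H, V))  # input -> hidden
Whh = rng.normal(0, 0.1, (H, H))  # context (previous hidden) -> hidden
Why = rng.normal(0, 0.1, (V, H))  # hidden -> output
lr = 0.1

def one_hot(i):
    v = np.zeros(V)
    v[i] = 1.0
    return v

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy corpus: concatenated "sentences" over token ids (hypothetical).
corpus = [0, 1, 2, 0, 1, 2, 3, 4, 2, 0, 1, 2] * 50

h = np.zeros(H)
for t in range(len(corpus) - 1):
    x, target = one_hot(corpus[t]), corpus[t + 1]
    h_prev = h
    h = np.tanh(Wxh @ x + Whh @ h_prev)   # context layer = previous h
    p = softmax(Why @ h)                  # predicted next-token distribution
    # Cross-entropy gradient; backprop one step only, with the copied
    # context treated as a fixed input (Elman-style training).
    dy = p - one_hot(target)
    dh = (Why.T @ dy) * (1 - h ** 2)
    Why -= lr * np.outer(dy, h)
    Wxh -= lr * np.outer(dh, x)
    Whh -= lr * np.outer(dh, h_prev)
```

Like the participants, the network is scored on its predictions: exact-token accuracy stays imperfect wherever the grammar allows several continuations, which is why grammaticality of the prediction, rather than token identity, is the more informative measure.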

