TRACX2: a RAAM-like autoencoder modeling graded chunking in infant visual-sequence learning

Abstract

Even newborn infants can extract structure from a stream of sensory inputs, yet how this is achieved remains largely a mystery. We present a connectionist autoencoder model, TRACX2, that learns to extract sequence structure by gradually constructing chunks, storing these chunks in a distributed manner across its synaptic weights, and recognizing them when they recur in the input stream. Chunks are graded rather than all-or-none, and during learning their component parts become ever more tightly bound together. TRACX2 successfully models data from four experiments in the infant visual statistical-learning literature, including tasks involving low-salience embedded chunk items, part-sequences, and illusory items. The model captures performance differences across ages by tuning a single learning-rate parameter. These results suggest that infant statistical learning is underpinned by the same domain-general learning mechanism that operates in auditory statistical learning and, potentially, in adult artificial grammar learning.
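The core idea of the abstract, that an autoencoder can come to "recognize" frequently co-occurring item pairs through its reconstruction error, can be illustrated with a minimal sketch. The code below is a hypothetical toy model, not the TRACX2 implementation: it trains a small tanh autoencoder on adjacent item pairs from a stream built from two bisyllabic "words", and then shows that a within-word pair reconstructs better than a pair that never occurred. All names, sizes, and the learning rate are illustrative assumptions, and the sketch omits TRACX2's recursive step of feeding recognized chunk representations back into the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical parameters): 4 one-hot items, a pair of
# adjacent items is autoencoded through a small hidden layer.
N_ITEMS, HIDDEN, LR = 4, 8, 0.1
items = np.eye(N_ITEMS)

W1 = rng.normal(0, 0.1, (2 * N_ITEMS, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, 2 * N_ITEMS))

def forward(x):
    h = np.tanh(x @ W1)
    return h, np.tanh(h @ W2)

def train_pair(left, right):
    """One online SGD step on the (left, right) input pair."""
    global W1, W2
    x = np.concatenate([items[left], items[right]])
    h, out = forward(x)
    err = out - x                          # reconstruction error
    d_out = err * (1 - out ** 2)           # backprop through tanh
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= LR * np.outer(h, d_out)
    W1 -= LR * np.outer(x, d_h)

def recon_error(left, right):
    """Mean absolute reconstruction error for a candidate chunk."""
    x = np.concatenate([items[left], items[right]])
    _, out = forward(x)
    return np.abs(out - x).mean()

# A stream of two "words" AB and CD in random order, so the
# within-word pair (A, B) recurs far more often than, say, (A, C),
# which never occurs at all.
words = [(0, 1), (2, 3)]
stream = [s for _ in range(400) for s in words[rng.integers(2)]]
for left, right in zip(stream, stream[1:]):
    train_pair(left, right)

# The frequently seen chunk (A, B) is reconstructed with lower error
# than the never-seen pair (A, C) -- a graded signature of chunking.
print(recon_error(0, 1), recon_error(0, 2))
```

Low reconstruction error here plays the role of graded chunk recognition: nothing in the network is an all-or-none chunk symbol, yet frequently co-occurring pairs are distinguishably "known" in the weights.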
