Simple Recurrent Networks and human spoken word recognition

James S. Magnuson, University of Connecticut

Abstract

A crucial problem in cognitive science, especially for speech processing, is sequence encoding. Models of spoken word recognition either ignore the problem (e.g., Norris et al., 2000), posit solutions incapable of representing repeated elements (e.g., Grossberg & Kazerounian, 2011), or "spatialize" time in possibly unrealistic ways (TRACE; McClelland & Elman, 1986). An alternative that has not been deeply explored for spoken word recognition is the Simple Recurrent Network (Elman, 1990). I trained SRNs on the TRACE lexicon with pseudo-spectral inputs, and used regression to compare fundamental effects (neighborhood, cohort, length, etc.) in SRNs vs. TRACE and TISK (Hannagan, Magnuson, & Grainger, 2013), and the fine-grained time course of those effects. In general, SRN predictions converge with TRACE and TISK, and are consistent with human behavior. However, some attested effects (e.g., short-word bias) do not emerge naturally in SRNs, calling into question their adequacy as models of human spoken word recognition.
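To make the architecture concrete, below is a minimal sketch of an Elman-style Simple Recurrent Network (Elman, 1990) that maps pseudo-spectral input frames to localist word units, in the spirit of the simulations described above. The layer sizes, lexicon size, and readout scheme are illustrative assumptions, not the settings used in the reported simulations.

```python
# Minimal Elman-style SRN sketch (assumed dimensions; not the author's settings).
import torch
import torch.nn as nn

class SRN(nn.Module):
    def __init__(self, n_spectral=16, n_hidden=200, n_words=200):
        super().__init__()
        # nn.RNN with tanh implements the classic Elman recurrence:
        #   h_t = tanh(W_ih x_t + b_ih + W_hh h_{t-1} + b_hh)
        self.recurrent = nn.RNN(n_spectral, n_hidden,
                                nonlinearity="tanh", batch_first=True)
        # Localist readout: one output unit per word in the lexicon
        # (n_words here is a placeholder, not the actual TRACE lexicon size).
        self.readout = nn.Linear(n_hidden, n_words)

    def forward(self, frames):
        # frames: (batch, time, n_spectral) pseudo-spectral input sequence
        hidden_states, _ = self.recurrent(frames)
        # Word activations at every frame, so the fine-grained time course
        # of lexical activation can be tracked, as in TRACE and TISK.
        return self.readout(hidden_states)

# Toy usage: one 20-frame input yields per-frame activations for every word.
model = SRN()
frames = torch.randn(1, 20, 16)
word_activations = model(frames)           # shape: (1, 20, n_words)
per_frame_probs = word_activations.softmax(dim=-1)
```

Because the readout is applied at every time step, the same regression analyses used for TRACE and TISK (e.g., effects of neighborhood, cohort, and length on activation over time) can be applied to the SRN's per-frame word activations.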
