A comprehensive model of spoken word recognition must be multimodal: Evidence from studies of language-mediated visual attention

Alastair Smith, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands
Padraic Monaghan, Lancaster University
Falk Huettig, Max Planck Institute for Psycholinguistics

Abstract

When processing language, the cognitive system has access to information from a range of modalities (e.g. auditory, visual) to support comprehension. Language-mediated visual attention studies have shown that listeners are sensitive to phonological, visual, and semantic similarity when processing a word. In a computational model of language-mediated visual attention that treats spoken word processing as the parallel integration of information from phonological, semantic, and visual processing streams, we simulate such effects of competition within modalities. Our simulations generated untested predictions of stronger and earlier effects of visual and semantic similarity, compared to phonological similarity, around the rhyme of the word. Two visual world studies confirmed these predictions. The model and behavioral studies suggest that, during spoken word comprehension, multimodal information can be recruited rapidly to constrain lexical selection, to the extent that phonological rhyme information may exert little influence on this process.
