Integration of gaze information during online language comprehension and learning
- Kyle MacDonald, Communications, University of California Los Angeles, Los Angeles, California, United States
- Elizabeth Swanson, Psychology, Stanford University, Stanford, California, United States
- Michael Frank, Psychology, Stanford University, Stanford, California, United States
Abstract
Face-to-face communication provides access to visual information that can support language processing. But do listeners automatically seek social information without regard to the language processing task? Here, we present two eye-tracking studies that ask whether listeners' knowledge of word-object links changes how they actively gather a social cue to reference (eye gaze) during real-time language processing. First, when processing familiar words, children and adults did not delay their gaze shifts to seek a disambiguating gaze cue. When processing novel words, however, children and adults fixated longer on a speaker who provided a gaze cue, which led to an increase in looking to the named object and less looking to the other objects in the scene. These results suggest that listeners use their knowledge of object labels when deciding how to allocate visual attention to social partners, which in turn changes the visual input to language processing mechanisms.