Multimodal Event Knowledge in Online Sentence Comprehension: the Influence of Visual Context on Anticipatory Eye Movements

Abstract

People tend to predict incoming words during online sentence comprehension based on their knowledge of real-world events, cued by the preceding linguistic context. We used the visual world paradigm to investigate how event knowledge activated by an agent-verb pair is integrated with multimodal information about the referent that fits the patient role. We found that, during the verb time window, participants looked significantly more at referents that were expected given the agent-verb pair. The results are consistent with the assumption that event knowledge involves fine-grained details about the multimodal features of typical event participants. The knowledge activated by the agent is compositionally integrated with the knowledge cued by the verb to drive anticipatory eye movements during online sentence comprehension, allowing people to predict not only the incoming lexical item but also the visual features of its referent.
