The cognitive systems of visual and multimodal narratives
- Neil Cohn, Department of Communication and Cognition, Tilburg University, Tilburg, Netherlands
- Emily Coderre, Department of Psychology, University of Vermont, Burlington, Vermont, United States
- Elizabeth O'Donnell, Department of Psychology, University of Vermont, Burlington, Vermont, United States
- Aidan Osterby, Department of Psychology, Kansas State University, Manhattan, Kansas, United States
- Lester Loschky, Department of Psychology, Kansas State University, Manhattan, Kansas, United States
Abstract

Cognitive research on visual and multimodal narratives has been burgeoning in the last decade. An increasing number of researchers in the psychological sciences have turned to examining sequential images from a variety of subdisciplines, particularly those from fields of linguistics (Cohn, 2013a), discourse studies (Magliano, Larson, Higgs, & Loschky, 2015), the perceptual sciences (Foulsham, Wybrow, & Cohn, 2016; Loschky, Hutson, Smith, Smith, & Magliano, in press), and the cognitive neurosciences (Cohn, Paczynski, Jackendoff, Holcomb, & Kuperberg, 2012).

This broad coverage by different subfields of the cognitive sciences is testament to how complex sequential images can be, especially when they combine in multimodal interactions, such as with text. Indeed, visual narratives have proven to be a good testing ground for many facets of basic cognition. The presentations in this symposium highlight several emerging lines of research on visual narratives in the cognitive sciences, spanning the subfields of psycholinguistics, scene perception, cognitive neuroscience, and clinical psychology.