Watching Non-Corresponding Gestures Helps Learners with High Visuospatial Ability to Learn about Movements with Dynamic Visualizations: An fNIRS Study

Abstract

This study investigates whether making and observing (human) gestures facilitates learning about non-human biological movements, and whether correspondence between the gesture and the to-be-learned movement is superior to non-correspondence. Functional near-infrared spectroscopy (fNIRS) was used to examine whether gestures activate the human mirror-neuron system (hMNS) and whether this activation mediates the facilitation of learning. During learning, participants viewed animations of the to-be-learned movements twice. Depending on the condition, the second viewing was supplemented with a self-gesturing instruction (yes/no) and/or a gesture video (corresponding/non-corresponding/none). Results showed that high-visuospatial-ability learners achieved better learning outcomes with non-corresponding gestures, whereas those gestures were detrimental for low-visuospatial-ability learners. Furthermore, activation of the inferior-parietal cortex (part of the hMNS) tended to predict better learning outcomes. Unexpectedly, making gestures did not influence learning, but cortical activation differed for learners who self-gestured depending on which gesture they observed. Results and implications are discussed.
