Cross-linguistically shared spatial mappings of abstract concepts guide non-signers’ inferences about sign meaning

Abstract

Abstract concepts like valence and magnitude are represented through space in co-speech gestures and linguistic metaphors. Recent work has shown that such spatial mappings are also reflected in the motion patterns of signs in sign languages, suggesting that sign languages may reveal cross-linguistically shared ways of spatializing abstract concepts. We probed this possibility further by testing whether non-signers are sensitive to vertical spatial mappings encoded in signs in American Sign Language (ASL). Non-signers were presented with videos of ASL signs and asked to judge the likely valence and magnitude of their meanings. Judgments were well predicted by the direction of hand movement along the vertical axis but not along other axes, implying that participants spontaneously relied on vertical mappings of valence and magnitude to make semantic inferences. These findings suggest that sign languages encode spatial mappings of abstract concepts that are readily accessible to non-signers, and potentially useful for language learning.
