Learning to Express Left-Right & Front-Behind in a Sign versus Spoken Language

Beyza Sumer, Radboud University Nijmegen & International Max Planck Research School for Language Sciences, Nijmegen, The Netherlands
Pamela Perniss, Deafness Cognition and Language Research Center, University College London
Inge Zwitserlood, Radboud University Nijmegen, The Netherlands
Asli Ozyurek, Radboud University Nijmegen & Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands

Abstract

Developmental studies show that children learning spoken languages take longer to acquire viewpoint-dependent spatial relations (e.g., left-right, front-behind) than relations that are not viewpoint-dependent (e.g., in, on, under). The current study investigates how children learn to express viewpoint-dependent relations in a sign language, where spatial relations can be depicted in an analogue manner in the space in front of the body or conveyed with body-anchored signs (e.g., tapping the right or left hand/arm to mean right or left). Our results indicate that the visual-spatial modality may facilitate learning to express these spatial relations (especially the encoding of left-right) in a sign language (i.e., Turkish Sign Language) compared to a spoken language (i.e., Turkish).

