Human-object interaction understanding without objects

Abstract

During object manipulation, the actor’s eye movements are directed to the target of the interaction and to the sites relevant to it. Eye movements during grasping observation are influenced by low-level motor information, helping observers infer the target from hand shape. In an eye-tracking experiment, we investigated which factors influence understanding when observing bimanual object interactions in which no objects are visible and only the movements, reproduced by an avatar, can be seen. Participants watched ten different actions (e.g., pouring water from a bottle into a cup) and identified each one from ten possible answers. Viewing perspective was also varied (frontal, side, head-centered). Preliminary results show higher response accuracy for the frontal perspective. During the interaction phase, participants spent more time fixating near the interaction point between the hands, where the objects would be, than on the individual hands, suggesting that this is the most informative vantage point for making sense of the observed action in the absence of other cues.
