How do listeners integrate multiple sources of information in order to accurately anticipate turn endings? In two experiments using synthesised speech and a virtual agent, we examined the role of verbal and gaze information in a turn-end anticipation task. Listeners were as accurate at anticipating turn ends with the synthesised voice as with human speakers (Experiment 1). However, the direction and timing of the agent's gaze had little influence on listeners' accuracy (Experiment 2). Overall, these findings support the idea that anticipation of turn ends relies primarily, but not exclusively, on verbal content.