Eye gaze is a powerful source of information in face-to-face conversation. Our previous work showed that addressees can use this information rapidly and flexibly, speeding referential disambiguation when the speaker was clearly looking at a target object rather than a similar competitor. However, monitoring eye gaze demands attentional resources, which may impose costs during collaborative tasks. Because the most reliable indication of gaze direction comes from looking near a speaker's eyes, whereas head orientation can be judged, albeit less precisely, from peripheral vision (e.g., Loomis et al., 2008), we compared addressees' ability to disambiguate instructions to pick up objects when a speaker's eyes were visible or were obscured by mirrored sunglasses, relative to a baseline in which the speaker was completely hidden. We present analyses of addressees' eye movements that reveal the extent to which they benefit from attending to, and coordinating the use of, the speaker's disambiguating eye versus head movements during language comprehension.