We investigated the relationship between linguistic and visual information by combining the divided visual field and blank screen paradigms. In an eye-tracking experiment, two objects appeared for 180 ms, one in the right visual field (rvf) and one in the left visual field (lvf), while participants maintained central fixation. After the objects disappeared, a word was presented auditorily; in matching trials (50%), it denoted one of the objects previously shown. Participants had to decide whether the word named a man-made or a natural entity. Findings revealed that participants were more likely to saccade toward the side of the referent object when it had been presented in the lvf than in the rvf. Moreover, saccades into the lvf targeted the object’s now-empty location more precisely. These results suggest a crucial role of the right hemisphere in activating visual representations during language processing, indicating its greater ability to use spatial indexes to retrieve relevant visual information.