While processing spoken language, people look towards relevant objects, and the time course of their gaze can inform us about online language processing (Tanenhaus et al., 1995). Here, we investigate lexical recognition in British Sign Language (BSL) using a visual world paradigm, in the first such study of a signed language. Comprehension of spoken words and signs could be driven by temporal constraints regardless of modality (“first in, first processed”), or by perceptual salience, which differs for speech (perceived auditorily) and sign (perceived visually). Deaf BSL signers looked more often at semantically related distractor pictures than at unrelated pictures, replicating studies using acoustically presented speech. For phonologically related pictures, gaze increased only for those sharing visually salient phonological features (i.e., location and movement). Results are discussed in the context of language processing in different modalities. Overall, we conclude that lexical processing for both speech and sign is likely driven by perceptual salience, and that potential differences in processing emerge from differences between the visual and auditory systems.