The literature currently contains a dichotomy in explaining how humans learn lexical semantic representations for words. Theories generally propose that lexical semantics are learned either through perceptual experience or through exposure to regularities in language. We propose here a model that integrates these two information sources. The model uses the global structure of memory to exploit the redundancy between language and perception, generating perceptual representations for words with which the model has no perceptual experience. We test the model on a variety of datasets from grounded cognition experiments.
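One way to make the abstract's central idea concrete is a minimal sketch of similarity-based inference: if language and perception are redundant, then an ungrounded word's perceptual representation can be approximated by averaging the perceptual vectors of grounded words, weighted by linguistic similarity. The function name, the cosine similarity measure, and the sharpening exponent `lam` below are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two vectors (assumed similarity measure).
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def infer_perceptual(ling_target, ling_known, perc_known, lam=3):
    """Hypothetical inference of a perceptual vector for an ungrounded word.

    ling_target: linguistic vector of the ungrounded word.
    ling_known:  (n, d) linguistic vectors of perceptually grounded words.
    perc_known:  (n, p) perceptual vectors of those same words.
    lam:         exponent that sharpens weighting toward the most
                 linguistically similar neighbors (an assumed parameter).
    """
    sims = np.array([cosine(ling_target, v) for v in ling_known])
    weights = np.clip(sims, 0.0, None) ** lam  # ignore dissimilar words
    weights /= weights.sum()
    # Redundancy assumption: linguistic neighbors share perceptual structure,
    # so a weighted average in perceptual space approximates the missing vector.
    return weights @ perc_known

# Toy demonstration with random vectors.
rng = np.random.default_rng(0)
ling = rng.normal(size=(5, 10))   # linguistic space (5 words, 10 dims)
perc = rng.normal(size=(5, 8))    # perceptual space (5 words, 8 dims)
target = ling[0] + 0.1 * rng.normal(size=10)  # near word 0 linguistically
estimate = infer_perceptual(target, ling, perc)
```

With a large `lam`, the weighting approaches a nearest-neighbor lookup; smaller values blend perceptual information across many linguistic neighbors, which is the sense in which the global structure of memory, rather than any single association, drives the inference.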