Distributional models of semantics are a popular way of capturing the similarity between words or concepts. More recently, such models have also been used to generate properties associated with a concept; model-generated properties are typically compared against collections of semantic feature norms. In the present paper, we propose a novel way of testing the plausibility of the properties generated by a distributional model using data from a visual world experiment. We show that model-generated properties, when embedded in a sentential context, bias participants' expectations towards a semantically associated target word in real time. This effect is absent in a neutral context that contains no relevant properties.