Several accounts of semantic representation have relied on a contextually insensitive similarity space, including recent structured probabilistic approaches (Kemp & Tenenbaum, 2009). However, evidence for these models comes from participants' inferences about a single kind of property: biological properties. We show that the training set Kemp and Tenenbaum (2009) used to extract a single tree structure in fact contains additional structure in the relations between properties of different types. Moreover, participants asked to make inferences about different kinds of properties (biology, diet, habitat) show generalization differences that reflect this additional structure. We suggest that models of semantic representation must be able to adjust their representations dynamically in a context-sensitive manner, and we present simulation results from a model that can do so.