An Instance Theory of Distributional Semantics

Abstract

Abstraction to a single prototypical representation is a core principle of Distributional Semantic Models (DSMs), which learn semantic representations for words by applying dimension reduction to statistical redundancies in language. While the learning mechanisms for semantic abstraction vary widely across the many DSMs in the literature, they are essentially all prototype models in that they create a single abstract representation of a word’s meaning. The prototype approach stands in stark contrast to work in the field of categorization, which has converged on the importance of instance models. In comparison to prototype models, instance-based models assume only an episodic store and, rather than applying abstraction mechanisms at learning, argue that meaning emerges in the act of retrieval. We cash this idea out by presenting and evaluating an instance theory of distributional semantics, and by demonstrating that it can explain diverging patterns of homonymous words (words with multiple unrelated meanings) that classic “abstraction-at-learning” models simply cannot, as a consequence of their architectural assumptions.
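The abstraction-at-retrieval idea can be sketched in a few lines. The following is a minimal illustration, not the paper’s implementation: it assumes a MINERVA 2-style echo process, and the vocabulary, four-sentence corpus, one-hot word vectors, and cubed-similarity activation are all toy assumptions chosen for clarity (instance models typically use high-dimensional random environmental vectors instead of one-hot codes).

```python
import numpy as np

# Toy vocabulary with one-hot "environmental" vectors (an illustrative
# assumption; real models use high-dimensional random vectors).
words = ["bank", "money", "loan", "river", "water", "deposit", "fish"]
env = {w: np.eye(len(words))[i] for i, w in enumerate(words)}

# Episodic store: each sentence is kept as a separate trace (the sum of
# its word vectors). No abstraction is applied at learning.
corpus = [
    ["bank", "money", "loan"],
    ["bank", "money", "deposit"],
    ["bank", "river", "water"],
    ["bank", "river", "fish"],
]
memory = np.array([sum(env[w] for w in s) for s in corpus])

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def echo(probe, tau=3):
    # Meaning is constructed at retrieval: every stored trace is
    # activated in proportion to its similarity to the probe (raised to
    # a power tau), and the echo is the activation-weighted sum of all
    # traces.
    acts = np.array([cos(t, probe) ** tau for t in memory])
    return acts @ memory

# Probing with "bank" alone returns a blend of both senses; adding
# "river" to the probe shifts the echo toward the water-related sense,
# the kind of context-dependent disambiguation a single prototype
# vector cannot express.
e_alone = echo(env["bank"])
e_river = echo(env["bank"] + env["river"])
print(cos(e_alone, env["water"]), cos(e_river, env["water"]))
```

Because each sense of a homonym is preserved in its own traces, the same store yields different "meanings" for different probes, whereas a prototype model collapses both senses into one vector at learning time.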
