Catastrophic Interference in Neural Embedding Models

Abstract

The semantic memory literature has recently seen the emergence of predictive neural network models that use principles of reinforcement learning to create a "neural embedding" of word meaning when trained on a language corpus. These models have taken the field by storm, partially due to the resurgence of connectionist architectures, but also due to their remarkable success at fitting human data. However, predictive embedding models also inherit the weaknesses of their ancestors. In this paper, we explore the effect of catastrophic interference (CI), long known to be a flaw with neural network models, on a modern neural embedding model of semantic representation (word2vec). We use homonyms as an index of bias depending on the order in which a corpus is learned. If the corpus is learned in random order, the final representation will tend towards the dominant sense of the word (bank → money) as opposed to the subordinate sense (bank → river). However, if the subordinate sense is presented to the network after learning the dominant sense, CI produces profound forgetting of the dominant sense and the final representation strongly tends towards the more recent subordinate sense. We demonstrate the impact of CI and sequence of learning on the final neural embeddings learned by word2vec in both an artificial language and in an English corpus. Embedding models show a strong CI bias that is not shared by their algebraic cousins.
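The sequential-training effect described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (not the paper's actual corpus, model, or training regime): a toy skip-gram model with a full softmax, first trained only on the dominant sense (bank–money), then trained only on the subordinate sense (bank–river) with no interleaving. Because later gradient updates overwrite the weights that encoded the earlier sense, the model's prediction of "money" given "bank" collapses, which is the catastrophic-interference pattern the abstract describes.

```python
import numpy as np

# Toy skip-gram with full softmax; a hypothetical sketch of catastrophic
# interference under sequential training, NOT the paper's implementation.
rng = np.random.default_rng(0)
vocab = ["bank", "money", "river"]
idx = {w: i for i, w in enumerate(vocab)}
V, D, lr = len(vocab), 8, 0.5

W = rng.normal(0, 0.1, (V, D))   # input (center-word) embeddings
C = rng.normal(0, 0.1, (V, D))   # output (context-word) embeddings

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def train(pairs, epochs=200):
    """Gradient descent on -log p(context | center) for each pair."""
    global W, C
    for _ in range(epochs):
        for center, context in pairs:
            c, o = idx[center], idx[context]
            p = softmax(C @ W[c])           # predicted context distribution
            err = p.copy(); err[o] -= 1.0   # softmax cross-entropy gradient
            gC = np.outer(err, W[c])        # gradient w.r.t. output matrix
            gW = C.T @ err                  # gradient w.r.t. center vector
            C -= lr * gC
            W[c] -= lr * gW

def p_context(center, context):
    return softmax(C @ W[idx[center]])[idx[context]]

# Phase 1: dominant sense only (bank ~ money).
train([("bank", "money"), ("money", "bank")])
after_phase1 = p_context("bank", "money")

# Phase 2: subordinate sense only (bank ~ river), no rehearsal of phase 1.
train([("bank", "river"), ("river", "bank")])

print(f"p(money|bank) after phase 1: {after_phase1:.2f}")
print(f"p(money|bank) after phase 2: {p_context('bank', 'money'):.2f}")
print(f"p(river|bank) after phase 2: {p_context('bank', 'river'):.2f}")
```

After phase 2 the probability of the dominant-sense context drops sharply while the subordinate sense dominates, mirroring the forgetting effect the abstract reports for word2vec. Interleaving the two senses (random presentation order) instead yields a blend weighted toward the dominant sense.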
