Insulating Distributional Semantic Models from Catastrophic Interference
- Willa Mannering, Psychological and Brain Sciences, Indiana University, Bloomington, Indiana, United States
Abstract

Predictive neural networks are currently the most popular architecture for learning distributional semantics in machine learning and cognitive science. However, a major weakness of this architecture is catastrophic interference (CI): the sudden and complete loss of previously learned associations when encoding new ones. CI is a consequence of backpropagation: when learning sequential data, the error signal dramatically modifies the connection weights between nodes, causing rapid forgetting. CI is a particular problem for predictive semantic models of word meaning because multiple senses of a word interfere with each other. Here, we evaluate a recently proposed solution to CI from neuroscience, elastic weight consolidation, as well as a Hebbian learning architecture from the memory literature that does not produce an error signal. Both solutions are evaluated on artificial and natural language tasks for their ability to insulate a previously learned sense of a word while a new sense is learned.
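The interference mechanism sketched in the abstract can be demonstrated with a minimal toy example (this is an illustration, not the paper's model): a single linear layer is trained by gradient descent on one word-sense association and then, sequentially, on a second. The vectors, dimensions, and learning parameters below are invented for the sake of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny one-layer predictive network: input word vector -> predicted context.
W = rng.normal(scale=0.1, size=(8, 8))

word = rng.normal(size=8)
word /= np.linalg.norm(word)        # input vector for an ambiguous word
sense1 = rng.normal(size=8)         # context vector for the first sense
sense2 = rng.normal(size=8)         # context vector for the second sense

def train(W, x, target, steps=200, lr=0.1):
    # Plain gradient descent on squared error -- the backpropagated
    # error signal that drives catastrophic interference.
    for _ in range(steps):
        err = W @ x - target
        W = W - lr * np.outer(err, x)
    return W

W = train(W, word, sense1)
loss_sense1_before = np.sum((W @ word - sense1) ** 2)

W = train(W, word, sense2)          # sequential training on the new sense
loss_sense1_after = np.sum((W @ word - sense1) ** 2)

print(loss_sense1_before, loss_sense1_after)
```

After the second training phase the network's output for the word has been pulled onto the new sense, so the error on the first sense jumps from near zero to roughly the distance between the two sense vectors: the rapid forgetting the abstract describes.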