Sparse Population Code Models of Word Learning in Concept Drift

Abstract

Computational modeling has served as a powerful tool for studying cross-situational word learning. Previous research has focused on convergence behavior in static environments, ignoring the dynamic cognitive aspects of concept change. Here we investigate concept drift in word learning in story-telling situations. Informed by findings in cognitive neuroscience, we hypothesize that a large ensemble of sparse codes can flexibly represent and robustly trace drifting concepts. We experimentally test this population coding hypothesis on children's cartoon videos. Our results show that learning the meanings of words over time is hard, especially when concepts evolve slowly, but that sparse population coding handles the concept drift problem effectively, whereas hypothesis elimination and simple parametric models struggle.

