Knowledge transfer in a probabilistic Language Of Thought

Abstract

In many domains, people are able to transfer abstract knowledge across objects, events, or contexts that are superficially dissimilar, enabling striking new insights and inferences. We provide evidence that this ability is naturally explained as the addition of new primitive elements to a compositional mental representation, such as that posited by the probabilistic Language Of Thought (LOT). We conducted a transfer-learning experiment in which participants learned about two sequences in succession. We show that participants' ability to learn the second sequence is affected by the first sequence they saw. We test two probabilistic models of how algorithmic knowledge is transferred from the first sequence to the second: one model rationally updates the prior probability of the primitive operations in the LOT; the other stores previously likely hypotheses as new primitives. Both models outperform baselines in explaining behavior, with human subjects appearing to transfer entire hypotheses when they can, and otherwise updating the prior on primitives.

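To illustrate the contrast between the two transfer mechanisms described above, the following is a minimal sketch, assuming a toy setup in which a hypothesis is a sequence of primitive symbols and the posterior after the first task is summarized as weighted samples. All names, parameters (alpha, top_k, new_weight), and the toy grammar are assumptions for illustration only, not the authors' implementation.

```python
from collections import Counter

def update_primitive_prior(grammar_weights, posterior_samples, alpha=1.0):
    """Transfer model 1: rationally update the prior over primitives.

    Counts how often each primitive appears in hypotheses weighted by their
    posterior after the first sequence, then reweights the production
    probabilities (with pseudocount alpha) used for the second sequence."""
    counts = Counter()
    for hypothesis, weight in posterior_samples:
        for primitive in hypothesis:          # hypothesis = list of primitive symbols
            counts[primitive] += weight
    total = sum(counts[p] + alpha for p in grammar_weights)
    return {p: (counts[p] + alpha) / total for p in grammar_weights}

def add_hypotheses_as_primitives(grammar_weights, posterior_samples,
                                 top_k=3, new_weight=0.05):
    """Transfer model 2: store previously likely hypotheses as new primitives.

    The top_k highest-posterior hypotheses from the first sequence are added
    to the grammar as single reusable chunks, so an entire earlier solution
    can be invoked as one primitive when learning the second sequence."""
    best = sorted(posterior_samples, key=lambda hw: hw[1], reverse=True)[:top_k]
    extended = dict(grammar_weights)
    for hypothesis, _ in best:
        extended[tuple(hypothesis)] = new_weight   # whole hypothesis becomes a chunk
    z = sum(extended.values())                     # renormalize to sum to 1
    return {p: w / z for p, w in extended.items()}

# Toy example: four primitives and a hypothetical posterior from sequence 1
grammar = {"double": 0.25, "add_one": 0.25, "repeat": 0.25, "reverse": 0.25}
posterior = [(["double", "add_one"], 0.6),
             (["repeat", "double"], 0.3),
             (["reverse"], 0.1)]

print(update_primitive_prior(grammar, posterior))
print(add_hypotheses_as_primitives(grammar, posterior, top_k=1))
```

Under this sketch, the first model reweights individual primitives in proportion to their posterior usage, while the second makes whole earlier solutions available as single chunks, mirroring the two transfer accounts compared in the abstract.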