Probability, programs, and the mind: Building structured Bayesian models of cognition

Noah Goodman, Stanford University
Josh Tenenbaum, MIT

Abstract

Human thought is remarkably flexible: we can think about infinitely many different situations despite uncertainty and novelty. Probabilistic models of cognition (Chater, Tenenbaum, & Yuille, 2006) have been successful at explaining a wide variety of learning and reasoning under uncertainty. They have borrowed tools from statistics and machine learning to explain phenomena from perception (Yuille & Kersten, 2006) to language (Chater & Manning, 2006). Traditional symbolic models (e.g. Newell, Shaw, & Simon, 1958; Anderson & Lebiere, 1998), by contrast, excel at explaining the productivity of thought, which follows from the compositionality of symbolic representations. Indeed, there has been a gradual move toward more structured probabilistic models (Tenenbaum, Kemp, Griffiths, & Goodman, 2011) that incorporate aspects of symbolic methods into probabilistic modeling. Unfortunately, this movement has resulted in a complex “zoo” of Bayesian models. We have recently introduced the idea that using programs, and particularly probabilistic programs, as the representational substrate for probabilistic modeling tames this unruly zoo, fully unifies probabilistic with symbolic approaches, and opens new possibilities in cognitive modeling. The goal of this tutorial is to introduce probabilistic models of cognition from the point of view of probabilistic programming, both as a unifying idea for cognitive modeling and as a practical tool.
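The core move described above can be sketched in a few lines of plain Python. This is a hypothetical illustration, not the tutorial's own code (work in this line typically uses a probabilistic programming language such as Church or WebPPL): a generative model is an ordinary program with random choices, and posterior inference by rejection sampling means running the program repeatedly and keeping only the runs consistent with the observed condition.

```python
import random

def flip(p=0.5):
    """Elementary random choice: True with probability p."""
    return random.random() < p

def model():
    """A tiny generative model: two independent fair coin flips."""
    a = flip()
    b = flip()
    return a, b

def rejection_query(model, condition, n=100_000):
    """Estimate P(first flip | condition) by rejection sampling:
    run the program, keep only runs where the condition holds,
    and report the fraction of kept runs where the first flip is True."""
    kept = []
    while len(kept) < n:
        a, b = model()
        if condition(a, b):
            kept.append(a)
    return sum(kept) / len(kept)

random.seed(0)
# Conditioning on "at least one heads" shifts the prior P(a) = 1/2
# to the posterior P(a | a or b) = 2/3.
posterior = rejection_query(model, lambda a, b: a or b)
print(posterior)  # close to 2/3
```

The point of the sketch is that representation and inference are decoupled: the same `rejection_query` works unchanged for any program expressible as a `model` function, which is what makes programs a unifying substrate for structured Bayesian models.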
