# Deconfounding Hypothesis Generation and Evaluation in Bayesian Models

- Elizabeth Baraff Bonawitz, *University of California, Berkeley*
- Thomas L. Griffiths, *University of California, Berkeley*

## Abstract

Bayesian models of cognition are typically used to describe human
learning and inference at the computational level, identifying which hypotheses
people should select to explain observed data given a particular set of inductive
biases. However, such an analysis can be consistent with human behavior even if
people are not actually carrying out exact Bayesian inference. We analyze a
simple algorithm by which people might be approximating Bayesian inference, in
which a limited set of hypotheses are generated and then evaluated using Bayes'
rule. Our mathematical results indicate that a purely computational-level
analysis of learners using this algorithm would confound the distinct processes
of hypothesis generation and hypothesis evaluation. We use a causal learning
experiment to establish empirically that the processes of generation and
evaluation can be distinguished in human learners, demonstrating the importance
of recognizing this distinction when interpreting Bayesian models.
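The generate-then-evaluate algorithm the abstract describes can be sketched in a few lines. The following is a minimal illustrative sketch, not the paper's implementation: the three hypotheses, their priors, and their likelihoods are invented numbers, and generation is modeled as sampling from the prior purely for concreteness.

```python
import random

# Toy setup (illustrative numbers only): three candidate hypotheses,
# their prior probabilities, and the likelihood of some observed data.
PRIOR = {"A": 0.5, "B": 0.3, "C": 0.2}
LIKELIHOOD = {"A": 0.1, "B": 0.6, "C": 0.9}

def generate_hypotheses(n, rng):
    """Generation step: draw n hypotheses (with replacement) from the prior,
    keeping the distinct ones. Rarely generated hypotheses may be missing."""
    names = list(PRIOR)
    weights = [PRIOR[h] for h in names]
    return set(rng.choices(names, weights=weights, k=n))

def evaluate(hypotheses):
    """Evaluation step: apply Bayes' rule restricted to the generated set."""
    scores = {h: PRIOR[h] * LIKELIHOOD[h] for h in hypotheses}
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()}

def approximate_posterior(n_generated, n_runs=10_000, seed=0):
    """Average selection probabilities over many runs. Generation and
    evaluation jointly determine the outcome, so from these averages alone
    the two processes are confounded."""
    rng = random.Random(seed)
    counts = {h: 0.0 for h in PRIOR}
    for _ in range(n_runs):
        posterior = evaluate(generate_hypotheses(n_generated, rng))
        for h, p in posterior.items():
            counts[h] += p
    return {h: c / n_runs for h, c in counts.items()}
```

With `n_generated=1` the data never matter: each run endorses its single generated hypothesis, so the average selection probabilities simply reproduce the prior. As `n_generated` grows, the full hypothesis space is almost always generated and the averages approach the exact Bayesian posterior. Intermediate values mix the two processes, which is the confound the paper's analysis targets.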
