Inferring Structured Visual Concepts from Minimal Data
- Peng Qian, MIT, Cambridge, Massachusetts, United States
- Luke Hewitt, MIT, Cambridge, Massachusetts, United States
- Josh Tenenbaum, Brain and Cognitive Sciences, MIT, Cambridge, Massachusetts, United States
- Roger Levy, Brain and Cognitive Sciences, MIT, Cambridge, Massachusetts, United States
Abstract

Humans can learn and reason about abstract concepts quickly, flexibly, and often from very little data. Here, we study how people learn novel concepts within a binary grid domain, and find that even this minimal task requires inferring highly structured parts as well as their compositional relationships. Furthermore, by varying how the learning examples are presented, we reveal distinct processes at work in learning such visual concepts: given the same images, human generalizations differ between rapid and static presentation conditions. We investigate this difference by developing several computational models that vary in their use of structured primitives and composition. We find that learning in the rapid presentation condition is best described as inference in simple models, while learning in the static presentation condition is best described as inference in a more structured space of graphics programs.
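To make the "graphics programs over binary grids" setting concrete, here is a toy sketch (not the authors' actual model; the part names `h_bar` and `v_bar` and the program representation are illustrative assumptions): a concept is treated as a small program that composes structured parts into a binary grid.

```python
# Toy sketch of a compositional graphics program over binary grids.
# This is an illustrative assumption, not the paper's implementation.

def blank(h, w):
    """An all-zero binary grid of height h and width w."""
    return [[0] * w for _ in range(h)]

def h_bar(grid, row):
    """Structured part: a horizontal bar filling the given row."""
    for c in range(len(grid[0])):
        grid[row][c] = 1
    return grid

def v_bar(grid, col):
    """Structured part: a vertical bar filling the given column."""
    for r in range(len(grid)):
        grid[r][col] = 1
    return grid

def render(program, h=5, w=5):
    """Execute a program: a sequence of (part, argument) pairs composed in order."""
    grid = blank(h, w)
    for part, arg in program:
        grid = part(grid, arg)
    return grid

# A "plus" concept expressed as the composition of two parts.
plus = [(h_bar, 2), (v_bar, 2)]
grid = render(plus)
```

Under this kind of representation, inferring a concept from examples amounts to searching the space of such programs, which is the structured hypothesis space the static-presentation models draw on.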