Capturing human category representations by sampling in deep feature spaces
- Joshua Peterson, UC Berkeley, Berkeley, California, United States
- Jordan Suchow, Department of Psychology, UC Berkeley, Berkeley, California, United States
- Krisha Aghi, University of California, Berkeley, Berkeley, California, United States
- Alexander Ku, Department of Psychology, University of California, Berkeley, Berkeley, California, United States
- Tom Griffiths, University of California, Berkeley, Berkeley, California, United States
Abstract: Understanding how people represent categories is a core problem in cognitive science. Decades of research have yielded a variety of formal theories of categories, but validating them with naturalistic stimuli is difficult. The challenge is that human category representations cannot be directly observed, and running informative experiments with naturalistic stimuli such as images requires a workable representation of these stimuli. Deep neural networks have recently been successful in solving a range of computer vision tasks and provide a way to compactly represent image features. Here, we introduce a method to estimate the structure of human categories that combines ideas from cognitive science and machine learning, blending human-based algorithms with state-of-the-art deep image generators. We provide qualitative and quantitative results as a proof-of-concept for the method's feasibility. Samples drawn from human distributions rival those from state-of-the-art generative models in quality and outperform alternative methods for estimating the structure of human categories.
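The abstract describes blending human-based sampling algorithms with deep image generators. One established human-based algorithm of this kind is Markov chain Monte Carlo with People, in which human choices between pairs of stimuli act as the chain's acceptance step; run in a generative model's latent space, each latent vector is decoded to an image before being shown to a participant. The sketch below is an illustrative reconstruction under those assumptions, not the authors' implementation: `decode`, `human_choose`, and all parameters are hypothetical stand-ins.

```python
import random


def mcmc_with_people(decode, human_choose, z0, n_steps=100, step_size=0.5):
    """Hypothetical sketch of MCMC with People in a latent feature space.

    decode: maps a latent vector to a stimulus (e.g., a generated image).
    human_choose: shown (current, proposal) stimuli, returns 1 if the
        proposal better matches the target category, else 0. In a real
        experiment this is a participant's forced-choice response.
    Returns the chain of visited latent vectors, whose samples
    approximate the human category distribution in latent space.
    """
    z = list(z0)
    chain = [list(z)]
    for _ in range(n_steps):
        # Gaussian random-walk proposal in latent space.
        proposal = [zi + random.gauss(0.0, step_size) for zi in z]
        # Decode both states to stimuli and let the judge pick one.
        if human_choose(decode(z), decode(proposal)) == 1:
            z = proposal
        chain.append(list(z))
    return chain
```

For testing, the human judge can be simulated by a deterministic chooser that prefers stimuli closer to a target point; with a real generative model, `decode` would be the generator network and `human_choose` a trial in a behavioral experiment.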