Seeing the Meaning: Vision Meets Semantics in Solving Pictorial Analogy Problems
- Hongjing Lu, UCLA, Los Angeles, California, United States
- Qing Liu, Johns Hopkins University, Baltimore, Maryland, United States
- Nicholas Ichien, Psychology, University of California, Los Angeles, Los Angeles, California, United States
- Alan Yuille, Johns Hopkins University, Baltimore, Maryland, United States
- Keith Holyoak, Department of Psychology, University of California, Los Angeles, Los Angeles, California, United States
Abstract
We report a first effort to model the solution of meaningful four-term visual analogies by combining a machine-vision model (ResNet50-A), which classifies pixel-level images into object categories, with a cognitive model (BART), which takes semantic representations of words as input and identifies the semantic relation instantiated by a word pair. Each model achieves above-chance performance in selecting the best analogical option from a set of four. However, combining the visual and semantic models increases analogical performance above the level achieved by either model alone. The contribution of vision to reasoning may thus extend beyond simply generating verbal representations from images. These findings provide a proof of concept that a comprehensive model can solve semantically rich analogies from pixel-level inputs.
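The model combination described in the abstract can be sketched as a simple fusion of per-option scores from the two models. This is a hypothetical illustration only; the abstract does not specify the actual scoring or fusion rule, and all values and names below are invented for the sketch.

```python
import numpy as np

# Hypothetical scores for the four answer options, one per option
# from each model (values are illustrative, not from the paper).
visual_scores = np.array([0.20, 0.55, 0.15, 0.10])    # e.g., vision-model evidence
semantic_scores = np.array([0.30, 0.50, 0.05, 0.15])  # e.g., relation-model evidence

# Combine the two sources of evidence (here: a simple average).
combined = (visual_scores + semantic_scores) / 2

# Select the option with the highest combined score.
best_option = int(np.argmax(combined))
print(best_option)  # → 1
```

Either score vector alone could pick the right option here, but averaging them illustrates how two noisy sources of evidence can reinforce each other, consistent with the reported gain from combining the models.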