Domain-General Learning of Neural Network Models to Solve Analogy Task -- A Large-Scale Simulation

Abstract

Several computational models have been proposed to explain the mental processes underlying analogical reasoning. However, previous models either lack a learning component or rely on limited, artificial data for their simulations. To address these issues, we build a domain-general neural network model that learns to solve analogy tasks in different modalities, e.g., text and images. Importantly, it uses word representations and image representations computed from large-scale naturalistic corpora. The model reproduces several key findings in the analogical reasoning literature, including the relational shift and the familiarity effect, and demonstrates domain-general learning capacity. It also makes interesting predictions about cross-modal transfer of analogical reasoning that could be tested empirically. Our model takes a first step toward a computational framework that can learn analogy tasks from naturalistic data and transfer to other modalities.
