Can Tractable Algorithmic-level Explanations Be Evolved?

Arne Wijnia, Radboud University Nijmegen
Todd Wareham, Memorial University of Newfoundland, St. John's, NL, Canada
Iris van Rooij, Radboud University Nijmegen, Donders Institute for Brain, Cognition, and Behaviour


Computational-level theories of cognition often postulate functions that are computationally intractable (e.g., NP-hard or worse), which seems to render such theories computationally and cognitively implausible. One account of how humans may nevertheless compute intractable functions is that they exploit parameters of the inputs to these functions, allowing the functions to be computed efficiently whenever those parameters are small. Previous work has established the existence of such algorithms for various cognitive functions, but whether these algorithms can evolve in a cognitively plausible manner remains an open question. In this poster, we describe the first formal investigation of this question, relative to the constraint satisfaction model of coherence. In our investigation, we evolved neural networks for computing coherence under this model. Our simulation results show that the evolved networks indeed exploit input parameters in the same way as known tractable algorithms for this model.
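To make the computational-level function concrete: in the constraint satisfaction model of coherence, elements are partitioned into accepted and rejected sets so as to maximize the total weight of satisfied constraints, where a positive constraint is satisfied when both of its elements land on the same side and a negative constraint when they land on opposite sides. The exhaustive sketch below is illustrative only (it is not the poster's method and not the evolved networks); the function name `coherence` and the dict-based constraint encoding are our own assumptions for the example.

```python
from itertools import combinations

def coherence(elements, positive, negative):
    """Illustrative brute-force solver for the coherence problem.

    positive/negative: dicts mapping frozenset({p, q}) -> weight.
    A positive constraint is satisfied when p and q are on the same
    side of the partition; a negative constraint when they are on
    opposite sides. Returns (accepted set, total satisfied weight).
    """
    best_value, best_accepted = -1.0, frozenset()
    for r in range(len(elements) + 1):
        for accepted in combinations(elements, r):
            acc = set(accepted)
            # Intersection size 0 or 2 means both elements are on
            # the same side; size 1 means they are split.
            value = sum(w for pair, w in positive.items()
                        if len(pair & acc) != 1)
            value += sum(w for pair, w in negative.items()
                         if len(pair & acc) == 1)
            if value > best_value:
                best_value, best_accepted = value, frozenset(acc)
    return best_accepted, best_value
```

This enumeration takes exponential time in the number of elements, which is exactly the intractability the abstract refers to; the known tractable algorithms instead confine the exponential cost to a small input parameter (e.g., the number of negative constraints).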


