Comparing cognitive models in dynamic agent-based models: A methodological case study

Abstract

Dynamic models, such as agent-based models (ABMs), are becoming an increasingly common modelling tool in the cognitive sciences. They enable cognitive scientists to explore how computational, analytic models scale up when placed in complex, interactive, and dynamic environments where agents can interact sequentially over time and in space. Frequently, ABMs are built to yield a particular behaviour (riots, echo chamber emergence, etc.). As such, some models may ‘bake in’ the desired behaviour. However, many models may yield this behaviour, making it difficult to discriminate among competing computational models. This paper directly addresses that methodological challenge. We explore a case study (fisheries) in which agents make decisions in a complex, dynamic environment. Given a rich data set against which to calibrate and validate model predictions, we compare and contrast statistical, adaptive, and ‘perfect’ agents. We show that adaptive computational agents match statistical agents in calibration and outperform them in validation. In addition, we show that perfect and random agents fare poorly. This provides a method for using dynamic, agent-based models to choose between computational models.
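To make the comparison concrete, the following is a minimal, purely illustrative sketch of the general idea: candidate cognitive models (here a random, an adaptive, and a ‘perfect’ agent) act in a shared, noisy environment and are scored on the same outcome measure. Everything here is an assumption for illustration, not the paper's actual model: the toy five-spot fishery, the agent class names, and all parameter values are hypothetical, and the paper's statistical agent and its calibration against empirical data are omitted.

```python
import random

N_SPOTS = 5  # toy fishery with a handful of fishing spots (assumption)


def make_environment(seed):
    """Latent expected catch per spot; agents never see this directly."""
    rng = random.Random(seed)
    return [rng.uniform(0, 10) for _ in range(N_SPOTS)]


class RandomAgent:
    """Baseline: picks a spot uniformly at random, never learns."""
    def choose(self):
        return random.randrange(N_SPOTS)

    def update(self, spot, reward):
        pass


class AdaptiveAgent:
    """Simple reinforcement learner: running value estimate per spot,
    epsilon-greedy choice (lr and epsilon are illustrative values)."""
    def __init__(self, lr=0.3, epsilon=0.1):
        self.values = [0.0] * N_SPOTS
        self.lr = lr
        self.epsilon = epsilon

    def choose(self):
        if random.random() < self.epsilon:
            return random.randrange(N_SPOTS)
        return max(range(N_SPOTS), key=lambda s: self.values[s])

    def update(self, spot, reward):
        # delta-rule update toward the observed catch
        self.values[spot] += self.lr * (reward - self.values[spot])


class PerfectAgent:
    """Omniscient benchmark: always picks the truly best spot."""
    def __init__(self, truth):
        self.best = max(range(N_SPOTS), key=lambda s: truth[s])

    def choose(self):
        return self.best

    def update(self, spot, reward):
        pass


def run(agent, truth, steps=200, seed=0):
    """Mean catch per step for one agent in one environment."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(steps):
        spot = agent.choose()
        reward = truth[spot] + rng.gauss(0, 1)  # noisy observed catch
        agent.update(spot, reward)
        total += reward
    return total / steps


truth = make_environment(seed=1)
for name, agent in [("random", RandomAgent()),
                    ("adaptive", AdaptiveAgent()),
                    ("perfect", PerfectAgent(truth))]:
    print(f"{name:8s} mean catch: {run(agent, truth):.2f}")
```

In this toy setting the adaptive agent approaches the perfect agent's score while the random agent lags, mirroring the abstract's point that candidate cognitive models can be discriminated by running them in the same dynamic environment and comparing fit. The paper's contribution is to do this against real fisheries data, with calibration and out-of-sample validation rather than a known ground truth.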

