Learning and Production in the Explanation of Regularization Behaviour: a Computational Model

Abstract

We propose a computational model to account for the regularization behaviour that characterizes language learning and that has emerged from experimental studies, specifically from concurrent multiple frequency learning tasks (Ferdinand, 2015). These experiments show that learners regularize the input frequencies they observe, suggesting that domain-general factors might underlie regularization behaviour. Standard models have failed to capture this pattern, so we explore the consequences of adding a production bias that follows the learning stage in a probabilistic model of frequency learning. We simulate a beta-binomial Bayesian sampler model and fit it to experimental data, which allows explicit quantification of both the learning and the production biases. Our results reveal that adding a production component to the model leads to a better fit to the data. Given these results, we hypothesize that linguistic regularization may result from domain-general constraints on learning combined with biases in production, which need not be considered innate.
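To make the two-stage architecture concrete, below is a minimal sketch of a beta-binomial Bayesian sampler with a subsequent production stage. It assumes a symmetric Beta(α, α) prior and an exponential production bias of the form θ^b / (θ^b + (1−θ)^b), where b > 1 exaggerates the majority variant; the function name, parameter values, and the exact form of the bias are illustrative assumptions, not necessarily the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def learn_and_produce(k, n, alpha=1.0, b=2.0, n_prod=10):
    """Beta-binomial sampler with a production bias (illustrative sketch).

    k, n   : observed counts (k occurrences of variant 1 out of n trials).
    alpha  : symmetric Beta(alpha, alpha) prior over the variant probability;
             alpha < 1 would itself encode a regularizing learning bias.
    b      : production-bias exponent; b > 1 pushes productions toward the
             majority variant (regularization), b = 1 is unbiased sampling.
    n_prod : number of tokens to produce.
    """
    # Learning stage: sample a probability estimate from the Beta posterior.
    theta = rng.beta(alpha + k, alpha + (n - k))
    # Production stage: apply the bias to the sampled estimate.
    theta_prod = theta**b / (theta**b + (1.0 - theta)**b)
    # Produce tokens from the biased probability.
    return rng.binomial(n_prod, theta_prod)

# Example: a learner observes variant 1 on 7 of 10 trials, then produces
# 10 tokens; with b > 1, productions tend to overshoot the input frequency.
print(learn_and_produce(k=7, n=10))
```

Under this decomposition, fitting α and b separately to production data is what allows the learning and production contributions to regularization to be quantified independently.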

