An Examination of Perseveration Terms in Reinforcement Learning Models

Abstract

Perseveration, or "stickiness," parameters have been added to reinforcement-learning (RL) models to capture autocorrelation in choices. Here, we systematically examined whether perseveration terms simply improve a model's ability to fit noise in the data, rendering such models overly flexible. We simulated data with basic versions of a Delta and a Prediction-Error Decay model with no perseveration terms, and for half of the simulated data sets we added random noise to the expected RL values on each trial. We then performed cross-fitting analyses in which the simulated data sets were fit by the basic data-generating models as well as by extended models with perseveration terms added. Adding perseveration terms improved model fit, particularly when noise was added during simulation. Parameter recovery was generally poorer for the extended models. These results suggest that simpler models may be more useful for prediction and generalization to novel environments, as well as for theory development.
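As a minimal illustration of the kind of models compared here (the paper provides no code; all function names, parameter names, and values below are assumptions), a Delta-rule learner with softmax choice can be simulated with an optional perseveration bonus kappa for repeating the previous choice, and optional trial-wise Gaussian noise on the expected values, mirroring the noisy simulation condition:

```python
import numpy as np

def simulate_delta(n_trials=200, n_arms=2, alpha=0.3, beta=5.0,
                   kappa=0.0, reward_probs=(0.7, 0.3), noise_sd=0.0,
                   rng=None):
    """Sketch of a Delta-rule learner with softmax choice.

    kappa > 0 adds a perseveration ("stickiness") bonus for repeating
    the previous choice; noise_sd > 0 adds trial-wise Gaussian noise
    to the expected values. All names and values are illustrative.
    """
    rng = np.random.default_rng(rng)
    q = np.zeros(n_arms)                             # expected values
    prev = None
    choices, rewards = [], []
    for _ in range(n_trials):
        v = q + rng.normal(0.0, noise_sd, n_arms)    # optional value noise
        logits = beta * v
        if prev is not None:
            logits[prev] += kappa                    # stickiness bonus
        p = np.exp(logits - logits.max())
        p /= p.sum()                                 # softmax choice rule
        a = rng.choice(n_arms, p=p)
        r = float(rng.random() < reward_probs[a])    # Bernoulli reward
        q[a] += alpha * (r - q[a])                   # Delta-rule update
        choices.append(a)
        rewards.append(r)
        prev = a
    return np.array(choices), np.array(rewards)

# Data generated without perseveration (kappa=0) but with value noise,
# which a fitted extended model's kappa may then absorb.
choices, rewards = simulate_delta(kappa=0.0, noise_sd=0.2, rng=1)
```

In the cross-fitting analysis described above, data generated with kappa fixed at zero would be fit both by this basic model and by an extended model in which kappa is a free parameter.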

