Comparing reinforcement learning in humans and artificial intelligence through Tetris

Logan Gittelson, Rensselaer Polytechnic Institute, Troy, New York, USA
John Lindstedt, Rensselaer Polytechnic Institute
Catherine Sibert, Rensselaer Polytechnic Institute, Troy, New York, United States
Wayne Gray, Rensselaer Polytechnic Institute

Abstract

Tetris has a long history in Artificial Intelligence (Fahey, 2014) and Cognitive Science (Mayer, in press). We combine both traditions to ask, "What can Tetris, and these two approaches to Tetris, tell us about human expertise?" In our research, we use Cross-Entropy Reinforcement Learning (Szita & Lőrincz, 2006) to produce two very different types of models: an unsupervised version trained directly on the game, and a reverse-engineered version fed the sequence of choices made by human players and asked to determine the best-fitting weights for that series of human choices. For a given set of features, we can explore differences between the weights that produce the highest score for the machines and the set that provides the best fit to human data. By varying the set of features, we can determine whether different features provide better fits to human data while producing poorer performance in machines. NOTE: This work is related to submission 641; if possible, could these posters be placed next to each other during the same session?
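The cross-entropy method at the core of both models can be illustrated with a minimal sketch: repeatedly sample candidate feature-weight vectors from a Gaussian, score each one, and refit the Gaussian to the elite samples. The sketch below is hedged: a toy quadratic objective stands in for "average Tetris score with these weights" (or, in the reverse-engineered case, fit to human choices), and the feature count, hypothetical target weights, and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(weights, target):
    # Stand-in objective: in the real models this would be the mean game
    # score (unsupervised version) or the fit to a human player's choice
    # sequence (reverse-engineered version).
    return -np.sum((weights - target) ** 2)

def cross_entropy_optimize(n_iters=50, pop=100, elite_frac=0.1):
    target = np.array([1.0, -2.0, 0.5, 3.0])  # hypothetical optimum
    n_features = target.size
    mean = np.zeros(n_features)               # initial sampling distribution
    std = np.full(n_features, 5.0)
    n_elite = int(pop * elite_frac)
    for _ in range(n_iters):
        # Sample a population of weight vectors and score each one.
        samples = rng.normal(mean, std, size=(pop, n_features))
        scores = np.array([evaluate(w, target) for w in samples])
        # Refit the Gaussian to the top-scoring (elite) samples.
        elite = samples[np.argsort(scores)[-n_elite:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-3
    return mean, target
```

Comparing the weights this loop converges to under a score objective versus a human-fit objective is, in miniature, the comparison the abstract describes.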



