Tetris: Exploring Human Strategies via Cross Entropy Reinforcement Learning Models

Catherine Sibert, Rensselaer Polytechnic Institute, Troy, New York, United States
John Lindstedt, Rensselaer Polytechnic Institute
Wayne Gray, Rensselaer Polytechnic Institute

Abstract

Tetris has been used as a research tool more often than any other video game (Mayer, in press). However, perhaps due to the game's complex combination of visual features, rapid changes, and real-time decision-making, research on and understanding of the strategies used by human players have languished. We use cross-entropy reinforcement learning (CERL) to explore the space of features and feature weights that do or do not predict fine details of human performance. As CERL models operate without human constraints (e.g., all possible placements are evaluated on each episode of play without any cost of time), an important part of our project is to identify the fine details of play that distinguish human judgement from CERL judgement and to relate these differences to human cognitive strengths (such as goal hierarchies) and weaknesses (such as movement time and the time needed to consider and weigh alternative moves).
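The cross-entropy optimization at the heart of CERL reduces to a short sampling loop over candidate feature-weight vectors. The sketch below is a minimal illustration, not the authors' implementation: it assumes a hypothetical evaluate_weights(w) function that plays the game with a controller scoring every legal placement by the weighted sum of its board features, and the population size, elite fraction, and noise term are placeholder values.

import numpy as np

def cross_entropy_weights(evaluate_weights, n_features, n_iterations=30,
                          population=100, elite_frac=0.1, noise=4.0):
    # Learn a feature-weight vector by the cross-entropy method:
    # sample weights from a Gaussian, keep the best-scoring samples,
    # and refit the Gaussian to that elite set.
    mean = np.zeros(n_features)
    std = np.full(n_features, 10.0)
    n_elite = max(1, int(population * elite_frac))

    for _ in range(n_iterations):
        # Draw candidate weight vectors from the current distribution.
        samples = np.random.normal(mean, std, size=(population, n_features))
        # Score each candidate by playing under those weights
        # (evaluate_weights is assumed to return, e.g., lines cleared).
        scores = np.array([evaluate_weights(w) for w in samples])
        # Keep the top-scoring ("elite") candidates.
        elite = samples[np.argsort(scores)[-n_elite:]]
        # Refit the sampling distribution; the added noise slows
        # premature convergence.
        mean = elite.mean(axis=0)
        std = elite.std(axis=0) + noise
    return mean

Each weight vector defines a policy that rates every possible placement of the current piece, so one call to evaluate_weights corresponds to one full game played under that candidate policy.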
