No Fair! Ultimatum Game rejection rates for human-computer interactions

Denise A. Baker, Arizona State University
Sarai Cabrera, Arizona State University
Jilma Joy, Arizona State University

Abstract

Research has demonstrated that in some types of human-computer interaction, people behave toward an artificially intelligent agent (AIG) as if it had moral agency. Anthropomorphic behaviors such as sparing a computer's "feelings" and holding it morally responsible for cheating have been observed. However, research has not examined people's capacity to act against their own interests as a result of the perceived moral agency of an AIG. Using the Ultimatum Game paradigm, this study investigates whether participants who engage in an online chat interaction with a partner they believe to be an AIG will reject unfair offers from that AIG at a rate similar to that of participants who believe their partner to be human. With a $10 stake, preliminary data suggest that participants would rather lose real money (they were always offered $2) than allow the AIG to "keep" the remaining $8, even when the artificial nature of the AIG is made highly salient.
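The payoff structure that makes rejection self-costly can be summarized in a few lines. The sketch below is a minimal illustration of a one-shot Ultimatum Game round with the abstract's fixed $10 stake and $2 offer; the function name and structure are our own illustrative assumptions, not the study's actual materials.

```python
# Minimal sketch of the Ultimatum Game payoff rule described in the abstract.
# The function name and types are illustrative, not the study's materials.

def ultimatum_payoffs(stake: float, offer: float, accepted: bool) -> tuple[float, float]:
    """Return (responder_payoff, proposer_payoff) for one round.

    If the responder rejects the offer, both players receive nothing,
    so rejecting an unfair offer costs the responder real money.
    """
    if accepted:
        return offer, stake - offer
    return 0.0, 0.0

# The study's fixed condition: a $10 stake with a $2 offer.
print(ultimatum_payoffs(10.0, 2.0, accepted=True))   # (2.0, 8.0)
print(ultimatum_payoffs(10.0, 2.0, accepted=False))  # (0.0, 0.0): costly rejection
```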



