A Bayesian Analysis of Moral Norm Malleability during Clarification Dialogues

Abstract

One of the principal tenets of modern behavioral ethics is that human morality is dynamic and malleable. Recent work in technology ethics has highlighted the role technologies can play in this process. As such, it is the responsibility of technology designers to actively identify and address possible negative consequences of such technological mediation. In this work, we examine dialogue systems employed by current robotic agents, arguing that they can have deleterious effects on both the human moral ecosystem and human perception of the robots, regardless of the robots' actual ethical competence. We present a preliminary Bayesian analysis of empirical data suggesting that the architectural status quo of clarification request generation systems may (1) cause robots to unintentionally miscommunicate their ethical intentions (our two tests for this yielded Bayes factors of 1319 and 1099) and (2) weaken humans' contextual application of moral norms (Bayes factor of 1069).

Keywords: natural language generation, moral norms, robot ethics, experimental ethics
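The abstract reports evidence in terms of Bayes factors, which quantify how much more likely the observed data are under one hypothesis than another. As a minimal illustration of the general idea (not the paper's actual analysis, and with hypothetical data), the following sketch computes an analytic Bayes factor for a binomial outcome, comparing a chance model against a uniform prior on the response rate:

```python
from math import lgamma, exp, log

def log_beta(a, b):
    # Log of the Beta function B(a, b), computed via log-gamma for stability.
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bayes_factor_10(k, n):
    """Bayes factor BF10 for k successes in n binomial trials.
    H0: p = 0.5 (chance)  vs.  H1: p ~ Beta(1, 1) (uniform prior).
    Marginal likelihood under H1 is B(k+1, n-k+1); under H0 it is
    0.5**n. The binomial coefficient is common to both and cancels.
    """
    log_m1 = log_beta(k + 1, n - k + 1)
    log_m0 = n * log(0.5)
    return exp(log_m1 - log_m0)

# Hypothetical data: 58 of 60 participants respond in the predicted
# direction; the resulting BF10 strongly favors H1 over chance.
print(bayes_factor_10(58, 60))
```

A Bayes factor above 1 favors the alternative; values in the hundreds or thousands, like those reported in the abstract, are conventionally read as very strong evidence.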
