Should moral decisions be different for human and artificial cognitive agents?


Moral judgments are elicited using dilemmas that present hypothetical situations in which an agent must choose between letting several people die or sacrificing one person in order to save them. The evaluation of the action or inaction of a human agent is compared to that of two artificial agents – a humanoid robot and an automated system. Ratings of the rightness, blameworthiness, and moral permissibility of action or inaction in incidental and instrumental moral dilemmas are used. The results show that for the artificial cognitive agents the utilitarian action is rated as more morally permissible than inaction. The humanoid robot is found to be less blameworthy for its choices than either the human agent or the automated system. Action is found to be more appropriate, more permissible, more right, and less blameworthy than inaction only in the incidental scenarios. The results are interpreted and discussed from the perspective of perceived moral agency.