Perceptions of Moral Dilemmas in a Virtual Reality Car Simulation

Holly Wilson, Joanna J Bryson, Andreas Theodorou

Research output: Contribution to conference › Paper › peer-review

Abstract

The prevalence of artificially intelligent agents carrying out morally salient decisions is growing. The decisions made by agents such as autonomous cars or weapons systems may have life-and-death consequences. We argue that the decision-making algorithms of all agents whose decisions have high societal impact should be transparent [6], to ensure that human-agent interaction is fully informed, consensual, and of maximum benefit to society. Importantly, the literature also indicates we may perceive and respond to morally salient decisions made by a machine differently from the same decisions made by a human [5, 4, 3].
We present here a virtual reality simulation we developed of a self-driving car, in which users experience moral dilemmas. In our two studies, we investigate perceptions of a morally salient decision: first as moderated by the type of agent, artificial or natural (human), and then with the implementation of transparency. Specifically, inspired by the Moral Machine research programme [2, 1], we used social value as a moral framework: the agent chooses to hit a pedestrian on either the left or right side of a zebra crossing depending on the dimensions of occupation, body size, and gender. Participants gave feedback after each scenario.
Contrary to past findings, participants in the current study were distressed by the principle of decision-making based on attributes such as social value. In questionnaire responses and post-experiment conversation, the majority reported preferring such decisions to be made at random.
This raises important insights into how we implement moral frameworks. We suggest that the disparity between the preferences found in the current study and in past work is due to the virtual reality methodology we used. Specifically, we note a distinction between emotional and rational decision-making, which was supported by an extension survey we conducted. Consistent with expectations, the self-driving car was perceived as significantly less morally culpable and human-like than the human driver. The transparency implementation led to a further significant reduction in perceived human-likeness, and also to reduced perceptions of intentionality. The reduction in moral culpability has disturbing possible connotations, though it may also be helpful for the correct attribution of accountability. Promisingly, our transparency implementation significantly improved participants' understanding of the self-driving car's decision.
We suggest that companies implementing moral frameworks should not take crowd-sourced preferences at face value, but should examine the methodology used to elicit them. Additionally, our work supports transparency as a mechanism for calibrating our mental models of autonomous agents.

References
[1] Edmond Awad. Moral machines: perception of moral judgment made by machines. PhD thesis, Massachusetts
Institute of Technology, 2017.
[2] Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan. The social dilemma of autonomous vehicles. Science, 352(6293):1573–1576, 2016.
[3] Batya Friedman. "It's the computer's fault": Reasoning about computers as moral agents. In Conference Companion on Human Factors in Computing Systems, pages 226–227. ACM, 1995.
[4] Evgeniya Hristova and Maurice Grinberg. Should moral decisions be different for human and artificial cognitive agents? In Proceedings of the 38th Annual Conference of the Cognitive Science Society, 2016.
[5] Bertram F. Malle, Matthias Scheutz, Thomas Arnold, John Voiklis, and Corey Cusimano. Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, pages 117–124. ACM, 2015.
[6] Andreas Theodorou, Robert H. Wortham, and Joanna J. Bryson. Designing and implementing transparency for real time inspection of autonomous robots. Connection Science, 29(3):230–241, 2017.
Original language: English
Publication status: Published - 18 Nov 2018
Event: IA Symposium: When Robots Think. Interdisciplinary Views on Intelligent Automation - Katholisch-soziale Akademie, Münster, Germany
Duration: 14 Nov 2018 - 16 Nov 2018

Conference

Conference: IA Symposium
Abbreviated title: IA Symposium
Country/Territory: Germany
City: Münster
Period: 14/11/18 - 16/11/18
