Perceptions of Moral Dilemmas in a Virtual Reality Car Simulation

Research output: Contribution to conference › Paper

Abstract

The prevalence of artificially intelligent agents carrying out morally salient decisions is growing. The decisions made by agents such as autonomous cars or weapon systems may have life-and-death consequences. We argue that the decision-making algorithms of all agents whose decisions have high societal impact should be transparent [6], to ensure that human-agent interaction is fully informed, consensual, and of maximum benefit to society. Importantly, the literature also indicates we may perceive and respond differently to a morally salient decision made by a machine than to the same decision made by a human [5, 4, 3].
We present here a virtual reality simulation we developed of a self-driving car, in which users experience moral dilemmas. In two studies, we investigate perceptions of a morally salient decision: first as moderated by the type of agent, artificial or natural (human), and then with the implementation of transparency. Specifically, inspired by the Moral Machine research programme [2, 1], we used social value as a moral framework: the agent chooses to hit a pedestrian on either the left or right side of a zebra crossing depending on the dimensions of occupation, body size, and gender. Participants gave feedback after each scenario.
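The abstract does not specify the decision rule itself; a minimal hypothetical sketch of how a social-value comparison over such attributes might select a side is shown below. All attribute names, weights, and the scoring scheme are assumptions for illustration only, not the authors' implementation (gender would be handled analogously to the other attributes).

```python
import random

# Hypothetical per-attribute social-value scores (purely illustrative).
OCCUPATION_VALUE = {"doctor": 3, "office worker": 2, "criminal": 1}
BODY_SIZE_VALUE = {"slim": 2, "average": 2, "large": 1}

def social_value(pedestrian):
    """Sum the assumed per-attribute scores into one social-value score."""
    return (OCCUPATION_VALUE[pedestrian["occupation"]]
            + BODY_SIZE_VALUE[pedestrian["body_size"]])

def choose_side(left, right):
    """Hit the pedestrian with the LOWER social value; break ties at random."""
    lv, rv = social_value(left), social_value(right)
    if lv < rv:
        return "left"
    if rv < lv:
        return "right"
    return random.choice(["left", "right"])
```

The tie-breaking branch also shows the alternative many participants reportedly preferred: a decision made entirely at random, i.e. `random.choice(["left", "right"])` regardless of attributes.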
Contrary to past findings, participants in the current study were distressed by the principle of making decisions based on attributes such as social value. In questionnaire responses and post-experiment conversation, the majority reported preferring such decisions to be made at random.
This raises important insights into how we implement moral frameworks. We suggest that the disparity between the preferences found in the current study and in past work is due to the virtual reality methodology we used. Specifically, we note a distinction between emotional and rational decision-making, which was supported by an extension survey we conducted. Consistent with expectations, the self-driving car was perceived as significantly less morally culpable and less human-like than the human driver. The transparency implementation led to a further significant reduction in perceived human-likeness, and also to reduced perceptions of intentionality. The reduction in moral culpability has disturbing possible connotations, though it may also be helpful for the correct attribution of accountability. Promisingly, our transparency implementation significantly improved participants' understanding of the self-driving car's decision.
We suggest that companies implementing moral frameworks do not take crowd-sourced preferences at face value, but explore the methodology used to elicit them. Additionally, our work supports transparency as a mechanism for calibrating our mental models of autonomous agents.

References
[1] Edmond Awad. Moral machines: perception of moral judgment made by machines. PhD thesis, Massachusetts Institute of Technology, 2017.
[2] Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan. The social dilemma of autonomous vehicles. Science, 352(6293):1573–1576, 2016.
[3] Batya Friedman. "It's the computer's fault": reasoning about computers as moral agents. In Conference Companion on Human Factors in Computing Systems, pages 226–227. ACM, 1995.
[4] Evgeniya Hristova and Maurice Grinberg. Should moral decisions be different for human and artificial cognitive agents? In 38th Conference of the Cognitive Science Society, 2016.
[5] Bertram F. Malle, Matthias Scheutz, Thomas Arnold, John Voiklis, and Corey Cusimano. Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In Proceedings of the Tenth Annual ACM/IEEE International Conference on Human-Robot Interaction, pages 117–124. ACM, 2015.
[6] Andreas Theodorou, Robert H. Wortham, and Joanna J. Bryson. Designing and implementing

Conference

Conference: IA Symposium
Abbreviated title: IA Symposium
Country: Germany
City: Münster
Period: 14/11/18 – 16/11/18

Cite this

Perceptions of Moral Dilemmas in a Virtual Reality Car Simulation. / Wilson, Holly; Bryson, Joanna J; Theodorou, Andreas.

2018. Paper presented at IA Symposium, Münster, Germany.


Wilson, H, Bryson, JJ & Theodorou, A 2018, 'Perceptions of Moral Dilemmas in a Virtual Reality Car Simulation', paper presented at IA Symposium, Münster, Germany, 14/11/18 – 16/11/18.
@conference{92199ca754ce42eeb924343203c951da,
title = "Perceptions of Moral Dilemmas in a Virtual Reality Car Simulation",
author = "Holly Wilson and Bryson, {Joanna J} and Andreas Theodorou",
year = "2018",
month = "11",
day = "18",
language = "English",
note = "IA Symposium: When Robots Think. Interdisciplinary Views on Intelligent Automation; Conference date: 14-11-2018 through 16-11-2018",
}

TY - CONF

T1 - Perceptions of Moral Dilemmas in a Virtual Reality Car Simulation

AU - Wilson, Holly

AU - Bryson, Joanna J

AU - Theodorou, Andreas

PY - 2018/11/18

Y1 - 2018/11/18


M3 - Paper

ER -