Slam the Brakes: Perceptions of Moral Decisions in Driving Dilemmas

Holly Wilson, Andreas Theodorou

Research output: Contribution to conference › Paper

Abstract

Artificially intelligent agents are increasingly used for morally-salient decisions of high societal impact. Yet, the decision-making algorithms of such agents are rarely transparent. Further, our perception of, and response to, morally-salient decisions may depend on the agent's type: artificial or natural (human). We developed a Virtual Reality (VR) simulation involving an autonomous vehicle to investigate perceptions of a morally-salient decision, moderated first by agent type and second by an implementation of transparency. Participants in our user study took the role of a passenger in an autonomous vehicle (AV) which makes a moral choice: crash into one of two human-looking Non-Playable Characters (NPCs). Experimental subjects were exposed to one of three conditions: (1) participants were led to believe that the car was controlled by a human, (2) the artificial nature of the AV was made explicitly clear in the pre-study briefing, but its decision-making system was kept opaque, and (3) a transparent AV that reported back the characteristics of the NPCs that influenced its decision-making process. In this paper, we discuss our results, including the distress our participants expressed at being exposed to a system that makes decisions based on socio-demographic attributes, and the implications of these findings.
Original language: English
Publication status: Published - 12 Aug 2019
Event: AISafety 2019: Workshop in Artificial Intelligence Safety - Macao, China
Duration: 11 Aug 2019 – 12 Aug 2019
Conference number: IJCAI-19
https://www.ai-safety.org/

Workshop

Workshop: AISafety 2019
Country: China
City: Macao
Period: 11/08/19 – 12/08/19
Internet address: https://www.ai-safety.org/

Keywords

  • Virtual reality
  • AI ethics
  • Mental models
  • Autonomous vehicle
  • Moral dilemma
  • Transparency

Cite this

Wilson, H., & Theodorou, A. (2019). Slam the Brakes: Perceptions of Moral Decisions in Driving Dilemmas. Paper presented at AISafety 2019, Macao, China.

Slam the Brakes: Perceptions of Moral Decisions in Driving Dilemmas. / Wilson, Holly; Theodorou, Andreas.

2019. Paper presented at AISafety 2019, Macao, China.

Research output: Contribution to conference › Paper

Wilson, H & Theodorou, A 2019, 'Slam the Brakes: Perceptions of Moral Decisions in Driving Dilemmas', Paper presented at AISafety 2019, Macao, China, 11/08/19 – 12/08/19.
Wilson H, Theodorou A. Slam the Brakes: Perceptions of Moral Decisions in Driving Dilemmas. 2019. Paper presented at AISafety 2019, Macao, China.
Wilson, Holly; Theodorou, Andreas. / Slam the Brakes: Perceptions of Moral Decisions in Driving Dilemmas. Paper presented at AISafety 2019, Macao, China.
@conference{ea790d20afd34c749aa5b156f5c941e1,
title = "Slam the Brakes: Perceptions of Moral Decisions in Driving Dilemmas",
abstract = "Artificially intelligent agents are increasingly used for morally-salient decisions of high societal impact. Yet, the decision-making algorithms of such agents are rarely transparent. Further, our perception of, and response to, morally-salient decisions may depend on the agent's type: artificial or natural (human). We developed a Virtual Reality (VR) simulation involving an autonomous vehicle to investigate perceptions of a morally-salient decision, moderated first by agent type and second by an implementation of transparency. Participants in our user study took the role of a passenger in an autonomous vehicle (AV) which makes a moral choice: crash into one of two human-looking Non-Playable Characters (NPCs). Experimental subjects were exposed to one of three conditions: (1) participants were led to believe that the car was controlled by a human, (2) the artificial nature of the AV was made explicitly clear in the pre-study briefing, but its decision-making system was kept opaque, and (3) a transparent AV that reported back the characteristics of the NPCs that influenced its decision-making process. In this paper, we discuss our results, including the distress our participants expressed at being exposed to a system that makes decisions based on socio-demographic attributes, and the implications of these findings.",
keywords = "Virtual reality, AI ethics, Mental models, Autonomous vehicle, Moral dilemma, Transparency",
author = "Holly Wilson and Andreas Theodorou",
year = "2019",
month = "8",
day = "12",
language = "English",
note = "AISafety 2019 : Workshop in Artificial Intelligence Safety ; Conference date: 11-08-2019 Through 12-08-2019",
url = "https://www.ai-safety.org/",

}

TY - CONF

T1 - Slam the Brakes: Perceptions of Moral Decisions in Driving Dilemmas

AU - Wilson, Holly

AU - Theodorou, Andreas

PY - 2019/8/12

Y1 - 2019/8/12

N2 - Artificially intelligent agents are increasingly used for morally-salient decisions of high societal impact. Yet, the decision-making algorithms of such agents are rarely transparent. Further, our perception of, and response to, morally-salient decisions may depend on the agent's type: artificial or natural (human). We developed a Virtual Reality (VR) simulation involving an autonomous vehicle to investigate perceptions of a morally-salient decision, moderated first by agent type and second by an implementation of transparency. Participants in our user study took the role of a passenger in an autonomous vehicle (AV) which makes a moral choice: crash into one of two human-looking Non-Playable Characters (NPCs). Experimental subjects were exposed to one of three conditions: (1) participants were led to believe that the car was controlled by a human, (2) the artificial nature of the AV was made explicitly clear in the pre-study briefing, but its decision-making system was kept opaque, and (3) a transparent AV that reported back the characteristics of the NPCs that influenced its decision-making process. In this paper, we discuss our results, including the distress our participants expressed at being exposed to a system that makes decisions based on socio-demographic attributes, and the implications of these findings.

AB - Artificially intelligent agents are increasingly used for morally-salient decisions of high societal impact. Yet, the decision-making algorithms of such agents are rarely transparent. Further, our perception of, and response to, morally-salient decisions may depend on the agent's type: artificial or natural (human). We developed a Virtual Reality (VR) simulation involving an autonomous vehicle to investigate perceptions of a morally-salient decision, moderated first by agent type and second by an implementation of transparency. Participants in our user study took the role of a passenger in an autonomous vehicle (AV) which makes a moral choice: crash into one of two human-looking Non-Playable Characters (NPCs). Experimental subjects were exposed to one of three conditions: (1) participants were led to believe that the car was controlled by a human, (2) the artificial nature of the AV was made explicitly clear in the pre-study briefing, but its decision-making system was kept opaque, and (3) a transparent AV that reported back the characteristics of the NPCs that influenced its decision-making process. In this paper, we discuss our results, including the distress our participants expressed at being exposed to a system that makes decisions based on socio-demographic attributes, and the implications of these findings.

KW - Virtual reality

KW - ai ethics

KW - Mental models

KW - autonomous vehicle

KW - moral dilemma

KW - transparency

M3 - Paper

ER -