Abstract
Artificially intelligent agents are increasingly used for morally salient decisions of high societal impact, yet the decision-making algorithms of such agents are rarely transparent. Further, our perception of, and response to, morally salient decisions may depend on agent type: artificial or natural (human). We developed a Virtual Reality (VR) simulation involving an autonomous vehicle to investigate perceptions of a morally salient decision, moderated first by agent type and second by an implementation of transparency. Participants in our user study took the role of a passenger in an autonomous vehicle (AV) that makes a moral choice: crash into one of two human-looking Non-Playable Characters (NPCs). Experimental subjects were exposed to one of three conditions: (1) participants were led to believe that the car was controlled by a human; (2) the artificial nature of the AV was made explicit in the pre-study briefing, but its decision-making system was kept opaque; and (3) a transparent AV that reported back the characteristics of the NPCs that influenced its decision-making process. In this paper, we discuss our results and their implications, including the distress participants expressed at being exposed to a system that makes decisions based on socio-demographic attributes.
| Original language | English |
| --- | --- |
| Publication status | Published - 12 Aug 2019 |
| Event | AISafety 2019: Workshop in Artificial Intelligence Safety, Macao, China. Duration: 11 Aug 2019 → 12 Aug 2019. Conference number: IJCAI-19. https://www.ai-safety.org/ |
Workshop
| Workshop | AISafety 2019 |
| --- | --- |
| Country/Territory | China |
| City | Macao |
| Period | 11/08/19 → 12/08/19 |
| Internet address | https://www.ai-safety.org/ |
Keywords
- Virtual reality
- AI ethics
- Mental models
- Autonomous vehicles
- Moral dilemmas
- Transparency
ASJC Scopus subject areas
- General Computer Science