Robot transparency, trust and utility

Robert H Wortham, Andreas Theodorou

Research output: Contribution to journal › Special issue

  • 6 Citations

Abstract

As robot reasoning becomes more complex, debugging becomes increasingly hard based solely on observable behaviour, even for robot designers and technical specialists. Similarly, non-specialist users have difficulty creating useful mental models of robot reasoning from observations of robot behaviour. The EPSRC Principles of Robotics mandate that our artefacts should be transparent, but what does this mean in practice, and how does transparency affect both trust and utility? We investigate this relationship in the literature and find it to be complex, particularly in non-industrial environments where, depending on the application and purpose of the robot, transparency may have a wider range of effects on trust and utility. We outline our programme of research to support our assertion that it is nevertheless possible to create transparent agents that are emotionally engaging despite having a transparent machine nature.
Language: English
Pages: 242-248
Journal: Connection Science
Volume: 29
Issue number: 3
DOI: 10.1080/09540091.2017.1313816
Status: Published - 30 May 2017


Keywords

  • EPOR, transparency, agents, ethics, roboethics, robotics

Cite this

Robot transparency, trust and utility. / Wortham, Robert H; Theodorou, Andreas.

In: Connection Science, Vol. 29, No. 3, 30.05.2017, p. 242-248.


Wortham RH, Theodorou A. Robot transparency, trust and utility. Connection Science. 2017 May 30;29(3):242-248. DOI: 10.1080/09540091.2017.1313816
Wortham, Robert H; Theodorou, Andreas. / Robot transparency, trust and utility. In: Connection Science. 2017; Vol. 29, No. 3. pp. 242-248.
@article{8df8f18f20124fba99905d8839d4d036,
title = "Robot transparency, trust and utility",
abstract = "As robot reasoning becomes more complex, debugging becomes increasingly hard based solely on observable behaviour, even for robot designers and technical specialists. Similarly, non-specialist users have difficulty creating useful mental models of robot reasoning from observations of robot behaviour. The EPSRC Principles of Robotics mandate that our artefacts should be transparent, but what does this mean in practice, and how does transparency affect both trust and utility? We investigate this relationship in the literature and find it to be complex, particularly in non-industrial environments where, depending on the application and purpose of the robot, transparency may have a wider range of effects on trust and utility. We outline our programme of research to support our assertion that it is nevertheless possible to create transparent agents that are emotionally engaging despite having a transparent machine nature.",
keywords = "EPOR, transparency, agents, ethics, roboethics, robotics",
author = "Wortham, {Robert H} and Andreas Theodorou",
note = "Connection Science Special Issue: Ethical Principles of Robotics (Part 2 of 2)",
year = "2017",
month = "5",
day = "30",
doi = "10.1080/09540091.2017.1313816",
language = "English",
volume = "29",
pages = "242--248",
journal = "Connection Science",
issn = "0954-0091",
publisher = "Taylor & Francis",
number = "3",

}
