Robot Transparency, Trust and Utility

Robert H Wortham, Andreas Theodorou, Joanna J Bryson

Research output: Contribution to conference › Paper


Abstract

As robot reasoning becomes more complex, debugging becomes increasingly hard based solely on observable behaviour, even for robot designers and technical specialists. Similarly, non-specialist users find it hard to create useful mental models of robot reasoning solely from observed behaviour. The EPSRC Principles of Robotics mandate that our artefacts should be transparent, but what does this mean in practice, and how does transparency affect both trust and utility? We investigate this relationship in the literature and find it to be complex, particularly in non-industrial environments, where transparency may have a wider range of effects on trust and utility depending on the application and purpose of the robot. We outline our programme of research to support our assertion that it is nevertheless possible to create agents that are emotionally engaging despite having a transparent machine nature.
Original language: English
Number of pages: 3
Publication status: Published - 4 Apr 2016
Event: AISB Workshop on Principles of Robotics - University of Sheffield, Sheffield, United Kingdom
Duration: 4 Apr 2016 → 4 Apr 2016
http://www.sheffieldrobotics.ac.uk/aisb-workshop-por/

Workshop

Workshop: AISB Workshop on Principles of Robotics
Country: United Kingdom
City: Sheffield
Period: 4/04/16 → 4/04/16
Internet address: http://www.sheffieldrobotics.ac.uk/aisb-workshop-por/

Keywords

  • artificial intelligence
  • ai
  • robot
  • robotics
  • roboethics
  • ethics
  • transparency


Cite this

Wortham, R. H., Theodorou, A., & Bryson, J. J. (2016). Robot Transparency, Trust and Utility. Paper presented at AISB Workshop on Principles of Robotics, Sheffield, United Kingdom.