ABOD3: A Graphical Visualization and Real-Time Debugging Tool for BOD Agents

Research output: Contribution to journal › Article

Abstract

Current software for AI development requires the use of programming languages to develop intelligent agents. This can be disadvantageous for AI designers, as their work must be debugged and maintained like any other piece of software code. Moreover, such approaches are tailored to expert programmers and carry a steep initial learning curve. This also hinders transparent inspection of agents, as additional work is needed to expose and present information to end users.

We are working towards the development of a new editor, ABOD3, shown in fig.~\ref{abod3}. It allows the graphical visualisation of BOD-based plans, including plans for BOD's two major derivatives, POSH and Instinct. The new editor is designed not only to support the development of reactive plans, but also to debug such plans in real time, reducing the time required to develop an agent. This allows plans to be developed and tested from the same application.

The editor provides a user-customisable user interface (UI) aimed at supporting both the development and debugging of agents. Plan elements, their subtrees, and debugging-related information can be hidden to allow different levels of abstraction and to present only relevant information. The graphical representation of the plan can be generated automatically, and the user can override its default layout by moving elements to suit their needs and preferences.

The simple UI and its customisation options allow the editor to be employed not only as a developer's tool, but also to present transparency-related information to end users, helping them develop more accurate mental models of the agent. Alpha testers have already used ABOD3 in experiments to determine the effects of transparency on the mental models formed by humans. Their experiments used a non-humanoid robot powered by the BOD-based Instinct reactive planner. They demonstrated that subjects show a marked improvement in the accuracy of their mental model of an observed robot if they also see an accompanying display of the robot's real-time decision making, as provided by ABOD3. They concluded that providing transparency information through ABOD3 does help users to understand the behaviour of the robot, calibrating their expectations.

We plan to continue developing this new editor, implementing debug functions such as "fast-forward" through pre-recorded log files and real-time breakpoints. A transparent agent, with an inspectable decision-making mechanism, could then be debugged much as traditional, non-intelligent software is commonly debugged. The developer would be able to see which actions the agent is selecting, why this is happening, and how it moves from one action to another. This is similar to the way in which popular Integrated Development Environments (IDEs) allow developers to follow different streams of code with breakpoints. Moreover, we will enhance the editor's plan-design capabilities by introducing new views for viewing and editing specific types of plan elements, and through public beta testing to gather feedback from both experienced and inexperienced AI developers.

Keywords

  • artificial intelligence
  • cognitive architectures
  • planning
  • transparency
  • ethics
  • roboethics
  • ai ethics
  • hri

ASJC Scopus subject areas

  • Artificial Intelligence
  • Human-Computer Interaction

Cite this

ABOD3: A Graphical Visualization and Real-Time Debugging Tool for BOD Agents. / Theodorou, Andreas.

In: CEUR Workshop Proceedings, Vol. 1855, 12.06.2017, p. 60-61.


@article{0dc0ba7648dc4de1b4c8d4320dad3ec3,
title = "ABOD3: A Graphical Visualization and Real-Time Debugging Tool for BOD Agents",
abstract = "Current software for AI development requires the use of programming languages to develop intelligent agents. This can be disadvantageous for AI designers, as their work needs to be debugged and treated as a generic piece of software code. Moreover, such approaches are designed for experts, requiring a steep initial learning curve, as they are tailored for programmers. This can be disadvantageous for implementing transparent inspection of agents, as additional work is needed to expose and represent information to end users. We are working towards the development of a new editor, ABOD3, shown in fig.~\ref{abod3}. It allows the graphical visualisation of BOD-based plans, including its two major derivatives: POSH and Instinct. The new editor is designed to allow not only the development of reactive plans, but also to debug such plans in real time to reduce the time required to develop an agent. This allows the development and testing of plans from a same application.The editor provides a user-customisable user interface (UI) aimed at supporting both the development and debug of agents. Plan elements, their subtrees, and debugging-related information can be hidden, to allow different levels of abstraction and present only relevant information. The graphical representation of the plan can be generated automatically, and the user can override its default layout by moving elements to suit his needs and preferences.The simple UI and customisation allows the editor to be employed not only as a developer's tool, but also to present transparency related information to the end-user to help them develop more accurate mental models of the agent. Alpha testers have already used ABOD3 in experiments to determine the effects of transparency on the mental models formed by humans. Their experiments consisted of a non-humanoid robot, powered by the BOD-based Instinct reactive planner. 
They have demonstrated that subjects can show marked improvement in the accuracy of their mental model of a robot observed, if they also see an accompanying display of the robot's real-time decision making as provided by ABOD3. They concluded that providing transparency information by using ABOD3 does help users to understand the behaviour of the robot, calibrating their expectations.We plan to continue developing this new editor, implementing debug functions such as ``fast-forward'' in pre-recorded log files and usage of breakpoints in real-time. A transparent agent, with an inspectable decision-making mechanism, could also be debugged in a similar manner to the way in which traditional, non-intelligent software is commonly debugged. The developer would be able to see which actions the agent is selecting, why this is happening, and how it moves from one action to the other. This is similar to the way in which popular Integrated Development Environments (IDEs) provide options to follow different streams of code with debug points. Moreover, we will enhance its plan design capabilities by introducing new views, to view and edit specific types of plan-elements and through a public beta testing to gather feedback by both experienced and inexperienced AI developers.",
keywords = "artificial intelligence, cognitive architectures, planning, transparency, ethics, roboethics, ai ethics, hri",
author = "Andreas Theodorou",
note = "Proceedings of the EUCognition Meeting (European Society for Cognitive Systems) {"}Cognitive Robot Architectures{"}, Vienna, Austria, December 8-9, 2016.",
year = "2017",
month = "6",
day = "12",
language = "English",
volume = "1855",
pages = "60--61",
journal = "CEUR Workshop Proceedings",
issn = "1613-0073",

}

TY - JOUR

T1 - ABOD3: A Graphical Visualization and Real-Time Debugging Tool for BOD Agents

AU - Theodorou, Andreas

N1 - Proceedings of the EUCognition Meeting (European Society for Cognitive Systems) "Cognitive Robot Architectures", Vienna, Austria, December 8-9, 2016.

PY - 2017/6/12

Y1 - 2017/6/12

N2 - Current software for AI development requires the use of programming languages to develop intelligent agents. This can be disadvantageous for AI designers, as their work needs to be debugged and treated as a generic piece of software code. Moreover, such approaches are designed for experts, requiring a steep initial learning curve, as they are tailored for programmers. This can be disadvantageous for implementing transparent inspection of agents, as additional work is needed to expose and represent information to end users. We are working towards the development of a new editor, ABOD3, shown in fig.~\ref{abod3}. It allows the graphical visualisation of BOD-based plans, including its two major derivatives: POSH and Instinct. The new editor is designed to allow not only the development of reactive plans, but also to debug such plans in real time to reduce the time required to develop an agent. This allows the development and testing of plans from a same application.The editor provides a user-customisable user interface (UI) aimed at supporting both the development and debug of agents. Plan elements, their subtrees, and debugging-related information can be hidden, to allow different levels of abstraction and present only relevant information. The graphical representation of the plan can be generated automatically, and the user can override its default layout by moving elements to suit his needs and preferences.The simple UI and customisation allows the editor to be employed not only as a developer's tool, but also to present transparency related information to the end-user to help them develop more accurate mental models of the agent. Alpha testers have already used ABOD3 in experiments to determine the effects of transparency on the mental models formed by humans. Their experiments consisted of a non-humanoid robot, powered by the BOD-based Instinct reactive planner. 
They have demonstrated that subjects can show marked improvement in the accuracy of their mental model of a robot observed, if they also see an accompanying display of the robot's real-time decision making as provided by ABOD3. They concluded that providing transparency information by using ABOD3 does help users to understand the behaviour of the robot, calibrating their expectations.We plan to continue developing this new editor, implementing debug functions such as ``fast-forward'' in pre-recorded log files and usage of breakpoints in real-time. A transparent agent, with an inspectable decision-making mechanism, could also be debugged in a similar manner to the way in which traditional, non-intelligent software is commonly debugged. The developer would be able to see which actions the agent is selecting, why this is happening, and how it moves from one action to the other. This is similar to the way in which popular Integrated Development Environments (IDEs) provide options to follow different streams of code with debug points. Moreover, we will enhance its plan design capabilities by introducing new views, to view and edit specific types of plan-elements and through a public beta testing to gather feedback by both experienced and inexperienced AI developers.

KW - artificial intelligence

KW - cognitive architectures

KW - planning

KW - transparency

KW - ethics

KW - roboethics

KW - ai ethics

KW - hri

UR - http://ceur-ws.org/Vol-1855/EUCognition_2016_Part19.pdf

M3 - Article

VL - 1855

SP - 60

EP - 61

JO - CEUR Workshop Proceedings

T2 - CEUR Workshop Proceedings

JF - CEUR Workshop Proceedings

SN - 1613-0073

ER -