Explainable AI, Model Reconciliation and System-Level Analysis for Safe-Control of Medical Devices

Research output: Contribution to conference › Paper


Abstract

Demand on healthcare systems to support patients and provide them with a good quality of life is increasing. Given limited human resources, such support can be provided by specialised devices that adapt to the needs of individual patients. At the same time, the number of prospective novel medical devices using AI grows every year, yet only a few will reach patients because of the difficulty of certifying black-box systems. In this work we propose a method in which humans work collaboratively with the AI system, building trust and collectively ensuring the safe operation of the device. The method draws on Explainable Artificial Intelligence (XAI), model reconciliation, and System-Theoretic Process Analysis (STPA) to establish a transparent interaction and control regime. We outline the proposed system and describe how its different components will work together to deliver the desired outcome. We also discuss the ethical issues, how the framework can sit within the existing regulatory setting, and how standards for medical device certification and intelligent systems are likely to evolve.
Original language: English
Publication status: Accepted/In press - 22 Jun 2019
Event: INTERNATIONAL CONFERENCE ON ROBOT ETHICS AND STANDARDS - London
Duration: 29 Jul 2019 - 30 Jul 2019
https://www.icres2019.org

Conference

Conference: INTERNATIONAL CONFERENCE ON ROBOT ETHICS AND STANDARDS
Abbreviated title: ICRES
City: London
Period: 29/07/19 - 30/07/19
Internet address: https://www.icres2019.org

Cite this

Georgilas, I. (Accepted/In press). Explainable AI, Model Reconciliation and System-Level Analysis for Safe-Control of Medical Devices. Paper presented at INTERNATIONAL CONFERENCE ON ROBOT ETHICS AND STANDARDS, London.
