Explainable AI, Model Reconciliation and System-Level Analysis for Safe-Control of Medical Devices

Research output: Contribution to conference › Paper › peer-review


Abstract

There is an increasing demand on healthcare systems to provide support for patients and give them a good quality of life. Given limited human resources, such support can be provided by specialised devices that adapt to the needs of patients. At the same time, the number of prospective novel medical devices that use AI is growing every year, yet only a few will reach patients because of the difficulty of certifying black-box systems. In this work we propose a method in which humans work collaboratively with the AI system, building trust and collectively ensuring the safe operation of the device. The method draws on Explainable Artificial Intelligence (XAI), model reconciliation, and System-Theoretic Process Analysis (STPA) to establish a transparent interaction and control regime. We outline the proposed system and describe how its components will work together to deliver the desired outcome. We also discuss the ethical issues, how the framework can sit within the existing regulatory setting, and how standards for medical device certification and intelligent systems are likely to evolve.
Original language: English
Publication status: Acceptance date - 22 Jun 2019
Event: INTERNATIONAL CONFERENCE ON ROBOT ETHICS AND STANDARDS - London
Duration: 29 Jul 2019 - 30 Jul 2019
https://www.icres2019.org

Conference

Conference: INTERNATIONAL CONFERENCE ON ROBOT ETHICS AND STANDARDS
Abbreviated title: ICRES
City: London
Period: 29/07/19 - 30/07/19
Internet address: https://www.icres2019.org

