Abstract
There is increasing demand on healthcare systems to support patients and give them a good quality of life. Given limited human resources, such support can be provided by specialised devices that adapt to the needs of patients. At the same time, the number of prospective novel medical devices that use AI is increasing every year, yet only a few will reach patients because of the difficulty of certifying black-box systems. In this work we propose a method in which humans work collaboratively with the AI system, building trust and collectively ensuring the safe operation of the device. The method draws on Explainable Artificial Intelligence (XAI), model reconciliation, and System-Theoretic Process Analysis (STPA) to establish a transparent interaction and control regime. We outline the proposed system and describe how its different components will work together to deliver the desired outcome. We also discuss the ethical issues, how the framework can sit within the existing regulatory setting, and how standards for medical device certification and intelligent systems are likely to evolve.
Original language | English |
---|---|
Publication status | Acceptance date - 22 Jun 2019 |
Event | INTERNATIONAL CONFERENCE ON ROBOT ETHICS AND STANDARDS, London, 29 Jul 2019 → 30 Jul 2019, https://www.icres2019.org |
Conference
Conference | INTERNATIONAL CONFERENCE ON ROBOT ETHICS AND STANDARDS |
---|---|
Abbreviated title | ICRES |
City | London |
Period | 29/07/19 → 30/07/19 |
Internet address | https://www.icres2019.org |