Abstract
The bubonic plague outbreak that wormed its way through San Francisco’s Chinatown in 1900 tells a story of prejudice guiding health policy, resulting in enormous suffering for much of the Chinese population. This article discusses the potential for hidden “prejudice” should Artificial Intelligence (AI) gain a dominant foothold in healthcare systems. Using a toy model, it explores potential future outcomes should AI continue to develop without bound, and examines where dangers may lurk, so that the full benefits AI has to offer can be reaped whilst avoiding the pitfalls. The model is produced in the computer programming language MATLAB and offers visual representations of potential outcomes. Interwoven with these potential outcomes are numerous historical examples of problems caused by prejudice and recent issues in AI systems, from predictive policing and facial recognition software to recruitment tools. This research’s novel angle of using historical precedents to model and discuss potential futures therefore offers a unique contribution.
| Original language | English |
| --- | --- |
| Pages (from-to) | 983–999 |
| Journal | AI and Society |
| Volume | 36 |
| Early online date | 22 Dec 2020 |
| DOIs | |
| Publication status | Published - 1 Sept 2021 |
Bibliographical note
Publisher Copyright: © 2020, The Author(s).
Keywords
- Artificial intelligence
- Bias
- Healthcare
- History
- Mathematics
ASJC Scopus subject areas
- Philosophy
- Human-Computer Interaction
- Artificial Intelligence