Abstract
This chapter analyses the application of AI systems which test and/or contest the accounts of human subjects, and which are applied in the course of governmental decision-making. It argues that the rise of these decisional practices demands thorough interrogation of the ways in which testimony is elicited, offered, and received as an element of AI systems. This enables critical inquiry beyond narrowly conceived ethical categories, allowing for more comprehensive accounts of the range of harms – material and epistemic – produced by systems which bypass, undermine, and challenge the testimony of their targets. I identify three evidentiary manoeuvres by which testimony figures in various governmental AI technologies – obviation, diminishment, and impugnment – and apply the concept of epistemic justice to illuminate the different ways in which harm is produced through their enactment. I argue for a sociotechnical approach which recognizes that the resulting testimonial injustices are not easily addressed by the cultivation of more virtuous practices and instead require alternative governance responses. This enables much-needed analysis at the intersection of ethics, epistemology, and politics which better equips us to identify new vectors of domination and marginalization, and to imagine and realize less violent alternatives.
| Field | Value |
|---|---|
| Original language | English |
| Title of host publication | KI-Realitäten |
| Subtitle of host publication | Modelle, Praktiken und Topologien maschinellen Lernens |
| Editors | R. Gross, R. Jordan |
| Place of publication | Germany |
| Publisher | Transcript Verlag |
| Pages | 67-92 |
| ISBN (Print) | 9783837666601 |
| DOIs | |
| Publication status | Published - 1 May 2023 |
Keywords
- epistemic justice
- artificial intelligence
- testimonial justice
- public policy
- automation
- AI ethics
- decision making