Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics

Research output: Contribution to journal › Article

  • 1 Citation

Abstract

The question of whether AI or robots can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI in society is a matter of normative, not descriptive ethics. Here I start from a functionalist assumption, that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the theoretical biology of sociality and autonomy to explain our moral intuitions. From this grounding I extend to consider possible ethics for maintaining either human- or AI-centred societies. I conclude that while constructing AI as either moral agent or patient is possible, neither is desirable. In particular, I argue that we are unlikely to construct a coherent ethics in which it is ethical to afford AI moral subjectivity. We are therefore obliged not to build AI we are obliged to.
Language: English
Pages: 15-26
Journal: Ethics and Information Technology
Volume: 20
Issue number: 1
DOI: 10.1007/s10676-018-9448-6
Status: Published - 1 Mar 2018

Keywords

  • Moral patiency · Moral agency · Ethics · Systems · Artificial intelligence · Strong AI

Cite this

Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics. / Bryson, Joanna J.

In: Ethics and Information Technology, Vol. 20, No. 1, 01.03.2018, p. 15-26.

Research output: Contribution to journal › Article

@article{d8b3a6d09d664d1cb0d481aaf247ba2f,
title = "Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics",
abstract = "The question of whether AI or robots can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI in society is a matter of normative, not descriptive ethics. Here I start from a functionalist assumption, that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the theoretical biology of sociality and autonomy to explain our moral intuitions. From this grounding I extend to consider possible ethics for maintaining either human- or AI-centred societies. I conclude that while constructing AI as either moral agent or patient is possible, neither is desirable. In particular, I argue that we are unlikely to construct a coherent ethics in which it is ethical to afford AI moral subjectivity. We are therefore obliged not to build AI we are obliged to.",
keywords = "Moral patiency, Moral agency, Ethics, Systems, Artificial intelligence, Strong AI",
author = "Bryson, {Joanna J}",
note = "Earlier versions of this paper appeared in symposia (AISB, AAAI)",
year = "2018",
month = "3",
day = "1",
doi = "10.1007/s10676-018-9448-6",
language = "English",
volume = "20",
pages = "15--26",
journal = "Ethics and Information Technology",
issn = "1388-1957",
publisher = "Springer Verlag",
number = "1",
}

TY - JOUR

T1 - Patiency Is Not a Virtue: The Design of Intelligent Systems and Systems of Ethics

T2 - Ethics and Information Technology

AU - Bryson, Joanna J

N1 - earlier versions of this paper appeared in symposia (AISB, AAAI)

PY - 2018/3/1

Y1 - 2018/3/1

N2 - The question of whether AI or robots can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI in society is a matter of normative, not descriptive ethics. Here I start from a functionalist assumption, that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the theoretical biology of sociality and autonomy to explain our moral intuitions. From this grounding I extend to consider possible ethics for maintaining either human- or AI-centred societies. I conclude that while constructing AI as either moral agent or patient is possible, neither is desirable. In particular, I argue that we are unlikely to construct a coherent ethics in which it is ethical to afford AI moral subjectivity. We are therefore obliged not to build AI we are obliged to.

AB - The question of whether AI or robots can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI in society is a matter of normative, not descriptive ethics. Here I start from a functionalist assumption, that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the theoretical biology of sociality and autonomy to explain our moral intuitions. From this grounding I extend to consider possible ethics for maintaining either human- or AI-centred societies. I conclude that while constructing AI as either moral agent or patient is possible, neither is desirable. In particular, I argue that we are unlikely to construct a coherent ethics in which it is ethical to afford AI moral subjectivity. We are therefore obliged not to build AI we are obliged to.

KW - Moral patiency

KW - Moral agency

KW - Ethics

KW - Systems

KW - Artificial intelligence

KW - Strong AI

U2 - 10.1007/s10676-018-9448-6

DO - 10.1007/s10676-018-9448-6

M3 - Article

VL - 20

SP - 15

EP - 26

JO - Ethics and Information Technology

JF - Ethics and Information Technology

SN - 1388-1957

IS - 1

ER -