Abstract

DR JOANNA BRYSON
Expert in artificial intelligence, University of Bath

What I don’t like is when people say artificial intelligence is going to take over. As humanity gets smarter, we keep creating these dangers – like climate change, the global extinction of biodiversity, nuclear weapons. But AI just makes us smarter: it’s wrong to think of it as alien. So the question is: is it possible for us to keep regulating ourselves, including AI, so that we don’t do serious damage? So far we’re doing pretty well. We are able to build safe systems, but we sometimes make mistakes. And no one can guarantee that you won’t do that. But we do know a lot of ways to make sure it doesn’t happen.
Language: English
Specialist publication: The Observer
Status: Published - 9 Feb 2015

Fingerprint

artificial intelligence
nuclear weapon
biodiversity
guarantee
damages
climate change
expert

Cite this

Artificial intelligence: can scientists stop ‘negative’ outcomes? / Bryson, Joanna.

In: The Observer, 09.02.2015.

Research output: Contribution to specialist publication › Article

@misc{0ad4b3d3702d4b1a91196fc04662300e,
title = "Artificial intelligence: can scientists stop ‘negative’ outcomes?",
abstract = "DR JOANNA BRYSONExpert in artificial intelligence, University of BathWhat I don’t like is when people say artificial intelligence is going to take over. As humanity gets smarter, we keep creating these dangers – like climate change, the global extinctions of biodiversity, nuclear weapons. But AI just makes us smarter: it’s wrong to think of it as alien. So the question is: is it possible for us to keep regulating ourselves, including AI, so that we don’t do serious damage? So far we’re doing pretty well. We are able to build safe systems, but we sometimes make mistakes. And no one can guarantee that you won’t do that. But we do we know a lot of ways to make sure it doesn’t happen.",
author = "Joanna Bryson",
year = "2015",
month = "2",
day = "9",
language = "English",
journal = "The Observer",

}

TY - GEN

T1 - Artificial intelligence: can scientists stop ‘negative’ outcomes?

AU - Bryson, Joanna

PY - 2015/2/9

Y1 - 2015/2/9

N2 - DR JOANNA BRYSON, Expert in artificial intelligence, University of Bath: What I don’t like is when people say artificial intelligence is going to take over. As humanity gets smarter, we keep creating these dangers – like climate change, the global extinction of biodiversity, nuclear weapons. But AI just makes us smarter: it’s wrong to think of it as alien. So the question is: is it possible for us to keep regulating ourselves, including AI, so that we don’t do serious damage? So far we’re doing pretty well. We are able to build safe systems, but we sometimes make mistakes. And no one can guarantee that you won’t do that. But we do know a lot of ways to make sure it doesn’t happen.

AB - DR JOANNA BRYSON, Expert in artificial intelligence, University of Bath: What I don’t like is when people say artificial intelligence is going to take over. As humanity gets smarter, we keep creating these dangers – like climate change, the global extinction of biodiversity, nuclear weapons. But AI just makes us smarter: it’s wrong to think of it as alien. So the question is: is it possible for us to keep regulating ourselves, including AI, so that we don’t do serious damage? So far we’re doing pretty well. We are able to build safe systems, but we sometimes make mistakes. And no one can guarantee that you won’t do that. But we do know a lot of ways to make sure it doesn’t happen.

UR - http://www.theguardian.com/technology/2015/feb/09/artificial-intelligence-can-scientists-stop-negative-outcomes?CMP=share_btn_tw

M3 - Article

JO - The Observer

T2 - The Observer

JF - The Observer

ER -