Abstract
DR JOANNA BRYSON
Expert in artificial intelligence, University of Bath
What I don’t like is when people say artificial intelligence is going to take over. As humanity gets smarter, we keep creating these dangers – like climate change, the global extinction of biodiversity, nuclear weapons. But AI just makes us smarter: it’s wrong to think of it as alien. So the question is: is it possible for us to keep regulating ourselves, including AI, so that we don’t do serious damage? So far we’re doing pretty well. We are able to build safe systems, but we sometimes make mistakes, and no one can guarantee that we won’t. But we do know a lot of ways to make sure it doesn’t happen.
| Original language | English |
|---|---|
| Specialist publication | The Observer |
| Publication status | Published - 9 Feb 2015 |