2024-06-14
13 minutes
Can AI help us make difficult moral decisions? Walter Sinnott-Armstrong explores this idea in conversation with David Edmonds in this episode of the Philosophy Bites podcast.
This is Philosophy Bites, with me, David Edmonds, and me, Nigel Warburton.
If you enjoy philosophy bites, please support us.
We're unfunded, and all donations will be gratefully received.
For more details, go to www.philosophybites.com.
Can AI, artificial intelligence, help us make practical ethical decisions?
The philosopher Walter Sinnott-Armstrong thinks so, and he's been working with a data scientist and a computer scientist to try to build a system that would be of use to doctors faced with ethical dilemmas.
Walter Sinnott-Armstrong, welcome to Philosophy Bites.
Thank you so much for having me.
It's a joy.
We're going to discuss today how human morality can be introduced into AI.
But I want to start with a very basic question, because people seem to define AI in all sorts of different ways.
What's your definition of artificial intelligence?
I think artificial intelligence should be defined very broadly.
It occurs whenever a machine learns something, because learning involves intelligence, and in particular, often in AI systems, the machine is given a certain goal, and it learns new and better means to achieve that goal.
That's when artificial intelligence occurs, so it involves learning.
A crucial component of AI is that the machine, or the algorithm learns as it proceeds.
Exactly, and also that it has a goal. It tries out different means to that goal, tests which means are working best, and then finds new and better ways to achieve those goals.
So suppose I want to program my AI with human morality.