Oh no, not the Harvard moral psychology argument again. You know the one: an individual sits at a switch box in the distance and sees five people on the track below who cannot see a fast-moving train bearing down on them, while a second track holds only one person. Should they switch the train from one track to the other? It's one of those moral dilemma challenges: if they switch the track, they have caused someone to die; if they do nothing, five people are going to be killed, and they have only seconds to act. What do they do?
Well, in walks the new future world of artificial intelligence and autonomous cars. We've all been in situations where we had to avoid something and swerve; sometimes we risk damaging our own car to prevent hitting a kid who just rode out in front of us on his bicycle. So, here goes the challenge, you see:
There was an interesting article in "Nature - International Weekly Journal of Science" titled "Machine ethics: The robot's dilemma - Working out how to build ethical robots is one of the thorniest challenges in artificial intelligence," by Boer Deng, July 1, 2015. The article stated:
“In May, a panel talk on driverless cars at the Brookings Institution, a think tank in Washington DC, turned into a discussion about how autonomous vehicles would behave in a crisis. What if a vehicle’s efforts to save its own passengers by, say, slamming on the brakes risked a pile-up with the vehicles behind it? Or what if an autonomous car swerved to avoid a child, but risked hitting someone else nearby?”
Well, yes, there are those types of dilemmas, but before we get into any of that, or into logic-based probability rules, there are other dilemmas that are even more serious to ponder first. Let's talk, shall we?
You see, what some in the black-and-white world of programming fail to understand is that laws and rules are never absolute; there are always exceptions and extenuating circumstances. Poorly programmed AI will be a disaster for "what's right" in the eyes of those it is supposedly serving. Ethically speaking, this indeed ends up going against everything we stand for in a free country.
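To make that concern concrete, here is a minimal, purely illustrative sketch (the function name and harm estimates are hypothetical, not any real vehicle's code) of how a naive rule-based crash policy might look, and how it quietly pre-decides outcomes with no room for exceptions:

```python
# Hypothetical, simplified sketch of a rule-based crash policy.
# Real autonomous-vehicle software is vastly more complex; this only
# illustrates how a hard-coded rule embeds a moral choice.

def choose_maneuver(harm_if_brake: int, harm_if_swerve: int) -> str:
    """Pick the action with the smaller estimated harm count.

    The rule looks neutral, but it silently pre-decides who is
    put at risk whenever the two estimates differ, and it admits
    no exceptions or circumstances at runtime.
    """
    if harm_if_brake <= harm_if_swerve:
        return "brake"   # risks a pile-up with the vehicles behind
    return "swerve"      # risks hitting a bystander instead

# The crisis scenarios from the Brookings panel, expressed as inputs:
print(choose_maneuver(harm_if_brake=3, harm_if_swerve=1))  # "swerve"
print(choose_maneuver(harm_if_brake=1, harm_if_swerve=1))  # "brake" (a tie quietly favors braking)
```

The point is not this particular code but that any such rule, however it is written, bakes a moral judgment into software long before the crisis it is meant to handle ever occurs.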
So how should programmers approach this dilemma as they pre-decide who might live or die in some hypothetical future situation? Yes, you see the philosophical moral quicksand here. More of these and other challenges will follow these future-concept autonomous cars, but mind you, they will be here before you know it.