Should Robot Cars Be Programmed To Kill You If It Will Save More Lives?

A curious thought experiment has made a lot of people nervous about the possibility of travelling in self-driving cars. More nervous than, you know, just getting in one in the first place.

Proposed by Patrick Lin, an associate philosophy professor at California Polytechnic State University, it goes something like this:

Say a robotic car -- which exists, by the way -- is driving you happily along a mountain road, when a tire blows, sending you into oncoming traffic. If the car still has a modicum of control, it may have a simple choice to make. On one hand, it could continue on its current path and slam into, say, a robotic SUV carrying parents and four happy kids. Or it could choose to send you over a cliff, killing you but sparing the family.

So what should it do?

The point being: if a robotic car must choose between killing several people or just one -- its owner -- should it be programmed to betray its master?

Posed in a recent opinion column on Wired, it's a fascinating take on a problem which has variously stumped and inspired scientists and futurists from Isaac Asimov to Ridley Scott.

It gets even more complicated when you take it out of the realm of pure ethics - the classic trolley problem - and into actual programming decisions. How far should the car go in assessing potential fatalities and altering its course accordingly? Should it know the difference between a bus, a car and a motorcycle? Should it take into account the age of the potential victims, or their chances of recovery?

Indeed, as another researcher, Noah Goodall, asks: should the car be able to tell which of two motorcyclists is wearing a helmet - and if so, which should it target? What if one of the motorcyclists is carrying a kidney? What if... What if...

These are genuine questions that makers of self-driving cars may have to answer - depending on their cars' superhuman ability to react to danger - but they are also important as purely moral debates, Lin argues:

"For the foreseeable future, what’s important isn’t just about arriving at the “right” answers to difficult ethical dilemmas, as nice as that would be. But it’s also about being thoughtful about your decisions and able to defend them–it’s about showing your moral math.

"In ethics, the process of thinking through a problem is as important as the result. Making decisions randomly, then, evades that responsibility. Instead of thoughtful decisions, they are thoughtless, and this may be worse than reflexive human judgments that lead to bad outcomes."

In this sense the debates also apply to the (for now theoretical) concept of killer robots, currently being debated at the UN for the first time. If these self-targeting war bots are so efficient as to cause "superfluous or unnecessary suffering", they could by definition be in violation of human rights treaties. So should they be programmed with a level of inaccuracy built in - and if so, what's the point?

Over at Popular Science, Erik Sofge concludes that in this futuristic scenario the cars might indeed have to be programmed for the greater good - or, in the best case, be built so as to collaboratively avoid accidents entirely.

Sofge quotes Michael Cahill, a law professor and vice dean at Brooklyn Law School, who sums the debate up nicely:

"The beauty of robots is that they don’t have relationships to anybody. They can make decisions that are better for everyone. But if you lived in that world, where robots made all the decisions, you might think it’s a dystopia."

Check out the full story at Popular Science and Wired.
