Who Is In The Wrong If A Self-Driving Car Kills Someone?

You may have recently seen that an Uber self-driving car killed a pedestrian, a headline-grabbing event that reawakened the humanity in all of us.

This brings me to a common theme: ethical conundrums around AI keep coming back to the question of responsibility. Who is to blame if an AI kills a human? And if the AI is forced to choose between one life and another, how does it make that choice?

You see, self-driving or autonomous cars are a bit like the Terminator: they utilise a programmatic paradigm called artificial intelligence (AI) and its cohort, machine learning. Hollywood loves to present AI to us as humanoid robots.

The reality is that one of our first mainstream tastes of real AI is the occasionally life-threatening self-driving car.

Coding reactions in real-time systems

Real-time systems take pride in being deterministic, which, simply put, means they always react the same way to the same stimulus in the same amount of time. They are wonderfully predictable and not at all “learning.”
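
To make that concrete, here is a minimal sketch of a deterministic reaction in Python. The braking rule and the threshold value are hypothetical, purely for illustration, but they show the key property: no history, no weights, no learning.

```python
# A minimal sketch of a deterministic reaction: the same input always
# produces the same output. The threshold is a hypothetical value.
SAFE_DISTANCE_METRES = 10.0

def should_brake(distance_to_obstacle: float) -> bool:
    # A fixed rule with no stored state and nothing learned from past inputs.
    return distance_to_obstacle < SAFE_DISTANCE_METRES

assert should_brake(5.0) is True   # same stimulus...
assert should_brake(5.0) is True   # ...same reaction, every single time
```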

There is very little about modern AI combined with machine learning that is simple or deterministic. Imagine the AI as having a decision matrix composed of multiple decision models, each trained independently to make predictions and, from those, decisions.

Let’s consider the decision matrix. The machine learning aspect is that this matrix, while it starts from a repeatable state, does not necessarily produce a fixed outcome. It runs in combination with a technique often called adaptive boosting, which gives more weight to the models/decisions that tend to be correct and less weight to the ones that are not. A group of lower-weighted decisions can even combine to win over a more highly weighted one. It’s clever, and it is also effectively “learning.”
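
As a rough illustration of what that looks like in practice, here is a minimal sketch using scikit-learn’s AdaBoostClassifier. The data is randomly generated as a stand-in for sensor-derived features and a binary decision label; nothing here comes from a real vehicle.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

# Hypothetical stand-in for sensor-derived features and a brake / don't-brake label.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Weak learners are trained in sequence: examples the earlier learners got wrong
# are given more weight, and each learner's final vote is weighted by its accuracy.
model = AdaBoostClassifier(n_estimators=50, random_state=42)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```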

It’s much like what we humans do: we learn from our mistakes and make better decisions going forward.

How are self-driving cars getting information?

While we humans rely mostly on our eyes and ears for information, self-driving cars have a complex array of sensors, cameras, and something called LIDAR (light detection and ranging). Think of it as radar, but using laser light instead of radio waves. This is at the heart of information gathering: it creates a laser-generated 3D map of the car’s surroundings at, that’s right, the speed of light. In fact, advances in LIDAR are moving towards mapping not just what is “visible” but also hidden objects and objects around corners. In the foreseeable future, these autonomous vehicles will be extremely good at driving.
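
As a toy example of what that mapping involves, here is a sketch that converts a single 2D LIDAR sweep (angle plus measured distance) into map points. Real sensors return millions of 3D points per second; the readings below are made-up values, purely for illustration.

```python
import math

def sweep_to_points(readings):
    """Convert (angle_degrees, distance_metres) pairs into 2D points
    relative to the sensor, via a basic polar-to-Cartesian conversion."""
    points = []
    for angle_deg, distance in readings:
        angle = math.radians(angle_deg)
        points.append((distance * math.cos(angle), distance * math.sin(angle)))
    return points

# e.g. three returns: one object dead ahead, two off to either side
print(sweep_to_points([(0, 12.5), (45, 3.2), (-45, 3.1)]))
```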

Self-driving cars are one of the modern examples of AI and machine learning that we can attempt to wrap our heads around because, well, we drive cars. Also, for the most part, we think we’re great at it.

However, there were nearly 1,800 road deaths in the UK alone in 2017. Take a moment to let that sink in.

The blame game

Let’s get back to the ethical question at hand. What if a self-driving car kills a human?

I had an interesting conversation about this recently in which the word “culpability” was used. I found it an interesting word choice, as it implies wrongdoing. Blame doesn’t change the outcome of our ethical dilemma, but it seems that many people aren’t satisfied until errors can be attributed and, perhaps, recompense confirmed.

In terms of the cars, they will likely still be the responsibility of the manufacturer. Consider a plane crash: if a plane crashes due to mechanical failure, poor maintenance, or a malfunctioning autopilot, the manufacturer and operator are held to account, which is why those corporations take such scenarios very seriously. Self-driving cars will likely be treated the same way.

The big ethical dilemma

Technology is moving in this direction because of the bigger-picture advantages.

Start with the environmental benefits alone: cars would no longer need to be individually purchased. We tend to buy a car for the worst-case scenario of long drives with lots of passengers (the top five best-selling vehicles in the USA were either pickup trucks or SUVs) and then drive it alone for short distances.

In a utopian self-driving world, you could order a car in a journey-specific way: small cars for quick journeys, larger cars for longer ones, without needing to own or maintain any of them. The right car for the journey would be environmentally sound and cheaper.

All that aside, the biggest cause of automobile deaths is humans, not cars. Food for thought.

Unfortunately, as humans inherently resist change and often fear what they don’t understand, the progression to a safer world of hive-minded, safety-conscious vehicles will be a slow one, with the occasional statistically unavoidable mortal consequence. Amidst that progress, we should do our best not to turn every self-driving car incident into a witch hunt, as this technology has the power to change the face of vehicle-based mortality statistics significantly.
