There are over 1.2 million deaths on the world's roads each year. For every fatality, at least 20 other people suffer serious but non-fatal injuries. Some 94% of these incidents are estimated to result from driver error; the remainder are attributed to factors such as extreme weather or vehicle failure (the latter of which is very rare).
Against a backdrop of figures like these, the question is this: when a human's driving performance is often so poor, how safe should autonomous vehicles have to be? Or to put it another way: is any improvement on human drivers good enough?
Tesla's Autopilot, Mercedes' Drive Pilot and Nissan's ProPilot are all examples of the auto industry developing advanced driver assistance technologies designed to take the burden of driving away from humans. Despite the popularity of these features with consumers, they pose a safety issue for the industry. Google's research using in-car cameras shows that even when explicitly told not to take their attention off the road, drivers quickly place too much faith in the car's ability to handle any scenario and start checking email or watching movies. Ford has recently observed the same problem and reached the same conclusion.
Whereas full "Level 5" autonomy requires no human input, driver assistance technologies that automate the driving task to a large degree, but not completely, are described as Level 2 or Level 3 autonomy. These systems require the human driver to be able to take back control at a moment's notice, which isn't likely if the driver is focused on an aging Liam Neeson's exploits in the denouement of Taken 7.
This behavioural trap is so dangerous and so persistent that the only safe approach to autonomous vehicles is to avoid the car and the driver sharing any elements of the driving task. Full, Level 5 automation has to be the goal.
On the whole, the automotive industry uses risk-based safety standards which have led to an ultra-low prevalence of vehicle failures. The strictest standards, designed to prevent failures that can result in death or serious injury, demand a maximum undetected failure rate of just one in every 11,000 years.
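To see where the "one in every 11,000 years" figure comes from, here is a back-of-envelope check. It assumes the target corresponds to an undetected dangerous failure rate of 10^-8 per operating hour, which is in line with the strictest automotive functional safety targets (such as ISO 26262 ASIL D); the exact standard behind the article's figure is an assumption, not stated in the text.

```python
# Sketch: relating a per-hour failure-rate target to the
# "once in roughly 11,000 years" figure quoted in the article.
FAILURE_RATE_PER_HOUR = 1e-8   # assumed undetected-failure rate (cf. ISO 26262 ASIL D)
HOURS_PER_YEAR = 24 * 365      # 8,760 hours of continuous operation

# Mean time between undetected failures, in years of continuous operation.
mean_years_between_failures = 1 / (FAILURE_RATE_PER_HOUR * HOURS_PER_YEAR)
print(f"{mean_years_between_failures:,.0f} years")  # roughly 11,400 years
```

Note that this assumes continuous operation; for a car driven only an hour or two a day, the mean time between failures in calendar years would be far longer still.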
Current vehicle technologies rely heavily on human drivers to perceive and reason about their environment in order to make the complex control decisions that are beyond the car's current capabilities.
When concentrating and sober, humans are generally not bad drivers. The problems start when they fail at one or both of these things. That's why they're at least 200 times more likely to be the cause of a crash than the vehicle itself.
The autonomous vehicle technology in development today aims to replace the high level perception and reasoning that humans use when driving well. This means that Level 5 systems must respond appropriately to a far broader range of scenarios than current Level 2 systems.
So what is "safe enough," and how do we get there? Testing clearly has a vital role to play, but using the same failure targets applied to present-day vehicle electronics systems to test the fully autonomous systems in development would make testing virtually impossible. Manufacturers would need to simulate an almost infinite number of objects and situations in order to test new autonomous technologies.
We know that autonomous vehicles can be far safer than human drivers. Even if they are still some distance away from the current standard (for low-complexity components) of one undetected failure every 11,000 years, there's a clear moral and practical case for an intermediate target for autonomous systems.
Driverless cars which are just twice as safe as human drivers would deliver accident rates in the region of only 200 incidents per billion miles driven. Serious injury and death on our roads would be halved.
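The scale of that halving can be sketched with the figures from the opening paragraph (1.2 million deaths a year, at least 20 serious injuries per fatality). This is purely illustrative arithmetic, assuming universal adoption of vehicles twice as safe as human drivers:

```python
# Back-of-envelope impact of halving crash rates, using the
# article's own headline figures. Illustrative only.
ANNUAL_DEATHS = 1_200_000
SERIOUS_INJURIES_PER_DEATH = 20  # "at least 20" per fatality
SAFETY_FACTOR = 2                # "twice as safe as human drivers"

deaths_averted = ANNUAL_DEATHS * (1 - 1 / SAFETY_FACTOR)
injuries_averted = deaths_averted * SERIOUS_INJURIES_PER_DEATH
print(f"{deaths_averted:,.0f} deaths and {injuries_averted:,.0f} "
      "serious injuries averted per year")
```

On these assumptions, the halving corresponds to roughly 600,000 deaths and 12 million serious injuries averted each year.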
That might sound simple, but even with a more feasible (lower) failure rate target, the testing and validation effort for autonomous systems is still vast. The existing standards do, however, provide a useful framework that can be adapted to test fully autonomous systems.
A staged approach should be favoured for real-world deployment. Systems should first be validated in the real world over fixed routes of limited length, which constrains the range of scenarios and hazard events a vehicle could encounter. This approach would let consumers enjoy the enhanced safety, cost and convenience benefits of autonomous mobility-as-a-service sooner than a "go-anywhere" approach to testing would.
Driverless cars will improve societies and make our world a better place with lower congestion, quicker journeys, cheaper travel and fewer emissions. But the primary objective for any autonomous vehicle technology must be to reduce death and injury. For drivers to get behind autonomous vehicles, the industry needs to prove that these vehicles perform better than humans. This is very achievable, but first we need to set sensible risk-based safety standards which will enable us to prove that this is the case.