Does The Law Think Robots Should Have Human Rights?

The debate around robots isn't slowing down. Will they take our jobs? Will humans and robots be allowed to legally marry in the future? Should robots be granted a status and rights akin to those given to a human?

The last question is the latest being asked by the European Parliament's Legal Affairs Committee in its January 2017 report, which aims to push forward the debate on whether robots, and indeed other AI technology, should be granted 'personhood' status. If approved, robots could benefit from the protections that the law typically gives entities with a legal personality - like us - and they could also start taking on some legal responsibility of their own.

Who would be responsible if things went wrong?

If robots were given the ability to 'own' or be responsible for something, the way we use them, and ultimately how we interact with them, would need to change fundamentally. If you knew that your robot might own the shopping you asked it to buy for you, or that you might owe a duty of care in how you treat your robot, would that force you to think differently about how you use it?

As it stands, current liability regimes don't hold robots liable per se for their actions. Regardless of the technology's growing sophistication and autonomy, the law doesn't recognise the robot as having an impact that warrants changing existing legal approaches.

The newer thinking, indicated in the EU report and hinted at by the House of Commons Science and Technology Committee in October 2016, is that the current liability regime could be amended - potentially moving to an allocation of responsibility in which a proportion of the liability is linked to the robot.

The Legal Affairs Committee has also drafted a robotics charter, which says that designers must build obvious 'kill switches' into the technology, and has called for an advisory code of conduct for robotics engineers.

The Committee also looked back to science fiction, reminding developers and engineers of Asimov's Three Laws of Robotics - first published in 1942 - which have long been seen as sensible principles for how we build robots while protecting humanity.

What about insurers?

The availability and accessibility of insurance is an important consideration in any discussion of liability, and it is already shaping approaches. For example, the Department for Transport has said it plans to extend compulsory motor insurance to driverless cars, and some car manufacturers have indicated that they'll take on liability for the driving done by their driverless cars.

Where legislators are slow to react, the resulting uncertainty tends to be filled by industry, and the insurance industry is likely to play a significant part in shaping any future legal approach.

What will happen in the future?

It isn't yet clear what status the European Parliament or the UK will give to robots - whether that's the same status as a human, a distinct legal status of their own, or no status at all.

Personally, I think we need to consider the status of robots that can show truly autonomous action and cognitive behaviour. I believe advances in technology will make it essential to revisit how the law works. It is an exciting time - the areas of law that will need revisiting are among the most critical and fundamental aspects governing our society, and any material change is likely to be revolutionary.
