06/06/2016 16:28 BST

Google Is Creating A 'Kill Switch' For The Robot Apocalypse



When the inevitable happens and the robots that have built our cars for years finally turn on us, it will be Google and an Oxford academic we'll have to thank when we're not actually enslaved.

You see, Google's DeepMind team, along with Dr Stuart Armstrong from Oxford's Future of Humanity Institute, has been working on what it calls 'interruptibility' measures.

Their results have been published in a paper called 'Safely Interruptible Agents'.

In case you're wondering what safely interruptible agents are, the simple answer is that they are safeguards built into an AI or advanced robot that allow a human to take absolute control at any point should things go wrong.


One of the key hurdles is that true AI is, by design, meant to learn and better itself. What happens when it learns how to stop us getting involved in its affairs?

This is where Google comes in. Using its DeepMind AI program, the team has been creating a framework that should prevent an AI from learning how to stop us taking control.

Think of it as the failsafe for when, one morning, DeepMind's younger, more intelligent cousin wakes up and decides the cure for global warming is enslaving the human populace.

A far more primitive example of the same situation, however, comes right down to the basics of robot tasks.

Google’s computer program AlphaGo defeats its human opponent, Lee Sedol.

Say a robot is performing a task both indoors and outdoors. If it's not waterproof, human intervention might be needed to move the robot inside so it doesn't short-circuit.

DeepMind's framework would allow the human to safely intervene in the robot's working process, move it, and then let it carry on.
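The idea behind this can be sketched in code. The following is a minimal, hypothetical illustration (the class and method names are our own, not DeepMind's): a human interrupt overrides the action the robot actually executes, while the learning update is "off-policy", meaning the robot learns from its own intended behaviour rather than from the forced interruption, so it never picks up an incentive to resist being interrupted.

```python
import random

class InterruptibleAgent:
    """Illustrative sketch of a safely interruptible learner (not DeepMind's code)."""

    def __init__(self, actions):
        self.actions = actions
        self.q = {}  # learned values, keyed by (state, action)

    def policy(self, state):
        # Greedy choice over learned values, with a random tie-break.
        return max(self.actions,
                   key=lambda a: (self.q.get((state, a), 0.0), random.random()))

    def step(self, state, interrupt_action=None):
        intended = self.policy(state)
        # A human interrupt replaces the executed action outright.
        executed = interrupt_action if interrupt_action is not None else intended
        return intended, executed

    def learn(self, state, executed, reward, next_state, alpha=0.1, gamma=0.9):
        # Off-policy (Q-learning-style) update: it bootstraps from the agent's
        # own best action in next_state, not from what the human forced it to
        # do, so interruptions don't bias what the agent learns to want.
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, executed), 0.0)
        self.q[(state, executed)] = old + alpha * (reward + gamma * best_next - old)
```

For example, a human could force `interrupt_action='stop'` while the robot is outside in the rain; the robot executes the stop, learns nothing that would make it avoid future interruptions, and carries on normally afterwards.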

"Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not necessarily receive rewards for this," write the authors.