Google's AI Research Team Outline Five Questions Every Robot Needs To Answer

Turns out robots aren't the problem, we are.

When it comes to predicting a robot apocalypse, few of us take humanity itself into consideration.

It seems that when the robots and AI do eventually enslave us, almost nobody will be looking around asking: "Was this our fault?"


Well, until now, that is. Google's AI research team have been asking just that question, and they've come up with a detailed research paper which explores how we can safely evolve robots into the ultra-intelligent servants we want them to be.

In a subtle nod to the amount of attention the 'rogue AI' story has received in the press and wider media, Google's team are hoping to throw some empirical research into the mix.

Entitled 'Concrete Problems in AI Safety', the paper puts forward five key questions that every robot should be able to answer before it can be deemed a safe and responsible member of society.

While the paper goes into each of these questions in considerable detail, Google's own blog post has rather helpfully broken them down (a rough code sketch of the first two problems follows the list):

  • Avoiding Negative Side Effects: How can we ensure that an AI system will not disturb its environment in negative ways while pursuing its goals, e.g. a cleaning robot knocking over a vase because it can clean faster by doing so?
  • Avoiding Reward Hacking: How can we avoid gaming of the reward function? For example, we don’t want this cleaning robot simply covering over messes with materials it can’t see through.
  • Scalable Oversight: How can we efficiently ensure that a given AI system respects aspects of the objective that are too expensive to be frequently evaluated during training? For example, if an AI system gets human feedback as it performs a task, it needs to use that feedback efficiently because asking too often would be annoying.
  • Safe Exploration: How do we ensure that an AI system doesn’t make exploratory moves with very negative repercussions? For example, maybe a cleaning robot should experiment with mopping strategies, but clearly it shouldn’t try putting a wet mop in an electrical outlet.
  • Robustness to Distributional Shift: How do we ensure that an AI system recognizes, and behaves robustly, when it’s in an environment very different from its training environment? For example, heuristics learned for a factory workfloor may not be safe enough for an office.
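
To make those first two problems a little more concrete, here is a rough Python sketch of a cleaning-robot objective that can be gamed, alongside one with a simple side-effect penalty. It is purely illustrative and not taken from the paper; all the names (World, naive_reward, penalised_reward) and the penalty weight are assumptions.

```python
# A toy sketch of "negative side effects" and "reward hacking" using a
# hypothetical cleaning-robot objective. Illustrative only.

from dataclasses import dataclass


@dataclass
class World:
    mess_visible: int   # messes the robot's sensors can still see
    mess_hidden: int    # messes covered up rather than cleaned
    vases_broken: int   # side effects caused while cleaning


def naive_reward(w: World) -> float:
    # Rewards only what the robot can observe: hiding messes and
    # breaking vases both "pay off" under this objective.
    return -w.mess_visible


def penalised_reward(w: World) -> float:
    # One common mitigation: also charge the agent for hidden messes and
    # for changes to the environment beyond its task (a side-effect penalty).
    return -(w.mess_visible + w.mess_hidden) - 10.0 * w.vases_broken


if __name__ == "__main__":
    honest = World(mess_visible=0, mess_hidden=0, vases_broken=0)
    gaming = World(mess_visible=0, mess_hidden=5, vases_broken=2)

    # Under the naive objective both outcomes look equally good...
    print(naive_reward(honest), naive_reward(gaming))          # 0 0
    # ...while the penalised objective prefers actually cleaning up.
    print(penalised_reward(honest), penalised_reward(gaming))  # 0.0 -25.0
```
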

As you can see, these are no Asimov's laws; in fact, they're designed to be applied far earlier in the robot development process.

At their most basic level, these requirements are simply meant to help us create the next generation of factory robots.

You see, before AI even enters the equation, there have already been instances where 'intelligent' autonomous robots have injured human beings.


Take, for example, a manual labouring robot that works at a quarry. You want to give it the freedom to find the technique that works best for it.

What you don't want is for it to decide that explosives are the best technique and start using them without asking us first.

That sounds obvious, but in robotics it's an important constraint that can be very hard to implement once you give a machine any level of autonomy or 'machine learning' capability.
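
As a very rough illustration of how that constraint might look in code, here is a minimal Python sketch in which the robot can experiment, but only within a human-approved set of techniques. Everything here (the whitelist, the function name) is hypothetical and not drawn from Google's paper.

```python
# A minimal "safe exploration" sketch for the quarry example above: the
# robot may try techniques at random, but only from a human-approved set.

import random

# Techniques a human operator has signed off on for autonomous trial.
ALLOWED_TECHNIQUES = {"pickaxe", "hydraulic_hammer", "wire_saw"}


def choose_technique(candidates: list[str]) -> str:
    # Anything outside the whitelist ("explosives", say) needs sign-off first.
    safe = [t for t in candidates if t in ALLOWED_TECHNIQUES]
    if not safe:
        raise RuntimeError("No pre-approved technique available; ask a human.")
    # Explore freely, but only inside the approved set.
    return random.choice(safe)


if __name__ == "__main__":
    print(choose_technique(["explosives", "wire_saw", "pickaxe"]))
```
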
