AI Research 'Must Avoid Apocalyptic Mistakes'

Dozens of scientists and innovators, including Stephen Hawking and executives from Google, Amazon and SpaceX, have made a pre-emptive call for artificial intelligence research to specifically avoid causing the end of the world.

The letter states that studies into advanced AI must focus on positive aims, and put restrictions on areas that might lead down a dark path.

"Our AI systems must do what we want them to do," the letter warns.

"Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."

Futurists, fiction writers and researchers have long speculated on what might happen if truly self-aware artificial intelligence were brought into existence.

Usually, nothing great.

But now the reality of artificial intelligence is starting to approach that very point. And while anyone claiming to have created human-like AI is probably still selling management software rather than declaring the start of the end-times, researchers and investors are now saying it's time to pay attention.

The letter published by the Future of Life Institute states that AI is finally coming to fruition: "the establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems".

Inevitably, it says, this means there is now more money swirling around AI research.

"As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research."

"There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence."

Those benefits could be massive, the letter says ("the eradication of disease and poverty are not unfathomable"), but so are the pitfalls. While the letter is not specific, research by the Oxford Martin School has suggested that up to 40% of all jobs currently undertaken by humans could be at risk from automation. Others - including Hawking - have suggested that a "runaway" AI could even pose a physical threat to humanity as a whole.

The letter links to a research paper - which is well worth reading - on the future of AI and its potential benefits, and calls for more research in the areas it identifies.

It also calls for law and ethics research into:

  • Liability and law for autonomous machines - if self-driving cars are involved in fatal crashes, who is liable?
  • Machine ethics: "How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost?" - ie, should your robot car kill you to save two others? (A toy sketch of this trade-off follows this list.)
  • Autonomous weapons: should deadly robots be made to comply with the Geneva Conventions and other laws of war?
  • Privacy
  • Professional ethics
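To make the machine-ethics question above concrete, here is a minimal sketch in Python of the expected-cost trade-off the letter describes. All of the probabilities, costs and the single-scale cost model are illustrative assumptions; the letter poses the question without prescribing any such calculus.

```python
# Illustrative only: a toy expected-cost comparison of the kind the
# letter's machine-ethics question gestures at. Every number here,
# and the idea of putting injury and property damage on one scale,
# is an assumption for illustration - not anything from the letter.

P_INJURY = 0.01          # small probability of injuring a human (assumed)
COST_INJURY = 1_000_000  # cost assigned to a human injury (assumed, arbitrary units)
P_MATERIAL = 0.99        # near-certainty of incurring material damage (assumed)
COST_MATERIAL = 5_000    # material damage cost (assumed, arbitrary units)

expected_injury_cost = P_INJURY * COST_INJURY        # 0.01 * 1,000,000 = 10,000
expected_material_cost = P_MATERIAL * COST_MATERIAL  # 0.99 * 5,000 = 4,950

# Under this toy model the vehicle swerves and accepts the material
# damage, since 4,950 < 10,000. The hard ethical question is precisely
# whether injuries and property damage belong on one scale at all.
if expected_material_cost < expected_injury_cost:
    print("Swerve: accept the near-certain material cost")
else:
    print("Proceed: accept the small risk of injury")
```

The point of the sketch is not the arithmetic but the modelling choice it smuggles in: once you pick the numbers, the decision is trivial, which is why the letter calls for research into who should pick them and how.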

And it concludes with this chilling quote from Stanford's One Hundred Year Study on Artificial Intelligence:

"We could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes – and that such powerful systems would threaten humanity. Are such dystopic outcomes possible? If so, how might these situations arise? ...What kind of investments in research should be made to better understand and to address the possibility of the rise of a dangerous superintelligence or the occurrence of an “intelligence explosion”?"

Fortunately AI hasn't quite got to that point yet - and the potential benefits to humanity remain enormous. But perhaps more importantly, if the technology is possible at all, the money involved means its development is now pretty much inevitable. The warning is simply an attempt to make sure that when it arrives, it arrives in the right way.
