Experts Warn AI Is Now A 'Clear And Present Danger' In Major Report

From creating fake news to hacking fleets of self-driving cars.

In a major new report, a team of 26 international experts has warned that artificial intelligence (AI) is now a “clear and present danger”, urging governments and corporations to address the “myriad” threats it poses.

The report was co-authored by experts from Oxford University, the Centre for the Study of Existential Risk, the Electronic Frontier Foundation and others.

In the report, titled “The Malicious Use of Artificial Intelligence”, the authors present a series of scenarios in which AI could pose the greatest danger.

The three scenarios examine how artificial intelligence could be used to breach our digital, physical and political security.

The rise of the malicious chatbot

In the first scenario, a person is conned into clicking on malware links after a conversation with what they thought was a friend.

In fact, the conversation was with a highly advanced chatbot that had learned to mimic the friend’s writing style and had been created to infect as many computers as possible.

In an even more alarming step, the report then imagines a world where these chatbots could mimic a friend over a video call.
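
The report itself stops at the scenario, but its basic ingredient, statistically imitating how someone writes, is surprisingly easy to sketch. Below is a deliberately toy bigram model trained on a few invented messages; real chatbots rely on far more capable neural language models, but the underlying principle, learning which words tend to follow which, is the same.

```python
# Toy sketch only: a bigram Markov model over invented messages.
# Real chatbots use neural language models, but the core idea of
# learning "which word follows which" is the same.
import random

samples = [
    "hey are you coming to the game tonight",
    "hey did you see that link i sent you",
    "are you free for lunch tomorrow",
]

# Build a table mapping each word to the words observed after it.
follows = {}
for line in samples:
    words = line.split()
    for a, b in zip(words, words[1:]):
        follows.setdefault(a, []).append(b)

# Generate a short message by walking the table from a seed word.
random.seed(0)
word, out = "hey", ["hey"]
for _ in range(8):
    nxt = follows.get(word)
    if not nxt:
        break
    word = random.choice(nxt)
    out.append(word)
print(" ".join(out))
```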

The weaponised cleaning robot

The next scenario imagines a terrorist cell hacking a cleaning robot of the same model as those used inside a government ministry.

The hacked bot then enters the ministry, seamlessly replacing another machine that has been removed, and for most of the day carries out its main function of cleaning.

However, once it makes visual contact with its intended target, the bot heads towards them and detonates an explosive device hidden inside it.

The report goes on to explain how AI could also transform other arenas of warfare, automating many high-skilled roles such as long-range snipers and enabling weapons like self-aiming rifles.

The surveillance state

The final scenario imagines a world where a huge rise in fake news drives a citizen to write something publicly criticising their government.

An AI-powered, state-run surveillance system trawls through millions of messages and identifies those that contradict government policy. It finds the message written by the unhappy citizen, who is promptly arrested.
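
The report does not describe any specific system, but the scale is the point: even a simple text classifier, trained on a handful of labelled examples, can sort millions of messages faster than any human censor. The sketch below is purely illustrative, using invented data and an off-the-shelf scikit-learn pipeline rather than anything from the report.

```python
# Illustrative only: a tiny text classifier flagging "critical" messages.
# All data is invented; scikit-learn is assumed to be installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = [
    "this new policy is a disaster and must be stopped",  # critical
    "the government has failed us again",                 # critical
    "the policy announcement was a complete failure",     # critical
    "great match last night what a game",                 # neutral
    "lovely weather today",                               # neutral
    "looking forward to the weekend",                     # neutral
]
train_labels = [1, 1, 1, 0, 0, 0]  # 1 = critical of policy, 0 = neutral

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# "Trawl" a stream of messages and surface the ones the model flags.
stream = [
    "what a great game last night",
    "this policy has failed my business",
]
for text, flagged in zip(stream, model.predict(stream)):
    if flagged:
        print("flagged:", text)
```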

This scenario is made worse still as the experts consider the rise of AI-powered video manipulation.

Imagine seeing a video of what you think is a politician saying something controversial. They then lose their job. Months later it’s revealed that the video was in fact a fake, created using advanced AI to mimic the person’s facial features and voice.

Solutions

The report then offers four recommendations that governments, corporations and academic institutions should follow:

1. Policymakers should collaborate closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.

2. Researchers and engineers in artificial intelligence should take the dual-use nature of their work seriously, allowing misuse-related considerations to influence research priorities and norms, and proactively reaching out to relevant actors when harmful applications are foreseeable.

3. Best practices should be identified in research areas with more mature methods for addressing dual-use concerns, such as computer security, and imported where applicable to the case of AI.

4. Actively seek to expand the range of stakeholders and domain experts involved in discussions of these challenges.

Finally, the experts point out that only through a combined attitude of shared responsibility and transparency can we hope to prevent AI from being used to cause damage on a vast scale.
