Why Academics, Developers, And Politicians Must Collaboratively Contemplate AI

Let’s listen to the experts, and make sure the profit-minded players are listening too

If you give a group of tech developers a whiteboard, and ask them to imagine how AI could impact the everyday individual, they’ll likely see a window of opportunities: a road where the drunk “driver” can instead let the car control itself; a sky where drones take unprecedented photographs, manage disaster relief, and spare humans from combat casualties; drawers upon drawers of digital file cabinets – within which big data algorithms create customised consumer profiles, publish personalised news updates, and direct both diagnoses in medicine and decisions in law.

Like many Millennials, I feel captivated by these opportunities. From predicting YouTube playlists to asking Siri what the spaces between the prongs of a fork are called (she didn’t know), AI has - on the whole - been a boon to many individuals’ quality of life, including my own.

But if you invite academics to the “whiteboard”, they’ll probably sketch a darker projection – one that postulates a not-so-unlikely scenario: what if AI falls into the hands of the wrong people?

Such was the subject of “The Malicious Use of AI” - a recently published 100-page report written by 26 emerging-tech experts.

While acknowledging the inevitable incentive to innovate, and recognising AI’s unquestionable benefits, the researchers suggest that certain artificially intelligent systems – like the ones spotlighted earlier - could actually provoke large-scale harm by breaching three forms of security: physical, digital, and political.

For instance, the same “self-driving” car that makes the road safe from drivers who are inattentive or drunk could also be used by cyber-terrorists for dangerous ends. The report provides a visual example, in which hackers bring a Jeep to a standstill on a busy highway, then later cause the car to accelerate suddenly or turn the steering wheel wildly, whilst preventing the human “driver” from regaining control. These same methods, they say, could be used to mobilise “swarms of thousands of drones” to execute large-scale, rapid-fire attacks – rendering humans helpless once again.

But beyond physical threats, the report explores how AI could invite subtler consequences - like undermining our institutions’ integrity, or inflicting severe psychological distress on its “beneficiaries”. Quoting cybersecurity expert Waltzman, the report writes that “the ability to influence is now effectively ‘democratised’”, but that such a shift is “not necessarily favourable to democracy”, since it is “very easy” to spread sensationalist, misleading, or outright false information.

In effect, this could mean manipulating our data to spread fake - but highly persuasive - “news”. It could mean flooding our feeds with unsolicited - but highly targeted - advertisements. And while we already see such phenomena underway, the report imagines how “bad actors” could go one step further by exploiting and selling such private information. This could occur subtly - through “spear phishing” emails - or, more creepily, through impersonating loved ones with artificial speech synthesis.

Ultimately, however, the report is hopeful that there are ways we can “forecast, prevent, and better mitigate” these risks. And as with many of our largest-scale societal threats, prevention comes down to collaboration – between those who develop AI (techies), those who research AI (academics), and those who protect AI users (politicians).

Pragmatically speaking, this may be difficult, since AI developers are essentially competing in an “arms race” – a phenomenon highlighted when Vladimir Putin remarked that “the nation that leads in AI will be the ruler of the world”. To techies, “slowing down” is not an option.

But innovating cautiously could be. While the report does not endorse “decreasing openness”, it does suggest we should explore different models of open research – like requiring “pre-publication risk assessments” in certain areas. The report also recommends learning both from and with cybersecurity experts, and enacting legislative measures to protect user privacy.

Yet perhaps most significantly, the report advocates a “culture of responsibility” - one in which AI developers, and the governments under which they operate, recognise the massive stakes AI holds for humanity.

As a start, let’s make sure these institutions don’t allow our natural excitement about AI to overshadow our healthy scepticism. Let’s listen to the experts, and make sure the profit-minded players are listening too. Let’s make sure that the proverbial whiteboard is met with a cautious “black mirror”.
