The machines rise, mankind falls. It's a science fiction trope nearly as old as machines themselves. The dystopian scenarios spun around this theme can make for compelling entertainment, but it's difficult to see them as serious threats. Nonetheless, artificially intelligent systems continue developing apace. Self-driving cars share our roads; smartphones manage our lives; facial recognition systems help catch criminals; and sophisticated algorithms dethrone our Jeopardy and Go champions. Developing these technologies could obviously benefit humanity. But then--don't most dystopian sci-fi stories start out this way?
Discussions about Artificial Intelligence (AI) risk gravitating towards one of two extremes. One is overly credulous scare-mongering. Of course Siri won't transmogrify into the murderous HAL from 2001: A Space Odyssey. But the other extreme is equally dangerous--complacency, the belief that we can overlook these issues now because humanity-threatening AI is decades or more away. Whether it is down to scare-mongering or complacency, serious debates about the role of government, regulatory bodies and courts in regulating AI have been lacking. There are a number of possible explanations for this: the so-called 'legal lag problem', where law is seen as invariably playing catch-up to rapid technological advances; the apparent anti-government libertarian bent of Silicon Valley; and the possibility that AI might elude traditional regulatory regimes. Like nuclear power and genetic research, AI is a classic risk/reward technology. If developed safely, it could bring enormous benefits to society. If developed recklessly, it could pose significant risks. If these risks are to be mitigated, then we need to start taking AI seriously and devote some serious attention to how we want to regulate or restrain its use in our lives.
Whether or not we realise it, we already live in an age of intelligent machines. Everything from loan and mortgage applications, stock transactions and job applications to the music and film suggestions made by services like Netflix is powered by 'weak AI' algorithms that learn to predict everything from risky financial behaviour to who else you might like if you like Taylor Swift. But as AI extends into new sectors, it will bring new potential for error, liability and harm. A survey of AI researchers conducted by TechEmergence identified a wide variety of concerns about the potential risks within a 20-year timeframe--including total financial meltdown as algorithms interact unexpectedly, and the potential for AI to help malicious actors maximise the lethality of biotechnological weapons.
There is, however, plenty of reason for optimism and no need to believe in the inevitability of AI overlords making life crummy for us. At earlier junctures in legal history, technological advances such as barbed wire, cars, radios, computers and the internet carried their own subset of novel legal problems. Despite all requiring a period of adjustment for the legal system to 'catch up', each has found its place within regulatory frameworks, albeit with difficulty. Law is not perfect, nor is it swift. But the comparatively slow pace of law-making and regulation is an inherent strength, one that allows legal systems to absorb the 'shocks' exerted upon them by new technologies and more clearly ascertain how those technologies fit within existing laws and regulations. It is rarely preferable to re-write laws from the ground up, as lawyers in the 1990s suggested was necessary with 'Cyber Law'. However, in the view of some experts, the unique properties of AI will prove exceptionally difficult to regulate compared to other sources of public risk.
Unlike previous advancements such as space flight or the Internet, the intuitive response should not be to concoct a sub-set of 'AI Law', but instead to look to existing legal categories and concepts to see where they may be reasonably extended, and where novel innovations might be required. It will be necessary to leverage the competencies of legislatures, regulatory agencies and courts, and to establish something akin to an Artificial Intelligence Regulation Act that sets out the core values of the regulatory regime. To make AI safe and secure, and to ensure it remains in human control and aligned with human values, it will be necessary to dissuade the creation of AI systems that lack those values and encourage the development of systems that include them.
It's time to kickstart a conversation about the best legal mechanisms for safely managing the development of AI, harnessing its benefits and minimising, as best as possible, its drawbacks. There's no magic bullet for AI regulation, and things can, and will, go wrong. However, there's no reason to conclude that AI presents challenges completely beyond the control of the legal system, which has demonstrated its remarkable adaptive capacities at previous junctures in history in response to technological change. To regulate AI we need to take it seriously, and ensure our governments do too.