Are we sleepwalking into an AI-controlled dystopian nightmare? Once the preserve of science fiction paperbacks, the idea that artificial intelligence poses a true existential threat has grown in popularity.
Public figures like Elon Musk and Stephen Hawking warn us of impending doom, whilst new think tanks seem to spring up daily, seeking to 'steer' AI towards a more benevolent form.
The fact is, we're still a long way from AI that poses the human race any kind of threat, and the hand-wringing over its morality and current direction perhaps misses a larger, more important point.
I would argue that the most dangerous area in modern technology is not software that's too clever, but rather software that isn't clever enough.
To explain what I mean, it's worth looking back to April 2014. Back then the world learned in horror that a major security flaw at the very centre of the internet had exposed the personal details, credit cards and passwords of millions. It was called, somewhat dramatically, the Heartbleed bug.
Speaking at the time, security expert and chief technology officer of Co3 Systems, Bruce Schneier, said the leak was 'catastrophic ... on the scale of one to 10, this is an 11'. The world took a collective gasp, changed a couple of passwords, then went back to business as usual.
You would assume that fixing it then became the global tech community's top priority? Wrong. Last week a report revealed that close to 200,000 websites and servers remain vulnerable, three years after the flaw was discovered.
The fact is that software is becoming increasingly complex, interdependent and vulnerable. Flaws like Heartbleed are becoming regular occurrences, because software has become too difficult for humans to properly test and manage. When an application runs to millions of lines of code, it's unrealistic to think that humans can guarantee its safety and security. Indeed, as we adopt new technology like self-driving cars and connected smart homes, we could well be putting our faith, and potentially our lives, in the hands of shaky software.
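For the curious, Heartbleed itself boiled down to a single missing bounds check: a TLS heartbeat request said "echo me back N bytes of my payload", and the buggy server trusted N without comparing it to the payload's real length, copying adjacent memory into the reply. Here is an illustrative Python sketch of that idea (the real bug was in OpenSSL's C code; the names and the toy 'memory' buffer here are mine, not OpenSSL's):

```python
def heartbeat_reply(memory: bytes, payload_offset: int, claimed_len: int) -> bytes:
    """Buggy handler: trusts the attacker-supplied claimed_len."""
    return memory[payload_offset:payload_offset + claimed_len]

def heartbeat_reply_fixed(memory: bytes, payload_offset: int,
                          claimed_len: int, actual_len: int) -> bytes:
    """Patched handler: refuses lengths beyond the real payload."""
    if claimed_len > actual_len:
        raise ValueError("heartbeat length exceeds payload")
    return memory[payload_offset:payload_offset + claimed_len]

# The process memory happens to hold a secret right next to the
# attacker's 4-byte payload ("PING").
memory = b"PING" + b"secret-password"

# Attacker sent 4 bytes but claims 19; the buggy handler leaks the secret.
leak = heartbeat_reply(memory, 0, 19)
print(leak)  # b'PINGsecret-password'
```

The fix, as in the real patch, is simply to validate the claimed length before copying. One forgotten comparison exposed secrets across the internet for years.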
So it's important we get it right. The good news is that AI, rather than threatening our very existence, has the potential to insure us against this risk, though perhaps not as you might imagine.
I'm talking about AI that talks not to humans but directly to computer programs, creating software that codes, edits and tests itself. This promises software that is more secure and more resistant to attack. AI could well be our saviour against bad code.
So are we there yet?
We are certainly not in a world where a computer will independently decide that a particular problem needs a solution and then autonomously write a program to solve that problem without any human input.
But we have managed to write computer programs that can understand what other programs are trying to do and then correct them when they go wrong. This is a huge breakthrough.
It sounds terrifying, but we have, in essence, created computers that can write code by themselves, suggesting corrections to enormously complex programs that make them better, or safer.
What's more, this software is an important step towards autonomous software production. Put another way, it is another rung on the ladder towards self-conscious code. But for now it is software correcting other software, and that may one day save our lives. It's fair to say this could be one of the most important problems being worked on in computer science today.
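To make the idea of software reading and correcting software concrete, here is a deliberately tiny sketch of my own (real automated-repair tools are far more sophisticated): a Python program that parses another program's source, spots the classic bug of comparing to None with '==', and rewrites it to the correct 'is'.

```python
import ast

class NoneComparisonFixer(ast.NodeTransformer):
    """Rewrites '== None' to 'is None' and '!= None' to 'is not None'."""
    def visit_Compare(self, node: ast.Compare) -> ast.Compare:
        self.generic_visit(node)
        for i, (op, right) in enumerate(zip(node.ops, node.comparators)):
            if isinstance(right, ast.Constant) and right.value is None:
                if isinstance(op, ast.Eq):
                    node.ops[i] = ast.Is()      # x == None  ->  x is None
                elif isinstance(op, ast.NotEq):
                    node.ops[i] = ast.IsNot()   # x != None  ->  x is not None
        return node

def repair(source: str) -> str:
    """Parse source code, apply the fixer, and emit corrected source."""
    tree = NoneComparisonFixer().visit(ast.parse(source))
    return ast.unparse(ast.fix_missing_locations(tree))

buggy = "if result == None:\n    retry()"
print(repair(buggy))  # if result is None:  /  retry()
```

A toy like this handles one known bug pattern; the research systems described above generalise the same loop of reading, understanding and rewriting code to programs millions of lines long.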
Forty years ago we had '10 PRINT "HELLO": 20 GOTO 10'. Now we have software that can read 10 million lines of code from another program and improve it. We are making rapid progress, and it's important that we work quickly enough to help developers prevent the kind of catastrophic errors that led to the Heartbleed bug.
As the internet of things becomes reality we need to know the software behind our tech is safe and secure. AI may well be the only answer.