22/03/2016 12:34 GMT | Updated 23/03/2017 05:12 GMT

Artificial Intelligence - The Fourth Revolution?

Just over a week ago, Google DeepMind's AlphaGo machine crushed 18-time world Go champion Lee Sedol 4-1 in a five-game series - an achievement many experts had predicted to be at least a decade away.

And whilst the victory of machine over man was a great result for Google, Machine Learning, and Artificial Intelligence (AI) - it also served as a chilling reminder that the ever-extending arm of AI is showing absolutely no sign of slowing.

DeepMind founder Demis Hassabis has stated that Go is "probably the most complex game ever devised by man." And he's not wrong.

For starters, it's played on a 19-by-19 board, which allows for roughly 10^171 possible layouts, versus roughly 10^50 possible configurations on a standard chessboard - and an estimated 10^80 atoms in the universe.
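To get a feel for that scale, here is a quick back-of-envelope sketch. It assumes the naive upper bound in which each of the board's 361 intersections is simply empty, black, or white, ignoring Go's capture and legality rules - so it slightly overshoots the commonly quoted 10^171 figure:

```python
# Back-of-envelope scale comparison for Go's state space.
# Assumption: each of the 361 intersections can be empty, black, or
# white, ignoring capture/legality rules - a naive upper bound, not
# the exact count of legal positions.
from math import log10

go_upper_bound = 3 ** (19 * 19)   # 3^361 possible board colourings
chess_estimate = 10 ** 50         # commonly cited chess figure
atoms_estimate = 10 ** 80         # estimated atoms in the universe

print(f"Go (upper bound): ~10^{log10(go_upper_bound):.0f}")                   # ~10^172
print(f"Go vs chess:      ~10^{log10(go_upper_bound) - 50:.0f} times larger")
print(f"Go vs atoms:      ~10^{log10(go_upper_bound) - 80:.0f} times larger")
```

Even this crude estimate makes the point: the search space is so vast that brute-force lookahead, of the kind that conquered chess, simply cannot work for Go.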

Because of this, players are often said to rely heavily on subconscious intuition or 'gut feeling' - meaning that something once thought to be held exclusively in the realm of biology can now be recreated in a world of silicon and binary.

And whilst the resounding victory is proof that AI, and more specifically Machine Learning, could one day move into other 'exclusively' biological facets of intelligence and cognition - the broader implications are ones that now demand the attention of each and every one of us.

In a scintillating, awe-inspiring, and cautionary TED talk, Jeremy Howard - the CEO of Enlitic, which employs Machine Learning for medical imaging diagnosis - estimates that "80 percent of the world's employment in the developed world is stuff that computers have just learned how to do."

It's safe to say that this is very exciting from the standpoint of technological advancement, but a real worry arises when one considers how this 80% will 'earn their keep' once they're replaced by machine counterparts.

And how will the existing picture of economic inequality look once the small cohort of tech entrepreneurs making, programming, and selling these very machines eventually displaces a workforce that was once economically valuable but is now jobless?

Is the 1% set to become a rose-tinted memory replaced by the 0.0001%?

In a recent article, World Economic Forum founder and Executive Chairman Klaus Schwab talks of the Fourth Industrial Revolution, which "is characterized by a fusion of technologies that is blurring the lines between the physical, digital, and biological spheres."

Like the revolutions that preceded it, Schwab points out, the fourth revolution has the potential to raise wealth and income on a global scale and, ultimately, to improve the quality of life for all.

(Insert scepticism here.)

Interestingly, Schwab does allude to some potential benefits of displacement by machines in the workforce, resulting in a net increase in what he calls "safe and rewarding jobs."

However, he goes on to predict: "This will give rise to a job market increasingly segregated into 'low-skill/low-pay' and 'high-skill/high-pay' segments, which in turn will lead to an increase in social tensions."

Something an increasingly fractured and tense social landscape probably doesn't need right about now.

On top of this, the exponential rate at which the technology is advancing leaves one to ponder whether the ethical and moral discourse is getting left behind in the proverbial dust - a concern that's all too apparent in Alex Garland's recent masterpiece Ex Machina.

In the timeless words of Jurassic Park's fictional mathematician Dr Ian Malcolm: "Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."

And whilst Luddite shouts of warning and scaremongering demand that AI progress be scrapped altogether - one can easily see that this is both unlikely and potentially damaging.

To deprive humankind of beneficial technologies, such as those provided by Howard's own Enlitic - which is democratising medical imaging diagnosis on a global scale - seems not only unnecessary but unfair.

It is important to remember in all this that AI in itself holds no intrinsic positive or negative value; it is the way in which we apply it that demands careful consideration.

And more to the point - the inevitable, rapid, and largely unnoticed way in which this is happening makes it essential that measures be put in place for if, and increasingly likely when, the 'machines take over'.

But what are these measures? And more importantly - when are people going to start talking about them?