Want Your Very Own Robot Assistant Like Tony Stark's Jarvis? Well Soon You Could...

Tony Stark's digital manservant JARVIS (Just A Rather Very Intelligent System) is the assistant we all dream of having.

Polite, informative and always on hand to get you out of a jam. I'd call mine Jeeves, of course. What Englishman wouldn't?

"Jeeves, send the car round."

"Very good sir."

But some leading scientists and thinkers, including Oxford Professor Nick Bostrom, believe that if we are not careful we could unleash something closer to HAL - the murderous computer from Stanley Kubrick's 2001: A Space Odyssey (for those not familiar, imagine your iPhone suddenly turning on you like the guy with the axe from The Shining - "Here's Siri!").

It seems the stakes are genuinely this high in our quest to build super-intelligent machines. Utopia or Annihilation is not an exaggeration. How we go about creating Artificial Intelligence (AI) will have huge consequences for the future of humanity.

So why is the gulf between the possibilities so wide?

It appears we will not know until we can answer two questions:

1. Will machines surpass humans in all areas of intelligence?

2. Are the key human virtues of compassion, love and tolerance transferable to an AI?

Regarding the first question, there seems every reason to believe that machines will easily outpace humans in all intellectual pursuits. They already do in so many areas. Our laptops and smartphones have memories greater than Albert Einstein's and processing speeds faster than Stephen Hawking's. Whilst they are currently not very good at thinking for themselves, or in creative ways, this looks likely to change.

How can this be possible?

Well, if you take the modern scientific view to be correct - that consciousness and intelligence are the results of the material structure of the brain and not of any spooky or magical "soul" - then it is only logical that these processes can be observed, understood and replicated by science.

Whether, and when, machines become able to improve themselves will prove to be the crucial element (the experts call this ability "recursive self-improvement"). If it is possible, it will be the tipping point and the moment we are surpassed - the moment we bear witness to an explosion in intelligence that will make Newton's theory of gravity look on a par with a chimpanzee peeling a banana.

And for all those who will claim: "But machines will never have human intelligence! They will never have that uniquely human spark!"

Well, you are right, but perhaps not for the reason you think.

Simply put, making an exact copy of the human brain, with all its limitations, has never been the goal. We want increased intelligence, not more of the same.

You only have to look to our best technology to realize that this is what we have been doing all along. Sat-Nav, instant messaging, the Internet. We use these things as tools to help us do what was not possible before. This is what the robot takeover will look like. A handing over of responsibility. We have passed so much responsibility to machines already. We will undoubtedly give more. The Moon landing, Concorde, the Large Hadron Collider. Human progress, it seems, is now irrevocably tied to the progress of machines.

Machines will outmatch us in terms of intellect. This seems almost certain.

But will they become self-aware in any real sense?

Will they develop their own goals - ones that are compatible with ours?

Most importantly, will they care about humans? Will they share our sense of compassion - of love?

In all honesty, I am not sure. And I don't think anyone has clear answers.

But on this last question of compassion, I am going to make a spectacular and completely unqualified claim and state that, yes, they will have it.

Here is why.

Firstly, it is important to point out a crucial difference, as noted by Roy Baumeister in a related piece he wrote for Edge.org.

Humans are the product of evolution.

Machines are the product of human thought.

This means that scarcity, violence and death will not drive their actions in the way they do for humans. Neither will they hold extremely dogmatic beliefs about the nature of reality. These four things - scarcity, violence, death and dogma - are the cause of so much suffering in the world. Machines, I claim, if fully conscious, will not experience them.

What machine will fear scarcity in the age of infinitely replicable information?

Why would a machine fear violence or death, when it cannot feel pain or die?

No machine is going to think it is a good idea to stone adulterers to death or seek vengeance on sinners. Why? Because there is no logical or moral basis for such actions.

The work of Steven Pinker and Sam Harris offers particular encouragement here.

In The Better Angels of Our Nature, Steven Pinker made a strong argument that with the rise of civilization - reason, science and technology - we have seen a significant and sustained decline in violence.

Sam Harris in The Moral Landscape provided a solid case for a scientific basis on which to make moral truth claims.

These understandings of the world are the results of human intelligence and reason. In other words, through reason and logic we have found certain truths to be evident. But reason and logic are not confined to human brains; a machine with super-intelligence will operate according to these same rules.

So personally, I am excited to see what the machines have in store for us.
