Artificial Intelligence vs The Human Brain

In the run-up to Manchester's Brain Box event, which will showcase pioneering science around the workings of the brain, we ask the question: 'will machines with artificial intelligence ever outperform the human brain?' In response, computing pioneer Professor Steve Furber shares his views on why no machine has yet been clever enough to succeed at the Turing test - but says there is still much to play for in understanding how intelligence works.

In his seminal 1950 paper, Computing Machinery and Intelligence, Alan Turing began by considering the question: "Can machines think?" He went on to suggest that this question is difficult to answer directly, and he turned it around into a research experiment that he called 'The Imitation Game', but which subsequent generations have come to know simply as the Turing test for Artificial Intelligence.

The basis of the test is whether a person sitting at a terminal, communicating with another person in one room and a computer in another, can tell, in a reasonable length of time, which of their correspondents is the computer and which is the human. If most people sitting at the terminal cannot get this right most of the time, then the computer is judged to have passed the Turing test.
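As a rough illustration of that pass criterion, the Python sketch below reduces the judgement to whether the judges' identifications beat chance. The names and the 50% threshold are my own simplification for illustration, not anything Turing specified:

    import random

    def machine_passes(judge_guesses):
        """Illustrative pass criterion for the imitation game.

        judge_guesses: one boolean per session, True where the judge
        correctly identified which correspondent was the machine.
        The machine passes if judges do no better than chance (50%).
        """
        accuracy = sum(judge_guesses) / len(judge_guesses)
        return accuracy <= 0.5

    # Example: judges reduced to guessing at random - the machine passes.
    guesses = [random.random() < 0.5 for _ in range(1000)]
    print(machine_passes(guesses))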

It is worth remembering that Turing wrote this paper only two years after the world's first operational electronic stored-program computer ran its first program in Manchester, on June 21, 1948. Indeed, it was this machine that brought Turing to Manchester, and it was during his time here that he wrote this paper. Turing thought that all that a machine would require to pass his test was more memory - the 1948 Manchester 'Baby' was quite powerful enough already. He estimated that a gigabyte (a thousand million bytes) of memory should suffice, and this should be achievable by the end of the twentieth century.

By the beginning of the 21st century computers did, indeed, typically have a gigabyte of memory, and they were a million times faster than the 'Baby', but still they could not pass his test. Even today, with still far more computing power and memory, no machine has convincingly passed the test. This would have surprised Turing had he lived to see it.

Although research into artificial intelligence (AI) has delivered in many areas of life - think of Google, talking to your smartphone, driverless cars, and so on - it has failed to deliver as expected, particularly as imagined by many science fiction writers, in the area known as Artificial General Intelligence. This is the idea that a suitably programmed machine might display aspects of intelligence that we normally associate only with humans. My take on the failure to deliver this form of AI is that we have never actually worked out what natural intelligence is, so we don't know what it is that we are trying to imitate in our machines.

As a result, in my research I have gone back to the source of human intelligence - the human brain - and tried to see how we might use computers to better understand this mysterious organ upon which we all so critically depend.

This line of thinking has led to the development of the SpiNNaker machine - shorthand for Spiking Neural Network architecture. This is a machine designed specifically to support computer models of systems that work in some ways that are similar to the brain. It can be used to model areas of the brain and to test new hypotheses about how the brain might work.
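To give a flavour of what 'spiking' means here, the sketch below - my own minimal illustration in Python, not SpiNNaker's code or API - simulates a leaky integrate-and-fire neuron, the kind of simplified unit such machines model in their millions: a membrane potential leaks toward rest, integrates incoming current, and emits a spike when it crosses a threshold.

    def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                     v_reset=0.0, v_threshold=1.0):
        """Leaky integrate-and-fire neuron: return spike times (in steps)."""
        v = v_rest
        spikes = []
        for t, i_in in enumerate(input_current):
            # Leak toward the resting potential, then integrate the input.
            v += (dt / tau) * (v_rest - v) + i_in
            if v >= v_threshold:    # threshold crossed: fire a spike
                spikes.append(t)
                v = v_reset         # and reset the membrane potential
        return spikes

    # A steady input drive produces a regular spike train.
    print(simulate_lif([0.08] * 100))

Communication between billions of such units, each firing only occasionally, is what SpiNNaker's architecture is built to handle.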

Are the warnings we have heard, from no lesser figures than Bill Gates and Stephen Hawking, about artificial intelligence being a threat to the future of humanity, to be taken seriously? Will our children or grandchildren live in a world where walking, talking humanoid robots are difficult to tell apart from biological humans? There are those in the field, such as the American writer and computer scientist Ray Kurzweil, who believe that in just a few decades machine intelligence will be able to improve itself, independently of human help, generating the runaway exponential improvements described as "the singularity" - a term borrowed from mathematics, where it is used to describe a function that veers off to infinity.
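To make the mathematical borrowing concrete (the gloss is mine, not Kurzweil's): if a quantity grows in proportion to its own square - each improvement accelerating the next - so that dx/dt = x², then the solution x(t) = 1/(t₀ - t) does not merely grow without bound; it becomes infinite at the finite time t₀. That finite-time blow-up is a singularity in the mathematician's sense.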

I consider myself to be a singularity sceptic. We do not yet understand the nature of human intelligence, but I suspect that it is not a simple physical parameter that can be amplified. Human intelligence has many dimensions: it is not simply the ability to understand maths, or science, or the arts. Think, for example, of the intelligence required to kick a leather sphere into the back of a net past other humans who are doing their best to stop you. This is clearly a form of intelligence. Indeed, it would appear to be the form of intelligence most valued by current human society!

Also, if we do build computers capable of mimicking human intelligence they will, at least initially, be the size of aircraft hangars and consume a million times more power than the rather neat version that you carry around in your head. So they would be a very inefficient substitute for the real thing.

Why then, you might reasonably ask, bother trying to build computer models of the brain at all? There are three answers to this question. Firstly, this is a very effective way to advance the science, and the quest to understand our own brains and minds remains one of the great frontiers of science. As the late, great scientist Richard Feynman once said: "What I cannot create, I do not understand." Secondly, a computer model of the brain would be very useful for understanding diseases of the brain, which is vital for developing new treatments.

Brain diseases cost the developed economies more than heart disease, cancer and diabetes put together, not to mention their impact on the quality of life of those affected and their families. Yet research into new drugs for brain diseases has all but stopped, because modern drug development is based on understanding disease processes, and that understanding is missing for the brain. Thirdly, understanding the brain is likely to lead to insights that can be used to build better and more efficient computer systems. These three threads - future neuroscience, future medicine and future computing - underpin the one billion euro European flagship Human Brain Project, in which SpiNNaker is playing its role.

These are exciting times for brain research, with major projects not only in Europe but also in the USA, China, Australia and elsewhere. There is a widespread belief that we now have the tools, not only computers for modelling, but also brain imaging machines, multi-electrode probes, and many more, that make this the right time to try to push forward our understanding of this most complex of organs.

Steve Furber CBE FRS FREng is ICL Professor of Computer Engineering in the School of Computer Science at The University of Manchester. Professor Furber pioneered the world-leading ARM processor, which forms part of his SpiNNaker project. The ultimate aim of his research is to allow neurosurgeons and psychologists to unravel the mysteries of the human brain.
