Sentient Robots Might Be Mathematically Impossible, In Boost To Humanity's Ego

Phew: Scientists Say It Is Impossible To Build A Conscious Robot

There is good news today in humanity's struggle to avoid a nightmarish robot dystopia.

Scientists have determined that it is probably mathematically impossible for a robot to think like a human.

The prospect of a 'sentient' robot which is genuinely aware of its own consciousness has long been a dream (and nightmare) of scientists and science fiction writers.

And as we move into a future where killer robots are under discussion by the UN, and robot cars could theoretically have to be programmed to kill us in order to save more lives, it's an ever more important question.

Well now it appears there may be natural limits to how far that discussion can go.

Phil Maguire at the National University of Ireland has found a flaw in a famed mathematical model of consciousness, and some believe his work makes it virtually impossible for a machine to think like a human.

The debate focuses on the work of Giulio Tononi at the University of Wisconsin-Madison and his team, who have spent 10 years developing their framework of thought. Their argument is naturally complex, but hinges on the idea that the ability to integrate information is crucial. Humans do not see a cat and the colour black, they say - humans see a black cat.
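
To picture what 'integration' means here, consider a rough Python sketch (a toy example of our own, not Tononi's formal measure): if a brain stored the colour and the object in separate, independent registers, the binding between them - which animal was which colour - would not exist anywhere.

    from itertools import product

    # Toy illustration only: keeping features in separate stores loses the
    # binding between them, while an integrated record keeps it.
    observations = [("black", "cat"), ("white", "dog")]

    # Unintegrated storage: which colours and which objects were seen, kept apart.
    colours = {colour for colour, _ in observations}
    objects = {obj for _, obj in observations}

    # Integrated storage: the pairing itself is part of the memory.
    integrated = set(observations)

    # The separate stores are consistent with every possible pairing,
    # so on their own they cannot say whether the cat was black or white.
    print(sorted(product(colours, objects)))  # four candidate pairings
    print(sorted(integrated))                 # the two that were actually seen

The integrated record is the one that matches experience: we remember a black cat, not a colour and an animal that might or might not belong together.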

For Maguire, however, this model has a problem, because it implies that human brains would be losing information all the time. He points to a device called the XOR logic gate, which outputs a "1" when its two input bits differ and a "0" when they are the same. Such a gate necessarily cuts two bits of information down to one. If human brains worked like this, they "would have to be continuously haemorrhaging information," he tells New Scientist. Memories would erode every time you called them up - and that's something we don't really observe.
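
Here is a short Python sketch (ours, not Maguire's) of why such a gate is lossy:

    # Toy illustration only: an XOR gate maps two input bits to a single output
    # bit, so distinct inputs collapse onto the same output and the original
    # pair can never be recovered from the result alone.

    def xor_gate(a: int, b: int) -> int:
        """Return 1 when the two bits differ, 0 when they are the same."""
        return a ^ b

    # Every possible input pair and its output.
    truth_table = {(a, b): xor_gate(a, b) for a in (0, 1) for b in (0, 1)}
    print(truth_table)  # {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

    # Four distinct input states, only two distinct outputs: one bit of
    # information is lost every time the gate fires.
    print(len(truth_table), "inputs ->", len(set(truth_table.values())), "outputs")

Because the mapping from inputs to output is many-to-one, nothing downstream can reconstruct the two original bits from the single bit that comes out.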

Maguire and his colleagues have instead developed a new model, which holds that memories are difficult to 'edit' once they are made - and which, they have shown, cannot be replicated by a machine.

Alas, it's not quite settled science. Maguire accepts the criticism that if a new process could be found to break down and reintegrate memories, it might be possible to edit them and still maintain consciousness. Neuroscientists also argue that mathematical models of consciousness might not be the whole story - and that new computers, based on new styles of information processing, could perhaps replicate the organic brain more completely.

For the rest of us there is also the uncomfortable truth that a machine might not have to actually think like us in order to appear as though it does - which, according to Alan Turing, might as well be the same thing anyway.

Still, it's an interesting new way of thinking about the idea. Head over to New Scientist for the full story, and if you're interested, the journal paper can be read here.