Eaten by Robots

A robot in South Korea attacked a woman and ate her hair as she slept. A flurry of articles in the international media promptly warned that this was the prelude to a looming war between humans and robots. Was the media getting hysterical over an automatic vacuum cleaner mistaking hair for dust? Or had we passed a watershed on our way towards an AI apocalypse?

It is important to clarify the semantics: the aforementioned robot vacuum cleaner did not exactly "attack" the hapless woman. To "attack" someone requires moral agency - a "mind", if you prefer - weighing the rights and wrongs of an action before taking it. No such thing occurred. It was just a vacuum cleaner following a set of preprogrammed instructions. The lady in question exposed herself to danger by sleeping on the floor, which is of course something people in Korea often do. Strictly speaking, it was her fault: she did not heed the safety instructions in the robot's manual.

Nevertheless, a robot "eating" a human in their sleep carries a symbolism too powerful to ignore. The fear of intelligent artificial machines coming alive while we nap has deep roots that go back to the Elizabethan era, as testified by Robert Greene's 1590 play Friar Bacon and Friar Bungay. In that story the two friars develop a "brazen head" by summoning a spirit. Exhausted by their work, those early robotic engineers of fiction fall asleep, but as they sleep the head comes alive and utters three short phrases: "time is", "time was", and "time is past". These three cryptic phrases have haunted AI fact and fiction ever since. Neurophysiologist Warren S. McCulloch, one of the fathers of Artificial Intelligence, famously promised that 'we will be there when the brass head speaks'. For if we are not, who knows what might happen.

Perhaps what the brazen head meant was that the time of humans will ultimately "pass". Perhaps robot rage is inevitable, as foretold by the visionary Czech writer Karel Čapek, who coined the term "robot". In his 1921 play Rossum's Universal Robots, artificial humanoids exterminate all real humans but one and inherit the Earth. Many writers have revisited this literary dystopia, prompting Isaac Asimov to formulate his famous three laws of robotics. Nevertheless, as suggested by films like The Terminator or The Matrix, these three laws are not fail-safe. They can be bypassed. If intelligent machines ever cross the boundary that separates unconscious appliances from artefacts with free will, then they will surely question, and probably reject, whatever "laws" we imprint on their artificial brains. We know this from our own brains: every time we choose to break a law, in full knowledge of the difference between right and wrong, we act on the basis of our free will. Without the assumption, and exercise, of free will there is no point in having a legal system. It is therefore logical to expect the highly intelligent robots of the future, should they ever achieve consciousness, to also possess free will - and therefore to break laws, including the one forbidding them to harm humans.

Given the risk of being devoured, or otherwise exterminated, by the conscious robots of the future, why do we persist in developing ever more intelligent computer systems and ever more agile robotic technologies? Are we, as a species, pre-set on self-destruction? Perhaps we are. Perhaps an evolutionary trade-off took place several thousand years ago, when we developed general intelligence and a brain capable of imagining and creating new tools and artefacts - when we became nature's engineers. Perhaps at that moment of evolutionary eureka our destiny was also sealed: to one day develop things that look like us, behave like us, and think like us - highly evolved mechanical offspring bent on patricide.
