A Cambridge University research group is to investigate the possibility of killer robots overthrowing human civilisation.
The Centre for the Study of Existential Risk (CSER) said that concerns about a technological armageddon should not be dismissed.
While the concept of man versus machine in an all-out, last-ditch struggle for survival is an image more familiar from The Terminator or The Matrix, many now consider it worthy of genuine study.
Alongside robot armageddon, CSER will study a range of possible, though fringe, causes of humanity's ultimate downfall.
On its website, CSER said: "Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change.
"The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake."
CSER was co-founded by the eminent scientist Sir Martin Rees, former Astronomer Royal and former master of Trinity College, who wrote in the Guardian that "new hazards are emerging that could be so catastrophic that even a tiny probability is disquieting".
"[We] begin with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind.
We are convinced that there is nowhere on the planet better suited to house such a centre. Our goal is to steer a small fraction of Cambridge's great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future. (In the process, we hope to make it a little more certain that we humans will be around to celebrate the University's own millennium, now less than two centuries hence.)
The danger of a deadly robotic uprising is not an isolated concern: recently the New York-based Human Rights Watch called for so-called 'killer robots' to be outlawed.
Such machines do not currently exist, and robots used on the battlefield require human intervention before using lethal force, but HRW warned that militaries are closer than ever to deploying automatic killing machines, and called for them to be pre-emptively banned.