A Singular Sort of Cult

Those who believe in the Singularity movement are hoping for a certain sort of humanity to develop - a species that is stronger, smarter, and more efficient than anything that came before it, thus driving out what they deem to be old, weak and inefficient. Which sounds a lot like eugenics to me.

You've probably never heard of singularitarianism, but NASA, Google, Nokia, Autodesk, ePlanet Capital, the X Prize Foundation, the Kauffman Foundation and Genentech certainly have, because they're the founding sponsors of a university dedicated to its advancement. In a nutshell, singularitarianism is an ideology and social movement whose adherents believe that a superintelligence will be created in 2045, or thereabouts, and that precautionary measures ought to be taken to ensure the technological singularity benefits humans.

The emergence of this superintelligence - which is too complicated for our as-yet-unaugmented human minds to fully comprehend - will cause an intelligence explosion, whereby each superintelligence is succeeded by a greater one, and so on, until artificial intelligence surpasses every human cognitive capability. At this stage, the superintelligent AI will take over the human race, which by then will have partly evolved into a post-biological cyborg state, and achieve benevolent world domination, eventually leading to benevolent domination of the universe.

This may all sound like Moore's Law on crack, or a poorly written sci-fi screenplay, but it is, bizarrely, a very real branch of what's called futurist transhumanism, taken very seriously by a large number of influential people. Although ideas of a technological singularity had been circulating since the late 1940s, the ideology truly came into being when Eliezer Yudkowsky, an artificial intelligence researcher, wrote The Singularitarian Principles in 2000:

"The Singularity holds out the possibility of winning the Grand Prize, the true Utopia, the best-of-all-possible-worlds - not just freedom from pain and stress or a sterile round of endless physical pleasures, but the prospect of endless growth for every human being - growth in mind, in intelligence, in strength of personality; life without bound, without end; experiencing everything we've dreamed of experiencing, becoming everything we've ever dreamed of being; not for a billion years, or ten-to-the-billionth years, but forever..."

Backing these peculiar ideas is a high-powered cabal of well-known individuals and companies. Peter Thiel, co-founder of PayPal, and Jaan Tallinn, co-creator of Skype, are two of the best-known supporters of the Singularity Institute for Artificial Intelligence (SIAI): Thiel provided $100,000 in matching funds to back the SIAI's Singularity Challenge donation drive in 2006, and half of the $400,000 in matched funds for the same drive in 2007. Indeed, the Thiel Foundation has donated well over $1 million since the SIAI's creation. Not bad when you consider that the SIAI has the privilege of being tax exempt under Section 501(c)(3) of the United States Internal Revenue Code.

The SIAI website asks visitors to donate money to help them "produce the research required" to create a friendly artificial intelligence, which will ultimately "benefit everyone in society". But what exactly are the millions for? It's not immediately clear. There are suggestions of using the money to create a scholarly AI risk wiki, costing $462,000, or to pay top academics grants of tens of thousands of dollars to write research papers for the cause. The proposed allocation of funds seems rather vague for such large sums of cash.

Peculiarities of finance aside (to be fair, the institute is for the most part admirably transparent), I can't help but be puzzled that so many serious and respectable companies and individuals are so willingly involved in something that, to the untrained eye, seems to be drifting into the murky waters of a new cult-like religious movement. I'm not alone - John Horgan, author of Rational Mysticism: Dispatches from the Border Between Science and Spirituality and a contributor to Scientific American, dismisses singularitarianism as a sort of rapture for the nerds:

"The Singularity is a religious rather than a scientific vision... Such yearning for transcendence, whether spiritual or technological, is all too understandable. Both as individuals and as a species, we face deadly serious problems, including terrorism, nuclear proliferation, overpopulation, poverty, famine, environmental degradation, climate change, resource depletion, and AIDS. Engineers and scientists should be helping us face the world's problems and find solutions to them, rather than indulging in escapist, pseudoscientific fantasies like the Singularity."

It turns out that Horgan is one of very few outspoken critics of singularitarianism, despite its somewhat sinister undercurrents - just take a look at Singularity University, founded in 2008 and sponsored by big names like Google and NASA. With tuition fees of $25,000, the university hopes to "assemble, educate and inspire a cadre of leaders who strive to understand and facilitate the development of exponentially advancing technologies and apply, focus and guide these tools to address humanity's grand challenges", which, admittedly, sounds quite exciting - but it glosses over the rather absurd desire for the benevolent world domination of a superintelligent AI.

Even the University of Cambridge has an interest in the movement, with the recent creation of the Project for Existential Risk, launched by Lord Rees and co-founded by Jaan Tallinn, along with numerous other singularitarians and transhumanists. Lord Rees defended the new project, saying: "We fret unduly about carcinogens in food, train crashes and low-level radiation." Really? The very real concern of cancer is just undue fretting, but the potential threat of humankind being overthrown by evil superintelligent computers isn't? Aren't we getting a bit carried away?

Those who believe in the Singularity movement are hoping for a certain sort of humanity to develop - a species that is stronger, smarter, and more efficient than anything that came before it, thus driving out what they deem to be old, weak and inefficient. Which sounds a lot like eugenics to me. But if Singularity is considered unavoidable - an intelligence event horizon - perhaps comparing it to eugenics is unfair. Perhaps Singularity is more about making the best of a bad lot.

The glee and anticipation with which many of its proponents talk about Singularity don't make me feel that any of them see this as a tragedy, though. Surely it is exactly that. Singularity would mark the loss of what it means to be human - to make mistakes, to be flawed, to feel pain, to suffer and, of course, to die.

I don't think benevolent world domination will occur in 2045, or any time soon, but I do think the idea of Singularity raises some interesting questions about what it means to be alive, and I have no doubt the research currently being produced will at the very least have positive spillover effects for the wider technology and academic communities. I remain, however, slightly dismayed that so many people - regardless of the probability of such an event occurring - have failed to adequately acknowledge the moral ambiguities of their ideal and, even more worryingly, have utterly failed to understand what it means to be human.
