Apocalypse Soon!

Religious nutcases are always issuing dire warnings about the end of the world, but when a group of highly respected scientists does so, we should probably sit up and listen. On 10 January the Bulletin of the Atomic Scientists moved the Doomsday Clock one minute closer to midnight.

The Doomsday Clock is an imaginary device which scientists use to convey their estimates of the risk of global catastrophe. Originally, the analogy represented the threat of global nuclear war, but since 2007 it has also reflected the dangers of climate change and new developments in technology. The higher the probability of catastrophe is deemed to be, the closer the scientists move the clock to midnight. The clock hands were reset nineteen times between 1947 and 2010 in response to world events. The latest adjustment means the clock now stands at 11:55.

Clearly, the scientists are more pessimistic about the future of humanity than they were back in 2010, when they moved the hand backwards, from 11:55 to 11:54. The gloom is apparently due to

the failure of the US and China to ratify the Comprehensive Test Ban Treaty, the failure of governments to adopt climate change agreements to reduce carbon emissions, and the failure to vastly increase public and private investments in alternative energy sources such as solar and wind.

I wonder, however, about the value of the Doomsday Clock as a tool for communicating risk. Although the clock uses numbers (minutes to midnight), which give an impression of precision and rigour, it is unclear what these numbers mean in terms of actual probabilities. How much more likely is a global catastrophe now that the clock is one minute closer to midnight? Is it ten per cent more likely? Or twenty per cent? Numbers alone are not enough; it's probabilities that count.
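
To see the problem concretely, consider a deliberately artificial sketch in Python. The Bulletin publishes no formula linking clock time to probability, so the two mappings below are pure inventions of mine; the point is simply that the same one-minute move can imply anything from a trivial to a dramatic increase in risk, depending on which made-up mapping you choose.

    # Illustrative only: the Bulletin defines no mapping from "minutes to
    # midnight" to a probability of catastrophe. These two invented functions
    # each supply one, yet they disagree wildly about what a one-minute move means.

    def linear(minutes_to_midnight, horizon=60):
        # Hypothetical: risk falls linearly, from certainty at midnight to zero one hour out.
        return 1 - minutes_to_midnight / horizon

    def exponential(minutes_to_midnight, half_life=3):
        # Hypothetical: risk halves for every three minutes of distance from midnight.
        return 0.5 ** (minutes_to_midnight / half_life)

    for label, f in [("linear", linear), ("exponential", exponential)]:
        before, after = f(6), f(5)  # the January move: six minutes to midnight, then five
        print(f"{label:12s} before={before:.2f}  after={after:.2f}  "
              f"relative increase={(after - before) / before:.0%}")

Under the first invented mapping the move raises the risk by roughly two per cent; under the second, by roughly twenty-six per cent. Without some such mapping, which the Bulletin does not provide, the clock's numbers carry no probabilistic content at all.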

We live in strange times, caught between two opposing views of the future. On the one hand, the believers in technology and progress promise a world of ever-increasing prosperity, a science-fiction scenario in which huge advances in technology have made material abundance and long healthy lives possible for people all over the world. On the other hand, the doomsayers warn us that climate change and the end of cheap oil will put an end to the stupendous economic growth we have seen in the past hundred years, and usher in a new dark age of poverty, disease and war. There are some middle positions, it is true, but they seem less convincing than the two extremes.

Recently, the doomsayers have been gaining the upper hand. On the surface, they may appear more sober, more realistic, and more courageous than the optimists. They want to stare the apocalypse in the face, which seems better than denying it could possibly happen. But thinking too intensely about the end of civilisation can be just as dangerous as denying the possibility altogether. It's important to consider such worst-case scenarios when planning for the future, but the point of considering them is to help one plan how to avoid them. If you dwell on them too much, you can begin to forget that this is just one possible future out of many, and start to think of it as an inevitable occurrence. And from there, it is not such a big step to start idealising this dark future. It's an extreme version of the 'sweet lemons' phenomenon.

With sour grapes, if you don't think you're going to get something you wanted, you persuade yourself you didn't want it after all. With sweet lemons, if you think you're going to get something you don't want, you persuade yourself it is actually a great thing to have.

It might sound odd that people can welcome an event like social collapse, but an increasing number of 'doomers' seem to embrace this scenario in much the same way as some religious people look forward to the apocalypse. Indeed, for a while I was tempted by the thought myself, and it was only by seeing others get sucked into this mode of thought so completely that I was able to see how bizarre it was, as if the doomers presented a mirror that made me see my own folly more clearly.

In July 2006 I set up a social experiment in an attempt to figure out how life in Britain might be affected by climate change and the end of cheap oil during the first few decades of the twenty-first century. I invited volunteers of all ages and from all walks of life to form a small community in the Scottish Highlands. We pretended we were living in the near future in a post-apocalyptic world in which the combined effect of climate change and peak oil had caused global civilisation to collapse.

A few months into the experiment I left for a few days to attend the Free Thinking Festival organised by the BBC in Liverpool, where I spoke in a debate about the future of civilisation. One of the other panellists was my old friend Nick Bostrom, whom I had first met at the London School of Economics in the late 1990s, where we were both PhD students in the philosophy department. By this time, Nick was Director of the Future of Humanity Institute at Oxford University, and was writing extensively about global catastrophic risks.

During a coffee break, Nick asked me a question that, with hindsight, sowed the very first seeds of my subsequent research on risk intelligence: "How likely do you think it is that something like the imaginary scenario you are acting out in Scotland might really come to pass in the next ten years?" I paused for thought. "Give me your answer as a percentage," Nick added, crucially. I thought a bit longer, and finally declared that I thought the chance of such a thing happening within the next ten years was about 50 per cent.

Nick looked shocked. Looking back, his surprise seems eminently reasonable to me, for my answer betrayed the extent to which what had started out as an exercise in collaborative fiction had become, in my mind, an insurance policy against a global disaster that I then thought had a good chance of taking place.

When I abandoned the experiment the following year, and returned to England to reflect on the whole experience, I often recalled Nick's incisive question, and my overconfident answer. The nice thing about the question was Nick's insistence that I put a number on my estimate. He didn't let me get away with some vague expression such as "quite likely". As a result, halfway into the ten-year forecasting horizon I gave myself when I answered his question, with no obvious signs of imminent catastrophe on the horizon, I can see all the more clearly how wrong I was. The British philosopher Carveth Read wrote that "it is better to be vaguely right than exactly wrong". But here, the precision that Nick had demanded of me forced me to own up to my error in a way that vagueness never would.
