20/04/2018 12:54 BST | Updated 24/04/2018 10:53 BST

The Astronomical Consequences Of An Existential Catastrophe

The future could contain astronomical amounts of value, so understanding and preventing existential risks should be a top priority for humanity.

For most of human history, thinking about the future of humanity has been largely bound up with religious beliefs about the end of the world. Myths about messianic figures, bodily resurrections, and apocalyptic battles shaped the ideas that many people had about what the culmination of history and transformation of the earth would look like.

This began to change as fields like evolutionary biology and cosmology gave humanity new ways of understanding our place in the cosmos. Although Charles Darwin never discussed human extinction—at least to my knowledge—our eventual demise follows from the naturalistic worldview in which he developed his theory of evolution. And the great physicist Lord Kelvin observed in 1862 that, if applied to the cosmos itself, the second law of thermodynamics—crudely speaking, that disorder increases over time in isolated systems—entails “a state of universal rest and death,” although he ultimately rejected this outcome, in part because of his religious beliefs.

More recently, the detonation of the first atomic bomb in 1945 led a number of scientists and philosophers to worry about self-annihilation through nuclear war. Yet today, humanity faces not just the threat of nuclear conflict, but a growing swarm of planetary-scale anthropogenic risks associated with climate change, global biodiversity loss, the sixth mass extinction event, synthetic biology, molecular nanotechnology, physics experiments, geoengineering, and machine superintelligence—to name a few. The result is that, as the late Stephen Hawking put it, humanity finds itself in the most dangerous moment of its history.

What exactly is at stake here? On the one hand, an event that wipes out humanity tomorrow would cause roughly 7.6 billion deaths. The immensity of this number is difficult to grasp because of a cognitive bias called “scope neglect.” Basically, as Joseph Stalin—not someone I often quote!—once said, “a single death is a tragedy; a million deaths is a statistic.” Now consider more than 7 billion living, breathing humans. Whereas the difference between one person dying and two people dying seems significant, the difference between 6 billion and 7 billion people dying often doesn’t. That’s a problem for our ability to evaluate just how tragic human extinction would be.

There is so much more at stake, though. Think about what could come to be if our species survives and thrives on this planetary spaceship. Carl Sagan once wrote that we could expect another 500 trillion people to come into existence over the next 10 million years. (By comparison, between 60 and 100 billion humans have lived since our species emerged in the African savanna ~200,000 years ago.) But our planet will remain habitable for a lot longer than that—about another billion years. Thus, if Earth’s population stays at about 1 billion people with normal lifespans, there could come to exist a million billion, or 10^16, people who someday get to exclaim, “I lived on planet Earth!” What’s more, if we colonize our supercluster, there could be 10^23 future lives over the course of a single century, and upwards of 10^38 lives for the same period of time if we convert planets into supercomputers and migrate to something like The Matrix (but good).
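For readers who want to check the arithmetic behind the 10^16 figure, here is a quick sketch. The habitability horizon and steady population come from the paragraph above; the ~100-year lifespan is my assumption for what “normal lifespans” means:

```python
# Rough sanity check of the "million billion" future-lives figure.
# Assumptions: Earth remains habitable ~1 billion more years (from the text),
# a steady population of ~1 billion (from the text), and an assumed
# "normal" lifespan of ~100 years.
habitable_years = 1e9
steady_population = 1e9
lifespan_years = 100

# Each lifespan-long cohort contains ~1 billion distinct people,
# and there is room for ~10 million such cohorts.
future_earth_lives = steady_population * (habitable_years / lifespan_years)
print(f"{future_earth_lives:.0e}")  # → 1e+16
```

Shorter lifespans only push the number higher, so 10^16 is, if anything, a conservative reading of these assumptions.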

These are astronomical numbers. But there are quality-of-life issues to consider as well. For example, future people probably won’t have the same physical flaws and psychological quirks that we do. Any civilization capable of expanding into outer space will likely also have the technological ability to cure all diseases, reverse aging, and produce what Eric Drexler refers to as “radical abundance.” Imagine remaining young for centuries, vacationing with your friends on different islands in the galaxy, or even uploading your consciousness to a computer and vacationing inside a virtual reality that your present self (reading this) would unhesitatingly describe as “paradise.” While these scenarios remain science fiction today, there are no good arguments for the impossibility of achieving them given enough time and technological innovation.

Another way to think about how much there is to lose goes like this: our species has existed on Earth for about 2,000 centuries and could exist here for another 10 million centuries (that is, 1 billion years). Thus, if we make Earth our home for as long as possible, and if we stipulate that civilization began about 6,000 years ago, then we are currently a mere 0.0006 percent into our story. That’s it. If we map this onto the annual calendar—à la the Cosmic Calendar—then we are slightly more than 3 minutes into the first day of the new year, with 525,600 minutes remaining. Or, following the Long Now Foundation, which refers to the current year as “02018,” we could emphasize the long habitability of Earth by writing it as “0000002018.”
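The 0.0006 percent and the 3-minute figures both fall out of the same ratio, using only the numbers quoted above:

```python
# Civilization's age as a fraction of Earth's potential 1-billion-year story,
# using the figures from the paragraph above.
civilization_years = 6_000     # stipulated start of civilization
total_story_years = 1e9        # ~10 million centuries of potential habitability

fraction = civilization_years / total_story_years
print(f"{fraction:.4%}")       # → 0.0006%

# Compress the whole story onto one calendar year (525,600 minutes):
minutes_elapsed = fraction * 525_600
print(f"{minutes_elapsed:.2f} minutes")  # → 3.15 minutes into January 1st
```

In other words, on the compressed calendar it is still the small hours of New Year’s Day.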

The futurist Wendell Bell once declared that “without the possibility of a future, there is nothing left but despair. Thus, if we give up on the future, we give up on ourselves.” This is why I believe that any public discussion of existential risks—i.e., events that would either tip our species into the eternal grave of extinction or irreversibly catapult us back to the Stone Age—needs to include some account of just how good the future could be if we play our cards right. When Steven Pinker argues in Enlightenment Now that people only have so much capacity for worrying about the sundry threats to our collective existence, he has a point. Pondering the possibility of global catastrophes can be overwhelming and exhausting.

But if one is reminded why reducing existential risk is important, if one understands how huge the payoff of winning what Max Tegmark calls the “wisdom race” is, then this can motivate people to do whatever they can—or whatever is necessary—to ensure a good future for us and our children. This might involve educating others, donating to organisations that study existential risks, and only voting for politicians who “get” the gravity of our existential predicament at the mid-morning of the twenty-first century.

It would be a profound shame if humanity were to make it this far—more than seven decades into the Atomic Age, having survived the Toba catastrophe that almost wiped us out—only to succumb to disaster this century. The good news is that nearly every existential risk facing us today is solvable. We just need to understand what’s at stake and take the appropriate actions. Our story deserves a good ending.