Review: Toby Ord’s “The Precipice”
by Miles Raymer
My best friend regularly refers to Toby Ord’s The Precipice as “the most persuasive and impactful book I’ve ever read.” This attitude evoked a lot of eye-rolling and protestation on my part, but I eventually gave in when he was kind enough to buy me a copy. Now that I’ve read it, I’m happy to admit that this is one of the finest and most important philosophical works of the 21st century, and perhaps of any century in human history.
If you’re like me, you’re thinking that the above statement is probably false and definitely obnoxious. Yes, this claim seems totally overblown on the face of it, but Ord’s astronomical ambitions are nicely tempered by epistemic humility, straightforward prose, careful argumentation, and passionate humanism. The Precipice is truly a book for everyone.
Okay, but what is this “super important and amazing book” all about? The subtitle says it all: Existential Risk and the Future of Humanity. I’ll address these two topics in turn, with an interlude to lay out some critiques. Ord defines existential risks as “risks that threaten the destruction of humanity’s longterm potential” (6). I’ll shorten this term to “ExRisk” for the remainder of the review because all the cool kids are doing it. ExRisks include but are not limited to natural ExRisks (asteroids/comets, supervolcanoes, stellar explosions), anthropogenic ExRisks (nuclear weapons, climate change, environmental damage), and future ExRisks (pandemics, unaligned artificial intelligence, “lock-in” dystopias). “Existential risks present new kinds of challenges,” Ord tells us:
They require us to coordinate globally and intergenerationally, in ways that go beyond what we have achieved so far. And they require foresight rather than trial and error. Since they allow no second chances, we need to build institutions to ensure that across our entire future we never once fall victim to such a catastrophe. (6)
Yikes. Global and intergenerational coordination? Institutional foresight? No second chances? I guess we’re fucked.
But seriously, Ord thinks we can do this; his enthusiasm and no-nonsense optimism chipped away at my knee-jerk skepticism until not much of it was left. He begins by making the case that three historical Revolutions––Agricultural, Scientific, and Industrial––have brought us to a unique chapter in the human story:
Consider human history as a grand journey through the wilderness. There are wrong turns and times of hardship, but also times of sudden progress and heady views. In the middle of the twentieth century we came through a high mountain pass and found that the only route onward was a narrow path along the cliff-side: a crumbling ledge on the brink of a precipice. Looking down brings a deep sense of vertigo. If we fall, everything is lost. We do not know just how likely we are to fall, but it is the greatest risk to which we have ever been exposed.
This comparatively brief period is a unique challenge in the history of our species. Our response to it will define our story. Historians of the future will name this time, and schoolchildren will study it. But I think we need a name now. I call it the Precipice. (31)
I usually balk at historical claims that include a “this time it’s different” flavor, but Ord’s twist on the classic Great Filter framing holds up. It’s not that ExRisk is a new thing; it’s been with us forever. But what has changed is that over the last century humanity has made massive gains in our self-destructive capacities that have rendered us more vulnerable to ExRisk than ever before. Distressingly, these rapid technological advances have not been accompanied by commensurate progress in collective wisdom, or what Ord calls “civilizational virtues” (53).
One of the main goals of The Precipice is to begin rectifying this disparity by contextualizing humanity’s short-term goals within the bigger picture of “longtermism,” a philosophical framework in which “people matter equally regardless of their temporal location,” and that “takes seriously the fact that our own generation is but one page in a much longer story” (45-6). Without longtermism, humanity stays focused on the now and systematically neglects ExRisk, which compromises not only our own future but also the lives of all possible generations to come. Ord cheekily chides: “We can state with confidence that humanity spends more on ice cream every year than on ensuring that the technologies we develop do not destroy us” (58).
But this is all very understandable, the patient professor assures us. The most dangerous ExRisks are brand new in historical terms, and longtermist thinking isn’t part of our evolved psychological makeup. In fact, ubiquitous cognitive fallacies such as availability bias and scope insensitivity make it much harder for most people to perform the moral calculations that effective ExRisk management requires. So we shouldn’t be too hard on ourselves, but we also need to get off our asses and get to work in order to preserve and protect humanity’s future, which Ord characterizes as a “global” and “intergenerational public good” (59).
To do this, we need a firm sense of how many ExRisks there are and how likely each of them is to bring about an existential catastrophe. Ord offers a probabilistic breakdown of our total ExRisk for the next century, which the book presents as a table of estimates for each individual risk. He explains:
Overall, I think the chance of an existential catastrophe striking humanity in the next hundred years is about one in six. This is not a small statistical probability that we must diligently bear in mind, like the chance of dying in a car crash, but something that could readily occur, like the roll of a die, or Russian roulette…
What about the longer term? If forced to guess, I’d say there is something like a one in two chance that humanity avoids every existential catastrophe and eventually fulfills its potential: achieving something close to the best future open to us. It follows that I think about a third of the existential risk over our entire future lies in this century. This is because I am optimistic about the chances for a civilization that has its act together and the chances that we will become such a civilization––perhaps this century.
Indeed, my estimates above incorporate the possibility that we get our act together and start taking these risks very seriously. Future risks are often estimated with an assumption of “business as usual”: that our levels of concern and resources devoted to addressing the risks stay where they are today. If I had assumed business as usual, my risk estimates would have been substantially higher. But I think they would have been misleading, overstating the chance that we actually suffer an existential catastrophe. So instead, I’ve made allowances for the fact that we will likely respond to the escalating risks, with substantial efforts to reduce them.
The numbers therefore represent my actual best guesses of the chance the threats materialize, taking our responses into account. If we outperform my expectations, we could bring the remaining risk down below these estimates. Perhaps one could say that we were heading toward Russian roulette with two bullets in the gun, but that I think we will remove one of these before it’s time to pull the trigger. And there might just be time to remove the last one too, if we really try. So perhaps the headline number should not be the amount of risk I expect to remain, about one in six, but two in six––the difference in existential risk between a lackluster effort by humanity and a heroic one. (169-70)
I include this lengthy passage not just because it’s the central finding of the book, but also because it demonstrates Ord’s admirable efforts to avoid overconfidence in his presentation of these numerical estimates. More on this later in the review.
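For readers who like to check the arithmetic, the figures in the passage above hang together neatly: if humanity has roughly a one in two chance of eventually suffering an existential catastrophe, and a one in six chance of suffering one this century, then this century accounts for about a third of the total. A quick sketch (the fractions are Ord’s; the framing as code is mine):

```python
# Figures from the quoted passage (pp. 169-70); the calculation framing is mine.
risk_this_century = 1 / 6               # chance of existential catastrophe by ~2100
chance_of_fulfilling_potential = 1 / 2  # Ord's guess that humanity eventually makes it
total_future_risk = 1 - chance_of_fulfilling_potential

# Share of all future existential risk that falls within this century
share_this_century = risk_this_century / total_future_risk
print(round(share_this_century, 3))  # → 0.333, i.e. "about a third"
```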
Assuming we accept Ord’s assertion that ExRisks pose a serious and imminent threat, what should we do? Ord has oodles of suggestions, including a whole appendix with bulleted lists of policy and research recommendations. For the long run, he proposes a tripartite “Grand Strategy for Humanity”: (1) Achieve existential security, (2) undertake a “Long Reflection,” and (3) set out to realize humanity’s potential. I’m personally super jazzed about the Long Reflection, a period when we explore, debate, and ultimately decide on the best possible future(s) for our species. But somber, avuncular Ord insists that achieving existential security must be our top priority as long as levels of ExRisk remain high, and encourages everyone to participate in the “public conversation about the longterm future of humanity: the breathtaking scale of what we can achieve, and the risks that threaten all of this, all of us” (216).
This isn’t the end of what The Precipice has to offer, but it’s a good place to interject some questions and concerns. Before doing this, I’d like to stipulate that I approached this book with a lot of skepticism, especially regarding longtermism. But Ord completely convinced me that his perspective is both a worthy and an essential component of contemporary ethics. In short, I think Ord is basically right about everything he’s arguing. But I also feel well-situated to point out some of this book’s possible limitations, including areas where I think Ord’s method and tone may not appeal to certain readers or may even backfire.
First off, I’m concerned that Ord’s persuasive style may constrain the reach of his ideas. His hyper-rational and “mathy” approach to ExRisk can be hard for “normies” like me to swallow. Take this passage, which precedes Ord’s unveiling of his final ExRisk calculations:
When presented in a scientific context, numerical estimates can strike people as having an unwarranted appearance of precision or objectivity. Don’t take these numbers to be completely objective. Even with a risk as well characterized as asteroid impacts, the scientific evidence only takes us part of the way: we have good evidence regarding the chance of impact, but not of the chance a given impact will destroy our future. And don’t take the estimates to be precise. Their purpose is to show the right order of magnitude, rather than a more precise probability. (166-7)
Readers familiar with or sympathetic to Ord’s methodology will see this for what it is: a responsible writer’s attempt to stave off misinterpretation and not convey an unjustified sense of certainty. But one part of me reads this and wonders, “If your numbers are so damned subjective and imprecise, why include them at all?” I suspect I’m not the only person who reacted that way. Unfortunately, this objection can be bolstered by Ord’s lack of transparency regarding how some of his estimates were reached. He includes helpful statistical summaries for all the natural ExRisks, but I couldn’t find anything similar for the anthropogenic or future ones––not even in the appendices. Why is ExRisk for unaligned AI determined to be 1 in 10 instead of 1 in 15, or 1 in 20? Is this because such calculations don’t exist, or perhaps because Ord decided not to share them? Either way, their absence leaves room to question just how willing we should be to take Ord’s word for it.
Ord also makes it easier than I would like for critics to accuse him of something like “human exceptionalism”––the notion that humanity is much more valuable and powerful than other forms of life or inanimate matter. Here are some examples of what I mean:
Humanity is currently in control of its own fate. We can choose our future. Of course, we each have differing visions of an ideal future, and many of us are more focused on our personal concerns than on achieving any such ideal. But if enough humans wanted to, we could select any of a dizzying variety of possible futures. The same is not true for chimpanzees. Or blackbirds. Or any other of Earth’s species…Our unique position in the world is a direct result of our unique mental abilities. Unmatched intelligence led to unmatched power and thus control of our destiny. (142-3)
What we do with our future is up to us. Our choices determine whether we live or die; fulfill our potential or squander our chance at greatness. We are not hostages to fortune. While each of our lives may be tossed about by external forces––a sudden illness, or outbreak of war––humanity’s future is almost entirely within humanity’s control. (187)
If we can venture out and animate the countless worlds above with life and love and thought…we could bring our cosmos to its full scale; make it worthy of our awe. And since it appears to be only us who can bring the universe to such full scale, we may have an immense instrumental value, which would leave us at the center of this picture of the cosmos. In this way, our potential, and the potential in the sheer scale of our universe, are interwoven. (235)
I get that optimism helps Ord convince readers that taking ExRisk seriously doesn’t have to be a completely puddleglum process, but it’s hard to believe that anyone alive right now can have this kind of confidence about humanity’s ability to control our own fate. Quotes from two of my favorite writers come to mind:
We realize how little the progress of man has been the product of intelligent guidance, how largely it has been a by-product of accidental upheavals, even though by an apologetic interest in behalf of some privileged institution we later transmute chance into providence. We have depended upon the clash of war, the stress of revolution, the emergence of heroic individuals, the impact of migrations generated by war and famine, the incoming of barbarians, to change established institutions. ––John Dewey
Humanity today is like a waking dreamer, caught between the fantasies of sleep and the chaos of the real world. The mind seeks but cannot find the precise place and hour. We have created a Star Wars civilization, with Stone Age emotions, medieval institutions, and god-like technology. We thrash about. We are terribly confused by the mere fact of our existence, and a danger to ourselves and the rest of life. ––Edward O. Wilson
If I’m being honest, Dewey and Wilson’s appraisals of humanity seem much more aligned with reality than Ord’s. Furthermore, Ord’s call to use our “unmatched intelligence” and “unmatched power” to populate the universe and “make it worthy of our awe” feels nauseatingly anthropocentric. Readers who aren’t big fans of what humans have been up to in recent decades are likely to cite these passages as reasons for ignoring Ord’s excellent and urgent message. This is a shame.
My final complaint centers on some meta-worrying about how Ord’s priorities can (or will) be put to use in the real world. The way I see it, there are two basic routes for undertaking the practical project of minimizing ExRisk and setting humanity up for long-term success: the populist route and the elitist route. This is a simplification of a complex situation, but if you bear with me, you may find this somewhat arbitrary dichotomy useful.
The populist route essentially requires us to convince a critical mass of the global population to accept longtermism and insist that their governments make ExRisk a major priority. This seems implausible to me because, as Ord rightly states, human psychology is generally hostile to longtermism. I think this is especially true when asking people to treat the lives of possible future humans as having value equal to the lives of existing humans. People en masse may prove so resistant to this idea that we discover there’s a natural ceiling on longtermism’s growth. Still, I think Ord and his supporters should run this experiment and find out, and am personally committed to helping them do this. The populist route, if attainable, aligns with the values of liberal democracy and traditional humanism. It also syncs nicely with the goals of the Effective Altruism movement, of which Ord is a keystone member.
The elitist route, in contrast, represents my greatest fear about how a growing concern with longtermism and ExRisk might go wrong. When stacked up against the populist route, I think the chances of longtermism having a significant impact in the near future are much higher if a small number of sufficiently influential elites decide to adopt it. Given how global wealth is presently concentrated, a bespoke coterie of do-gooder billionaires, technologists, scientists, and politicians could not only jumpstart this movement but also ensure ample attention and funding for the long run. This would be something to celebrate, assuming these ExRisk philanthropists had altruistic motivations, were sensitive to possible errors in their execution, and adapted their approach in response to legitimate criticism. And I must say, Ord and his ilk totally seem like such people.
But here’s where things get goosebumpy for me. Surveying the current state of politics, economics, and media in America and elsewhere on Earth, one does not have to work very hard to imagine a charismatic and ambitious sociopath (or group of sociopaths) gaining global popularity using apocalyptic rhetoric derived from a bastardized interpretation of the longtermist view. If you think like a supreme asshole, longtermism provides the ultimate secular version of “the ends-always-justify-the-means” reasoning, especially in a situation where the stakes quite literally can’t be higher. Why do we have to go to war with China? To protect humanity’s long-term future. Why do we have to invade our nearest neighboring country? To protect humanity’s long-term future. Why do we have to accept total government surveillance and restrictions on basic freedoms? To protect humanity’s long-term future. Why do we have to exterminate this troublesome minority population? To protect humanity’s long-term future.
Below are two quotes from The Precipice––Ord’s own words. But instead of imagining them coming from a kindly, balding philosopher, I’d like you to imagine them oozing from the venomous lips of a brilliant, beautiful, newly-minted demagogue:
The idea that it may be a serious crime to impose risks to all living humans and to our entire future is a natural fit with the common-sense ideas behind the law of human rights and crimes against humanity…our descendants would be shocked to learn that it used to be perfectly legal to threaten the continued existence of humanity. (204)
These blights upon our world must end. And we can end them––if we survive. In the face of persecution and uncertainty, the noblest among our ancestors have poured their efforts into building a better and more just world. If we do the same––and give our descendants a chance to do the same––then as our knowledge, invention, coordination and abundance grow, we can more and more fulfill the fierce hope that has flowed through so many strands of the human project: to end the evils of our world and build a society that is truly just and humane. (236, emphasis his)
I’d like to say that I’m not trying to scare you, but I am––just a little. As the global temperature rises on a host of intimidating issues, I think the potential for longtermism to be hijacked by bad actors is nonnegligible. This may represent another type of ExRisk, or at least what Ord calls an “existential risk factor”––a factor that can’t by itself extinguish humanity’s future but may increase our vulnerability to genuine ExRisks (177). Indeed, Ord discusses at one point an “enforced dystopia”––a future “where only a small group wants that world but enforces it against the wishes of the rest” (153-4). How horrific would it be if such a world came about due to willful abuse of Ord’s noble attempt to stand up for future generations and promote the best possible future for us and our descendants? I’m giving this a 1 in 50 chance of happening in the next hundred years (just kidding!).
To be entirely clear: I am not suggesting that Ord or any other longtermists I’m aware of are using their ideology to advocate for violence, oppression, or world domination. They’re not.
Okay, enough gloom and doom. I’m not going to leave you sad or cynical, dear reader, especially if you’ve managed to venture this far into my too-long review. The last chapter of The Precipice is called “Our Potential,” and it is one of the most remarkable and inspiring philosophical essays I have ever read. I’ve heard Ord interviewed many times, and now that I’ve read his book I realize that his positive vision for humanity’s future doesn’t get nearly as much attention as his ExRisk analysis. If it bleeds, right? This is too bad because his last chapter strikes me as the standout moment that makes The Precipice exceptional.
Ord’s possible future for humanity has three components: Duration (in time), Scale (in space), and Quality (of life/experience). Here’s my favorite passage from each section:
Duration:
[In the very distant future] We could save the biosphere from the effects of the brightening Sun. Even if humanity is very small in your picture of the world, if most of the intrinsic value of the world lies in the rest of our ecosystem, humanity’s instrumental value may yet be profound. For if we last long enough, we will have a chance to literally save our world.
By adding sufficient new carbon to the atmosphere to hold its concentration steady, we could prevent the end of photosynthesis. Or if we could block a tenth of the incoming light (perhaps harvesting it as solar energy), we could avoid not only this, but all the other effects of the Sun’s steady brightening as well, such as the superheated climate and the evaporation of the oceans. Perhaps, with ingenuity and commitment, we could extend the time allotted to complex life on Earth by billions of years, and, in doing so, more than redeem ourselves for the foolishness of our civilization’s youth. I do not know if we will achieve this, but it is a worthy goal, and a key part of our potential. (222, emphasis his)
Scale:
At the ultimate physical scale, there are 20 billion galaxies that our descendants might be able to reach. Seven eighths of these are more than halfway to the edge of the affectable universe––so distant that once we reach them no signal could ever be sent back. Spreading out into these distant galaxies would thus be a final diaspora, with each galactic group forming its own sovereign realm, soon causally isolated from the others. Such isolation need not imply loneliness––each group would contain hundreds of billions of stars––but it might mean freedom. They could be established as pieces of a common project, all set in motion with the same constitution; or as independent realms, each choosing its own path. (233)
Quality:
Peak experiences are not merely potential dwellings––they are also pointers to possible experiences and modes of thought beyond our present understanding. Consider, for example, how little we know of how ultraviolet light looks to a finch; of how echolocation feels to a bat, or a dolphin; of the way that a red fox, or a homing pigeon, experiences the Earth’s magnetic field. Such uncharted experiences exist in minds much less sophisticated than our own. What experiences, possibly of immense value, could be accessible, then, to minds much greater? Mice know very little of music, art or humor. Toward what experiences are we as mice? What beauties are we blind to?
Our descendants would be in a much better position to find out. At the very least, they would likely be able to develop and enhance existing human capacities––empathy, intelligence, memory, concentration, imagination. Such enhancements could make possible entirely new forms of human culture and cognition: new games, dances, stories; new integrations of thought and emotion; new forms of art. And we would have millions of years––maybe billions, or trillions––to go much further, to explore the most distant reaches of what can be known, felt, created and understood.
In this respect, the possible quality of our future resembles its possible duration and scope. We saw how human civilization has proved only a tiny fraction of what is possible in time or space. Along each of these dimensions, we can zoom out from our present position with a dizzying expansion of scale, leading to scarcely imaginable vistas waiting to be explored. Such scales are a familiar feature of contemporary science. Our children learn early that everyday experience has only acquainted us with a tiny fraction of the physical universe.
Less familiar, but just as important, is the idea that the space of possible experiences and modes of life, and the degree of flourishing they make available, may be similarly vast, and that everyday life may acquaint us with a similarly parochial proportion. In this sense, our investigations of flourishing thus far in history may be like astronomy before telescopes––with such limited vision, it is easy to think the universe is small, and human-centered. Yet how strange it would be if this single species of ape, equipped by evolution with this limited set of sensory and cognitive capacities, after only a few thousand years of civilization, ended up anywhere near the maximum possible quality of life. Much more likely, I think, that we have barely begun the ascent. (237-8)
Some will dismiss these aspirations as outrageous hubris, and others may fret about fueling humanity’s thirst for “intergalactic colonialism.” Fine, let’s have those discussions; they’ll be good ones. But I think we also have to consider that, if some version of Ord’s future comes to pass, he may one day be studied and spoken about as a real-life hero in the history of philosophy, a figure with heavyweight status akin to Socrates or Gautama Buddha.
The Precipice filled me with awe and excitement, but more than anything it made me grateful––grateful to be alive and conscious in this vertiginous moment atop the Precipice, a time when such wonders are dreamt of and vigorously debated.
Rating: 10/10
Another excellent review Miles! Excellent quotes and I always love it when you provide your own thoughts within the context of the material you are reading. Thanks!
Thanks for reading, Kevin! I’m always impressed with your ability to tackle my longest (and most tedious?) reviews. 🙂
“…nauseatingly anthropocentric.” I suppose the nauseating part is a bit redundant in that anthropocentrism is generally nauseating (unless it’s about my cat!, but I digress). Very apt observation.
This review is nearly as much of a catalyst for deeper investigation as I imagine the book was. (that’s my dad way of saying it’s an excellent review.)
However, not having read it (yet), I wonder if he couldn’t just have quantified risk by stratum, for example: marginal risk, low, medium, high, very likely, etc. The actual figures make it seem implausibly wonky, but I’m not a math guy. I do like your concern about someone’s potential “negative” spin on these concepts, but a demagogue of your concern might just argue “negative” from whose perspective? Somebody current comes to mind…
At any rate, you seem to have added to my burgeoning reading list; at this rate I’m never gonna get to take that nap!
And too bad most of us won’t be around to see more significant unraveling of the story!
Thanks Dad for this thoughtful comment! I’m glad you got around to reading this review. In particular, I really like your suggestion about presenting the risks by stratum rather than using quantitative figures––kind of like an “ExRisk Likert Scale.” This would work much better for me, and perhaps for you as well, but it’s also important to keep in mind that Ord’s target audience (policy wonks, technologists, and effective altruists) tends to be extremely math- and statistics-oriented. So in terms of making the intended impact, using numerical figures may have been the right way to go.
In any case, I’m sure Ord thought about it a lot and chose his strategy according to what he thought would be most effective. But as I said in the review, this decision may alienate some readers more than others.
Missed this review back in June. Picked up the book (used on Abe Books) after spending that utterly delightful weekend in Eureka and enjoying that wonderful conversation with you in the corner at the beginning of the celebration.
Will report back 🙂
Excellent! Glad our chat made an impression and can’t wait to hear your thoughts. 🙂