SNQ: Yuval Noah Harari’s “Nexus”
by Miles Raymer

Summary:
Like all his previous books, Yuval Noah Harari’s Nexus deploys the lens of history to sharpen our view of the present. Focusing on the historical trends and current states of global information networks, Harari demonstrates the dangerous half-truths that arise from flawed theories of information. He also presents his own theory that information networks help civilizations balance the natural tension between the pursuit of empirical truth and the maintenance of social order. Harari invites us to grapple with the question of how humanity has amassed such immense technological power while remaining self-destructive and unwise. In the age of artificial intelligence, the time we have to pair our power with corresponding maturity is rapidly running out. Nexus is an intense and urgent book that offers an accessible entrance to the AI debate for readers new to the subject, as well as hard-hitting insight for readers already steeped in AI theory.
Key Concepts and Notes:
- The central idea in Nexus is that humans gain power by creating increasingly large and complex information networks, and the structure of these networks becomes a major determining factor of a civilization’s character. Harari presents three theories of information: the “naive” view, the “populist” view, and the “more complete historical” view. The naive view posits that information always leads to truth, which in turn generates both wisdom and power. The populist view says that information only leads to power. The more complete historical view argues that information generates both truth about the world and order in society, with truth increasing our wisdom and both truth and order generating power. The more complete historical view, Harari says, is the appropriate framework for analyzing historical information networks and informing our decisions about how to structure contemporary ones.
- Applying these theories to politics, Harari treats democracy and totalitarianism as two ends of a spectrum, providing numerous examples of how information networks pull societies in one direction or the other. In general, totalitarian societies seek to concentrate power in a single bureaucracy that is controlled by an all-powerful leader. Totalitarian societies are good at creating and maintaining order, but tend to suppress the discovery of truth. They are also fragile because they lack effective self-correcting mechanisms and can break down quickly if the centralized network is compromised. Citizens in totalitarian states have little choice regarding how their society is structured and are often prevented from safely participating in public conversations and critiquing their governments.
- Democracies, in contrast, distribute power between various governmental and non-governmental organizations, and function through an ongoing process of collaboration and conflict between various independent information networks. Democracies are better than totalitarian societies at discovering and disseminating truth, but often struggle to maintain order. Democracies can bolster their capacity to maintain order by implementing institutional self-correction mechanisms that allow the government and other important organizations to retain the public’s trust. Democratic societies protect the rights of individuals to actively participate in public conversations and critique the government, viewing this process as one of many important self-correcting mechanisms that enable democratic flourishing.
- Nexus explores many historical examples of the interplay between information networks, totalitarian regimes, and democracies. But where the book shines is in the second half, where Harari addresses the consequences of introducing AI into the picture. Harari’s most important “this time it’s different” argument involves the agentic capacity of AI systems, which differentiates them from any other technology in human history. For the first time, humanity must now share the planet with an “alien intelligence” that already rivals our own intellectual prowess and may soon surpass it. These agents are already capable of doing many human tasks as well as or better than an average person, including jobs that involve creativity, forming emotional bonds with people, and generating new religions/mythologies that humans will be all too willing to adopt. From a purely functional standpoint, the notion that “some things will always need to be done by humans” seems less plausible by the minute.
- The potential consequences of the AI revolution for human politics and culture are staggering. Everything is on the table, from nightmarish dystopias to splendid utopias and––most likely––some weird mixture of good and evil that no living person can yet imagine. As AI-driven information networks continue to grow, humans will cede more and more authority to systems and agents whose motives and agendas are either mostly or entirely inscrutable. Nobody knows what kind of civilizations this process will generate, nor can we be assured that humans will retain an understanding of why our civilizations are structured the way they are. Indeed, the civilizations in which we live may no longer be strictly ours.
- Looking to the near future, Harari considers that we might be living through the lowering of a “Silicon Curtain,” which could isolate populations inside AI-controlled information bubbles that are increasingly disconnected from each other, to the point where a person living on one side of the Curtain may barely recognize the daily habits, incentives, or goals of a person living on the other side. AI’s informational power could concentrate within a single global empire or––more likely––could coalesce into several different spheres of influence (e.g. the “American sphere” versus the “Chinese sphere”). The consequences of this latter outcome are potentially catastrophic because such stark isolation may rob humanity of its capacity for collective action to address global issues such as climate change, or render cross-Curtain communication so challenging that new world wars break out due to communication breakdowns and/or competition for essential resources. But the alternative––a single worldwide AI-enabled empire––could prove to be the most comprehensively oppressive society in human history.
- To prevent either of these outcomes, Harari advocates for a middle path that allows nations to retain a legitimate sense of self-interested patriotism while also committing to universal rules about AI use that protect humanity as a whole. It’s a sensible approach, but one that Harari admits could be quite easy to undermine, given the natural pressures of evolutionary competition, game theory, and AI’s considerable ability to operate stealthily. Harari highlights the myriad cooperative dynamics that pervade our planet’s ecosystems, urging us to mimic them instead of giving in to a zero-sum mindset.
- Harari also argues that keeping humans “in the loop” regarding key decisions is crucial. Harari repeatedly warns against “technological determinism,” insisting that the choices we make right now and in the near future can lead to better or worse outcomes. Even if AI vastly improves how we utilize resources, create and distribute wealth, wage war, or keep peace, there’s no way to guarantee that these choices will be made with humanity’s best interest in mind if people are cut out entirely. Of course, as Harari’s historical analysis reveals, there has never been a way to guarantee that human leaders act in the best interest of the species, but giving the process over to AI entirely could make that prospect even dimmer. The good news is that fruitful collaboration between humans and AIs, fostered by new institutions with strong self-correcting mechanisms, could open new horizons of prosperity and well-being.
- As with any book that attempts to peer into the future, nobody knows if the forecasting in Nexus will prove enlightening in the long run. But I have a hard time believing that Harari has completely missed the mark, so I think the value of this book relative to others I have read on this topic is very high. As always, Harari’s writing is a joy to read and his perspective is inimitable.
- My one complaint––or perhaps reservation––about this book is that the act of reading it made me feel utterly helpless. There were two points toward the end where Harari identified the “heavy responsibility on all of us to make good choices” and the gravity of the “decisions we all make in the coming years” (393, 404). I couldn’t help but scoff at this sentiment, not because I think Harari’s wrong about the importance of human choices in this moment, but because I think his sense of scope is off. Given how things have played out so far, I don’t think the average Earthling has any power whatsoever to influence the shape of what’s to come. It feels like our fate is being decided by a tiny number of insanely wealthy and powerful individuals who are creating and learning to wield AI technology, with the rest of us just along for the ride. Maybe this is too cynical, but it’s my honest take. So yeah, if you’re Sam Altman and you’re reading this book, please take note and do your best to internalize Harari’s point of view. But if you’re just some guy on the Internet like me, I recommend focusing on ways to enjoy yourself and participate in meaningful work; we have no idea how much longer those options will be available to us.
Favorite Quotes:
Given the magnitude of the danger, AI should be of interest to all human beings. While not everyone can become an AI expert, we should all keep in mind that AI is the first technology in history that can make decisions and create new ideas by itself. All previous human inventions have empowered humans, because no matter how powerful the new tool was, the decisions about its usage remained in our hands. Knives and bombs do not themselves decide whom to kill. They are dumb tools, lacking the intelligence necessary to process information and make independent decisions. In contrast, AI can process information by itself, and thereby replace humans in decision making. AI isn’t a tool––it’s an agent.
Its mastery of information also enables AI to independently generate new ideas, in fields ranging from music to medicine. Gramophones played our music, and microscopes revealed the secrets of our cells, but gramophones couldn’t compose new symphonies, and microscopes couldn’t synthesize new drugs. AI is already capable of producing art and making scientific discoveries by itself. In the next few decades, it will likely gain the ability even to create new lifeforms, either by writing genetic code or by inventing an inorganic code animating inorganic entities.
Even at the present moment, in the embryonic stage of the AI revolution, computers already make decisions about us––whether to give us a mortgage, to hire us for a job, to send us to prison. This trend will only increase and accelerate, making it more difficult to understand our own lives. Can we trust computer algorithms to make wise decisions and create a better world? That’s a much bigger gamble than trusting an enchanted broom to fetch water. And it is more than just human lives we are gambling on. AI could alter the course not just of our species’ history but of the evolution of all lifeforms. (xxii-xxiii)
History is often shaped not by deterministic power relations, but rather by tragic mistakes that result from believing in mesmerizing but harmful stories. (31)
To survive and flourish, every human information network needs to do two things simultaneously: discover truth and create order. Accordingly, as history unfolded, human information networks have been developing two distinct sets of skills. On the one hand, as the naive view expects, the networks have learned how to process information to gain a more accurate understanding of things like medicine, mammoths, and nuclear physics. At the same time, the networks have also learned how to use information to maintain stronger social order among larger populations, by using not just truthful accounts but also fictions, fantasies, propaganda, and––occasionally––downright lies.
Having a lot of information doesn’t in and of itself guarantee either truth or order. It is a difficult process to use information to discover the truth and simultaneously use it to maintain order. What makes things worse is that these two processes are often contradictory, because it is frequently easier to maintain order through fictions. (37)
The history of the early modern European witch craze demonstrates that releasing barriers to the flow of information doesn’t necessarily lead to the discovery and spread of truth. It can just as easily lead to the spread of lies and fantasies and to the creation of toxic information spheres. More specifically, a completely free market of ideas may incentivize the dissemination of outrage and sensationalism at the expense of truth. It is not difficult to understand why. Printers and booksellers made a lot more money from the lurid tales of The Hammer of the Witches than they did from the dull mathematics of Copernicus’s On the Revolutions of the Heavenly Spheres. The latter was one of the founding texts of the modern scientific tradition. It is credited with earth-shattering discoveries that displaced our planet from the center of the universe and thereby initiated the Copernican revolution. But when it was first published in 1543, its initial print run of four hundred failed to sell out, and it took until 1566 for a second edition to be published in a similar-sized print run. The third edition did not appear until 1617. As Arthur Koestler quipped, it was an all-time worst seller. What really got the scientific revolution going was neither the printing press nor a completely free market of information, but rather a novel approach to the problem of human fallibility. (101)
While democracy agrees that the people is the only legitimate source of power, democracy is based on the understanding that the people is never a unitary entity and therefore cannot possess a single will. Every people––whether Germans, Venezuelans, or Turks––is composed of many different groups, with a plurality of opinions, wills, and representatives. No group, including the majority group, is entitled to exclude other groups from membership in the people. This is what makes democracy a conversation. Holding a conversation presupposes the existence of several legitimate voices. If, however, the people has only one legitimate voice, there can be no conversation. Rather, the single voice dictates everything. Populism may therefore claim adherence to the democratic principle of “people’s power,” but it effectively empties democracy of meaning and seeks to establish a dictatorship. (131-2)
Totalitarian regimes choose to use modern information technology to centralize the flow of information and to stifle truth in order to maintain order. As a consequence, they have to struggle with the danger of ossification. When more and more information flows to only one place, will it result in efficient control or in blocked arteries and, finally, a heart attack? Democratic regimes choose to use modern information technology to distribute the flow of information between more institutions and individuals and encourage the free pursuit of truth. They consequently have to struggle with the danger of fracturing. Like a solar system with more and more planets circling faster and faster, can the center still hold, or will things fall apart and anarchy prevail? (186)
Fear of powerful computers has haunted humankind only since the beginning of the computer age in the middle of the twentieth century. But for thousands of years humans have been haunted by a much deeper fear. We have always appreciated the power of stories and images to manipulate our minds and to create illusions. Consequently, since ancient times humans have feared being trapped in a world of illusions. In ancient Greece, Plato told the famous allegory of the cave, in which a group of people are chained inside a cave all their lives, facing a blank wall. A screen. On that screen they see projected various shadows. The prisoners mistake the illusions they see there for reality. In ancient India, Buddhist and Hindu sages argued that all humans live trapped inside maya––the world of illusions. What we normally take to be reality is often just fictions in our own minds. People may wage entire wars, killing others and willing to be killed themselves, because of their belief in this or that illusion. In the seventeenth century René Descartes feared that perhaps a malicious demon was trapping him inside a world of illusions, creating everything he saw and heard. The computer revolution is bringing us face-to-face with Plato’s cave, with maya, with Descartes’s demon.
What you just read might have alarmed you, or angered you. Maybe it made you angry at the people who lead the computer revolution and at the governments who fail to regulate it. Maybe it made you angry at me, thinking that I am distorting reality, being alarmist, or misleading you. But whatever you think, the previous paragraphs might have had some emotional effect on you. I have told a story, and this story might change your mind about certain things, and might even cause you to take certain actions in the world. Who created this story you’ve just read?
I promise you that I wrote the text myself, with the help of some other humans. I promise you that this is a cultural product of the human mind. But can you be absolutely sure of it? A few years ago, you could. Prior to the 2020s, there was nothing on earth, other than a human mind, that could produce sophisticated texts. Today things are different. In theory, the text you’ve just read might have been generated by the alien intelligence of some computer. (213-4)
No matter how aware algorithms are of their own fallibility, we should keep humans in the loop, too. Given the pace at which AI is developing, it is simply impossible to anticipate how it will evolve and to place guardrails against all future potential hazards. This is a key difference between AI and previous existential threats like nuclear technology. The latter presented humankind with a few easily anticipated doomsday scenarios, most obviously an all-out nuclear war. This meant that it was feasible to conceptualize the danger in advance, and explore ways to mitigate it. In contrast, AI presents us with countless doomsday scenarios. Some are relatively easy to grasp, such as terrorists using AI to produce biological weapons of mass destruction. Some are more difficult to grasp, such as AI creating new psychological weapons of mass destruction. And some may be utterly beyond the human imagination, because they emanate from the calculations of an alien intelligence. To guard against a plethora of unforeseen problems, our best bet is to create living institutions that can identify and respond to the threats as they arise. (300-1)
The most important human skill for surviving the twenty-first century is likely to be flexibility, and democracies are more flexible than totalitarian regimes. While computers are nowhere near their full potential, the same is true of humans. This is something we have discovered again and again throughout history. For example, one of the biggest and most successful transformations in the job market of the twentieth century resulted not from a technological invention but from unleashing the untapped potential of half the human species. To bring women into the job market didn’t require any genetic engineering or some other technological wizardry. It required letting go of some outdated myths and enabling women to fulfill the potential they always had.
In the coming decades the economy will likely undergo even bigger upheavals than the massive unemployment of the early 1930s or the entry of women into the job market. The flexibility of democracies, their willingness to question old mythologies, and their strong self-correcting mechanism will therefore be crucial assets. Democracies have spent generations cultivating these assets. It would be foolish to abandon them just when we need them most. (326)
What happens to democratic debates when millions––and eventually billions––of highly intelligent bots are not only composing extremely compelling political manifestos and creating deepfake images and videos but also able to win our trust and friendship? If I engage online in a political debate with an AI, it is a waste of time for me to try to change the AI’s opinions; being a nonconscious entity, it doesn’t really care about politics, and it cannot vote in the elections. But the more I talk with the AI, the better it gets to know me, so it can gain my trust, hone its arguments, and gradually change my views. In the battle for hearts and minds, intimacy is an extremely powerful weapon. Previously, political parties could command our attention, but they had difficulty mass-producing intimacy. Radio sets could broadcast a leader’s speech to millions, but they could not befriend the listeners. Now a political party, or even a foreign government, could deploy an army of bots that build friendships with millions of citizens and then use that intimacy to influence their worldview. (342)
How might the rise of the new computer network change the shape of international politics? Aside from apocalyptic scenarios such as a dictatorial AI launching a nuclear war, or a terrorist AI instigating a lethal pandemic, computers pose two main challenges to the current international system. First, since computers make it easier to concentrate information and power in a central hub, humanity could enter a new imperial era. A few empires (or perhaps a single empire) might bring the whole world under a much tighter grip than that of the British Empire or the Soviet Empire. Tonga, Tuvalu, and Qatar would be transformed from independent states into colonial possessions––just as they were fifty years ago.
Second, humanity could split along a new Silicon Curtain that would pass between rival digital empires. As each regime chooses its own answer to the AI alignment problem, to the dictator’s dilemma, and to other technological quandaries, each might create a separate and very different computer network. The various networks might then find it ever more difficult to interact, and so would the humans they control. Qataris living as part of an Iranian or Russian network, Tongans living as part of a Chinese network, and Tuvaluans living as part of an American network could come to have such different life experiences and worldviews that they would hardly be able to communicate or to agree on much.
If these developments indeed materialize, they could easily lead to their own apocalyptic outcome. Perhaps each empire can keep its nuclear weapons under human control and its lunatics away from bioweapons. But a human species divided into hostile camps that cannot understand each other stands a small chance of avoiding devastating wars or preventing catastrophic climate change. A world of rival empires separated by an opaque Silicon Curtain would also be incapable of regulating the explosive power of AI. (364)
Refusing to reduce all human interactions to a zero-sum power struggle is crucial not just for gaining a fuller, more nuanced understanding of the past but also for having a more hopeful and constructive attitude about our future. If power were the only reality, then the only way to resolve conflicts would be through violence. Both populists and Marxists believe that people’s views are determined by their privileges, and that to change people’s views it is necessary to first take away their privileges––which usually requires force. However, since humans are interested in truth, there is a chance to resolve at least some conflicts peacefully, by talking to one another, acknowledging mistakes, embracing new ideas, and revising the stories we believe. This is the basic assumption of democratic networks and of scientific institutions. It has also been the basic motivation behind writing this book. (401)
As far as we know today, apes, rats, and the other organic animals of planet Earth may be the only conscious entities in the entire universe. We have now created a nonconscious but very powerful alien intelligence. If we mishandle it, AI might extinguish not only the human dominion on Earth but the light of consciousness itself, turning the universe into a realm of utter darkness. It is our responsibility to prevent this.
The good news is that if we eschew complacency and despair, we are capable of creating balanced information networks that will keep their own power in check. Doing so is not a matter of inventing another miracle technology or landing upon some brilliant idea that has somehow escaped all previous generations. Rather, to create wiser networks, we must abandon both the naive and the populist views of information, put aside our fantasies of infallibility, and commit ourselves to the hard and rather mundane work of building institutions with strong self-correcting mechanisms. This is perhaps the most important takeaway this book has to offer.
This wisdom is much older than human history. It is elemental, the foundation of organic life. The first organisms weren’t created by some infallible genius or god. They emerged through an intricate process of trial and error. Over four billion years, ever more complex mechanisms of mutation and self-correction led to the evolution of trees, dinosaurs, jungles, and eventually humans. Now we have summoned an alien inorganic intelligence that could escape our control and put in danger not just our own species but countless other lifeforms. The decisions we all make in the coming years will determine whether summoning this alien intelligence proves to be a terminal error or the beginning of a hopeful new chapter in the evolution of life. (403-4)