Review: Max More and Natasha Vita-More’s “The Transhumanist Reader”
by Miles Raymer
Max More and Natasha Vita-More’s The Transhumanist Reader is probably the single best source for readers interested in a crash course in transhumanist philosophy. It presents more than forty essays addressing myriad aspects of transhumanist theory, with a good mixture of classic (i.e. pre-21st-century) papers and contemporary ones. It is a dense text containing a lot of terminological inconsistencies and conceptual redundancies, so prospective readers should have a basic level of preexisting knowledge about the ideas and research fields relevant to transhumanist endeavors. The quality and length of the essays vary significantly, and several of the articles were too specialized and/or mathematically advanced for an amateur like me to fully grasp. In general, however, the essays should be accessible to those willing to put in the time and effort.
As defined by the Transhumanist FAQ, transhumanism is:
The intellectual and cultural movement that affirms the possibility and desirability of fundamentally improving the human condition through applied reason, especially by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities…The study of the ramifications, promises, and potential dangers of technologies that will enable us to overcome fundamental human limitations, and the related study of the ethical matters involved in developing and using such technologies. (3)
I have considered myself a transhumanist almost as long as I have been aware of the movement, and it seems to me that most people are actually sympathetic to transhumanist aspirations, even if they are unfamiliar with the label. But, as with any ideological faction, the health of transhumanist thought requires constant analysis and critique. My review will focus, therefore, not on the many ways I concur with the transhumanist worldview, but on the areas where I feel the movement lacks perspective and/or requires revision. This is a difficult task, since transhumanists openly admit that their philosophy contains no central dogma or specific prescriptive behaviors, but I will nevertheless do my best to track some worrisome threads in their discourse, hopefully with the result of exposing weaknesses and proposing ways to improve transhumanist thought moving forward.
One of the central features of the transhumanist perspective is summed up in Max More’s “Proactionary Principle,” which he defines with three interrelated imperatives:
- Progress should not bow to fear, but should proceed with eyes wide open.
- Protect the freedom to innovate and progress while thinking and planning intelligently for collateral effects.
- Encourage innovation that is bold and proactive; manage innovation for maximum human benefit; think about innovation comprehensively, objectively, and with balance. (264-5)
The Proactionary Principle is a direct response to the Precautionary Principle, an intellectual product of the late 20th-century environmental movement: “When an activity raises threats of harm to the environment or human health, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically” (260). Transhumanist authors such as Ted Chu have argued that the Precautionary Principle “essentially prohibits any new technology or activity unless it can be scientifically proven that there will be no resulting harm to health or environment” (Human Purpose and Transhuman Potential, 304). But, when compared to the above definition, Chu’s position is clearly a straw man argument. The Precautionary Principle does not require absolute proof that no harm will occur, nor does it endorse technology bans or relinquishment of cutting-edge research. When possible harms of new technologies are analyzed, it is (and should be) a live question as to what constitutes a “precautionary measure,” especially in cases involving hypercomplex systems, where our understandings of all the salient “cause and effect relationships” may be inadequate. Such measures can easily be designed to minimize (not completely obviate) risk––a goal also shared by proactionary thinkers.
It’s important to envision ways that the Precautionary and Proactionary Principles might coexist, rather than contradict, one another. Progress, when defined as increases in human flourishing that don’t cause unacceptable damage to human communities or the greater biosphere, requires a complex interplay between precaution and proaction. Whether one principle should be favored over the other is largely dependent on context––in some situations/environments/communities, proaction is more desirable than precaution, and vice versa.
Precaution and proaction share a common goal, which is to preserve and promote human safety and wellbeing. Max More rightly points out that the Precautionary Principle can lead to “being obsessively preoccupied with a single value––safety,” but does not admit that the Proactionary Principle has equal potential to result in obsessive preoccupation with a different but no less singular value––progress (261, emphasis his). Progress and safety are not mutually exclusive, and their definitions vary depending on cultural priorities and assumptions.
As I will argue later, I think it is a fundamental mistake to draw a sharp distinction between people and organizations focused on precaution and those favoring proaction; too much common ground is ignored. Groups that ought to unite on multiple fronts end up squabbling over ideology, even as concrete problems go neglected. There are huge risks related to transhumanist objectives––extinction, ecosystem collapse, and dystopian social stratification, to name a few. But there are also potentially huge payoffs––immortality (or massively improved longevity), post-scarcity lifestyles, and vast improvements/expansions of conscious experience. It seems foolish to think we can get by with either proaction or precaution as a dominant guiding principle, when in fact both perspectives are necessary. This appears to be precisely what More is suggesting when he urges us to “think about innovation comprehensively, objectively, and with balance” and to “plan intelligently for collateral effects.” I seriously doubt that such an approach would always preclude reasonable invocations of the Precautionary Principle.
How societies choose to apply and/or privilege proactionary and precautionary methods will have serious consequences for Earth in the 21st century. One of the most germane questions for contemporary transhumanists is whether human enhancement––including but not limited to genetic manipulation (conscious evolution), pharmaceutical and dietary supplementation, physical augmentation, development of increasingly complex and realistic virtual environments, and the creation of human- or superhuman-level AI––can be pursued with a sincere and realizable commitment to egalitarian access. Although some transhumanists are quick to dismiss worries about sharp increases in inequality and social stratification, several authors in The Transhumanist Reader offer perceptive recommendations about how to parse and address this critical issue.
Gregory Stock identifies enhancement as the “next frontier” of human exploration:
The next frontier is not outer space but ourselves. Exploring human biology and facing the truths we uncover in the process will be the most gripping adventure in all our history…What emerges from this penetration into our inner space will change us all: those who stay home, those who oppose the endeavor, those tarrying at its rear, and those pushing ahead at its vanguard. (312)
Stock is right to emphasize that enhancement is everyone’s concern, including those who plan nonparticipation or active resistance. The potential benefits of successful, safe genetic sculpting are too great to justify banning research and experimentation outright, so we will have to manage risks, costs, and distribution in order to minimize harmful socioeconomic tension. Michael H. Shapiro explains:
If you could generate major changes in mental and physical ability only through very expensive technological applications, you may sharply and irreversibly increase social partitioning to the point of true “lock-in”…Enhancement technologies aren’t free, and future development and economies of scale may still leave them beyond the means of many persons…If we decide that enhancement is tolerable, permissible, good, or even obligatory when distribution is not at issue, distributional effects––such as drastically exacerbated and irreversible social stratification––may render the moral price of enhancement unacceptable in some eyes, on some theories. If the partitioning is linked to race, ethnicity, gender, religion, or other problematic classifications, the price may be that much higher. (287-8, emphasis his)
Shapiro is describing dangers that are very real and potentially costly, both in economic and ethical terms. Though I am receptive to historical arguments demonstrating that the cost of technological innovations tends to fall over time, eventually leading to easy universal access, there is no guarantee that this will be the case with genetic and other forms of enhancement.
If small numbers of wealthy individuals gain exclusive access to enhancement, even for a short time, the risks of exacerbating already existing inequities are significant. This is because we don’t know how quickly certain enhancements might lead to further ones, especially if they are being researched/implemented with the help of AGI or ASI (artificial general or superintelligence). Granted, these early adopters (which Peter Watts has cleverly called the “bleeding edge”) will assume quite a lot of risk by volunteering for experimental treatments. I think the best case scenario here is that the bleeding edge discovers and corrects the most harmful blunders using trial and error, with successful individuals and groups gaining a competitive advantage. The precise character of this advantage is difficult to foresee.
If intelligent laws and/or social norms can be adopted that retard (not restrict) the pace at which certain kinds of enhancement can progress, this will provide time for costs to come down and for less privileged members of society to reap the rewards of those who came before. The bleeding edge will have its hard-won head start, but will not be so far ahead that they can stop others from following or exert undue influence on future events. Thus, slowly and carefully, enhancement is democratized. Protections would also need to be put in place for the “new Amish”––those who choose not to enhance, whether for religious, ideological, or personal reasons.
Do I think this rosy picture is possible? Yes. Do I find it probable? I’m not sure, and I wouldn’t trust anyone who was too certain one way or the other. The general trend may be attainable, but the transitory details could be quite messy. But I don’t think it’s outrageous to suggest that, in the long term, human enhancement will contribute far more to human flourishing than to conflict and discrimination.
Damien Broderick gives a clear picture of what our failure to achieve egalitarian access would look like:
One could imagine a future world in which extended life is allowed only to a few––the very wealthy, the political elite and their chosen followers, Mafia, military, scientists, sports heroes, movie stars. This is not the transhumanist objective––far from it. It is up to all of us to ensure that this segmented future never happens. We will not best prevent it by denouncing technical advances and trying to blockade them, but in thinking hard, feeling deeply and wisely, debating the issues together, and acting as free men and women. (436)
I applaud Broderick’s sentiment, but would like to add that our thoughts, emotions, debates, and free actions will be meaningless without the social, political, and monetary clout necessary to manage the advent of enhancement in a (trans)humane fashion. The current political sphere, at least in the USA, is not intellectually or structurally equipped for this challenge. Politicians and policies are easily purchased, huge amounts of private money and public time are wasted on opulent campaigns, and scientific sensibilities are steamrolled by corporate interests. It should be among our highest priorities to fix, or at least ameliorate, these barriers to progress before experimentation with enhancement goes too far. Though our political system is surely in need of reform, a radically free market for enhancements is not the right path. Smart, informed regulations, as well as well-allocated public funding for the best and most accessible enhancements, will be essential. Otherwise, the most prudent forecast would seem to be that enhancement will contribute to, rather than reverse, existing forms of systematic inequity and injustice.
My last point is to voice my continuing amazement that people who are tirelessly interested in humanity’s future can all but ignore the growing threat of climate change. In The Transhumanist Reader’s entire 460 pages, the closest thing to a direct reference to climate change comes from transhumanist front-man Ray Kurzweil:
Ubiquitous nanotechnology, now about two decades away, will…create extraordinary wealth, thereby eliminating poverty, and enabling us to provide for all of our material needs by transforming inexpensive raw materials and information into virtually any type of product. Lingering problems from our waning industrial age will be overcome. We will be able to reverse remaining environmental destruction. (451)
It is my fervent hope that Kurzweil’s statement becomes reality. However, I’m baffled that he and other futurists can be so consistently sanguine about the ability of future technology to revive collapsed ecosystems and lessen (or halt) the worst effects of climate change. Can Kurzweil explain precisely how nanotechnology will reverse the melting of the Antarctic, prevent desertification, replenish exhausted fish stocks, or revitalize coral reefs and other endangered ecosystems? How do we program nanomachines to restore hypercomplex systems we don’t fully understand?
In the coming century, the effects of climate change (superstorms, drought, famine, sea level rise, desertification-induced migration, and the international conflicts that result from these pressures) will most probably cause volatility in global markets and disruption of supply chains. This not only has the potential to compromise the manufacturing and distribution of transhumanist technologies, but could also quell research by cutting off access to essential resources. Economies of scale might not be as reliable in the near future as they are now, and there’s no guarantee that the technologies we need to lessen climate damage will be invented and mass-produced before these networks become untenable. I’m not betting one way or the other, but it surprises me that Kurzweil and other futurists don’t appear to view the climate situation as threatening to transhumanist values and goals. (One exception is Jeremy Rifkin, whose excellent book The Zero Marginal Cost Society identifies climate change as one of the two major threats (along with cybercrime) to 21st-century global stability.)
The worst part of this problem is the apparently mutual enmity between transhumanists and the environmental movement. Transhumanists are fond of criticizing environmental activists, most notably Bill McKibben, whose 2003 book Enough explicitly rejects practices that transhumanists see as indispensable, such as genetic modification of plants and animals.
There are legitimate and fundamental disagreements between transhumanists and environmentalists, but in a time of rapid technological advancement and equally rapid environmental degradation, these groups would be better off playing for the same team. Each should consult scientific consensus to discover the flaws in their positions: Transhumanists should acknowledge that climate change poses a significant and imminent threat that may require more than a future-tech fix, and environmentalists should admit that genetic modification is not the evil Pandora’s Box they’ve made it out to be. Precautionary and proactionary approaches both have a role to play in the coming era, so each camp would be well-advised to acknowledge the other’s legitimacy and seek nonzero-sum opportunities for practical compromise. Such attitudinal shifts would sow seeds of scientifically informed goodwill and stake out common ground for collaborative efforts between technologists and naturalists, potentially resulting in tremendous flourishing for the whole biosphere.
“Transhumanism has an intellectual core,” writes Russell Blackford. “It makes large claims––large enough and clear enough to provoke anxieties. One core idea is of human beings in transition…Transition, then, from what to what?” (421, emphasis his). Here is the inkling of a rallying point––the idea of humans in transition. We have always been moving, always adapting, but now we are moving faster than ever toward unmapped horizons. As we accelerate, we do well to remind ourselves that it might not be good enough to simply enjoy the ride. We must champion forms of transition that not only serve our interests, but that also align with our ethical and ideological aspirations. We don’t just want transition; we want a just transition. This is the true transhumanist challenge.
Rating: 8/10