Mark Zuckerberg speaks at the F8 developer conference in San Jose, California, April 2019 (DPA Picture Alliance/Alamy Stock Photo).



Arthur O. Lovejoy’s 1936 book, The Great Chain of Being, tells the story of a remarkably durable conception of humanity’s place in the cosmos. Imagine every mineral, plant, and animal arrayed vertically, from least to greatest. Humans stand at the top, followed by other sentient beings, then plants, and, finally, the rocks and soil that support the whole. Some thinkers have interpreted this hierarchy as man’s dominion over earth, justifying all manner of depredations of the living world. Catholic social thought has emphasized a duty of stewardship, particularly in Pope Francis’s encyclical Laudato si’. The Chain of Being admits of both interpretations, encapsulating both hierarchy and interconnectedness. As Lovejoy shows, it is a capacious metaphor, embraced by very different thinkers in very different times to contemplate ultimate questions. 

Both modernity and postmodernity have undermined this metaphor. David Hume saw our reason as little more than a servant of our passions—the kind of irrational appetites and aversions we share with other animals. In the work of Darwin, Dennett, and Dawkins, the human appears less as the apex of nature than as one of myriad possible outcomes of blind evolutionary struggle. On this view, the planet could just as easily have been dominated by cockroaches or crabs. Despite the refinements of civilization, psychologists affirm the enduring influence of our “lizard-brain” limbic system. Meanwhile, what were once deemed lasting cultural achievements now appear, from some postmodern perspectives, as little more than a matter of taste, itself as contingent as the evolution of humans. These intellectual currents have coalesced around a posthumanist consensus on the exhaustion of “the human” as a normative category: it no longer provides reliable guidance on what we ought to do as ethical, political, or social beings.

Adam Kirsch’s The Revolt Against Humanity tours a variety of posthumanist worldviews, ranging from “dark ecology” to transhumanism. In the antihumanist environmentalism of dark ecology, we are culpable for having destroyed too much nature already, and incapable of conserving what remains. The dark ecologists despair of coordinated climate action, and also doubt that we’re capable of any effective defense against other threats arising from the Anthropocene, such as future pandemics. Some even welcome the imminent extinction of human beings and celebrate the endurance of an earth without us.


Kirsch ably contrasts this fatalist creed with transhumanism’s immortalist plan for superhuman self-improvement. For transhumanists, human bodies and brains as we know them are just too fragile, especially when compared with machines. “Wetware” transhumanists envision a future of infinite replacement organs for failing bodies, and brains jacked into the internet’s infinite vistas of information. “Hardware” transhumanism wants to skip the body altogether and simply upload the mind into computers. AIs and robots will, they assume, enjoy indefinite supplies of replacement parts and backup memory chips. Imagine Westworld’s Dolores, embodied in endless robot guises, “enminded” in chips as eternal as stars.

Kirsch worries that eco-antihumanism and tech-driven transhumanism are poised to ensnare even well-meaning persons in a baleful rejection of the human. It is indeed possible that new media will push these now-marginal viewpoints toward mainstream acceptance, just as they have so often promoted anti-vax nonsense. But both antihumanism and transhumanism are also vulnerable to critical scrutiny, and critical thinking should prove the more resilient force over time.


The worldview of dark ecologists is relatively easy to refute. Failures to arrest global warming, or to respect biodiversity in an equitable manner, are contingent. They are political failures; they are not determined by human biology. Skillful politicians and cultural leaders can change minds. There have already been enormous technological advances toward affordable green energy. Wilderness reserves are a proven preservation tactic, and wise governments will invest more in them. If we expand existing nature preserves, we might eventually move toward a respect for biodiversity that enables a high percentage of present species to persevere, bothered only by the occasional ecotourist or nature documentarist. 

The transhumanist challenge is more difficult to answer, because of the varied and overlapping efficiencies that advanced computation now offers. A law firm cannot ignore ChatGPT; not only can it automate simple administrative tasks now, but it may also become a powerful research tool in the future. Militaries feel pressed to invest in AI because technology vendors promise it will upend current balances of power. Even tech critics have Substacks, X (formerly Twitter) accounts, and Facebook pages, and they are all subject to the algorithms that determine whether they have one, a hundred, or a million readers. In each case, persons with little choice but to use AI systems are donating more and more data to advance the effectiveness of AI, thus constraining their future options even more. “Mandatory adoption” is a familiar dynamic: it was much easier to forgo a flip phone in the 2000s than it is to avoid carrying a smartphone today. The more data any AI system gathers, the more it becomes a “must-have” in its realm of application.


Is it possible to “say no” to ever-further technological encroachments? For key tech evangelists, the answer appears to be no. Mark Zuckerberg has fantasized about direct mind-to-virtual-reality interfaces, and Elon Musk’s Neuralink also portends a perpetually online humanity. Musk’s verbal incontinence may well be a prototype of a future where every thought triggers AI-driven responses, whether to narcotize or to educate, to titillate or to engage. When integrated into performance-enhancing tools, such developments also spark a competitive logic of self-optimization. A person who could “think” their strategies directly into a computing environment would have an important advantage over those who had to speak or type them. If biological limits get in the way of maximizing key performance indicators, transhumanism urges us toward escaping the body altogether.

This computationalist eschatology provokes a gnawing insecurity: that no human mind can come close to mastering the range of knowledge that even second-rate search-engine indexes and simple chatbots can now summarize, thanks to AI. Empowered with foundation models (which can generate code, art, speech, and more), chatbots and robots seem poised to topple humans’ self-regard. Call it the Great Chain of Bing: a new hierarchy placing the computer over the coder, and the coder over the rest of humans, at the commanding heights of political, economic, and social organization.

While the appeal of a Great Chain of Being has slowly eroded over time, the Great Chain of Bing should appear ridiculous at its outset. Microsoft’s many missteps, from the annoying Clippy to force-fed adware on newer versions of Windows, leave us rightly skeptical of any metaphysical aspirations the company may have—even when its search engine becomes AI-powered. Even major AI achievements can shrink under scrutiny. There is no path from a machine’s mastery of the board game Go to a supreme military strategy—and researchers recently discovered a method to beat the best AI Go program. Chatbots generate plausible-sounding text, but in high-stakes areas, they will need intense scrutiny from experts for the foreseeable future. Given the recent rash of crashes in San Francisco, even self-driving cars seem to be far from a solved problem. 

Compile enough of these disappointments, and the acids of modernity, now directed at “the human,” can just as easily be flung at its would-be usurpers. Thus, while Kirsch calls for a renewed defense of the human in an era of anti- and transhumanism, it might be just as effective to deflate AI’s reputation and aspirations. The more parochial, venal, and destructive the enemies of the human are revealed to be, the more plausible humanism becomes. As Oz’s Dorothy might advise: pay attention to the men behind the curtain.


Speculating about the long-term future of humanity, OpenAI’s Sam Altman once blogged about “the merge.” “A popular topic in Silicon Valley is talking about what year humans and machines will merge (or, if not, what year humans will get surpassed by rapidly improving AI or a genetically enhanced species),” he wrote. “Most guesses seem to be between 2025 and 2075.” Altman writes of the merge with a mix of wonder and horror, delight and dread. He does not think the enlightened “spiritual machines” anticipated by Google’s Ray Kurzweil are inevitable. But he does believe the merge of human and machine is already underway and unstoppable:

We are already in the phase of co-evolution—the AIs affect, effect, and infect us, and then we improve the AI. We build more computing power and run the AI on it, and it figures out how to build even better chips. This probably cannot be stopped. As we have learned, scientific advancement eventually happens if the laws of physics do not prevent it.

But is this a story of progress, or one of domination? Interaction between machines and crowds is coordinated by platforms, as MIT economists Erik Brynjolfsson and Andrew McAfee have observed. Altman leads one of the most hyped ones. To the extent that CEOs, lawyers, hospital executives, and others assume that they must coordinate their activities by using large language models like the ones behind OpenAI’s ChatGPT, they will essentially be handing over information and power to a technology firm to decide on critical future developments in their industries. A narrative of inevitability about the “merge” serves Altman’s commercial interests, as does the tidal wave of AI hype now building on Elon Musk’s X, formerly known as Twitter. Start doubting this inevitability, and other horizons of technological development open up. 

Other daunting challenges appear when one looks behind the curtain of the wizards of AI. Chatbots have spewed all manner of false and defamatory content, in contexts ranging from eating disorder hotlines to courtrooms. What company, let alone person, would want to “merge” into that? Liability concerns are also real. First Amendment scholar Eugene Volokh has warned about the risk of “large libel models,” and the concern is not just hypothetical: a chatbot recently fabricated abuse allegations against a professor. 

Copyright lawsuits also loom, as artists protest wholesale appropriation of their work. If a company’s generative AI creates works that draw on existing art, and its creations are substantially similar to it, why shouldn’t that company be liable for copyright infringement? While intellectual-property lawyers scrutinize generative AI output, privacy lawyers are keen to challenge its inputs. Are personal emails, social-media posts on locked accounts, or other private or semi-private communications part of the text-training models? How much personal information is already in these massive databases? Who consented to having their information processed in this way, and by what authority?

Taken together, these issues are not merely engineering problems or a list of concerns easily translated into a compliance checklist. The rapid development of AI hangs in a legal balance now, as jurisdictions around the world decide just how much lawbreaking and social harm to tolerate from it. The European Union and Japan are already investigating generative AI.

This is the social context for the sudden rise in claims about sentient chatbots, AI existential risk, and the like. While big tech’s lobbyists and lawyers work to ensure favorable legal treatment, their propagandists try to convince the public that AI is a generalized technological revolution as fundamental as the shifts to fossil fuels and electricity (or, in the remarkable opinion of Google CEO Sundar Pichai, even more fundamental than those). Such claims should be received with a healthy degree of doubt. Here, credulity leads to deference, and deference to inadequate regulation. Lina M. Khan, the current chair of the U.S. Federal Trade Commission, has also noted this problem, and proposed extensive solutions. Regulatory agendas for generative AI have already been developed by NGOs like the Electronic Privacy Information Center. The European Union’s General Data Protection Regulation, Digital Markets Act, and Digital Services Act may also hammer firms that concentrate power or spew disinformation, even when they try to shift responsibility onto “experimental” AI. The more that rules of transparency, nondiscrimination, and public responsibility are imposed on firms like OpenAI, the less they can pass off their wares as the irresistible next stage of human evolution.

Kirsch’s learned exploration of the darker corners of posthumanism is both lucid and thought-provoking. He is right to observe that “secular reverence for humanity nurtured…humanistic culture,” and to indict the antihumanist worship of nature and the transhumanist worship of machines. But the path back toward accepting the well-being of human persons as the measure of all things may be less direct than he hopes. Rather than re-inflating “the human,” deflating the promise of the transhuman—particularly in its overhyped guise as “AI”—may be key. Skepticism has eroded the primacy of the human in many minds, but it should also discredit the wan, wacky, and weird visions of posthuman grace now hawked by so many tech propagandists.


This piece was published as part of a symposium about The Revolt Against Humanity. For more of the symposium, read:

  • Gilbert Meilaender on birth, the expression of hope that defies posthuman ideologies
  • John F. Haught on how human thought is integral to, not separate from, the arc of the universe 
  • Nolen Gertz on how both anti- and transhumanism avoid political and moral responsibility

To see the full collection, click here. To listen to an interview with Adam Kirsch on the Commonweal Podcast, click here.

Frank Pasquale is author of The Black Box Society and New Laws of Robotics, both from Harvard University Press, and a professor of law at Cornell University.



Published in the November 2023 issue.
© 2024 Commonweal Magazine. All rights reserved.