
The word “compulsion” can mean two different, if closely related, things: “the state of being compelled” or an “irresistible, persistent impulse.” The former suggests a force from outside that compels us; the latter an internal addiction or craving. The internet seems to blur the line between the two. Much of online life comes across as compulsory, at least to those of a certain age and station. One simply “has to” be online for any number of reasons: to make money, be informed, keep friends, find love. But once we’re there, life online quickly becomes compulsive. We lurk and loiter on social-media platforms designed to addict, mainlining content we don’t, upon reflection, actually value; advertisers compete for and scatter our attention; low-quality, slanted information consumes us, inflames our emotions, and distorts our thinking. The upshot is that the forum for public and private life we are all but compelled to use is, at least in some ways, against us.

To put it another way—the way philosopher Justin E. H. Smith puts it—“the internet is anti-human.” Smith’s book, The Internet Is Not What You Think It Is (Princeton University Press, $24.95, 208 pp.), draws on the history of philosophy and technology to recontextualize debates over the internet and related technologies like artificial intelligence, and to try to “figure out what went wrong.” Smith’s method is genealogical—he thinks that we don’t really understand the nature of the internet because we don’t really understand where it comes from. He means this more in terms of the idea of the internet than the technology itself, but he also surveys a variety of technologies, both imagined and real, that prefigure the internet. (My favorite is an “infinite book wheel” from 1588.)

Smith’s book, for better and worse, is short on solutions, but it does suggest that before we can improve our relationship with technology, we have to think differently about it. To take one example, the idea of high-powered computing systems that can take over some of the operations of human thought goes back at least to the early modern period. Smith cites the ideas of the German polymath G. W. Leibniz, who invented, in addition to calculus, an early calculating machine. Unlike many of today’s technologists, Leibniz believed that human consciousness could never be reduced to any mechanical process, no matter how sophisticated; for him, the analogy between artificial and human intelligence breaks down. For Smith, part of what’s gone wrong is that Leibniz’s conception of computers as tools to be “subordinated to our own rational decisions” has given way to a conception of machines as “rivals or equals,” capable of the same kind of thought as we are. This latter belief is not only a “science fiction” that neglects differences between minds and machines; it tacitly underwrites a regime that treats minds as machines—whether through advertising that attempts to mold our consumption patterns or language-processing software that attempts to identify our innermost feelings.


The Twittering Machine (Verso Books, $10.78, 256 pp.) by Richard Seymour delves deeper into the effects of this regime on the human mind, including analyses of phenomena like addiction, celebrity, and trolling. Like Smith, Seymour refuses to blame technology itself for our predicament, but where Smith’s lens is historical, Seymour’s is psychoanalytic. “If we’ve found ourselves addicted to social media in spite of or because of its frequent nastiness,” he writes, “then there is something in us that is waiting to be addicted.” He sees the internet as concentrating and “collectivizing” pathologies in our culture—narcissism, gossip, lies, bullying, vindictiveness, and competitiveness—that long predate it.

Of particular interest is Seymour’s account of what’s been called a “post-truth” society. He doesn’t think our epistemological ills are caused mainly by lies and misinformation that spread easily online, but rather by the “rule of brute facts” itself. We’ve elevated mere information over meaning, “technique” over “truth.” The result is a cascade of information that serves only to keep users stimulated and producing their own content in response. It leaves us less knowledgeable and more detached from any meaningful reality, whether or not the information happens to be true. “The problem is not the lies,” Seymour writes, “but a crash in meaning.”

Can meaning be recovered in the face of the twittering machine? Matthew B. Crawford’s Why We Drive (Custom House, $14.39, 368 pp.) suggests that we might find a site of resistance in an older machine: the car. Crawford, a political philosopher and motorcycle restorer, is, to put it mildly, not a fan of autonomous vehicles. To him, they are the latest attempt to turn a place of attention and skill into yet another occasion for passive consumption and compulsion. In his previous book, The World Beyond Your Head, Crawford argued against over-designed, over-automated environments and the false, abstracted conception of human freedom behind them, which prizes choices made in complete independence of one’s surroundings. For Crawford, the limitations and feedback of the physical world are not constraints on freedom but conditions for it—as are embodiment, attention, and skill. “To drive,” he writes, “is to exercise one’s skill at being free.”

With their highly mediated and mediatized cockpits, contemporary cars have already gone a long way toward disconnecting drivers from the road. Self-driving cars, according to Crawford, will complete the separation and leave us subject to new forms of surveillance capitalism as we are ferried to and fro by opaque, unknowable systems that reduce us to compliant consumers. The potential gains in safety and reductions in emissions are, for Crawford, no justification for further deference to machines and further loss of sovereignty.

Crawford suggests that what is being lost is not just an opportunity to develop competence or exercise freedom, but also a chance to contribute to the common good. At the end of the book, he cites a homily in which Pope Francis praised the citizens of Rome in part for the way “they confront its traffic with care and prudence” and in so doing “express concretely their love for the city” as “artisans of the common good.” By contrast, Silicon Valley, Crawford writes, aims at a common good “achieved by engineering herd behavior without our awareness, in such a way that prudence and other traits of character are rendered moot.” As tech takes the wheel from us in more and more areas of our collective life, one hopes it’s not too late to turn this rival back into a tool.

Published in the December 2022 issue.

Alexander Stern is Commonweal’s features editor.
