“A Paul Klee–style painting of an FBI agent playing pinball” (Created by the author with DALL·E, an AI system by OpenAI)

In recent months, artificial intelligence developers have released tools to the general public that have demonstrated the capacity of AI to mimic and perhaps, in some cases, even surpass human creative capacities. The technology, known by the general term “generative AI,” is trained on large datasets consisting of examples of images or writing. It can then spit out images conforming to a specific description, pieces of writing in a user-specified genre, or convincing responses to a series of questions.

The results can be quite startling. When I asked DALL-E 2, the image generator, to produce an image of “an FBI agent playing pinball in the style of Paul Klee,” it produced something that felt like a Klee to the untrained eye and wouldn’t look out of place in an art gallery. I had the uncomfortable experience of kind of liking it. And the text generator ChatGPT (Generative Pre-trained Transformer) produced a plausible, if juvenile, draft of a poem about the risks of AI: “Once we create it, we can’t control its mind, / It could turn against us, and be unkind.” Others have used AI to write code, play games, and even diagnose maladies.

Predictably, the newly released technologies have generated their share of online excitement, prognostication, and handwringing. Some people are understandably concerned that these technologies will put an already beleaguered class of “creatives” out of work. AI is already enlisted to write financial news or weather reports. With a few human interventions, the current model is already capable of producing coherent, if insipid, op-eds and workmanlike explainers. Some genre fiction writers are even using it to fill in details and generate ideas, by asking how it might continue a story that’s hit a wall. And an AI-generated image recently won first prize at a Colorado art fair.

Others see something even more sinister in the technology: the prospect of what’s called artificial general intelligence. More advanced versions of this technology, the worry goes, will develop something like a mind of their own and start to go well beyond the prompts fed to them by human users, with unpredictable and possibly dangerous results.

But, as yet, what’s striking about the products of GPT and DALL-E 2 is their evident mindlessness. Playing around with GPT, I was reminded of a scene from American Psycho where Patrick Bateman (Christian Bale)—a serial-killer finance bro blank behind the eyes who just wants to fit in—monologues platitudes about how to solve world problems: “Most importantly, we have to promote general social concern and less materialism in young people.” Like Bateman, GPT is programmed to respond in the most conformist, inoffensive way possible. To the extent that it has a personality or style, it is the perfect average of the styles of the corpus it was trained on, which, judging from the results, seems to have included a fair amount of HR-speak and legal boilerplate.


Even when it is asked to write in the style of someone else, GPT’s fundamental lack of style shines through. I asked it to “write a tweet in the style of Donald Trump about why up is down” and it produced this: “Up is down, folks! Believe me, I know what I’m talking about. The fake news media doesn’t want you to know this, but it’s true. Up is down, and down is up. Trust me, it’s all part of my plan to make America great again!” Even here, the writing has the quality of blanks being filled with probabilistic precision, a kind of reverse Mad Libs where words are chosen at the opposite of random—with too much, rather than too little, knowledge of the context. The overall result is a translation of Trump into the language of an overcautious, overliteral, overeager teacher’s pet.

In his magnum opus Being and Time, the philosopher Martin Heidegger wrote about “das Man”—man being not the German word for “man,” but the impersonal pronoun “one” or “they,” as in “it’s just what one does” or “that’s what they say.” In Heidegger’s analysis of human existence, “the they” represents the average, everyday background mode of collective being that we all depend on to make ourselves understood, to make small talk, to get by. But there is a risk of falling completely into the everyday, getting lost in “the they” and living effectively unfree lives, where people just do “what one does” and their “ownmost possibilities” are never taken up authentically, to say nothing of being realized.

The language of “the they” is what Heidegger calls “idle talk” (Gerede). This is language not as it belongs to any individual expressing herself, but the average, everyday sentiment that circulates more or less thoughtlessly—that belongs to everyone and no one. It is the background talk against which genuinely original expression can emerge. “Idle talk” is not a problem in itself—it’s part of how language works. Life would be unbearable and exhausting if we had to express ourselves at every chance with fresh insight and constantly come to grips with the utterly original outpourings of others. The average expression and intelligibility of idle talk is a necessary default.

The trouble arises when we engage in nothing but idle talk and confuse it for genuine understanding. Think, for example, of a pedantic art historian who has so thoroughly absorbed the conventional discourse about the style of Paul Klee, say, that he can’t really see the paintings anymore. For him, any given Klee painting is just grist for encyclopedic chatter about expressionism and Bauhaus. While seeming to give us access to the objects it is apparently about, idle chatter actually closes them off, sealing them away under interpretations that are learned and repeated by “thought leaders” and thought followers. The more it circulates, the further it departs from real objects and the more it blocks off the possibility of new interpretations. This language, of course, evolves, and there are dissenting views, but “the discourse,” as it’s sometimes called on Twitter, is all channeled down predictable, almost automated avenues. New variations on the same theme are developed; new mash-ups and remixes proliferate; and new objects are subjected to the near-industrial cycle of interpretation, dissemination, and reaction. But this production is involuted and self-referential: it is driven by motivations and incentives internal to “the discourse” and increasingly disconnected from the outside world.

Social media did not create but has accelerated and refined this process. To the extent that they appear at all, real events appear on Twitter as almost an embarrassment to be quickly covered over by the ready-made interpretations of a given political or cultural framework. What matters is not coming to grips with the world in all its ambiguity and with all its complicated, often contradictory features, but rather fitting the new phenomena into an existing discourse. New events seem to exist only to provide fresh fodder for interpretation that sets upon them like an algorithm on new inputs. The result is language that is more about itself than about the world.


In this respect, GPT may be not so much a revolutionary leap forward as another step down a long, well-trodden path. Insofar as it is used for cultural production and commentary, it will streamline already well-established tendencies toward imitation, repetition, and pastiche. In The Player, Robert Altman’s send-up of Hollywood superficiality, there is a running joke about the derivative pitches that the producer Griffin Mill fields from writers: “It’s kind of like a Gods Must Be Crazy except the coke bottle is an actress...it’s Out of Africa meets Pretty Woman”; “it’s kind of a psychic political thriller comedy with a heart...not unlike Ghost meets Manchurian Candidate.” One of the best uses of DALL-E 2 (whose name, a combination of Pixar’s WALL-E and Salvador Dalí, already evokes a movie sequel) is to produce amusing and sometimes interesting mash-ups of style and content: “a Klimt-style painting of the JFK assassination,” “a painting in the style of Roy Lichtenstein of a couple on their smartphones,” “a portrait of Super Mario by El Greco.” What it’s good at, in other words, is what our culture already does: inane recombination that generates novelty out of what already exists.

Of course, that it is unoriginal is no criticism of AI. It seems likely, however, that it will contribute to the proliferation of cultural content designed not to be original or even to say anything, but to produce, like a drug, the same experiences over and over again, to call them up on demand. This is what our culture has already been up to for some time, as many commentators—most recently, Ross Douthat in The Decadent Society—have noted. The culture repeats itself over and over again, relying on sequels, reboots, and remixes rather than genuine creativity to produce reliably profit-making content that can be delivered with algorithmic precision into people’s feeds. GPT may help further mechanize the production side of our culture as much as the delivery side has already been mechanized. The net effect, Douthat argues, is a kind of cultural doom loop—stagnation, where, as Antonio Gramsci put it, “the old is dying and the new cannot be born.”



In a later essay, “The Question Concerning Technology,” Heidegger adopted a term to describe what he took to be the essence of modern technology: Gestell, which is variously translated as “enframing” or “positionality.” He uses the word to refer not to any particular product of technology, but to the way technology overall imposes a particular order and way of being on things—how it “reveals” or “discloses” the things that it comes into contact with. In a famous example, Heidegger counterposes a hydroelectric dam on the Rhine with a much earlier form of technology, a footbridge crossing the river:

The hydroelectric plant is not built into the Rhine River as was the old wooden bridge that joined bank with bank for hundreds of years. Rather, the river is dammed up into the power plant. What the river is now, namely, a water-power supplier, derives from the essence of the power station.

Whereas the footbridge allows the river to appear as it is, the power plant transforms the nature of the river into a resource. Under the possibility that it will be “set upon” by modern technology, Heidegger writes, the character of things changes: “everywhere everything is ordered to stand by, to be immediately on hand, indeed to stand there just so that it may be on call for a further ordering.” For generative AI, the images and text of human culture, all our efforts to communicate and express ourselves, indeed all of searchable human history, may soon lie there on call for a further ordering.

To enframing, Heidegger counterposes the way of revealing unique to poetry, understood broadly as the articulation of things in language. Where modern technology orders things into resources, poiesis, in Heidegger’s jargon, “lets what presences come forth into appearance.” The paradigmatic case of this kind of revealing is the giving of names. Names don’t dominate things; they give things particular contours which make them available for further articulation. They allow them to come forth—to emerge from the blurred, undifferentiated background of existence with definite features and in definite relationship to other things, but with a certain incompleteness, ambiguity, and mystery that solicits further, indeed never-ending, attention and interpretation.

For Heidegger, what is to be feared is not technology itself, but rather its mode of disclosing the world. The threat is not mainly that AI may take over certain activities from human beings, but that we already regard these activities as functions akin to the supply of power. “Where enframing reigns,” Heidegger writes, “there is danger in the highest sense.” Then he quotes the poet Friedrich Hölderlin: “But where danger is, grows / The saving power also.” To the extent that the shock of this technology might direct us back toward different ways of seeing, we may yet be saved.

Published in the February 2023 issue.

Alexander Stern is Commonweal’s features editor.

© 2024 Commonweal Magazine. All rights reserved.