In recent months, artificial intelligence developers have released tools to the general public that demonstrate AI’s capacity to mimic, and perhaps in some cases even surpass, human creativity. The technology, known by the general term “generative AI,” is trained on large datasets of images or writing. It can then spit out images conforming to a specific description, pieces of writing in a user-specified genre, or convincing responses to a series of questions.
The results can be quite startling. When I asked DALL-E 2, the image generator, to produce an image [1] of “an FBI agent playing pinball in the style of Paul Klee,” it produced something that felt like a Klee to the untrained eye and wouldn’t look out of place in an art gallery. I had the uncomfortable experience of kind of liking it. And the text generator ChatGPT (Generative Pre-trained Transformer) produced a plausible, if juvenile, draft of a poem about the risks of AI: “Once we create it, we can’t control its mind, / It could turn against us, and be unkind.” Others have used AI to write code, play games, and even diagnose maladies.
Predictably, the newly released technologies have generated their share of online excitement, prognostication, and handwringing. Some people are understandably concerned that these technologies will put an already beleaguered class of “creatives” out of work. AI is already enlisted to write financial news and weather reports. With a few human interventions, current models can produce coherent, if insipid, op-eds and workmanlike explainers. Some genre-fiction writers are even using AI to fill in details and generate ideas [2], asking how it might continue a story that’s hit a wall. And an AI-generated image recently won first prize [3] in the fine-arts competition at the Colorado State Fair.
Others see something even more sinister in the technology: the prospect of what’s called artificial general intelligence. More advanced versions of this technology, the worry goes, will develop something like a mind of their own and start to go well beyond the prompts fed to them by human users, with unpredictable and possibly dangerous results.
But, as yet, what’s striking about the products of ChatGPT and DALL-E 2 is their evident mindlessness. Playing around with ChatGPT, I was reminded of a scene from American Psycho in which Patrick Bateman (Christian Bale), a serial-killer finance bro who is blank behind the eyes and just wants to fit in, monologues platitudes about how to solve world problems: “Most importantly, we have to promote general social concern and less materialism in young people.” Like Bateman, ChatGPT is programmed to respond in the most conformist, inoffensive way possible. To the extent that it has a personality or style, it is the perfect average of the styles of the corpus it was trained on, which, judging from the results, seems to have included a fair amount of HR-speak and legal boilerplate.