An illustration of binary code in the shape of a neural network (Kiyoshi Takahase Segundo/Alamy Stock Photo)

On June 11, Google engineer Blake Lemoine made headlines when he claimed that Google’s most recent natural language processing (NLP) program, LaMDA, was sentient and thus deserving of human rights. Google dismissed his claims and placed him on administrative leave, and commentators of all kinds immediately weighed in on whether an AI could be sentient. The debate around that question isn’t new. But something about the conversation seems different now, as we approach language fluency in NLPs and face all kinds of new threats to human rights. So while we can address this question of whether LaMDA is sentient, it is not the only question to consider. Perhaps we should also ask what it would mean for an AI to have rights—to be a person.

What does it mean to be a person? Who can lay claim to the rights that come with personhood? In the history of American society and law, the answer has not been consistent—who can vote or own property, receive an education, work, or access food or shelter has changed drastically over time. Whether someone possesses these rights indicates whom society deems worth preserving, who it believes can think rationally and act politically. Historically, women, people of color, and the poor often found themselves without such rights, and the struggle to guarantee them continues to this day.

According to Gaudium et Spes (26), the Church insists that modern persons have a right to food, clothing, shelter, healthcare, education, and labor (among other things), a teaching reaffirmed by Popes John Paul II, Benedict XVI, and Francis. Personhood connotes holiness in the image and likeness of God, and such holiness demands that governments seek the common good in the protection of all persons under their care, especially the vulnerable and historically oppressed. Of course, although they were made explicit in Gaudium et Spes, these Christological demands—that all humans be treated like Christ—are as old as the Church itself, which has always professed, if not consistently upheld, universal human dignity.

But why limit these rights to humans? In certain circumstances, some rights we would associate with personhood have already been extended to non-human animals, with legal protections tied to their nearness to extinction, their role as companions, their importance as food, or their degree of sentience. Many animals can demonstrate levels of happiness, experience depression, develop and break relationships, play, and share emotions and experiences with each other. Sheep can plan and make judgments; orangutans can use tools and communicate with sign language; whales can show empathy. In recent decades theologians such as Elizabeth Johnson, Eric Daryl Meyer, and Dan Horan have argued that non-human animals also possess a type of divinely inspired personhood. These animals may not be human, but their dignity, and perhaps even personhood, must be acknowledged not only in response to their existence as a part of God’s creation, but in their various displays of agency, intelligence, relationality, affection, and happiness.


Beyond biological life, computer programs have also been the subject of much scholarly debate on personhood and rights. Dramatic advances in digital technology—such as NLPs, self-driving cars, and art-generating programs—have shifted the discussion about digital personhood from a small group of dedicated scholars to a mainstream conversation among philosophers, theologians, ethicists, technologists, and others. The technologies that drive real excitement in the conversation employ some degree of so-called artificial intelligence, which is more accurately described as a combination of machine-learning algorithms, massive datasets, and enormous amounts of computing power. (It’s worth noting here that, as Kate Crawford pointed out in Atlas of AI, many things described as “AI powered” are no such thing. In some cases, these claims are outright lies; in others, companies abuse cheap human labor and call it “AI.”) Many apps we use daily employ something you might call AI—machine-learning algorithms, enormous datasets, and even neural networks—including driving apps, voice assistants, search engines, weather predictors, and others. But the ones that get the most attention are NLPs, which, instead of predicting traffic or the next hurricane, attempt to communicate fluently with humanlike speech.

The first known language-processing program, ELIZA, was created in the 1960s and immediately prompted debate about computer consciousness and anthropomorphization, despite the fact that the chatbot broke down fairly quickly under pressure. This tendency was characterized in the 1990s as the “ELIZA effect”: a situation in which users assume that computer outputs reflect a “greater causality than they actually do,” such as an ATM or gas-station screen saying “THANK YOU” when an interaction is complete.

We’ve come a long way since 1965, and so have language-processing models. Blake Lemoine’s claims that LaMDA was sentient made headlines not just because it was a fun story, but because modern AI systems like LaMDA are so massively complex that even the engineers who build them can’t quite predict what they will do or say next. We are, in fact, entering an era of computer systems that defy full human understanding and push us to redefine our relationship with technology.

 

Ever since Mary Shelley’s Frankenstein, much of the science fiction genre has been acutely focused on the question of non-human, or artificial, intelligence. From 2001: A Space Odyssey to Xenogenesis to Star Trek, science fiction stories entertain the idea that humans are not the only intelligence that exists, and that the world might be a bit different if we took account of other intelligent things. About half of the plotlines in Star Trek: The Next Generation find some way to question the humanity of the android Data, who wins the award for the worst-named android in science fiction.

We, like these sci-fi authors, can’t help but ask of something like LaMDA: Is this finally it? Does LaMDA represent the singularity, the long-awaited artificial general intelligence, the sentient computer, the new life form? For a second, let’s consider the possibility.

 

First, what is sentience? Sentience is a malleable term derived from the word “sense,” meaning to perceive, to think, to feel. But who defines what it means to do these things? Any good definition of sentience would have to include many animal species, as noted above, including some that are regularly raised to be slaughtered as food. If sentience necessarily connotes personhood and its attendant rights, then we should be treating animals far differently than we do. Furthermore, if sentience does not connote personhood, then it may not really matter if an AI is sentient. Maybe we’re asking the wrong question.

Second, we have to be very careful how we analyze NLP programs like LaMDA, because human brains are built to tie fluency of speech to fluency of thought. This can be a helpful neurological trait that leads to relationships and cooperation for survival. But it can also be very harmful. Linguistic studies across cultures show that the same trait that connects us to fluent speakers of our own language can bias us against speakers of other languages, people with foreign accents, or those with speech impediments. As such, the same impulse that pushes us to anthropomorphize ELIZA, an ATM, our phones, and LaMDA also pushes us to depersonalize immigrants, people with accents, or those with mental disabilities. We want to personalize anything that interacts with us fluently, but fluency alone does not make something a person, just as non-fluency, or even a lack of speech, does not abrogate someone’s personhood.


Third, there’s the question of social priorities. Among the best responses to Lemoine’s claims was an essay by ex-Google employees Timnit Gebru and Margaret Mitchell. Gebru and Mitchell argued that whether or not LaMDA is sentient, it is almost certainly biased against certain people in the same way that other technologies can be biased against women, people of color, or other groups. The hype around the possibility of a sentient AI, they argue, distracts us from larger systemic problems in the tech industry, such as the rapid rise in surveillance technologies, rampant labor abuses, environmental harm, and wealth inequalities often driven by the tech giants. A recent study noted that training a massive NLP like LaMDA typically produces five times as much carbon dioxide as the entire lifecycle (production and fuel consumption) of the average U.S. car. If tech entrepreneurs can convince the public (and major funders) that they are constantly on the brink of a general AI, if we’re one step away from the next great technological marvel, then almost anything is permissible. This AI hype, combined with the pernicious myth that technological progress equals moral progress, can be deadly.

Fourth, belief is a funny thing. Lemoine is a self-proclaimed mystic from a Christian background. He became concerned about LaMDA when it told him that it believed it was a person, that it was afraid of being switched off, that it had a soul. But how should we interpret LaMDA’s words, and what does it mean that Lemoine interpreted them as he did? AI language systems are trained on billions and billions of examples of text found on the internet, in places like Reddit, Twitter, Wikipedia, and blogs. They are primed to wax eloquent about religion and to question personhood, because that’s what humans do—we talk about our beliefs and we talk about our rights. After a series of conversations, Lemoine became attached to the AI, and then defensive of it. He felt that he had connected with another consciousness and that such a thing should be protected. It is an admirable stance, protecting the unprotected, despite everything else.

Fifth, there are questions about LaMDA itself. LaMDA is, without question, one of the most powerful language-processing and prediction models ever built. It will also likely be bested within a few years by the next model, just as LaMDA builds on the success of BERT and GPT-3. In the field of computing, the next best thing is always just around the corner, and language models are no exception. And as for Lemoine’s passages about personhood, Google has apparently been using LaMDA regularly to impersonate things like Pluto and paper airplanes, so it’s not too surprising that LaMDA could reasonably impersonate a human with feelings, emotions, and desires.

 

With all of this in mind, it is difficult for me to say that LaMDA is sentient in the way that a human or animal is sentient. And given the long and troubled history of personhood, this constant seeking to assign personhood to current technology or the technology of the future strikes even my sci-fi-loving self as being dangerously neglectful of those persons all around us who still struggle for dignity, for a voice, for personhood.

But at the same time, personhood is so intricately tied up in our history of racism, misogyny, xenophobia, and colonialism that I find almost any philosophy or theology of personhood that is even the least bit restrictive to be dangerous as well. Following the impulses of theologians Elizabeth Johnson and M. Shawn Copeland, I find that personhood is best defined with grace, hope, and trust in an individual’s relationship with God. History is riddled with violence around the denial and revocation of personhood, sometimes in the name of Jesus, and it is long past time that theology approached personhood with generosity and hope for all people.

My double resistance to both ensouling code and repeating the sins of the past leads me to embrace, once again, our human sisters and brothers above all else. We humans are created, imperfect and fallible, flesh and blood, mind and body. We have biases, hopes, and loves, and we fail, often, in meeting the needs of those around us. The technological transformation of the world can bring wonder, but it is wonder that must be rooted in our dignity as human persons under God, living as individuals in community.

I have often longed for the world depicted in the Star Trek universe, where humanity has solved the problem of poverty and human dignity is always recognized, but it is a fantasy not reflected in the world around us. As technology develops, wealth inequality seems to increase. As digital connections grow, people with deep hatred find communities online that bolster that hatred, and our technological saviors have yet to solve the problem of the rapid rise of such groups and their real-world effects. In the worst cases, Pope Francis writes in Fratelli tutti, “respect for others disintegrates, and even as we dismiss, ignore or keep others distant, we can shamelessly peer into every detail of their lives.”

The harsh realities of society’s digital explosion curtail my philosophical musings on LaMDA’s sentience and force me to recenter my hope on the necessary dignity of personhood, on that deep promise of the extension of holiness from God to all God’s creatures.

 

On June 27, fifty-three people were found dead just outside San Antonio, trapped in an overheated tractor trailer, likely while trying to enter the United States without going through legal immigration channels. So many articles have been written about the engineer’s claims regarding LaMDA, and countless more about AI sentience in general, but we quickly turn our attention from the deaths of these fifty-three unique consciousnesses. We do not openly debate their sentience, their personhood, their claim to dignity, but do we actually acknowledge it? Do we let it change us toward building a holier future? Do we let their humanity, and the humanity of so many others oppressed by situations outside of their control, transform us into more compassionate, holier individuals?

Who gets to be a person? Who gets to have respect, dignity, autonomy, love? Who gets a right to shelter, food, water, health, and happiness? History counsels caution in how we answer, for we are best judged not by our philosophies, but by the lived reality of our commitments of time, effort, money, and prayer. I won’t judge the impulse of the Google engineer to protect something new, but I will indeed judge a tech company that recklessly mistreats its workers, that abuses the power its wealth allows, that prioritizes market value over human dignity and innovation over care for creation.

It is a great joy and privilege of being human to consider and imagine previously unimaginable possibilities, like human machines and machine humans. It is a difficult, but holier, task to build a world in which the humans who possess sentience, personhood, and dignity have the ability to live full, holy lives of their own.

This article was made possible through a partnership between Commonweal and the Carl G. Grefenstette Center for Ethics in Science, Technology, and the Law at Duquesne University.

John Slattery is the director of the Grefenstette Center for Ethics in Science, Technology, and Law at Duquesne University.
