In the view of many scientists, Artificial Intelligence (AI) isn’t living up to the hype of its proponents. We don’t yet have safe driverless cars—and we’re not likely to in the near future. Nor are robots about to take on all our domestic drudgery so that we can devote more time to leisure. On the brighter side, robots are also not about to take over the world and turn humans into slaves the way they do in the movies.

Nevertheless, there is real cause for concern about the impact AI is already having on us. As Gary Marcus and Ernest Davis write in their book, Rebooting AI: Building Artificial Intelligence We Can Trust, “the AI we have now simply can’t be trusted.” In their view, the more authority we prematurely turn over to current machine systems, the more worried we should be. “Some glitches are mild, like an Alexa that randomly giggles (or wakes you in the middle of the night, as happened to one of us), or an iPhone that autocorrects what was meant as ‘Happy Birthday, dear Theodore’ into ‘Happy Birthday, dead Theodore,’” they write. “But others—like algorithms that promote fake news or bias against job applicants—can be serious problems.”

Marcus and Davis cite a report by the AI Now Institute detailing AI problems in many different domains, including Medicaid-eligibility determination, jail-term sentencing, and teacher evaluations:

Flash crashes on Wall Street have caused temporary stock market drops, and there have been frightening privacy invasions (like the time an Alexa recorded a conversation and inadvertently sent it to a random person on the owner’s contact list); and multiple automobile crashes, some fatal. We wouldn’t be surprised to see a major AI-driven malfunction in an electrical grid. If this occurs in the heat of summer or the dead of winter, a large number of people could die.

The computer scientist Jaron Lanier has cited the darker aspects of AI as it has been exploited by social-media giants like Facebook and Google, where he used to work. In Lanier’s view, AI-driven social-media platforms promote factionalism and division among users, as starkly demonstrated in the 2016 and 2020 elections, when Russian hackers created fake social-media accounts to drive American voters toward Donald Trump. As Lanier writes in his book, Ten Arguments for Deleting Your Social Media Accounts Right Now, AI-driven social media are designed to commandeer the user’s attention and invade her privacy, to overwhelm her with content that has not been fact-checked or vetted. In fact, Lanier concludes, they are designed to “turn people into assholes.”

As Brooklyn Law School professor and Commonweal contributor Frank Pasquale points out in his book, The Black Box Society: The Secret Algorithms That Control Money and Information, the loss of individual privacy is also alarming. And while powerful businesses, financial institutions, and government agencies hide their actions behind nondisclosure agreements, “proprietary methods,” and gag rules, the lives of ordinary consumers are increasingly open books to them. “Everything we do online is recorded,” Pasquale writes:

The only questions left are to whom the data will be available, and for how long. Anonymizing software may shield us for a little while, but who knows whether trying to hide isn’t itself the ultimate red flag for watchful authorities? Surveillance cameras, data brokers, sensor networks, and “supercookies” record how fast we drive, what pills we take, what books we read, what websites we visit. The law, so aggressively protective of secrecy in the world of commerce, is increasingly silent when it comes to the privacy of persons.

Meanwhile, as Lanier notes, these big tech companies are publicly committed to an extravagant AI “race” that they often prioritize above all else. Lanier thinks this race is insane. “We forget that AI is a story we computer scientists made up to help us get funding once upon a time, back when we depended on grants from government agencies. It was pragmatic theater. But now AI has become a fiction that has overtaken its authors.”

In Marcus and Davis’s view, the entire field needs to refocus its energy on making AI more responsive to common sense. And to do this will require a complete rethinking of how we program machines.

 

“The ability to conceive of one’s own intent and then use it as a piece of evidence in causal reasoning is a level of self-awareness (if not consciousness) that no machine I know of has achieved,” writes Judea Pearl, a leading AI proponent who has spent his entire career researching machine intelligence. “I would like to be able to lead a machine into temptation and have it say, ‘No.’” In Pearl’s view, current computers don’t really constitute artificial intelligence. They simply constitute the ground level of what can and likely will lead to true artificial intelligence. Having an app that makes your life much easier is not the same thing as having a conversation with a machine that can reason and respond to you like another human being.

In The Book of Why: The New Science of Cause and Effect, co-written with Dana Mackenzie, Pearl lays out the challenges that need to be met in order to produce machines that can think for themselves. Current AI systems can scan for regularities and patterns in swaths of data faster than any human. They can be taught to beat champion chess and Go players. According to an article in Science, there is now a computer that can even beat humans at multiplayer games of poker. But these are all narrowly defined tasks; they do not require what Pearl means by thinking for oneself. In his view, machines that use data have yet to learn how to “play” with it. To think for themselves, they would need to be able to determine how to make use of data to answer causal questions. Even more crucially, they would need to learn how to ask counterfactual questions about how the same data could be used differently. In short, they would have to learn to ask a question that comes naturally to every three-year-old child: “Why?”

“To me, a strong AI should be a machine that can reflect on its actions and learn from past mistakes,” Pearl writes. “It should be able to understand the statement ‘I should have acted differently,’ whether it is told as much by a human or arrives at that conclusion itself.” Pearl builds his approach around what he calls a three-level “Ladder of Causation,” at the pinnacle of which stand humans, the only species able to think in truly causal terms, to posit counterfactuals (“What would have happened if...?”).
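
Pearl’s three rungs can be made concrete with a toy calculation. The sketch below is only an illustration under invented assumptions (a three-variable linear model with made-up coefficients; it is not code from Pearl or Mackenzie): the same simulated data answer an observational question (rung one), an interventional “do” question (rung two), and a counterfactual question about a single individual (rung three).

```python
# Toy structural causal model, invented for illustration:
#     Z = u_z            (confounder, e.g. "season")
#     X = 2*Z + u_x      ("treatment")
#     Y = 3*X + Z + u_y  ("outcome"); the true causal effect of X on Y is 3
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
u_z, u_x, u_y = rng.normal(size=(3, n))

# Rung 1: association. Just observe the world the model generates.
Z = u_z
X = 2 * Z + u_x
Y = 3 * X + Z + u_y
# The observational slope of Y on X is biased by the confounder Z (about 3.4, not 3).
obs_slope = np.cov(X, Y)[0, 1] / np.var(X)

# Rung 2: intervention. do(X = x) cuts the arrow from Z into X.
def do_x(x):
    """Average outcome under the intervention do(X = x)."""
    return np.mean(3 * x + Z + u_y)

causal_effect = do_x(1.0) - do_x(0.0)   # recovers the true effect, 3

# Rung 3: counterfactual. For one observed individual, keep their particular
# background conditions (Z[i], u_y[i]) and replay the model with a different X.
i = 0
y_factual = Y[i]
x_cf = X[i] + 1.0   # "what if this individual's X had been one unit higher?"
y_counterfactual = 3 * x_cf + Z[i] + u_y[i]

print(f"observational slope  : {obs_slope:.2f}")
print(f"interventional effect: {causal_effect:.2f}")
print(f"factual Y = {y_factual:.2f}, counterfactual Y = {y_counterfactual:.2f}")
```

The contrast is the point: the observed slope is biased by the confounder, the intervention recovers the true effect, and the counterfactual requires holding on to an individual’s particular circumstances, something pattern-matching over data alone never does.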

But then a further question arises: Would such artificial intelligence be conscious the way we are? Or would it simply be a more advanced form of “smart” machine that exists purely to serve humans? There is reason for skepticism. As philosopher David Chalmers told Prashanth Ramakrishna in a New York Times interview in 2019, intelligence does not necessarily imply subjective consciousness:

Intelligence is a matter of the behavioral capacities of these systems: what they can do, what outputs they can produce given their inputs. When it comes to intelligence, the central question is, given some problems and goals, can you come up with the right means to your ends? If you can, that is the hallmark of intelligence. Consciousness is more a matter of subjective experience. You and I have intelligence, but we also have subjectivity; it feels like something on the inside when we have experiences. That subjectivity—consciousness—is what makes our lives meaningful. It’s also what gives us moral standing as human beings.

In Chalmers’s view, trying to prove that machines have achieved consciousness would not be easy. “Maybe an A.I. system that could describe its own conscious states to me, saying, ‘I’m feeling pain right now. I’m having this experience of hurt or happiness or sadness’ would count for more. Maybe what would count for the most is [its] feeling some puzzlement at its mental state: ‘I know objectively that I’m just a collection of silicon circuits, but from the inside I feel like so much more.’”

For his part, Pearl doesn’t address this philosophical question of consciousness directly. He seems to assume that if AI were strong enough in its causal thinking, there would be no reason to believe it wouldn’t also be conscious in Chalmers’s sense. Many philosophers and theologians, as well as many other AI scientists, would disagree. Consciousness won’t happen spontaneously or as emergent behavior, the computer scientist Ernest Davis told me in an email:

It might be possible for humans to develop systems that would tend in that direction. That, I should say, would be unwise, not so much because of the remote possibility that they would be malevolent and powerful, but just because they would be hard to predict…. The scenario in the movie Ex Machina, where the malevolent robot murders her creator in order to escape her confinement, is extremely unlikely unless you deliberately set out to do that. (It’s also far beyond current technology.)

What would be much more likely, in Davis’s view, is an AI whose behavior seems intermittently strange and pointless, and therefore occasionally and randomly destructive. Currently, AI programs serve as tools, and a good tool is one you can be confident in.

What’s tantalizing about Pearl’s work is the possibility that the causal thinking he has marked out as the path toward achieving strong AI might bring with it a haunting suggestion of how our own human consciousness evolved. Pearl, who was raised an observant Jew in Israel, believes that at some point in human evolution humans realized that the world is made up not only of facts (data), but also of a network of cause-effect relationships. In fact, these causal explanations, not facts, make up the bulk of human knowledge. Finally, in Pearl’s view, the transition from animal processors of raw data to composers of explanations was not gradual. It was a leap that required a push.

For Pearl, the story of Adam and Eve and their punishment in the Book of Genesis represents the emergence of our unique ability to think causally. “We knew, however, that the author who choreographed the story of Genesis struggled to answer the most pressing philosophical questions of his time. We likewise suspected that this story bore the cultural footprints of the actual process by which Homo sapiens gained dominion over our planet. What, then, was the sequence of steps in this speedy, super-evolutionary process?” Pearl believes it was a rapid ascent through the Ladder of Causation: from observation, to intervention, to counterfactual thinking.

Scripture might have something to tell us about the top rung of Pearl’s Ladder: the need and the ability to posit counterfactuals. The tragic aspect of the story of Adam and Eve, as interpreted in the Christian doctrine of the Fall and original sin, is the great counterfactual: How much better off would we be if Adam and Eve had not sinned? What would the world without the Fall be like? It’s a question that preoccupied more than a few of the Church Fathers and later theologians, including Thomas Aquinas. We know animals feel sadness and loss over the death of family members. We know they experience denial. (We have observed a female chimp carrying the corpse of her baby around on her back for days after its death, and an orca whale doing the same thing for her dead calf.) But we have no evidence they brood over how things might have been different. Only humans have the self-awareness to ask themselves, “How might grief have been avoided if we had gone down that path instead of this one?” Pearl thinks we will one day be able to build machines with the kind of self-awareness that makes them, too, capable of second thoughts and regret—though he doesn’t think it will be soon.

 

But not everyone is so certain that a strong AI, capable of reason, would be conscious in the way we consider ourselves to be. Like David Chalmers, the AI specialist Susan Schneider, author of Artificial You, draws a distinction between intelligent behavior and the kind of consciousness brains produce. Schneider has a PhD in philosophy from Rutgers and was a fellow at Princeton’s Center for Theological Inquiry. She now runs the Center for the Future Mind at Florida Atlantic University, where she and her colleagues are building a robot lab. “I think we need to distinguish between consciousness and intelligence. We could in principle create highly intelligent machines that are not conscious,” she told me by phone. “Think about the human brain. Much of the brain’s activity is unconscious computation. We know sophisticated mental processing can happen without consciousness. And we can also see from the developments in AI that there’s impressive development in machine learning—moving toward a more general form of intelligence. And these algorithms are not exactly like what the brain is doing.” Schneider mentions the computer Go champion as an example. “That algorithm does not process the game the way we would. So I would not assume at all that machines would be conscious. We need to be sensitive to that distinction.”

Until now, Schneider says, we haven’t needed to be so mindful of the distinction between intelligence and consciousness “because in the biological arena, where you see intelligence, you see consciousness.” But with the rise of AI, the distinction must be maintained clearly: the fact that a computer can behave intelligently does not demonstrate that it is capable of subjective experience. The AI industry often obscures the basic distinction between how computers work and how brains work by offering “cute AI”: robots that look like cats or dogs, or like us. When a robot has what looks like a face—when it can appear to smile and look us in the eyes—it becomes much easier for us to imagine that it has experiences like our own. But that is a projection, not a reasonable inference from the robot’s actual behavior. Schneider told me she has adopted a wait-and-see position when it comes to the question of whether machines can ever attain actual consciousness.

The British writer Susan Blackmore has no doubt that they can; in fact, she believes that they already have, at least to some degree. Blackmore began her career in psychology and the study of consciousness by researching and trying to understand the cause of her own out-of-body experience (OBE) as a young woman. She wrote about this at length in her recent book, Seeing Myself: What Out-of-Body Experiences Tell Us About Life, Death and the Mind. Her years of reviewing the experiences of others who reported OBEs, as well as scientific surveys of the phenomena, led her to conclude that none constitute reliable evidence of the mind’s survival after death. She has come to believe that consciousness as we have traditionally thought about it is an illusion. This means not that consciousness isn’t real, but that it is not an immaterial substance or essence with an independent existence. The existence of consciousness does not entail the existence of an immaterial soul. And if this is the case, Blackmore argues, then there is no reason to believe machines cannot also become conscious.

“I think there is no reason not to expect strong AI,” Blackmore told me when we spoke via Skype. But her idea of strong AI and how it might develop is very different from Pearl’s. In Blackmore’s view, there is too much credence given to the notion that AI is an entirely human production. “Most people seem to concentrate on this idea of ‘We make artificial intelligence and we put it into some kind of machine.’ Well, maybe. Maybe not. What really interests me is the artificial intelligence that is already evolving of its own accord, for its own sake.”

Blackmore is a proponent of “meme theory”—the idea that human concepts, behaviors, cultural artifacts, and religious rituals survive and propagate and evolve among human societies over time in much the same way that genes do within Darwinian evolution. Like genes, they succeed by the simple process of replication, variation, and selection. According to Blackmore, we can see that machines are now engaged in their own worldwide sharing of their own kind of memes. Indeed, she sees AI memes—or “tremes” as she calls them—as the third replicator in the history of life: genes came first with biological evolution, followed by memes with the explosion of human culture, and now with AI the third great replicator has emerged.

“This would include internet memes,” she said, “and it would include all the information that is being processed without our knowledge. I mean, internet memes are a kind of intermediate example because we do the choosing and we do the varying.” But with the “tremes,” most of the replicating is being done by the machines themselves. “You need three processes: copying, varying, and selecting. That’s basic Darwinism. So the question to my mind is: How much of those three things are already being done without human intervention?” The copying is fairly obvious: “Stuff is copied all around all over the cloud in no time—copies of this chat will be around for a while. And our emails, multiple copies.” Then there is the variation. Blackmore claims that this is now being done by machines with little supervision or monitoring by human beings. “Things like automated student essays—things like artificial journalism. Loads and loads of these programs have been developed, algorithms for producing newspaper articles that are often indistinguishable from ones written by a human person.” Finally you have selection. And this, too, is being done by machines (think of Google). “Search engines select what they’re going to give to any particular person in response to their query.”
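
Blackmore’s three ingredients are simple enough to caricature in a few lines of code. The sketch below is a deliberately toy Darwinian loop over short strings; the target word, mutation rate, and scoring rule are all invented for illustration and have nothing to do with real “tremes.” It shows only that copying, variation, and selection, with no designer steering the process, are enough to produce content adapted to whatever the selection criterion happens to reward.

```python
# A toy replicator loop: copying, variation, and selection over short strings.
# Every detail here (target, alphabet, rates) is an invented stand-in.
import random

random.seed(42)
TARGET = "why"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s: str) -> int:
    """Selection criterion: how many characters match the target."""
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s: str, rate: float = 0.1) -> str:
    """Variation: each character is occasionally miscopied."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in s)

# Start from random strings, then let the loop run.
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(50)]
best = population[0]
for generation in range(200):
    population.sort(key=fitness, reverse=True)       # rank by fitness
    best = population[0]
    if fitness(best) == len(TARGET):
        break
    survivors = population[:25]                      # selection: fitter half survives
    population = [mutate(s) for s in survivors * 2]  # copying with variation

print(f"generation {generation}: best string is {best!r}")
```

Swap the toy scoring rule for “what gets clicked” or “what a search engine ranks highly,” and the loop becomes a crude cartoon of the process Blackmore describes.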

Blackmore argues that if all that is happening right now, “it’s really the same kind of thing happening that produced our intelligence. It is a whole lot of different algorithms or different processes if you want to be more general, all interacting with each other, feeding off each other, by the nature of being an evolutionary process, getting more and more complex and more and more interconnected. That to my mind is how artificial intelligence is appearing.”

At some point she thinks this will evolve into strong AI—without our help—and, once it does, she thinks machine consciousness could rival human consciousness. “It’s deeply mysterious in many ways, although becoming less so, I would say. But there seems no question to me but it’s come about by evolution producing the kind of brain and the kind of body that we have and in the kind of social exchanges that we have. You’ve got to have brain, body, and other people as well, to get our kind of consciousness. Machines are doing that, too. And they are evolving. It’s another layer of evolution.” Blackmore’s position suggests that there is no impermeable boundary between biology and technology: both are subject to the same fundamental laws of nature.

 

Mark Vernon, author of A Secret History of Christianity, and a trained psychologist as well as an Anglican theologian, is not convinced that AI can achieve consciousness in any meaningful sense. As he told me via email:

Perhaps a good question to ask is what is it to be alive? The classical answer, from Aristotle through Aquinas, is to say: it is to have a principle of movement from within, where movement means any kind of activity. So a plant has one internal principle of movement, which Aristotle called the plant soul; animals have another, because there’s also an element of voluntariness in their movement, the animal soul; and humans have another again—along with arguably some other animals—because we can add reason, self-awareness, and so on to our principle of movement, which gathered together can be called the human soul as the principle of movement within the body.

Vernon does not believe that artificial intelligence can be self-moving. “Self-moving in the sense that organisms have it is something that a whole organism does and this implies that the whole organism comes before the parts and ‘programs’ that make up the organism,” he said. “It’s why you can’t make a living organism out of parts (though you might be able to replace certain parts of living organisms). An organism’s parts always exist in virtue of being part of the whole, not by having an autonomous character of their own.”

Human consciousness depends on a body that developed through evolution. If we want to create AI that is conscious in the same way we are, should we be building it in something like the way that evolution built us? Schneider told me her new lab at the Center for the Future Mind is actually working on this approach as one of its long-term projects. Christof Koch, a chief neuroscientist at the Allen Institute for Brain Science in Seattle, also appreciates this Aristotelian approach to embodiment. His confidence that AI systems can and will indeed become conscious is based on a quantitative theory of Integrated Information first proposed by neuroscientist and psychiatrist Giulio Tononi—a theory that draws on Aristotle’s notion of formal causality.

But in Koch’s view, the possibility of consciousness need not be tied to any particular material substrate: the “brain” could be biological or silicon. The emergence of consciousness depends on a system whose parts causally interact with each other to a degree that can be quantified (denoted in the theory by the Greek letter Φ) and is by definition irreducible. Koch sees this theory as offering a way to explain “why certain types of highly organized matter, in particular brains, can be conscious. The theory of integrated information...starts with two basic axioms and proceeds to account for the phenomenal in the world. It is not mere speculative philosophy, but leads to concrete neurobiological insights, to the construction of a consciousness-meter that can assess the extent of awareness in animals, babies, sleepers, patients, and others who can’t talk about their experiences.”

Koch expands on this idea in his recent book, The Feeling of Life Itself: Why Consciousness Is Widespread But Can’t Be Computed. There he argues that such a theory of integrated information reflects Aristotle’s notion of formal causality: “information” in its root sense, from the Latin informare, to give form or shape to. “Integrated information gives rise to the cause-effect structure, a form. Integrated information is causal, intrinsic, and qualitative: it is assessed from the inner perspective of a system, based on how its mechanisms and its present state shape its own past and future. How the system constrains its past and future states determines whether the experience feels like azure blue or the smell of a wet dog.”

Max Tegmark is also a proponent of Integrated Information Theory (IIT). In his book Life 3.0, he writes:

I’d been arguing for decades that consciousness is the way information feels when being processed in certain complex ways. IIT agrees with this and replaces my vague phrase ‘certain complex ways’ by a precise definition: the information processing needs to be integrated, that is, Φ needs to be large. Giulio’s argument for this is as powerful as it is simple: the conscious system needs to be integrated into a unified whole, because if it instead consisted of two independent parts, then they’d feel like two separate conscious entities rather than one. In other words, if a conscious part of a brain or computer can’t communicate with the rest, then the rest can’t be part of its subjective experience.
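
Tegmark’s “two independent parts” argument can be illustrated with a toy calculation. What follows is emphatically not the formal Φ of IIT, which is far more involved; it is only a crude proxy (the mutual information, in bits, between the next states of two binary nodes) chosen to show the bare intuition: when each part’s dynamics ignore the other, integration across the cut is exactly zero.

```python
# Crude illustration of "integration across a cut", NOT the formal Φ of IIT.
# Two binary nodes are driven by a uniformly random current state; we measure
# how statistically entangled their next states are (mutual information, bits).
from itertools import product
from collections import Counter
from math import log2

def next_state_mi(update_a, update_b):
    """Mutual information between the two nodes' next states,
    with the current state (a, b) drawn uniformly from {0,1}^2."""
    joint = Counter()
    for a, b in product((0, 1), repeat=2):
        joint[(update_a(a, b), update_b(a, b))] += 0.25
    pa, pb = Counter(), Counter()
    for (x, y), p in joint.items():
        pa[x] += p
        pb[y] += p
    return sum(p * log2(p / (pa[x] * pb[y])) for (x, y), p in joint.items())

# Coupled dynamics: each node's next state depends on both nodes.
coupled = next_state_mi(lambda a, b: a ^ b, lambda a, b: a & b)
# Decoupled dynamics: each node only looks at itself ("two independent parts").
decoupled = next_state_mi(lambda a, b: 1 - a, lambda a, b: b)

print(f"coupled dynamics:   {coupled:.3f} bits")    # > 0: the parts are integrated
print(f"decoupled dynamics: {decoupled:.3f} bits")  # 0.0: no integration across the cut
```

On IIT’s reading, only the coupled system forms a single causal whole; the decoupled one is, as Tegmark puts it, two separate entities rather than one.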

Not everyone agrees with this theory, or thinks it’s a fruitful place to start a discussion about machine consciousness. But if Koch and Tegmark are right—if a machine of sufficient complexity could develop not only intelligence but also consciousness—it follows that a machine could also be capable of suffering, a suffering for which its creators would be responsible. Schneider believes it would be the height of hubris to think we will know how to create conscious machine minds, in essence playing God. If we create machines that can feel pain, will they have any rights? It may be easier to think of AI as merely a tool at our service when we assume that it cannot be conscious. For what is a conscious tool if not a slave? “We have to be sensitive to the dystopian possibilities,” Schneider warns.

What does it mean for the future of humanity if we are soon sharing our living space with machines that have a sense of self? What if Judea Pearl’s dream comes true and we can design machines capable of resisting temptation, or yielding to it? Whether this kind of strong AI comes sooner or later or not at all, what already seems clear is that the pursuit of machine intelligence, in parallel with the study of our own human intelligence in neuroscience, requires a degree of caution and metaphysical humility. “We should be very humbled by the prospect of creating other forms of intelligence,” Schneider said. “And especially humbled by the potential for those forms of intelligence to be conscious, because we don’t even understand consciousness well in humans yet.” 

Published in the April 2021 issue.

John W. Farrell is the author most recently of The Clock and the Camshaft: And Other Medieval Inventions We Still Can’t Live Without.
