dotCommonweal

A blog by the magazine's editors and contributors


The march of progress

Today's NY Times has an article on recent and anticipated advances in computer technology, which have some scientists nervous. A couple of interesting paragraphs:

The idea of an "intelligence explosion" in which smart machines would design even more intelligent machines was proposed by the mathematician I. J. Good in 1965. Later, in lectures and science fiction novels, the computer scientist Vernor Vinge popularized the notion of a moment when humans will create smarter-than-human machines, causing such rapid change that the "human era will be ended." He called this shift the Singularity.

This vision, embraced in movies and literature, is seen as plausible and unnerving by some scientists like William Joy, co-founder of Sun Microsystems. Other technologists, notably Raymond Kurzweil, have extolled the coming of ultrasmart machines, saying they will offer huge advances in life extension and wealth creation.

"Something new has taken place in the past five to eight years," Dr. Horvitz said. "Technologists are replacing religion, and their ideas are resonating in some ways with the same idea of the Rapture." ...

The meeting on artificial intelligence [held last February] could be pivotal to the future of the field. Paul Berg... said it was important for scientific communities to engage the public before alarm and opposition becomes unshakable. "If you wait too long and the sides become entrenched like with G.M.O.," he said, referring to genetically modified foods, "then it is very difficult. It's too complex, and people talk right past each other." ...

Despite his concerns, Dr. Horvitz said he was hopeful that artificial intelligence research would benefit humans, and perhaps even compensate for human failings. He recently demonstrated a voice-based system that he designed to ask patients about their symptoms and to respond with empathy. When a mother said her child was having diarrhea, the face on the screen said, "Oh no, sorry to hear that." A physician told him afterward that it was wonderful that the system responded to human emotion. "That's a great idea," Dr. Horvitz said he was told. "I have no time for that."

Comments


I read a novel recently that touches on this stuff - Breakpoint by Richard Clarke, a former National Security Council staff member. It goes into the technological singularity stuff. But I think any reader of science fiction has seen all this coming for a long time, and after watching movies like The Terminator and The Matrix, we know how it turns out :)

When they come up with an artificial intelligence that can produce a distinguished work of imaginative literature in one language and then translate it into another as well as any human translator could do it, I will start to worry. When some form of mere calculation is all that is required, computers are far more accurate and far faster than humans. That is not news. But intelligence is more than calculation. As for empathy: of course a computer can be programmed to respond to certain patterns of words with an expression of concern like "I'm sorry to hear that," but does that mean that a computer can be sorry? I don't think so.

What got me is that there is a doctor who thinks this is progress!

So the physician said, "I have no time for (emotion)." Is technology turning doctors into, well, machines???

Insurance company reimbursement practices and big business medicine are turning doctors into machines.

If doctors are turned into machines by HMOs, imagine how they'd be if they worked for the state. It'd be like getting your mammograms done at the DMV. Cold!

Prof. Gannon -- I agree entirely with you that if indeed computers are only machines they will never truly communicate meaning in the human sense. If they do, then baptize them. Where the AI people go wrong, I think, is in thinking that these *logic* machines can be genuinely concerned with anything but analogues of necessary physical relationships. True, as *logic* machines their operations are analogous to reasoning processes. But the data is contingent, and I can't see how those machines can move from necessities to inventing such contingencies as a poem. Yes, a computer can copy the text of a poem, and the *written*, physical text has a certain physical necessity, but the physical text is NOT the poem. The contingent pattern -- a mental one that is the poem -- that makes it meaningful is beyond the reach of logic. As to computers and translations, your point seems similar to that made by the philosopher John Searle with his "Chinese room" thought experiment. Brilliant. AI folks dislike it. Wikipedia explains it here: http://en.wikipedia.org/wiki/Chinese_room Searle disappoints me, though. While he is willing to challenge the AI folks about computers and consciousness, he is unwilling to go all the way and grant that human thinking involves something more than mere matter.

And so my worst fears come to pass.

I agree, Ann, that Searle's critique of AI is more convincing than his own stated position. I was very disappointed when, in a review in the NYRB, he asked the reader to pinch his own arm severely so that he and the reader could have a common experience of consciousness on which to reflect. One might have hoped he would refer us to the experience of inquiring, thinking, understanding, intending, etc. But there is a very strong tradition in Anglo-American analytical philosophy that one mustn't ask questions about the experience of mind.

Computers can only perform operations that the software writers designate. If, to go farther, the software engineer writes code that will cause the computer to "create" its own software, i.e., to "think" (in terms of Boolean logic, De Morgan's Laws, etc.), it will necessarily be limited by that logic and those laws. But Boolean logic is itself a very limited subset of general logic, which in turn is a very limited subset of the larger universe of Philosophy, and ultimately, of Theology. And this doesn't even take into account Psychology, Sociology, Economics, Biology, etc. Can computers "break out" of their own mechanical universe and, rising above humanity to gaze upon us, Godlike, go off to autonomously decide what's good or evil, and thrust us into a living Hell? I don't see how. Computers are tools, only as good or evil as the persons using them.
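For readers unfamiliar with the terms above, here is a minimal sketch (added for illustration; it is not part of the comment) of the kind of mechanical Boolean reasoning being described. De Morgan's Laws simply relate "and," "or," and "not," and a few lines of Python can verify them for every combination of truth values:

# A minimal illustration of the "Boolean logic" referred to above.
# De Morgan's Laws state that not (A and B) == (not A) or (not B),
# and not (A or B) == (not A) and (not B). A program can check these
# identities mechanically for every combination of truth values.
from itertools import product

for a, b in product([True, False], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))
print("De Morgan's Laws hold for all Boolean inputs.")

The point of the sketch is only that such reasoning is exhaustive and rule-bound: the program verifies identities it was told to verify, nothing more.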

I think that the propensity of capitalists to remove pesky humans from the productive side of any equation was already noted by Marx.

Bob Schwartz, you're taking all the fun out of the paranoia! I love the thought of defending my home from invading arachnid-robots with a mixture of a shotgun and home-made molotov cocktails.

I agree with AO and JK about Searle. His refutation of misguided claims for AI is brilliant, but he has no theory of consciousness except that it just is there and is based on brain activity. Another acute critic of AI and especially cognitive theory is Jerry Fodor. He writes for the NYRB and the LRB, brilliantly dissecting the mad claims of cognitive scientists and their ilk. On a related topic, I recall that a few years ago an IBM computer beat the World Chess Champion, and it was said that the computer was a better player than any human player. I thought that was absurd. The computer was programmed with lines of play by outstanding chess players. Chess masters, like computers, work from lines of play stored in their memories, but computers never forget anything and do not get tired or distracted. One could as well say that the team of programmers beat the champion with the aid of the computer. To prove superiority one would have to identify moves by the computer that were truly innovative. Human chess players are capable of innovation, and also capable of exploiting an opponent's psychology. Computers have no psychic weaknesses.
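To make the "following a program" point concrete, here is a toy sketch (added for illustration; it is not how the IBM machine actually worked, and the positions and scores below are invented) of the purely mechanical way a game-playing program "chooses" a move: it searches ahead through lines of play and backs up numeric evaluations, with no understanding of the game itself.

# A toy minimax search over a tiny hand-built game tree. The tree stands
# in for positions a chess engine would generate; leaf numbers are
# position evaluations supplied by the programmers' evaluation rules.
game_tree = {
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
evaluations = {"a1": 3, "a2": -2, "b1": 1, "b2": 4}

def minimax(position, maximizing):
    if position in evaluations:          # leaf: report the stored score
        return evaluations[position]
    scores = [minimax(child, not maximizing) for child in game_tree[position]]
    return max(scores) if maximizing else min(scores)

# The "player" simply picks the child of the start position with the best
# backed-up score -- a mechanical procedure, not an act of insight.
best = max(game_tree["start"], key=lambda m: minimax(m, maximizing=False))
print("chosen move:", best)              # prints: chosen move: b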

Bob, the problem with some of the designs now is that no single person understands all the complexities of the programming (i.e. sophisticated war drones). It's analogous to -- and in fact potentially more problematic than -- the U.S. tax code (well over a million words). If machines are given a level of autonomy as their programming becomes more complex, then it will be difficult for any person to accurately predict how they will "behave". If even some scientists/engineers are starting to get worried, then perhaps there is something to this after all. Thomas, I'm with you in not wanting the paranoia to die down just yet.

Artificial intelligence lacks wisdom, which is why there are some intelligent people without wisdom who can still do an "awful lot of talking." :-)

Christopher: I hear you. But the scenario you envision is not the nightmare scenario of computers striking out on their own to bedevil mankind; it's the scenario of uncontrolled chaos, minus the malevolence. I particularly worry a bit about the new world of nanotechnology, whereby microscopic machines intended for surgical and drug-delivering tasks proliferate to the point that they constitute a plague upon the human race. But then, they would be somewhat limited in memory capability...

I'm more concerned with finding signs of actual human intelligence on earth before worrying too much about artificial intelligence. But, not to be flippant, isn't the crux of this discussion how we define 'conscious' (as opposed to conscience)? Isn't this a rehashing of the brain/mind debate? If the two are simply a linguistic distinction, and mental activity arises solely from physical functions, isn't it a matter of time before computers acquire such a level? On a side note, a while ago some people were discussing summer reading and Miller's A Canticle for Leibowitz was repeatedly brought up. I read it (in three days), or rather devoured it. I think Miller can ask the question about computer 'conscience' before 'consciousness' becomes a defining characteristic of the human mind.

If by being conscious you mean "self awareness", I revert to my original post. How would a software engineer write code to make a computer self aware? If self awareness is the ability to observe oneself, to be a detached onlooker of the working of one's own mind, then the machine's self awareness would be "written" by the engineer, i.e., under the engineer's control.

In terms of moral processing, I suspect computers will be programmable to easily handle deontological (find the rule and follow it) and utilitarian (greatest good for the greatest number) ways of "thinking." But could they do virtue? Virtue ethics, I think, inheres in the dance-like interaction of intellect and appetite, reason, emotion and imagination, that cuts pretty close to the complexities of human subjectivity. The fact that they're immune to the push and pull of delightful and unruly desire might make them incapable of virtue. They could certainly be programmed to simulate it, though. But a question: some suggest that the apparent sentience of animals is analogous more to the squeaks and groans of unoiled machines than to anything like what we'd call suffering. One reply to this is that since animals show those reactions via the same neurological equipment that humans have, and do so under similar circumstances, Occam's razor suggests that we consider them capable of suffering (at least the more intelligent animals; the jury's out on bugs). So... when machines are so well designed that their apparent emotional and social reactions are indistinguishable from ours, how would we recognize suffering? Or true joy? Art? And on what grounds would we deny the sacraments to a machine who asked for them? (Assuming the machine is well-catechized, of course... :-))
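As a crude illustration of why those first two styles of moral processing seem programmable (a sketch added here for illustration; the rules, actions, and utility numbers are made up and are not anything the commenter wrote):

# "Find the rule and follow it" versus "greatest good for the greatest
# number," each reduced to a mechanical decision procedure.
RULES = {"lie": "forbidden", "tell_truth": "permitted"}

def deontological_choice(actions):
    # Keep only the actions the rules permit.
    return [a for a in actions if RULES.get(a) == "permitted"]

def utilitarian_choice(actions, utilities):
    # Pick the action with the highest total utility across those affected.
    return max(actions, key=lambda a: sum(utilities[a]))

actions = ["lie", "tell_truth"]
utilities = {"lie": [5, -10, -10], "tell_truth": [-1, 2, 2]}
print(deontological_choice(actions))            # ['tell_truth']
print(utilitarian_choice(actions, utilities))   # 'tell_truth' (total 3 vs -15)

What neither procedure captures is the interplay of reason, emotion, and desire that the comment identifies with virtue; both simply grind through the inputs they are given.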

The problem starts with the very definition, or even description, of "consciousness." There are differences among the scientists even in providing a heuristic description of what they are studying and would like to understand. There are even greater differences among philosophers. It's not just the mind-brain problem either, nor just the problem of knowing other minds. One way I've seen it put is the question of the possibility of a third-person description of what is a first-person experience. Some people use the language of consciousness as almost equivalent to that of knowledge. Some see it as perception, others as concomitant awareness. (E.g., strictly speaking, should one speak of "being conscious that...", which sounds like a judgment? Or speak instead of "being conscious of...", which is more like concomitant awareness.) Aquinas, following Augustine, distinguished a "notitia sui" from a "cognitio sui"; everyone has the first, he said, a "self-awareness", but few achieve the latter "self-knowledge". See what he has to say about the soul's knowledge of itself, which is a very sophisticated treatment.

I would be inclined to believe that animals are mentally active in the way that humans are, but at a lower cognitive level. Anyone who has had a dog or lived in a household with a dog will attest to this. Dogs, and cats too, have a mind of their own, as we say. But they can be trained to some degree, and rendered "virtuous" from our perspective. I see no reason to deny dogs something that can fairly be described as consciousness, certainly awareness of others and perhaps of themselves. Contrarily, I do not believe that machines are mentally active or conscious. Can a computer really play chess, or does it just rattle off a series of moves following a program?


About the Author

Rev. Joseph A. Komonchak, professor emeritus of the School of Theology and Religious Studies at the Catholic University of America, is a retired priest of the Archdiocese of New York.