To have a career reflecting on ethics requires three skills: to know oneself, to understand the culture in which one’s life and one’s profession are embedded, and to have a working knowledge of the history and methods of the field. To know oneself, one must know where one came from. That contention has a particular importance in my case. Although I left the Catholic Church and religion in my mid-thirties, many friends and critics believe they can spot their remnants in me just below the surface. If this is true, I am hardly embarrassed by it. That background gave me insights and perspectives I would not otherwise have.

My parents were both Catholics. I was sent to parochial schools in New Orleans and Washington, D.C., where I was taught by nuns. From the seventh grade on, I was sent to St. John’s College (actually a high school) at Thomas Circle in Washington and spent the next five years in uniform, with daily marching. The teachers, all male, were members of the LaSalle Christian Brothers order, and they were a nice bunch, fanatical neither about their religion nor about education. We took a fine range of courses, Latin and French, English, biology, chemistry and physics, religion and history—but all taught in a low-key way with little, if any, homework. The military side was no less low key. I was not a military star—I was one of the few seniors in my school not made an officer.

I was only a fair to middling high school student, and my parents, though supportive, were not the least bit pushy. But two things of importance happened in high school. I began to take religion seriously, even becoming a bit pious, and I became a local swim star.

I can’t say exactly why religion became important to me. I guess I would attribute it to what can be called a burgeoning religious experience—a sense of awe and transcendence at Mass and during other religious devotions. I was moved, and when that happens to people they take a different stance toward religion: it becomes something more than a set of beliefs or an act of faith.

My swimming was almost as important as religion. It got me into Yale, which at that time had the leading swimming team in the world. I have always had some affection for affirmative-action programs, if only because I was a beneficiary of one in 1948. I did poorly on my SATs, came from a third-tier high school, and had good grades only in my senior year. But I was a swimmer, and Yale wanted me, smart or not.

My teachers at St. John’s were openly fearful that I would “lose my faith,” and most Catholic schools would not even forward transcripts to such “heathen” institutions as Yale. Their vision of Yale was that of a sinkhole of atheism and agnosticism, snobbish, antireligious, thoroughly materialistic, and shot through with Communist ideas. They were right to worry, but only in the long run.

As it turns out, after a less than stellar career I retired from swimming at the end of my junior year. “What else is going on at Yale?” I thought. I was, in a way, lucky to still be there. During my freshman year, juggling swimming and a part-time job in one of the colleges, I got poor grades and actually failed biology. I was about to flunk out. My freshman adviser went to my biology professor, George Baitsell, saying that the only thing that would save me would be a change in my grade. Professor Baitsell said, “Fine; no problem,” and he moved me from an F to a D on the spot, and that did it. In light of my later career in bioethics, which requires I know something about biology and medicine, that was not an auspicious start. But it was all the formal biological science I ever had after high school. It was a delicious pleasure, some twenty-five years later, to give a named lecture in the same classroom where I had failed the biology course.

A more important part of my education was an experimental interdisciplinary program mixing literature, history, and philosophy. The courses I took were to serve later as a powerful antidote to the analytic philosophy I received as a graduate student at Harvard. The world was richer than our language and full of realities other than concepts and logical entailments. At that time, Paul Desjardins, an older Catholic graduate student getting a PhD in philosophy at Yale and writing a dissertation on the Socratic dialogues, befriended me. Paul made an indelible impression when, after he’d taken his clothes to a Chinese laundry, he asked me, “How did I do?” I did not understand the question, so he clarified it: “I mean how did I do in dealing with my Chinese laundry man? How should someone who is a Yale instructor treat someone who does his laundry? Should it be different from the way I treat my Italian barber?” I had no answer to such questions then, but they helped fix in my mind an important point about ethics, which begins at the borderline of etiquette and with the seemingly trivial details of daily life.

Another event of importance was a course on Thomas Aquinas, taught by an unusual (for Yale) visiting professor, the prominent Jesuit John Courtney Murray. Murray was engaged in a great struggle to get the church to accept the notion of freedom of religion in a pluralistic society. He had been silenced on occasion but was finally able to make his mark at the Second Vatican Council in the 1960s. His course was rather routine, but at least I had a chance to read Aquinas and to learn about natural law theory, about which I later heard nothing at all at Harvard. His course, plus Paul’s influence, was sufficient to convince me that I should go for a PhD.

After Yale and before Harvard, I did a brief stint in the Army. I also got married. I was twenty-four, and my new wife, Sidney, was twenty-one. We were well matched. As a teenager, Sidney asked to be excused from playing bridge so that she could read instead. I was planning to go into philosophy, not a career that would commend me to her father. Even my parents, though they tried, could not extract from me a clear answer to a sensible parental question: Just what is philosophy? Even now I doubt that I could give a persuasive answer to that question. The comic-book version goes something like this, and it’s not too far off: it has something or other to do with truth, whether there is a real world outside our sense perceptions, and whether there is any way to know what goes on in the minds of other persons.

I now had a wife who shared my Catholicism and, even more, my kind of Catholicism. We were both readers of Commonweal (whose staff I would later join and where Sidney was a columnist for many years), fervent Democrats, and followers of Dorothy Day and the Catholic Worker movement, aiming to live and work in poverty. A critical book for us was Maisie Ward’s Be Not Solicitous. Its theme was that the serious Christian should give no thought to the future, eschewing insurance and other sources of security and, above all, accepting all the children God sent.

I was aiming for an academic life, and Sidney, eager for us to have many children, wanted six of them by age thirty and a PhD by thirty-five. We had no materialistic ambitions, but we did want to go to the best schools and do all we could for our children. Although it was true that our desire for children was inspired by our (left-wing) Catholicism, demography and the zeitgeist play tricks on patterns of procreation. My wife was born in 1933; women born in that year had more children than those born in any other year of the twentieth century, an average of 3.8; we were not quite as procreationally countercultural as we thought. I could not help thinking of my father’s extended family, with its grand total of seven offspring, but then my parents had married during the Depression, which saw a great drop in birthrates for Catholics and everyone else.

While still a soldier, I entered an MA program in philosophy at Georgetown University in 1954. In the fall of 1956, out of the army and with an MA and one child, I entered the PhD program at Harvard. Until that time, only a handful of Catholics had been admitted to Harvard’s philosophy department, and I was a curiosity. I found it a strange and increasingly distasteful place, not at all fitting my romantic picture of philosophy. Historian Bruce Kuklick published a book in 1977, The Rise of American Philosophy, which, despite the title, was actually a history of the Harvard philosophy department. His thesis was illuminating, and it had been acted out before my eyes. In the nineteenth and early twentieth centuries, there was no tight discipline of philosophy. Apart from students, the audience of philosophers was assumed to be the general public. Harvard’s renowned philosophers—William James, George Santayana, and Josiah Royce—wrote essays and books accessible to the educated layman and filled with references to history, literature, religion, and the classics.

But as Kuklick showed, by the 1940s philosophy was becoming a narrow, more technical academic discipline. As he put it, “During the same period in which philosophy became a profession [the 1930s], political and social theorizing continued to occupy a minor place.... [W]hen narrow professionals turned to their scholarship, they thought of their work as a game. For a few, professional philosophy had become a way, not of confronting the problem of existence, but of avoiding it.” That is harsh, but not far from the professional philosophy I was introduced to decades later.

The department I entered was dominated by the kind of philosophy fostered at Oxford and Cambridge, focusing on conceptual and linguistic analysis. Willard Van Orman Quine, a distinguished logician and analytic philosopher, was the dominant figure. I had no idea beforehand of the flavor or the substance of the department, and it was unsettling. I had been drawn to philosophy by the example of Socrates, who went about Athens asking large questions in the Agora and getting a dram of hemlock for doing so. The message I got at Harvard was, in effect, “Hey, kid, we don’t do that kind of thing anymore.”

Our introduction to the history of philosophy (which unreconstructed analytic philosophers dismissed as “not philosophy” at all) moved from Plato and Aristotle (both only lightly touched on) to Kant and the British empiricists in the eighteenth century, and from there another two hundred years to twentieth-century Oxford philosopher J. L. Austin. We skipped Thomas Aquinas and the medievals, Hegel and the German idealists, Adam Smith, Karl Marx and Nietzsche, just about all political philosophy, Heidegger, and Sartre and the existentialists.

We did learn a good deal about utilitarianism, Kant, and deontology, and we were introduced to the work of the leading analytic philosophers: G. E. Moore, Bertrand Russell, Ludwig Wittgenstein, and A. J. Ayer. The characteristic feature of all of them was a focus on concepts and language, analyzed in great detail and heavily shadowed by the dominance of science and the earlier logical positivists. A. J. Ayer achieved a wide reputation with his 1936 book Language, Truth, and Logic, which touted the “verification principle,” the idea that the only meaningful statements were those open to scientific verification. On that view, to say that the Mona Lisa is a great painting, or that it is good not to tell lies, is no more than to express one’s feelings. At one stroke, the verification principle threw ethics, aesthetics, and religion into the dustbin of history.

G. E. Moore’s formulation of the “naturalistic fallacy” complemented Ayer by contending that there is a fundamental divide between facts and values and that moral rules and principles could not be derived from factual knowledge: an ought cannot be derived from an is. Of course, one response is to note that is is all there is in the world, and if ethics cannot be developed from that, it is hard to see how it can be justified at all. But that was just what many of the analytic philosophers actually believed. Although Ayer’s position did not have a long intellectual life, I later saw its lingering influence on many of the doctors I encountered in my work in bioethics at the Hastings Center in the 1970s. They believed that science was the only source of real knowledge and that ethics was nothing more than emotion-driven opinion, with religion lurking just below the surface.

As for the philosophers of that postwar era, British philosopher Mary Midgley noted that the one thing most did not approach was the inner life of humans, neither their own nor anyone else’s. Not once did I hear anything said in or out of the classroom about how we should live our lives, that most ancient of all ethical questions. It is worth noting that some of the leading Oxbridge women philosophers of that era—Philippa Foot, Iris Murdoch, Elizabeth Anscombe, and Midgley—were critical of that omission among their male counterparts.

We were warned in the philosophy department not to go near Professor John Wild, who taught a course on existentialism. To show an interest in that sort of conceptually muddy stuff (as bad as Hegel) was a professional kiss of death. Philosophy, one distinguished logician told me, was best thought of as a kind of game, technical and intricate, that only a few highly educated—and of course very smart—people liked to play. If one asked, “What is the meaning of life?” the not-quite-joking answer was, “Life has no meaning; only propositions have meaning.”

My worst experience was with my professor of moral philosophy, Roderick Firth, who held the view that the best perspective on ethical truth was that of an “ideal observer,” detached, disinterested, and all-knowing; that was the way to know right from wrong. How one became such an observer was left unexplained. Each year, the various professors would devote an evening to talking about their own philosophical positions, and Firth once talked about his theory. Having heard that he was a Quaker, I asked him how he related the moral values of Quakerism, such as its pacifism, to moral philosophy. He was not pleased with the question, stiffly responding, “I don’t think that is an appropriate question for this evening, Mr. Callahan. Perhaps you should visit me during office hours.” A few of my fellow students later defended his answer by saying, “But that was a religious, not a philosophical question.” Oh.

At that moment, I think the seed was planted that would eventually lead me to conclude that I did not want to spend the rest of my life in such academic company, with philosophy considered a kind of game and with an almost total bifurcation of academic ethics and one’s personal moral life. I found that a great way to start an argument with a philosopher (and it’s good to this very day) was to contend that one had to be a good person to think well about ethics. Nonsense, I was invariably told: it is only the quality of one’s arguments that counts, and that is a function of one’s smarts, not one’s moral sensibilities—whatever they might be.

It was probably no accident that the two most prominent of my classmates, the writer Susan Sontag and the civil-rights leader Robert Moses, were dropouts. I found it a cold and competitive department. Only three of the seventeen students who started in my year ever got their degree; most of the others faded away, bored and disappointed. When, later, I was asked what I had learned, it took me a while to find an answer: I learned how to ask, with the proper Oxford snarl, “But what do you mean when you say something is ‘good’?”—an all-purpose, deliberately intimidating question when talking about ethics.

Fortunately, there was more to Harvard than the philosophy department. By the late 1950s, I had begun writing for Commonweal. I had written Jim O’Gara, the editor, out of the blue, telling him what a smart fellow I was—Harvard pedigree, no less—offering to review books and noting that I had taken much English literature at Yale. That was a bit of an exaggeration. My first assignment was to review a book by the distinguished literary critic Hugh Kenner, whose theories I did not understand, and I had never read the books and authors he cited. Actually, I had no idea what he was talking about, not a clue. That lacuna was filled by recruiting a friend getting a degree in English to read the book and tutor me about what to say. That was my inauspicious, slightly shady start to writing regularly for the magazine. Later, I sometimes even knew what I was talking about.

I soon became a minor name in Catholic intellectual circles at Harvard and elsewhere. My interest in religion led me to make friends at the Divinity School and to put together an ecumenical discussion group, one of the first in the country. Harvey Cox, Michael Novak, and John T. Noonan were among the young up-and-comers at Harvard during those years, and out of that group came my first book, Christianity Divided (1961), one of the first ecumenical collections of Protestant and Catholic essays. Shortly thereafter, I was asked to serve as an assistant to Christopher Dawson, a historian of religion and culture and the first holder of the Chauncey Stillman chair in Roman Catholic Studies at the Divinity School. His writings focused on the role of religion, particularly Christianity, as a necessary foundation for viable cultures.

Dawson was an odd but interesting person. He once said to me that “the world ended when the queen died,” meaning Queen Victoria. He was about as old-fashioned and rooted in the past as that statement might suggest, but he was a lucid and interesting historian. He graduated from Oxford prior to World War I and saw many of his friends and classmates killed in the war. He had never taught in England, just wrote, and Harvard was his first and only academic appointment.

I was brought in because his teaching style was soporific—even mind-numbing. When students asked him questions, he usually answered them with some mumbled words at most, and sometimes just one word. “Did the Reformation have some economic roots?” he was once asked. He was silent, pondering for about a minute, and then answered, “Yes.” Just yes. It was my job to liven things up, to elaborate on his monosyllables, to Americanize his seminar. I knew nothing whatever about the history of religion, but somehow he thought I did, and I scrambled every day to skim history books and find out what he was talking about. My experience with Dawson instilled in me an interest in the shaping force of culture, which I thereafter brought into all my writing.

I might add that the Divinity School was treated with maximum scorn in the philosophy department, as in most of the university. I thought that the theologians had all the interesting questions about life, but no methodology of any great value in answering them, and that the philosophers had great methodologies to answer uninteresting questions. Neither judgment was quite true—and many scientists thought that their way of thinking was better than that of both theology and philosophy—but the contrast between those two distant worlds was enormous. Aiming to stay out of trouble, I never mentioned in the philosophy department that I was moonlighting in that place.

I finished my days at Harvard by writing a dissertation on the eighteenth-century Irish-English bishop and philosopher George Berkeley. In his dialogue Alciphron, Berkeley held that the language of religion and ethics does not describe the world, but rather moves us to feeling and action. That was an adventurous view in his day, at a time when it was thought, especially by John Locke, that language was primarily used to name objects. Berkeley was in effect a precursor to Ludwig Wittgenstein, who famously said, “Don’t look to the meaning but to the uses of a word.” As a philosopher Berkeley provided me with two crucial models: he had a masterfully clear and elegant writing style, and he was a good person, beloved and revered.

Nonetheless, if my life in the philosophy department was unsatisfactory, some attitudes I developed at Harvard have endured. One is an abiding suspicion of the naïve view that clean and pure reason is the answer to everything, a view that is only a shade removed from a belief that science is the only reliable source of knowledge.

While not quite dead, the idea that philosophy is meant to be a search for wisdom had fallen on hard times. As someone who has always been drawn to literature, history, and the social sciences, and at one time to religion, I had effectively been inoculated against hard-core, antiseptic rationalism. The most glaring deficiency in the analytic moral philosophy of the 1940s and ’50s was a total absence of any interest in how we should live and shape our lives. “Virtue theory” had a number of theological adherents but few in philosophy. There has been some revival of that tradition since then.

If I was put off by the pretense of rationalism, I was no less alienated from the aggressive secularism that went with it. Even less understandable was a lack of interest in even trying to understand religion. (Susan Sontag was an exception. She once asked me to talk at length with her about my religious beliefs.) Yet if “dogmatic religion,” as it was often called, had been put harshly aside, dogmatism of a different kind had not been. The secularism I encountered at Harvard was as rigid in its beliefs and mores as the most dogma-bound Roman Catholicism, only less aware of its own blind spots. Those believers in freedom and tolerance had little tolerance for people outside their circle. Something worse also became apparent: an inflated sense of what personal freedom meant in the moral sphere for secular true believers, a firm conviction that the only binding moral obligations are those voluntarily chosen, not those thrust upon us by life in common with others.

Social-contract theories of ethics going back to Thomas Hobbes hold that societies are glued together by our mutual understanding that the common welfare requires accepting law, moral rules, and government oversight. I would add that a moral life requires that we accept and endure many burdens that are thrust upon us by accident, outside the social contract.

Although I did not know it at the time, these experiences and events were laying the foundation for my work in bioethics. I was brought up in a conventional kind of Catholicism: be virtuous, do good, love thy neighbor, be chaste, feed the poor, and visit the sick. One was expected to do well economically but not to aspire to great wealth or a fast-track life. Though the term was not in use in those days, it seems fair to say that the model lifestyle was “laid back”: not too ambitious or materialistic, and at all times nice, not pushy. The kind of person you are is what counted. That was a good legacy from my youth.

I can recall no mention whatsoever in my religious upbringing of some ethical issues that would preoccupy me later: contraception and abortion, welfare policy, just warfare, and the eradication of racism. Still, I think my Catholic upbringing gave me the helpful advantage of being an outsider in the Protestant cultures of Yale and Harvard, whose academic departments had become heavily secularized. At Harvard there was the double-whammy of Protestants and secular Jews (who had their own problems with the Protestant culture), both of whom were suspicious of Catholicism. But that background forced me to see the larger society and the smaller academic world from a different angle.

To be fair, my Harvard education did teach me that good work in ethics requires care and diligence. Still, too many major figures and important ideas in the history of philosophy were skipped and often scorned. The not-so-hidden belief that philosophy really began only with the Oxbridge analytic movement (but grandfathering in Locke, Kant, and the eighteenth-century British empiricists), and that philosophers were now clearing up centuries of confusion and muddled language, closed rather than opened the mind. The belief in objective, detached, and impersonal rationality was simply naïve, but seductively so.

The whole idea of shaping and living a moral life open to the widest range of human experience had been abandoned at the altar of narrowly focused disciplinary goals and a view of ethics deprived of interest in the ancient Greek admonition to “know thyself.” (I would add, “Know thy culture.”) As an editor of the distinguished British journal Philosophy once said scornfully of another philosopher admired by my Hastings Center colleague Willard Gaylin, “He was a very good philosopher until he got interested in ‘wisdom.’” No one I knew on the Harvard philosophy faculty could have been accused of that error.

Looking back, I realize that if losing my religious faith put me into the secular camp, having once had religion gave me the advantage of bringing to contemporary ethical debates a sympathy that many of my secular colleagues lack. Mere rationalism is too thin and narrow an understanding of ourselves and of human life as a whole. My lost faith still keeps me from falling into the lockstep secularists get themselves into, particularly if they entertain the delusion that they are more rational than the rest of us.

Daniel Callahan, a former Commonweal editor, is president emeritus of the Hastings Center and the author of What Price Better Health? Hazards of the Research Imperative.


Published in the November 9, 2012 issue.