Moral philosophers, naturally enough, want to make ethics look good. They want to make people look capable and respectable as moral agents. But they are working with difficult material. Humans are not as tidy, as thoughtful, or as disciplined as the moralists would like them to be.
Not only that, but modern ethics builds on foundations established centuries—or, in the case of Aristotle, Moses, and Jesus, millennia—before anyone knew much about the springs and mechanisms of human conduct. Cognitive psychologists, neuroscientists, and evolutionary theorists have uncovered quite a bit in the last few decades, and at first sight what they have discovered does not look good for the conventional picture. The traits we call “virtues” do not reliably generate actions of the kinds that we value. The thoughts we call “intuitions” evince emotional responses unhappily skewed by our evolutionary history. The more we learn about the sources of our actions and judgments, the harder the task of connecting these modes of behavioral and sentimental responsiveness with the careful thinking about values and principles that moralists do for a living and that they urge upon everyone else.
How should moral philosophers react to all this? One response is to batten down the hatches and reaffirm the independence of our discipline from psychology. Around the end of the nineteenth century, philosophy thought it could work itself pure through “anti-psychologism”—by distancing itself from psychology. This has been a fruitful dissociation in areas like logic, but should we expect it to be helpful in ethics? Ethics is devoted to the evaluation of actions and motives; not only that, but it tries to construct a system of evaluation that is based upon, but also used to discipline, the way we respond to the dilemmas of ordinary life. Trying to separate all that from psychology seems, in an image used by Kwame Anthony Appiah, “like trying to peel a raspberry.”
Appiah’s new book, Experiments in Ethics, counsels against the insular approach. Professor Appiah is the former director of the Center for Human Values at Princeton and one of our most imaginative writers on topics like culture, values, and individual identity. His previous books include Cosmopolitanism: Ethics in a World of Strangers (2006), The Ethics of Identity (2005), and In My Father’s House: Africa in the Philosophy of Culture (1992). He is a philosopher but one not bound by any disciplinary straitjacket; he succeeds in what he does by inviting his readers to stand back with him from the preoccupations of any particular style of theorizing. Experiments in Ethics is based on a set of lectures that Appiah delivered at Bryn Mawr College in 2005. The lightness of his lecture style has been preserved in the book and that, together with his determination to make analytic moral philosophy the topic rather than the method of his study, has given us a wry and engaging account of the challenge that psychology poses to ethics. Whether that lightness of touch is adequate to respond to the challenge is another matter.
How should we think about ethics if we’re not committed to a wall of separation between psychology and moral philosophy? Appiah begins with what he calls the psychologist’s “case against character.” Consider the concept of virtue—a trait of character like honesty, generosity, or courage that is supposed to generate a consistent pattern of conduct across a range of situations. A lot of moral philosophers have invested heavily in this idea: they think that “virtue ethics” is a better bet than either Kantian principles or utilitarian calculations. But Appiah warns that the evidence we have from experimental psychology suggests that the claims made for virtue may be overstated.
Rosalind Hursthouse, for example, one of our leading theorists of virtue in moral philosophy, says this about the virtue of honesty:
We expect [honest people] to disapprove of, to dislike, and to deplore dishonesty,…to choose, where possible, to work with honest people and have honest friends, to be bringing up their children to be honest…. We expect them to be distressed when those near and dear to them are dishonest,… not to be amused by certain tales of chicanery, to despise rather than to envy those who succeed by dishonest means….1
Virtue theorists believe that the disposition to act and react courageously or honestly is deeply entrenched in a person’s character. As Appiah describes their position, a virtue is supposed to be something that “goes all the way down,” enmeshing itself with other aspects of character, equally admirable, and affecting what a person wants out of life, her conception of happiness, and her view of other people.
Are there such virtues? Well, the psychologists that Appiah has read report that character traits do not exhibit the “cross-situational stability” that virtue presupposes. He cites a study of ten thousand American schoolchildren in the 1920s, which showed that they were willing to lie and cheat in school and at play in ways that did not correlate with any measurable personality traits. It is not that the children cheated whenever they could get away with it; they cheated sometimes and in some settings (when they could get away with it) and not other times or in other settings (when they could get away with it). “The child who wouldn’t break the rules at home, even when it seemed nobody was looking, was no less likely [than other children] to cheat on an exam at school.” There was none of the consistent and comprehensive honesty, “all the way down,” that virtue ethics seems to presuppose.
This seems to be true for other virtues too: helpfulness or charity, for example. With respect to them, studies cited by Appiah show that people act in ways that seem vulnerable to odd and unseemly differences in circumstance. If you accidentally drop your papers outside a phone booth, the best predictor of whether people will help you pick them up is whether they have just discovered a dime in the phone’s coin-return slot: six out of seven of the dime-finders will help as opposed to one in twenty-five of everyone else. If you need change for a dollar, stand outside a bakery: the warm smell of fresh-baked bread makes a huge difference to the kindness of strangers. The beneficiaries will probably say of anyone who came to their assistance, “What a helpful person,” little suspecting that tomorrow when the bakery is shut down and there is nary a dime in the phone booth, the selfsame person will be as mean-spirited as everyone else.
What should we make of all this? One objection is that the psychological studies treat virtues and traits of character as though they are reducible to a set of behaviors that can be counted by a psychologist with a clipboard. In the generosity studies, is the only information we want information about who helped and who didn’t? What about what people said to themselves or to each other as they went past the bakery or the phone booth? A virtue is a disposition to think and talk and evaluate in a certain way, not just a disposition to behave. Appiah acknowledges this—“virtues are more than simple dispositions to do the right thing”—but he doesn’t explore its implications for the wider issue he is considering: Are we evaluating the isolation of ethics from an ideal psychology or are we evaluating its isolation from the reductionist behavioral psychology that we actually have?
Also, virtues are supposed to be acquired, not innate, characteristics, acquired (if Aristotle is to be believed) by hard training over decades.2 So what should we infer from the study of the schoolchildren in the 1920s—that there is no such thing as the virtue of honesty or that the virtue of honesty in a child is a rather ragged and sketchy work-in-progress?
I don’t think Appiah is unaware of these points, but he doesn’t credit them as grounds for dismissing the studies. To him, the generosity experiments show that
a lot of what people do is best explained not by traits of character but by systematic human tendencies to respond to features of their situations that nobody previously thought to be crucial at all.
A key question is whether the same result holds for more serious situations. I ask this because the bakery and phone booth studies border on the trivial. It is hardly a moral requirement to help someone make change or pick up papers that have been clumsily dropped: even people who are paragons of generosity might choose to do this on some occasions and not others. It is certainly not surprising that the discharge of a trivial obligation might vary on the basis of trivial circumstances.
Appiah does consider some cases where generosity is more urgently called for. A well-known study reports that Princeton seminary students, coming from a discussion of the Good Samaritan, were six times less likely to stop to help someone “slumped in a doorway, apparently in some sort of distress,” if they’d been told they were late for an appointment. The point here is not just to confirm anecdotal evidence, with which we are all already familiar, about respectable people failing to help others in distress. (There was an incident last year in Hartford, Connecticut, where an old man lay critically injured in the street after a hit-and-run accident and a number of cars drove by him without stopping to help.3) Nor is it to show that we are bad, selfish, or self-absorbed. It is to show that in some situations such selflessness as we have is deeply vulnerable to being distracted or displaced.
What about other virtues in cases where the stakes are high? The Stanley Milgram experiments from the 1960s and the Stanford prison experiments from the 1970s showed alarming evidence of people’s willingness, on instructions from the psychologists in charge, to inflict pain in various role-playing experiments.4 But there were some who resisted. Do we know whether the virtues that enabled some to resist the temptation to abuse their authority in the prison experiments, or to refuse compliance with plainly immoral instructions in Milgram’s experiment, exhibit the same haphazardness as generosity and helpfulness did in the more trivial studies? Do the experiments that Appiah describes tell us anything about real-world virtues in situations of great danger like the Oliners’ studies of the attributes of “rescuers” in Europe during the Holocaust (who were more likely than nonrescuers to describe themselves as religious, more likely to have been involved in friendships with a diverse array of people, less likely to be distrustful of outsiders, less likely to be preoccupied with their own autonomy, and so on)?5 If questions like these were pursued, one would come away with a stronger impression that Appiah takes the experimental challenge seriously.
But his discussion turns out to be more lighthearted than that. Appiah wonders whether we might consider moving to an enlarged set of virtues, ones where susceptibility to trivial distractions doesn’t matter so much. “The index for [Rosalind] Hursthouse’s On Virtue Ethics contains entries for honesty, control, charity, compassion, and wisdom,” says Appiah. “None for humor, wit, conviviality, originality, raconteurship, or love.” The term “experiments in ethics” has a double meaning in Appiah’s book: besides the psychological studies, it refers also to Appiah’s recommendation that we should try on some new and enjoyable virtues and see which ones fit. I am tempted to say that a book that suggests substituting wit and conviviality for honesty and moral courage was perhaps just teasing us with the challenge from psychology in the first place.
A more serious suggestion is that we should place greater reliance on institutions and the socially constructed environment “so that human beings,” as Appiah’s Princeton colleague Gilbert Harman writes, “…are not placed in situations in which they will act badly.” I am not sure what this would involve for the examples Appiah gave us—a bakery on every corner to elicit charitable conduct? A dime in every phone booth? It certainly won’t do for the more serious studies I mentioned. For these concern the role that virtue has to play in the absence of institutions (on the streets of Hartford until the ambulance arrives), or in the construction of institutional relationships (the Stanford prison experiment), or in the face of institutions that are abusive or evil (rescuers in Europe in the 1940s). Institutions alone are not enough. We need people sometimes to be steadfast in their moral character in the most adverse circumstances, and if I read him right, Appiah or the psychologists he cites are at least entertaining the possibility that this steadfastness may not be available.
In any case, laws and institutions have to be designed and constructed and for that we need a prior sense of moral purpose and moral principle among their framers. But where do these principles come from? Where do we get the sense of right and wrong that can help us construct and maintain the fabric of social life?
A common answer among philosophers is “intuition”—what Henry Sidgwick called “the spontaneous unreflected judgments” that come to us when we confront particular scenarios. These are supposed to be the starting points of moral philosophy. (Starting points, though certainly not the end points: for Sidgwick and others insisted that the crucial thing was not the intuitions themselves but what we did with them—reflect on them, challenge them, think them through.) What does psychology tell us about intuitions?
What it seems to tell us, says Appiah, is that our capacity for immediate moral judgment is flawed and unreliable. Just like the exercise of virtue, intuitions are “determined in part by things that cannot possibly be relevant.”
Consider this example, beloved of modern philosophers. Suppose there is a runaway trolley car on a line that cannot be stopped, but it can be switched onto a branch line. If it runs on the branch line, one person working on that line will be killed. But if we leave it on the main line, five people will be killed. Is it right to throw the switch? It turns out that more people will say yes if I reverse the order and tell them about the five people in danger on the main line first.
Again, the first thing one wants to say is that the trolley problem may indicate the limits of the experimental method. We can test for immediate reactions, or we can test for thoughtful or considered reactions. “Spontaneous unreflected judgments” are often revised on further consideration. So which should we identify as the “intuition”—the quick response or the considered judgment? As in the case of the virtues, the psychologists test for simple answers that can be easily coded. But what if they were to test for paragraph-length opinions, answers that enabled people to display their hesitations or explain or qualify their response? Why wouldn’t that be equally valid? Psychologists might find such responses harder to deal with, harder to count. But surely that is psychology’s problem, not morality’s.
Having said all that, I am conscious that we must not assume too much in all this talk of reflection. Our spontaneous judgments stand in need of examination: that is clear. But if we respond to the studies that Appiah cites by saying that all they show is that we need to examine our spontaneous judgments using our existing moral sensibility, then we beg the question of the origin and reliability of that sensibility itself. The modes of moral thoughtfulness that we bring to bear on our spontaneous responses—where do they come from? What are they built up out of?
I wish Appiah or the psychologists he cites had more to say about this. The frustrating thing about Experiments in Ethics is that it sometimes takes the psychologists’ challenge too seriously and at other times not seriously enough. Appiah takes it too seriously when he exaggerates the immediate behavioral response and neglects the role of reflection. But to the extent that he sometimes suggests that thought and reflection can redeem morality from the psychologists’ challenge, Appiah’s analysis is incomplete: for surely we then have to look at what—if anything—experimental studies can tell us about the psychology of thought and reflection.
There is a variation on the trolley problem. Suppose there is no branch line, but a very fat man standing on a footbridge over the trolley line. If we push him off the bridge and onto the line, he will be killed but the weight of his body will stop the trolley car from killing the other five. Should we do that? Many who say “yes” to the first scenario say “no” to the second. Some philosophers say this vindicates the traditional Catholic “Doctrine of Double Effect.” (That doctrine holds, for example, that while a medical procedure to save a pregnant mother’s life is not absolutely forbidden simply on the ground that it may result in the death of her fetus, it is forbidden to actually set about killing the fetus in order to save the mother’s life.) Under the Doctrine of Double Effect, killing the fat man to stop the tram is not permissible, but switching the tram with the result that somebody dies may well be. That is what our intuitions are supposed to show.
Psychologists tell us, however, that the difference in response to the two cases is explained in large part by the fact that imagining pushing someone in front of the tram is emotionally much more salient to us humans than just throwing a switch. According to Appiah, magnetic resonance imaging shows that different parts of our brain are activated when imagining pushing someone: regions associated with emotion light up; regions associated with memory and cognitive processing are suppressed. Moreover, the psychologists say that this neural phenomenon can be perfectly well explained. If there is any moral dimension to our evolution, we have probably evolved with hard-wired emotional inhibitions directed at the physical interactions involved in pushing a person to his certain death. And those inhibitions have probably not evolved in a way that would make them vulnerable to further thoughts about tram lines and five other people out of sight. Accepting the evolutionary account does not necessarily discredit the moral response that it explains. But it might lead us to be skeptical about investing too much in that response, for, as Appiah puts it, the study seems to show that
our brains deliver these intuitions to us in response to the wrong question: How physically close is the person I’d have to kill? The cosmic engineer may have made us that way for a good purpose. But no responsible person thinks that the distance between a permissible and an impermissible homicide could be measured by a hundred yards of track.
Once again an immediate fascination with the results that Appiah reports yields to an unease about inferring too much from them. Our spontaneous judgments are sometimes distorted; is that any more alarming than the fact that our visual sense is vulnerable to optical illusions? In the case of vision, we can explain the illusion and do things like ask the subject to measure the lines that he perceived initially as unequal in length. And it is tempting to say that we can also correct for ethical illusions. If we are concerned by what the psychologist tells us about our judgments in the trolley case, why not just take that data as a useful corrective to our immediate intuitions? Why not adopt a rule that says “Always insist on more than one description of a difficult situation before deciding what to do”?
Unfortunately, things are not that simple. In the optical case, we use other, more reliable evidence gathered from our visual sense to offset the particular illusion. But what is the equivalent for the moral case? Is it just a matter of bringing additional spontaneous judgments to be compared with the first set of spontaneous judgments? And what would make that process genuinely “reflective” as opposed to a sort of majority vote among our intuitions? The worry is that any plausible-sounding answer presupposes that we already have what these studies are calling into question: a reliable moral sensibility that we can use to sift our judgments.
Evidently we need to think hard about what we mean when we say that spontaneous judgments must be made the object of serious moral reflection. And certainly this hard thinking about moral thinking ought to include whatever the psychologists can tell us about reflective thought itself. I don’t mean that the psychologists will necessarily discredit reflective moral thought; but we are not entitled to assume that such thought has exactly the rationality that it claims for itself, any more than we were entitled to assume that in the case of the spontaneous judgments.
Perhaps because of this book’s origin as a set of lectures—explorative rather than systematic—it is hard to keep track of what Professor Appiah really thinks about the implications of the experimental studies. Sometimes, as I have said, he seems to overestimate the damage they do to our conceptions of ethics. Other times he suggests that they need not worry us at all. The studies tell us, Appiah says, about the causes and the conditions of our moral judgments; psychologists tell us how our capacities for moral judgment work as part of the natural world. But in making these judgments we also constitute another world for ourselves—a moral world—that sets its own standards and that is not answerable to nature. “Our moral world,” says Appiah, “is both caused and created, and its breezes carry the voices of both explanations and reasons.”
The fact that the raw materials of moral norms have psychological and evolutionary causes neither affirms nor discredits the business of building a moral world. The raw materials are what they are, and we make of them what we choose. Or as Appiah puts it, “nature taught our ancestors to walk; we can teach ourselves to dance.” Any affirmation of our values or any discrediting of them must be a move within the evaluative world itself, not an intervention from outside. We can of course consider morally what to make of the information revealed by the psychological studies, and no doubt we should. But (logically) we should be careful about using the studies to discredit the whole business of moral evaluation.
That is fair enough. And Appiah is right to insist that describing the causes and explanations of our reactive attitudes doesn’t “undermine our higher level moral appraisals of our first-order moral appraisals.” But no amount of fancy footwork between first- and second-order appraisals and between the natural world and the world constituted by our morals precludes the possibility that a psychology of our higher-level moral appraisals might discredit the processes by which we reflect on our reactive attitudes. Not necessarily, of course; but we won’t know until we are in possession of the relevant psychology. And we don’t get that from this book.
Experiments in Ethics is a delightful piece of writing, replete with stories, jokes, anecdotes, and insights. However, at times I found myself wishing it were either a little more grim in its acceptance of the psychological challenge or a little more dismissive of the studies it discusses. Appiah has fun toying with both alternatives; but I find he doesn’t do justice to either.
It is pretty clear by the end of the book that he is not really willing to entertain any radical change in the way that we moralize. It seems to be a premise of the final chapters that any significant alteration in our modes of moral judgment (as opposed to in their contents) is out of the question as a response to what the psychological studies have shown. Whatever they show, he implies, we have no choice but to go on “righting” and “oughting” pretty much as we did before we allowed the psychologists over the fence. We have lives to lead and decisions to make and there is no choice but to continue evaluating our own and others’ conduct against abstract formulations of virtue, principle, and value.
But that response is too complacent. Things change. We no longer evaluate conduct in the way we used to (say, a hundred years ago). We don’t talk of wrong action as “forbidden”; ethics now has a problem-filled rather than a straightforward relation to religion; and we no longer say, as Lord Acton once said, “Opinions alter, manners change, creeds rise and fall, but the moral law is written on the tablets of eternity.”6 We choose and evaluate not just different things, but in a different way; and it is not unimaginable that we could make further changes if we were convincingly shown that our concepts of virtue and principle were as muddled or as bereft of good sense as the concepts of divine command and eternal law are now reputed to be.
Was a change like that ever considered as a response to what psychology might show us? I don’t think so—not in Appiah’s book. He says we should listen to what the psychologists say, but in the end, the vaunted autonomy of ethics is undented. The psychological critique of virtue, we are told, “doesn’t…undermine the claim that it would be better if we were compassionate people, with a persistent, multi-track disposition to acts of kindness.” And the psychologist’s critique of the building blocks of intuition doesn’t shake Appiah’s confidence in moral reflection.
What he says by way of a conclusion to Experiments in Ethics is no doubt inspiring: “Our having a life to make is what our humanity consists in” and “in deciding whether to try to reshape ourselves into new habits, we hold ourselves up to this standard: Can I live with this evaluation and be the person I am trying to be?” But most of this advice depends on our having forgotten about the studies a hundred pages earlier supposedly showing that the best-inculcated ethical dispositions and the raw materials of our considered moral judgments are haphazard and unreliable. At the beginning of the book, we were given the impression that we ought to take the studies seriously, and we examined them critically as though something depended on our response. Were we really just wasting our time—or is there something in our instinctive makeup still worth worrying about?
October 8, 2009
1. Rosalind Hursthouse, On Virtue Ethics (Oxford University Press, 1999), pp. 11–12.
2. Aristotle, Nicomachean Ethics, Books 2 and 10.
3. See Steven Goode, Tina A. Brown, and Jeffrey B. Cohen, “‘So Inhumane’: Police Chief Decries City Residents’ Callousness After Hit-and-Run Victim Lies Unaided on Busy Street…,” Hartford Courant, June 5, 2008.
4. See Stanley Milgram, Obedience to Authority: An Experimental View (HarperCollins, 1974) and C. Haney, W.C. Banks, and P.G. Zimbardo, “Interpersonal Dynamics in a Simulated Prison,” International Journal of Criminology and Penology, Vol. 1 (1973).
5. Samuel P. Oliner and Pearl M. Oliner, The Altruistic Personality: Rescuers of Jews in Nazi Europe (Free Press, 1988).
6. Quoted from Lord Acton’s Inaugural Lecture of 1895 by Jonathan Glover, Humanity: A Moral History of the Twentieth Century (Yale University Press, 1999), p. 1.