
Right and Wrong: Psychologists vs. Philosophers

A more serious suggestion is that we should place greater reliance on institutions and the socially constructed environment “so that human beings,” as Appiah’s Princeton colleague Gilbert Harman writes, “…are not placed in situations in which they will act badly.” I am not sure what this would involve for the examples Appiah gave us—a bakery on every corner to elicit charitable conduct? A dime in every phone booth? It certainly won’t do for the more serious studies I mentioned. For these concern the role that virtue has to play in the absence of institutions (on the streets of Hartford until the ambulance arrives), or in the construction of institutional relationships (the Stanford prison experiment), or in the face of institutions that are abusive or evil (rescuers in Europe in the 1940s). Institutions alone are not enough. We need people sometimes to be steadfast in their moral character in the most adverse circumstances, and if I read him right, Appiah or the psychologists he cites are at least entertaining the possibility that this steadfastness may not be available.

In any case, laws and institutions have to be designed and constructed and for that we need a prior sense of moral purpose and moral principle among their framers. But where do these principles come from? Where do we get the sense of right and wrong that can help us construct and maintain the fabric of social life?

A common answer among philosophers is “intuition”—what Henry Sidgwick called “the spontaneous unreflected judgments” that come to us when we confront particular scenarios. These are supposed to be the starting points of moral philosophy. (Starting points, though certainly not the end points: for Sidgwick and others insisted that the crucial thing was not the intuitions themselves but what we did with them—reflect on them, challenge them, think them through.) What does psychology tell us about intuitions?

What it seems to tell us, says Appiah, is that our capacity for immediate moral judgment is flawed and unreliable. Just like the exercise of virtue, intuitions are “determined in part by things that cannot possibly be relevant.”

Consider this example, beloved of modern philosophers. Suppose there is a runaway trolley car on a line that cannot be stopped, but it can be switched onto a branch line. If it runs on the branch line, one person working on that line will be killed. But if we leave it on the main line, five people will be killed. Is it right to throw the switch? It turns out that more people will say yes if I reverse the order and tell them about the five people in danger on the main line first.

Again, the first thing one wants to say is that the trolley problem may indicate the limits of the experimental method. We can test for immediate reactions, or we can test for thoughtful or considered reactions. “Spontaneous unreflected judgments” are often revised on further consideration. So which should we identify as the “intuition”—the quick response or the considered judgment? As in the case of the virtues, the psychologists test for simple answers that can be easily coded. But what if they were to test for paragraph-length opinions, answers that enabled people to display their hesitations or explain or qualify their response? Why wouldn’t that be equally valid? Psychologists might find such responses harder to deal with, harder to count. But surely that is psychology’s problem, not morality’s.

Having said all that, I am conscious that we must not assume too much in all this talk of reflection. Our spontaneous judgments stand in need of examination: that is clear. But if we respond to the studies that Appiah cites by saying that all they show is that we need to examine our spontaneous judgments using our existing moral sensibility, then we beg the question of the origin and reliability of that sensibility itself. The modes of moral thoughtfulness that we bring to bear on our spontaneous responses—where do they come from? What are they built up out of?

I wish Appiah or the psychologists he cites had more to say about this. The frustrating thing about Experiments in Ethics is that it sometimes takes the psychologists’ challenge too seriously and at other times not seriously enough. Appiah takes it too seriously when he exaggerates the immediate behavioral response and neglects the role of reflection. But to the extent that he sometimes suggests that thought and reflection can redeem morality from the psychologists’ challenge, Appiah’s analysis is incomplete: for surely we then have to look at what—if anything—experimental studies can tell us about the psychology of thought and reflection.

There is a variation on the trolley problem. Suppose there is no branch line, but a very fat man standing on a footbridge over the trolley line. If we push him off the bridge and onto the line, he will be killed but the weight of his body will stop the trolley car from killing the other five. Should we do that? Many who say “yes” to the first scenario say “no” to the second. Some philosophers say this vindicates the traditional Catholic “Doctrine of Double Effect.” (That doctrine holds, for example, that while a medical procedure to save a pregnant mother’s life is not absolutely forbidden simply on the ground that it may result in the death of her fetus, it is forbidden to actually set about killing the fetus in order to save the mother’s life.) Under the Doctrine of Double Effect, killing the fat man to stop the tram is not permissible, but switching the tram with the result that somebody dies may well be. That is what our intuitions are supposed to show.

Psychologists tell us, however, that the difference in response to the two cases is explained in large part by the fact that imagining pushing someone in front of the tram is emotionally much more salient to us humans than just throwing a switch. According to Appiah, magnetic resonance imaging shows that different parts of our brain are activated when imagining pushing someone: regions associated with emotion light up; regions associated with memory and cognitive processing are suppressed. Moreover, the psychologists say that this neural phenomenon can be perfectly well explained. If there is any moral dimension to our evolution, we have probably evolved with hard-wired emotional inhibitions directed at the physical interactions involved in pushing a person to his certain death. And those inhibitions have probably not evolved in a way that would make them vulnerable to further thoughts about tram lines and five other people out of sight. Accepting the evolutionary account does not necessarily discredit the moral response that it explains. But it might lead us to be skeptical about investing too much in that response, for, as Appiah puts it, the study seems to show that

our brains deliver these intuitions to us in response to the wrong question: How physically close is the person I’d have to kill? The cosmic engineer may have made us that way for a good purpose. But no responsible person thinks that the distance between a permissible and an impermissible homicide could be measured by a hundred yards of track.

Once again an immediate fascination with the results that Appiah reports yields to an unease about inferring too much from them. Our spontaneous judgments are sometimes distorted; is that any more alarming than the fact that our visual sense is vulnerable to optical illusions? In the case of vision, we can explain the illusion and do things like ask the subject to measure the lines that he perceived initially as unequal in length. And it is tempting to say that we can also correct for ethical illusions. If we are concerned by what the psychologist tells us about our judgments in the trolley case, why not just take that data as a useful corrective to our immediate intuitions? Why not adopt a rule that says “Always insist on more than one description of a difficult situation before deciding what to do”?

Unfortunately, things are not that simple. In the optical case, we use other, more reliable evidence gathered from our visual sense to offset the particular illusion. But what is the equivalent for the moral case? Is it just a matter of bringing additional spontaneous judgments to be compared with the first set of spontaneous judgments? And what would make that process genuinely “reflective” as opposed to a sort of majority vote among our intuitions? The worry is that any plausible-sounding answer presupposes that we already have what these studies are calling into question: a reliable moral sensibility that we can use to sift our judgments.

Evidently we need to think hard about what we mean when we say that spontaneous judgments must be made the object of serious moral reflection. And certainly this hard thinking about moral thinking ought to include whatever the psychologists can tell us about reflective thought itself. I don’t mean that the psychologists will necessarily discredit reflective moral thought; but we are not entitled to assume that such thought has exactly the rationality that it claims for itself, any more than we were entitled to assume that in the case of the spontaneous judgments.

Perhaps because of this book’s origin as a set of lectures—explorative rather than systematic—it is hard to keep track of what Professor Appiah really thinks about the implications of the experimental studies. Sometimes, as I have said, he seems to overestimate the damage they do to our conceptions of ethics. Other times he suggests that they need not worry us at all. The studies tell us, Appiah says, about the causes and the conditions of our moral judgments; psychologists tell us how our capacities for moral judgment work as part of the natural world. But in making these judgments we also constitute another world for ourselves—a moral world—that sets its own standards and that is not answerable to nature. “Our moral world,” says Appiah, “is both caused and created, and its breezes carry the voices of both explanations and reasons.”

The fact that the raw materials of moral norms have psychological and evolutionary causes neither affirms nor discredits the business of building a moral world. The raw materials are what they are, and we make of them what we choose. Or as Appiah puts it, “nature taught our ancestors to walk; we can teach ourselves to dance.” Any affirmation of our values or any discrediting of them must be a move within the evaluative world itself, not an intervention from outside. We can of course consider morally what to make of the information revealed by the psychological studies, and no doubt we should. But (logically) we should be careful about using the studies to discredit the whole business of moral evaluation.

That is fair enough. And Appiah is right to insist that describing the causes and explanations of our reactive attitudes doesn’t “undermine our higher level moral appraisals of our first-order moral appraisals.” But no amount of fancy footwork between first- and second-order appraisals and between the natural world and the world constituted by our morals precludes the possibility that a psychology of our higher-level moral appraisals might discredit the processes by which we reflect on our reactive attitudes. Not necessarily, of course; but we won’t know until we are in possession of the relevant psychology. And we don’t get that from this book.

Experiments in Ethics is a delightful piece of writing, replete with stories, jokes, anecdotes, and insights. However, at times I found myself wishing it were either a little more grim in its acceptance of the psychological challenge or a little more dismissive of the studies it discusses. Appiah has fun toying with both alternatives; but I find he doesn’t do justice to either.

It is pretty clear by the end of the book that he is not really willing to entertain any radical change in the way that we moralize. It seems to be a premise of the final chapters that any significant alteration in our modes of moral judgment (as opposed to in their contents) is out of the question as a response to what the psychological studies have shown. Whatever they show, he implies, we have no choice but to go on “righting” and “oughting” pretty much as we did before we allowed the psychologists over the fence. We have lives to lead and decisions to make and there is no choice but to continue evaluating our own and others’ conduct against abstract formulations of virtue, principle, and value.

But that response is too complacent. Things change. We no longer evaluate conduct in the way we used to (say, a hundred years ago). We don’t talk of wrong action as “forbidden”; ethics now has a problem-filled rather than a straightforward relation to religion; and we no longer say, as Lord Acton once said, “Opinions alter, manners change, creeds rise and fall, but the moral law is written on the tablets of eternity.”6 We choose and evaluate not just different things, but in a different way; and it is not unimaginable that we could make further changes if we were convincingly shown that our concepts of virtue and principle were as muddled or as bereft of good sense as the concepts of divine command and eternal law are now reputed to be.

Was a change like that ever considered as a response to what psychology might show us? I don’t think so—not in Appiah’s book. He says we should listen to what the psychologists say, but in the end, the vaunted autonomy of ethics is undented. The psychological critique of virtue, we are told, “doesn’t…undermine the claim that it would be better if we were compassionate people, with a persistent, multi-track disposition to acts of kindness.” And the psychologist’s critique of the building blocks of intuition doesn’t shake Appiah’s confidence in moral reflection.

What he says by way of a conclusion to Experiments in Ethics is no doubt inspiring: “Our having a life to make is what our humanity consists in” and “in deciding whether to try to reshape ourselves into new habits, we hold ourselves up to this standard: Can I live with this evaluation and be the person I am trying to be?” But most of this advice depends on our having forgotten about the studies a hundred pages earlier supposedly showing that the best-inculcated ethical dispositions and the raw materials of our considered moral judgments are haphazard and unreliable. At the beginning of the book, we were given the impression that we ought to take the studies seriously, and we examined them critically as though something depended on our response. Were we really just wasting our time—or is there something in our instinctive makeup still worth worrying about?

  6. Quoted from Lord Acton’s Inaugural Lecture of 1895 by Jonathan Glover, Humanity: A Moral History of the Twentieth Century (Yale University Press, 1999), p. 1.
