
Winner Take Less

It is so hard to make important decisions that we have a great urge to reduce them to rules. Every moral teacher or spiritual adviser gives injunctions about how to live wisely and well. But life is so complicated and full of uncertainty that rules seldom tell us quite what to do. Even the more analytically minded philosophers leave us in quandaries. Utilitarians start with Jeremy Bentham’s maxim, that we should strive for the greatest happiness of the greatest number of people. How do we achieve that? Kant taught us that we should follow just those rules of conduct that we would want everybody to follow. Few find this generalization of the golden rule a great help. It may seem that there is one kind of person or group that has no problem: the entirely selfish. Utilitarians find selfish people more interesting than one might expect, for it has been argued that the way to “maximize everyone’s utility” is to have a free market in which everyone acts in his own interests.

Even if you are entirely selfish and know what you want, you still live in a world full of uncertainties and won’t know for sure how to achieve what you want. The problem is compounded if you are interacting with another selfish person with desires counter to yours. Long ago Leibniz said we should model such situations on competitive games. Only after World War II was the idea fully exploited by John von Neumann. The result is called game theory, which quickly won a firm foothold in business schools and among strategic planners. In a recently reprinted essay, “What is Game Theory?” Thomas Schelling engagingly reminds us that we reason “game-theoretically” all the time.1 He starts with an example of getting on the train, hoping to sit beside a friend, or to evade a bore. Given some constraints about reserved seats and uncertainties about what the other person will do, should one go, he asks, to the dining car or the buffet car? You’ll see by the example that this is a vintage piece from 1967, when people were a little more optimistic about game theory than they are now.

The simplest games have two players, you and your friend, perhaps. Each has some strategies; go, for example, to the dining car or to the buffet car. Each has some beliefs about the world, about seating, reservations, and so forth, and each has some uncertainties. Each has some preferences—company over isolation, and dinner with wine, perhaps, over sandwich with beer. The “rational” player tries to maximize his utility by calculating the most effective way to get what he wants. Schelling’s first game is not even competitive; it becomes so when one is trying to avoid the bore.

Game theory is quite good for those games in which winners take everything that is staked, and losers lose all that they stake. The solution—the optimum strategy—is crisply defined for some classes of games. Unfortunately things begin to go awry for more interesting cases. Game theory has not been so good for games in which all parties stand to gain by collaboration. Indeed one of the advantages of moral philosophy over game theory is that moralists give sensible advice to moral agents while game theory can give stupid advice to game theorists.

By now the classic example of this is a little puzzle, invented in 1950: the prisoners’ dilemma. It is an irritating puzzle because problems taking the same general form seem to recur, time and again, in daily life. Here is the classic, abstract, and impossible tale that gives the dilemma its name. Two thieves in cahoots have been caught and await trial. They are locked in isolated cells. The prosecutor goes to each offering a deal. “Confess and implicate the other: if he is meanwhile maintaining innocence, we shall set you free and imprison him for five years. If neither of you confesses, you will both get two years for lack of better evidence. If you both confess, we have both of you, and will give each of you a slightly mitigated four-year term.”

Thief A reasons thus: if thief B does not confess, I go free by confessing. If thief B does confess, then by keeping silent I get five years in jail, while by confessing I get only four. So whatever B does, I had better confess and implicate B. Thief B reasons identically. So both confess, and both get four years. Had both remained silent, each would have got only two years in jail. Game theory teaches each thief to act in his own worst interests.
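
Readers who like to see the arithmetic spelled out can check it in a few lines of Python. The sketch below is mine, not the thieves’ or the game theorists’; the labels “confess” and “silent” are my own shorthand, and the only data it uses are the sentences in the prosecutor’s offer.

    # Years in jail for thief A, indexed by (A's choice, B's choice).
    # The labels and the dictionary are illustrative; the numbers come
    # from the prosecutor's offer in the tale above.
    years_for_A = {
        ("confess", "silent"): 0,   # A implicates B and goes free
        ("silent", "silent"): 2,    # lack of better evidence
        ("confess", "confess"): 4,  # the slightly mitigated term
        ("silent", "confess"): 5,   # the "sucker's payoff"
    }

    # Whatever B does, A serves fewer years by confessing...
    for b_choice in ("silent", "confess"):
        assert years_for_A[("confess", b_choice)] < years_for_A[("silent", b_choice)]

    # ...and yet mutual silence is better for both than mutual confession.
    assert years_for_A[("silent", "silent")] < years_for_A[("confess", "confess")]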

Perhaps the two thieves will take an ethics course during their four years in prison. If they read Bentham, they may still be in doubt how to act so as to maximize the greatest happiness of the greatest number. Since utilitarians are the ancestors of game theorists, they may be in a dilemma, but at first sight maximizing both players’ happiness means not confessing. The golden rule is more transparent: A should act as he hopes B would act; if B follows the same rule, neither confesses and they get a two-year sentence. In their ethics course the thieves will also learn Kant’s categorical imperative: follow only those rules that you can sincerely wish every agent to follow. Kant thought that defined the very essence of the rational moral agent. Certainly it looks rational, while game theory looks irrational. Game theory advises the thieves to spend four years in jail, while the categorical imperative recommends two years.

Kant and Bentham act on fundamentally different moral principles, but had Kant and Bentham, per impossibile, been the two thieves, they would have had the shorter sentence. You may wonder why the business schools teach game theory rather than ethics. The answer is plain. Kant does not know that Bentham is on the other side of the wall. If it is not Bentham but a game theorist, the villain will double-cross and go scot free, while Kant gets what, in the game theoretic jargon, is called the “sucker’s payoff,” namely five years in jail. Not that “sucker” is my choice of words for a philosopher who spends five years in jail for acting as a rational human being.

Decision problems in the form of the prisoners’ dilemma are common enough. They require that two agents—human beings, nations, corporations, or bacteria—can benefit from cooperation. However, one of the two could profit more from “defecting”—not cooperating—when the other is trying to cooperate. If one succumbs to this temptation, he profits while the other loses a lot. But if both succumb, both incur substantial losses (or at any rate gain less than they would have through mutual cooperation).
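
In the standard notation of the literature (not used in the passage above), the temptation to defect, the reward for mutual cooperation, the punishment for mutual defection, and the sucker’s payoff are written T, R, P, and S, and a game counts as a prisoners’ dilemma when T > R > P > S; for the repeated game one usually adds the condition 2R > T + S, so that taking turns exploiting one another cannot beat steady cooperation. A brief sketch, with illustrative numbers of my own choosing:

    # Payoffs to one player, higher being better. The labels are the
    # standard ones from the game-theory literature; the particular
    # numbers are merely illustrative.
    T, R, P, S = 5, 3, 1, 0   # temptation, reward, punishment, sucker

    # What makes a game a prisoners' dilemma:
    assert T > R > P > S

    # The usual extra condition for the repeated game, so that
    # alternating exploitation does not pay better than cooperation:
    assert 2 * R > T + S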

One of Axelrod’s chapters uses a historical example based on a recent book by Tony Ashworth, which describes the trench warfare in which German battalions were pitted against French or British ones.2 The opposed units are in a prisoners’ dilemma. The soldiers themselves would rather be alive, unharmed and free, than dead, maimed, or captured. They also prefer victory to stalemate, but stalemate is better than death or defeat.

Now if neither side shoots there will be a stalemate, but everyone will be alive and well. But if one side does not shoot while the other does, one side will be dead or ignominiously vanquished, while the other will win. If both sides shoot, almost everyone will be dead, and in trench warfare, no one will win.

What to do? Were the soldiers on both sides to follow the insane motto of my maternal clan, Buaidh no bàs (“Victory or death”), there would be much death and no victory. Game theorists on both sides would also, in one encounter, act like a demented highland regiment, and be dead. Ashworth’s book proves, however, what has long been rumored. Private soldiers (as opposed to staff officers) behaved like rational human beings. For substantial periods of the Great War, much cooperative nonshooting arose spontaneously, even though both sides of the line were, presumably, acting selfishly with no serious concern for the other side.

How were soldiers able to act in the best interests of all? German Kantians pitted against British Benthamites? Not at all. Axelrod’s answer is that trench warfare is not a one-shot affair. Every morning you are supposed to get up and shell the other side. Every day there dawns a new prisoners’ dilemma. This is a repeated or “iterated” prisoners’ dilemma, about which game theorists have a curious story to tell.

In Ashworth’s history there was seldom any outright palaver between opposed battalions. Protocols emerged slowly, starting, for example, with neither side firing a shot at dinner time. This silence spread through days and months: effective silence, not actual silence, for the infantrymen and artillery took care to simulate action so they would not be court-martialed and shot by their own side. According to Ashworth, staff officers, worried by the loss of morale caused by no one being killed, finally invented raids and sorties which, according to Axelrod, destroyed the logical structure of the prisoners’ dilemma.

How could such well-understood rational conventions about not shooting arise, when opposed sides did not arrange any explicit treaty? The emergence of such cooperation is one topic of Axelrod’s book. The secret is iterating the prisoners’ dilemma: players are caught in the same situation over and over again, day in, day out, facing roughly the same opponent on each occasion. Repetition makes a difference to people trapped in repeated dilemmas of the same form.

The difference is not that pure game theory sees a way out of its dilemma thanks to repeated plays. There is a rather abstract argument that two game theorists, equipped with a complete strategy for all possible ways in which a sequence of games might go, are still bound to defect on every single game. It arises this way. Suppose both players know that exactly 200 games will be played. Then they reason on the final, 200th, game, as they would on a one-shot affair: both defect. But then on the 199th game, both know that at the next game both will defect, so both defect at 199 as well—and so on. Knowing that there are just 200 games, they defect at every game. The same argument works for any finite number of games (so long as we do not “discount” very distant games overmuch). Hence no matter what finite number of games are played, the game theorists “ought” to defect at every game. In technical jargon, “defect always” is the equilibrium solution to the class of finitely repeated prisoners’ dilemmas.
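
The backward-induction argument can itself be put in a few lines of code. This is a sketch of my own, assuming standard one-shot payoffs (T > R > P > S, with the same illustrative numbers as before) and a horizon of exactly 200 games known to both players.

    # Payoff to me for (my move, their move); "D" defects, "C" cooperates.
    T, R, P, S = 5, 3, 1, 0
    payoff = {("D", "C"): T, ("C", "C"): R, ("D", "D"): P, ("C", "D"): S}

    def one_shot_best_move():
        # Defecting pays strictly more whatever the other player does,
        # so it is the dominant move in a single game.
        for theirs in ("C", "D"):
            assert payoff[("D", theirs)] > payoff[("C", theirs)]
        return "D"

    # Reason from the 200th game back to the first: at game 200 nothing
    # follows, so it is a one-shot affair; at game 199 the future is
    # already settled as mutual defection and cannot be influenced, so
    # that game too is effectively one-shot; and so on down to game 1.
    plan = [one_shot_best_move() for _ in range(200, 0, -1)]
    assert plan == ["D"] * 200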

Whatever one thinks of this argument, it is entirely remote from what real people do. In a one-shot game many people cooperate at once: in most experiments there are at least as many initial cooperators as initial exploiters. There is also a fairly regular pattern when people are asked to play prisoners’ dilemmas against each other many times. Not everyone behaves the same way, but typically cooperation is quite likely at the start, and then declines as one player tries to exploit the other and mutual distrust develops. But after a while people realize this is stupid, and on average they slowly begin to cooperate again. There is now an enormous literature on the psychology of the dilemma, and its results do not always cohere, but the pattern just described is a rough-and-ready summary of many psychological investigations.

1. Thomas Schelling, “What is Game Theory?” in Choice and Consequence (Harvard University Press, 1984), pp. 213–242.

2. Tony Ashworth, Trench Warfare 1914–1918: The Live and Let Live System (Holmes and Meier, 1980).
