Cass R. Sunstein; drawing by James Ferguson

Cass Sunstein and Reid Hastie’s recent book about how to make group decisions “wiser” has a strange text on its copyright page: the Harvard Business Review Press will, for large orders of the book from companies and corporations, provide special printings with the company logo displayed on a customized cover and with a letter from the CEO added to the front matter. Since the groups that are discussed here include government agencies and juries as well as companies and corporate boards, I imagine all American citizens receiving a copy of the book with the Great Seal of the United States on its cover and a letter from President Obama, our CEO, added to its front matter. I imagine all of us reading the book. Would the decisions we make after that be any wiser?

For citizen readers, the answer is: probably not, with regard to many important political decisions. If we follow Sunstein and Hastie’s advice, we will do better addressing “problems with well-defined solutions” where “accuracy is measured by reference to objective facts.” But I worry about problems whose solutions are neither well-defined nor objectively measurable. In political life, many problems are of that sort; not much is said about them here. The avoidance of politics is the issue I mean to focus on in this review, but before that, I want to consider Wiser’s strengths and weaknesses in the terms its authors have set.

1.

Wiser is divided into two parts: the first deals with how group decisions go badly and the second with how to make them go well—or more accurately, how to get them “right.” Sunstein and Hastie write in simple, direct prose, with a lot of repetition, like teachers who don’t quite trust the intelligence of their students. Indeed, the argument of the book is very much like that of its well-known predecessor, Nudge (2008), written by Sunstein with a different coauthor, Richard Thaler: your intelligence and mine can definitely not be trusted. We are the reason group decisions go badly, and the part of the book devoted to explaining this frequent failure is gripping, if also humbling.

The second part of the book, the helpful part, is less gripping. Too much of the advice—well, I knew it already, despite my decision-making deficiencies. “When dissent and diversity are present and levels of participation are high, groups are likely to do a lot better.” “If [group leaders] are inquisitive, they are more likely to learn.” “Nothing seems to inject reality into a discussion…as well as empirical evidence.” “Do not allow irrelevant social factors such as status, talkativeness, and likability…to bias evaluations.” These lines won’t make any likely readers of Wiser significantly wiser, but there are stronger suggestions in the book; I will take up a few of them later on.

Assume now that all the helpful advice has been ignored. Then things will often go wrong, and they will go wrong at two levels. There are the errors that we all make as individuals, which Sunstein and Thaler in the earlier book try to “nudge” us out of.1 And there are those same errors compounded by groups, the subject of this book. Individual errors derive from egotism, overconfidence, laziness, naiveté, conformism, the tendency to follow the leader, the tendency to defer to high-status men, the desire to be popular or the fear of being unpopular, and, most important of all, the triumph of what Daniel Kahneman has called System 1 thinking (impulsive, emotional, fast) over System 2 thinking (calculating, reflective, slow).2

Ideally, groups should do better than individuals, the members correcting, or at least canceling out, each other’s mistakes. Long ago, Aristotle defended something like this view:

The judgment of a single man is bound to be corrupted when he is overpowered by anger, or by any other similar emotion; but it is not easy for all to get angry and go wrong simultaneously.

In fact, Sunstein and Hastie tell us, nothing is easier.

They write partly from their own experience—Sunstein served as the administrator of the White House Office of Information and Regulatory Affairs (OIRA) from 2009 to 2012—but mostly they describe the results of academic studies of small-group behavior. There are thousands of these studies, which derive from a general dissatisfaction with the theory of the rational actor—that is, the actor who always thinks in System 2, intelligently maximizing his or her well-chosen preferences. It seems that there aren’t enough people like that. Individual decision-making often goes radically askew. And the behavioral studies show that members of groups, deliberating together, do even worse than individuals thinking alone. For those of us who admire juries, town meetings, and democratic assemblies, this would seem to be very bad news.

The descriptions of small-group behavior are not unfamiliar, but they are scary nonetheless. According to the studies that Sunstein and Hastie cite, the errors that individuals make are exaggerated when they talk to one another. They often follow, herdlike, the egocentric, overconfident, or emotional argument of the person who speaks first, producing what Sunstein and Hastie call “cascades” of agreement. They withhold information or repress insights that might offend or irritate the other members of the group—or that the “boss” doesn’t want to hear—and so decisions are made in ignorance of things that individual members know. Their discussions tend to “squelch internal diversity,” since most participants don’t want to be left out of an emerging consensus.


Consider the small group of advisers to President Kennedy who unanimously recommended the Bay of Pigs invasion in 1961. Some of those advisers had private doubts that they “never pressed,” Theodore Sorensen wrote in his 1965 book Kennedy, “partly out of fear of being labeled ‘soft’ or undaring in the eyes of their colleagues.” Beware unanimous decisions that may reflect a cascade of group members following a few “blunderers.” Always ask: “Why are there no dissenters?” I am reminded of a passage in the Babylonian Talmud (tractate Sanhedrin) that holds that if, in a capital case, all the judges vote to convict, the defendant is acquitted. The absence of dissent means that there wasn’t an adequate deliberation.

Members of small groups commonly have watched or read the same news, and so they tend to confirm each other’s enthusiasms or each other’s fears. For example, they overestimate the risk of storms, epidemics, and terrorist attacks; they base their estimates on the most recent one of these that they all know about. Each of them reinforces the overestimation, so that it gets worse and worse. This is the “availability heuristic,” or mental shortcut, whereby the graphic pictures of the last storm outweigh any statistical analysis of the likelihood of another and produce a disproportionate response. Similarly, the “availability” of a single horrific terrorist attack (this isn’t one of Sunstein and Hastie’s examples) may lead to a “war on terror.”

Mutual reinforcement also produces group polarization. When members share a political outlook—for example, some version of moderate conservatism—their deliberations will produce a more extreme conservative position than any individual member, consulted alone, would favor. Juries behave in a similar fashion: “their punitive-damage awards tend to be far higher than the preferred award of the median member.” Even worse for our justice system, deliberating jury members tend to follow their high-status or most talkative colleagues.

This is of particular concern since men outtalk women by two to one on juries—a finding from the early 1960s that researchers replicated in the early 2000s. I would have thought that decades of feminist agitation had produced more talkative women. Not, apparently, in small mixed-gender groups. The problem is that, according to another research finding, women are critically important to small-group success: “the more women, the better the [group’s] performance.” An all-woman jury might improve those punitive-damage awards.

2.

For certain kinds of decisions, groups do best if their members don’t meet together at all—if they are just a “statistical group.” Jean-Jacques Rousseau argued long ago that the “general will” could only be reached by this kind of decision-making:

If, when the people, being furnished with adequate information, held its deliberations, the citizens had no communication one with another, the grand total of the small differences would always give the general will, and the decision would always be good.

Sunstein and Hastie provide an explanation for this goodness: “by polling all members and by avoiding social influences, groups make very efficient use of information.” But they are assuming here that the members are independently informed; they each inform themselves, and then the statistical group, as if by an invisible hand, pools the information. But if the group is “furnished” with information, as in most behavioral studies (and in Rousseau’s republic), I would expect that its members will decide in roughly the way the furnisher wants—and that may not “always be good.” Sunstein and Hastie discuss strategies designed “to furnish information that could steer groups in the right directions.” This is “nudging” at the collective level.

Statistical groups do especially well in answering factual questions. In a famous example, Francis Galton, the British polymath, examined a competition at a regional fair in England “to draw lessons about collective intelligence.” Contestants were asked to guess the weight of a fat ox. “The ox weighed 1,198 pounds; the average estimate, from the 787 contestants, was 1,197 pounds, more accurate than any individual’s guess.” I would conclude from this that rural Englishmen of Galton’s day knew a lot about oxen.

But recent experiments, including some that require special knowledge and some that don’t, have produced similar results. Asked the right sorts of questions, the average answer of a statistical group is remarkably accurate. Imagine that a company wants to project the sales of a certain product in the coming year. It might well do best, Sunstein and Hastie tell us, by polling its salespeople and trusting the average number.
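The statistical logic behind this is worth a moment. On the textbook account (my gloss, not a formula from Wiser), each guess is the true value plus an independent error, and independent errors tend to cancel in the average:

```latex
% A minimal sketch of the textbook wisdom-of-crowds arithmetic, assuming each
% contestant's error is independent and unbiased; an idealization, not a
% reconstruction of Galton's own calculation.
% Each guess is the true weight w plus a random error e_i:
%   x_i = w + e_i,  E[e_i] = 0,  Var(e_i) = \sigma^2.
\bar{x} \;=\; \frac{1}{n}\sum_{i=1}^{n} x_i \;=\; w + \frac{1}{n}\sum_{i=1}^{n} e_i,
\qquad
\operatorname{Var}(\bar{x}) \;=\; \frac{\sigma^2}{n}.
% With n = 787 contestants, the scatter of the average is about sqrt(787),
% roughly 28 times smaller than the scatter of a single guess, which is why
% the crowd could land within a pound of the ox's 1,198.
```

The cancellation depends on the errors being independent, which is precisely what meeting and talking destroy; once members influence one another, the errors correlate and the advantage of the average shrinks.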


But what about harder questions? Who, among many qualified candidates, should be selected for an academic or civil service position? What version of national health insurance is best? How should we deal with Fidel Castro? “Alas,” Sunstein and Hastie write, “in such cases, no answer may be demonstrably correct.” Alas, in most of the academic studies referred to in this book, one answer, one decision, one choice is demonstrably correct. That’s how the studies work. They set up a question that has a “right answer” and then try to figure out whether statistical groups will get there and why groups that hold meetings so often don’t.

3.

Studies of how search and selection committees function provide a useful example here. Sunstein and Hastie report on a research group asked to decide among three candidates applying for a position of marketing manager. “The experimenters rigged the attributes of the candidates so that one applicant was clearly the best for the job described.” I won’t discuss the rest of the experiment; it is important only to note that the members of the group failed to make “the right choice.”

Drawing by Edward Gorey (Edward Gorey Charitable Trust)

The goal of Wiser is to figure out how to get them to do that—a goal that is relatively easy to reach if you know the right choice beforehand. But let’s consider a more likely case; I will leave marketing management behind and describe something I know about—a university department of political science choosing a political theorist. The choice has come down to two highly qualified candidates; the department is divided between those who favor a theorist who works in the style of analytic philosophy and those who favor her rival, who is a follower of Leo Strauss. There is no “clearly” right choice.

What happens then? It is possible that one of the department’s factions simply wins. Perhaps a few high-status members speak first and produce a cascade. But there are other possibilities: the two factions might persuade the department chair to go to the dean and ask for permission to make two appointments. Or the factions might agree on the analytic philosopher this time and a Straussian next time. This is politics in an academic department, and what it involves is what politics everywhere involves: bargaining. I find it curious that in a book on group decision-making, there is no discussion, not even a single mention, of bargaining. But the departmental give-and-take that I’ve just described may well be wiser than the victory of the “best” candidate. Sometimes, perhaps, we should think less about the singularly right decision than about the ongoing well-being of the group.

There are many examples of this sort; political decision-making isn’t confined to the world of conventional politics: governments, parties, elections, and all that. Politics is pervasive. A hospital board is choosing a manager: Should it opt for the candidate who has a special interest in improving the treatment of cancer patients or the candidate who wants to focus on the children’s wing? In such cases, board members may think they know what’s right, but there is no “rightness” in the sense that word has in Wiser.

The focus on making the right decision and the anxiety about getting it wrong may derive from an example that many Americans are familiar with and that comes up many times in Wiser: the example of the jury. Juries are composed of disinterested men and women who deliberate and reach a verdict—verum dictum, a true speech. When it is a question of guilt or innocence (and not of punitive-damage awards), the jury is supposed to get it right. That’s why it is important to take up the helpful advice that Sunstein and Hastie offer, so that jury members avoid cascades, refuse deference to talkative men, share information, and listen to dissenters.

But jurors are not supposed to bargain. Lawyers and district attorneys can plea bargain, but once a trial begins, bargaining ends. It wouldn’t be right for one juror to say to another, “I’ll vote your way on the first charge if you vote my way on the second charge.” We don’t imagine a verdict coming out of a bargain of that sort. But it’s easy to imagine a good political decision coming out of a bargaining process. Partisan disagreements arise in all sorts of organizations, and they often require negotiation and compromise. Sometimes, of course, we want our party to win—in the US Congress, for example—and enact its program, making one correct decision after another. But these decisions are never definitive, and the next election is around the corner. Soon enough, we will be bargaining with the party we just defeated.

4.

Partisanship is a regular feature of democratic politics. I suspect that Sunstein and Hastie don’t like it much; in Nudge, Sunstein and Thaler argue explicitly for a program that provides “a real basis…for crossing partisan divides.” So let’s think about partisanship and non- or postpartisanship. To do so, we should look at one of the most interesting proposals in Wiser—for “tournaments and other contests.” We should also consider the defense of cost-benefit analysis in that book and, more extensively, in Valuing Life, Sunstein’s book published in 2014. Wiser itself presents an argument that is meant to take us “beyond groupthink.” But I suspect that the whole series of books from Nudge and Why Nudge? to Wiser is also meant to take us beyond politics—or simply to avoid the difficulties that politics always involves.

The discussion of tournaments begins with the “highly publicized…Netflix Prize contest,” in which contestants were invited to submit proposals that would improve Netflix’s movie recommendation service by 10 percent. The prize was $1 million, and over 20,000 teams from more than 150 countries competed; it took three years before there was a winner. Tournaments of this kind can generate enormous effort and remarkably innovative proposals. Sunstein and Hastie claim that they solve the problems that regularly arise in deliberating groups because the contestants work independently and the large prize “incentivizes divergent strategies.” There is no pressure to conform, repress information, or follow the leader.

US government agencies have run some interesting tournaments: for example, the State Department offered $10,000 for “new ideas for implementing arms control and nonproliferation treaties,” and the Air Force Research Lab organized a competition “to create a prototype system for air-dropping large humanitarian-aid packages of food and water into populated areas without damaging the packages or injuring bystanders.”

I am sure that these competitions produced new ideas and original designs. Maybe the ideas about arms control and nuclear proliferation contributed to our strategy in the negotiations with Iran. But I doubt that they help much with the hard political decisions that the president and his advisers will soon be making. Nor will an innovative system for dropping aid packages help us decide where to direct our humanitarian aid—or whether and when more forceful assistance is necessary.

The State Department didn’t organize a tournament when it was deciding how to deal with the Syrian uprising of 2011. A countrywide democratic debate—if such a debate can emerge—is probably a good enough way of getting a range of proposals on the table. But how should we choose among those proposals when there is no “right choice”? For these sorts of questions, Cass Sunstein would probably advise us to undertake a careful cost-benefit analysis—that is, to adopt the very model of System 2 thinking. This is the subject of Valuing Life, which is a better book than Wiser. It takes on harder cases—though, again, not the hardest ones.

Sunstein’s cases derive from his experience with regulatory issues in OIRA. They typically have this form: a new regulation, dealing with water quality, say, will cost $200 million to implement and enforce. Will its benefits “justify” this expense? The benefits have to do with human health, and they are not easy to quantify. Much of Valuing Life is devoted to producing at least rough estimates for benefits of this sort, so that the analysis can move forward. The methods are ingenious; they extend to benefits like fostering human dignity, reducing inequality, and saving lives. The costs are always given; they are, as it were, budgetary: Should these dollars be added to the state budget for the sake of these benefits, whose dollar value can only be estimated? Actually, some values are fixed: the US government values an American life at $9 million. You will have to read Valuing Life to learn how that number was arrived at (and why lives in other countries are valued differently). The calculations that Sunstein describes seem plausible enough. There may be people who think that the state should not be involved in regulating water quality or anything else, but for most of us, this kind of cost-benefit analysis will seem like a good way of avoiding partisan wars and making wise decisions.
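It may help to see how small the arithmetic at the center of such an analysis can be. A back-of-the-envelope version, using only the two figures just mentioned and assuming, purely for illustration, that lives saved are the rule’s only monetized benefit:

```latex
% A rough break-even calculation for the hypothetical $200 million rule,
% assuming (my assumption, for illustration) that its only quantified benefit
% is statistical lives saved, valued at the government's $9 million figure.
\text{lives needed to break even}
\;=\; \frac{\$200 \text{ million}}{\$9 \text{ million per statistical life}}
\;\approx\; 22.2 .
% If the rule is expected to prevent roughly 23 or more premature deaths, its
% monetized benefits exceed its cost; if fewer, it passes only if other
% benefits, such as dignity or reduced inequality, are added to the ledger.
```

Sunstein’s actual cases are far more elaborate, but they share this skeleton.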

But come back now to the case of Syria in 2011. The budgetary costs of the intervention then being considered—there would be no “boots on the ground”—can be estimated fairly precisely: so much for the weapons delivered to the anti-Assad rebels, so much for the “advisers” who come with the weapons. But there will also be nonbudgetary costs: to the Syrian people because of the intensification of the fighting and probably to neighboring countries also if our side wins (think of the dispersal of the Libyan arsenal after our intervention there).

There are also the costs of not intervening, which have to figure somehow in the calculations. And there are the prospective or hoped-for benefits of intervening, short-term and long-term, which may or may not justify the costs. Both the costs and the benefits will come with probabilities: a 40 percent chance that this cost will be realized, a 25 percent chance that this benefit will actually be delivered to the people we intend to help. And finally, we will be setting precedents: they may serve to deter the brutality of dictators, a clear benefit, or they may promote future American interventions, a benefit, maybe, or maybe a cost. Any analysis of all these factors is bound to be difficult and bitterly disputed. The disputes will be partisan, and there is no decision-making procedure that will take us across “partisan divides.” This is true not only of hard cases involving action abroad; it is also true at home.
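The formal skeleton of the calculation being gestured at here is easy enough to write down; it is every input that is contested. A sketch in my own notation, using the hypothetical probabilities above:

```latex
% Expected net benefit of intervening, on the review's hypothetical odds:
% a 25 percent chance that the benefit B is actually delivered and a
% 40 percent chance that the cost C is actually incurred.
E[\text{net}] \;=\; 0.25\,B \;-\; 0.40\,C .
% Further terms would be needed for the costs of not intervening and for the
% value, or cost, of the precedent set. The arithmetic is trivial; what stays
% partisan is the dispute over B, C, and the probabilities themselves.
```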

5.

Despite all the arguments of Sunstein and Thaler, Sunstein and Hastie, and Sunstein solo, there is still a lot of room for partisan politics. I mean that to include deliberation—which is necessary in a democracy despite the risks of impulsive thinking, mental shortcuts, cascades, and conformity—but not to stop there. For the truth is that much of our political life is not deliberative at all, not, at least, in the sense of a deliberation aiming at a “right decision.” There is also bargaining, which may produce a compromise between two positions, one of which we think right and the other wrong, and even between positions that are, both of them, “right.” And then there is everything that comes under the heading of “political action”: organizing, campaigning, agitating, demonstrating—for all these are also central to democratic decision-making.

Political action is necessary because many people are missing from the group decision-making described in Wiser. Sunstein and Hastie report that low-status members of the research groups, and also of many real groups, commonly defer to high-status members. This assumes that the low-status members are in the room, but the truth is that very often they aren’t. One of the reasons that group decision-making goes wrong is that the people most affected by the decisions aren’t participating in the deliberations.

Wiser has a brief discussion of “asking the public”—but only to comment, not to join in the actual business of deciding. And yet the conditions of both powerlessness and inequality may be more important than the problems associated with groupthink. They are certainly a central source of “partisan divides.” Organizing, agitating, demonstrating—these are ways of bringing the powerless to the attention of the powerful. They can contribute importantly to democratic decisions, even if they seem nondeliberative, even if the shouting in the street sounds like, and probably is, the product of emotive System 1 thinking.

Sunstein himself is clearly an advocate of greater equality. Years ago he published a book praising President Roosevelt’s 1944 State of the Union address, in which FDR called for a radically redistributive “second bill of rights.”3 But in the books considered here, he and his coauthors don’t recall FDR’s famous response to people who urged him actively to promote a redistributive program: “Make me.” Government agencies will make better decisions, at least on such questions as greater equality, if they feel political pressure from outside the room.

Sometimes we will want the people outside the room actually to win—to organize and agitate so successfully that they take over the small groups who dominate decision-making, with the result that they change the political conversation. We may think that their view is right, even if no one has “rigged” the choices. But in fact, all political choices are shaded by our uncertainties, our knowledge of past mistakes, and, in the best of cases, our respect for the people who disagree with us. So, yes, we need to be wiser in the ways described by Sunstein and Hastie; but we also need a radically different kind of decision-making than what they describe, involving a larger number of people inside and outside the rooms where small groups sit.

Read these books; there is much to learn from them. And then pick up Machiavelli, and then Marx.