
Why Are You Scared?

Risk and Culture: An Essay on the Selection of Technical and Environmental Dangers

by Mary Douglas and Aaron Wildavsky
University of California Press, 221 pp., $14.95

Acceptable Risk

by Baruch Fischhoff, Sarah Lichtenstein, Paul Slovic, Steven L. Derby, and Ralph L. Keeney
Cambridge University Press, 185 pp., $19.95

What is risk? Not risk of this disease or that accident, but Pure Risk? It is a sign of the times. The Society for Risk Analysis was founded last year. There is a new magazine for the trade: Risk Analysis, an International Journal. Risk has become a profession. Social risks have long been familiar to surgeons, engineers, admirals, and entrepreneurs. Today everyone can go to a new kind of expert, the risk assessor, who offers an impartial evaluation of risks of any kind. The new expert uses lots of older lore, but history records an official beginning to “modern risk-benefit thinking.” The year was 1969.1

Risk analysts can cater to any sort of hazard, but their profession owes its existence to a relatively narrow band of possible dangers. The social risks that worry us are not a random bundle of frights. You can arrange them in an orderly way to form a triangle. At the top is the unthinkable war. Then come the two demons, cancer and nuclear meltdown. Below these are threats to the environment: foul rivers, increased radiation, acid rain, toxic waste, assaults on threatened species. At the base of the triangle is a motley collection, many of them old standbys: mining coal, smoking, drinking, getting pregnant, climbing mountains, lead, asbestos, saccharin. It is neither these at the bottom nor nuclear war at the top that created the new profession of risk analysis. It was the public demons and popular causes. (Only in the past year, after the books under review were written, has total war again become both an American demon and a popular cause.)

What selects a demon or a cause out of all the risks we know? That question prompts the remarkable Risk and Culture, the joint work of Mary Douglas, the social anthropologist, and Aaron Wildavsky, an imaginative political scientist. They tell of a people in Zaire afflicted by the tropical horrors of fever, leprosy, ulcers, and parasites. Only three of their dangers are singled out by this people for worry and intervention, namely, barrenness, bronchitis, and bolts of lightning. Is that trio more curious than our own variable list of demons and popular causes? Our causes are not in general the greatest risks to which we ourselves are subject. Is our choice of a “risk portfolio” also irrational?

A social anthropologist like Mary Douglas is not one to say that stupid people rank fears wrongly. She will look at our society in the way she studies an entirely foreign culture. This book says that popular causes are created by the groups that campaign for them. But to understand which causes will be chosen, we must ask what a group needs in order to stay together long enough to be effective. The chosen cause must be of a sort that helps to maintain the group. Groups of different internal structures will fall apart in different ways, and so will perceive different kinds of risk. So Douglas and Wildavsky conclude that we need a taxonomy of possible organizations for small-to-medium-sized protest groups, and that this will help tell us what kinds of risk will be the focus of attention.

One beauty of Risk and Culture is that it elegantly leads us into a handful of simplistic models of social organization. Then it locates protest or preservation groups thus modeled along the “border” of American society, the places where protest groups spring up. The border pits itself against the “center,” the public and private institutions that hold power. The center is complacent, the border is alarmed. We get quick glimpses of groups like the Sierra Club (which is for wildlife and wilderness) and the Clamshell or Abalone Alliances (who don’t want nuclear power on their turf). The different organizations of these groups, and the risks that they protest, help create each other: if there is no group, there will be no perceived risk; but equally, without a risk of the right sort to hold the group together, there will be no group.

Having defined themselves and their risks, the groups on the border say: let these risks be diminished or removed. The center is challenged. Air must be cleaned at great expense, nuclear power replaced by solar energy. The center counterattacks. Its spokesmen declare that the world is full of risks. Let us study them carefully. Take due precautions but do not exaggerate the one risk that you highlight above others that you have forgotten. When we get costs and risks in full perspective, you will see that nuclear plants are a definite plus, and remember that we have already improved air quality by 30 percent in a decade. At this point enters the risk assessor, who, we shall see, belongs to the new profession created to assist the center in its debate.

Extensive popular agitation about environmental risks began on the “border” about 1965. The profession of risk analysis followed. Acceptable Risk is the best critical exposition of new ways to think about risks. But the title, we are told, is to be used as an adjective. The book speaks only of “acceptable-risk problems.”

Fearing that the term “acceptable risk” will tend to connote absolute rather than contingent acceptability, we have chosen to use it only as an adjective, describing a kind of decision problem, and not as a noun describing one feature of the option chosen in such a problem.

You can hear the skeptical environmentalist muttering: “We’ve been telling you there are problems; now it is your turn to show us that the risks are acceptable.”

That reaction has a germ of truth. The pure risk analyst is the peacemaker trying to invent reasonable procedures to make everyone simmer down, quickly. Fischhoff et al. outline three generic approaches in Acceptable Risk. One is traditional—use the best professional technical specialists you can hire. Secondly, there is the method of revealed preferences. Observe the risks society has tolerated in the past. Assume that we have evolved a satisfactory attitude to risk. Then introduce new risks only if they are no greater than already accepted risks.2 This idea can be spruced up to allow for changing knowledge and attitudes, but our authors prefer a third approach adapted from economics. Essentially you consider the probabilities and utilities of all the consequences that may result from taking each of the choices available to you. Ideally, you could then compute the right decision. The authors candidly describe problems of assessing probabilities and utilities, of sorting out facts and values, and of detecting hidden purposes on the part of the analysts themselves.
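The arithmetic behind that third approach is easy to sketch. What follows is a minimal illustration in Python of an expected-utility calculation; the options, probabilities, and utilities are invented for the example and come from no one’s actual analysis. The difficulties the authors candidly describe live in the inputs: where the probabilities come from, and whose values the utilities encode.

```python
# Decision analysis in miniature: for each option, list the possible
# consequences as (probability, utility) pairs, compute the expected
# utility, and pick the option that maximizes it. All numbers are invented.

options = {
    "license the plant": [
        (0.900, 100),      # operates without incident
        (0.099, -50),      # costly shutdown and repair
        (0.001, -10_000),  # serious accident
    ],
    "do not license": [
        (1.0, 0),          # the status quo
    ],
}

def expected_utility(consequences):
    """Sum of probability-weighted utilities for one option."""
    return sum(p * u for p, u in consequences)

for name, consequences in options.items():
    print(f"{name}: expected utility = {expected_utility(consequences):+.2f}")

best = max(options, key=lambda name: expected_utility(options[name]))
print("choose:", best)
```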

The book is a report. It ends with many sensible recommendations. We wryly note how a new profession concludes its list of proposals. It would like to see “an experimenting academe”—perhaps a new university department that draws experts from various fields and also makes some new jobs. It wants to “create a profession of risk management.” In the final paragraph it urges more research on acceptable-risk decisions. Here we catch phrases like “Given the enormous stakes riding on acceptable-risk decisions, our investment in research seems very small”; “bargain”; “enormous expected return on investment”; “Such research could be a good place to invest some of society’s venture capital.”

What is the expected benefit of this research? One benefit would be to “inform workers about occupational risks (so as to enable them to make better decisions on their own behalf),” but that generous gift to the working classes is not the “bargain.” We are chiefly to “consider the cost of a day’s delay in returning a nuclear facility to service.” If we could take just one day off the time needed to decide how to restore the facility: what a gain! The expected benefit is, then, possibly “shortening the decision-making period.” Here is a lesson in semantics. Changing a noun into an adjective can make us stop looking for the right decision, and start looking for quicker ways to make decisions that leave everybody happy—for the while, at least.

Who are the risk analysts? Some risk analysis is straightforward engineering. Having designed your “facility,” you had better get an outsider to draw what are called “fault trees.” This engineer figures out all the possible things that might go wrong, and how several failures might conspire to produce something worse than was bargained for. Or there might be a “common-mode failure,” where three safety devices have one shared component, so that if this component quits, all three devices are useless. Such analysis is often supposed to end with some numbers stating the probability of risks and the costs of disasters. These are weighed against expected benefits. Even when the numbers mean little, the sheer presence of the outsider doing fault trees can help detect design blunders. No one doubts that these professionals earn their keep.
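The force of the common-mode point is easiest to see in numbers. Here is a minimal sketch, with invented failure probabilities, of why one shared component can dominate a fault tree:

```python
# Invented numbers, for illustration only.
p_device = 0.01   # assumed chance that any one safety device fails on demand
p_shared = 0.001  # assumed chance that the component shared by all three fails

# If the three devices fail independently, all three must fail at once:
p_independent = p_device ** 3                                # 1e-06

# If they share a component, that one failure disables all three devices,
# so the system failure probability is dominated by the shared part:
p_common_mode = p_shared + (1 - p_shared) * p_device ** 3    # about 1e-03

print(f"three independent devices fail together: {p_independent:.1e}")
print(f"with one shared component:               {p_common_mode:.1e}")
```

On these made-up figures the three nominally redundant devices are roughly a thousand times less reliable than the independent arithmetic suggests, which is just the sort of blunder the outside engineer is paid to catch.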

The pure risk assessor is more of a generalist than the engineer. Three authors of Acceptable Risk, for example, are research associates at a firm called Decision Research. The fourth heads the decision-analysis branch of a consultancy, while the fifth is “a consultant to government and industry on problems of decision and risk analysis.” When we turn to other figures in the field, we find that Chauncey Starr, author of the original risk analysis paper of 1969, is vice chairman of the Electric Power Research Institute. Chris Whipple, now president of the Society for Risk Analysis, is technical manager of the Energy Study Center of that institute.

Starr and Whipple defend the center. Their institute is financed by utility companies. People on the border say it is only a front for big industry. In return Starr and Whipple regard environmentalists as part of the “conflict industry” that increases the cost of making decisions. When the border groups learn that the research for Acceptable Risk “was supported by the Nuclear Regulatory Commission under subcontract from Union Carbide Corporation, Nuclear Division, Inc. to Perceptronics, Inc.,” they must lump the book with other industry presentations. In fact that is just how the grant money is “routed.” The book is an independent assessment, not committed to any approach, and it provides systematic summaries of all the ideas and examples that come from the center. There is none of the scorn of Starr and Whipple for “environmentalists who find conflict an effective means of achieving their social objectives.” Starr and Whipple properly add:

the identification of vested interests opposing many proposed energy projects is not intended to dispute the legitimacy of their role in the conflict. Rather it is intended to point out a popular misconception: that these groups are somehow purer than the advocates of supply expansion.3

The move from “social objectives”—like not being irradiated?—to “vested interests” may strike one as rhetorical, but let’s concentrate on the idea that those who question “supply expansion” are “somehow purer.” Is that an accidental “misconception” that can be avoided? No, say the authors of Risk and Culture, for the idea of purity is in the nature of perceived risk itself.

  1. Chauncey Starr, “Social benefits versus technological risk,” Science, no. 165 (1969), pp. 1232-1238. Of course there is not as yet an official history of this fledgling profession, but there is pretty uniform agreement on the “first” paper; e.g., D. Okrent, in “Comment on Social Risk,” Science, no. 208 (1980), p. 375, note 3, says “Modern risk-benefit thinking had its real birth with the classic paper by C. Starr….” Many of the standard ideas of risk analysis are, however, old-fashioned economics.

  2. One of the US government film clips in the documentary collage movie Atomic Café illustrates an early use of this method for propaganda purposes. A voice tells us that there are indeed risks associated with testing nuclear weapons, but that life is full of risks. Then we are shown a grossly fat man in the shower who drops a bar of soap. While groping for it he slips and falls—out of sight. Message: we accept this risk, let’s accept the other “comparable” risk. Note how a risk undertaken voluntarily (showering) is equated with an involuntary risk (strontium 90 in the milk). Note also the uncritical assumption that new risks are morally equivalent to old ones, and that we should not care whether we are introducing a new risk, or just keeping an old one in place.

  3. I quote from a preprint of “Risk/Benefit Analysis and Its Relation to the Energy/Environment Debate,” by Chauncey Starr and Chris Whipple, prepared for the International Scientific Forum on an Acceptable World Energy Future, Miami Beach, Florida, November 30, 1978. For a more restrained account of related matters, see Chauncey Starr and Chris Whipple, “Risks of Risk Decisions,” Science, no. 208 (1980), pp. 1114-1119.
