
Up for Grabs

For the metaphysic of technology, mankind’s mission is to improve on nature, transmuting what we find by means of our intelligence. We can curb flood, famine, and disease, building an ever better life from the elements around us. Technology liberates humanity by expanding opportunities. Nature waits to be developed; our battle for its bounty has only just begun.

To all this Sale, and those sharing his view, must reply that human interventions have reached the point where nature is so violated that it is itself in peril. It is not enough to say that pesticides, for example, can upset a natural balance. Nature is never in equilibrium, and upheavals racked this planet prior to man’s arrival. It must be argued, rather, that we are causing changes as no species ever has; and if we do not stop, our surroundings will rebel. Fluorocarbon sprays could so damage the ozone layer as to make this continent a desert. Acid rain can turn the weather against us in ways impervious to control. So the case against technology must be that we are not as clever as we think. The sciences man creates are no better than his intellect, at best a bounded instrument, never made to master the natural world. Mount St. Helens may have been sending us a message. Others may be on the way.

Sale is also anti-city. Once in our rural renaissance, we will “not have need for cities bulging with two, three, or seven million people.” For several thousand years, cities have been luring people from the land. Of course much of this migration was not a matter of choice, but resulted from crop failures, changes in land tenure, and less need for rural labor. Even so, there have always been people who, once there, found urban life to their liking. Persons of all classes have spent their entire lives in cities and would not have it any other way. The pleasures surpass the problems in their personal calculations. So when Sale says we will not “need” cities, he seems to be suggesting that while several million people claim they actually like being pushed against one another, efforts should be made to persuade them to another, better way.

Human Scale calls to mind Jefferson, who called cities “pestilential to the morals, the health, and liberties of man.” The charge, in Jefferson’s time as now, is that many city-dwellers fail to realize how their health and morals have undergone erosion. What passes for urban tolerance, for example, may reveal a weakening of character. Sale really ought to say whether his down-scaled world will disperse cities for their residents’ own good. A perennial problem for Utopias is what to do with citizens who seem bent on harming themselves. Rousseau at least grasped this nettle: those who remain misguided will have “to be forced to be free.”

In one unintended way, Human Scale makes a very convincing case. The book weighs in at 558 pages, bulging with quotations and statistics, summaries of studies and paragraph-long lists. Some readers may feel they lack the stamina for a book of this scale. Yet it may be that Sale feels he needs all these facts and figures to reach his audience, who would dismiss a terser essay as much too simple-minded. If that is the case, it shows we have become captive to the premises of an overscaled society: that in a world grown so complex, analysis must be elaborate to offer comprehension. That is why the social sciences have evolved in tandem with contemporary technology. If the latter can be checked, then the former can fade away.

The most sanguine views of the future pin their faith on new technologies. This in itself is not new. The machines of the nineteenth century also brought predictions of amazing things to come. Indeed, technology has always attracted camp followers ready to announce how much better life will be once the next phase is in place. Alvin Toffler’s The Third Wave, a bestseller last year and now out in paperback, bids us welcome the inevitable: semiconductors and petrochemicals and biological engineering. His chapter headings herald what’s in store: “Enhancing the Brain,” “The Electronic Expanded Family,” “The Rise of the Prosumer,” “The Decisional Implosion.” Of course there will be transitional pains and problems to be solved. But we will be smarter and more creative, experiencing freedoms and enjoyments never known before.

With computer terminals in every home, we can order products to our taste (“prosuming”), register opinions on public issues (“electronic town meetings”), and expand our circles of friends. Society will be “demassified,” owing to “imaginative new arrangements for accommodating and legitimating diversity,” along with “new institutions that are sensitive to the rapidly shifting needs of changing and multiplying minorities.” We will have individuality and community, participation and consensus, not to mention “new grain varieties which produce higher yields per acre on non-irrigated land.”

How seriously should all this be taken? Toffler finds space for every prediction and projection he could lay his hands on. Anyone who calls himself a “futurist” gets a solemn, uncritical hearing, from Jagdish Kapur, who runs an experimental “solar farm” on the outskirts of New Delhi, to Randy Goldfield of Booz-Allen and Hamilton, who sees secretaries rising to “para-principals” with the phasing out of typewriters.

The issue is not whether the new technologies are coming: they are already in our midst. Thus banks can transfer funds electronically, just as libraries are shifting their card catalogues onto computer tapes. Hardly a day goes by without reports on the latest industrial robots and nuclear-based medical miracles. My own favorite is a software system called HACKER, described in The Microelectronics Revolution, which

writes programs to perform such tasks as stacking bricks in specified ways, and is able to correct its own mistakes—and to avoid similar mistakes in future tasks of a generally similar nature—by way of its understanding of the purposive structure of task and program alike.

Toffler’s delineation of the future shares the same general assumptions as Daniel Bell’s “post-industrial society” and Zbigniew Brzezinski’s “technetronic era.” Certainly few will disagree that in a growing number of societies manufacturing no longer plays the dominant part in providing employment. The question is whether post-industrial technologies will turn out to be the central characteristic of the world we are to see. For these processes require a certain set of conditions to carry out their promise. The first that may be mentioned is that a sufficient number of people must accept the new technologies as compatible, if not congenial, adjuncts to their lives. On the whole this will probably happen. As modes of production have developed throughout history, people have adapted to their presence. Whereas students once wrote essays, they now attune their minds to machine-graded examinations. For earlier generations of family doctors, diagnosis was largely intuitive; today’s physicians tend to see their patients as reflections of the printout. Consumers accept their food in frozen form, if they even know the difference, and many actually prefer their music mediated by electronic means.

But we will only really be ready for a technetronic era if we agree to see ourselves in ways compatible with the new productive modes. Herbert Simon, in The Microelectronics Revolution, suggests the time is ripe for just this step. He writes:

Perhaps the greatest significance of the computer and the progress in artificial intelligence lies in its impact on man’s view of himself…. The elementary information processes underlying human thinking are essentially the same as the computer’s elementary information processes.

Simon, a recent winner of the Nobel Prize in economics, truly believes this, which means that he—and others who accept this analogy—will make apt citizens of a post-industrial age. Needless to say, the notion that both computers and human thinking involve “essentially” similar processes is a view specific to our time, and it comes about only because we have built computers and put them into use. Doubtless the Phoenicians believed that the best model for the mind could be found by watching how their triremes rode the waves.

Daniel Bell, another contributor to The Microelectronics Revolution, charts a country’s post-industrial progress by the number of occupations involving verbal symbols. Bell calculates that by 1980 the United States had become an “information society” in that 51.3 percent of its “experienced civilian workforce” consisted of “information workers.” To get up to this figure one must include a lot of people, such as telephone operators who provide you with a number. (Or tell you that it is unlisted, which is at least the information that you cannot have it.) A smaller group, says Bell, consists of “knowledge elites,” who produce and codify knowledge which ranks as theoretical. Indeed, what they do is what makes the system work:

The axial principle of the post-industrial society is the centrality of theoretical knowledge and its new role, when codified, as the director of social change.

As time proceeds, the combined labors of “information workers” and “knowledge elites” will engender an “information explosion,” which is already underway. This growth can be measured in many ways, for example by the exponential increase in the number of scientific journals.

Because of this, Bell tells us, “the information explosion can only be handled through the expansion of computerized and subsequently automated information systems.” What he is also saying is that knowledge of the future will have to be in forms suited to computers. It follows that knowledge, to rank as such, should have a mathematical dimension. Bell’s own phrasings reflect this disposition: “A Knowledge Theory of Value,” “The Statistics of Language,” “The Measurement of Knowledge.” With information rendered into data, and theories expressed as models, knowledge takes on a form required by the new technologies: “By letting us know the risks and probabilities, the computer has become a powerful tool for exploring the permutations and combinations of different choices by calculating their odds of success or failure.” The “knowledge elites” of an “information society” will be ready, as “directors of social change,” to instill a greater measure of rationality into the human enterprise. Harvard University, ever in the vanguard, has updated its curriculum to require a computer course of every undergraduate.

Bell apparently agrees with Toffler (“enhancing the brain”) that we will be smarter in the future, both in our individual capacities and organized pursuits. (Throughout the 1970s scholastic aptitude scores actually fell, but this trend, we are asked to believe, will be reversed as students learn how computers, which grade their tests, conceive of problems.) Since at least the Enlightenment we have been told that as reason replaces superstition, and analysis emotion, more of our world will be amenable to control. That hope is closer to being realized because the technologies we create now augment intelligence. Or so we are being told; the evidence, after all, is all around us: more people are living longer, two grains of wheat grow where one did earlier; and it is only a matter of time before we find safe ways to dispose of radioactive wastes.

One reply to such expectations is that they are not coming true. “College achievement test scores,” the MIT economist Lester Thurow writes, “have been falling for more than a decade. At every grade level, Japanese children outscore US children in mathematics. Among graduates of big-city school systems, functional illiteracy is so common it is not clear where tomorrow’s work force will come from.”[4] But the more traditional response to such predictions has been that each advance brings new dislocations; if our understanding increases, so do our discontents. Such a view either regards human capacities as finite (Edmund Burke) or holds that we will break the cycle only when technology ceases serving as an arm of exploitation (Karl Marx). As matters now stand, the new machines and processes require protected settings. That is, they function well enough in factories and offices, which are organizationally enclosed. The same presumption applies to private households, which can purchase space-age appliances for their kitchens and playrooms. However, outside these enclosures stand whole segments of society—both at home and abroad—who benefit hardly at all from this electronic bounty.

Right now it is apparent that considerable parts of the population are not being absorbed into the economy, either as workers or consumers, in a meaningful way. The most recent figures (for 1979) show that America’s productive system could create only 63.4 million full-time jobs, and of these 17.6 million—largely held by women—paid less than $10,000.[5] Alongside those currently listed as unemployed is an even larger number not counted on those rolls who will go through life never knowing steady work at decent pay. Not the least reason for this shortage of employment is that of the $1.3 trillion available (in 1979) for wages and salaries, a disproportionate share goes to middle-class “information workers,” which means less payroll money to give other people work.[6]

The result, as we know, is that those excluded from employment can make life quite uncomfortable for those who are inside. Whether called an “underclass” or by any other name, this group shows no sign of diminishing in size, let alone disappearing. Yet somehow they do not figure in forecasts of the future, except for ritual remarks that poverty can be abolished if we follow one prescription or another.

In this connection it is worth noting a term which entered everyday language at the outset of the Eighties. “Terrorism” has become the foreign counterpart of crime in the streets at home. Both are condemned as wanton violence, undermining social order. Both can also be viewed as acts of desperation, brought on by sustained exclusion.

If the future is to be built on finely tuned technologies, it will also need protection against violence in various forms. One reason corporations have been moving to the suburbs is that their “information workers” are less likely to be mugged there when leaving after nightfall. Front-page coverage of hijackings attests to anxieties over how easily a few men with machine guns can capture a jetliner. (Next time will it be the computer consoles that control our basic grids?) In a similar sense, El Salvador’s insurgents are seen as threatening forms of organization on which a continent depends.

The most pressing issue for the future transcends liberal and conservative programs, and even what we may do to nature or with the impact of technology. Rather, given the possibility of the good life in any of its conceptions, we must wonder which of the world’s inhabitants will be allowed to share it. On this, one prediction may be safely made: those who are excluded will make their presence known, and not necessarily politely.

[4] Foreign Policy, Spring 1981.

[5] Current Population Reports, Series P-60, No. 125, US Government Printing Office, October 1980.

[6] Such a swollen upper echelon can become a competitive disadvantage. Unlike Europe and Japan, Immanuel Wallerstein writes, “In the United States the well-to-do middle stratum is a significantly larger percentage of the total population. Hence, the social bill of the US middle class is dramatically higher.” Foreign Policy, Fall 1980. At the same time, most industries looking toward the future are themselves technology-intensive and create little new employment. California’s “Silicon Valley”—the center of the semiconductor industry—has few if any openings for refugees from Detroit.
