During the 1960s, researchers at a company now called SRI International built a mobile robot. “Shakey,” a wheel-driven cart that carried a television camera and a radio link to a separate minicomputer, was designed to move in response to programmed commands. When he received instructions to go from one point in a room to another, for example, his camera would form images of the robot’s immediate vicinity. Like printed photographs, such images consisted of dots representing the presence or absence of light. This arrangement of dots was fed into the minicomputer, which “digitized” it—that is, transformed it into a sequence of ones and zeros. A computer program for recognizing visual patterns compared the new sequence with others previously stored in the computer’s memory. If the computer found a match, it “understood” the robot’s position in relation to surrounding objects. This information could then be used by other programs devised to solve simple problems.

At first Shakey was merely told to push boxes about a room. In 1971 a second, more advanced version of the robot was placed near three other objects: a tall platform, a box lying directly upon it, and a ramp several feet away. Shakey was then ordered to knock the box off the platform, but at first he failed even to reach it. Next, however, he nudged the ramp over to the platform, scampered up the one and onto the other, and coolly pushed the box to the floor. In effect he had “figured out” that the ramp was a means to his end.

Shakey was slow—he spent an hour or so identifying the box and the ramp. His world consisted entirely of large, smooth boxes and clean, smooth walls; anything else literally blew his mind. Unlike most robots, which include mechanical arms that can manipulate objects with some degree of skill, Shakey could only move from place to place. Yet in a sense he could “see,” and in a sense he could “think.” Most industrial robots can do neither.

They don’t have to. Much of what goes on in a factory involves picking things up, moving them, and putting them down. Almost all work of this kind can already be done by robots, though in most cases human labor is cheaper. We cannot be quite sure how long human beings will continue to enjoy that advantage. But we do know who will be most affected when they lose it—in some cases, to improved versions of robots like Shakey. The welders, painters, machinists and toolmakers, machine operators, inspectors, and industrial assemblers of our society—in short, most of the industrial working class—will be facing the end of the road in the not-too-distant future. If we pretend that this transformation will automatically create new jobs for the men and women it displaces, we will probably end up with a vastly expanded underclass, not a vastly expanded pool of computer programmers.

Unlike the steam engine, the spinning mule, and the power loom, robots were conceived long before they were invented. Yet the writers and inventors of the past had in mind what today would be considered an automaton, a mechanical “self-moving machine” like Pierre Jaquet-Droz’s “mechanical scribe” (1774), a precursor of the humanoids that have been so much on the public mind since R2-D2 appeared in Star Wars. It was to this kind of automaton that Karel Čapek applied the Czech word robota (work) in his 1920 play R.U.R.

Joseph Engelberger, a contributor to Marvin Minsky’s collection Robotics, was the man who decided that robots should be something else. Late in 1956, at a party, he ran into George Devol, a fellow engineer who had earlier hit on the basic idea of using computer components to control a mechanical arm. Devol, according to Engelberger, “waved his hands a lot and said, ‘You know, we ought to realize that 50 percent of the people in factories are really putting and taking.’ ” He had therefore invented what he called a “programmed article-transfer device.” Engelberger was impressed by Devol’s idea and persuaded his employers to try it out. He had another name in mind, however. “The word is robot,” Engelberger said, “and it should be robot. I was building a robot, damn it, and I wasn’t going to have any fun…unless it was a robot.”

During the summer of 1956, ten other researchers, including Minsky, one of the earliest experts on artificial intelligence, had met at Dartmouth College to speculate on the future of thinking machines. Among other things, they predicted that within a single generation humanity would no longer have to work. Engelberger soon launched his own company with himself as chief executive. The business, currently called Unimation Incorporated, did not show a profit until 1975.

In 1961 Unimation marketed its first commercial robots, the famous “Unimates.” These were mechanical arms that carried out instructions that were lodged in a computer’s memory;1 the operator could change the instructions quite easily by changing the programs that recorded them. In the United States, only such “programmable” robots are regarded as the genuine article. Japan’s much broader definition includes many less sophisticated devices and therefore permits that country to claim that it has more robots than the rest of the world combined. It does not, though it does in fact lead the world.


General Motors, Unimation’s earliest and largest customer, used its first robots for die casting, the nasty work of pouring molten metal into steel dies and removing the red-hot auto part. The technique usually involves relatively small production lots, or “batches,” as opposed to mass production, on the one hand, and the manufacture of individual items, on the other. Companies that turn out unique products one by one, in the manner of a Stradivarius, generally find robotics pointless because it may be necessary to write a new program for each item. Mass production—the production of millions of identical copies of a single part or group of parts—usually justifies traditional “hard” automation: machines specifically designed to perform one task, and that only. Such machines are usually more expensive at the outset than robots, but since they are faster and better suited to the task at hand, their eventual cost is lower if it can be spread over a larger volume. It is goods produced in batches—representing three quarters of manufacturing’s contribution to the US gross national product—that are particularly suited to robotics, for their low volumes rarely justify the higher initial costs of hard automation.

At $40,000 to $100,000 apiece, the earliest robots were costly themselves. Average operating expenses of $6 an hour made them hardly cheaper than a human being. Yet they were sometimes economical, for programs could easily be written to handle new batches and then called up as needed. Besides, as T.A. Heppenheimer, one of Minsky’s contributors, points out, they “didn’t get bored, take vacations, qualify for pensions, or leave soft-drink cans rattling around inside the assembled products. They…would accept heat, radioactivity, poisonous fumes, or loud noise, all without filing a grievance.” Furthermore, they could work round the clock without malingering, going to the toilet, or blowing their noses; they were therefore more productive than any human worker, and one man or woman could often supervise several robots. The increase in output per hour was potentially enormous.

Nonetheless, robots were too expensive and exotic for most purposes. During the late 1950s Del Harder, the Ford Motor Company’s manufacturing chief, looked at Engelberger’s specifications for a robot and said, “We could use two thousand of them tomorrow.” But he did not order two thousand Unimates, and no more than thirty had been sold by 1964. The pace was slow because only the car companies, which regarded themselves as leaders in the field of automation and also had unusually high wage bills, would experiment with robots. General Motors, which bought the very first units, had taken the trouble to find out that nine out of ten automobile parts weigh less than five pounds—a significant discovery because robot arms could not carry heavy loads. GM assumed that in the future “a three-thousand pound car would probably remain the sum of many small parts,” most of which changed each year. Since it appeared that these parts might eventually be produced by robots, robots would eventually make economic sense.2 Outside the automobile industry, however, their very existence was something of a secret.

By the mid-1960s it was clear that the Dartmouth predictions of 1956 had been wildly exaggerated. Neither robots nor computers in general were about to produce an age of universal leisure. Unhappily, this truth begot a new, more dangerous myth: the idea that since the advent of the computer had coincided with rising levels of employment, it had in some sense caused them as well. That fallacy in turn gave rise to the popular notion that new technologies by nature create more jobs than they eliminate. In fact, robotics had on balance created no jobs at all (and never will), although this seemingly obvious point (discussed below) was and still is concealed by mystification. During the 1960s the number of jobs rose steadily for reasons that had nothing to do with computers and robots, which in any case were very rare by current standards.

Unlike Shakey, the earliest robots could neither sense changes in their surroundings nor respond to them. Those surroundings therefore had to be controlled with a precision that was unattainable or uneconomic in most industrial processes. To become more useful, robots needed something at least distantly comparable to human senses.


They got it, but slowly, for this turned out to be one of the deepest problems in artificial intelligence, far harder than getting computers to play chess or prove mathematical theorems. Those subjects are simple enough to be reduced to rules. Our general knowledge of the world—“common sense”—is much more elusive and complex. When we see, for example, how do we decide where an object stops and another object begins? Why do we see the buttons on a shirt as, in a sense, a part of it and, in another sense, as distinct objects? The answers to such questions are hardly clear.

Despite our ignorance of the fundamentals, we can build devices that partially simulate human senses and permit robots to cope with unexpected change; Shakey, for example, could “see” his way through a room, though only a very simple room with very simple objects. Not all applications really benefit from such abilities. Robots that merely have to move from point to point on a sheet of metal, make a weld, and go on to the next point—spot welding, the most important single application in robotics—do not need senses. If, however, the weld must be continuous, as in arc welding, the robot must be able to “see” its way along the surface.

So far, computer senses fall very short of our own. A human being can generate vastly more sensory information than any robotics system can and process it about a thousand times faster—all this in a small, attractive package that moves under its own power and reproduces itself. The deficiencies of present-day machine vision are typified by the so-called bin-picking problem: getting a robot to pick out a particular item from a bin packed with various kinds of hardware. Although systems designed at the University of Rhode Island, among other places, have actually done so, they are neither fast nor reliable, in part because today’s computers lack sufficient speed and memory. When an acceptable solution emerges, almost every industrial operation will be susceptible to robotics, and the researchers, including some in Minsky’s collection, tell us that the end is within sight.

Of course, predictions of this sort have been made before. Yet there is a difference. Little research on artificial intelligence had been done in 1956, so the extravagant claims made at Dartmouth reflected nearly complete ignorance. The predictions of 1985 reflect a quarter century of research into artificial intelligence and of experience with products that incorporate it. The most fundamental problems of artificial intelligence—teaching computers to understand “natural” languages, such as English, for example—are probably nowhere near to solution. A robot that can make breakfast may be twenty or so years distant. Yet the day when robots will be capable of manufacturing almost all significant industrial products is not far off. The remaining uncertainties will be economic, for in some cases a robot will continue to be more expensive than a human worker even if the two can perform a task equally well.

In 1970, the United States, then the undisputed world leader in the design, production, export, and use of robots, had about two hundred of them installed in its factories. The entire world had only a few times as many. Throughout the decade the absolute numbers continued to be almost contemptibly small. The rate of increase was very high, however, and it is no coincidence that between 1970 and 1980 GM’s wage bill soared by 240 percent while the cost of operating a robot stabilized at $5 or $6 an hour.

Perhaps 40 percent of the eight thousand or so robots in use here by 1982 had been installed in automobile plants. Foundries, many of them owned by auto companies, came next, with about 20 percent of all robots, followed by light manufacturing (notably of plastics, food, drugs, and cosmetics), the electronics industry, and the aerospace industry. Virtually all robots in use at that time were “first-generation” models, which lacked sensory input but nonetheless helped the automobile manufacturers expand their total output by 15 percent between 1980 and 1983. In the same period their production work force fell by four thousand. Spot welding, which occupies 35 to 45 percent of all robots installed in the United States, was and is the commonest application; the handling of materials, arc welding, and paint spraying accounted for 25 to 30 percent, 5 to 8 percent, and 8 to 12 percent of all US robots, respectively.

As of 1985 there are some 16,000 industrial robots in the United States. The auto business owns from 7,000 to 8,000 of them, just under half of all units in the United States. Its share is larger today than in 1982. GM, for example, used 300 robots in 1980, has about 5,000 now, and plans to buy an additional 15,000 or more by 1990. Although the proportion of all units installed in other industries has consequently fallen since the start of the decade, the absolute numbers have shot up from 2,800–3,000 in 1980 to 8,000 or perhaps 9,000 at present. According to estimates by Wassily Leontief and Faye Duchin in their new book, other industries that make metal parts and machines “will vastly increase” the number of robots they use to change tools and handle materials.3 Light manufacturing is buying many more robots as well.

The most significant new developments are taking place in the electronics industry, where robots are now widely used to assemble finished goods. About nine tenths of Apple’s Macintosh computer, for example, is assembled automatically—in part by equipment purchased from IBM. This astonishing feat is of deep importance. Welding and painting occur in many industries but the assembling of machines and other products is much more widespread, and it accounts for the largest single share of industrial workers and manufacturing costs. The experts agree that by the middle of the next decade it will be the most important application in robotics. In the meantime, assembly already occupies nearly 20 percent of the robots in Japan, where some electronics manufacturers claim that they have automated one half to three quarters of their assembly operations.

It will take time for robotic assembly to become widely diffused. Wassily Leontief and Faye Duchin suggest that even by the year 2000 the “electronic revolution” as a whole may “be no more advanced than the mechanization of European economies” was in the year 1820, when it had hardly begun to spread from mines and cotton factories.4 In the meantime, the sixty or so American companies that produce robots will have downs as well as ups; at present, for example, it appears that only seven of these companies made a profit in 1984, a year in which total sales rose by more than 50 percent, to about $330 million. In fact, many companies have left the business, and many more are expected to follow them, although Arthur D. Little, Inc., in a widely quoted forecast, has predicted that by 1992 the worldwide market will reach $2 billion—50,000 robots a year, more than existed in every factory on earth as recently as 1984.

At present, too many shaky ventures are trying to sell essentially the same products at a time when making and selling robots is ceasing to be a game for small companies. In 1980 six middle-sized companies shared almost 95 percent of a $90 million market for robots. In 1983 their share had fallen to 53 percent.5 Unimation, still the biggest producer of robots, was purchased by Westinghouse. Similarly, General Electric simply bought up an existing company. General Motors, as always the largest consumer of robots, set up the GMF Robotics Corporation, a joint venture with Japan’s Fanuc Ltd., in 1982. Bendix, Renault, Volkswagen, and United Technologies have dealt themselves into the game as well, and so has IBM.

These large companies are making such investments because they know something that the rest of us do not. They know that whatever may be happening at any particular moment, robotics, like the steam engine and electricity, is destined to be part of an industrial revolution. This Third Industrial Revolution will fuse design, manufacture, and marketing into a single stream of information that will eventually permit us to automate just about anything we do not want to do ourselves.

We are nowhere near that point, but a few companies are moving to implement what they call “computer-integrated manufacturing.” For example, the salespeople of McDonnell Douglas Corporation can send an order for a part directly to a computer-aided design system. This dispenses with the services of the old-fashioned draftsman by permitting the engineer who develops the part to make a freehand sketch on a cathode-ray tube linked to a computer that automatically transforms it into an electronic blueprint which can be revised endlessly. If the part can be turned out on numerical-control machines—computer-controlled machine tools, similar, in essence, to robots—a system designed by the company itself can take the finished drawing and automatically write a program to make the part. (Thanks to this system and others like it, the demand for programmers may not rise dramatically, by the way.) Then, of course, the item is made—automatically.

Meanwhile other computers at McDonnell Douglas concurrently update the inventory, keep sales records, and the like, while upper management has instant access to whatever information it wants. The need constantly to enter and re-enter the same data—at the point of sale, in the engineering department, the drafting department, the production department, the inventory control department, the billing department, the accounting department, and so forth—is largely eliminated. At present, the most advanced systems are very prone to break down, and only about a dozen have been installed successfully. Yet managers and design engineers regard computer-integrated manufacturing as a sort of ideal and some of them will undoubtedly continue to pursue it.


This quest for the factory of the future—the fully automated factory—is the subject of Harley Shaiken’s interesting and important Work Transformed: Automation and Labor in the Computer Age. Shaiken, a former machinist, is currently a research associate in MIT’s Program in Science, Technology, and Society. On the one hand, he says, the “engineers, mesmerized by high technology, veer off toward complex systems as a challenge rather than holding to simpler more effective approaches” to automation. On the other, management seeks “to bypass human input at almost any price,” hoping to dispense with the annoyances of dealing with human workers; it therefore favors delicate, disaster-prone systems of “breathtaking complexity.” The Caterpillar Tractor Company, for instance, bought one of the earliest integrated manufacturing systems in 1971 and then had to spend four years getting it to work at all; for several years thereafter it was out of order 60 to 80 percent of the time. Such disasters suggest to Shaiken that extreme complexity is not a matter of economic or technical rationality “but of power and political choice.”

Less politically committed experts agree with Shaiken that excessive complexity constantly bedevils efforts to implement computer-integrated manufacturing. Meanwhile, less advanced arrangements, like those used on Wall Street to feed customer orders to programs that automatically send out invoices and compile statistical reports, are keeping down the number of computer keyboard operators and giving top executives immediate access to knowledge of what is being produced and how.

Until recently, this sort of knowledge was controlled by middle management—the people actually running our factories and offices, many of whom collect information, analyze it, and make decisions about it. In the future some of these decisions will be made at corporate headquarters, and some by “expert systems,” artificial-intelligence programs that reduce bodies of knowledge to a set of rules, apply them, and thus make it possible for employers to “put expertise in the hands of less-trained, lower-salaried workers.”6 The jobs of executive assistants, financial analysts, production and inventory controllers, and researchers will be particularly threatened.

They will not be alone. In Shaiken’s words the one great truth of the matter is that “unlike other technologies…which increase the productivity of a worker, the robot actually replaces [italics in original] the worker.” That indeed “is one of the prime tasks for which robots are built” (italics in original), as Peter Scott, a roboticist at the Imperial College of Science and Technology in London, bluntly puts it in his text, The Robotics Revolution.

Not long ago the tomato growers of California hired 40,000 migrant workers a year to pick their crop. Then they started using a robot called the Tomato Harvester, and by the start of the 1980s they required only about eight thousand laborers to pick a crop three times as large. This was a fairly difficult application, too, for the modern commercial tomato, though hard, is less hard than most of the objects that robots manipulate, and tomatoes in general tend to be irregular in shape and to grow at unpredictable locations on the vine.

These are exceptionally dramatic results. The case of General Motors has greater importance, first, because it is more typical and, second, because GM bought the earliest robots and probably knows more about using this technology than anyone else in the US. The mainly first-generation robots it was buying at the start of the 1980s on average eliminated 1.7 jobs—and 2.7 jobs in plants that functioned round-the-clock—figures that include all new positions created by robotics. In 1981 the company declared its intention of purchasing 20,000 additional robots over the coming decade, so more than “40,000 workers could be displaced at GM by this technology alone.”7 For the most part, they will be yielding their places to first-generation robots, which can perform only a limited number of industrial operations and displace many fewer workers than their second-generation counterparts.

Sometimes the possibility that jobs will be lost becomes apparent only gradually. In 1969, for example, McDonnell Douglas installed numerical-control machines in one of its plants. Each was operated by a single machinist. Not until 1977 did the company try to change the rules. Then, Shaiken tells us, workers “were asked to operate a second machine for short, intermittent periods, say, while another operator went to lunch or on a break. After this proved successful, longer and more complex assignments were made.” The union finally challenged these practices, but in the end an arbitrator ruled for management, which in effect won the “right to eliminate half the machine operators.”

A British study suggests that the number of jobs lost in all UK industries has so far averaged 2.5 for each robot (almost all of them first-generation models), as compared with about four fifths of a new job created to manufacture and service it. Researchers in Michigan estimate that by 1990 the United States will have lost 100,000 to 200,000 manufacturing positions to robotics. This may not seem a terribly high figure for a period of thirty years until you consider that these jobs will be lost mostly in the automobile industry and mostly to first-generation robots performing traditional applications, like welding and painting. West Germany’s Commerzbank—perhaps influenced by Volkswagen’s prediction that “second-generation” robots equipped with sensors will soon perform 60 percent of all work on cars—believes that half the jobs held by the country’s 1.2 million production-line workers might be at risk. Each of these second-generation units, Commerzbank believes, will replace at least five people, perhaps ten. And as we have seen, the fact that these second-generation robots can be used in assembly means that such losses will not be confined to a few industries, as they are at present.

Studies like these, however, prove nothing decisively, if only because some of them fail to distinguish between what is technically possible, on the one hand, and what is economically rational, on the other. General Electric, for instance, estimated in 1981 that robots could take over half the 37,000 jobs in its appliance division, but it later insisted that many of them could still be performed more cheaply by human beings. The truth is that specific predictions about the spread of robotics are speculative, perhaps wildly speculative. Yet the general conditions, technical and economic, that will shape the market are fairly clear.

First, robots will cease to be concentrated largely in the automobile industry. The use of second-generation robots in assembly is only part of the explanation for this. No less important is the fact that we now have a generation of practical experience with robotics. Back in the early 1960s, each company that bought a robot had to devise its own production techniques and train its own experts, at enormous trouble and cost. Failure was quite common, much as it is today in attempts to implement computer-integrated manufacturing. Only large and rich corporations that already had a lot of automation know-how would take the chance, and as we know, they were mainly in the car business. Early applications therefore concentrated on its problems and processes, bypassing most others. Even so, these applications created a base of knowledge that cut the risk of failure and made it possible for additional industries to use first-generation robots. Within ten years, companies that wish to install second-generation robots will be able to draw on a comparable body of knowledge, which will make it much easier to set up robotized assembly systems than it is today.

Second, the resistance to the spread of robotics will come from corporate accounting departments, not displaced workers. Financial officers typically demand that every investment break even quickly. Often, the required payback period, formulated years ago under the influence of the older kind of automated equipment, which must be scrapped with every change in products, is just too short for robotics. The same robot can make a variety of products and may therefore be useful through any number of product switches. When the advantage becomes clear, the accountants will probably reconcile themselves to a longer payback period. They are already under pressure to do so.
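The accountants’ calculation can be sketched in a few lines. The figures below are hypothetical except for the $6-an-hour operating cost, which appears earlier in this essay; the purchase price, loaded wage, and annual hours are assumptions for illustration only.

```python
# A hypothetical payback calculation of the kind a financial officer might run.
# Only the $6-an-hour operating cost comes from the text; the purchase price,
# loaded wage, and annual hours are illustrative assumptions.
robot_price = 60_000         # purchase price in dollars (assumed)
robot_cost_per_hour = 6      # operating cost per hour (from the text)
worker_cost_per_hour = 18    # loaded human wage per hour (assumed)
hours_per_year = 4_000       # two shifts a day (assumed)

annual_saving = (worker_cost_per_hour - robot_cost_per_hour) * hours_per_year
payback_years = robot_price / annual_saving
print(payback_years)  # 1.25 years under these assumptions
```

On these numbers the robot pays for itself in fifteen months; demand a one-year payback, as accountants trained on hard automation often do, and the same machine is rejected even though it could serve through several product changes.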

Third, the accountants will reconcile themselves to robotics even if they cling to their present unrealistically stringent standards. For as the volume of robots being produced rises, and robots themselves come to be produced by robots (already happening at Japan’s Fanuc Ltd.) their cost will go on rising more slowly than the cost of the labor they replace—about three times more slowly during the 1980s, according to GM, whose chairman, Roger Smith, claims that each one-dollar-an-hour pay increase makes it profitable for the company to install one thousand additional robots.

Finally, robotics may become essential in the finicky and rapidly changing markets emerging in most consumer industries,8 since preparing a robot to make a new product is often just a matter of changing one floppy disk or cassette tape for another. At this point robots will be hard to resist.

Many aspects of robotics are surely debatable, but not the identity of its principal victims: the industrial working class and those who will be trying to join it in twenty to thirty years. Most of its present members—some 20 million of our fellow citizens, about a fifth of the total work force—may well avoid permanent joblessness, for companies that install robots usually attempt to reabsorb their employees. But whatever may happen to them, even the optimists implicitly expect this sort of employment to be well on its way to extinction in a generation or so.

The optimists, however, remind us that overall levels of employment will not fall if robotics creates more jobs than it displaces, or if new service jobs come into existence for other reasons, or if the size of the work force declines sufficiently. Unfortunately, none of these conditions will be satisfied. The size of the work force is expected to increase, not decline, through the year 2000. The service industries—transformed by centralized data-processing systems, word processors, electronic scanners, computer-aided design equipment, expert systems, and the like—may in future employ fewer Americans even if their share of our economy continues to expand, as it no doubt will. In fact, Leontief and Duchin, who suggest that three quarters of a million managers and five million clerical workers may find themselves technologically unemployed by 1990, fear the impact of office automation much more than they fear robotics.

There remains the hope that robots themselves will create a substantial amount of employment, perhaps more than they eliminate. Isaac Asimov, who in 1942 coined the term “robotics,” and Karen A. Frenkel insist that “history makes it plain that advancing technology is, in the long run, a job creator and not a job destroyer.” Yet their case, in Robots: Machines in Man’s Image, rests on a single example, and that a fallacious one: the internal-combustion engine, a disaster for blacksmiths and buggy makers but the source (as they see it) of “a far greater number” of jobs in the automobile industry. This historical parallel is not enlightening. Unemployed blacksmiths and buggy makers could turn to an economy that required millions of manual workers, though not millions of blacksmiths and buggy makers.9 Robotics is different: It is specifically designed to cut the need for labor, and its effects will eventually be felt everywhere.

History provides no true parallels to the advent of robotics, and thus no true grounds for comfort. What about the present and the immediate future? One of the authors under review, V. Daniel Hunt, claims that, for the US as a whole, during some unspecified time period,

only 6 percent of displaced workers can expect to be terminated [as a result of robotics]. This figure represents a maximum of 20,000 individuals. New jobs created by factory automation and resultant service industries are expected to number from 70,000 to 100,000. The new technology [robotics] will therefore add 50,000 to 80,000 [net openings].

Hunt’s reasoning is flawed. To begin with, no corporate accounting department would authorize the purchase of robots if it were told that the net effect would be to increase the size of the work force. Even if one did, Hunt’s predictions would still be impossible, even granting his basic figures. The figure of 20,000 does not, as he implies in his arithmetic, represent the total number of jobs eliminated by robotics; it is the number of people who will not be able to transfer to new jobs. Twenty thousand workers are said to represent 6 percent of those displaced by robotics, so the total number, using Hunt’s projections, will amount to some 330,000 workers. “The new technology,” as he calls it, will therefore eliminate some 310,000 jobs, not create 80,000 of them. The point I am making here is not that the estimate of 310,000 jobs is correct; it is merely that neither Hunt nor anyone else has shown that “the new technology” will add jobs to our economy. Besides, should Hunt be right in suggesting that there will be many new openings for repair technicians, we can be sure that the push will be on to design robots that can fix robots.
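The arithmetic being unpacked here can be sketched in a few lines. The input figures are Hunt’s; reading the 310,000 as the displaced total less the 20,000 terminations is one plausible reconstruction of the calculation, not something Hunt states himself:

```python
# Hunt's figures, as quoted above.
terminated = 20_000          # workers who cannot transfer to new jobs
share_of_displaced = 0.06    # Hunt: they are 6 percent of all those displaced

# If 20,000 is 6 percent of the displaced, the displaced total is:
displaced_total = terminated / share_of_displaced
print(round(displaced_total))  # ~333,000 -- "some 330,000 workers"

# Jobs eliminated beyond the terminations themselves -- roughly the
# 310,000 figure cited in the text:
print(round(displaced_total) - terminated)
```

The point of the sketch is simply that Hunt’s claimed net gain of 50,000 to 80,000 jobs compares his 70,000–100,000 new jobs against the 20,000 terminations, not against the far larger pool of jobs his own percentage implies were eliminated.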

Of course some of the people who get the sack—perhaps, as Hunt thinks, most of them—will find other employment. Yet cushioning the fate of the present generation of industrial workers will do nothing to change the fact that the number of industrial jobs will fall sharply. If Commerzbank is right in suggesting that second-generation robots might displace as many as ten workers apiece, the deficit will eventually be enormous; and it will become even more so as third-, fourth-, and fifth-generation units appear. New entrants into the job market will find steadily fewer industrial jobs, so that retraining today’s industrial workers, as Scott rightly says, will merely shift the problem “from the company level to the national level…[many of] those who would otherwise have joined the firm to replace the workers who had retired” will now have no such positions to fill. And as Marvin Minsky admits, “it’s really no one’s job” to worry “about the welfare of workers not yet born.”

Shaiken and Scott are intensely aware of these truths. The other authors under review tend to look beyond them, at the distant future. Perhaps by then the difficulties really will be resolved; I am enough of an old-fashioned liberal to respect Engelberger’s belief that in the long run “any gain in productivity is always good.” Robots, however, can produce dross as well as gold, and in the short run Engelberger himself concedes that “there will be dislocations and there will be pockets of distress and unrest.” His assurance that the long-term “benefits to society will overwhelmingly exceed the costs” to the victims is disturbingly reminiscent of the claims made on behalf of certain foreign dictators.

In reality, the engineers and sci-fi writers who currently monopolize knowledge of robotics want to have fun building robots and speculating about them. They do not regard the consequences of their ingenuity as their particular responsibility, and this attitude promotes a certain detachment: One study actually asks, “If it is unseemly for a civilisation to be founded on slavery, is it not also unseemly for a civilisation to be founded on work which is so far below the abilities of those who perform it?”10 It would be easier to take such questions seriously were those who posed them threatened by redundancy.

Minsky, for his part, describes the technologically unemployed as a “special interest group” whose members demand “that their needs take precedence over efficient” production. “Better,” they say—according to Minsky—“that everyone suffer or die” than that their own interests should suffer. This mean little argument hardly does credit to one of the leading computer scientists at MIT. Are the technologically unemployed in a position to insist that everyone else suffer and die on their behalf? Do they claim that their interests should come before all others, or merely that their interests not be ignored? Besides, if their point of view is self-interested, is it more so than that of the companies turning to robotics?

The idea that robots will create rather than eliminate jobs is only one of two fundamental illusions in the field. The other is the belief that robots will necessarily liberate humanity from “hazardous and demeaning work” (Hunt’s phrase). In fact they will create as well as eliminate a lot of mind-numbing toil because the engineers who design robots try to ensure that they make use of the cheapest human labor possible, if they use any at all. American Machinist (quoted by Scott) reports that a twenty-eight-year-old retarded man runs the numerical-control machines installed at a shop in Lincoln, Nebraska, “because his limitations afford him the level of patience and persistence” necessary for the position. Many workers in the factory of the future will do nothing but “bring parts to the robots and then take them away again,” as Scott puts it, and the pace at which they do so will be monitored electronically.

Organized labor, as Shaiken rightly notes, resembles other American institutions in assuming that robotics is “inevitable in any case,” so relatively few attempts have been made to control automation through negotiation. Nor is that likely to change, for the unions know very well that robotics is not the sole threat to their membership: moving production to low-wage third world countries is often cheaper and easier. (Japan’s company unions, incidentally, seem to share the same despairing perspective. That country’s garment industry “is close to developing” a system capable of producing shirts and jackets without human intervention. The Japanese textile union’s director of industrial policy says the system “will have a tremendous impact on employment” but is “absolutely essential for survival. Frankly, we’re somewhat at a loss about what to do.”11) In almost any conceivable circumstance, a weakened union movement will constantly be forced to choose between losing jobs to robots and losing them to foreigners. Mere survival will be difficult.

“Direct action” against automation has been no more successful than collective bargaining. In 1975, for instance, the printers’ union struck The Washington Post, which had installed automatic presses; ten of them were vandalized and the strike petered out soon thereafter. This state of affairs reflects passivity, not intelligent acceptance of the future. Even if no shop-floor resistance should ever emerge, it would hardly be astonishing if a huge and permanent workless class turned to something roughly like British soccer violence—street crime, perhaps.

The optimists insist that middle-class work will continue to be available. But the present state of our educational system makes one doubt that many members of the workless class will really be prepared for it. Besides, what middle-class work will they turn to? By the early decades of the twenty-first century, as industrial labor is disappearing, the kind of middle-class employment that consists chiefly of gathering information and making routine decisions will be under pressure as well. Who knows what new kinds of work might emerge? If artificial-intelligence enthusiasts like Minsky are correct, the very concept of work will be economically meaningless within a couple of generations. In any event you do not have to be an enthusiast to see that the number of middle-class positions is not going to rise sufficiently to accommodate the workless class, if it rises at all.

Artificial intelligence, moreover, is quite real. Not in a generation, and perhaps not in two—but not in the impossibly distant future, either—most kinds of work we now do will indeed be economically meaningless. If few of us had skills that anyone cared to hire, our economy would on the face of it support neither consumption nor production—not without major changes, at any rate. We might have to rethink our most fundamental institutions, not because we had any desire to do so, but because those institutions had been overtaken by events.


October 24, 1985