Work Transformed: Automation and Labor in the Computer Age
The Robotics Revolution: The Complete Guide for Managers and Engineers
Robots: Machines in Man’s Image
Smart Robots: A Handbook of Intelligent Robotic Systems
During the 1960s, researchers at a company now called SRI International built a mobile robot. “Shakey,” a wheel-driven cart that carried a television camera and a radio link to a separate minicomputer, was designed to move in response to programmed commands. When he received instructions to go from one point in a room to another, for example, the camera would form images of the robot’s immediate vicinity. Like printed photographs, such images consisted of dots representing the presence or absence of light. This arrangement of dots was fed into the minicomputer, which “digitized” it—that is, transformed it into a sequence of ones and zeros. A computer program for recognizing visual patterns compared the new sequence with others previously stored in the computer’s memory. If the computer found a match, it “understood” the robot’s position in relation to surrounding objects. This information could then be used by other programs devised to solve simple problems.
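The digitize-and-compare scheme described above can be sketched as a toy template match. This is only an illustration of the idea, not Shakey's actual software: the stored patterns, the bit sequences, and the similarity measure are all invented.

```python
# Toy illustration of the pattern-matching idea: a digitized image
# (a sequence of ones and zeros) is compared against stored templates,
# and the closest match "identifies" what the camera sees.
# All names and data here are invented for illustration.

def similarity(image, template):
    """Fraction of positions where the two bit sequences agree."""
    matches = sum(1 for a, b in zip(image, template) if a == b)
    return matches / len(template)

def recognize(image, templates):
    """Return the name of the stored template most like the image."""
    return max(templates, key=lambda name: similarity(image, templates[name]))

stored = {
    "box":  [1, 1, 1, 1, 0, 0, 0, 0],
    "ramp": [1, 1, 0, 0, 1, 1, 0, 0],
    "wall": [0, 0, 0, 0, 1, 1, 1, 1],
}

camera_input = [1, 1, 1, 0, 0, 0, 0, 0]  # nearly matches "box"
print(recognize(camera_input, stored))    # prints "box"
```

Real machine vision, then and now, is far harder than this sketch suggests, which is part of why Shakey needed an hour to find his way around a room.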
At first Shakey was merely told to push boxes about a room. In 1971 a second, more advanced version of the robot was placed near three other objects: a tall platform, a box lying directly upon it, and a ramp several feet away. Shakey was then ordered to knock the box off the platform, but at first he failed even to reach it. Next, however, he nudged the ramp over to the platform, scampered up the one and onto the other, and coolly pushed the box to the floor. In effect he had “figured out” that the ramp was a means to his end.
Shakey was slow—he spent an hour or so identifying the box and the ramp. His world consisted entirely of large, smooth boxes and clean, smooth walls; anything else literally blew his mind. Unlike most robots, which include mechanical arms that can manipulate objects with some degree of skill, Shakey could only move from place to place. Yet in a sense he could “see,” and in a sense he could “think.” Most industrial robots can do neither.
They don’t have to. Much of what goes on in a factory involves picking things up, moving them, and putting them down. Almost all work of this kind can already be done by robots, though in most cases human labor is cheaper. We cannot be quite sure how long human beings will continue to enjoy that advantage. But we do know who will be most affected when they lose it—in some cases, to improved versions of robots like Shakey. The welders, painters, machinists and toolmakers, machine operators, inspectors, and industrial assemblers of our society—in short, most of the industrial working class—will be facing the end of the road in the not-too-distant future. If we pretend that this transformation will automatically create new jobs for the men and women it displaces, we will probably end up with a vastly expanded underclass, not a vastly expanded pool of computer programmers.
Unlike the steam engine, the spinning mule, and the power loom, robots were conceived long before they were invented. Yet the writers and inventors of the past had in mind what today would be considered an automaton, a mechanical “self-moving machine” like Pierre Jaquet-Droz’s “mechanical scribe” (1774), a precursor of the humanoids that have been so much on the public mind since R2-D2 appeared in Star Wars. It was to this kind of automaton that Karel Čapek applied the Czech word robota (forced labor) in his 1920 play R.U.R.
Joseph Engelberger, a contributor to Marvin Minsky’s collection Robotics, was the man who decided that robots should be something else. Late in 1956, at a party, he ran into George Devol, a fellow engineer who had earlier hit on the basic idea of using computer components to control a mechanical arm. Devol, according to Engelberger, “waved his hands a lot and said, ‘You know, we ought to realize that 50 percent of the people in factories are really putting and taking.’ ” He had therefore invented what he called a “programmed article-transfer device.” Engelberger was impressed by Devol’s idea and persuaded his employers to try it out. He had another name in mind, however. “The word is robot,” Engelberger said, “and it should be robot. I was building a robot, damn it, and I wasn’t going to have any fun…unless it was a robot.”
During the summer of 1956, ten other researchers, including Minsky, one of the earliest experts on artificial intelligence, had met at Dartmouth College to speculate on the future of thinking machines. Among other things, they predicted that within a single generation humanity would no longer have to work. Engelberger soon launched his own company with himself as chief executive. The business, currently called Unimation Incorporated, did not show a profit until 1975.
In 1961 Unimation marketed its first commercial robots, the famous “Unimates.” These were mechanical arms that carried out instructions that were lodged in a computer’s memory;[1] the operator could change the instructions quite easily by changing the programs that recorded them. In the United States, only such “programmable” robots are regarded as the genuine article. Japan’s much broader definition includes many less sophisticated devices and therefore permits that country to claim that it has more robots than the rest of the world combined. It does not, though it does in fact lead the world.
General Motors, Unimation’s earliest and largest customer, used its first robots for die casting, the nasty work of pouring molten metal into steel dies and removing the red-hot auto part. The technique usually involves relatively small production lots, or “batches,” as opposed to mass production, on the one hand, and the manufacture of individual items, on the other. Companies that turn out unique products one by one, in the manner of a Stradivarius, generally find robotics pointless because it may be necessary to write a new program for each item. Mass production—the production of millions of identical copies of a single part or group of parts—usually justifies traditional “hard” automation: machines specifically designed to perform one task, and that only. Such machines are usually more expensive at the outset than robots, but since they are faster and better suited to the task at hand, their eventual cost is lower if it can be spread over a larger volume. It is goods produced in batches—representing three quarters of manufacturing’s contribution to the US gross national product—that are particularly suited to robotics, for their low volumes rarely justify the higher initial costs of hard automation.
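The cost logic behind that division—hard automation for mass production, robots for batches—can be made concrete with a back-of-the-envelope comparison. Every figure below is invented for illustration; the point is only that a high fixed cost pays off at high volume, while a cheaper but slower reprogrammable machine wins at low volume.

```python
# Back-of-the-envelope break-even between "hard" automation (high fixed
# cost, low per-unit cost) and a reprogrammable robot (lower fixed cost,
# higher per-unit cost). All dollar figures are invented for illustration.

def cost_per_unit(fixed_cost, unit_cost, volume):
    """Average cost per unit once the fixed cost is spread over a run."""
    return fixed_cost / volume + unit_cost

hard  = {"fixed_cost": 500_000, "unit_cost": 0.50}  # dedicated machine
robot = {"fixed_cost": 100_000, "unit_cost": 1.50}  # slower, but flexible

for volume in (50_000, 200_000, 1_000_000):
    h = cost_per_unit(hard["fixed_cost"], hard["unit_cost"], volume)
    r = cost_per_unit(robot["fixed_cost"], robot["unit_cost"], volume)
    winner = "hard automation" if h < r else "robot"
    print(f"{volume:>9,} units: hard ${h:.2f}/unit, robot ${r:.2f}/unit -> {winner}")
```

With these assumed numbers the robot is cheaper up to 400,000 units and the dedicated machine cheaper beyond that—which is why batch production, not mass production, is robotics’ natural territory.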
At $40,000 to $100,000 apiece, the earliest robots were costly themselves. Average operating expenses of $6 an hour made them hardly cheaper than a human being. Yet they were sometimes economical, for programs could easily be written to handle new batches and then called up as needed. Besides, as T.A. Heppenheimer, one of Minsky’s contributors, points out, they “didn’t get bored, take vacations, qualify for pensions, or leave soft-drink cans rattling around inside the assembled products. They…would accept heat, radioactivity, poisonous fumes, or loud noise, all without filing a grievance.” Furthermore, they could work round the clock without malingering, going to the toilet, or blowing their noses; they were therefore more productive than any human worker, and one man or woman could often supervise several robots. The increase in output per hour was potentially enormous.
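The productivity arithmetic here is easy to sketch. Only the $6-an-hour operating cost comes from the text; the human wage, shift length, and working weeks below are assumptions chosen for illustration.

```python
# Rough productivity arithmetic for the comparison above. The $6/hour
# robot operating cost comes from the text; the wage, shift length,
# and working weeks are assumed figures for illustration only.

ROBOT_COST_PER_HOUR = 6.00
HUMAN_WAGE_PER_HOUR = 6.00            # assumption: roughly comparable

robot_hours_per_year = 24 * 365       # works round the clock
human_hours_per_year = 8 * 5 * 50     # one shift a day, 50 weeks a year

print(f"robot: {robot_hours_per_year:,} working hours/year")
print(f"human: {human_hours_per_year:,} working hours/year")
print(f"one robot covers ~{robot_hours_per_year / human_hours_per_year:.1f} workers' hours")
```

At comparable hourly cost, a machine that never stops delivers roughly four times the annual hours of a single-shift worker—before counting the fact that one supervisor can tend several robots at once.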
Nonetheless, robots were too expensive and exotic for most purposes. During the late 1950s Del Harder, the Ford Motor Company’s manufacturing chief, looked at Engelberger’s specifications for a robot and said, “We could use two thousand of them tomorrow.” But he did not order two thousand Unimates, and no more than thirty had been sold by 1964. The pace was slow because only the car companies, which regarded themselves as leaders in the field of automation and also had unusually high wage bills, would experiment with robots. General Motors, which bought the very first units, had taken the trouble to find out that nine out of ten automobile parts weigh less than five pounds—a significant discovery because robot arms could not carry heavy loads. GM assumed that in the future “a three-thousand-pound car would probably remain the sum of many small parts,” most of which changed each year. Since it appeared that these parts might eventually be produced by robots, robots would eventually make economic sense.[2] Outside the automobile industry, however, their very existence was something of a secret.
By the mid-1960s it was clear that the Dartmouth predictions of 1956 had been wildly exaggerated. Neither robots nor computers in general were about to produce an age of universal leisure. Unhappily, this truth begot a new, more dangerous myth: the idea that since the advent of the computer had coincided with rising levels of employment, it had in some sense caused them as well. That fallacy in turn gave rise to the popular notion that new technologies by nature create more jobs than they eliminate. In fact, robotics had on balance created no jobs at all, and never will, although this seemingly obvious point (discussed below) was and still is concealed by mystification. During the 1960s the number of jobs rose steadily for reasons that had nothing to do with computers and robots, which in any case were very rare by current standards.
Unlike Shakey, the earliest robots could neither sense changes in their surroundings nor respond to them. Those surroundings therefore had to be controlled with a precision that was unattainable or uneconomic in most industrial processes. To become more useful, robots needed something at least distantly comparable to human senses.
They got it, but slowly, for this turned out to be one of the deepest problems in artificial intelligence, far harder than getting computers to play chess or prove mathematical theorems. Those subjects are simple enough to be reduced to rules. Our general knowledge of the world—“common sense”—is much more elusive and complex. When we see, for example, how do we decide where an object stops and another object begins? Why do we see the buttons on a shirt as, in a sense, a part of it and, in another sense, as distinct objects? The answers to such questions are hardly clear.
Despite our ignorance of the fundamentals, we can build devices that partially simulate human senses and permit robots to cope with unexpected change; Shakey, for example, could “see” his way through a room, though only a very simple room with very simple objects. Not all applications really benefit from such abilities. Robots that merely have to move from point to point on a sheet of metal, make a weld, and go on to the next point—spot welding, the most important single application in robotics—do not need senses. If, however, the weld must be continuous, as in arc welding, the robot must be able to “see” its way along the surface.
So far, computer senses fall very short of our own. A human being can generate vastly more sensory information than any robotics system can and process it about a thousand times faster—all this in a small, attractive package that moves under its own power and reproduces itself. The deficiencies of present-day machine vision are typified by the so-called bin-picking problem: getting a robot to pick out a particular item from a bin packed with various kinds of hardware. Although systems designed at the University of Rhode Island, among other places, have actually done so, they are neither fast nor reliable, in part because today’s computers lack sufficient speed and memory. When an acceptable solution emerges almost every industrial operation will be susceptible to robotics, and the researchers, including some in Minsky’s collection, tell us that the end is within sight.
[1] In other words, the arm was controlled by computer components set up solely to record and play back a robot's positions; it was not a general-purpose computer, which can perform many tasks.
[2] Asimov and Frenkel, p. 40.