There can be no doubt that high-speed electronic computers are starting to have a considerable impact on modern human society. Moreover, in future years our civilization may well be transformed almost beyond recognition, largely because of new developments in computer technology, many of which are under active consideration at the moment. There is one fundamental question, however, whose answer will determine the very nature of this transformation: Is the process of human thought itself the mere carrying out of a computation, or does human intelligence involve some ingredient that is in principle not possible to incorporate into the action of a computer, as we now understand that term? If all of our mental activity is indeed the effect of mere computation, albeit computation of undoubtedly stupendous complication, then eventually computers will be able to take over even those activities in our society that at present require genuine human intelligence—and our virtually inevitable fate is, in this view, that they will ultimately become our masters. If, on the other hand, our minds transcend the action of any computation in some essential way, no matter how complicated that computation might be, then we may expect that computers will always remain subservient.

There appears to be some tendency to regard the proponents of the different sides in this debate as being, respectively, “scientific” and “mystical.” Thus, if one is not prepared to go wholeheartedly in the direction of the “strong Artificial Intelligence” viewpoint (or strong AI, for short) that all human thinking, whether conscious or unconscious, is merely the enacting of some complicated computation, then one is in danger of being labeled “unscientific.” And once one has been browbeaten into accepting the strong AI view—no matter how reluctantly—one seems compelled to follow a route that the strong AI proponents have clearly laid out for us. As computers get faster, with bigger memory stores, and with more and more operations performed simultaneously, the moment will come when they will equal—and then race beyond—all human capacities. At that stage, humanity itself will have been superseded by its cleverest creations, the computer-controlled robots, and we shall be forced to surrender our superiority to them.

In his recent book, Mind Children, Hans Moravec, who is the Director of the Mobile Robot Laboratory of Carnegie Mellon University, sets before us his vision of what he considers to be, indeed, “The Future of Robot and Human Intelligence.” Moravec wastes no time considering the possibility that there might actually be alternatives to the strong AI viewpoint, and regards it as almost a foregone conclusion that what we do with our brains when we think is of necessity something that a modern computer could do, given an appropriate increase in computer power and capacity. Though the increases that he considers to be necessary might seem to the uninitiated formidably large (namely a thousandfold increase over the most powerful computers that exist today, and a millionfold increase in computer power over that which can be achieved by the computers at present employed in robotics research), Moravec considers that it will be a mere forty years before practical computer-controlled robots will achieve what he calls “human equivalence.”

Although more rashly optimistic predictions have been (and are still being) made by others—and often subsequently retracted—based on such considerations as the millionfold advantage in speed that present-day computers’ transistors have over the brain’s neurons, Moravec uses a more sophisticated and careful argument to obtain his own more “modest” estimate of a forty-year timescale. He considers that the computing action of a human retina—which must be considered as an outpost that is actually part of the human brain—has already been equalled by present-day computer simulations. From there he extrapolates, on the basis of the fraction that the retina is of the brain as a whole, and the extraordinary rate at which computer technology has grown since the early part of this century.
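To get a feel for the arithmetic behind such an estimate, suppose (a rough assumption of my own, not a figure taken from Moravec) that computer power doubles about every two years. A millionfold increase then fits his forty-year span almost exactly:

```latex
% Illustrative only: assumes computing power doubles roughly every two years.
\[
  10^{6} \approx 2^{20}
  \qquad\Longrightarrow\qquad
  20 \ \text{doublings} \times 2 \ \tfrac{\text{years}}{\text{doubling}}
  = 40 \ \text{years}.
\]
```

On this reading, the forty-year figure is less a prediction about brains than a statement about how long exponential growth takes to accumulate six orders of magnitude.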

It seems to me that if one is prepared to accept the strong AI point of view, then Moravec’s case might not be an absurdly unreasonable one—although too little is actually known about the action and “purpose” of various parts of the brain for such extrapolations to carry much weight. Lest one feel alarmed by the prospect of a computer takeover, however, Moravec attempts to put us at ease, and he asks us to accept this “inevitability” as being actually desirable. He suggests that if we cannot beat the computers, then we may as well join them. In one of his more horrific passages, evidently intended to put the reader at ease, he describes how a robot brain surgeon gradually replaces parts of your (the reader’s) conscious brain by transferring the action of these brain parts to working programs in small portions of a waiting computer. He continues:

To further assure you of the simulation’s correctness, you are given a pushbutton that allows you to momentarily “test drive” the simulation, to compare it with the functioning of the original tissue. When you press it, arrays of electrodes in the [robot] surgeon’s hand are activated. By precise injections of current and electromagnetic pulses, the electrodes can override the normal signaling activity of nearby neurons…. As long as you press the button, a small part of your nervous system is being replaced by a computer simulation of itself. You press the button, release it, and press it again. You should experience no difference. As soon as you are satisfied, the simulation connection is established permanently. The brain tissue is now impotent—it receives inputs and reacts as before but its output is ignored. Microscopic manipulators on the hand’s surface excise the cells in the superfluous tissue and pass them to an aspirator, where they are drawn away…. The process is repeated…. Layer after layer the brain is simulated, then excavated. Eventually your skull is empty, and the surgeon’s hand rests deep in your brainstem. Though you have not lost consciousness, or even your train of thought, your mind has been removed from the brain and transferred to a machine. In a final, disorienting step the surgeon lifts out his hand. Your suddenly abandoned body goes into spasms and dies. Your perspective has shifted…to a shiny new body of the style, color, and material of your choice.

I suppose that some may not find this vision as horrific as I do. Evidently Moravec himself even finds it to be an attractive prospect, so I suppose that others may also. Accordingly, if computer and micromanipulatory technology advances to a stage where such an “operation” has the appearance of practicality, volunteers would no doubt come forth prepared to submit themselves to it. But the action of many parts of the cerebral cortex (not to mention that of other regions of the brain) is notoriously subtle, and one could well imagine that it would be well-nigh impossible for the volunteer to notice the difference when a small part of his (or her) brain has been inactivated. Then, as more and more of his brain is actually removed, and as less and less of his conscious self remains, these differences might become even less noticeable to whatever of his mind remained; his awareness would thus pass away—whatever the waiting computer might be able subsequently to achieve.

No doubt the reader will infer, from my own comments above, that I do not myself support the strong AI position. It is a great pity that Moravec does not even indicate that the strong AI viewpoint is a controversial one and that it involves fundamental assumptions. It seems to me that there are at least three positions that one might reasonably adhere to on these issues:

(1) There is the strong AI position itself asserting that the action of the brain is indeed that of a computer and, moreover, that all conscious perceptions arise merely as manifestations of sufficiently elaborate computations being carried out, it being irrelevant what physical object is actually doing the computing, be it brain, electronic computer, or a system of cogs and wheels.

(2) There is the viewpoint (stressed particularly by John Searle of the University of California at Berkeley) that computation does not, in itself, evoke consciousness, and that simulation of the brain’s action by a computer would not give rise to mental phenomena such as awareness, pain, hope, understanding, or intentionality, but that nevertheless a simulation of the action of the brain would in principle be possible since, being a physical system, the brain ought to act according to some well-defined mathematical action—such as being governed by a family of mathematical equations which could be integrated by a sufficiently sophisticated and complicated computer program. Such a computer program could, in principle, control a robot in just the same way that a human could.

(3) As with the previous view, one may hold that computation does not evoke consciousness; but, moreover, the brain’s action involves ingredients—essential aspects of conscious thinking—that are of an essentially noncomputational nature, so that no adequate simulation of the conscious brain’s action would be possible using just a computer constructed according to principles that we understand today. On this view, any appropriate “simulation” would have to make use of the very physical action that underlies actual consciousness, and could not be effected merely by computation.

This last viewpoint I have tried to argue strongly for in a recent book, The Emperor’s New Mind (Oxford University Press, 1989). In fact, there are many mathematical procedures that are precisely determined but that are not computational in nature. It could certainly be the case that the mathematical laws that actually underlie the behavior of our physical world are of this noncomputational character. If the physical action that is made use of by our conscious thought processes accords with mathematical laws of this general kind, then it will indeed not be possible to simulate them merely by computation—which means by the action of a computer based on the principles that we understand today.

I can imagine that Moravec may not have been aware of the alternative (3) as a scientific possibility at the time that he wrote his book, but it is somewhat disconcerting that he does not even address the issues raised by (2), which Searle, in particular, has been promoting forcefully for a good number of years. In fact, if the viewpoint (2) is actually “correct,” but the supporters of (1) have their way in the development of robot technology, then the future of consciousness on this planet is a dismal one. According to (2), eventually computers will indeed be able to do better than we can—conceivably even within the forty-year period that Moravec sets out for us. They would take over from us, but they would not themselves be conscious. If we give up not only our authority but—as Moravec and some other strong AI supporters would have it—also our very bodies, then consciousness on this planet will have surrendered to being ruled by insentient robots.

If, as I believe myself, something more in line with viewpoint (3) turns out to be correct, then computers will never be able to achieve genuine understanding, insight, or intelligence, no matter how rapidly and powerfully they may be able to perform their computations. Although their role in modern society will almost certainly become an increasingly important one, human beings will still supply the guidance, the motivation, and the “being” of society. But if the strong AI position were to turn out to be correct, then ultimately a picture like the one painted by Moravec would have to be faced up to, even if his proposed span of forty years might be unrealistically short.

Moravec’s account is at its best when he authoritatively describes what computers and robot technology have actually been able to achieve to date (although his historical account of the development of computers is curiously remiss in not making any mention of the seminal work of Alan Turing on the nature of computation, or of the computers developed by Turing and others at Bletchley Park in England to decode German ciphers during the Second World War).

Among the actual technical innovations that Moravec vividly describes are “magic glasses,” which in their current experimental phase resemble goggles, or a helmet. These can convey to the wearer the visual images of a scene being transmitted by a TV camera elsewhere, thus making the wearer “feel” himself or herself to be present at that quite different location. The same could be done with hearing or the sense of touch, and also for the motor control that the wearer exerts. In this way the wearer’s “awareness” could have the appearance of being transported into a “robot” whose eyes are the TV camera and whose ears are microphones, their signals being transmitted back to the wearer by radio. This is not yet actually a robot, despite the fact that it would appear to be one, since its movements are being controlled by a human brain. According to Moravec, such glasses could even stage a fantasy world for a viewer by means of a “powerful computer that can generate realistic synthetic imagery, sound, and speech,” providing an accurate rendition of the fantasy landscape.

Moravec believes that the actual robots of the 1950s had behavior patterns that were a match only for bacteria, and that present-day robot control systems are on a par with those of spiders (though I should like to see a robot of a spider’s size spinning a web). His view is that all of these developments are in keeping with his forty-year timescale for “human equivalence.” He does not stress the fact that modern computers actually do very different things from animal brains. It seems to me that the notion of “equivalence” in this context is not a very appropriate one. No one would ask that a spider should be capable of performing the enormously complicated calculations that present-day computers frequently do in scientific work, for example. The spider’s nervous system is specially attuned to performing the very specific tasks that it needs to perform in order to keep itself alive and to help propagate its species. A general-purpose computer, in order to achieve what a spider’s nervous system can do, would have to be able also to achieve vastly more than a spider in other respects. To be “equivalent” to a spider in Moravec’s sense, the computer needs to be enormously superior to it.

This issue is addressed only to a partial extent by Moravec’s discussion of the distinction between “top-down” and “bottom-up” AI research. His top-down procedures are those that employ general-purpose computers, carefully programmed to perform specific tasks like “behaving like a spider.” His bottom-up procedures would involve computers that learn to do what they are supposed to do through “experience” and are not specially preprogrammed in detail—as is the case with current research into so-called neural networks (not specifically referred to by Moravec), in which the strengths of the computer “neuron” connections are gradually changed through such an automatic learning process.
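To make the bottom-up idea concrete, here is a minimal sketch of such automatic learning: a single artificial “neuron” whose connection strengths are adjusted by experience rather than being preprogrammed. The task (the logical OR function), the learning rate, and the other details are my own illustrative choices, not anything taken from Moravec’s book.

```python
# A single artificial "neuron": its connection strengths (weights) are not
# preprogrammed, but are nudged, example by example, toward correct behavior.
# Task and parameters are illustrative only, not drawn from Moravec.

def train_neuron(examples, epochs=50, rate=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
            error = target - output          # the learning signal: "experience"
            weights[0] += rate * error * x1  # strengthen or weaken each connection
            weights[1] += rate * error * x2
            bias += rate * error
    return weights, bias

# Teach the neuron the logical OR function purely from examples.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
weights, bias = train_neuron(examples)
for (x1, x2), target in examples:
    output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
    print((x1, x2), "->", output, "target:", target)
```

Nothing here is programmed to compute OR; the behavior emerges from the gradual adjustment of connection strengths, and that is the essence of the bottom-up approach.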

Moravec’s account of computer viruses is interesting and provocative, as are his comments concerning the possibility of mutations and the natural selection of such “life” as can exist in computer information systems. Computer viruses are “viruses of computer program information,” inserted by malevolent programmers, which pass from one computer to another as soon as the “infected” program is copied by the second computer. These viruses are sometimes preprogrammed to wipe out the entire information on all the infected computers’ hard discs on, say, some preordained day such as Friday the thirteenth. Virus “antibodies” have also been constructed in order to protect uninfected computers or cure infected ones.
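The mechanism of infection just described can be captured in a toy simulation: a collection of machines among which programs are copied at random, the infection traveling along with every copy made from an infected source. The setup below (the machine count and the copying pattern) is entirely invented for illustration; it models only the spread, not any actual virus.

```python
import random

# Toy model of virus spread among "machines": copying a program from an
# infected machine carries the infection along with it. The numbers and the
# random copying pattern are invented purely for illustration.

random.seed(1)
machines = {name: False for name in range(20)}   # False = clean, True = infected
machines[0] = True                               # one infected machine to start

for week in range(15):
    source, dest = random.sample(sorted(machines), 2)
    if machines[source]:                         # an infected program is copied...
        machines[dest] = True                    # ...and the destination is infected
    print(f"week {week}: {sum(machines.values())} of {len(machines)} infected")
```

Even this crude model shows infection accumulating through nothing more than ordinary copying; a virus “antibody” would amount to an extra rule that resets some machines to clean.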

Already there is a kind of “arms race” developing between the programmers. But the situation could get even more out of hand if mutations develop so that a kind of internal natural selection could arise, outside normal human control. Moravec considers that “the fun has just begun,” but I for one find alarming the equanimity with which he views a future taken over by information systems over which all human control is ultimately to be lost. In the context of biology, natural selection has become a stable and effective, albeit ruthless, means of controlling the destiny of the creatures of Earth—at least up until the emergence of human technology—but biological natural selection has had thousands of millions of years of practice. It remains to be seen whether much trust can be placed in the efficacy of a corresponding process envisaged to be taking place in computer information systems. It is almost inconceivable to me that such a putative evolutionary process could ever be trusted.

My main complaint about Moravec’s book, however, is that its author does not make clear which parts of it are established scientific fact and which parts are wild speculation. Indeed, some of his speculations are quite extraordinary and go enormously beyond anything that can be scientifically justified. He extrapolates from the strong AI view to infer that one’s personal identity can be transferred to a computer system that has merely acquired masses of information about oneself. He infers that it would even be possible to reconstruct long-dead historical figures, even rekindling their very awareness, by amassing all the information about them that would still be available. He argues that, just as present-day computers can predict the future behavior of planets and spacecraft, the computers of the future would be able to “retrodict” every detail of the past by evolving the equations of physics in the reverse direction. But this is a total misconception: no conceivable increase in computer power would make it possible, nor could the present physical state ever be known to anything approaching the accuracy required. Yet he claims that “it might be fun” for the computers of the future “to resurrect all the past inhabitants of the earth this way and to give them an opportunity to share with us in the (ephemeral) immortality of transplanted minds.”
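Why retrodiction fails can be seen in miniature with any chaotic system: an uncertainty in the present state, however tiny, is amplified exponentially as the equations are evolved, whether forward or backward in time. The snippet below uses the logistic map as a toy stand-in for physical dynamics (my own choice of illustration, nothing from Moravec); two present states agreeing to one part in a billion soon disagree completely.

```python
# Two states of a chaotic system agreeing to one part in a billion: after a
# few dozen steps they disagree utterly, so neither prediction nor
# "retrodiction" of detail is possible. The logistic map is a toy stand-in
# for real physical dynamics, chosen purely for illustration.

def step(x):
    return 4.0 * x * (1.0 - x)          # the chaotic logistic map

x_true, x_measured = 0.3, 0.3 + 1e-9    # "true" state vs. best measurement

for n in range(1, 41):
    x_true, x_measured = step(x_true), step(x_measured)
    if n % 10 == 0:
        print(f"step {n}: discrepancy = {abs(x_true - x_measured):.9f}")
```

The discrepancy roughly doubles at every step, so each additional digit of measurement accuracy buys only a few more steps of reliable history.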

Moravec seems to regard the problem of conjuring up consciousness by a computation as a minor one, and he takes the view that “the sensory and motor portions of the brain may embody one million times the effective computational power of the conscious part of our minds.” But it is not at all clear to me how he would purport to explain, in strong AI terms, how that tiny millionth part can evoke conscious awareness, while the enormous remainder of our mental computation remains unconscious.

In one of his wilder fantasies, he imagines that the “inhabitants” of a certain mathematical computer game called “Life” (invented by the mathematician John Horton Conway) could somehow become self-aware and actually converse with their human programmer, ultimately to “escape” from their world and join our own. Yet this fantasy is not even the wildest of those considered by Moravec. In another (following up on a fictional suggestion by John Gribbin) he envisages a “doomsday computer” that works by destroying enormous numbers of universes in its wake (or, in a milder form, merely an equivalent number of copies of the computer’s operator!). For this Moravec employs (rather inaccurately) one of the more far-out (though often considered) interpretations of quantum mechanics, known as the “many-worlds” view, in which all the alternative universe possibilities are supposed to coexist in one vast superposition. One of Moravec’s considerably more sober fantasies is to envisage that human equivalence would be “bested more than a millionfold” by atomic-scale machinery. He then imagines that machines constructed from the material of a neutron star (a collapsed stellar remnant whose enormous density would be such that a ping-pong-ball-sized part of its substance might weigh as much as Mars’s moon Deimos) could have a million million million million million times the power of the human mind.
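For readers who have not met Conway’s game: “Life” unfolds on an unbounded grid under fixed local rules, a live cell surviving when it has two or three live neighbors and a dead cell coming alive with exactly three. The complete “physics” of the world whose inhabitants Moravec imagines awakening fits in a few lines (a standard implementation, not anything from his book):

```python
from collections import Counter

def next_generation(live_cells):
    """Advance Conway's Game of Life one step; live_cells is a set of (x, y)."""
    # Count the live neighbors of every cell adjacent to a live cell.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly three neighbors; survival on two or three.
    return {
        cell for cell, n in neighbor_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "glider": the simplest pattern that travels across the grid indefinitely.
cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(f"generation {generation}: {sorted(cells)}")
    cells = next_generation(cells)
```

That so simple a rule supports gliders, self-copying patterns, and even universal computation is what lends Moravec’s fantasy its superficial plausibility; whether such patterns could ever be aware is, of course, precisely the point at issue.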

Many of the speculations in the book are, in my opinion, clearly science fiction; and I should have had no quarrel with them if they had been simply presented as such. Indeed, in my view, science fiction of this kind can have a valuable scientific purpose in stimulating one’s imagination and in opening up possibilities that one may not have thought of before. Moreover, science fiction can be fun to read. But in a work that is presented as providing a serious and authoritative picture of what the future might hold, it is important to make clear where the science ends and the science fiction begins. Also, disputed assumptions on which the various conclusions are based should be clearly stated, and the pros and cons weighed. It is unfortunate that this short book does not at all live up to such basic standards. If this is borne in mind, the book can certainly be read with a good deal of interest, profit, and enjoyment—but it should also be read with a good deal of skepticism.
