In response to:

What Can't the Computer Do? from the March 15, 1990 issue

To the Editors:

In his review of Penrose’s The Emperor’s New Mind [NYR, March 15], John Maynard Smith expresses some doubt about whether the views he attributes to me are in fact mine. His doubts are justified. I do not hold the view that a computer “would not be conscious, because it was made of transistors and not of neurons.” That is not my view at all. My position, rather, is this: I take it as a fact that certain quite specific, though still largely unknown, neurobiological processes in human and some animal brains cause consciousness. But from the fact that brains cause consciousness we can derive trivially that any other system capable of causing consciousness would have to have the relevant causal powers, at least equivalent to those of brains. If brains do it causally, then any other system that does it causally will have to share with brains the power to do it causally. I hope that sounds tautological, because it is. Some other system might use a different chemistry, a different medium altogether; but any such medium has to be able to do what brains do. (Compare: airplanes don’t have to be made of feathers in order to fly, but they do have to share with birds the causal capacity to overcome the force of gravity in the earth’s atmosphere.)

This obvious result becomes interesting when tied to another result, equally obvious. Just implementing a computer program is not sufficient by itself to guarantee the presence of mental contents, conscious or otherwise, because the program is defined entirely in terms of abstract symbol manipulation, and such abstract syntactical objects do not guarantee the presence of mental phenomena with semantic content. Syntax is not by itself the same as, nor is it sufficient for, semantics. This is shown by my Chinese Room Argument. A “computer,” me for example, might follow the rules of a program for “understanding” questions in Chinese and giving the right answers in Chinese and still not understand a word of Chinese. Now these two results, each in its way obvious, lead to some other interesting conclusions when they are conjoined. First, it follows that the way that brains produce conscious mental contents cannot be solely in virtue of implementing a formal computer program. And second, no artifact could have specific mental contents solely in virtue of implementing a program; it would also have to have the relevant causal powers equivalent to those of the brain. (I am summarizing here a much longer argument; for more details see, for example, my Minds, Brains and Science, Harvard University Press, 1984.) The result is not that “a computer can’t be conscious,” nor is it that “only systems made of neurons can be conscious,” but rather that implementing a computer program is not by itself sufficient to guarantee the presence of mental contents, conscious or otherwise.
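
To make the symbol-manipulation point concrete, here is a minimal, purely illustrative sketch (the rule table, the Chinese phrases, and the function are all invented for the occasion): the program matches and copies uninterpreted strings, and nothing in it stands for what the symbols mean.

```python
# A toy "Chinese Room": answers are produced by blind symbol matching.
# The rule book below is a tiny invented lookup table; a real rule book
# would be enormous, but the principle is the same.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I am fine, thank you."
    "今天天气怎么样？": "今天天气很好。",  # "How is the weather today?" -> "The weather is fine today."
}

def chinese_room(question: str) -> str:
    """Copy out whatever string the rule book pairs with the input string.

    The function only compares and copies uninterpreted symbols; nothing in
    it represents what the symbols mean."""
    return RULE_BOOK.get(question, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # prints a sensible Chinese answer with no understanding anywhere
```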

I think Maynard Smith is prevented from seeing these points because he holds a set of very deep but unstated assumptions about the relation of the mind to the body, about the nature of computation, and about the nature of consciousness. He is certainly not responsible for these views. They are part of our contemporary culture. Nonetheless, they are mistaken and I would like to state—all too briefly—what some of them are and why they are mistaken. For the sake of brevity, I will confine myself to half a dozen of these basic assumptions.

  1. He treats the question, “Is the brain a digital computer?” as if it were a simple factual question, as if we might simply discover that the processes in the brain are computational in the way that we have discovered that the heart is a pump or that green plants do photosynthesis. He even discusses at some length the question of which functions of the brain might be performed by analog computation and which by digital computation.

But I hope he would agree with me on reflection that the question is more complex than his discussion implies. The reason can be stated succinctly: In the standard textbook definition, “digital computation” does not name a set of physical processes in virtue of their physical features, but names a set of abstract formal symbolic processes which can go on in an indefinitely large range of physical media. But now we immediately face a difficulty. The question, “Is this a digital computer?” is not like the question, “Is this a pump?” or “Is this process photosynthesis?” but more like the question, “Is this a symbol?” or “Is this set of objects a set of symbols?” But being a symbol is not just a matter of physical properties; rather, a physical object is a symbol only insofar as it is used or could be used by some agent as a symbol. Similarly, on the definition of computation as symbol manipulation, being a digital computer is not just a matter of physical properties; rather, something is a digital computer if it can be used to compute with or can be described in computational terms. But then the problem is that anything that meets certain minimal formal conditions can be described as a digital computer, because just about anything can be assigned a symbolic interpretation. We can describe its operation using the famous 0’s and 1’s. Thus, molecules, solar systems, and beer cans are all in a trivial sense digital computers, because they can all be described as implementations of computer programs. Oddly enough, a similar point about analog computers is already implicit in Maynard Smith’s discussion. His examples of analog computers are match sticks and bits of string. But if match sticks and bits of string can be analog computers, then just about anything can be an analog computer. Similarly, just about anything can be a digital computer.
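
A minimal sketch may make the observer-relativity of the 0’s and 1’s concrete (the readings and the threshold below are invented; the point is only that the symbolic interpretation is assigned by us, not found in the physics):

```python
# Take an arbitrary record of some physical system (here, made-up readings
# from anything you like: a molecule, a solar system, a row of beer cans)
# and simply assign it a reading in 0's and 1's. Under that assignment the
# system can be "described as" implementing a trivial computation.

readings = [0.2, 1.7, 0.4, 2.1, 1.9, 0.1]  # hypothetical measurements of some physical process

def as_bits(values, threshold=1.0):
    """Call anything above the threshold '1' and the rest '0'.

    The interpretation lives entirely in this assignment, not in the physics."""
    return "".join("1" if v > threshold else "0" for v in values)

print(as_bits(readings))  # "010110": the "program" the system "runs" under our description
```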

But this gives us a dilemma where our original question is concerned. Are brains digital computers? If this asks whether brains can be described in computational terms, the answer is in a trivial sense yes, because just about anything at all can be described in computational terms. But that was not the question we wanted to ask: We wanted to ask whether brains are somehow intrinsically digital computers, and that question has not so far been given a clear sense.

I do not say that we could not succeed in giving it a clear sense, but rather that we have not so far done so. This point is different from the Chinese Room Argument, but it is a natural extension of it: the Chinese Room Argument showed that semantics is not intrinsic to syntax; the point I am making now is that syntax is not intrinsic to physics.

Is there any way out of this dilemma? Yes there is and it is a route that is standardly taken in artificial intelligence and cognitive science as well as in Maynard Smith’s review. But it is out of the frying pan and into the fire.

  2. The standard way to avoid the dilemma is to describe the brain as if its processes were used for computation, in the same sense that, for example, regular commercial computers are used for computation. This works fine for commercial computers because such computers are precisely designed, programmed, and bought to be used for computation by some outside agent; but to treat the brain this way is to commit a homunculus fallacy. (The homunculus fallacy is the fallacy of explaining our mental processes by tacitly postulating a “little man,” a homunculus, in our heads who is having those thought processes for us.) On several occasions (e.g., on p. 22) Maynard Smith speaks of “representations” as being “used” for computation. But think about that for a minute. Who is doing the using? And remember, something is a representation only to the extent that some agent uses it as a representation. So if we even consider the possibility that, as he puts it, “the representation is used in analog computing,” we have to attribute some mental capacity to the brain that stands outside the representation and could use it as a representation. This is the familiar form of the homunculus fallacy.
  3. The combination of 1 and 2 leads him to misdescribe the actual biology of cognition.

If you think that there is a little man in the head using the brain to compute, you will say things like the following: “But the problem of vision is to explain how this [2-D retinal] image is translated into ‘there is a car approaching me on the wrong side of the road.’ ” I think if Maynard Smith reflects on this he will agree with me that as it stands this account makes little biological sense. Biologically speaking, what actually happens is something like this: A series of photons strike the photoreceptor cells in my retina. This signal is then processed through four other layers of the retina and passes through the optic nerve to the lateral geniculate nucleus. From the LGN the signal goes to the striate cortex, zone 17, and then through the rest of the visual cortex, through zones 18 and 19. Eventually this complex electrochemical process causes a concrete conscious visual experience. As a piece of biology, the whole process is as specific as, say, digestion, and like digestion it is a specific causal chain of events that results in concrete biological events; in this case it ends in a concrete mental (hence physical) event of my seeing this very scene. Someone, I or someone else, might describe the content of the visual experience as “There is a car approaching me on the wrong side of the road”; but the concrete biological visual reality is not that of a bunch of words, it is an actual conscious experience. And there is literally no “translation” going on, nor is there literally any homunculus computing over the visual image.

It is always possible to describe vision or other mental processes in abstract computational information-processing terms, as one can describe any process in these terms, biological or otherwise. Digestion can also be described as a computational information-processing sequence, and there is nothing harmful about these descriptions provided you don’t confuse the computational model with the real thing. Nobody supposes that the question, “Is the stomach a digital computer?” is the right question to ask, even though the stomach can be described computationally and thus can be simulated on a digital computer. A computational model of vision will indeed “translate” information about a two-dimensional visual array into the sentence, “There is a car approaching me on the wrong side of the road.” But that gives us a model of a visual process, not a visual process.

No one confuses model and reality where digestion is concerned; why does anyone make the confusion where consciousness is concerned? Part of the reason is this:

  4. Maynard Smith, though not a dualist, is still making use of certain dualistic categories. He has difficulty in seeing how the subjective inner mental state of consciousness can be part of the ordinary biological world of digestion, photosynthesis, the secretion of bile, and mitosis. It is, I believe, his difficulty in seeing that consciousness is an ordinary higher-level feature of the brain (in the same sense that the solidity of this table is a higher-level feature of the table) that leads him to say such things as “What I find most puzzling about Penrose’s position is that he wants consciousness to ‘do something.’ He writes as if consciousness were an additional cause of thought, or of behavior, over and above the physical events in the brain.” But how are we to take this puzzlement except as an expression of the traditional dualistic assumption that “consciousness” and “physical events” name mutually exclusive categories? Once you see that consciousness, i.e., the subjective, inner mental experience of consciousness, is indeed a higher-level physical feature of the brain—and only the dualist assumption that the physical and the “mental” are mutually exclusive prevents us from seeing that—then there is no philosophical puzzlement about how consciousness can function causally. Of course there are plenty of factual problems about how it works. We have only scratched the surface.

Now, because of his use of these dualistic categories, he says things like the following:

  5. “It seems more plausible that any ‘computer’ that is formally similar to a brain will be conscious” (p. 21).

It is important to make clear the ways in which this is not really plausible. Where digital computation is concerned, formal similarity is defined in terms of computational similarity. Two systems are formally similar if they implement the same program, i.e., if they exemplify the same patterns of symbols. Formal similarity is thus totally different from physical or causal similarity, since the same program can be implemented in all sorts of physically and causally different media. Now if we assume that he is not confusing “formal” with “causal,” then the difficulty is this. We can make a “computer” out of any material you like that will be formally similar to the brain to any degree you like. We can make a computer out of old beer cans rigged together with wires and powered by windmills, or out of millions of water pipes with men stationed at the connections to turn the pipes on and off to match the patterns in the brain.
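
A small sketch may make “formal similarity” concrete (the toy program below, a two-state parity checker, is invented for the illustration): one abstract program is realized in two deliberately different ways, and it is only the shared pattern of symbol transitions, not anything physical or causal, that makes them “the same.”

```python
# One abstract program (a two-state parity checker, given as a transition
# table) realized in two deliberately different ways. The realizations
# differ in their structure; the pattern of symbol transitions they
# exemplify is the same, and that shared pattern is all that "implementing
# the same program" requires.

PROGRAM = {("even", "1"): "odd", ("even", "0"): "even",
           ("odd", "1"): "even", ("odd", "0"): "odd"}

def run_with_table(bits: str) -> str:
    """Realize the program by looking transitions up in the table."""
    state = "even"
    for b in bits:
        state = PROGRAM[(state, b)]
    return state

def run_with_branches(bits: str) -> str:
    """Realize the very same program with explicit branching instead."""
    state = "even"
    for b in bits:
        if state == "even":
            state = "odd" if b == "1" else "even"
        else:
            state = "even" if b == "1" else "odd"
    return state

print(run_with_table("10110"), run_with_branches("10110"))  # formally identical: both print "odd"
```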

Now is it really plausible to think that such systems must have conscious states, indeed exactly the same conscious states as our brains? Well, let’s try it out with an example. We think we know to some extent how cocaine acts on the brain. It impedes the capacity of certain synaptic receptors to reabsorb a certain neurotransmitter, norepinephrine. So let’s get the same pattern in our beer can computer. We arrange collections of beer cans to match the patterns of the synapses and we bombard the beer cans with ping pong balls to match the pattern of activity of the norepinephrine. We can do this to any degree of accuracy you like. But of course, no neurobiologist seriously believes that the beer can system must therefore be literally consciously feeling a cocaine high. And notice that the thesis he finds plausible is not that for all we know the beer cans might be conscious—who knows what it feels like to be a system of beer cans?—but rather that the system must be conscious, because that is all there is to consciousness: having a certain pattern in a computer.

Notice that no one would make this mistake about, e.g., digestion. No one thinks that if only we got the right pattern in the computer to match the pattern in the stomach, we could get the computer to digest beer and pizza. Only a residual dualism leads us to make this mistake about consciousness.

  6. Finally, also in common with many, Maynard Smith implies that the successful performance of AI programs is somehow evidence of psychological significance. Indeed he begins his article with the claim that the new chess-playing programs are “a milestone in the history of artificial intelligence.” No, they mark a milestone in the application of recent hardware improvements, but the key to the success of these programs is that they use what is known correctly, though metaphorically, as “brute force.” The programs can quickly scan millions of possible positions, unlike any human chess player. Such programs are about as relevant to human psychology as the fact that any pocket calculator can calculate better and faster than any human mathematician.
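
To make “brute force” concrete, here is a minimal sketch of exhaustive game-tree search (the toy game below, take one or two counters and the player who takes the last one wins, is invented simply to keep the example self-contained; it is not Deep Thought’s code): the machine plays well by enumerating every line of play, not by anything resembling human insight.

```python
# Exhaustive search of a toy game tree: from each position, try every legal
# move, recurse on the opponent's replies, and keep the move that forces a
# win if one exists. Chess programs of the "brute force" kind do the same
# thing on a vastly larger tree, with a depth cutoff and a scoring function.

def best_move(pile: int):
    """Return (score, move) for the player facing `pile` counters.

    score is +1 if that player can force a win, -1 otherwise."""
    best = (-1, 1)                            # assume a loss until a better line is found
    for take in (1, 2):                       # the only legal moves in the toy game
        if take > pile:
            continue
        if take == pile:                      # taking the last counter wins immediately
            return (1, take)
        opponent_score, _ = best_move(pile - take)
        score = -opponent_score               # what is good for the opponent is bad for us
        if score > best[0]:
            best = (score, take)
    return best

print(best_move(10))  # (1, 1): a forced win found purely by enumeration
```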

It seems to me Maynard Smith himself takes essentially the same view later in the article when he points out that the way the Deep Thought program works is really unlike the way human chess players play. But the implication of his own convincing observations would seem to be that successful performance by itself is no evidence of psychological relevance. Motorcycles can outrun any sprinter and steam shovels outdig any shovel man; similarly, modern circuitry, properly programmed, can compute faster than any chess player. But so what? I am not belittling the technological achievement, but the technological achievement in each case is not by itself any help in understanding how the human system works.

There isn’t any doubt that, as far as human performance is concerned, we should in principle be able to build computers that exceed human beings in many kinds of activities. In many respects, the human brain is after all a fairly primitive device evolved for coping with hunter-gatherer environments. But one of its most interesting features is its capacity to cause and sustain all the varieties of human consciousness; and about that, we understand rather little. And a computer simulation of consciousness is no more the real thing than a computer simulation of digestion is an actual digestive process or a computer simulation of a rainstorm is a solution to the drought. Simulation is not duplication.

So what is the upshot of all this? We have to stop thinking that the question “Is the brain a digital computer?” is the right question to ask. If that question asks whether brain processes can be described in computational terms, then of course the brain is a digital computer, and so are the stomach, liver, heart, and big toe. And there is no way that we can avoid this conclusion by supposing that the brain differs from these other systems in being intrinsically a digital computer, because so far we have given no sense to that notion. Our deepest confusion here is the traditional philosophical confusion about “the mind and the body.” Once we get out of that confusion, once we escape the clutches of two thousand years of dualism, we can see that consciousness is a biological phenomenon like any other, and ultimately our understanding of it is most likely to come through biological investigation.

John R. Searle
University of California
Berkeley, California

John Maynard Smith replies:

In replying to Searle, it may help to start by listing some of the things I agree with him about:

i) I agree that “certain…neurobiological processes in human and some animal brains cause consciousness,” and that “our understanding of [consciousness] is most likely to come through biological investigation.”

ii) A digital computer can be made of many different kinds of units—valves, transistors, neurons, jets of water. What matters is the formal rules governing the activities of the units, and not what the units are made of.

iii) I agree that “successful performance by itself is no evidence of psychological relevance.” This was the point I tried to make by mentioning the chess program Deep Thought. More precisely, I agree that successful performance is no proof of psychological relevance, but it may be a necessary pre-condition: I would not be easily persuaded of the psychological relevance of a program that did not perform successfully.

Despite these agreements, however, I cannot follow Searle all the way. First, I do not see computation as he does. He says that “something is a digital computer if it can be used to compute with or can be described in computational terms.” I disagree with both parts of this sentence. An analog computer (for example, an adjustable electrical circuit that can be used to predict the behavior of mechanical systems) can be used to compute with, but it is not a digital computer. I spent some time in my review on the difference between analog and digital computing because the distinction seemed to me crucial to Penrose’s argument. Further, it is not true that anything that can be described in computational terms is a computer, digital or otherwise. Anything that obeys physical laws can be simulated on a computer—albeit with limitations on accuracy and speed—but that does not make it a computer. A computer can be used to simulate the weather, or an airplane landing, or digestion, but you cannot use the digestive tract to simulate airplanes or the weather.

My real difficulty, however, is with consciousness, and in particular with the idea that “consciousness can function causally.” Suppose that I put my hand in a flame, I feel pain, and I withdraw my hand. I agree that my pain is caused by neurobiological processes in my brain. But I think that the withdrawal of my hand is also caused by neurobiological processes, although not necessarily the ones that cause my pain. I do not think it would be sensible to say that the pain caused my hand to withdraw, if that implies that the pain is a cause additional to and independent of the physiological one, as the phrase “consciousness can function causally” seems to imply. If all that is meant is that the conscious feeling, pain, is a necessary concomitant of certain specific neurophysiological events, I would be happy to agree. I admit that, when discussing more complex behavior, one often identifies conscious ideas as causes: for example, “I reviewed Penrose’s book because my bank balance was in the red.” This is an entirely appropriate way of talking, if only because we are incapable of describing the physiological events associated with my knowledge of my bank balance. But the sentence in quotes does not imply that consciousness is “doing something” independently of the chain of physiological causation.

I must emphasize that I regard the view that consciousness is a necessary concomitant of certain types of neurophysiological events, and cannot be a cause of actions independent of or additional to physiology, as a hypothesis that seems to me plausible, but not necessarily true. However, if Searle thinks that brains cause consciousness, it is a hypothesis that he is almost committed to: I do not think he can hold the alternative view, that brains cause consciousness, and consciousness then causes physiological events (actions) that would not otherwise happen. We are left with the following difficult question: If consciousness is caused by events in the brain, what kinds of events are required, and could they happen anywhere else but in brains? Searle says that he does not hold the view that “a computer would not be conscious because it was made of transistors and not neurons”—a view I wrongly ascribed to him. If this is not what he thinks, what view does he hold? He is highly critical of the view that consciousness depends, not on the nature of the units, neurons or transistors, but on the way in which the units are arranged and on their behavior. He may be right to dismiss this idea, but if consciousness depends neither on the nature of the units nor on the way in which they are arranged, what does it depend on? As an evolutionary biologist, I think that once no organisms were conscious, and that now some are. What happened?

The last point concerns whether I think that there is a little man in my head using my brain to do computations. I agree that this is a trap it is easy to fall into, and maybe I sometimes do so. But I do not think I was guilty of this error when I wrote, “the problem of vision is to explain how this image is translated into ‘there is a car approaching on the wrong side of the road.’ ” Searle objects that the outcome of what happens in the brain is “me seeing this very scene,” and not a set of words, although “I might describe the content of my visual experience as seeing a car on the wrong side of the road,” and he adds that “there is literally no translation going on.” I think this is nonsense. Imagine that you are trying to design a computer that can drive a car in traffic—a task that is still beyond us. One thing you would have to do is to write a program that would examine the input on a screen, and recognize the small but important subset of patterns that correspond to the real-world fact of a car approaching on the wrong side of the road. This would be difficult, but, if successful, it would be trivial to ensure that, when this subset was recognized, the brakes were slammed on. No words need to be spoken, and no homunculus would be present to read them. But the process whereby a 2-D image on the screen was converted into a specific conclusion (which could be verbalized as a car on the wrong side) would be one of translation. Translation does not require words, as every geneticist knows. I apologize for not writing “the problem in vision is to explain how this image is translated into the visual experience that would be verbalized as ‘there is a car approaching on the wrong side of the road.’ ” But I do not think that this, or any of the other things I said in my review and that Searle objects to, require that I assume there is a little man using my brain as a computer.
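
A schematic sketch of the kind of program Maynard Smith has in mind may help (the “detector” below is a trivial invented stand-in, not a real vision system): a two-dimensional array of pixel values goes in, a braking decision comes out, and at no point is a sentence produced or read by anyone.

```python
# A toy version of the driving pipeline: an image (a 2-D list of invented
# pixel brightness values) is mapped to an action. The "recognition" step is
# a deliberately crude placeholder; the point is only that the whole
# translation from image to conclusion involves no words and no homunculus.

def oncoming_car_detected(image):
    """Hypothetical pattern test: here, simply 'bright region in the left half'."""
    left_half = [row[: len(row) // 2] for row in image]
    return sum(sum(row) for row in left_half) > 100

def drive_step(image):
    # If the pattern is recognized, slam on the brakes; otherwise carry on.
    return "brake" if oncoming_car_detected(image) else "continue"

frame = [[0, 0, 0, 0],
         [90, 80, 0, 0],
         [70, 60, 0, 0]]   # invented pixel values standing in for a camera frame
print(drive_step(frame))   # -> "brake"
```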

Several correspondents have pointed out that I misused the term NP. It does not mean “non-polynomial” but “nondeterministic polynomial.” I apologize for my mistake, but I do not think it alters the point I was trying to make, which is that there are problems that can be solved on a digital computer, but only after an excessive time.
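
A minimal sketch of the “excessive time” point (the brute-force subset-sum search below is a generic textbook example, not anything discussed in the review): the program always finds an answer if one exists, but the number of subsets it may have to examine doubles with every additional item.

```python
# Brute-force subset sum: try every subset of the list until one adds up to
# the target. Correct but exponential; with n items there are 2**n subsets,
# so the running time quickly becomes "excessive" as n grows.

from itertools import combinations

def subset_with_sum(numbers, target):
    """Return some subset of `numbers` summing to `target`, or None."""
    for size in range(len(numbers) + 1):
        for combo in combinations(numbers, size):
            if sum(combo) == target:
                return combo
    return None

print(subset_with_sum([3, 9, 8, 4, 5, 7], 15))  # (8, 7), found after checking many subsets
```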

This Issue

June 14, 1990