Some years ago the philosopher John Searle badly shook the little world of artificial intelligence by claiming and proving (so he said) that there was no such thing, i.e., that machines, at least as currently conceived, cannot be said to be able to think. His argument, sometimes called the Chinese Room Thought Experiment, gained wide circulation with an article in The New York Review of Books,1 and even penetrated the unlikely columns of The Economist.

The advocates of artificial intelligence were not slow to react, and arguments were thrown back and forth. Searle’s original article and his answers to his critics have been reprinted in The Mind’s I, edited by Douglas Hofstadter and Daniel Dennett,2 and more recently in Readings in Cognitive Science, edited by Alan Collins and Edward Smith.3

For the innocent few left untouched by this storm, let us recapitulate briefly what it was all about.

The question was first raised by Alan Turing in 1950, when the modern computer was still in its infancy, and seemed purely theoretical at the time: At what point could we say that a computer is intelligent, i.e., has intelligence, can think? Turing’s idea was that we should judge a machine’s intelligence as we would judge a student’s, that is, by asking it rapid-fire, unexpected questions, and then seeing how it copes. Turing went on to propose some very amusing sample “conversations” we might have with the machine, some of which are reproduced in Hofstadter’s Gödel, Escher, Bach and in Sagan’s Broca’s Brain. If in the end the machine’s answers are those of an intelligent student, so much so that there is no way for us to tell that it is not indeed such a student, hidden inside the machine, who is doing the answering, why then (having first checked that this is not the case), we may as well admit that the machine is intelligent.

This test is called the Turing test. Of course, no machine capable of passing it has ever been constructed (though we are getting closer), and whether one is even theoretically feasible has been hotly debated. Still, this admittedly very high standard of intelligence for a machine is the one generally accepted in theoretical discussions.
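To fix the shape of the test in the mind, here is a minimal sketch in Python. It is my own illustration, not anything Turing wrote: the names (imitation_game, judge, and so on) are invented for the occasion, the specimen question and answer merely echo the sort of exchange Turing imagined, and the "machine" here is a trivial stand-in.

    import random

    def imitation_game(questions, human, machine, judge):
        # human and machine each map a question to an answer; judge reads
        # two anonymous transcripts and names the one it takes to be the
        # machine.  The machine passes if the judge guesses wrongly.
        respondents = [("human", human), ("machine", machine)]
        random.shuffle(respondents)            # hide which respondent is which
        labels = dict(zip("AB", respondents))

        transcripts = {
            label: [(q, answer(q)) for q in questions]
            for label, (_, answer) in labels.items()
        }
        guess = judge(transcripts)             # "A" or "B"
        truth = next(label for label, (kind, _) in labels.items()
                     if kind == "machine")
        return guess != truth

    # Trivial stand-ins, only so that the sketch runs:
    questions = ["Please write me a sonnet on the subject of the Forth Bridge."]
    human = lambda q: "Count me out on this one. I never could write poetry."
    machine = lambda q: "Count me out on this one. I never could write poetry."
    guessing_judge = lambda transcripts: random.choice(["A", "B"])

    print(imitation_game(questions, human, machine, guessing_judge))

The point of the exercise, of course, is that everything hangs on the judge: the machine is credited with intelligence on the strength of its transcripts alone, exactly as a student would be.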

Searle’s objection to this test was based on the following argument: Suppose we placed a non-Chinese speaker in a closed room together with a list of Chinese characters and an instruction book explaining in detail the rules according to which strings (sequences) of characters may be formed, but without giving the meaning of the characters. Suppose now that we pass to this man through a slot in the wall a sequence of Chinese characters which he is to complete by following the rules he has learned. We may call the sequence passed to him from the outside a “question” and the completion an “answer.” We can imagine the man becoming so adept at this game that no one, not even a native Chinese speaker, would be able to tell from his answers that he himself is not Chinese. But in fact not only is he not Chinese, he does not even understand Chinese, far less think in it. Now, the argument goes on, a machine, even one that passes the Turing test, is just like this man, in that it does nothing more than follow the rules given in an instruction book (the program). It understands neither the meaning of the questions given to it nor that of its own answers, and thus cannot be said to be thinking.
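It is worth pausing over how little the man actually has to do. The following toy sketch, again in Python and again my own crude illustration rather than Searle’s, makes the point: a real instruction book would generate answers by rule rather than look them up in a tiny table, but the essential feature is the same, namely that strings of characters go in and come out without the procedure ever touching their meaning.

    # The table pairs incoming character strings with outgoing ones.  To the
    # procedure they are opaque shapes; the English glosses live only in these
    # comments, never in the lookup itself.
    RULE_BOOK = {
        "你好吗？": "我很好。",          # "How are you?" -> "I am fine."
        "你是中国人吗？": "是的。",      # "Are you Chinese?" -> "Yes."
    }

    def man_in_the_room(question: str) -> str:
        # Complete the incoming sequence purely by matching its shape against
        # the rules; nothing here consults, or could consult, what it means.
        return RULE_BOOK.get(question, "？")

    print(man_in_the_room("你好吗？"))   # an "answer" produced without understanding

The man, like this procedure, gives back well-formed Chinese while having no access whatever to what the symbols mean.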

This brilliant argument has provoked a host of answers, which were in turn answered by Searle. The most favored answer seems to be that while the man cannot be said to be thinking, the room in its entirety (that is, man plus instruction books) is doing just that.

Let us see if we cannot follow another tack.

First, let us remark that Searle assumes that a machine capable of passing the Turing test can be constructed, for his man simulating a native Chinese speaker is exactly that. This may not be realistic, but it is a legitimate assumption for his purposes, because if even such a machine cannot be said to be thinking, then neither can the much more primitive machines existing now and in the foreseeable future. The assumption that such a machine is conceivable thus only strengthens the argument.

Now we come to the crux of the question. How do we know our man is thinking in, say, English and not in Chinese? The obvious answer is that we assumed this at the beginning of the argument. This is basically Searle’s position. But this assumption, so reasonable on the face of it, so easy to make, is in fact a momentous one, as we see from what it implies for the case of the computer. Assuming that our man does not understand Chinese translates into assuming that our computer does not understand the meaning of its program. This may be so, but this was what the argument was supposed to prove! The whole point of Turing’s idea was that we should assume no such thing—that comprehension (or intelligence, or intentionality, or consciousness) should be judged from the outside, by objective tests. What we have done is to assume at the outset that our machine is not intelligent. This makes it easy to prove that it isn’t, and impossible to prove that it is. Technically this is called Begging the Question.

The other possibility is simply to ask the man if he understands Chinese, and of course he would say that he does not, if asked in English. If asked in Chinese, he would say that he does (remember, the whole point of the game is to simulate a native-born speaker, and such a speaker would certainly say that he understands Chinese). How are we to decide which is his true language? The answer is that we cannot. Replace the “Chinese room” by a computer, and the situation becomes absurd: if we ask the computer in our language if it understands us, it will say that it does, since it is imitating a clever student. This corresponds to talking to the man in the closed room in Chinese; and we cannot communicate with a computer in a way that would correspond to our talking to the man in English.

So Searle’s argument does not really work, because we do not know and cannot truly know that our man does not really think in Chinese. But it is suggestive of the way out of the morass.

For let us look more closely at what is going on here. For the argument to work—that is, for the parallel with the computer to be plausible—our man must serve as a perfect transmission belt between the instruction books and the outside world. So actually we are not having a conversation with him, we are having a conversation with the instruction books, whose rules our man does no more than regurgitate. And the instruction books in turn were written by people, one of whom at least must have been a native speaker of Chinese (otherwise there would be a risk of an error creeping in, and we do not want that, because the whole point of the exercise is for our man to simulate a native-born Chinese speaker). Now the instruction books themselves are no more than a transmission belt between the authors and our man. So finally when the people outside the room think they are having a conversation with a native-born Chinese speaker, they are absolutely right!

Translated, this means that when we talk to a machine, we are not really talking to the hardware, or even to the software, but to the programmer. The fact that the programmer is not physically present is quite irrelevant. (I have a friend who, when I phone him and ask if I woke him up, says, “No, it was the telephone.” But of course he’s wrong.) When we read a book, we are not communicating with the paper or even with the letters, we are communicating with the author. Now it is true that this communication is limited, because I cannot throw questions at the author. But many authors are aware of this, and try to anticipate the readers’ questions and objections (Plato wrote in dialogue form for no other reason). And this is exactly what a programmer does, or tries to do.

Turing and his followers dissociate the text from the author, and claim that we can communicate with the one without paying attention to the other. This is tempting but not tenable. Texts and programs are not disembodied, spontaneously generated objects; they are a means of communication devised by human authors and programmers, and when we read books and use programs we are communicating with their authors and with no one else. We have seen what difficulties we get into if we refuse this evident truth. It is true that in a literary context the received message is always a bit different from the intended one (the point Borges was always making), owing to poetic haze and general misunderstandings, not to mention distance from the time of composition or revision, all of which tempt us to imagine that the text is in some way autonomous. This is much less the case, if at all, with a computer program, where even misunderstandings and impressions of imprecision have to be planned in advance. A program is a more perfect means of communication and has even less of an independent existence than a book. So if we accept that a book has no mind of its own, we cannot then endow a computer with intelligence and remain consistent.

So finally Searle is right! The machine is not and cannot be intelligent (or stupid)—only the programmer can be that. I think we really knew that all the time.

It may not be without interest to speculate on why Searle’s argument, in spite of its inadequacies, should have had such success. It seems to me that there is a psychological trick at work here, based on the natural tendency of the reader to identify with the protagonist of the story, and also on the choice of Chinese. Since most of us do not know Chinese either, we tend to identify with the man in the closed room, to see him from the inside as it were—exactly what we should guard against. This is what makes us feel that we “know” what he knows and what he doesn’t know. No doubt it is also the symbolic nature of Chinese writing that makes us swallow the truly mind-boggling idea that we could speak a language without understanding what we are saying.

There is one question left: Is the programmer himself a machine? Are we in the end communicating with a machine anyway? Better still, aren’t we machines too, so that all we’re having is machines communicating with each other? Searle says yes (I think), following, in that response, La Mettrie; but he believes that human machines are of a different order of complication and subtlety from any computer, not only any computer yet devised, but any conceivable computer constructed on principles known or envisaged today.

Here we get into deep waters—or is it science fiction?—and we are reminded of the question raised already in medieval times: If we managed to construct a perfectly human-like robot which can (so far as we can tell) feel pain, would destroying it be murder? The medieval answer was no, because the creature has no soul (only creatures created by God have souls, by definition). How would we answer the question today, since we no longer believe in souls?

It may thus be that the question “Is man himself a machine?” is not a scientific or even a philosophical question, but a theological one, i.e., one which may be answered according to taste.
Paris, France

John R. Searle replies:

The original Chinese room argument is so simple that its point tends to get lost in the dozens of interpretations, comments, and criticisms to which it has been subjected over the years. The point is this: a digital computer is a device which manipulates symbols, without any reference to their meaning or interpretation. Human beings, on the other hand, when they think, do something much more than that. A human mind has meaningful thoughts, feelings, and mental contents generally. Formal symbols by themselves can never be enough for mental contents, because the symbols, by definition, have no meaning (or interpretation, or semantics) except insofar as someone outside the system gives it to them.

You can see this point by imagining a monolingual English speaker who is locked in a room with a rule book for manipulating Chinese symbols according to computer rules. In principle he can pass the Turing test for understanding Chinese, because he can produce correct Chinese symbols in response to Chinese questions. But he does not understand a word of Chinese, because he does not know what any of the symbols mean. But if he does not understand Chinese solely by virtue of running the computer program for “understanding” Chinese, then neither does any other digital computer because no computer just by running the program has anything the man does not have.

To this, Motzkin makes two comments. First, the argument assumes that the man does not understand Chinese, and that is just “Begging the Question” because, “The whole point of Turing’s idea was that we should assume no such thing.” So perhaps the man does understand Chinese after all!

I blocked this move in the original presentation with a simple device: let the man be me. It is just a plain fact about me that I do not understand Chinese; and it is an equally plain fact that just having a bunch of rules for manipulating Chinese symbols in virtue of their shapes would not teach me the meaning of any of the symbols. The first person case demonstrates the inadequacy of the Turing test, because even if from the third person point of view my behavior is indistinguishable from that of a native Chinese speaker, even if everyone were convinced that I understood Chinese, that is just irrelevant to the plain fact that I don’t understand Chinese. And the reason is that the entire system, me, symbols, rule books, room, etc., contains only Chinese symbolic devices but no meanings.

Motzkin misses this point because he thinks that “the crux of the question” is “How do we know our man is thinking in, say, English and not in Chinese?” But the epistemic question, “How do we know?” is not the crux of the question. The crux is: “Under what conditions in fact does a system have understanding regardless of how, or if at all, observers outside the system can tell?” And the point of the example is to remind us of something we knew all along, namely, that just having a formal syntax is not sufficient for understanding; syntax is not semantics. And this is a logical point, not an epistemic one, which Motzkin does not address.

Inadvertently Motzkin has put his finger on the basic defect in the methodology of the Turing test. The test is very much part of the behaviorism of the era in which Turing wrote his article; and like all such forms of behaviorism it makes a fundamental confusion between the way we would verify the presence of a mental phenomenon from the third person point of view and the actual first person existence of the phenomenon. As interpreted by Motzkin, the Turing test confuses epistemology with ontology.

In Motzkin’s second argument, which I believe is the one he thinks more interesting, he finds himself in agreement with me that the computer does not think solely in virtue of manipulating symbols in a way that passes the Turing test. But he adds an extra twist. He says that when we “communicate” with the computer we are really communicating with the programmer. The symbols in the program are like the words in a book; and the programmer is like the author of the book. The machine implementing the program is just the medium through which we understand the programmer.

Well, some programs are like books in that you just use the program to look up information which the programmer has put in it. But, in general, there is an important difference between books and programs. The book is purely static; the symbols just lie there on the page. But the computer is wonderfully active, and that is what enables computers and not books to simulate intelligent behavior. Just by manipulating meaningless symbols the computer can prove theorems, win chess games, and form new hypotheses. There is no reason in principle, for example, why a chess-playing program could not beat its programmer at chess. Motzkin’s analogy is right in one respect, and that is that the symbols in the program like the words in a book have only the meanings that human beings have given them. Neither books nor computers can think just by having symbols. But it does not follow that when we use a computer the programmer is communicating with us like the author of a book. For most purposes it is useful to think of the computer as a tool like any other. I am now, for example, composing this reply using a program that simulates the behavior of a typewriter; but neither the machine nor the programmers are thereby communicating with me, any more than a typewriter or its maker communicates with me when I type.

Motzkin concludes his reply by suggesting that the question whether human beings are machines is a “theological” one, “which may be answered according to taste.” But given what we now know about how the world works there isn’t any question that we are machines: if by “machine” we mean a physical system capable of performing certain functions, then it is obvious that we are biological machines. It is possible, though unlikely, that we may be something more than biological machines, but we are at the very least such machines. And our mental processes are biological phenomena located in our brains, which in turn are biological organs. Computer programs, on the other hand, have no essential connection with any physical medium, biological or otherwise. They are purely formal and abstract sets of rules for manipulating symbols. And that is why the theory that says minds are computer programs is best understood as perhaps the last gasp of the dualist tradition that attempts to deny the biological character of mental phenomena.*

February 16, 1989