
The Myth of the Computer

The Mind’s I: Fantasies and Reflections on Self and Soul

composed and arranged by Douglas R. Hofstadter and Daniel C. Dennett
Basic Books, 501 pp., $16.95

Our ordinary ways of talking about ourselves and other people, of justifying our behavior and explaining that of others, express a certain conception of human life that is so close to us, so much a part of common sense that we can hardly see it. It is a conception according to which each person has (or perhaps is) a mind; the contents of the mind—beliefs, fears, hopes, motives, desires, etc.—cause and therefore explain our actions; and the continuity of our minds is the source of our individuality and identity as persons.

In the past couple of centuries we have also become convinced that this common-sense psychology is grounded in the brain, that these mental states and events are somehow, we are not quite sure how, going on in the neurophysiological processes of the brain. So this leaves us with two levels at which we can describe and explain human beings: a level of common-sense psychology, which seems to work well enough in practice but which is not scientific; and a level of neurophysiology, which is certainly scientific but which even the most advanced specialists know very little about.

But couldn’t there be a third possibility, a science of human beings that was not introspective common-sense psychology but was not neurophysiology either? This has been the great dream of the human sciences in the twentieth century, but so far all of the efforts have been, in varying degrees, failures. The most spectacular failure was behaviorism, but in my intellectual lifetime I have lived through exaggerated hopes placed on and disappointed by game theory, cybernetics, information theory, generative grammar, structuralism, and Freudian psychology, among others. Indeed it has become something of a scandal of twentieth-century intellectual life that we lack a science of the human mind and human behavior, that the methods of the natural sciences have produced such meager results when applied to human beings.

The latest candidate or family of candidates to fill the gap is called cognitive science, a collection of related investigations into the human mind involving psychology, philosophy, linguistics, anthropology, and artificial intelligence. Cognitive science is really the name of a family of research projects and not a theory, but many of its practitioners think that the heart of cognitive science is a theory of the mind based on artificial intelligence (AI). According to this theory minds just are computer programs of certain kinds. The main ideological aim of Hofstadter and Dennett’s book is to advance this theory.

The book consists of twenty-seven essays “composed and arranged” by the two editors, all but one of them followed by “reflections” written by the composers and arrangers. The book’s aim, they tell us, is “to provoke, disturb, and befuddle its readers, to make the obvious strange and, perhaps, to make the strange obvious.” The result is a very heterogeneous collection, almost a hodgepodge, in which such well-known works as “Computing Machinery and Intelligence” by the British logician Alan Turing and Thomas Nagel’s “What Is It Like to Be a Bat?” appear along with two pieces of fiction by Borges, and several works of science fiction—three of them by the Polish author Stanislaw Lem; and a selection by the biologist Richard Dawkins occurs alongside fantasy dialogues by Hofstadter and an imaginative piece about brain-splitting by Arnold Zuboff. In addition to their reflections, the editors include three pieces by Hofstadter and one by Dennett.

The table of contents is heavy with philosophers—Nagel, Nozick, Cherniak, Dennett, Leiber, and Smullyan, as well as the present reviewer. This is perhaps not surprising because, though the argument of the book is in favor of cognitive science, the issues it raises have mostly to do with the philosophy of cognitive science and not with actual contemporary practice.

A standard device used throughout the book is the “Gedankenexperiment” where we are asked to imagine some more or less fantastic eventuality as a way of challenging and testing our common sense and theoretical convictions: What if…your brain were transferred to another body, your brain were split in two, you were a brain in a vat, the information in your brain were put into some other medium; what if your brain were separated from your body but still controlled it, what if inside your head were tiny conscious demons, etc.?

With the exception of a few of the pieces, the general tone of the book is whimsical, almost playful, and the composers and arrangers are inordinately fond of puns. I think many readers will find all the conscientious whimsy, the eager cuteness, a bit wearying. One can put up with “Is the soul greater than the hum of its parts?” but one senses a certain straining for effect when one reads “Prelude…Ant Fugue” as the title of a dialogue involving an anteater. It is the sort of collection that publishers describe as “delightful.”

The editors are also a bit coy about stating their own views. Their thesis is often insinuated by asking rhetorical questions. Thus we are asked “Is mentality like milk or like a song?” The correct answer is supposed to be “song.” But in spite of their protestations about wanting only to “provoke, disturb, and befuddle” the editors are in fact using the book throughout as a platform to promote their own theory. There is nothing reprehensible about using your book to state your theory, but there is at least a hint of the disingenuous in advertising your book as a collection of ideologically diverse musings and then loading it largely to favor one side. They indirectly admit that this is what is going on, in the introduction, where they state the theory and discuss how they will deal with the various “roadblocks” that it faces.

The theory, which is fairly widely held in cognitive science, can be summarized in three propositions.

  1. Mind as Program. What we call minds are simply very complex digital computer programs. Mental states are simply computer states and mental processes are computational processes. Any system whatever that had the right program, with the right input and output, would have to have mental states and processes in the same literal sense that you and I do, because that is all there is to mental states and processes, that is all that you and I have. The programs in question are “self-updating” or “self-designing” “systems of representations.”

  2. The Irrelevance of the Neurophysiology of the Brain. In the study of the mind actual biological facts about actual human and animal brains are irrelevant because the mind is an “abstract sort of thing” and human brains just happen to be among the indefinitely large number of kinds of computers that can have minds. Our minds happen to be embodied in our brains, but there is no essential connection between the mind and the brain. Any other computer with the right program would also have a mind.

Theses 1 and 2 are summarized in the introduction where the authors speak of “the emerging view of the mind as software or program—as an abstract sort of thing whose identity is independent of any particular physical embodiment.”

  3. The Turing Test as the Criterion of the Mental. The conclusive proof of the presence of mental states and capacities is the ability of a system to pass the Turing test, the test devised by Alan Turing and described in his article in this book. If a system can convince a competent expert that it has mental states then it really has those mental states. If, for example, a machine could “converse” with a native Chinese speaker in such a way as to convince the speaker that it understood Chinese then it would literally understand Chinese.

The three theses are neatly lumped together when one of the editors writes, “Minds exist in brains and may come to exist in programmed machines. If and when such machines come about, their causal powers will derive not from the substances they are made of, but from their design and the programs that run in them. And the way we will know they have those causal powers is by talking to them and listening carefully to what they have to say.”

We might call this collection of theses “strong artificial intelligence” (strong AI).1 These theses are certainly not obviously true and they are seldom explicitly stated and defended.

Let us inquire first into how plausible it is to suppose that specific biochemical powers of the brain are really irrelevant to the mind. It is an amazing fact, by the way, that in twenty-seven pieces about the mind the editors have not seen fit to include any whose primary aim is to tell us how the brain actually works, and this omission obviously derives from their conviction that since “mind is an abstract sort of thing” the specific neurophysiology of the brain is incidental. This idea derives part of its appeal from the editors’ keeping their discussion at a very abstract general level about “consciousness” and “mind” and “soul,” but if you consider specific mental states and processes—being thirsty, wanting to go to the bathroom, worrying about your income tax, trying to solve math puzzles, feeling depressed, recalling the French word for “butterfly”—then it seems at least a little odd to think that the brain is so irrelevant.

Take thirst, where we actually know a little bit about how it works. Kidney secretions of renin synthesize a substance called angiotensin. This substance goes into the hypothalamus and triggers a series of neuron firings. As far as we know these neuron firings are a very large part of the cause of thirst. Now obviously there is more to be said, for example about the relations of the hypothalamic responses to the rest of the brain, about other things going on in the hypothalamus, and about the possible distinctions between the feeling of thirst and the urge to drink. Let us suppose we have filled out the story with the rest of the biochemical causal account of thirst.

Now the theses of the mind as program and the irrelevance of the brain would tell us that what matters about this story is not the specific biochemical properties of the angiotensin or the hypothalamus but only the formal computer programs that the whole sequence instantiates. Well, let’s try that out as a hypothesis and see how it works. A computer can simulate the formal properties of the sequence of chemical and electrical phenomena in the production of thirst just as much as it can simulate the formal properties of anything else—we can simulate thirst just as we can simulate hurricanes, rainstorms, five-alarm fires, internal combustion engines, photosynthesis, lactation, or the flow of currency in a depressed economy. But no one in his right mind thinks that a computer simulation of a five-alarm fire will burn down the neighborhood, or that a computer simulation of an internal combustion engine will power a car or that computer simulations of lactation and photosynthesis will produce milk and sugar. To my amazement, however, I have found that a large number of people suppose that computer simulations of mental phenomena, whether at the level of brain processes or not, literally produce mental phenomena.

  1. “Strong” to distinguish the position from “weak” or “cautious” AI, which holds that the computer is simply a very useful tool in the study of the mind, not that the appropriately programmed computer literally has a mind.
