This is a time of many facile answers and pseudo answers to the question of consciousness. Many writers are making money from the so-called “consciousness market.” Some of them are sincere but muddled, others are out for the quick buck. Most people, it should be clear, do not want hard answers to deep questions—they want catchwords and slogans. It is too bad that the crowds of people looking for Big Answers About Brains will probably pass by the book Brainstorms by the Tufts philosopher Daniel Dennett. It is too bad because people who could well read Dennett’s book will instead turn elsewhere and find small pithy slogans rather than big answers.

Before I get into the meaty part of this review, I should say that, in my opinion, Brainstorms is one of the most important contributions to thinking about thinking yet written. This is the case partly because Dennett is a penetrating thinker with an obsession for clarity in his analyses, and partly because he is lucky enough to be alive in an age when science can finally begin to approach the mysteries of brains, minds, consciousnesses, and souls. One of the virtues of Dennett’s book is its skillful mixing of philosophy and science. There are plentiful citations of rather esoteric scientific articles on artificial intelligence, mathematical logic, mental imagery, behaviorism, neurology, linguistics—even anesthesia. Dennett is well read, and his knowledge serves him well. I must say, I am glad I did not know of this book while writing my own book (Gödel, Escher, Bach: an Eternal Golden Braid), for I might have become discouraged that someone had already said it all.

What are the issues he is most concerned with? One of them is what I like to call “the soul-searching question”: where is the “I” that each one of us has, and what in the world is it? I am not sure that any answer will ever satisfy us, and yet it is in some sense the most important question of human existence. It is the ultimate attempt to “find oneself.” I believe finding an answer to this question is the underlying motivation for Brainstorms. Everything in the book turns on this issue; Dennett comes at it from angle after angle, as a mountain climber might size up a forbidding peak by circling it several times and going partway up on several sides before eventually attempting to scale it. These reconnoiterings form an integral part of Dennett’s “answer” to the soul-searching question. I doubt that Dennett would claim to have presented the final answer; he probably feels, as I do, that a definitive answer is impossible. Nonetheless, he comes about as close as I have seen anyone do to giving a well-worked-out theory of the meaning of the terms “consciousness,” “mind,” and “I.”

What is Dennett’s approach to mind and brain? Its cornerstone is the notion of an “intentional system.” By this Dennett means some mechanism or organism whose activities we can best understand by imbuing them with purposiveness and other characteristics that we traditionally reserve for animate beings: desires, beliefs, knowledge. Now, one ought to ask, what are the definitions of these attributes? Knowledge, for instance. Does access to information on subject X make one knowledgeable on subject X? A Frenchman who cannot speak English will not gain much illumination from the Encyclopedia Britannica sitting handsomely on his shelf. Does my bank’s computer “know” that two plus two is four? Does it know what my bank account is? Does it “know” anything at all?

To get at this question and many other similar ones, Dennett turns to the rich and concrete image of chess-playing computers (or, to be more accurate, computers loaded with chess-playing programs). I happen to own one of those little machines, called a “Chess Challenger 7,” and I can attest to the awe I feel when its lights flash and it calculates—in five seconds—a move that defeats my latest stratagem. I am a computer scientist and an old hand at programming. I have written game-playing programs and I understand them inside-out—yet I can still get an eerie feeling from watching this single-minded book-sized alien object “outthink” me. That is to say, I find it possible to think about it that way. If I want to, I can back off and adopt a more careful mode of expression. Around certain people, in fact, I would take great pains never to use such anthropomorphic language. My point, however, is that it is just as easy for professional computer scientists as it is for outsiders to slip into language such as, “If the computer wants to capture my bishop and believes I wouldn’t trade my queen for his knight, then the computer will move his pawn forward one space.”


Who knows? Perhaps such language is easier for us to use. I have a friend who is a chess master and who respects his Chess Challenger’s game, although he beats it easily. His twelve-year-old son still gets beaten by the machine, however. Remembering my awe of “electronic brains” when I was about that age, I asked the boy, “Do you think this machine can think?” He replied without much interest, “Naw, it’s just programmed to play chess.” This blasé response floored me. Is there no romance left about “thinking machines”? What is the difference, if you please, between genuine thought and following some program or other?

One could try to strip the computer of all its talents. Dennett does this brilliantly in the following passage:

It used to be popular to say, “A computer can’t really think, of course; all it can do is add, subtract, multiply, and divide.” That leaves the way open to saying, “A computer can’t really multiply, of course; all it can do is add numbers together very, very fast,” and that must lead to the admission: “A computer cannot really add numbers, of course; all it can do is control the opening and closing of hundreds of tiny switches,” which leads to: “A computer can’t really control its switches, of course; it’s simply at the mercy of the electrical currents pulsing through it.” What this chain of claims adds up to “prove,” obviously, is that computers are really pretty dull lumps of stuff—they can’t do anything interesting at all. They can’t really guide rockets to the moon, or make out paychecks, or beat human beings at chess, but of course they can do all that and more.

This is a hilarious reductio ad absurdum, typical of Dennett’s style of argument. It is beautifully put. One of its charms is that it is so disorienting. What in the world, one wonders after reading it, does it mean, “to ‘do’ something”? And if something by chance gets “done” anyway, to whom should we give credit? As my friend Donald Byrd pointed out, “Hofstadter can’t really write reviews, of course; all he can do is select, juxtapose, and rearrange various prefabricated units composed of letters”—“and,” I hastened to add, “take suggestions from friends.”

Suppose that a chess-playing program were criticized by its human designer for its habit of getting its queen out too early. As Dennett points out, this is all the more difficult an attribution to make to the computer, since,

for all the many levels of explicit representation to be found in that program, nowhere is anything roughly synonymous with “I should get my queen out early” explicitly tokened. The level of analysis to which the designer’s remark belongs describes features that are, in an entirely innocent way, emergent properties of the computational processes that have “engineering reality.”

I find the idea of an “innocently emergent” quality of a program most appealing. In my book, I used the word “epiphenomenon” for the same notion, but perhaps Dennett’s term is better. Here, it seems at least somewhat natural—or easy—to attribute “desire” to the program: it wants to get its queen out fast (for whatever misguided reasons). Which brings us back to the question of what these things are—desires, goals, beliefs, opinions, and other intentional attributes.
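
To make the flavor of innocent emergence concrete, here is a minimal sketch of my own (not drawn from any real chess program): the evaluator simply rewards piece mobility, and a habit of developing the queen early emerges from that preference, though nothing in the code explicitly tokens any such rule.

```python
# A toy, hypothetical evaluator (not any real chess engine): it prefers
# whatever development move most increases total mobility. Since the queen
# is the most mobile piece, "get the queen out early" emerges as a habit,
# though no line of the program explicitly says so.

MOBILITY = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def mobility_score(developed_pieces):
    """Crude positional score: the summed mobility of developed pieces."""
    return sum(MOBILITY[piece] for piece in developed_pieces)

def choose_development(developed_pieces, candidates):
    """Pick the undeveloped piece whose development raises the score most."""
    return max(candidates, key=lambda p: mobility_score(developed_pieces | {p}))

if __name__ == "__main__":
    # With nothing yet developed, the program "wants" to bring its queen out first.
    print(choose_development(set(), {"pawn", "knight", "bishop", "queen"}))
```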

In order to show how such intentional attributes can be reduced to mechanistic ones—for this is pretty much the purpose of his book—Dennett uses the picturesque concept of a “homunculus discharging.” A homunculus is a hypothetical “mini-man” who plays a role in explaining how intelligence is organized. If we explain my perception of a book on the table by positing an “inner screen” in my brain on which the image of the book is projected, then whose is the “inner eye” that perceives the image on that inner screen? Answer: a mini-man’s! And does that mini-man have an inner screen and a mini-mini-man to watch it within his mini-brain? How far does this regress go? Where does the buck stop?

Does a player of “inner tennis” have an “inner tennis court” on which homunculi combat each other, each with a yet further “inner tennis” game going on in his mini-head? Or consider the “Tidy Bowl” man, of whom I read once in my local paper (author unknown):

First of all, what kind of weirdo midget spends his day floating around in a toy boat in someone else’s toilet? And how does he avoid being flushed? Does he have a little mooring in every john in the world? And does he have a little Tidy Bowl man in his toilet who makes the water blue?

There you have, in a nutshell, the homunculus problem. How can intelligence be explained by recourse to littler intelligences? Dennett puts the problem this way:


Any time a theory builder proposes to call any event, state, structure, etc., in any system (say the brain of an organism) a signal or message or command or otherwise endows it with content, he takes out a loan of intelligence. He implicitly posits along with his signals, messages, or commands, something that can serve as a signal-reader, message-understander, or commander, else his “signals” will be for naught, will decay unreceived, uncomprehended. This loan must be repaid eventually by finding and analyzing away these readers or comprehenders; for, failing this, the theory will have among its elements unanalyzed man-analogues endowed with enough intelligence to read the signals, etc., and thus the theory will postpone answering the major question: what makes for intelligence?

This metaphor of “taking out a loan” on intelligence is used to good effect by Dennett:

Intentionality…serves as a reliable means of detecting exactly where a theory is in the red relative to the task of explaining intelligence; wherever a theory relies on a formulation bearing the logical marks of intentionality, there a little man is concealed.

Skinner’s and Quine’s adamant prohibition of intentional idioms at all levels of theory is the analogue of rock-ribbed New England conservatism: no deficit spending when building a theory!

In his chapter called “Artificial Intelligence as Philosophy and Psychology,” Dennett observes, tongue in cheek, that “psychology without homunculi is impossible. But psychology with homunculi is doomed to circularity or infinite regress, so psychology is impossible.” He calls this seeming inexplicability or irreducibility of mind “Hume’s problem,” and sees its resolution, as do most workers in artificial intelligence (AI), in thinking of “littler” as meaning “lesser.”

If we then look closer at the individual boxes [in a flow chart for an artificial intelligence program] we see that the function of each is accomplished by subdividing it via another flow chart into still smaller, more stupid homunculi. Eventually this nesting of boxes within boxes lands you with homunculi so stupid (all they have to do is remember whether to say yes or no when asked) that they can be, as one says, “replaced by a machine.” One discharges fancy homunculi from one’s scheme by organizing armies of such idiots to do the work.

The AI programmer uses intentional language fearlessly because he knows that if he succeeds in getting his program to run, any questions he has been begging provisionally will have been paid back. The computer is more unforgiving than any human critic; if the program works then we can be certain that all homunculi have been discharged from the theory.
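
One way to see what “replaced by a machine” amounts to is to write the decomposition out in code. The sketch below is my own illustration, not Dennett’s: a “fancy” word-recognizing homunculus is built from stupider letter-matching homunculi, which bottom out in homunculi so stupid that all they do is answer yes or no.

```python
# My own toy illustration of "discharging" homunculi, not taken from the book.

def same_character(a, b):
    # The stupidest homunculus of all: it only answers yes or no.
    return a == b

def letter_matches(text, index, letter):
    # A slightly less stupid homunculus: it asks the yes/no idiot one question.
    return index < len(text) and same_character(text[index], letter)

def recognizes_word(text, word):
    # The "fancy" homunculus: an army of idiots organized to do the work.
    return len(text) == len(word) and all(
        letter_matches(text, i, letter) for i, letter in enumerate(word)
    )

print(recognizes_word("queen", "queen"))   # True
print(recognizes_word("queen", "quern"))   # False
```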

In the chapter called “Toward a Cognitive Theory of Consciousness,” Dennett makes a stab at suggesting what a high-level flow chart of a conscious system such as his own mind would look like. Later in the chapter he mocks his own design: “Is my model akin to the blueprint for a perpetual motion machine, or have I merely forgotten to provide a way out for the gases?” But this modesty does not prevent him from making rather strong claims for the idea of such a design in general:

[Y]ou are a realization of this flow chart, and…it is in virtue of this fact that it seems—to us and to you—that there is something it is like to be you.

This last phrase comes from the philosopher Thomas Nagel’s article “What Is It Like to Be a Bat?” and it is a phrase which Dennett has taken to heart. The mere title of the article leads one immediately to wonder: What is it like to be—a country? a government? a company? the operating system of a timesharing computer? a person of the opposite sex? a guillotined head? an ant colony? an ant? an AI program? a lunatic? a speaker of a language one doesn’t speak? a rock?

A vivid example of this question is posed by the following hypothetical situation:

Let a functionalist theory of pain (whatever its details) be instantiated by a system the subassemblies of which are not such things as C-fibers and reticular systems but telephone lines and offices staffed by people. Perhaps it is a giant robot controlled by an army of human beings that inhabit it. When the theory’s functionally characterized conditions for pain are now met we must say, if the theory is true, that the robot is in pain. That is, real pain, as real as our own, would exist in virtue of the perhaps disinterested and businesslike activities of these bureaucratic teams, executing their proper functions. It does seem that there must be more to pain than that.

Actually, to me, it seems entirely credible that there is real pain at the organization’s level. And it seems to me, moreover, that it would be manifested at the component level (i.e., the human level) by a general kind of agitation. Think, for instance, of the present condition of the United States as it feels more and more the strangling effects of the OPEC strategy. It would be quite unrealistic to say that everyone in the country is feeling physical pain from this, but it would be equally unrealistic to ignore the very real effects upon people’s morale, their tempers, general anxiety levels, ability to do what they want, and so on. It is this collective mood of the country that constitutes its pain, and that, if anything, will be responsible for its deciding to go to war over oil. (Of course, a flow chart for the United States would not look very much like one for a human brain; I am not claiming that the United States is a “big person,” only that there could be a “big person” built out of ordinary people—but probably it would take many trillions of people, not just hundreds of millions, on the assumption that one ordinary person is simulating the function of a single cell in the giant person.)

If such an enormous model of a thinking system were realized, whether in a computer or in an organization whose “cells” are people, one would still have to answer the fundamental questions of where and how beliefs are stored. This is the issue of knowledge representation, perhaps the core issue of artificial intelligence research today. In a provocative chapter titled “Brain Writing and Mind Reading,” Dennett muses over whether, in the hypothetical “Golden Age of Neurocryptography,” it will be possible to “read out” your beliefs by examining your brain’s structure, or, even more bizarrely, to “write” new beliefs directly into your brain by some sort of surgical procedure. It has to be pointed out that if there were in principle no way of “reading” the brain, then we ourselves could not read our own brains, and could not retrieve our beliefs. So there is some sort of “brain writing” which encodes our ideas. But how implicitly are they coded? Can we hope to “crack the cerebral code,” as Dennett puts it?

One has to recognize that we have yet to characterize what we mean by “knowing” something, or “believing” it. Suppose—to take an amusing example that he gives—that we found the Lord’s Prayer written in freckles on a man’s back. That would be “something strange and marvelous” indeed—but not a piece of his cognitive structure. Dennett then considers pieces of the brain which control involuntary eyeball motions, such as contraction, focusing, and so on. Certainly that part of my brain knows how to do its job, but it is not part of my knowledge. One must distinguish between knowledge (or beliefs) belonging to a person, and knowledge belonging to parts of that person. Only at the personal level do we reach ideas which seem, in some sense, to be stored in a way that can be verbalized. Are these, then, synonymous with our beliefs? And if so, does that mean that beliefs are stored in some sort of “language”? That is to say, are beliefs stored in something akin to a neural version of sentences? This is, to me, an implausible hypothesis which Dennett explores at some length. At first I couldn’t believe that he was taking it seriously, but indeed he is. He too concludes, to my relief, that it is implausible.

To reach that conclusion, he draws the distinction, very familiar to artificial intelligence workers, between procedural and declarative forms of knowledge. A perfect example of procedural knowledge is our ability to make sense of what we see. We don’t know how we do it, but, by God, we certainly do it well! Declarative knowledge is that which we have some sort of direct access to, which we can verbalize and modify with ease: “Shakespeare was born on April 23.” What about the knowledge that we seem to store somewhat implicitly in imagery? Dennett gives several excellent examples of knowledge that we all share, yet of which the question can still be asked whether we all share the “same” structures in which it is stored. For example: “New York is not on the moon, or in Venezuela; salt is not sugar, or green, or oily; salt is good on potatoes, on eggs; tweed coats are not made of salt; a grain of salt is smaller than an elephant….”
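
For readers who like to see such distinctions written down, here is a rough sketch of my own (the particular facts and function names are invented for illustration): declarative knowledge sits in an explicit, inspectable data structure, while procedural knowledge is buried in a routine whose “facts” are nowhere explicitly tokened.

```python
# A hypothetical contrast of declarative versus procedural knowledge
# (my illustration; the facts and names are mine, not Dennett's).

# Declarative: explicit, inspectable, easy to verbalize and to revise.
facts = {
    "Shakespeare's birthday": "April 23",
    "salt is good on": ["potatoes", "eggs"],
    "tweed coats are made of salt": False,
}

# Procedural: we can run it, but what it "knows" is implicit in how it behaves.
def looks_like_table_salt(grain_size_mm, color, tastes_sweet):
    return grain_size_mm < 1.0 and color == "white" and not tastes_sweet

print(facts["Shakespeare's birthday"])             # April 23
print(looks_like_table_salt(0.3, "white", False))  # True
```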

Perhaps my favorite example of how weird “systems that can believe” can be is the tic-tac-toe machine that Danny Hillis of MIT built in his spare time. In this day of chess-playing computers, a tic-tac-toe machine wouldn’t be much cause for attention, except that this one happens to be made entirely of Tinkertoys—50,000 of them, arranged in a complex pattern that Danny programmed a large computer to figure out. As Danny said to me, “Any four-year-old with 50,000 Tinkertoys and a PDP-10 computer could have built it.” Called “TTL” (Tinker-Toy Logic), the machine encodes in its wooden configuration all the moves that have been made by it and by its opponent, and this configuration then determines its response to the opponent’s next move. Charles Babbage would have loved this device, I’m sure. Where in its complex tangle of Tinkertoys do its “beliefs” about how to play tic-tac-toe lie? Where are its “beliefs” about what moves have been made? And in both cases, are they procedural or declarative?
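
I do not know the actual wiring of Hillis’s machine, but the general idea, namely that the whole game state is encoded in one configuration and the response is a pure function of that configuration, can be sketched in a few lines (the playing strategy below is my own stand-in, not the machine’s):

```python
# A software stand-in for the Tinkertoy idea, not Hillis's actual logic:
# the board configuration alone determines the machine's next move.

def machine_move(board):
    """board: a 9-character string, 'X' (machine), 'O' (opponent), '.' (empty)."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for mark in ("X", "O"):                 # complete a win if possible, else block
        for a, b, c in lines:
            trio = board[a] + board[b] + board[c]
            if trio.count(mark) == 2 and "." in trio:
                return (a, b, c)[trio.index(".")]
    return board.index(".")                 # otherwise take the first empty square

print(machine_move("XX.O.O..."))  # 2: the machine completes its top row
```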

Another kind of knowledge which we use but to which, at a conscious level, we have no direct access, is knowledge of how to store incoming perceptions, and how to retrieve memories by association. Much of the time we retrieve memories effortlessly; what we don’t notice is that we don’t retrieve all possible relevant associations, but only one or two. “Who” does the selecting? What kind of filter is used? And for filing memories away: “Who” is it that very thoughtfully has stored memories of passages from Dennett’s book according to where they are placed on the page (and without any conscious direction from “me”)? It very much helps me to find passages I remember when I scan for them. I certainly appreciate the kindness but don’t know whom to thank! This kind of knowledge of memory management has been termed “metaknowledge”—knowledge about how to represent knowledge—and is vital to any intelligent system. Are such strategies always procedural, cryptic, invisible, unmodifiable? Or can we voluntarily modify our memory management strategies?

One of the central mysteries of consciousness, indeed, is the fact that, upon careful reflection, we find that we do not actually exert any control over the next thought we will think. We have to take potluck, so to speak—for instance, the very choice to use the word “potluck” was not my choice—it just bubbled up from my subconscious before I could possibly have intervened. So who should get credit for my prose style? Me, or those subconscious entities which collectively determine my style? Dennett’s point is that this collection of subselves, this cognitive style, is me, and that thinking, being conscious, is having such a collection of subselves which in turn are composed of less conscious, less intelligent, less complex subselves, and so on, until we reach the entirely unconscious level of the cell—or, if you prefer, of the molecule or electron.

This is only part of the story, of course. It’s clear that not just any old collection of electrons or cells or homunculi will add up to a conscious organism. Which ones will, to use that haunting phrase of Nagel’s again, add up to “something which it is like something to be”? How do we characterize the overall organization of homunculi so that we know we are describing a conscious being?

Having defined intentional systems expressly for this purpose, Dennett attempts to characterize which sorts of intentional attributes are necessary for “personhood” (which I take to be synonymous with the notion of consciousness I have just referred to) in his chapter “Conditions of Personhood.” One of the possibilities he entertains is that to be a person one

must be able to reciprocate the stance [of another entity], which suggests that an intentional system that itself adopted the intentional stance toward other objects would meet the test. Let us define a second-order intentional system as one to which we ascribe not only simple beliefs, desires, and other intentions, but beliefs, desires, and other intentions about beliefs, desires, and other intentions.

To examine the validity of this proposed criterion, he then examines some fascinating cases of animal behavior. For instance, consider the mother bird who will feign a broken wing to lure a predator away from the nest. Does the bird “intend” to induce a false “belief” in the predator, or does a piece of rote behavior just get dredged up from its brain in some clever way, and then executed unconsciously, much as a subroutine in a program? Does the bird think to herself, “Let’s see, if I were to flap my wing as if it were broken, the fox would think…”?
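
If one wanted to make the ladder of orders vivid, the ascriptions could be rendered as nested data, a toy notation of my own devising rather than anything from the book:

```python
# A toy notation (mine, not Dennett's) for orders of intentionality.
# First order: beliefs and desires about the world.
# Second order: beliefs and desires about another creature's beliefs and desires.

mother_bird = {
    "desires": ["keep the fox away from the nest"],
    "beliefs_about": {
        "fox": {
            "beliefs": ["that wing is broken"],
            "desires": ["to catch an easy meal"],
        }
    },
}

# The bird is a second-order intentional system only if the nested ascription
# is genuinely hers, and not merely ours as observers; that is Dennett's question.
print(mother_bird["beliefs_about"]["fox"]["beliefs"])
```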

After considering further examples having to do with dog behavior, Dennett finds himself unsatisfied even with third-order intentional systems, for in one example he shows how dogs might have intentions about beliefs about desires—and certainly (or so Dennett seems to think) dogs aren’t persons! I for one tend to attribute quite a lot of consciousness, or “personhood,” to dogs—yet I have also seen canine behavior which makes me wonder how realistic I am being when I begin to identify with dogs. Just how much of a mind does a dog have? I would very much like to see a serious study called “The Intelligence of Dogs” in which well-verified stories are put together with results of psychological experiments. Has such a book ever been written?

In any case, Dennett is dissatisfied with merely taking the presence of some types of higher-order intentions as the condition for personhood. He goes on to consider the hypothesis, derived from the philosopher Harry Frankfurt, that what characterizes the human condition is that we have second-order volitions—desires to change our own desires. Frankfurt calls “wanton” those intentional systems which have first-order desires, but no second-order volitions.

But what should be so special about second-order volitions? Why are they, among higher-order intentions, the peculiar province of persons? Because, I believe, the “reflective self-evaluation” Frankfurt speaks of is, and must be, genuine self-consciousness, which is achieved only by adopting toward oneself the stance not simply of communicator but of Anscombian reason-asker and persuader…. One schools oneself, one offers oneself persuasions, arguments, threats, bribes, in the hopes of inducing oneself to acquire the first-order desire.

It seems funny to me to replace the classic idea that we are “featherless bipeds” with the more modern, up-to-date idea that we are “intentional systems with second-order volitions.” It seems funny because, as Wittgenstein pointed out, it’s hard enough to characterize what ordinary, “simple” categories are, such as chairs, games, and the letter “a.” How can Dennett or anyone hope to have caught the essence of personhood in a pithy slogan? Fortunately, Dennett realizes that personhood has slipped through his net when he considers what happens when questions of responsibility arise.

There is no objectively satisfiable sufficient condition for an entity’s really having beliefs, and as we uncover apparent irrationality under an intentional interpretation of an entity, our grounds for ascribing any beliefs at all wanes, especially when we have (what we always can have in principle) a non-intentional, mechanistic account of the entity.

In just the same way our assumption that an entity is a person is shaken precisely in those cases where it matters: when wrong has been done and the question of responsibility arises. For in these cases the grounds for saying that the person is culpable (the evidence that he did wrong, was aware he was doing wrong, and did wrong of his own free will) are in themselves grounds for doubting that it is a person we are dealing with at all. And if it is asked what could settle our doubts, the answer is: nothing. When such problems arise we cannot even tell in our own cases if we are persons.

An excellent, surprising way of dealing with one of the central questions Dennett raises.

I have not touched on many of the most interesting points in the book. Dennett discusses articulately why Gödel’s Theorem does not preclude artificial intelligence—a theme of considerable interest to me. He has a delightful attack on behaviorism called “Skinner Skinned.” And one could really not complete a review of Brainstorms without mentioning the delicious “dessert” that comes at the end—a tale of a man whose brain has been removed and left in a chemical bath in Houston while “he”—linked to his brain by radio at the speed of light—goes on a dangerous underground mission near Tulsa. “Where am I?” wonders the brain-body combination—and that is the title of the tale. It is astonishingly effective at evoking the mind-boggling quality of the soul-searching question, and is a powerful ending to the book.

I have tended to be suspicious of philosophers and their approach to most questions, even their prose, filled as it is with “qua”s, “simpliciter”s, “tout court”s, and so on (a habit which Dennett unfortunately shares). But in the past year, I have become more sympathetic to the philosophers’ views of mind, partly because I have had some delightful discussions with philosophers, and largely because of Dennett’s book. To me it is remarkably lucid and well written, refreshing and unpompous. He has set up an excellent frame for the discussion of mentalistic phenomena, and though I believe some of his ideas will be outmoded in the not-too-distant future, many will survive. Dennett has done much that should sharpen the thinking of AI people, philosophers, and cognitive psychologists about the most difficult questions they face.
