Brainstorms: Philosophical Essays on Mind and Psychology
This is a time of many facile answers and pseudo-answers to the question of consciousness. Many writers are making money from the so-called “consciousness market.” Some of them are sincere but muddled; others are out for the quick buck. Most people, it should be clear, do not want hard answers to deep questions—they want catchwords and slogans. It is too bad that the crowds of people looking for Big Answers About Brains will probably pass by the book Brainstorms by the Tufts philosopher Daniel Dennett. It is too bad, because people who could probably read Dennett’s book will turn elsewhere and find small pithy slogans rather than big answers.
Before I get into the meaty part of this review, I should say that, in my opinion, Brainstorms is one of the most important contributions to thinking about thinking yet written. This is the case partly because Dennett is a penetrating thinker with an obsession for clarity in his analyses, and partly because he is lucky enough to be alive in an age when science can finally begin to approach the mysteries of brains, minds, consciousnesses, and souls. One of the virtues of Dennett’s book is its skillful mixing of philosophy and science. There are plentiful citations of rather esoteric scientific articles on artificial intelligence, mathematical logic, mental imagery, behaviorism, neurology, linguistics—even anesthesia. Dennett is well read, and his knowledge serves him well. I must say, I am glad I did not know of this book while writing my own book (Gödel, Escher, Bach: an Eternal Golden Braid), for I might have become discouraged that someone had already said it all.
What are the issues he is most concerned with? One of them is what I like to call “the soul-searching question”: where is the “I” that each one of us has, and what in the world is it? I am not sure that any answer will ever satisfy us, and yet it is in some sense the most important question of human existence. It is the ultimate attempt to “find oneself.” I believe finding an answer to this question is the underlying motivation for Brainstorms. Everything in the book turns on this issue; Dennett comes at it from angle after angle, as a mountain climber might try to size up a forbidding mountain peak by circling around it several times and going partway up on several sides before eventually attempting to scale it. These reconnoiterings form an integral part of Dennett’s “answer” to the soul-searching question. I doubt that Dennett would claim to have presented the final answer; he probably feels, as I do, that a definitive answer is impossible. Nonetheless, he comes about as close as I have seen anyone come to giving a well-worked-out theory of the meaning of the terms “consciousness,” “mind,” and “I.”
What is Dennett’s approach to mind and brain? Its cornerstone is the notion of an “intentional system.” By this Dennett means some mechanism or organism whose activities we can best understand by imbuing them with purposiveness and other characteristics that we traditionally reserve for animate beings: desires, beliefs, knowledge. Now, one ought to ask, what are the definitions of these attributes? Knowledge, for instance. Does access to information on subject X make one knowledgeable on subject X? A Frenchman who cannot speak English will not gain much illumination from the Encyclopedia Britannica sitting handsomely on his shelf. Does my bank’s computer “know” that two plus two is four? Does it know what my bank account is? Does it “know” anything at all?
To get at this question and many other similar ones, Dennett turns to the rich and concrete image of chess-playing computers (or, to be more accurate, computers loaded with chess-playing programs). I happen to own one of those little machines, called a “Chess Challenger 7,” and I can attest to the awe I feel when its lights flash and it calculates—in five seconds—a move that defeats my latest stratagem. I am a computer scientist and an old hand at programming. I have written game-playing programs and I understand them inside-out—yet I can still get an eerie feeling from watching this single-minded book-sized alien object “outthink” me. That is to say, I find it possible to think about it that way. If I want to, I can back off and adopt a more careful mode of expression. Around certain people, in fact, I would take great pains never to use such anthropomorphic language. My point, however, is that it is just as easy for professional computer scientists as it is for outsiders to slip into language such as, “If the computer wants to capture my bishop and believes I wouldn’t trade my queen for his knight, then the computer will move his pawn forward one space.”
Who knows? Perhaps such language is easier for us to use. I have a friend who is a chess master and who respects his Chess Challenger’s game, although he beats it easily. His twelve-year-old son still gets beaten by the machine, however. Remembering my awe of “electronic brains” when I was about that age, I asked the boy, “Do you think this machine can think?” He replied without much interest, “Naw, it’s just programmed to play chess.” This blasé response floored me. Is there no romance left about “thinking machines”? What is the difference, if you please, between genuine thought and following some program or other?
One could try to strip the computer of all its talents. Dennett does this brilliantly in the following passage:
It used to be popular to say, “A computer can’t really think, of course; all it can do is add, subtract, multiply, and divide.” That leaves the way open to saying, “A computer can’t really multiply, of course; all it can do is add numbers together very, very fast,” and that must lead to the admission: “A computer cannot really add numbers, of course; all it can do is control the opening and closing of hundreds of tiny switches,” which leads to: “A computer can’t really control its switches, of course; it’s simply at the mercy of the electrical currents pulsing through it.” What this chain of claims adds up to “prove,” obviously, is that computers are really pretty dull lumps of stuff—they can’t do anything interesting at all. They can’t really guide rockets to the moon, or make out paychecks, or beat human beings at chess, but of course they can do all that and more.
This is a hilarious reductio ad absurdum, typical of Dennett’s style of argument. It is beautifully put. One of its charms is that it is so disorienting. What in the world, one wonders after reading it, does it mean, “to ‘do’ something”? And if something by chance gets “done” anyway, to whom should we give credit? As my friend Donald Byrd pointed out, “Hofstadter can’t really write reviews, of course; all he can do is select, juxtapose, and rearrange various prefabricated units composed of letters”—“and,” I hastened to add, “take suggestions from friends.”
Suppose that a chess-playing program were criticized by its human designer for its habit of getting its queen out too early. As Dennett points out, this is a difficult attribution to make sense of, since,
for all the many levels of explicit representation to be found in that program, nowhere is anything roughly synonymous with “I should get my queen out early” explicitly tokened. The level of analysis to which the designer’s remark belongs describes features that are, in an entirely innocent way, emergent properties of the computational processes that have “engineering reality.”
I find the idea of an “innocently emergent” quality of a program most appealing. In my book, I used the word “epiphenomenon” for the same notion, but perhaps Dennett’s term is better. Here, it seems at least somewhat real—or easy—to attribute “desire” to the program: it wants to get its queen out fast (for whatever misguided reasons). Which brings us back to the question of what these things are—desires, goals, beliefs, opinions, and other intentional attributes.
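The flavor of such innocent emergence can be suggested with a toy sketch of my own (not Dennett’s example, and nothing like a real chess program): an evaluation function that scores candidate moves purely by the total mobility of one’s pieces. Nowhere in the rule is “get the queen out early” tokened, yet the behavior emerges, since the queen gains the most mobility by developing. The moves and mobility counts below are made up for illustration.

```python
# Toy illustration of an "innocently emergent" property: the evaluation
# rule knows nothing about queens, yet favors early queen development.

# Hypothetical mobility counts (squares attacked by each piece) after
# three candidate opening moves, from an imaginary position:
candidate_moves = {
    "Qh5": {"queen": 11, "knight": 2, "bishop": 2},  # develop the queen
    "Nf3": {"queen": 0,  "knight": 5, "bishop": 2},  # develop a knight
    "a3":  {"queen": 0,  "knight": 2, "bishop": 2},  # quiet pawn move
}

def evaluate(mobility):
    """Score a move purely by the total mobility it yields."""
    return sum(mobility.values())

best = max(candidate_moves, key=lambda m: evaluate(candidate_moves[m]))
print(best)  # the mobility rule alone picks the early queen sortie, Qh5
```

The point of the sketch is that a designer could truthfully say “it wants its queen out fast” even though no line of the program mentions queens at the level where the “wanting” is implemented.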
In order to show how such intentional attributes can be reduced to mechanistic ones—for this is pretty much the purpose of his book—Dennett uses the picturesque concept of a “homunculus discharging.” A homunculus is a hypothetical “mini-man” who plays a role in explaining how intelligence is organized. If we explain my perception of a book on the table by positing an “inner screen” in my brain on which the image of the book is projected, then whose is the “inner eye” that perceives the image on that inner screen? Answer: a mini-man’s! And does that mini-man have an inner screen and a mini-mini-man to watch it within his mini-brain? How far does this regress go? Where does the buck stop?
Does a player of “inner tennis” have an “inner tennis court” on which homunculi combat each other, each with a yet further “inner tennis” game going on in his mini-head? Or consider the “Tidy Bowl” man, of whom I read once in my local paper (author unknown):
First of all, what kind of weirdo midget spends his day floating around in a toy boat in someone else’s toilet? And how does he avoid being flushed? Does he have a little mooring in every john in the world? And does he have a little Tidy Bowl man in his toilet who makes the water blue?
There you have, in a nutshell, the homunculus problem. How can intelligence be explained by recourse to littler intelligences? Dennett puts the problem this way:
Any time a theory builder proposes to call any event, state, structure, etc., in any system (say the brain of an organism) a signal or message or command or otherwise endows it with content, he takes out a loan of intelligence. He implicitly posits along with his signals, messages, or commands, something that can serve as a signal-reader, message-understander, or commander, else his “signals” will be for naught, will decay unreceived, uncomprehended. This loan must be repaid eventually by finding and analyzing away these readers or comprehenders; for, failing this, the theory will have among its elements unanalyzed man-analogues endowed with enough intelligence to read the signals, etc., and thus the theory will postpone answering the major question: what makes for intelligence?
This metaphor of “taking out a loan” on intelligence is used to good effect by Dennett:
Intentionality…serves as a reliable means of detecting exactly where a theory is in the red relative to the task of explaining intelligence; wherever a theory relies on a formulation bearing the logical marks of intentionality, there a little man is concealed.
Skinner’s and Quine’s adamant prohibition of intentional idioms at all levels of theory is the analogue of rock-ribbed New England conservatism: no deficit spending when building a theory!
In his chapter called “Artificial Intelligence as Philosophy and Psychology,” Dennett observes, tongue in cheek, that “psychology without homunculi is impossible. But psychology with homunculi is doomed to circularity or infinite regress, so psychology is impossible.” He calls this seeming inexplicability or irreducibility of mind “Hume’s problem,” and, like most workers in artificial intelligence (AI), sees its resolution in taking littler to mean lesser: the regress is harmless if each level of homunculi is stupider than the level above it.