1.

Can we find a convincing account of how brain processes cause—or even could cause—our conscious experiences? That is the question I raised in the previous issue.* It is differently addressed in all the books under review, and indeed some of the writers do not think the relation of brain to consciousness is a causal relation in the first place.

Of the neurobiological theories of consciousness I have seen, the most impressively worked out and the most profound is that of Gerald Edelman. His two books discussed here are the third and the fourth in a series that began with Topobiology and Neural Darwinism. The aim of the series is to construct a global theory of the brain that situates brain science in relation to physics and evolutionary biology. The centerpiece of the series is the theory of consciousness advanced in The Remembered Present and summarized in Bright Air, Brilliant Fire.

In order to explain Edelman’s theory of consciousness I need first to explain briefly some of the central concepts and theories which he uses, especially those he uses to develop a theory of perceptual categorization. As we saw in the last issue, Crick wants to extend an account of “the binding problem” to a general account of consciousness. (The “binding problem” poses the question of how different stimulus inputs to different parts of the brain are bound together so as to produce a single, unified experience, for example, of seeing a cat.) Similarly, Edelman wants to extend an account of the development of perceptual categories—categories ranging from shape, color, and movement to objects such as cats and dogs—into a general account of consciousness. The first idea central to Edelman is the notion of maps. A map is a sheet of neurons in the brain where the points on the sheet are systematically related to the corresponding points on a sheet of receptor cells, such as the surface of the skin or the retina of the eye. Maps may also be related to other maps. In the human visual system there are over thirty maps in the visual cortex alone.

The second idea is his Theory of Neuronal Group Selection. According to Edelman we should not think of brain development, especially in matters such as perceptual categorization and memory, as a matter of the brain learning from the impact of the environment. Rather, the brain is genetically equipped from birth with an overabundance of neuronal groups and the brain develops by a mechanism which is like Darwinian natural selection: some neuronal groups die out, others survive and are strengthened. In some parts of the brain, as many as 70 percent of the neurons die before the brain reaches maturity. The unit which gets selected is not the individual neuron, but neuronal groups of hundreds to millions of cells. The basic point is that the brain is not an instructional mechanism, but a selectional mechanism; that is, the brain does not develop by alterations in a fixed set of neurons, but by selection processes that eliminate some neuronal groups and strengthen others.
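
The difference between an instructional and a selectional mechanism can be made concrete with a toy example. The sketch below is my own gloss, not Edelman’s model; the stimulus value, the group “tunings,” and the survival odds are all invented. A surplus of randomly tuned groups is exposed to a recurring stimulus; groups that happen to respond are strengthened, unresponsive ones tend to die off, and no individual group is ever reprogrammed:

```python
# Toy illustration of selection rather than instruction (invented numbers).
import random

random.seed(1)
stimulus = 0.7                                   # a recurring input feature
groups = [random.random() for _ in range(100)]   # overabundant random tunings

for _ in range(10):                              # repeated exposure
    survivors = []
    for tuning in groups:
        if abs(tuning - stimulus) < 0.3:         # group responds: strengthen it
            survivors.append(tuning + 0.1 * (stimulus - tuning))
        elif random.random() < 0.3:              # unresponsive: likely to die out
            survivors.append(tuning)
    groups = survivors

# Fewer groups remain, and the survivors are tuned near the stimulus,
# though no group was ever individually instructed.
print(len(groups), sum(groups) / len(groups))
```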

The third, and the most important, idea is that of reentry. Reentry is a process by which parallel signals go back and forth between maps. Map A signals to map B and map B signals back. The signals enter B from A and then reenter back into A. Edelman is anxious to insist that reentry is not just feedback, because there can be many parallel pathways operating simultaneously.

Now how is all this supposed to give rise to perceptual categories and generalizations? Edelman’s prose is not very clear at this point, but the idea that emerges is this: the brain has a problem to solve. It has to develop perceptual categories beginning with shape, color, and movement, and eventually including objects—tree, horse, and cup—and it has to be able to abstract general concepts. It has to do all this in a situation where the world is not already labeled and divided up into categories, and in which the brain has no program in advance and no homunculus inside to guide it.

How does it solve the problem? It has a large number of stimulus inputs for any given category—some stimuli are from the borders, or edges, of an object, others from its color, etc.—and after many stimulus inputs, particular patterns of neuronal groups will be selected in maps. But now similar signals will activate not only the previously selected neuronal groups in one map but also in another map or even in a whole group of maps, because the operations in the different maps are linked together by the reentry channels. Each map can use discriminations made by other maps for its own operations. Thus one map might figure out the borders of an object, another might figure out its movements, and the reentry mechanisms might enable yet other maps to figure out the shape of the object from its borders and its movements.
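
To make the mechanism vivid, here is a toy sketch in code; it is my illustration, not Edelman’s simulation, and the maps, categories, and evidence numbers are all invented. Two maps each hold a weighted guess about a stimulus, and a few rounds of reciprocal signaling let each map exploit the other’s discriminations until both settle on the same category, with no homunculus deciding the answer:

```python
# Toy illustration of reentry between two "maps" (invented data).

def normalize(act):
    total = sum(act.values())
    return {c: v / total for c, v in act.items()}

def reentry(a, b):
    # Reciprocal, parallel signaling: each map's activity is biased by the
    # other's, rather than one map merely feeding its own output back.
    a2 = normalize({c: a[c] * (0.5 + b[c]) for c in a})
    b2 = normalize({c: b[c] * (0.5 + a[c]) for c in b})
    return a2, b2

# The edge map alone cannot tell cat from dog; the motion map favors "cat."
edge_map   = {"cat": 0.50, "dog": 0.50}
motion_map = {"cat": 0.80, "dog": 0.20}

for _ in range(5):                       # a few reentrant exchanges
    edge_map, motion_map = reentry(edge_map, motion_map)

print(edge_map)    # after reentry, both maps agree: "cat" dominates in each
print(motion_map)
```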

So the result is that you can get a unified representation of objects in the world even though the representation is distributed over many different areas of the brain. Different maps in different areas are busy signaling each other through the reentry pathways. This leads to the possibility of categorization and simple generalization without having to have a program or a homunculus. When you get maps all over the brain signaling to each other by reentry you have what Edelman calls “global mapping” and this allows the system not only to have perceptual categories and generalization but also to coordinate perception and action.

This is a working hypothesis and not an established theory. Edelman does not claim to have proven that this is how the brain works in forming perceptual categories. But the hypothesis is made at least plausible by the fact that Edelman’s research group has designed Weak AI computer models of a robot (“Darwin III”) that can acquire perceptual categories and can generalize them by using these mechanisms. It is important to emphasize that none of these processes so far is conscious. When Edelman talks about perceptual categorization he is not talking about conscious perceptual experiences.

The question then is, how do we get from the apparatus I have described so far to conscious experiences? What more is needed? Edelman devotes most of The Remembered Present to answering this question and any brief summary is bound to be inadequate. It is essential to distinguish between “primary consciousness,” which is a matter of having what he calls imagery, by which he means simple sensations and perceptual experiences, and “higher-order consciousness,” which includes self-consciousness and language. In order to have primary consciousness in addition to the mechanisms just described, the brain needs at least the following:

  1. It must have memory. Memory for Edelman is not just a passive process of storing but an active process of recategorizing on the basis of previous categorizations. This conception of memory seems to me one of the most powerful features of the book because it provides an alternative to the traditional idea of memory as a storehouse of knowledge and experience, and of remembering as a process of retrieval from the storehouse.
  2. The brain must have a system for learning. Learning for Edelman involves not only memory but also value, a way of valuing some stimuli over others. A system has to prefer some things to others in order that it can learn. Learning is a matter of changes in behavior that are based on categorizations governed by positive and negative values. For example, an animal might value what is light over what is dark, or what is warm over what is cold; learning for the animal involves relating perceptual categorization and memory to such a set of values.
  3. The brain also needs the ability to discriminate the self from the nonself. This is not yet self-consciousness, because it is done without a concept of the self, but the nervous system must be able to discriminate the organism of which it is a part from the rest of the world.

These three features are necessary but not yet sufficient conditions of primary consciousness. To get the full account of primary consciousness we have to add three more elements.

  4. The organism needs a system for categorizing successive events in time, and for forming concepts.
  5. A special kind of memory is needed. There must be ongoing interactions between system 4 and the systems described in 1, 2, and 3, in such a way as to give us a special memory system for values matched to past categories.
  6. We need a set of reentrant connections between the special memory system and the anatomical systems that are dedicated to perceptual categorizations. It is the functioning of these reentrant connections that gives us the sufficient conditions for the appearance of primary consciousness.

So, to summarize, on Edelman’s view, in order to have consciousness the following requirements are both necessary and sufficient: the brain must have systems for categorization; it must also have the kinds of memory Edelman describes as well as a system for learning, where learning necessarily involves values. The brain must be able to make the distinction between the self and the rest of the world, and there must be brain structures that can order events in time. And most important of all, the brain needs global reentry pathways connecting these anatomical structures.

Higher-order consciousness evolves when animals such as ourselves are able not only to feel and perceive but also to symbolize the self-nonself distinction, that is, to have a concept of the self, and this can only come through social interaction. And this, Edelman thinks, eventually leads to the development of syntax and semantics. These involve the ability to symbolize the relations of past, present, and future in a way that enables the animal to make plans free of its immediate present experiences. Higher-order consciousness can only be developed on the basis of primary consciousness.

In this summary I have left out the details of how all of this might be implemented in the actual anatomy of the brain, but Edelman is quite explicit about which brain structures he takes to be performing which functions.

So much for Edelman’s apparatus for consciousness. It is a powerful one, and Edelman spends most of the book developing its implications in detail. There are chapters on memory as recategorization, on space and time, concept formation, value as essential to learning, the development of language and higher-order consciousness, and mental illness, among other subjects. One of the most fascinating of his speculations is how certain mental illnesses such as schizophrenia might result from breakdowns in reentry mechanisms.

What are we to think of this as an account of consciousness? As I said, it is the most thorough and profound attempt that I have seen in the neurobiological literature to deal with the problem of consciousness. Like Crick, whose recent work I discussed in the last issue, Edelman regards much of his theory as speculative, but so much the better. Without theses to test there is no advance in our knowledge. The main difficulty is, however, obvious: so far Edelman has given no reason why a brain that has all these features would thereby have sentience or awareness. Remember, all the features of primary consciousness that I mentioned—perceptual categorization, value, memory, etc.—are supposed to be understood only through specifying their structure and the functions they perform. We are not to think of them as already conscious. The idea is that the whole set of interlocking systems produces consciousness by way of the reentrant mappings. But as so far described, it is possible that a brain could have all these functional, behavioral features, including reentrant mapping, without thereby being conscious.

The problem is the same one we encountered before: How do you get from all these structures and their functions to the qualitative states of sentience or awareness that all of us have, which some philosophers call “qualia”? Our states of awareness when we see the color red or feel warm are qualitatively different from our states of awareness when we see the color black or feel cold. Edelman is well aware of the problem of qualia. His answer to the problem of qualia in The Remembered Present seems to me somewhat different from the one in Bright Air, Brilliant Fire, but neither seems to me to be adequate. In The Remembered Present he says that science cannot tell us why warm feels warm and we should not ask it to. But it seems to me that is exactly what a neuroscience of consciousness should tell us: What anatomical and physiological features of the brain cause us to have consciousness at all, and which features cause which specific forms of conscious states. The perception of the redness of red and the warmth of warm are—among many other things—precisely the conscious states that need explaining.

In Bright Air, Brilliant Fire, Edelman says we cannot solve the problem of qualia because no two people will have the same qualia and there is no way that science, with its generality, can account for these peculiar and specific differences. But this does not seem to me the real difficulty. Everyone also has a different set of fingerprints from everyone else, but this does not prevent us from getting a scientific account of skin. No doubt my pains are a little bit different from yours and perhaps we will never have a complete causal account of how and why they differ. All the same, we still need a scientific account of how exactly pains are caused by brain processes and such an account need not worry about minute differences between one person’s pain and another’s. So the peculiarity of individual experience does not place the subject of individual experience outside the realm of scientific inquiry.

Any explanation of consciousness must account for subjective states of awareness, i.e., conscious states. Edelman’s account is confronted with the following difficulty. Either the brain’s physiological features are supposed to be constitutive of consciousness—i.e., they somehow make up the state of consciousness—or they are supposed to cause consciousness. But clearly they are not constitutive of consciousness because a brain might have all these features and still be totally unconscious. So the relationship must be causal, and that interpretation is supported by Edelman’s talk of necessary and sufficient conditions. But if the brain has physical structures which are supposed to cause consciousness, then we need to be told how they might do so.

How is it supposed to work? Assuming that we understand how the reentrant mechanisms cause the brain to develop categories corresponding to its stimulus inputs, how exactly do the reentrant mechanisms also cause states of awareness? One might argue that any brain sufficiently rich to have all this apparatus in operation would necessarily have to be conscious. But for such a causal hypothesis the same question remains—how does it cause consciousness? And is it really the case that brains that have these mechanisms are conscious and those that do not are not? So the mystery remains. The problem of what accounts for the inner qualitative states of awareness or sentience called qualia is not an aspect of the problem of consciousness that we can set on one side; it is the problem of consciousness, because every conscious state is a qualitative state, and “qualia” is just a misleading name for the consciousness of all conscious states.

Edelman has written two brilliant books, both of them rich in ideas. He discusses with remarkable erudition topics ranging from quantum mechanics, to computer science, to schizophrenia, and often his insights are dazzling. One impressive feature of his theory is its detailed attempt to specify which neuronal structures in the brain are responsible for which functions. Though Edelman differs from Crick on many issues, they share the one basic conviction that drives their research. To understand the mind and consciousness we are going to have to understand in detail how the brain works.

2.

Before discussing Dennett’s Consciousness Explained, I want to ask the reader to perform a small experiment to remind himself or herself of what exactly is at issue in theories of consciousness. Take your right hand and pinch the skin on your left forearm. What exactly happened when you did so? Several different sorts of things happened. First, the neurobiologists tell us that the pressure of your thumb and forefinger set up a sequence of neuron firings that began at the sensory receptors in your skin, went into the spine and up the spine through a region called the tract of Lissauer, and then into the thalamus and other basal regions of the brain. The signal then went to the somato-sensory cortex and perhaps other cortical regions as well. A few hundred milliseconds after you pinched your skin, a second sort of thing happened, one that you know about without professional assistance. You felt a pain. Nothing serious, just a mildly unpleasant pinching sensation in the skin of your forearm.

This unpleasant sensation had a certain particular sort of subjective feel to it, a feel which is accessible to you in a way it is not accessible to others around you. This accessibility has epistemic consequences—you can know about your pain in a way that others cannot—but the subjectivity is ontological rather than epistemic. That is, the mode of existence of the sensation is a first-person or subjective mode of existence whereas the mode of existence of the neural pathways is a third-person or objective mode of existence; the pathways exist independently of being experienced in a way that the pain does not. The feeling of the pain is one of the “qualia” I mentioned earlier.

Furthermore, when you pinched your skin, a third sort of thing happened. You acquired a behavioral disposition you did not previously have. If someone asked you, “Did you feel anything?” you would say something like, “Yes, I felt a mild pinch right here.” No doubt other things happened as well—you altered the gravitational relations between your right hand and the moon, for example—but let us concentrate on these first three.

If you were asked what is the essential thing about the sensation of pain, I think you would say that the second feature, the feeling, is the pain itself. The input signals cause the pain, and the pain in turn causes you to have a behavioral disposition. But the essential thing about the pain is that it is a specific, internal, qualitative feeling. The problem of consciousness in both philosophy and the natural sciences is to explain these subjective feelings. Not all of them are bodily sensations like pain. The stream of conscious thought is not a bodily sensation and neither are visual experiences, yet both have the quality of ontological subjectivity that I have been talking about. The subjective feelings are the data that a theory of consciousness has to explain, and the account of the neural pathways that I sketched is a partial theory to account for the data. The behavioral dispositions are not part of the conscious experience, but they are caused by it.

The peculiarity of Daniel Dennett’s book can now be stated: he denies the existence of the data. He thinks there is no such thing as the second sort of entity, the feeling of pain. He thinks there are no such things as qualia, subjective experiences, first-person phenomena, or any of the rest of it. Dennett agrees that it seems to us that there are such things as qualia, but this is a matter of a mistaken judgment we are making about what really happens. Well, what does really happen according to him?

What really happens, according to Dennett, is that we have stimulus inputs, such as the pressure on your skin in my experiment, and we have dispositions to behavior, “reactive dispositions” as he calls them. And in between there are “discriminative states” that cause us to respond differently to different pressures on the skin, to discriminate red from green, etc., but the sort of state that we have for discriminating pressure is exactly like the state of a machine for detecting pressure. It does not experience any special feeling; indeed, it does not have any inner feelings at all, because there are no such things as “inner feelings.” It is all a matter of third-person phenomena: stimulus inputs, discriminative states, and reactive dispositions. The feature which makes these all hang together is that our brains are a type of computer and consciousness is a certain sort of software, a “virtual machine” in our brain.

The main point of Dennett’s book is to deny the existence of inner mental states and offer an alternative account of consciousness, or rather what he calls “consciousness.” The net effect is a performance of Hamlet without the Prince of Denmark. Dennett, however, does not begin on page one to tell us that he thinks conscious states, as I have described them, do not exist, and that there is nothing there but a brain implementing a computer program. Rather, he spends the first two hundred pages discussing questions which seem to presuppose the existence of subjective conscious states and proposing a methodology for investigating consciousness. He does not, in short, write with the candor of a man who is completely confident of his thesis and anxious to get it out into the open as quickly as he can. On the contrary, there is a certain evasiveness about the early chapters, since he conceals what he really thinks. It is not until after page 200 that you get his account of “consciousness,” and not until well after page 350 that you find out what is really going on.

The main issue in the first part of the book is to defend what he calls the “Multiple Drafts” model of consciousness as opposed to the “Cartesian Theater” model. The idea, says Dennett, is that we are tacitly inclined to think that there must be a single place in the brain where it all comes together, a kind of Cartesian Theater where we witness the play of our consciousness. And in opposition he wants to advance the view that a whole series of information states is going on in the brain, rather like multiple drafts of an article. On the surface, this might appear to be an interesting issue for neurobiology: where in the brain are our subjective experiences localized? Is there a single locus or many? A single locus, by the way, would seem neurobiologically implausible, because any organ in the brain that might seem essential to consciousness—as for example the thalamus is essential according to Crick’s hypothesis—has a twin on the other side of the brain. Each hemisphere has its own thalamus. But that is not what Dennett is driving at. He is attacking the Cartesian Theater not because he thinks subjective states occur all over the brain, but rather because he does not think there are any such things as subjective states at all and he wants to soften up the opposition to his counterintuitive (to put it mildly) views by first getting rid of the idea that there is a unified locus of our conscious experiences.

If Dennett denies the existence of conscious states as we usually think of them, what is his alternative account? Not surprisingly, it is a version of Strong AI. In order to explain it I must first briefly explain four notions that he uses: von Neumann machines, connectionism, virtual machines, and memes. A digital computer, the kind you are likely to buy in a store today, proceeds by a series of steps performed very rapidly, millions per second. This is called a serial computer, and because the initial designs were by John von Neumann, a Hungarian-American scientist and mathematician, it is sometimes called a von Neumann machine. Recently there have been efforts to build machines that operate in parallel, that is, with several computational channels working at once and interacting with each other. In physical structure these are more like human brains; they are not really much like brains, but they are certainly closer to them than the traditional von Neumann machines are. Computations of this type are called, variously, Parallel Distributed Processing, Neural Net Modeling, or simply Connectionism. Strictly speaking, any computation that can be performed on a connectionist structure—or “architecture,” as it is usually called—can also be performed on a serial architecture, but connectionist nets have some other interesting properties; for example, they are faster and they can “learn”—that is, they can change their behavior—by having the strengths of the connections altered.
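
That last idea, learning by altering connection strengths, can be shown in a few lines. The following sketch is illustrative only; the stimuli and the labels attached to them are invented, and nothing in it is drawn from Dennett’s book. A single unit changes its behavior solely by having the strengths of its connections nudged up or down after each stimulus:

```python
# Toy illustration of "learning" as adjustment of connection strengths.

def step(x):
    return 1.0 if x > 0 else 0.0   # the unit either fires or does not

# Inputs (bright, warm) paired with a desired response; invented data.
data = [((1, 1), 1), ((1, 0), 1), ((0, 1), 0), ((0, 0), 0)]
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                # repeated exposure to the same stimuli
    for (x1, x2), target in data:
        out = step(weights[0] * x1 + weights[1] * x2 + bias)
        error = target - out
        weights[0] += rate * error * x1   # strengthen or weaken each
        weights[1] += rate * error * x2   # connection in proportion to
        bias += rate * error              # its share of the error

print(weights, bias)               # the altered strengths now embody the rule
```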

Another notion Dennett uses is that of a “virtual machine.” The actual machine I am now working on is made of actual wires, transistors, etc.; in addition, we can get machines like mine to simulate the structure of another type of machine. The other machine is not actually part of the wiring of this machine but exists entirely in the patterns of regularities that can be imposed on the wiring of my machine. This is called the virtual machine.
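
A toy example may help; the little machine below is my own invention, with a made-up instruction set. It has no existence in the hardware at all; it exists only as a pattern of regularities that the host machine’s program imposes:

```python
# Toy illustration of a "virtual machine": a tiny stack machine that exists
# only as regularities imposed on the host computer by this program.

def run(program):
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])        # put a value on the stack
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)          # replace top two values by their sum
        elif op == "print":
            print(stack[-1])
    return stack

# The host only ever executes its own instructions; the stack machine is
# "virtual," yet its behavior is perfectly real and predictable.
run([("push", 2), ("push", 3), ("add",), ("print",)])   # prints 5
```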

The last notion Dennett uses is that of a “meme.” This notion is not very clear. It was invented by Richard Dawkins to have a cultural analogue to the biological notion of a gene. The idea is that just as biological evolution occurs by way of genes, so cultural evolution occurs through the spread of memes. According to Dawkins’s definition, quoted by Dennett, a meme is

a unit of cultural transmission, or a unit of imitation…. Examples of memes are tunes, ideas, catch-phrases, clothes fashions, ways of making pots or of building arches. Just as genes propagate themselves in the gene pool by leaping from body to body via sperm or eggs, so memes propagate themselves in the meme pool by leaping from brain to brain via a process which, in the broad sense, can be called imitation.

I believe the analogy between gene and “meme” is mistaken. Biological evolution proceeds by brute, blind, natural forces. The spread of ideas and theories through “imitation” is typically a conscious process directed toward a goal. It misses the point of Darwin’s account of the origin of species to lump the two sorts of processes together.

On the basis of these four notions, Dennett offers the following explanation of consciousness:

Human consciousness is itself a huge collection of memes (or more exactly, meme-effects in brains) that can best be understood as the operation of a “von Neumannesque” virtual machine implemented in the parallel architecture of a brain that was not designed for any such activities.

In other words, being conscious is entirely a matter of implementing a certain sort of computer program or programs in a parallel machine that evolved in nature.

It is essential to see that once Dennett has denied the existence of conscious states he does not see any need for additional arguments to get to Strong AI. All of the moves in the conjuring trick have already been made. Strong AI seems to him the only reasonable way to account for a machine that lacks any qualitative, subjective, inner mental contents but behaves in complex ways. The extreme anti-mentalism of his views has been missed by several of Dennett’s critics, who have pointed out that, according to his theory, he cannot distinguish between human beings and unconscious zombies who behave exactly as if they were human beings.

Dennett’s riposte is to say that there could not be any such zombies, that any machine, regardless of what it is made of, that behaved like us would have to have consciousness just as we do. This looks as if he is claiming that sufficiently complex zombies would not be zombies but would have inner conscious states the same as ours; but that is emphatically not the claim he is making. His claim is that in fact we are zombies, that there is no difference between us and machines that lack conscious states in the sense I have explained. The claim is not that the sufficiently complex zombie would suddenly come to conscious life, like Pygmalion’s statue. Rather, Dennett argues that there is no such thing as conscious life, for us, for animals, for zombies, or for anything else. There is only complex zombiehood. In one of his several discussions of zombies, he considers whether there is any difference between human pain and suffering and a zombie’s pain and suffering. This is in a section about pain where the idea is that pain is not the name of a sensation but rather a matter of having one’s plans thwarted and one’s hopes crushed. The idea is that the zombie’s “suffering” is no different from our conscious suffering:

Why should a “zombie’s” crushed hopes matter less than a conscious person’s crushed hopes? There is a trick with mirrors here that should be exposed and discarded. Consciousness, you say, is what matters, but then you cling to doctrines about consciousness that systematically prevent us from getting any purchase on why it matters. Postulating special inner qualities that are not only private and intrinsically valuable, but also unconfirmable and uninvestigatable is just obscurantism.

The rhetorical flourishes here are typical of the book, but to bring the discussion down to earth, ask yourself: When you performed the experiment of pinching yourself, were you “postulating special inner qualities” that are “unconfirmable and uninvestigatable”? Were you being “obscurantist”? And most important, is there no difference at all between you who have pains and an unconscious zombie that behaves like you but has no pains or any other conscious states?

Since Dennett defends a version of Strong AI, it is not surprising that he takes up the Chinese Room Argument (summarized in my previous article), which presents the hypothesis of a man in a room who does not know Chinese but nevertheless is carrying out the steps in a program necessary to give a perfect simulation of a Chinese speaker. This time Dennett’s objection is that a man could not in fact convincingly carry out the steps. The answer to this is to say that, of course, we could not do this in real life. The reason we have thought experiments is that for many of the ideas we wish to test it is impossible to carry out the experiment in reality. In Einstein’s famous discussion of the clock paradox he asks us to imagine that we go to the nearest star in a rocket ship that travels at 90 percent of the speed of light. It really does miss the point totally—though it is quite true—to say that we could not in practice build such a rocket ship.

Similarly it misses the point of the Chinese Room thought experiment to say that we could not in practice design a program complex enough to fool native Chinese speakers but simple enough that an English speaker could carry it out in real time. In fact, we cannot even design programs for commercial computers that can fool an able speaker of any natural language, but that is beside the point. The point of the Chinese room, as I hope I made clear, is to remind us that the syntax of the program is not sufficient to account for the semantic content (or mental content or meaning) in the mind of the Chinese speaker.

Now why does Dennett not face the actual argument as I have stated it? Why does he not tell us which of the three premises he rejects in the Chinese Room Argument I described in the first installment of this article? They are not very complicated and take the following form: (1) programs are syntactical, (2) minds have semantic contents, (3) syntax by itself is not the same as nor sufficient for semantic content. I think the answer is clear. He does not address the actual formal argument because to do so he would have to admit that what he really objects to is premise (2), the claim that minds have mental contents. Given his assumptions, he is forced to deny that minds really do have intrinsic mental contents. Most people who defend Strong AI think that the computer might have mental contents just as we do, and they mistakenly take Dennett as an ally. But he does not think that computers have mental contents, because he does not think there are any such things. For Dennett, we and the computer are both in the same situation as far as the mind is concerned, not because the computer can acquire the sorts of intrinsic mental contents that any normal human has, but because there never were any such things as intrinsic mental contents to start with.
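
Set out schematically, with symbols that are merely shorthand for the premises just stated, the inference is a simple one:

```latex
% Shorthand: Prog = "is a program", Syn = "has syntax only",
% Sem = "has semantic contents", Mind = "is a mind".
% (\nRightarrow requires the amssymb package.)
\[
\frac{(1)\ \mathrm{Prog}\Rightarrow\mathrm{Syn}\qquad
      (2)\ \mathrm{Mind}\Rightarrow\mathrm{Sem}\qquad
      (3)\ \mathrm{Syn}\nRightarrow\mathrm{Sem}}
     {\therefore\ \mathrm{Prog}\nRightarrow\mathrm{Mind}}
\]
```

If implementing a program sufficed for having a mind, then by premise (1) syntax alone would suffice for the semantic contents that premise (2) says minds have, and that is exactly what premise (3) denies.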

Dennett’s book is unique among the several under discussion here in that it makes no contribution to the problem of consciousness but rather denies that there is any such problem in the first place. Dennett, as Kierkegaard said in another connection, keeps the forms while stripping them of their significance. He keeps the vocabulary of consciousness while denying its existence.

But someone might object: Is it not possible that science might discover that Dennett is right, that there really are no such things as inner qualitative mental states, that the whole thing is an illusion, like sunsets, in which the sun appears to move below the western horizon, while in fact it is the earth that is moving? After all, if science can discover that sunsets are a systematic illusion, why could it not also discover that conscious states such as pains are illusions too? There is this difference: In the case of sunsets, science does not deny the existence of the datum, that the sun appears to move through the sky. Rather, it gives an alternative explanation for such data. Science preserves the appearance while giving us a deeper insight into the reality behind the appearance. But Dennett denies the existence of the data to start with.

But couldn’t we disprove the existence of these data by proving that they are only illusions? No, you can’t disprove the existence of conscious experiences by proving that they are only an appearance disguising the underlying reality, because where consciousness is concerned the existence of the appearance is the reality. If it seems to me exactly as if I am having conscious experiences, then I am having conscious experiences. This is not an epistemic point. I might make various sorts of mistakes about my experiences, for example if I suffered from phantom limb pains. But whether reliably reported or not, the experience of feeling the pain is identical with the pain in a way that the experience of seeing a sunset is not identical with a sunset.

I regard Dennett’s denial of the existence of consciousness not as a new discovery or even as a serious possibility but rather as a form of intellectual pathology. The interest of his account lies in figuring out what assumptions could lead an intelligent person to paint himself into such a corner. In Dennett’s case the answers are not hard to find. He tells us that “the idea at its simplest was that since you can never ‘see directly’ into people’s minds, but have to take their word for it, any such facts as there are about mental events are not among the data of science.” And later,

Even if mental events are not among the data of science, this does not mean we cannot study them scientifically…. The challenge is to construct a theory of mental events, using the data that scientific method permits.

Such a theory will have to be constructed from the third-person point of view, since all science is constructed from that perspective.

Scientific objectivity, according to Dennett’s conception, requires “the third-person point of view.” At the end of his book, he combines this view with verificationism—the idea that only things that can be scientifically verified really exist. These two theories lead him to deny that there can exist any phenomena that have a first-person ontology. That is, his denial of the existence of consciousness derives from two premises: scientific verification always takes the third-person point of view, and nothing exists which cannot be verified by scientific verification so construed. This is the deepest mistake in the book, and it is the source of most of the others, so I want to end this discussion by exposing it.

We need to distinguish the epistemic sense of the distinction between the first- and the third-person points of view (i.e., between the subjective and the objective) from the ontological sense. Some statements can be known to be true or false independently of any prejudices or attitudes on the part of observers. They are objective in the epistemic sense. For example, if I say, “Van Gogh died in France,” the statement is epistemically objective. Its truth has nothing to do with anyone’s personal prejudices or preferences. But if I say, for example, “Van Gogh was a better painter than Renoir,” that statement is epistemically subjective. Its truth or falsity is a matter, at least in part, of the attitudes and preferences of observers. In addition to this sense of the objective-subjective distinction, there is an ontological sense. Some entities, mountains for example, have an existence which is objective in the sense that it does not depend on any subject. Others, pain for example, are subjective in that their existence depends on being felt by a subject. They have a first-person or subjective ontology.

Now here is the point. Science does indeed aim at epistemic objectivity. The aim is to get a set of truths that are free of our special preferences and prejudices. But epistemic objectivity of method does not require ontological objectivity of subject matter. It is just an objective fact—in the epistemic sense—that I and people like me have pains. But the mode of existence of these pains is subjective—in the ontological sense. Dennett has a definition of science which excludes the possibility that science might investigate subjectivity, and he thinks the third-person objectivity of science forces him to this definition. But that is a bad pun on “objectivity.” The aim of science is to get a systematic account of how the world works. One part of the world consists of ontologically subjective phenomena. If we have a definition of science that forbids us from investigating that part of the world, it is the definition that has to be changed and not the world.

I do not wish to give the impression that all 511 pages of Dennett’s book consist of repeating the same mistake over and over. On the contrary, he makes many valuable points and is especially good at summarizing much of the current work in neurobiology and cognitive science.

Dennett’s prose, as some reviewers have pointed out, is breezy and sometimes funny, but at crucial points it is imprecise and evasive, as I have tried to explain here. At his worst he tries to bully the reader with abusive language and rhetorical questions, as the above passage about zombies illustrates. A typical move is to describe the opposing view as relying on “ineffable” entities. But there is nothing ineffable about the pain you feel when you pinch yourself.

3.

Israel Rosenfield’s book is the shortest and apparently the most unassuming of the books under review, but it is quite ambitious in its aims. On the surface the book consists mostly of a series of case histories describing various forms of neural damage that people have suffered and the consequences for their mental life and consciousness. Anyone at all familiar with the standard literature of neurology, and particularly the work of Oliver Sacks, which is frequently referred to, will recognize some of the patients. There is the famous case of HM, who, because of the removal of the hippocampus on both sides of his brain, is incapable of transferring new experiences from short-term into long-term memory. There is the case of Madame W, who, because of paralysis, cannot recognize her left hand as her own. There is the sufferer from Korsakov’s syndrome whose memory came to a halt in 1945 and who in the 1980s retains the personality and memories of the young man he used to be over thirty years earlier.

However, Rosenfield wants to put forward his own view of consciousness. He is a former colleague and collaborator of Edelman, and like Edelman he emphasizes the connection between consciousness and memory. Not only is it impossible to have memory without consciousness but equally it is impossible to have anything like a fully developed consciousness without memory. Consciousness arises from the “dynamic interrelations of the past, the present, and the body image.” On the basis of his examination of brain-damaged patients whose reactions are disconnected and otherwise impaired, he goes on to say,

A sense of consciousness comes precisely from the flow of perceptions, from the relations among them (both spatial and temporal), from the dynamic but constant relation to them as governed by one unique personal perspective sustained throughout a conscious life; this dynamic sense of consciousness eludes the neuroscientists’ analyses.

In his view, it is the act of relating the moments of perception, not the moments themselves, that accounts for consciousness. The continuity of consciousness derives from the correspondence which the brain establishes from moment to moment with events in space and time. The vital ingredient in consciousness is self-awareness:

My memory emerges from the relation between my body (more specifically, my bodily sensations at any given moment) and my brain’s “image” of my body (an unconscious activity in which the brain creates a constantly changing generalized idea of the body by relating the changes in bodily sensations from moment to moment). It is this relation that creates a sense of self.

What is Rosenfield driving at? The best reconstruction I can make of the argument is this: when he talks about “consciousness,” he does not mean the fact of sentience as such but rather normal, unified, non-pathological forms of human consciousness. Thus when he says that newborn babies at the moment of birth are “probably not conscious,” I think he can’t mean that literally. What he must mean is that they lack the sort of coherent forms of consciousness that go with memory and a sense of the self. So the book is not a theory of consciousness as such but a theory—derived largely from studies of pathological cases—of the normal healthy consciousness. Rosenfield’s basic idea of “self-reference,” which according to him is a crucial component of consciousness, part of the very structure of consciousness itself, in turn depends on the concept of “the body image.” Neither of these notions is very well explained by Rosenfield, but they still seem to me suggestive, so I will try to clarify them.

One of the most remarkable things about the brain is its capacity to form what neurobiologists call “the body image.” To understand this, remember when I asked you to pinch your left forearm. When you did so, you felt a pain. Now, where exactly does the event of your feeling the pain occur? Common sense and our own experience tell us that it occurs in our forearm exactly in the area of the skin that we have been pinching. But in fact, that is not where it occurs. It occurs in the brain. The brain forms an image of our entire body. And when we feel pains or any other sensations in the body, the actual occurrence of the experience is in the body image in the brain.

That we experience bodily sensations in the body image is most obvious in the case of phantom limbs. In such cases, for example, a patient may continue to feel pain in his toe even after his entire leg has been amputated. It might sound as if phantom limb pains were some extremely peculiar oddity, but in fact, many of us have a version of the phantom limb in the form of sciatic pains. In the case of sciatica, the patient feels a pain in his leg, but what exactly is going on in his leg that corresponds to his pain? Exactly nothing. What happens is that the roots of the sciatic nerve are irritated in the spine, and this triggers neuron firings in his brain which give him the experience of feeling a pain in his leg even though there is nothing going on in the leg itself to cause pain. The discovery of the body image is not new in neuroscience, but it is one of the most exciting discoveries in the history of the field. In a sense all of our bodily sensations are phantom body experiences, because the match between where the sensation seems to be and the actual physical body is entirely created in the brain.

It seems to me Rosenfield wants to use the body image to defend the following thesis: our sense of self is precisely a sense of experiences affecting the body image, and all the experiences involve this sense of self, and hence involve the body image. This is what he calls the “self-reference” of all consciousness. All of our conscious experiences are “self-referential” in the sense that they are related to the experience of the self which is the experience of the body image. The coherence of consciousness through time and space is again related to the experience of the body by way of the body image, and without memory there is no coherent consciousness.

Rosenfield uses clinical evidence very intelligently to try to show how normal consciousness works, by contrasting it with the abnormal cases. Thus Madame I has lost the normal body image. She cannot locate the position of her arms and legs; she is insensitive to pain and is constantly touching herself all over to try to reassure herself that she still exists. Furthermore, she is incapable of normal recall of her experiences, which Rosenfield takes to support the claim that there are no memories without a sense of self. Another example is provided by the patients with Korsakov’s syndrome who cannot remember even events of a few minutes earlier. They lose all sense of time, and with that they lose a coherent sense of the self. According to Rosenfield, they lack the capacity that the rest of us have to understand the ordinary meaning of words. They cannot even mean what we mean by ordinary words such as “teacup” or “clock.”

Similarly, the patient whose arm is paralyzed refuses to recognize the limb as her own: “When her left hand was shown to her, she said, ‘It’s not mine, it’s yours.’ ‘Therefore I have three hands,’ the examining physician said, and Madame W answered, ‘Perhaps.’ ” And just as the physical trauma of paralysis creates the phenomenon of the alien limb, so great psychological trauma creates multiple-personality disorder. In such cases the great psychological pain divides the self, so that it loses an aspect of self-reference. We should not think of these cases, says Rosenfield, as matters of “inhibition” or “repression” but rather as a reorganization of the ways in which the brain responds to stimuli.

On Rosenfield’s view, then, memory must not be understood as a storehouse of information but as a continuing activity of the brain. You see this most obviously in the case of images. When I form an image of some event in my childhood, for example, I don’t go into an archive and find a preexisting image, I have to consciously form an image. A sense of self is essential to memory because all of my memories are precisely mine. What makes them memories is that they are part of the structure that is part of my sense of self. Memory and the self are all tied up together and are especially linked to the body image.

Rosenfield’s book is not an attempt to present a well-worked-out theory of consciousness. Rather, his aim is to make some suggestions about the general nature of consciousness by studying the “deficits,” or distortions, of consciousness that occur in particular pathologies. I believe the most important implication of his book for future research is that we ought to think of the experience of our own body as the central reference point of all forms of consciousness.

I said at the beginning of the first of these two articles that the leading problem in the biological sciences is the problem of explaining exactly how neurobiological processes cause conscious experiences. This is not Rosenfield’s direct concern and none of the books under review provides an adequate answer to that question; but Crick, Edelman, and Penrose, in their quite different ways, are at least on the right track. They are all trying to explain how the physical matter in our head could cause subjective states of sentience or awareness. We have a long way to go, but with the philosophical ground cleared of various confusions such as Strong AI it is at least possible to state clearly what the problem is and to work toward its solution.

(This is the second of two articles.)
