1.

Consciousness is hard to miss but easy to avoid, theoretically speaking. Nothing could be more present to you than your current state of consciousness—all those vivid sensations, pressing thoughts, indomitable urges. But it has proved only too easy for theorists of mind to turn a blind eye to what gives them a sense of sight to start with. Thus for most of the century consciousness has been comparable to sex in Victorian England: everyone knew it was there, throbbing away, but it was not a fit subject for polite conversation, or candid investigation. With the rise of behaviorism, in both philosophy and psychology, consciousness was deemed the “ghost in the machine,” an ethereal legacy of Cartesianism that could be neither observed nor measured, a purely private realm of no conceivable relevance to objective science.

Neither did neurophysiologists find it necessary to recognize the scientific legitimacy of consciousness: they did just fine by regarding the brain as a wholly physical system, a complex of neurons and their biochemistry. Even the nascent computer-based theories of the mind had no place for consciousness, since computers can perform their information-processing operations without benefit of conscious awareness. Consciousness seemed like a phenomenon it was not necessary to consider, and hence possible to deny—common sense notwithstanding. Other subjects took up the intellectual space that one might have thought would be occupied by consciousness: overt physical behavior, environmental “stimuli,” internal states of the nervous system, abstract computations. In principle, as they have defined “principle,” the sciences of human nature need make no reference to consciousness and suffer no explanatory or predictive inadequacy.

Yet to any sensible person consciousness is the essence of mind: to have a mind precisely is to endure or enjoy conscious states—inner, subjective awareness. Recently consciousness has leaped naked from the closet, streaking across the intellectual landscape. People are conscious—all of them! The deep, dark secret is out. Even animals carry their own distinctive quantum of consciousness, their own inner life. You can almost hear the sigh of relief across the learned world as theorists let loose and openly acknowledge what they have repressed for so long. The Nineties are to consciousness what the Sixties were to sex.

Why this has occurred is somewhat obscure, as intellectual revolutions often are. Post-positivist disenchantment with behavioristic and materialistic reductionism began to grow in the Seventies, abetted by a greater willingness to return to the deep old problems of philosophy. Philosophers became less ready to assume that a recalcitrant philosophical problem could be diagnosed as mere conceptual confusion, as a pseudoquestion. At the same time neuroscientists began trying to build connections from the neural to the mental, acknowledging that the brain is nothing if not the seat of the mind. It was only a matter of time until they faced up to the fact that the brain is also the organ of conscious awareness. Instead of shunning consciousness as prescientific, maybe it could be approached as the holy grail of brain science, from which many a Nobel Prize might be harvested.

But there is a price to pay for all this theoretical liberation: once consciousness is admitted as a real and distinctive phenomenon in the natural world we have to find a place for it in our scheme of things; we have to give an explanation of its nature. How does consciousness fit into the scientific world-picture so laboriously constructed since the seventeenth century? How does it relate to the physical world of atoms, space, and fields of force? How is it that the organ known as the brain contrives to usher consciousness into existence? These are the troubling questions that arise once our state of denial is exposed for what it is.

There was a reason for all that denial: consciousness is threatening. It looks like an anomaly in our conception of the universe, a place where our usual methods of understanding run out of steam. How can the objective sciences of the natural world, dealing with particles and their modes of aggregation, find a place for the subjective phenomenon of consciousness? How do the biological cells of the brain give rise to experiences of seeing red and emotions of despair? Must we suppose that consciousness exists outside the realm accessible to the natural sciences? Is the long-rejected dualism of mind and body the right position after all? Has the ghost come back to haunt the machine? Worse, is the machine really a ghost in disguise? Is our entire conception of the material world suspect in the light of the fact that full-blown consciousness does have its ghostly roots in the nature of matter? Is matter less material than we thought?

All these questions, and others, become pressing once the reality and uniqueness of consciousness is forthrightly acknowledged. I believe myself that the new interest in consciousness represents the next big phase in human thought about the natural world, as large as the determination to understand the physical world that gathered force in the seventeenth century. We are now beginning to face up to the aspect of nature we do not understand. Whether this phase will be crowned with the same success as our efforts to understand matter in motion is not at all clear.

Certainly it is not that consciousness has gained recognition and respect because we have established a consensus about what it is and how it arises. Quite the opposite: discussion of consciousness is marked by divergences of opinion as wide as can be found anywhere. John Searle’s Mind, Language, and Society and Paul and Patricia Churchland’s On the Contrary (an appropriate title given the radical lack of consensus) exemplify this divergence. Searle takes consciousness to be incontrovertibly real and fundamentally irreducible to the familiar terms of brain science—neurons, electrochemical impulses, synaptic gaps. All the neurophysiology in the world, he argues, will not provide an adequate account of the very nature of consciousness, even though neural processes operate as the causal basis of conscious processes. He also takes consciousness to be a central issue in the science and philosophy of mind.

The Churchlands, on the other hand, waver between denying that consciousness exists altogether and claiming that it is completely reducible to their own preferred neurophysiological theory. This difference between Searle and the Churchlands is as great as that which separates flat-earthers from round-earthers or Darwinists from Creationists. For my part, I differ from both points of view—and there are others with yet other doctrines to defend. We are in the uncomfortable position of having admitted a topic into discussion about which we cannot agree, even about the basics.

Searle’s book aims to be a synthesis and summary of his major philosophical preoccupations over the last forty-odd years, centering on the philosophy of mind. After a preliminary discussion, in which Searle defends the Enlightenment vision of our gradually accumulating knowledge of an objective external world, he squares up to his main positive target: consciousness. His position here combines three principal theses: (1) Consciousness consists of inner, qualitative, subjective states and processes, such as experiences of red, thoughts of skiing, feelings of pain. (2) Consciousness cannot be reduced to the “third-person phenomena” investigated by the neurosciences. (3) Consciousness is nevertheless a “biological process,” a higher-order natural feature of the organic brain.

Searle encourages us to reject the traditional Cartesian frame for thinking about these questions, which takes the world to be divided into a physical part and a mental part that are mutually exclusive. This is the root of all our confusion, he thinks, and once it is abandoned we can state the solution to the mind-body problem with gratifying simplicity. After listing his three theses he writes: “But that is it. That is our account of the metaphysical relations between consciousness and the brain. Nowhere do we even raise the questions of dualism and materialism. They have simply become obsolete categories.”

Searle calls this solution to the mind-body problem “biological naturalism” and summarizes it thus: “Consciousness is caused by brain processes and is a higher-order feature of the brain system.” The idea is that the third-person phenomena of the brain—neurons and their activities—operate to cause higher-level subjective processes that have what Searle calls “first-person ontology,” i.e., they exist only insofar as they are experienced by a conscious subject. He writes:

Grant me that consciousness, with all its subjectivity, is caused by processes in the brain, and grant me that conscious states are themselves higher-level features of the brain. Once you have granted these two propositions, there is no metaphysical mind-body problem left. The traditional problem arises only if you accept the vocabulary with its mutually exclusive categories of mental and physical, mind and matter, spirit and flesh.

Of course, consciousness is still special among biological phenomena. Consciousness has a first-person ontology and so cannot be reduced to, or eliminated in favor of, phenomena with a third-person ontology. But that is just a fact about how nature works. It is a fact of neurobiology that certain brain processes cause conscious states and processes. I am urging that we should grant the facts without accepting the metaphysical baggage that traditionally goes along with the facts.

This view has a beguiling simplicity, and there is much about it that seems to me clearly on the right track. Conscious processes are indeed different in kind from standard physical processes in the brain, being defined by what they are like for their subject. They are also biological processes in at least three senses: (1) they characteristically occur in organic systems, unlike computer programs; (2) they must have resulted from the process of natural selection and not from intentional design, unlike CDs and bell-bottoms; (3) they are genetically based rather than learned or acquired, unlike knowledge of history and typing skills. (Searle in fact says almost nothing about what he means by “biological,” but I take it he has in mind some such theses as these.) Further, brain states surely operate to cause conscious states and are preconditions for the existence of conscious states. Finally, conscious states are higher-level properties of the brain in the sense that they do not belong to the primary components of the brain in isolation but somehow result from combining these elements into a complex organ.

The question is: Is that really the solution to the mind-body problem? Let us first note a concession Searle makes. After likening consciousness to other higher-level macrophenomena that depend upon lower-level microphenomena—solidity, liquidity, photosynthesis, digestion—he notes, correctly enough, that these phenomena are wholly explicable in terms of the microprocesses that underlie them. There is nothing more to the liquidity of water, say, than is contained in a description of its constituent molecules and the chemical bonds that obtain between them; liquidity is not something over and above these underlying chemical facts.

But, as Searle acknowledges, indeed urges, consciousness is something over and above the neurophysiological facts that “cause” it; consciousness is not reducible to its underlying causal basis. This is a radical asymmetry between the two kinds of cases, and it implies that the conceptually uncontroversial nature of other higher-level properties cannot be adduced to make us more comfortable with the dependence of consciousness on brain processes. Those other cases are straightforward precisely because they permit reduction: we have no difficulty seeing how molecules and the forces that bind them can give rise to impenetrability in macroscopic objects, for example. In the case of consciousness, by contrast, what we have is an unexplained mode of dependence, and one that is unique in nature—a dependence of subjective facts on objective facts. And the question is: How can this be? Suppose we are told that a visual experience of red causally depends upon X-neurons firing thus and so in the occipital cortex. The following question then cries out for an answer: How could a subjective experience like that owe its very being to the electrochemical activity of mere biological cells? What has a cell got to do with an experience?

Searle says nothing about the concept of “supervenience” in his book, and this is a crucial omission. Supervenience entails that a person’s conscious mental state is wholly determined by his or her physical brain state: if your neurons are firing thus and so, then your consciousness must be internally such and such. Presumably Searle would agree with this rather modest thesis, but it immediately raises the question, in virtue of what does such supervenience hold? What is there about neurons that enables them to determine consciousness in this way? It can hardly be a brute fact. To that central question Searle offers us no answer, and indeed he doesn’t really ever raise the question. But to many of us that is the mind-body problem. I think that what Searle offers as the solution to the problem is really just a statement of it. The problem, precisely, is how it is that the higher-level biological process of consciousness results from lower-level physical properties of neurons. Searle assures us that it is a fact of nature that consciousness is so produced. I agree, but it is a fact that demands some kind of explanation. How can subjective consciousness result from the operations of little gray cells all bunched together into a few pounds of bland-tasting meat?

I can guess what Searle might say to this objection, though he does not say it in his book. He might say that this is a purely scientific question, not a metaphysical or philosophical one. He has done all the philosophical work when he states his main theses; it is now up to empirical science to discover the actual mode of dependence that links consciousness to the brain. But this reply will not do at all. It doesn’t much matter whether we label the problem “scientific” or “philosophical,” the fact remains that it is a profound and unsolved theoretical problem—a problem we have no inkling even of how to set about solving. We have no conception of what it is about neurons, as distinct from (say) cells in the kidneys, that could explain their remarkable ability to generate or constitute an episode of conscious awareness. Neurophysiologists find correlations between brain states and conscious states, but nothing in neurophysiology even begins to explain such correlations; there isn’t even an explanation of why organisms with brains have a capacity for sensation or feeling to begin with.

The reason philosophers are interested in this problem, as opposed to the mechanical problem of how to derive liquidity from water molecules, is precisely that it is a conceptual problem, in the sense that it seems to test our very conception of mind and brain. The concepts of consciousness and the brain seem intrinsically unsuited to permit a smooth explanatory theory that links them—unlike the concepts of liquidity and molecular bonding. There is nothing more to liquidity than molecular bonding, but there is vastly more to consciousness than neural firings—and it is this more that demands explanation.

Consciousness is not even an observable phenomenon! We cannot see a person’s conscious states when we peer into her brain, observing all those gray fissures and biochemical reactions, because conscious states are not the kind of thing that can be so observed; yet the underlying causative brain processes are apparently just ordinary observable physical events. This is the kind of peculiarity that sets consciousness apart from other higher-level phenomena; and it is not a peculiarity that goes away once we assert, however confidently, that consciousness is a higher-level biological phenomenon. My response to such an assertion is, “So what?” That is the beginning of the problem, not the end.

Much the same objection applies to Searle’s treatment of “intentionality.” This technical use of the term has no specific connection with intentions in the ordinary sense. Intentionality is the capacity of the mind to be about things, to have meaning or content, to point beyond itself. There are many kinds of intentionality: my belief that London is dingy is about London; my desire to go skiing is directed toward skiing; my fear of heights has heights as its reference; my sensation of red takes redness as its object. Intentionality is the capacity of the mind to connect with and represent the external world, and hence distinguishes symbolizing animals from those that merely exist in the world. In a broad sense, intentionality is what makes an animal a semantic being, a repository of representational states. It is also what underlies the meaning of spoken languages.

Searle has done as much as anyone to make a case for the importance of intentionality and has said many insightful things about it over the years. But he is a philosopher with an inbuilt resistance to admitting he is stumped (he is by no means alone in this). Many philosophers in recent years have attempted to “naturalize” intentionality, to render it explicable by reducing it to something more familiar—causality, biological function, computational structure. In this way intentionality will emerge as nothing but a special case of something we already have on our list of scientifically acceptable facts.

Searle will have none of this, but he claims to have his own explanation of the nature of intentionality. Instead of offering to reduce intentionality to something else, as other writers do, he declares it irreducible, while nevertheless proposing to explain it in such a way as to render it “biologically natural.” By way of illustration he gives a textbook account of the physiological processes that underlie thirst: a lack of water in the body causes the hypothalamus, via certain biochemical mechanisms, to increase its rate of neuron firing, this in turn causing the animal to feel a conscious desire to drink. Since the desire to drink is an intentional state—it is directed toward the act of drinking—this is held to provide an explanation of one mode of conscious intentionality. And the same kind of story could be told, Searle thinks, about other forms of intentionality—perceptual, cognitive, etc. Thus we render intentionality biologically explicable.

But this is not a genuine naturalistic explanation of intentionality per se; it merely tells us the physiological mechanisms that underlie intentionality. What philosophers interested in intentionality have wanted is some kind of account of what the intentional relation itself consists in—what it is for the mind to be directed onto things outside itself when we are thinking or desiring or perceiving. What is this mysterious relation of “aboutness” that our various mental states exhibit? What is the nature of mental representation?

Searle’s textbook summary gives us no account of this; it merely describes what causes a state that exhibits this kind of intentionality. And this leaves the conceptual problem where it was: How can a brain succeed in giving rise to mental states that represent the external world? What is it about bunches of neurons that makes them into symbols with reference? Kidney cells have no intentionality, so why do brain cells? What relation between cells in my brain and London makes it the case that I am thinking about London? It is not so much that Searle’s proposal is false; it is simply irrelevant to the question. It would be better for him to stick with his claim of the irreducibility of intentionality and not attempt to “explain” it; but then of course there would remain the puzzling question of how intentionality is possible in a physical system. Searle is trying to have it both ways: declare a conceptually perplexing phenomenon irreducible but not incur the charge that he has left something unexplained.

The remainder of Mind, Language, and Society is taken up with a discussion of the meaning of “speech acts,” such as asserting and commanding and questioning, and with a restatement of the view of social facts Searle developed in The Construction of Social Reality.1 The temperature of the book goes down considerably during these chapters. The basic idea is to exploit intentionality, specifically intentions themselves, in the explanation of how symbols get their meaning and how institutions like money come to exist. The intention to treat pieces of paper as having economic value has intentionality, since this intention is about paper and value, and such intentions underlie the capacity of the pieces of paper to have economic value. Thus the institution of money exists because people have intentions that treat certain physical objects in a certain way. Social facts, Searle argues, result from underlying intentional facts. Here he presents a clear discussion of institutions and intentions that does not depend on the view of consciousness I have criticized.

2.

Most of the papers in On the Contrary are written by Paul Churchland and I shall focus on these (though Patricia Churchland appears to share his views). Paul Churchland is best known for his advocacy of the doctrine of “eliminative materialism,” a view maintained in the Sixties by Richard Rorty and Paul Feyerabend and prefigured by J.B. Watson at the turn of the century. Churchland has revived this view, coupling it with attention to the details of contemporary work in neuroscience. The view, put baldly, is that mental states do not exist. We talk as if they do when we use what has come to be called “folk psychology,” the kind of explanation we use to account for human action: we commonsensically refer to beliefs and desires, sensations, thoughts, decisions, and so forth. For example, we might explain a person’s dialing a phone number by attributing to him both a desire to speak to his girlfriend and a belief that dialing that number is a means to satisfying his desire. This is the kind of psychological explanation of human action we use all the time—hence “folk” psychology—and it involves ascribing certain kinds of inner mental states to people.

Folk psychology describes people as beings with desires and beliefs and intentions and feelings. It is common throughout human cultures and periods. It is the way we ordinarily understand each other. But this is all empty talk, according to Churchland’s eliminativism; in reality there are no such things, any more than there are witches or ghosts or spirits in the weather. Accordingly, we should eliminate all such psychological talk as outmoded error and replace it with descriptions of the nervous system and its physical processes. It is highly probable that our inherited folk psychology is a radically false theory, destined to be replaced by a mature neuroscience—just as folk physics has been jettisoned in favor of the scientific physics of Newton and Einstein.

We don’t yet have the replacement theory in full, Churchland acknowledges, and we cannot at present be sure that there are no mental states, but he supposes that elimination is the most likely theoretical development as science progresses. His position is that folk psychology was cobbled together in an earlier, pre-scientific age, as a speculative theory of what causes people’s behavior, and it is high time to examine it critically with a view to finding a more streamlined theory of our inner workings. The familiar conceptions of belief and desire, and the accompanying mental states, are about to go the way of weather gods and fairies—there simply are no such things. It is not merely that folk psychology gives the wrong theory of human desire and belief; Churchland wants to claim it is mistaken to suppose that anyone has any desires and beliefs. It is folk psychology itself that is at fault, not the specific details of this or that psychological explanation. The superior theory that replaces folk psychology will not preserve its ontology—the mental entities it assumes to exist—but will replace this ontology with something radically different. Instead of referring to beliefs and desires it will work with patterns of activation in populations of neurons. Mental states as we now articulate them will go the way of phlogiston, the mythical substance that earlier theorists mistakenly thought to be released when something burns.

According to Churchland, the folk-psychological understanding of people in terms of beliefs and desires is a degenerate theory that suffers from the following defects: (1) it is objectionably partial in its treatment of the mind, providing no explanation of sleep, learning, memory, madness, etc.; (2) it is dogmatically resistant to change over time, having remained roughly the same since before the ancient Greeks; (3) it refuses to be integrated with the developing studies of human nature, such as evolutionary biology, neuroscience, biochemistry. Since the concept of consciousness is at the heart of folk psychology, we can look forward to the day when we no longer speak of it at all, being content to describe what is actually whirring away deep in our neural circuitry. The mind is a myth.

I will forgo the usual expressions of incredulity that are elicited by this doctrine (though I share them) and confine myself to some obvious objections to it. To begin with, the arguments offered in support of eliminativism are remarkably weak. Without undertaking a full-scale criticism of them, we can note the following: First, it is no argument for the falsity of folk psychology that it does not cover everything about the mind; partiality does not entail error. With respect to the second objection, the constancy of folk psychology over time could as well be explained by its obvious truth, not its inherent dogmatism; compare the stability of elementary arithmetic since the ancient Greeks. As for Churchland’s third objection, it is tendentious at best to suppose folk psychology not to be capable of integration with the sciences of human nature, since the standard contemporary model of cognitive science is arguably continuous with the apparatus of folk psychology. The Rutgers philosopher Jerry Fodor, for example, has argued convincingly that the conception of the mind as a symbol-manipulating information processor fits smoothly with the folk-psychological picture of the mind as consisting of a range of “propositional attitudes” like belief and desire.2 So none of these arguments shows that folk psychology is radically on the wrong track about what makes us tick.

Churchland, moreover, severely underplays the first-person aspect of folk psychology. Folk psychology is not just a “speculative theory” we apply to others; it is also the means by which we directly report on our own mental states. And such first-person reporting carries special privileges: my knowledge that I am thinking about philosophy right now is as secure as any knowledge can be—well-nigh incorrigible, as Descartes pointed out. But Churchland holds that this kind of first-person knowledge is no knowledge at all, since I do not have thoughts, according to the eliminativist doctrine: folk psychology for him is a false speculative theory, not the vehicle of incorrigible first-person knowledge. But once we acknowledge the first-person privileges of folk psychology, it becomes inconceivable that we could be simply wrong about having mental states. The simple truth is that evolution equipped us with both mental states and the conceptual apparatus to describe those states in a uniquely privileged way. Hence the well-grounded conviction, contra eliminativism, that we simply know that we have beliefs and desires and all the rest.

One of Churchland’s recurrent themes is that the neural dynamics that underlie what we are pleased to call the mind do not involve symbolic representations of a sentencelike kind. The brain is not to be construed as a device for processing internal sentences that underlie our cognitive capacities. Instead, he writes that the neurons choreograph themselves into “activation vectors,” patterns of activity that do not involve anything that looks like a sentence or proposition. Yet folk psychology insists on describing the mind by using the language of propositions: Sally believes that Clinton will be impeached, Jack hopes that he won’t be. But, Churchland thinks, there is nothing in the brain itself that corresponds to the propositional apparatus of folk psychology, and so folk psychology is trafficking in illusions.

I will make only two points about this, though many more could be made. First, we haven’t been convinced by Churchland or anyone else that folk psychology is not simply providing a description of the brain that abstracts from the details of what the neurons are doing, as a software description of a computer abstracts from its hardware description. After all, from the perspective of basic physics the brain does not have “activation vectors” either, being merely a collection of subatomic particles. Reality comes in levels, and what is invisible at one level might be salient at another. Second, Churchland studiously avoids confronting the question of language processing itself. But surely when we understand speech we must suppose that our mental dynamics involve the manipulation of sentencelike structures, since speech consists of sentences. And if propositional attitudes like beliefs and desires are bound up with language, they too will involve internal sentencelike structures.

It may indeed be true that the representational machinery of the brain can at one level be described without reference to sentencelike symbols, but it does not follow that such symbols play no part in our mental functioning. And once it is admitted that they do, then folk psychology can claim vindication by reference to an established science of the brain. (This is precisely the position of theorists like Jerry Fodor, who subscribe to a “language of thought.”)

Churchland’s insistence on the nonpropositional character of neural representation leads to a strange result, namely that it is more plausible that computers think than that human beings do. He observes that the standard architecture of computers consists in the serial manipulation of sentence-like structures, while the brain (he claims) works by means of parallel nonpropositional neural activations (what are nowadays called connectionist networks). Thus computers display the kind of internal machinery that folk psychology demands, while human brains display a quite different kind of machinery. The result is that eliminativism is more likely to be true of us than of our computers! By Churchland’s skewed lights the computer I am typing on has a greater claim to be a thinker than I do. I take this to be a reductio ad absurdum of his position. He does not notice this consequence of his arguments explicitly, but he hands the reader the materials with which to draw the strange conclusion.

Signs that Churchland does not take his eliminativism quite as seriously as he invites us to take it are evident in a certain inconsistency that pervades the essays in this book. Once I started to notice this inconsistency I became increasingly irritated by it. On the one hand, he preaches the eliminativist gospel, bravely announcing that folk psychology is on its last legs; on the other hand, we find him elsewhere arguing that this or that aspect of folk psychology is reducible to processes in the brain. But you can’t have it both ways: if eliminativism is true, then there is nothing to reduce. Yet here is Churchland blithely arguing that he can explain the notion of sameness of concept within his neural network scheme, that “qualia”—the subjective features of sensory experiences—can be identified with certain patterns in the nervous system, that consciousness itself is a phenomenon we need to know more about.

But none of these claims is available to a consistent eliminativist, any more than a reduction of phlogiston to atomic physics is an option for someone who denies the existence of phlogiston—since if phlogiston is identical to particular existent physical facts, then it must exist after all. Churchland is perfectly aware of the logical tension between reductionism and eliminativism, and indeed is careful to explain the distinction when he is espousing eliminativism. But then he inconsistently slides into reductionism when it suits him to do so.

I can find no explanation of this inconsistency or recognition of it in his text. But I think I understand the psychology behind it: Churchland doesn’t want to be left out of the fun, which is what his professed eliminativism would require. He wants to theorize about the nature of concepts and consciousness, about reasoning and perceptual experience; so he conveniently forgets that according to his official eliminativist position there is nothing to talk about here. I suppose we could surmise that his good sense has triumphed over his theoretical pronouncements, but he owes it to us to qualify his eliminativism, if that is his true position. Otherwise he is playing a duplicitous game.

3.

Searle and Churchland represent opposite ends of the philosophical spectrum. Searle takes our common-sense view of the mind seriously and resists attempts to reduce or eliminate it in favor of a materialistic metaphysics; Churchland regards the very idea that human beings have beliefs and desires as a false theory of how our brains work, soon to be replaced by a better theory that describes us according to neuroscience. Is there any middle ground? My own view is that these two extremes are intelligible—though mistaken—responses to a genuine conceptual and explanatory problem. The problem is how to integrate the conscious mind with the physical brain—how to reveal a unity beneath this apparent diversity. That problem is very hard, and I do not believe anyone has any good ideas about how to solve it.

In view of this gap in our understanding, two kinds of response might be expected: either that there is no unification of mind and brain to be had or that there is no mind to unify with the brain. Thus we get dualistic antireductionism, of which Searle’s position is an example (though I am sure he will not welcome the description). Or we get Churchland’s kind of eliminativism, perhaps inconsistently combined with an attempt to reduce mental phenomena to our current understanding of the brain. My own position is that there is a theory that unifies conscious minds with physical brains, but we do not have any idea what that theory is. In reality there is an underlying unity here, even though we have no understanding of it.

There has to be a natural underlying unity here, for if there is not, we have to postulate miraculous kinds of emergence in the biological world; consciousness cannot just spring into existence from matter like a djinn from a lamp. But our modes of access to consciousness and the brain—by means of introspection and sensory perception, respectively—do not, as a matter of principle, disclose the hidden structure of this indispensable nexus. I know that I am in pain by feeling my pain from the inside, and I can know that my neurons are activated thus and so by using scientific methods to observe that they are; but I have no awareness of the necessary links that bind sensation and brain process together—nor any method for extrapolating to these links. We cannot deduce brain states from our inner awareness of consciousness and we cannot deduce consciousness from our sensory awareness of the brain; so the manner of their association remains elusive to our cognitive faculties.

We can apprehend each side of the great divide between mind and body, but we have no faculty that reveals how they slot seamlessly together. That is the root of our troubles in trying to form a theory of what could connect consciousness to the brain. But it is hardly surprising to find that not every aspect of the natural world comes within the scope of our powers of understanding. We do not expect other evolved species to be omniscient, so why assume that our intelligence has evolved with the capacity to solve every problem that can be raised about the universe of which we are such a small and contingent part? But even if this strong unknowability thesis is mistaken—and I have only sketched my reasons for maintaining it here [3]—we should surely allow for the possibility that our knowledge of the mind and brain is severely limited, thus producing the impression that the association is brute and inexplicable. There may be an explanatory theory of the psychophysical link somewhere in Plato’s heaven; it is just that our minds are miles away from grasping what this theory looks like. So we are apt to flail around in ignorance, going from one implausible extreme to another. If this is right, then antireductionism is wrong as a claim about all possible theories of the brain, and eliminativism is not necessary after all. But one can at least understand why people might be tempted by these unsatisfactory views: both are misguided, though intelligible, reactions to matters about which human beings are still deeply ignorant.

June 10, 1999