
Can We Ever Understand Consciousness?

According to Churchland, the folk-psychological understanding of people in terms of beliefs and desires is a degenerate theory that suffers from the following defects: (1) it is objectionably partial in its treatment of the mind, providing no explanation of sleep, learning, memory, madness, etc.; (2) it is dogmatically resistant to change over time, having remained roughly the same since before the ancient Greeks; (3) it refuses to be integrated with the developing studies of human nature, such as evolutionary biology, neuroscience, biochemistry. Since the concept of consciousness is at the heart of folk psychology, we can look forward to the day when we no longer speak of it at all, being content to describe what is actually whirring away deep in our neural circuitry. The mind is a myth.

I will forgo the usual expressions of incredulity that are elicited by this doctrine (though I share them) and confine myself to some obvious objections to it. To begin with, the arguments offered in support of eliminativism are remarkably weak. Without undertaking a full-scale criticism of them, we can note the following: First, it is no argument for the falsity of folk psychology that it does not cover everything about the mind; partiality does not entail error. With respect to the second objection, the constancy of folk psychology over time could as well be explained by its obvious truth, not its inherent dogmatism; compare the stability of elementary arithmetic since the ancient Greeks. As for Churchland’s third objection, it is tendentious at best to suppose folk psychology not to be capable of integration with the sciences of human nature, since the standard contemporary model of cognitive science is arguably continuous with the apparatus of folk psychology. The Rutgers philosopher Jerry Fodor, for example, has argued convincingly that the conception of the mind as a symbol-manipulating information processor fits smoothly with the folk-psychological picture of the mind as consisting of a range of “propositional attitudes” like belief and desire.2 So none of these arguments shows that folk psychology is radically on the wrong track about what makes us tick.

Churchland, moreover, severely underplays the first-person aspect of folk psychology. Folk psychology is not just a “speculative theory” we apply to others; it is also the means by which we directly report on our own mental states. And such first-person reporting carries special privileges: my knowledge that I am thinking about philosophy right now is as secure as any knowledge can be—well-nigh incorrigible, as Descartes pointed out. But Churchland holds that this kind of first-person knowledge is no knowledge at all, since I do not have thoughts, according to the eliminativist doctrine: folk psychology for him is a false speculative theory, not the vehicle of incorrigible first-person knowledge. But once we acknowledge the first-person privileges of folk psychology, it becomes inconceivable that we could be simply wrong about having mental states. The simple truth is that evolution equipped us with both mental states and the conceptual apparatus to describe those states in a uniquely privileged way. Hence the well-grounded conviction, contra eliminativism, that we simply know that we have beliefs and desires and all the rest.

One of Churchland’s recurrent themes is that the neural dynamics that underlie what we are pleased to call the mind do not involve symbolic representations of a sentencelike kind. The brain is not to be construed as a device for processing internal sentences that underlie our cognitive capacities. Instead, he writes that the neurons choreograph themselves into “activation vectors,” patterns of activity that do not involve anything that looks like a sentence or proposition. Yet folk psychology insists on describing the mind by using the language of propositions: Sally believes that Clinton will be impeached, Jack hopes that he won’t be. But, Churchland thinks, there is nothing in the brain itself that corresponds to the propositional apparatus of folk psychology, and so folk psychology is trafficking in illusions.

I will make only two points about this, though many more could be made. First, we haven’t been convinced by Churchland or anyone else that folk psychology is not simply providing a description of the brain that abstracts from the details of what the neurons are doing, as a software description of a computer abstracts from its hardware description. After all, from the perspective of basic physics the brain does not have “activation vectors” either, being merely a collection of subatomic particles. Reality comes in levels, and what is invisible at one level might be salient at another. Second, Churchland studiously avoids confronting the question of language processing itself. But surely when we understand speech we must suppose that our mental dynamics involve the manipulation of sentencelike structures, since speech consists of sentences. And if propositional attitudes like beliefs and desires are bound up with language, they too will involve internal sentencelike structures.

It may indeed be true that the representational machinery of the brain can at one level be described without reference to sentencelike symbols, but it does not follow that such symbols play no part in our mental functioning. And once it is admitted that they do, then folk psychology can claim vindication by reference to an established science of the brain. (This is precisely the position of theorists like Jerry Fodor, who subscribe to a “language of thought.”)

Churchland’s insistence on the nonpropositional character of neural representation leads to a strange result, namely that it is more plausible that computers think than that human beings do. He observes that the standard architecture of computers consists in the serial manipulation of sentencelike structures, while the brain (he claims) works by means of parallel nonpropositional neural activations (what are nowadays called connectionist networks). Thus computers display the kind of internal machinery that folk psychology demands, while human brains display a quite different kind of machinery. The result is that eliminativism is more likely to be true of us than of our computers! By Churchland’s skewed lights the computer I am typing on has a greater claim to be a thinker than I do. I take this to be a reductio ad absurdum of his position. He does not explicitly notice this consequence of his arguments, but he hands the reader the materials with which to draw the strange conclusion.

Signs that Churchland does not take his eliminativism quite as seriously as he invites us to take it are evident in a certain inconsistency that pervades the essays in this book. Once I started to notice this inconsistency I became increasingly irritated by it. On the one hand, he preaches the eliminativist gospel, bravely announcing that folk psychology is on its last legs; on the other hand, we find him elsewhere arguing that this or that aspect of folk psychology is reducible to processes in the brain. But you can’t have it both ways: if eliminativism is true, then there is nothing to reduce. Yet here is Churchland blithely arguing that he can explain the notion of sameness of concept within his neural network scheme, that “qualia”—the subjective features of sensory experiences—can be identified with certain patterns in the nervous system, that consciousness itself is a phenomenon we need to know more about.

But none of these claims is available to a consistent eliminativist, any more than a reduction of phlogiston to atomic physics is an option for someone who denies the existence of phlogiston—since if phlogiston is identical to particular existent physical facts, then it must exist after all. Churchland is perfectly aware of the logical tension between reductionism and eliminativism, and indeed is careful to explain the distinction when he is espousing eliminativism. But then he inconsistently slides into reductionism when it suits him to do so.

I can find no explanation of this inconsistency or recognition of it in his text. But I think I understand the psychology behind it: Churchland doesn’t want to be left out of the fun, which is what his professed eliminativism would require. He wants to theorize about the nature of concepts and consciousness, about reasoning and perceptual experience; so he conveniently forgets that according to his official eliminativist position there is nothing to talk about here. I suppose we could surmise that his good sense has triumphed over his theoretical pronouncements, but he owes it to us to qualify his eliminativism, if that is his true position. Otherwise he is playing a duplicitous game.


Searle and Churchland represent opposite ends of the philosophical spectrum. Searle takes our common-sense view of the mind seriously and resists attempts to reduce or eliminate it in favor of a materialistic metaphysics; Churchland regards the very idea that human beings have beliefs and desires as a false theory of how our brains work, soon to be replaced by a better theory that describes us according to neuroscience. Is there any middle ground? My own view is that these two extremes are intelligible—though mistaken—responses to a genuine conceptual and explanatory problem. The problem is how to integrate the conscious mind with the physical brain—how to reveal a unity beneath this apparent diversity. That problem is very hard, and I do not believe anyone has any good ideas about how to solve it.

In view of this gap in our understanding, two kinds of response might be expected: either that there is no unification of mind and brain to be had or that there is no mind to unify with the brain. Thus we get dualistic antireductionism, of which Searle’s position is an example (though I am sure he will not welcome the description). Or we get Churchland’s kind of eliminativism, perhaps inconsistently combined with an attempt to reduce mental phenomena to our current understanding of the brain. My own position is that there is a theory that unifies conscious minds with physical brains, but we do not have any idea what that theory is. In reality there is an underlying unity here, even though we have no understanding of it.

There has to be natural underlying unity here, for if there is not, we have to postulate miraculous kinds of emergence in the biological world; consciousness cannot just spring into existence from matter like a djinn from a lamp. But our modes of access to consciousness and the brain—by means of introspection and sensory perception, respectively—do not, as a matter of principle, disclose the hidden structure of this indispensable nexus. I know that I am in pain by feeling my pain from the inside, and I can know that my neurons are activated thus and so by using scientific methods to observe that they are; but I have no awareness of the necessary links that bind sensation and brain process together—nor any method for extrapolating to these links. We cannot deduce brain states from our inner awareness of consciousness and we cannot deduce consciousness from our sensory awareness of the brain; so the manner of their association remains elusive to our cognitive faculties.

We can apprehend each side of the great divide between mind and body, but we have no faculty that reveals how they slot seamlessly together. That is the root of our troubles in trying to form a theory of what could connect consciousness to the brain. But it is hardly surprising to find that not every aspect of the natural world comes within the scope of our powers of understanding. We do not expect other evolved species to be omniscient, so why assume that our intelligence has evolved with the capacity to solve every problem that can be raised about the universe of which we are such a small and contingent part? But even if this strong unknowability thesis is mistaken—and I have only sketched my reasons for maintaining it here3—we should surely allow for the possibility that our knowledge of the mind and brain is severely limited, thus producing the impression that the association is brute and inexplicable. There may be an explanatory theory of the psychophysical link somewhere in Plato’s heaven; it is just that our minds are miles away from grasping what this theory looks like. So we are apt to flail around in ignorance, going from one implausible extreme to another. If this is right, then anti-reductionism is wrong as a claim about all possible theories of the brain, and eliminativism is not necessary after all. But one can at least understand why people might be tempted by these unsatisfactory views: both are misguided, though intelligible, reactions to matters about which human beings are still deeply ignorant.

  2. See Jerry Fodor, The Language of Thought (Crowell, 1975) and A Theory of Content and Other Essays (MIT Press, 1990).

  3. I discuss this view further in The Problem of Consciousness (Basil Blackwell, 1991), Problems in Philosophy (Basil Blackwell, 1993), and The Mysterious Flame (Basic Books, 1999).
