
Homunculism

Eric Edelman: An Unanswered Question, 2013 (Eric Edelman/RetroCollage.com)

Here I must say something briefly about the standard language that neuroscience has come to assume in the last fifty or so years (the subject deserves extended treatment). Even in sober neuroscience textbooks we are routinely told that bits of the brain “process information,” “send signals,” and “receive messages”—as if this were as uncontroversial as electrical and chemical processes occurring in the brain. We need to scrutinize such talk with care. Why exactly is it thought that the brain can be described in these ways? It is a collection of biological cells like any bodily organ, much like the liver or the heart, which are not apt to be described in informational terms. It can hardly be claimed that we have observed information transmission in the brain, as we have observed certain chemicals; this is a purely theoretical description of what is going on. So what is the basis for the theory?

The answer must surely be that the brain is causally connected to the mind and the mind contains and processes information. That is, a conscious subject has knowledge, memory, perception, and the power of reason—I have various kinds of information at my disposal. No doubt I have this information because of activity in my brain, but it doesn’t follow that my brain also has such information, still less microscopic bits of it. Why do we say that telephone lines convey information? Not because they are intrinsically informational, but because conscious subjects are at either end of them, exchanging information in the ordinary sense. Without the conscious subjects and their informational states, wires and neurons would not warrant being described in informational terms.

The mistake is to suppose that wires and neurons are homunculi that somehow mimic human subjects in their information-processing powers; instead they are simply the causal background to genuinely informational transactions. The brain considered in itself, independently of the mind, does not process information or send signals or receive messages, any more than the heart does; people do, and the brain is the underlying mechanism that enables them to do so. It is simply false to say that one neuron literally “sends a signal” to another; what it does is engage in certain chemical and electrical activities that are causally connected to genuine informational activities.

Contemporary brain science is thus rife with unwarranted homunculus talk, presented as if it were sober established science. We have discovered that nerve fibers transmit electricity. We have not, in the same way, discovered that they transmit information. We have simply postulated this conclusion by falsely modeling neurons on persons. To put the point a little more formally: states of neurons do not have propositional content in the way states of mind have propositional content. The belief that London is rainy intrinsically and literally contains the propositional content that London is rainy, but no state of neurons contains that content in that way—as opposed to metaphorically or derivatively (this kind of point has been forcibly urged by John Searle for a long time).

And there is theoretical danger in such loose talk, because it fosters the illusion that we understand how the brain can give rise to the mind. One of the central attributes of mind is information (propositional content) and there is a difficult question about how informational states can come to exist in physical organisms. We are deluded if we think we can make progress on this question by attributing informational states to the brain. To be sure, if the brain were to process information, in the full-blooded sense, then it would be apt for producing states like belief; but it is simply not literally true that it processes information. We are accordingly left wondering how electrochemical activity can give rise to genuine informational states like knowledge, memory, and perception. As so often, surreptitious homunculus talk generates an illusion of theoretical understanding.*

Returning to Ray Kurzweil, I must applaud his chapter on consciousness and free will—for its existence, if not for its content. He is at least aware that these are difficult philosophical and scientific problems; he commendably refrains from offering facile “solutions” of the kind beloved by the brain-enamored. But the chapter sits ill with the earlier parts of the book, in which we are confidently assured that the author has a grand theory of the mind, in the form of the PRTM. For consciousness and free will are surely central aspects of the human mind and yet Kurzweil makes no claim (wisely) that they can be reductively explained by means of his 300 million “pattern recognizers” (which don’t, as I have noted, really recognize anything).

To create a mind one needs at a minimum to create consciousness, but Kurzweil doesn’t even attempt to describe a way of doing that. He is content simply to record his conviction (he calls it a “leap of faith”) that if a machine can pass the Turing test we can declare it to be conscious—that is, if it talks like a conscious being it must be a conscious being. But this is not to provide any theory of the mechanism of consciousness—of what it is in the brain that enables an organism to be conscious. Clearly, unconscious processes of so-called “pattern recognition” in the neocortex will not suffice for consciousness, being precisely unconscious. All we really get in this chapter is a ramble over very familiar terrain, with nothing added to what currently exists. Worse, there are some quite execrable remarks about the philosophy of Wittgenstein, which demonstrate zero understanding of his philosophy during the periods of the Tractatus Logico-Philosophicus and the Philosophical Investigations. Kurzweil asks:

What is it that the later Wittgenstein thought was worth thinking and talking about? It was issues such as beauty and love, which he recognized exist imperfectly in the minds of men.

So what are we to make of all the discussion of language and meaning in the Investigations? Kurzweil is way out of his depth here.

The computer engineer gets back to his main field of competence in the penultimate chapter, which restates his earlier published views about the future of information technology. His “futurist” thesis is that computing power doubles every year—information technology improves exponentially, not linearly (he calls this the Law of Accelerating Returns). He boasts that this prediction has been borne out every year since 1890 (the year of the first automated US census), and there does seem to be an empirical basis for it. But is it a law of nature and if so of what kind? What exactly is the reason for it? Technology does not in general improve exponentially, so what is it about information technology that makes this putative law hold? Is it somehow inherent in information itself? That seems hard to understand. Perhaps it is just the way things have contingently been so far, so that the rate of growth may slow down at any minute.
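To see what the doubling claim amounts to in numbers, consider the following illustration of the arithmetic (the notation is mine, not Kurzweil’s: $C_0$ for the starting capacity and $C(n)$ for the capacity after $n$ years). Annual doubling gives

$$C(n) = C_0 \cdot 2^{n},$$

so that a decade of such growth multiplies the starting capacity by $2^{10} = 1024$, whereas linear improvement at the same initial rate, $C(n) = C_0(1 + n)$, yields only $11\,C_0$ over the same ten years. That is the sense in which exponential growth so dramatically outruns linear growth.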

Kurzweil acknowledges that there are physical limits on the “law,” imposed by the structure of the atom and its possible states; it is not that computing power will double every year for all eternity! So the “law” doesn’t seem much like other scientific laws, such as the law of gravity or even the law of supply and demand. What seems to me worth noting is that the growth of information technology does not depend on the nature of the material substrate in which information exists (such as silicon chips), because new substrates keep being invented. Once the information capacity of one medium has been exhausted, engineers come up with a new medium, with even more potential states and yet more tightly packed. But then the “law” depends on a prediction about human ingenuity—that we will keep inventing ever more powerful physical systems for computation.

It is therefore ultimately a psychological law: to the effect that human creativity in the field of information technology improves exponentially. And that doesn’t look like a natural law at all, but just a fortunate historical fact about the twentieth century. Thus Kurzweil’s “law” is more likely to be fortuitous than genuinely law-like: there is no necessity that information technology improves exponentially over (all?) time. It is just an accidental, though interesting, historical fact, not written into the basic workings of the cosmos. As philosophers say, the generalization lacks nomological necessity.

Here then is my overall assessment of this book: interesting in places, fairly readable, moderately informative, but wildly overstated.


* Not all neuroscience employs homuncular language. Many neuroscientists limit themselves to descriptions of electrical and chemical activity in the brain. The recent announcement by the Obama administration of an ambitious project to map the human brain seems commendably free of homunculus mythology. The same can be said for a recent article in the journal Neuron by six scientists recommending such a project. See A. Paul Alivisatos et al., “The Brain Activity Map Project and the Challenge of Functional Connectomics,” Neuron, Vol. 74 (June 21, 2012).
