
Making up the Mind

1.

Five years ago the concepts of “mind” and “consciousness” were virtually excluded from scientific discourse. Now they have come back, and every week we see the publication of new books on the subject—Wet Mind by Stephen Kosslyn, Nature’s Mind by Michael Gazzaniga, Consciousness Explained by Daniel Dennett, The Computational Brain by Patricia Churchland and Terry Sejnowski, to mention only a few of the more distinguished. Reading most of this work, we may have a sense of disappointment, even outrage; beneath the enthusiasm about scientific developments, there is a certain thinness, a poverty and unreality compared to what we know of human nature, the complexity and density of the emotions we feel and of the thoughts we have. We read excitedly of the latest chemical, computational, or quantum theory of mind, and then ask, “Is that all there is to it?”

I remember the excitement with which I read Norbert Wiener’s Cybernetics when it came out in the late 1940s. And then, in the early 1950s, reading the work of Wiener’s younger colleagues at MIT—a galaxy of some of the finest minds in America including Warren McCulloch, Walter Pitts, John von Neumann—and learning about their pioneer explorations of logical automata and nerve nets. I thought, as many of us did, that we were on the verge of computer translation, perception, cognition; a brave new world in which ever more powerful computers would be able to mimic, and even take over, the chief functions of brain and mind. The very titles of the MIT papers were exalted and thrilling—“Machines that Think and Want,” “The Genesis of Social Evolution in the Mindlike Behavior of Artifacts.”1

During the 1960s, there was some faltering and questioning: it proved possible to put a man on the moon in this decade but not possible for a computer to achieve a decent translation of a child’s speech, much less a text of any complexity, or to achieve more than the most rudimentary mechanical perception (if indeed “perception” was a legitimate word here). Or was it simply that one needed more computer power, and perhaps different programs or designs? Supercomputers emerged, and, soon, so-called neural networks, which consist not of actual neurons but of computer simulations or models that attempt to mimic the nervous system. Though such networks start with random connections, and learn in a fashion—for example, how to recognize faces or words—they are always instructed what to do, even if they are not instructed how to do it. They are able to recognize in a formal, rule-bound way, not in terms of context and meaning, the way an organism does.
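The sense in which such a network is instructed what to do, but not how, can be suggested by a toy sketch (a minimal perceptron in Python, purely illustrative; the task, numbers, and variable names are invented here, not drawn from any network discussed above). The training loop supplies only the desired answers; the connections, beginning at random values, adjust themselves until those answers come out.

```python
import random

random.seed(0)

# A toy "neural network": a single unit with two input connections.
# It is told WHAT to compute (the target outputs below) but not HOW;
# the weights begin random and are nudged until the answers come out right.
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)

# The task supplied by the instructor: recognize the logical AND of two inputs.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def output(x):
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

# Perceptron learning rule: nudge each weight toward the desired answer.
for _ in range(50):
    for x, target in examples:
        error = target - output(x)
        weights[0] += 0.1 * error * x[0]
        weights[1] += 0.1 * error * x[1]
        bias += 0.1 * error

print([output(x) for x, _ in examples])  # → [0, 0, 0, 1]
```

The recognition achieved is exactly of the formal, rule-bound kind described above: the unit ends by drawing a line through its input space, with no context or meaning anywhere in sight.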

Some of these networks have been developed on the West Coast, under the presiding genius of Francis Crick. And yet Crick himself has expressed fundamental reservations about them—can they, he has asked, really be said to think? Are they, in fact, like minds at all? We must indeed be very cautious before we allow that any artifact is (except in a superficial sense) “mind-like” or “brainlike.”2

Thus if we are to have a model or theory of mind as this actually occurs in living creatures in the world, it may have to be radically different from anything like a computational one. It will have to be grounded in biological reality, in the anatomical and developmental and functional details of the nervous system; and also in the inner life or mental life of the living creature, the play of its sensations and feelings and drives and intentions, its perception of objects and people and situations, and, in higher creatures at least, the ability to think abstractly and to share through language and culture the consciousness of others.

Above all such a theory must account for the development and adaptation peculiar to living systems. Living organisms are born into a world of challenge and novelty, a world of significances, to which they must adapt or die. Living organisms grow, learn, develop, organize knowledge, and use memory in a way that has no analogue in the nonliving. Memory itself is characteristic of life. And memory brings about a change in the organism, so that it is better adapted, better fitted, to meet environmental challenges. The very “self” of the organism is enlarged by memory.

Such a notion of organic change as taking place with experience and learning, and as being an essential change in the structure and “being” of the organism, had no place in the classical theories of memory, which tended to portray it as a thing-in-itself, something deposited in the brain and mind—an impression, a trace, a replica of the original experience, like a photograph. (For Socrates, the mind was soft wax, imprinted with impressions as with a seal or signet ring.) This was certainly the case with Locke and the empiricists, and has its counterpart in many of the current models of memory, which see it as having a definite location in the brain, something like the memory core of a computer.

The neural basis of memory, and of learning generally, the Canadian neuroscientist Donald Hebb hypothesized, lay in a selective strengthening or inhibition of the synapses between nerve cells and the development of groups of cells or “cell-assemblies” embodying the remembered experience. This change, for Hebb, was only a local one, not a change in the brain (or the self) as a whole. At the opposite extreme, his teacher Karl Lashley, who trained rats to do complex tasks after removing various parts of their brains, came to feel that it was impossible to localize memory or learning; that, with remembering and learning, changes took place throughout the entire brain. Thus, for Lashley, memory, and indeed identity, did not have discrete locations in the brain.3 There seemed no possible meeting point between these two views: an atomistic or mosaic view of the brain as parceling memory and perception into small, discrete areas, and a global or “gestalt” view, which saw them as being somehow spread out across the entire brain.
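Hebb’s rule is local in just the sense described: a synapse strengthens when the cells on either side of it are active together. It can be caricatured in a few lines of Python (a schematic sketch only; the numbers and variable names are illustrative, not Hebb’s).

```python
# Schematic Hebbian learning: a single synaptic weight between two cells
# grows only when pre- and postsynaptic activity coincide.
# All values here are illustrative.
learning_rate = 0.1
weight = 0.2  # initial synaptic strength

# Paired activity over successive moments: (presynaptic, postsynaptic), 1 = firing.
activity = [(1, 1), (1, 1), (0, 1), (1, 0), (1, 1)]

for pre, post in activity:
    weight += learning_rate * pre * post  # strengthened only by coincident firing

print(round(weight, 2))  # three coincidences: 0.2 + 3 * 0.1 = 0.5
```

Note that only the three coincident firings change the weight; activity at one cell alone leaves it untouched. This is the locality that set Hebb apart from Lashley’s global view.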

These disparate views of memory and brain function were only part of a more general chaos, a flourishing of many fields and many theories, independently and in isolation, a fragmentation of our approaches to, and views about, the brain. In his newest book, Bright Air, Brilliant Fire, the neuroscientist Gerald Edelman speaks of this fragmentation:

Thus the picture of psychology was a mixed one, behaviorism, gestalt psychology, psychophysics, and memory studies in normal psychology; studies of the neuroses by Freudian analysis; clinical studies of brain lesions and motor and sensory defects…and a growing knowledge both of neuroanatomy and the electrical behavior of nerve cells in physiology…. Only occasionally were serious efforts made…to connect these disparate areas in a general way.

A comprehensive theory of brain function that could make sense of the diverse observations of a dozen different disciplines has been missing, and the enormous but fragmented growth of neuroscience in the last two decades has made the need for such a general theory more and more pressing. This was well expressed in a recent article in Nature, in which Jeffrey Gray spoke of the tendency of neuroscience to gather more and more experimental data, while lacking “a new theory…that will render the relations between brain events and conscious experience ‘transparent.’”4

The needed theory, indeed, must do more: it must account for (or at least be compatible with) all the facts of evolution and neural development and neurophysiology that we know, on the one hand, and on the other all the facts of neurology and psychology, of mental life, that we know. It must be a theory of self-organization and emergent order at every level and scale, from the scurrying of molecules and their micropatterns in a million synaptic clefts to the grand macro-patterns of an actual lived life. Such a theory, Gray feels, “is at present unimaginable.”

But just such a theory has been imagined, and with great force and originality, by Gerald Edelman, who, with his colleagues at the Neurosciences Institute at Rockefeller University over the past fifteen years, has been developing a biological theory of mind, which he calls Neural Darwinism, or the Theory of Neuronal Group Selection (TNGS).

He first presented this in a relatively brief essay written in 1978 (The Mindful Brain, MIT Press). This essay was written, Edelman has said, in a single sitting, during a thirteen-hour wait for a plane in the Milan airport, and it is fascinating to see in this the germ of all his future thought—one gets an intense sense of the evolution occurring in him. Between 1987 and 1990 Edelman published his monumental and sometimes impenetrable trilogy—Neural Darwinism (1987), Topobiology (1988), and The Remembered Present: A Biological Theory of Consciousness (1989), which presented the theory, and a vast range of relevant observations, in a much more elaborate and rigorous form. He now presents the theory more informally, but within a richer historical and philosophical discussion, in his new book Bright Air, Brilliant Fire.

Edelman’s early work dealt not with the nervous system, but with the immune system, by which all vertebrates defend themselves against invading bacteria and viruses. It was previously accepted that the immune system “learned,” or was “instructed,” by means of a single type of antibody which molded itself around the foreign body, or antigen, to produce an appropriate, “tailored” antibody. These molds then multiplied and entered the bloodstream and destroyed the alien organisms. But Edelman showed that a radically different mechanism was at work; that we possess not one basic kind of antibody, but millions of them, an enormous repertoire of antibodies, from which the invading antigen “selects” one that fits. It is such a selection, rather than a direct shaping or instruction, that leads to the multiplication of the appropriate antibody and the destruction of the invader. Such a mechanism, called “clonal selection,” was suggested in 1959 by Macfarlane Burnet, but Edelman was the first to demonstrate that such a “Darwinian” mechanism actually occurs, and for this he shared a Nobel Prize in 1972.
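The contrast between instruction and selection can itself be caricatured in a few lines of Python (a deliberately crude sketch; the “repertoire,” the numerical “shapes,” and the measure of fit are all invented for illustration). Nothing is molded to the antigen; a close fit already exists in the repertoire, and the antigen merely picks it out for multiplication.

```python
import random

random.seed(1)

# A large pre-existing repertoire of antibody "shapes" (numbers stand in for
# molecular configurations; in reality the repertoire runs to millions).
repertoire = [random.uniform(0, 100) for _ in range(10000)]

antigen = 42.0  # the invader's "shape"

# Selection, not instruction: no antibody is shaped by the antigen;
# the antigen merely "selects" the best pre-existing fit...
best = min(repertoire, key=lambda ab: abs(ab - antigen))

# ...and that one clone is then multiplied (clonal expansion).
clones = [best] * 1000

print(abs(best - antigen))  # the closest pre-existing antibody is already a near fit
```

The point of the sketch is only structural: the diversity precedes the encounter, and the encounter acts on the diversity, which is the “Darwinian” shape of the mechanism Edelman demonstrated.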

Edelman then began to study the nervous system, to see whether this too was a selective system, and whether its workings could be understood as evolving, or emerging, by a similar process of selection. Both the immune system and the nervous system can be seen as systems for recognition. The immune system has to recognize all foreign intruders, to categorize them, reliably, as “self” or “not self.” The task of the nervous system is roughly analogous, but far more demanding: it has to classify, to categorize, the whole sensory experience of life, to build from the first categorizations, by degrees, an adequate model of the world; and, in the absence of any specific programming or instruction, to discover or create its own way of doing this. How does an animal come to recognize and deal with the novel situations it confronts? How is such individual development possible?

  1. The heady atmosphere of these days is vividly captured in The Cybernetics Group by Steve J. Heims (MIT Press, 1991), and many of the McCulloch papers were later collected in Embodiments of Mind (MIT Press, 1965).

  2. See Francis Crick, “The Recent Excitement about Neural Networks,” Nature, Vol. 337 (January 12, 1989), pp. 129–132.

  3. Lashley expressed this in a famous paper, “In Search of the Engram,” published shortly before his death, in Symposia of the Society for Experimental Biology, Vol. 4 (1950).

  4. Jeffrey Gray’s article is to be found in Nature, Vol. 358 (July 1992), p. 277, and my own reply to it in Nature, Vol. 358 (August 1992), p. 618.
