Five years ago the concepts of “mind” and “consciousness” were virtually excluded from scientific discourse. Now they have come back, and every week we see the publication of new books on the subject—Wet Mind by Stephen Kosslyn, Nature’s Mind by Michael Gazzaniga, Consciousness Explained by Daniel Dennett, The Computational Brain by Patricia Churchland and Terry Sejnowski, to mention only a few of the more distinguished. Reading most of this work, we may have a sense of disappointment, even outrage; beneath the enthusiasm about scientific developments, there is a certain thinness, a poverty and unreality compared to what we know of human nature, the complexity and density of the emotions we feel and of the thoughts we have. We read excitedly of the latest chemical, computational, or quantum theory of mind, and then ask, “Is that all there is to it?”
I remember the excitement with which I read Norbert Wiener’s Cybernetics when it came out in the late 1940s. And then, in the early 1950s, reading the work of Wiener’s younger colleagues at MIT—a galaxy of some of the finest minds in America including Warren McCulloch, Walter Pitts, John von Neumann—and learning about their pioneer explorations of logical automata and nerve nets. I thought, as many of us did, that we were on the verge of computer translation, perception, cognition; a brave new world in which ever more powerful computers would be able to mimic, and even take over, the chief functions of brain and mind. The very titles of the MIT papers were exalted and thrilling—“Machines that Think and Want,” “The Genesis of Social Evolution in the Mindlike Behavior of Artifacts.”1
During the 1960s, there was some faltering and questioning: it proved possible to put a man on the moon in this decade but not possible for a computer to achieve a decent translation of a child’s speech, much less a text of any complexity, or to achieve more than the most rudimentary mechanical perception (if indeed “perception” was a legitimate word here). Or was it simply that one needed more computer power, and perhaps different programs or designs? Supercomputers emerged, and, soon, so-called neural networks, which do not consist of actual neurons but computer simulations or models that attempt to mimic the nervous system. Though such networks start with random connections, and learn in a fashion—for example, how to recognize faces or words—they are always instructed what to do, even if they are not instructed how to do it. They are able to recognize in a formal, rule-bound way, not in terms of context and meaning, the way an organism does.
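The kind of “neural network” described here can be sketched in a few lines. The single-unit perceptron below is a generic toy, not any particular system of the period: its connections start random, and it is instructed what to do (correct labels are supplied) but not how to do it (the weights organize themselves during training).

```python
import random

def train_perceptron(examples, epochs=100, lr=0.1, seed=0):
    """Train a single-unit perceptron on (inputs, label) pairs."""
    rng = random.Random(seed)
    n = len(examples[0][0])
    weights = [rng.uniform(-1, 1) for _ in range(n)]  # random initial connections
    bias = rng.uniform(-1, 1)
    for _ in range(epochs):
        for inputs, label in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            output = 1 if activation > 0 else 0
            error = label - output                    # the external "instruction"
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

def predict(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

# The rule learned (here, logical OR) is fixed entirely by the training
# labels: recognition is formal and rule-bound, not contextual.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

Whatever regularity the labels encode is what the network comes to embody; there is nothing in the procedure corresponding to context or meaning.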
Some of these networks have been developed on the West Coast, under the presiding genius of Francis Crick. And yet Crick himself has expressed fundamental reservations about them—can they, he has asked, really be said to think? Are they, in fact, like minds at all? We must indeed be very cautious before we allow that any artifact is (except in a superficial sense) “mindlike” or “brainlike.”2
Thus if we are to have a model or theory of mind as this actually occurs in living creatures in the world, it may have to be radically different from anything like a computational one. It will have to be grounded in biological reality, in the anatomical and developmental and functional details of the nervous system; and also in the inner life or mental life of the living creature, the play of its sensations and feelings and drives and intentions, its perception of objects and people and situations, and, in higher creatures at least, the ability to think abstractly and to share through language and culture the consciousness of others.
Above all such a theory must account for the development and adaptation peculiar to living systems. Living organisms are born into a world of challenge and novelty, a world of significances, to which they must adapt or die. Living organisms grow, learn, develop, organize knowledge, and use memory in a way that has no analogue in the nonliving. Memory itself is characteristic of life. And memory brings about a change in the organism, so that it is better adapted, better fitted, to meet environmental challenges. The very “self” of the organism is enlarged by memory.
Such a notion of organic change as taking place with experience and learning, and as being an essential change in the structure and “being” of the organism, had no place in the classical theories of memory, which tended to portray it as a thing-in-itself, something deposited in the brain and mind—an impression, a trace, a replica of the original experience, like a photograph. (For Socrates, the mind was soft wax, imprinted with impressions as with a seal or signet ring.) This was certainly the case with Locke and the empiricists, and has its counterpart in many of the current models of memory, which see it as having a definite location in the brain, something like the memory core of a computer.
The neural basis of memory, and of learning generally, the Canadian neuroscientist Donald Hebb hypothesized, lay in a selective strengthening or inhibition of the synapses between nerve cells and the development of groups of cells or “cell-assemblies” embodying the remembered experience. This change, for Hebb, was only a local one, not a change in the brain (or the self) as a whole. At the opposite extreme, his teacher Karl Lashley, who trained rats to do complex tasks after removing various parts of their brains, came to feel that it was impossible to localize memory or learning; that, with remembering and learning, changes took place throughout the entire brain. Thus, for Lashley, memory, and indeed identity, did not have discrete locations in the brain.3 There seemed no possible meeting point between these two views: an atomistic or mosaic view of the brain as parceling memory and perception into small, discrete areas, and a global or “gestalt” view, which saw them as being somehow spread out across the entire brain.
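Hebb’s hypothesis of selective synaptic strengthening is usually abstracted into a simple local update rule. The sketch below uses the standard textbook form (the weight change is proportional to joint pre- and post-synaptic activity), which is a modern simplification rather than Hebb’s own formulation:

```python
# Toy illustration of Hebb's hypothesis: a synapse is strengthened when
# the pre- and post-synaptic cells are active together. The rule
# (delta_w = rate * pre * post) is a textbook abstraction, not Hebb's own.

def hebbian_update(weight, pre, post, rate=0.5):
    """Strengthen the connection in proportion to joint activity."""
    return weight + rate * pre * post

# Repeated co-activation strengthens the synapse; activity in only one
# cell leaves it unchanged -- a purely local change, as Hebb supposed.
w = 0.1
for _ in range(3):
    w = hebbian_update(w, pre=1.0, post=1.0)   # cells fire together
w_idle = hebbian_update(w, pre=1.0, post=0.0)  # only one cell fires
```

The locality of the rule is the point of contrast with Lashley: each update touches one connection, not the brain as a whole.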
These disparate views of memory and brain function were only part of a more general chaos, a flourishing of many fields and many theories, independently and in isolation, a fragmentation of our approaches to, and views about, the brain. In his newest book, Bright Air, Brilliant Fire, the neuroscientist Gerald Edelman speaks of this fragmentation:
Thus the picture of psychology was a mixed one, behaviorism, gestalt psychology, psychophysics, and memory studies in normal psychology; studies of the neuroses by Freudian analysis; clinical studies of brain lesions and motor and sensory defects…and a growing knowledge both of neuroanatomy and the electrical behavior of nerve cells in physiology…. Only occasionally were serious efforts made…to connect these disparate areas in a general way.
A comprehensive theory of brain function that could make sense of the diverse observations of a dozen different disciplines has been missing, and the enormous but fragmented growth of neuroscience in the last two decades has made the need for such a general theory more and more pressing. This was well expressed in a recent article in Nature, in which Jeffrey Gray spoke of the tendency of neuroscience to gather more and more experimental data, while lacking “a new theory…that will render the relations between brain events and conscious experience ‘transparent.’”4
The needed theory, indeed, must do more: it must account for (or at least be compatible with) all the facts of evolution and neural development and neurophysiology that we know, on the one hand, and on the other all the facts of neurology and psychology, of mental life, that we know. It must be a theory of self-organization and emergent order at every level and scale, from the scurrying of molecules and their micropatterns in a million synaptic clefts to the grand macro-patterns of an actual lived life. Such a theory, Gray feels, “is at present unimaginable.”
But just such a theory has been imagined, and with great force and originality, by Gerald Edelman, who, with his colleagues at the Neurosciences Institute at Rockefeller University over the past fifteen years, has been developing a biological theory of mind, which he calls Neural Darwinism, or the Theory of Neuronal Group Selection (TNGS).
He first presented this in a relatively brief essay written in 1978 (The Mindful Brain, MIT Press). This essay was written, Edelman has said, in a single sitting, during a thirteen-hour wait for a plane in the Milan airport, and it is fascinating to see in this the germ of all his future thought—one gets an intense sense of the evolution occurring in him. Between 1987 and 1990 Edelman published his monumental and sometimes impenetrable trilogy—Neural Darwinism (1987), Topobiology (1988), and The Remembered Present: A Biological Theory of Consciousness (1989), which presented the theory, and a vast range of relevant observations, in a much more elaborate and rigorous form. He now presents the theory more informally, but within a richer historical and philosophical discussion, in his new book Bright Air, Brilliant Fire.
Edelman’s early work dealt not with the nervous system, but with the immune system, by which all vertebrates defend themselves against invading bacteria and viruses. It was previously accepted that the immune system “learned,” or was “instructed,” by means of a single type of antibody which molded itself around the foreign body, or antigen, to produce an appropriate, “tailored” antibody. These molds then multiplied and entered the bloodstream and destroyed the alien organisms. But Edelman showed that a radically different mechanism was at work; that we possess not one basic kind of antibody, but millions of them, an enormous repertoire of antibodies, from which the invading antigen “selects” one that fits. It is such a selection, rather than a direct shaping or instruction, that leads to the multiplication of the appropriate antibody and the destruction of the invader. Such a mechanism, which he called a “clonal selection,” was suggested in 1959 by Macfarlane Burnet, but Edelman was the first to demonstrate that such a “Darwinian” mechanism actually occurs, and for this he shared a Nobel Prize in 1972.
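The selectional (as opposed to instructional) logic can be caricatured in code. The bit-string “antibodies” and the complementarity measure below are my own simplifications, meant only to show selection from a pre-existing repertoire followed by clonal multiplication:

```python
import random

# Caricature of clonal selection: a large, pre-formed repertoire already
# exists; the antigen does not mold an antibody but merely SELECTS the
# best-fitting one, which is then multiplied. Bit-strings and the "fit"
# measure are illustrative inventions, not immunological models.

def make_repertoire(n, length=16, seed=1):
    rng = random.Random(seed)
    return [tuple(rng.randint(0, 1) for _ in range(length)) for _ in range(n)]

def fit(antibody, antigen):
    """Fit = number of complementary (differing) bits."""
    return sum(a != g for a, g in zip(antibody, antigen))

def clonal_selection(repertoire, antigen, clones=100):
    best = max(repertoire, key=lambda ab: fit(ab, antigen))  # antigen "selects"
    return [best] * clones                                    # selective multiplication

antigen = tuple([0, 1] * 8)
repertoire = make_repertoire(1000)
clone_pool = clonal_selection(repertoire, antigen)
```

Nothing is shaped by the antigen; diversity comes first, and the encounter only amplifies what already fits best.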
Edelman then began to study the nervous system, to see whether this too was a selective system, and whether its workings could be understood as evolving, or emerging, by a similar process of selection. Both the immune system and the nervous system can be seen as systems for recognition. The immune system has to recognize all foreign intruders, to categorize them, reliably, as “self” or “not self.” The task of the nervous system is roughly analogous, but far more demanding: it has to classify, to categorize, the whole sensory experience of life, to build from the first categorizations, by degrees, an adequate model of the world; and in the absence of any specific programming or instruction to discover or create its own way of doing this. How does an animal come to recognize and deal with the novel situations it confronts? How is such individual development possible?
The answer, Edelman proposes, is that an evolutionary process takes place—not one that selects organisms and takes millions of years, but one that occurs within each particular organism and during its lifetime, by competition among cells, or selection of cells (or, rather, cell groups) in the brain. This for Edelman is “somatic selection.”
Edelman and his colleagues have been concerned not only to propose a principle of selection but to explore the mechanisms by which it may take place. Thus they have tried to answer three kinds of questions: Which units in the nervous system select and give different emphasis to sensory experience? How does selection occur? What is the relation of the selecting mechanisms to such functions of brain and mind as perception, categorization, and, finally, consciousness?
Edelman discusses two kinds of selection in the evolution of the nervous system—“developmental” and “experiential.” The first takes place largely before birth. The genetic instructions in each organism provide general constraints for neural development, but they cannot specify the exact destination of each developing nerve cell—for these grow and die, migrate in great numbers and in entirely unpredictable ways: all of them are “gypsies,” as Edelman likes to say. The vicissitudes of fetal development themselves produce in every brain unique patterns of neurons and neuronal groups. Even identical twins with identical genes will not have identical brains at birth: the fine details of cortical circuitry will be quite different. Such variability, Edelman points out, would be a catastrophe in virtually any mechanical or computational system, where exactness and reproducibility are of the essence. But in a system in which selection is central, the consequences are entirely different; here variation and diversity are themselves of the essence.
Now, already possessing a unique and individual pattern of neuronal groups through developmental selection, the creature is born, thrown into the world, there to be exposed to a new form of selection which forms the basis of experience. What is the world of a newborn infant (or chimp) like? Is it a sudden incomprehensible (perhaps terrifying) explosion of electromagnetic radiations, sound waves, and chemical stimuli which make the infant cry and sneeze? Or an ordered, intelligible world, in which the infant discerns people, objects, meanings, and smiles? We know that the world encountered is not one of complete meaninglessness and pandemonium, for the infant shows selective attention and preferences from the start.
Clearly there are some innate biases or dispositions at work; otherwise the infant would have no tendencies whatever, would not be moved to do anything, seek anything, to stay alive. These basic biases Edelman calls “values.” Such values are essential for adaptation and survival; some have been developed through eons of evolution; and some are acquired through exploration and experience. Thus if the infant instinctively values food, warmth, and contact with other people (for example), this will direct its first movements and strivings. These “values”—drives, instincts, intentionalities—serve to differentially weight experience, to orient the organism toward survival and adaptation, to allow what Edelman calls “categorization on value,” e.g. to form categories such as “edible” and “nonedible” as part of the process of getting food. It needs to be stressed that “values” are experienced, internally, as feelings—without feeling there can be no animal life. “Thus,” in the words of the late philosopher Hans Jonas, “the capacity for feeling, which arose in all organisms, is the mother-value of all values.”
At a more elementary physiological level, there are various sensory and motor “givens,” from the reflexes that automatically occur (for example, in response to pain) to innate mechanisms in the brain, as, for example, the feature detectors in the visual cortex which, as soon as they are activated, detect verticals, horizontals, boundaries, angles, etc., in the visual world.
Thus we have a certain amount of basic equipment; but, in Edelman’s view, very little else is programmed or built in. It is up to the infant animal, given its elementary physiological capacities, and given its inborn values, to create its own categories and to use them to make sense of, to construct, a world—and it is not just a world that the infant constructs, but its own world, a world constituted from the first by personal meaning and reference.
Such a neuro-evolutionary view is highly consistent with some of the conclusions of psychoanalysis and developmental psychology—in particular, the psychoanalyst Daniel Stern’s description of “an emergent self.” “Infants seek sensory stimulation,” writes Stern. “They have distinct biases or preferences with regard to the sensations they seek…. These are innate. From birth on, there appears to be a central tendency to form and test hypotheses about what is occurring in the world…[to] categorize…into conforming and contrasting patterns, events, sets, and experiences.”5 Stern emphasizes how crucial are the active processes of connecting, correlating, and categorizing information, and how with these a distinctive organization emerges, which is experienced by the infant as the sense of a self.
It is precisely such processes that Edelman is concerned with. He sees them as grounded in a process of selection acting upon the primary neuronal units with which each of us is equipped. These units are not individual nerve cells or neurons, but groups ranging in size from about fifty to ten thousand neurons; there are perhaps a hundred million such groups in the entire brain. During the development of the fetus, a unique neuronal pattern of connections is created, and then in the infant experience acts upon this pattern, modifying it by selectively strengthening or weakening connections between neuronal groups, or creating entirely new connections.
Thus experience itself is not passive, a matter of “impressions” or “sense-data,” but active, and constructed by the organism from the start. Active experience “selects,” or carves out, a new, more complexly connected pattern of neuronal groups, a neuronal reflection of the individual experience of the child, of the procedures by which it has come to categorize reality.
But these neuronal circuits are still at a low level—how do they connect with the inner life, the mind, the behavior of the creature? It is at this point that Edelman introduces the most radical of his concepts—the concepts of “maps” and “reentrant signaling.” A “map,” as he uses the term, is not a representation in the ordinary sense, but an interconnected series of neuronal groups that responds selectively to certain elemental categories—for example, to movements or colors in the visual world. The creation of maps, Edelman postulates, involves the synchronization of hundreds of neuronal groups. Some mappings, some categorizations, take place in discrete and anatomically fixed (or “prededicated”) parts of the cerebral cortex—thus color is “constructed” in an area called V4. The visual system alone, for example, has over thirty different maps for representing color, movement, shape, etc.
But where perception of objects is concerned, the world, Edelman likes to say, is not “labeled,” it does not come “already parsed into objects.” We must make them, in effect, through our own categorizations: “Perception makes,” Emerson said. “Every perception,” says Edelman, echoing Emerson, “is an act of creation.” Thus, our sense organs, as we move about, take samplings of the world, creating maps in the brain. Then a sort of neurological “survival of the fittest” occurs, a selective strengthening of those mappings which correspond to “successful” perceptions—successful in that they prove the most useful and powerful for the building of “reality.”
In this view, there are no innate mechanisms for complex “personal” recognition, such as the “grandmother cell” postulated by researchers in the 1970s to correspond to one’s perception of one’s grandmother.6 Nor is there any “master area,” or “final common path,” whereby all perceptions relating (say) to one’s grandmother converge in one single place. There is no such place in the brain where a final image is synthesized, nor any miniature person or homunculus to view this image. Such images or representations do not exist in Edelman’s theory, nor do any such homunculi. (Classical theory, with its concept of “images” or “representations” in the brain, demanded a sort of dualism—for there had to be a miniature “someone in the brain” to view the images; and then another, still smaller, someone in the brain of that someone; and so on, in an infinite regress. There is no way of escaping from this regress, except by eliminating the very concept of images and viewers, and replacing it by a dynamic concept of process or interaction.)
Rather, the perception of a grandmother or, say, of a chair depends on the synchronization of a number of scattered mappings throughout the visual cortex—mappings relating to many different perceptual aspects of the chair (its size, its shape, its color, its “leggedness,” its relation to other sorts of chairs—armchairs, kneeling chairs, baby chairs, etc.). In this way the brain, the creature, achieves a rich and flexible percept of “chairhood,” which allows the recognition of innumerable sorts of chairs as chairs (computers, by contrast, with their need for unambiguous definitions and criteria, are quite unable to achieve this). This perceptual generalization is dynamic and not static, and depends on the active and incessant orchestration of countless details. Such a correlation is possible because of the very rich connections between the brain’s maps—connections which are reciprocal, and may contain millions of fibers.
These extensive connections allow what Edelman calls “reentrant signaling,” a continuous “communication” between the active maps themselves, which enables a coherent construct such as “chair” to be made. This construct arises from the interaction of many sources. Stimuli from, say, touching a chair may affect one set of maps, stimuli from seeing it may affect another set. Reentrant signaling takes place between the two sets of maps—and between many other maps as well—as part of the process of perceiving a chair.
This construct, it must be emphasized once again, is not comparable to a single image or representation—it is, rather, comparable to a giant and continually modulating equation, as the outputs of innumerable maps, connected by reentry, not only complement one another at a perceptual level but are built up to higher and higher levels. For the brain, in Edelman’s vision, makes maps of its own maps, or “categorizes its own categorizations,” and does so by a process which can ascend indefinitely to yield ever more generalized pictures of the world.
This reentrant signaling is different from the process of “feedback,” which merely corrects errors.7 Simple feedback loops are not only common in the technological world (as thermostats, governors, cruise controls, etc.) but are crucial in the nervous system, where they are used for control of all the body’s automatic functions, from temperature to blood pressure to the fine control of movement. (This concept of feedback is at the heart of both Wiener’s cybernetics and Claude Bernard’s concept of homeostasis.) But at higher levels, where flexibility and individuality are all-important, and where new powers and new functions are needed and created, one requires a mechanism that can construct, not just control or correct.
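The feedback side of this contrast can be made concrete. A thermostat’s loop, sketched below with invented constants, only measures an error against a fixed set point and corrects it; it regulates, but constructs nothing new:

```python
# Minimal sketch of an error-correcting feedback loop of the kind the
# passage contrasts with reentrant signaling: a thermostat compares the
# measured temperature with a fixed set point and feeds the error back
# as a correction. All constants here are invented for illustration.

def thermostat_step(temperature, set_point, gain=0.5, ambient_drift=-1.0):
    """One control cycle: ambient drift, then a correction proportional to error."""
    temperature += ambient_drift            # room cools toward the outside
    error = set_point - temperature         # feedback: compare to set point
    return temperature + gain * error       # corrective heating

temp = 15.0
for _ in range(30):
    temp = thermostat_step(temp, set_point=20.0)
```

The loop settles to a steady value near the set point and then simply holds it; there is no mechanism here by which a new category, map, or function could arise.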
The process of reentrant signaling, with its scores—perhaps hundreds—of reciprocal connections within and between maps, may be likened to a sort of neural United Nations, in which dozens of voices are talking together, while including in their conversation a variety of constantly inflowing reports from the outside world, and giving them coherence, bringing them together into a larger picture as new information is correlated and new insights emerge. There is, to continue the metaphor, no secretary general in the brain; the activity of reentrant signaling itself achieves the synthesis. How is this possible?
Edelman, who himself once planned to be a concert violinist, uses musical metaphors here. “Think,” he said in a recent BBC radio broadcast,
if you had a hundred thousand wires randomly connecting four string quartet players and that, even though they weren’t speaking words, signals were going back and forth in all kinds of hidden ways [as you usually get them by the subtle nonverbal interactions between the players] that make the whole set of sounds a unified ensemble. That’s how the maps of the brain work by re-entry.
The players are connected. Each player, interpreting the music individually, constantly modulates and is modulated by the others. There is no final or “master” interpretation—the music is collectively created. This, then, is Edelman’s picture of the brain, an orchestra, an ensemble—but without a conductor, an orchestra which makes its own music.
The construction of perceptual categorizations and maps, the capacity for generalization made possible by re-entrant signaling, is the beginning of psychic development, and far precedes the development of consciousness or mind, or of attention or concept formation—yet it is a prerequisite for all of these; it is the beginning of an enormous upward path, and it can achieve remarkable power even in relatively primitive animals like birds.8 Perceptual categorization, whether of colors, movements, or shapes, is the first step, and it is crucial for learning, but it is not something fixed, something that occurs once and for all. On the contrary—and this is central to the dynamic picture presented by Edelman—there is then a continual re-categorization, and this itself constitutes memory.
“In computers,” Edelman writes, “memory depends on the specification and storage of bits of coded information.” This is not the case in the nervous system. Memory in living organisms by contrast takes place through activity and continual recategorization.
By its nature, memory…involves continual motor activity…in different contexts. Because of the new associations arising in these contexts, because of changing inputs and stimuli, and because different combinations of neuronal groups can give rise to a similar output, a given categorical response in memory may be achieved in several ways. Unlike computer-based memory, brain-based memory is inexact, but it is also capable of great degrees of generalization.
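The point that different combinations of neuronal groups can give rise to a similar output (what Edelman elsewhere calls degeneracy) can be illustrated with a toy categorizer; the group names and coalitions below are invented for illustration, not drawn from his models:

```python
# Toy illustration of degeneracy: several different coalitions of
# (simulated) neuronal groups yield the same categorical response, so a
# "memory" need not replay one fixed stored trace. Names are invented.

def categorize(active_groups):
    """Respond True ("edible") if any sufficient coalition of groups is active."""
    coalitions = [{"smell", "taste"}, {"sight", "memory_of_taste"}]
    return any(c <= set(active_groups) for c in coalitions)
```

Either coalition suffices, and neither is “the” memory: the categorical response, not a stored replica, is what recurs.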
In the extended Theory of Neuronal Group Selection, which he has developed since 1987, Edelman has been able, in a very economical way, to accommodate all the “higher” aspects of mind—concept formation, language, consciousness itself—without bringing in any additional considerations. Edelman’s most ambitious project, indeed, is to try to delineate a possible biological basis for consciousness. He distinguishes, first, “primary” from “higher-order” consciousness:
Primary consciousness is the state of being mentally aware of things in the world—of having mental images in the present. But it is not accompanied by any sense of [being] a person with a past and a future…. In contrast, higher-order consciousness involves the recognition by a thinking subject of his or her own acts and affections. It embodies a model of the personal, and of the past and future as well as the present…. It is what we as humans have in addition to primary consciousness.
The essential achievement of primary consciousness, as Edelman sees it, is to bring together the many categorizations involved in perception into a scene. The advantage of this is that “events that may have had significance to an animal’s past learning can be related to new events.” The relation established will not be a causal one, one necessarily related to anything in the outside world; it will be an individual (or “subjective”) one, based on what has had “value” or “meaning” for the animal in the past.
Edelman proposes that the ability to create scenes in the mind depends upon the emergence of a new neuronal circuit during evolution, a circuit allowing for continual reentrant signaling between, on the one hand, the parts of the brain where memory of such value categories as warmth, food, and light takes place and, on the other, the ongoing global mappings that categorize perceptions as they actually take place. This “bootstrapping process” (as Edelman calls it) goes on in all the senses, thus allowing for the construction of a complex scene. The “scene,” one must stress, is not an image, not a picture (any more than a “map” is), but a correlation between different kinds of categorization.
Mammals, birds, and some reptiles, Edelman speculates, have such a scene-creating primary consciousness; and such consciousness is “efficacious”; it helps the animal adapt to complex environments. Without such consciousness, life is lived at a much lower level, with far less ability to learn and adapt.
Primary consciousness [Edelman concludes] is required for the evolution of higher-order consciousness. But it is limited to a small memorial interval around a time chunk I call the present. It lacks an explicit notion or a concept of a personal self, and it does not afford the ability to model the past or the future as part of a correlated scene. An animal with primary consciousness sees the room the way a beam of light illuminates it. Only that which is in the beam is explicitly in the remembered present; all else is darkness. This does not mean that an animal with primary consciousness cannot have long-term memory or act on it. Obviously, it can, but it cannot, in general, be aware of that memory or plan an extended future for itself based on that memory.
Only in ourselves—and to some extent in apes—does a higher-order consciousness emerge. Higher-order consciousness arises from primary consciousness—it supplements it, it does not replace it. It is dependent on the evolutionary development of language, along with the evolution of symbols and of cultural exchange; and all this brings an unprecedented power of detachment, generalization, and reflection, so that finally self-consciousness is achieved, the consciousness of being a self in the world, with human experience and imagination to call upon.
Higher-order consciousness releases us from the thrall of the here and now, allowing us to reflect, to introspect, to draw upon culture and history, and to achieve by these means a new order of development and mind. The most difficult and tantalizing portions of Bright Air, Brilliant Fire are about how this higher-order consciousness is achieved and how it emerges from the primary consciousness. No other theorist I know of has even attempted a biological understanding of this step. To become conscious of being conscious, Edelman stresses, systems of memory must be related to representation of a self. This is not possible unless the contents, the “scenes,” of primary consciousness are subjected to a further process and are themselves recategorized.
Though language, in Edelman’s view, is not crucial for the development of higher-order consciousness—there is some evidence of higher-order consciousness and self-consciousness in apes—it immensely facilitates and expands this by making possible previously unattainable conceptual and symbolic powers. Thus two steps, two reentrant processes, are envisaged here: first the linking of primary (or “value-category”) memory with current perception—a perceptual “bootstrapping,” which creates primary consciousness; second, a linking between symbolic memory and conceptual centers—the “semantic bootstrapping” necessary for higher consciousness. The effects of this are momentous: “The acquisition of a new kind of memory,” Edelman writes, “…leads to a conceptual explosion. As a result, concepts of the self, the past, and the future can be connected to primary consciousness. ‘Consciousness of consciousness’ becomes possible.”
At this point Edelman makes explicit what is implicit throughout his work—the interaction of “neural Darwinism” with classical Darwinism. What occurs “explosively” in individual development must have been equally critical in evolutionary development. Thus “at some transcendent moment in evolution,” Edelman writes, there emerged “a variant with a reentrant circuit linking value-category memory” to current perception. “At that moment,” Edelman continues, “memory became the substrate and servant of consciousness.” And then, at another transcendent moment, by another, higher turn of reentry, higher-order consciousness arose.
There is indeed much paleontological evidence that higher-order consciousness developed in an astonishingly short space of time—some tens (perhaps hundreds) of thousands of years, not the many millions usually needed for evolutionary change. The speed of this development has always been a most formidable challenge for evolutionary theorists—Darwin himself could offer no detailed account of it, and Wallace was driven back to thoughts of a grand design. But Edelman, drawing from his own observations of cell and tissue development detailed in his earlier book Topobiology, is able to suggest how it might have come about.
The principles underlying brain development and the mechanisms outlined in the Theory of Neuronal Group Selection can, he argues, account for this rapid emergence, since they allow for enormous changes in brain size over the relatively short evolutionary period in which Homo sapiens emerged. According to topobiology, relatively large changes in the structure of the brain can occur through changes in the genes that regulate the brain’s morphology—changes that can come about as the result of relatively few mutations. And the premises of the Theory of Neuronal Group Selection allow for the rapid incorporation into existing brain structures of new and enlarged neuronal maps with a variety of functions.
This interweaving of concept and observation typifies the ambition and the grandeur of Edelman’s thought. His two chapters on consciousness are the most original, the most exhilarating, and the most difficult in the entire book—but they achieve, or aspire to achieve, what no other theorist has even tried to do, a biologically plausible model of how consciousness could have emerged.
A sense of excitement runs through all of Edelman’s books. “We are at the beginning of the neuroscientific revolution,” he writes in the preface to Bright Air, Brilliant Fire. “At its end, we shall know how the mind works, what governs our nature, and how we know the world.” This century, as he observes, has been rich in theories—going all the way from psychophysics to psychoanalysis—but all these have been partial. New theories arise from a crisis in scientific understanding, when there is an acute incompatibility between observations and existing theories. There are many such crises in neuroscience today. Edelman, with his background in morphology and development, speaks of the “structural” crisis, the now well-established fact that there is no precise wiring in the brain, that there are vast numbers of unidentifiable inputs to each cell, and that such a jungle of connections is incompatible with any simple computational theory. He is moved, as William James was, by the apparently seamless quality of experience and consciousness—the unitary appearance of the world to a perceiver despite (as we have seen in regard to vision) the multitude of discrete and parallel systems for perceiving it; and the fact that some integrating or unifying or “binding” must occur, which is totally inexplicable by any existing theory.
Since the Theory of Neuronal Group Selection was first formulated, important new evidence has emerged suggesting how widely separated groups of neurons in the visual cortex can become synchronized and respond in unison when an animal is faced with a new perceptual task—a finding directly suggestive of reentrant signaling. (I discussed this work in an earlier article, “Neurology and the Soul.”)9 There is also much evidence of a more clinical sort, which one feels may be illuminated, and perhaps explained, by the Theory of Neuronal Group Selection.
I often encounter situations in day-to-day neurological practice which completely defeat classical neurological explanations, which cry out for explanations of a radically different kind, and which are clarified by Edelman’s theory. (Some of these situations are discussed by Israel Rosenfield in his new book The Strange, Familiar and Forgotten,10 where he speaks of “the bankruptcy of classical neurology.”) Thus if a spinal anesthetic is given to a patient—as used to be done frequently to women in childbirth—there is not just a feeling of numbness below the waist. There is, rather, the sense that one terminates at the umbilicus, that one’s corporeal self has no extension below this, and that what lies below is not-self, not-flesh, not-real, not-anything. The anesthetized lower half has a bewildering nonentity, completely lacks meaning and personal reference. The baffled mind is unable to categorize it, to relate it in any way to the self. One knows that sooner or later the anesthetic will wear off, yet it is impossible to imagine the missing parts in a positive way. There is an absolute gap in primary consciousness which higher-order consciousness can report, but cannot correct.
This indeed is a situation I know well from personal no less than clinical experience, for it is what I experienced myself after a nerve injury to one leg, when for a period of two weeks, while the leg lay immobile and senseless, I found it “alien,” not me, not real. I was astonished when this happened, and unassisted by my neurological knowledge—the situation was clearly neurological, but classical neurology has nothing to say about the relation of sensation to knowledge and to “self”; about how, normally, the body is “owned”; and how, if the flow of neural information is impaired, it may be lost to consciousness, and “disowned”—for it does not see consciousness as a process.11
Such body-image and body-ego disturbances can be fully understood, in Edelman’s thinking, as breakdowns in local mapping, consequent upon nerve damage or disuse. It has been confirmed, further, in animal experiments that the mapping of body-image is not something fixed, but plastic and dynamic, and dependent upon a continual inflow of experience and use; and that if there is continuing interference with, say, one’s perception of a limb or its use, there is not only a rapid loss of its cerebral map, but a rapid remapping of the rest of the body which then excludes the limb itself.12
Stranger still are the situations which arise when the cerebral basis of body-image is affected, especially if the right hemisphere of the brain is badly damaged in its sensory areas. At such times patients may show an “anosognosia,” an unawareness that anything is the matter, even though the left side of the body may be senseless, and perhaps paralyzed, too. Or they may show a strange levity, insisting that their own left sides belong to “someone else.” Such patients may behave (as an eminent neurologist, M.M. Mesulam, has written) “…as if one half of the universe had abruptly ceased to exist…as if nothing were actually happening [there]…as if nothing of importance could be expected to occur there.” Such patients live in a hemi-space, a bisected world, but for them, subjectively, their space and world are entire. Anosognosia is unintelligible (and was for years misinterpreted as a bizarre neurotic symptom) unless we see it (in Edelman’s term) as “a disease of consciousness,” a total breakdown of high-level reentrant signaling and mapping in one hemisphere—the right hemisphere, which, Edelman suggests, may have only primary but no higher-order consciousness—and a radical reorganization of consciousness in consequence.
Less dramatic than these complete disappearances of self or parts of the self from consciousness, but still remarkable in the extreme, are situations in which, following a neurological lesion, a dissociation occurs between perception and consciousness, or memory and consciousness, cases in which there remain only “implicit” perception or knowledge or memory. Thus my amnesiac patient Jimmie (“The Lost Mariner”) had no explicit memory of Kennedy’s assassination, and would indeed say, “No president in this century has been assassinated, that I know of.” But if asked, “Hypothetically, then, if a presidential assassination had somehow occurred without your knowledge, where might you guess it occurred: New York, Chicago, Dallas, New Orleans, or San Francisco?” he would invariably “guess” correctly, Dallas.
Similarly, patients with visual agnosias, like Dr. P. (“The Man who Mistook his Wife for a Hat”), while not consciously able to recognize anyone, often “guess” the identity of people’s faces correctly. And patients with total cortical blindness, from massive bilateral damage to the primary visual areas of the brain, while asserting that they can see nothing, may also mysteriously “guess” correctly what lies before them—so-called “blindsight.” In all these cases, then, we find that perception, and perceptual categorization of the kind described by Edelman, has been preserved, but has been divorced from consciousness.
In such cases it appears to be only the final process, in which the reentrant loops combine memory with current perceptual categorization, that breaks down. Their understanding, so elusive hitherto, seems to come closer with Edelman’s “reentrant” model of consciousness.
Dissatisfaction with the classical theories is not confined to clinical neurologists; it is also to be found among theorists of child development, among cognitive and experimental psychologists, among linguists, and among psychoanalysts. All find themselves in need of new models. This was abundantly clear in May of 1992, at an exciting conference on “Selectionism and the Brain” held at the Neurosciences Institute in New York and attended by prominent workers in all of these fields. Particularly suggestive was the work of Esther Thelen and her colleagues at Indiana University in Bloomington, who have for some years been making a minute analysis of the development of motor skills—walking, reaching for objects—in infants. “For the developmental theorist,” Thelen writes, “individual differences pose an enormous challenge…. Developmental theory has not met this challenge with much success.” And this is, in part, because individual differences are seen as extraneous, whereas Thelen argues that it is precisely such differences, the huge variation between individuals, that allow the evolution of unique motor patterns.
Thelen found that the development of such skills, as Edelman’s theory would suggest, follows no single programmed or prescribed pattern. Indeed there is great variability among infants at first, with many patterns of reaching for objects; but there then occurs, over the course of several months, a competition among these patterns, a discovery or selection of workable patterns, or workable motor solutions. These solutions, though roughly similar (for there are a limited number of ways in which an infant can reach), are always different and individual, adapted to the particular dynamics of each child, and they emerge by degrees, through exploration and trial. Each child, Thelen showed, explores a rich range of possible ways to reach for an object and selects its own path, without the benefit of any blueprint or program. The child is forced to be original, to create its own solutions. Such an adventurous course carries its own risks—the child may evolve a bad motor solution—but sooner or later such bad solutions tend to destabilize, break down, and make way for further exploration, and better solutions.13
When Thelen tries to envisage the neural basis of such learning, she uses terms very similar to Edelman’s: she sees a “population” of movements being selected or “pruned” by experience. She writes of infants “remapping” the neuronal groups that are correlated with their movements, and “selectively strengthening particular neuronal groups.” She has, of course, no direct evidence for this, and such evidence cannot be obtained until we have a way of visualizing vast numbers of neuronal groups simultaneously in a conscious subject, and following their interactions for months on end. No such visualization is possible at the present time, but it will perhaps become possible by the end of the decade. Meanwhile, the close correspondence between Thelen’s observations and the kind of behavior that would be expected from Edelman’s theory is striking.
If Esther Thelen is concerned with direct observation of the development of motor skills in the infant, Arnold Modell of Harvard, at the same conference, was concerned with psychoanalytical interpretations of early behavior; he too felt, like Thelen, that a crisis had developed, but that it might also be resolved by the Theory of Neuronal Group Selection—indeed, the title of his paper was “Neural Darwinism and a Conceptual Crisis in Psychoanalysis.” The particular crisis he spoke of was connected with Freud’s concept of Nachträglichkeit, the re-transcription of memories which had become part of pathological fixations but were opened to consciousness, to new contexts and reconstructions, as a crucial part of the therapeutic process of liberating the patient from the past, and allowing him to experience and move freely once again.
This process cannot be understood in terms of the classical concept of memory, in which a fixed record, trace, or representation is stored in the brain—an entirely static or mechanical concept—but requires a concept of memory as active and “inventive.”14 That memory is essentially constructive (as Coleridge insisted, nearly two centuries ago) was shown experimentally by the great Cambridge psychologist Frederic Bartlett. “Remembering,” he wrote,
is not the re-excitation of innumerable fixed, lifeless and fragmentary traces. It is an imaginative reconstruction, or construction, built out of the relation of our attitude toward a whole mass of organized past reactions or experience.
It was just such an imaginative, context-dependent construction or reconstruction that Freud meant by Nachträglichkeit—but this, Modell emphasizes, could not be given any biological basis until Edelman’s notion of memory as recategorization. Beyond this, Modell as an analyst is concerned with the question of how the self is created, the enlargement of self through finding, or making, personal meanings. Such a form of inner growth, so different from “learning” in the usual sense, he feels, may also find its neural basis in the formation of ever-richer but always self-referential maps in the brain, and their incessant integration through reentrant signaling, as Edelman has described it.15
Others too—cognitive psychologists and linguists—have become intensely interested in Edelman’s ideas, in particular by the implication of the extended Theory of Neuronal Group Selection which suggests that the exploring child, the exploring organism, seeks (or imposes) meaning at all times, that its mappings are mappings of meaning, that its world and (if higher consciousness is present) its symbolic systems are constructed of “meanings.” When Jerome Bruner and others launched the “cognitive revolution” in the mid-1950s, this was in part a reaction to behaviorism and other “isms” which denied the existence and structure of the mind. The cognitive revolution was designed “to replace the mind in nature,” to see the seeking of meaning as central to the organism. In a recent book, Acts of Meaning, Bruner describes how this original impetus was subverted, and replaced by notions of computation, information processing, etc., and by the computational (and Chomskian) notion that the syntax of a language could be separated from its semantics.16
But, as Edelman writes, it is increasingly clear, from studying the natural acquisition of language in the child, and, equally, from the persistent failure of computers to “understand” language, its rich ambiguity and polysemy, that syntax cannot be separated from semantics. It is precisely through the medium of “meanings” that natural language and natural intelligence are built up. From Boole, with his “Laws of Thought” in the 1850s, to the pioneers of Artificial Intelligence at the present day, there has been a persistent notion that one may have an intelligence or a language based on pure logic, without anything so messy as “meaning” being involved. That this is not the case, and cannot be the case, may now find a biological grounding in the Theory of Neuronal Group Selection.
None of this, however, can yet be proved—we have no way of seeing neuronal groups or maps or their interactions; no way of listening in to the reentrant orchestra of the brain. Our capacity to analyze the living brain is still far too crude. Partly for this reason researchers in neuroscience, Edelman among them, have felt it necessary to simulate the brain, and the power of computers and supercomputers makes this more and more possible. One can endow one’s simulated neurons with physiologically realistic properties, and allow them to interact in physiologically realistic ways.
Edelman and his colleagues at the Neurosciences Institute have been deeply interested in such “synthetic neural modeling,” and have devised a series of “synthetic animals” or artifacts designed to test the Theory of Neuronal Group Selection. Although these “creatures”—which have been named DARWIN I, II, III, and IV—make use of supercomputers, their behavior (if one may use the word) is not programmed, not robotic, in the least, but (in Edelman’s word) “noetic.” They incorporate both a selectional system and a primitive set of “values”—for example, that light is better than no light—which generally guide behavior but do not determine it or make it predictable. Unpredictable variations are introduced in both the artifact and its environment so that it is forced to create its own categorizations.
DARWIN IV or NOMAD, with its electronic eye and snout, has no “goal,” no “agenda,” but resides in a sort of pen, a world of varied simple objects (with different colors, shapes, textures, weights). (See illustration on page 46.) True to its name, it wanders around like a curious infant, exploring these objects, reaching for them, classifying them, building with them, in a spontaneous and idiosyncratic way (the movement of the artifact is exceedingly slow, and one needs time-lapse photography to bring home its creatural quality). No two “individuals” show identical behavior—and the details of their reachings and learnings cannot be predicted, any more than Thelen can predict the development of her infants. If their value circuits are cut, the artifacts show no learning, no “motivation,” no convergent behavior at all, but wander around in an aimless way, like patients who have had their frontal lobes destroyed. Since the entire circuitry of these DARWINS is known, and can be seen functioning in detail on the screen of a supercomputer, one can continuously monitor their inner workings, their internal mappings, their reentrant signalings—one can see how they sample the environment, one can see how the first, vague, tentative percepts emerge, and how, with hundreds of further samplings, they evolve and become recognizable, refined models of reality, following a process similar to that projected by Edelman’s theory.17
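The selectional logic described here—an innate “value” that biases, but never dictates, which spontaneously varying behaviors are strengthened—can be conveyed in a few lines of code. The sketch below is not Edelman’s actual model; the particular value function (“nearer the light is better”) and the strengthening rule are illustrative assumptions only, meant to show selection rather than instruction at work.

```python
import random

# A toy "selectional" system, loosely in the spirit of the DARWIN automata.
# The value function and update rule are illustrative assumptions, not
# Edelman's implementation.

random.seed(0)
LIGHT = 10.0  # position of the "light" in a one-dimensional world

def value(position):
    # Innate, built-in value signal: being nearer the light is "better".
    return -abs(LIGHT - position)

# A population of variant movement tendencies, initially random --
# the system is never told which of them are "right".
variants = [random.uniform(-3.0, 3.0) for _ in range(20)]
weights = [1.0] * len(variants)

position = 0.0
trajectory = [position]
for _ in range(200):
    # Sample a variant, biased toward those already strengthened.
    i = random.choices(range(len(variants)), weights=weights)[0]
    before = value(position)
    position += variants[i]
    # Selection, not instruction: variants that happened to increase
    # value are amplified; those that decreased it are damped.
    if value(position) > before:
        weights[i] *= 1.2
    else:
        weights[i] *= 0.9
    trajectory.append(position)

print(round(position, 2))
```

Run twice with different seeds, the system converges on the light by different favored variants—an echo, in miniature, of the claim that no two “individuals” show identical behavior. Cutting the value signal (making `value` return a constant) leaves only aimless wandering, as with the lesioned artifacts described above.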
Seeing the DARWINS, especially DARWIN IV, at work can induce a curious state of mind. Going to the zoo after my first sight of DARWIN IV, I found myself looking at birds, antelopes, lions, with a new eye: were they, so to speak, nature’s DARWINS, somewhere up around DARWIN XII in complexity? And the gorillas, with higher-order consciousness but no language—where would they stand? DARWIN XIX? And we, writing about the gorillas, where would we stand? DARWIN XXVII perhaps? A particularly intriguing, sometimes frightening part of Bright Air, Brilliant Fire is its penultimate chapter, “Is It Possible to Construct a Conscious Artifact?” Edelman has no doubt of the possibility, but places it, mercifully, well on in the next century.
Such then is the sweep of Bright Air, Brilliant Fire, and its central ambition of “replacing the mind in nature.” It is a book of astonishing variety and range, which runs from philosophy to biology to psychology to neural modeling, and attempts to synthesize them into a unified whole.
Neural Darwinism (or Neural Edelmanism, as Francis Crick has called it) coincides with our sense of “flow,” that feeling we have when we are functioning optimally, of a swift, effortless, complex, ever-changing, but integrated and orchestrated stream of consciousness;18 it coincides with the sense that this consciousness is ours, and that all we experience and do and say is, implicitly, a form of self-expression, and that we are destined, whether we wish it or not, to a life of particularity and self-development; it coincides, finally, with our sense that life is a journey—unpredictable, full of risk and uncertainty, but, equally, full of novelty and adventure, and characterized (if not sabotaged by external constraints or pathology) by constant advance, an ever deeper exploration and understanding of the world.
Edelman’s theory proposes a way of grounding all this in known facts about the nervous system and testable hypotheses about its operations. Any theory, even a wrong theory, is better than no theory; and this theory—the first truly global theory of mind and consciousness, the first biological theory of individuality and autonomy—should at least stimulate a storm of experiment and discussion.
Merlin Donald, at the end of his fine and far-reaching recent book Origins of the Modern Mind, speaks of this in his conclusion:
Mental materialism is back, with a vengeance. It is not only back, but back in an unapologetic, out-of-the-closet, almost exhibitionistic form. This latest incarnation might be called “exuberant materialism.” Changeux (1985), Churchland (1986), Edelman (1987), Young (1988), and many others have announced a new neuroscientific apocalypse.
Optimism is basically more productive than pessimism, and exuberant materialists are certainly optimists. Neuroscience is in its adolescence, and the field is drunk with its own dizzying growth; how not to be optimistic?19
There is no better place to read about this than in Edelman’s own works, dense and difficult though they frequently are. Bright Air, Brilliant Fire is the most wide-ranging and accessible. It is strenuous and sometimes maddening, and one must struggle to understand it; but if one struggles, if one reads and reads again, the stubborn paragraphs finally yield their meaning, and a brilliant and captivating new vision of the mind emerges.
April 8, 1993
The heady atmosphere of these days is vividly captured in The Cybernetics Group by Steve J. Heims (MIT Press, 1991), and many of the McCulloch papers were later collected in Embodiments of Mind (MIT Press, 1965). ↩
See Francis Crick, “The Recent Excitement about Neural Networks,” Nature, Vol. 337 (January 12, 1989), pp. 129–132. ↩
Lashley expressed this in a famous paper, “In Search of the Engram,” published shortly before his death; London: Symposia of the Society for Experimental Biology, Vol. 4, 1950. ↩
Jeffrey Gray’s article is to be found in Nature, Vol. 358 (July 1992), p. 277, and my own reply to it in Nature, Vol. 358 (August 1992), p. 618. ↩
The Interpersonal World of the Infant: A View from Psychoanalysis and Developmental Psychology (Basic Books, 1985). ↩
There may however be built-in mechanisms for certain generic recognitions—such as the ability, which we share with all primates, to recognize the category of “snakes,” even if we have never seen a snake before; or infants’ ability to recognize the generic category of “faces” long before they recognize particular ones. There is now evidence for “face-detecting” cells in the cerebral cortex. ↩
Confusingly, the very term “reentrant” has occasionally been used in the past to denote such feedback loops. Edelman gives the term “reentry” a radically new meaning. ↩
Thus if pigeons are presented with photographs of trees, or oak leaves, or fish, surrounded by extraneous features, they rapidly learn to “home in” upon these, and to generalize, so that they can thereafter recognize any trees, or oak leaves, or fish straightaway, however distracting or confusing the context may be. ↩
The New York Review, November 22, 1990. ↩
Knopf, 1992. ↩
A full discussion of such body-image or body-ego disturbance in relation to TNGS can be found in a new afterword to the UK edition of my book A Leg to Stand On (Picador, 1992). ↩
Fundamental work showing the plasticity of the cerebral cortex, and the remarkable degree to which it can reorganize itself after injuries, amputations, strokes, etc. has been done by Michael Merzenich and his colleagues at the University of California in San Francisco. See (for example): “Cortical Representational Plasticity,” by M. M. Merzenich, G. Recanzone, W. M. Jenkins, T. T. Allard, and R. J. Nudo in Neurobiology of the Neocortex, edited by P. Rakic and W. Singer (John Wiley and Sons, Ltd., 1988), pp. 41–67. ↩
See Esther Thelen, “Dynamical Systems and the Generation of Individual Differences” in Individual Differences in Infancy: Reliability, Stability, and Prediction, edited by J. Colombo and J. W. Fagen (Hillsdale, New Jersey: Erlbaum, 1990). Similar considerations arise with regard to recovery and rehabilitation after strokes and other injuries. There are no rules, there is no prescribed path of recovery; every patient must discover, or create, his own motor and perceptual patterns, his own solutions to the challenges that face him; and it is the function of a sensitive therapist to help him in this. This is well understood in the practice of “functional integration,” pioneered by Moshe Feldenkrais, and used increasingly both in rehabilitation after injury and in the training of dancers and athletes. “One cannot teach a person how to perceive,” writes Carl Ginsburg, a leading Feldenkrais teacher. “We need a system that organizes itself as it experiences a system that has both stability and extraordinary plasticity to shift with changing circumstances. It is a system that is exceedingly difficult to model.” Ginsburg feels that TNGS is closest to the model required (“The Roots of Functional Integration, Part III: The Shift in Thinking,” The Feldenkrais Journal, No. 7 (Winter 1992), pp. 34–47). ↩
See Israel Rosenfield, The Invention of Memory: A New View of the Brain (Basic Books, 1991). ↩
Modell’s ideas have been set out in full in Other Times, Other Realities (Harvard University Press, 1990), and in a forthcoming book, The Private Self (Harvard University Press, 1993). ↩
Jerome Bruner, Acts of Meaning (Harvard University Press, 1990). ↩
Normally one is not aware of the brain’s almost automatic generation of “perceptual hypotheses” (in Richard Gregory’s term) and their refinement through a process of repeated samplings and testing. But under certain circumstances, as in recovery after acute nerve injury, one may become vividly aware of these normally unconscious (and sometimes exceedingly rapid) operations. I give a personal example of this in A Leg to Stand On. ↩
See Mihaly Csikszentmihalyi, Flow: The Psychology of Optimal Experience (HarperCollins, 1990). ↩
Harvard University Press, 1991. ↩