Mind-Reading

The Oxford Companion to the Mind

edited by Richard L. Gregory, with the assistance of O.L. Zangwill
Oxford University Press, 856 pp., $49.95

We are now beginning to understand the physical basis of normal and abnormal mental activity largely because of recent advances in the neurosciences. Among the concerns of The Oxford Companion to the Mind, edited by Richard L. Gregory and the late Oliver L. Zangwill, both well-known British psychologists, is to describe these advances. The book includes many entries on important writers in the history of philosophy and psychology. It has several well-informed and skeptical entries on telepathy, clairvoyance, and paranormal phenomena. It takes account of modern developments in linguistics and learning theory. But the heart of the book is its numerous entries on the neurosciences; the longest, “Nervous System,” by the English neurologist Peter Nathan, gives a twenty-page account of current knowledge of the brain, and it is supplemented by such articles as “Brain Development,” “Brain Function and Awareness,” “Neurotransmitters,” and “Neuronal Connectivity and Brain Function.” Articles on Parkinsonism, schizophrenia, depression, and dementia discuss the breakdown of normal function.

Some of these articles describe current knowledge of the mechanisms by which nerve cells interact: for example, the ways in which the fifty or more chemical transmitters carry information from one nerve cell to another, and the possible relation of these transmitters to some forms of brain disease. Exciting as these discoveries are, many central questions remain unanswered, among them what purposes these mechanisms serve in the overall functioning of the brain. The discovery in the 1970s of the “natural opiates,” the endorphins, for example, led to the suggestion that their release within the brain may be responsible for the “high” experienced by joggers; and an endorphin malfunction may explain why some people suffer from claustrophobia and therefore develop severe anxiety in elevators and other enclosed spaces. But the precise connections between these chemicals and such feelings remain to be established. (The entry on endorphins itself is fairly brief; the physiological and psychological effects of chemical transmitters in general are discussed in the entry “Neurotransmitters.”)

It has been suggested that neurotransmitters may provide the key to our understanding of the biochemical basis of normal and abnormal psychology. In the entry “Parkinsonism” K.A. Flowers of the University of Hull writes, for example, of the various possible functions of the neurotransmitter dopamine:

Of particular interest is the finding that while a deficiency of dopamine is associated with Parkinsonism, over-activity in the dopamine system produces schizophrenia-like behavioural effects. This opens up the possibility that it is the biochemical status of the nervous system that underlies psychological and psychiatric changes in Parkinsonian patients and schizophrenics. It also holds out hope for the continued development of effective treatment, but as with studies of the biochemical basis of schizophrenia, the exact relation of the mechanisms of the nervous system to the characteristics of the mind remains elusive. Progress in this problem may well depend as much on advances in our understanding of the latter as of the former.

As this entry makes clear, research on neurotransmitters is still far from explaining how specific information is created, recognized, stored, and retrieved by the brain. The entries in The Companion generally agree that brain function can best be understood by analogy with the computer. The article “Neurotransmitters and Neuromodulators,” for example, notes:

The soft warm living substance of the brain and nervous system stands in stark contrast to the rigid metal and plastic hardware of a modern day computer, but at the fundamental level there are clear similarities between these two apparently disparate organizational systems…. Not only are the nerve cell units (neurones) self-repairing and self-wiring under the grand design built into our genes, but they can also promote, amplify, block, inhibit, or attenuate the micro-electric signals which are passed on to them…and this provides the physical substrate of mind.

While many neuroscientists would agree with this view, it is hardly universal, and some leading neuroscientists have questioned the idea that brain function can be explained by analogy to computers that are carefully programmed to recognize, store, and retrieve specific information. That such skeptics are scarcely mentioned in The Oxford Companion is regrettable since the inclusion of their opposing views would have provided a better basis for understanding the philosophical and scientific questions currently at issue in the argument over the workings of brain and mind.

Still, The Companion presents diverse and often contradictory philosophical arguments about the mind in its articles on Western philosophers from the pre-Socratics, Plato, and Aristotle to Russell and Wittgenstein. Japanese, Egyptian, Chinese, Indian, and numerous Arab thinkers are also included. The reader becomes aware that there are deep controversies about such matters as language and learning, but these disputes are not coherently related, as they should be, to what we know in the neurosciences; and the reader would hardly know that neuroscientists themselves fiercely disagree with each other about the neurophysiological basis of memory, learning, and ultimately language.

The Companion is particularly useful in displaying deep disagreements among philosophers and linguists about the nature of language. Margaret Donaldson in her entry “Language: Learning Word Meanings,” a short essay of about two thousand words, gives evidence that when children learn a language, they do not first learn words and then learn how to organize sentences: they first learn sentences and later come to understand that these are composed of words. “If a child understands an utterance,” Margaret Donaldson writes,

it may seem obvious that the words which compose it are “known” and that, in the process of making sense of the utterance, each of these words is given “its meaning.” But this is to suppose that a child interprets the language in isolation from its immediate context, which is not what typically happens…. Thus a child can begin to learn the meaning of “Do you want some milk?” because when someone picks up a jug and holds it over a cup the intention to offer milk is understood. On this view it is to be expected that for a long time the interpretation of language should remain, for the child, embedded in, and powerfully dependent on, the context of occurrence.

In the entry “Symbols” the American philosopher W. V. Quine makes a point about language that seems related to Donaldson’s observations. “Meaning,” he writes, “accrues primarily to whole sentences, and only derivatively to separate words.” “We give the meaning of a sentence by explaining the sentence, and the meaning of a word by explaining how it works in sentences.”

Arthur Cooper’s “Chinese Evidence on the Evolution of Language” adds a fascinating and unusual historical argument to these claims. According to Cooper there was, in prehistory, an “original, natural (poetic) language” that was purely metaphorical. People used a limited number of often similar terms to express different needs and desires. Understanding what was being said therefore depended on the circumstances in which an utterance was made. More complex symbolic language, “the newer, artificial (logical) language,” developed later. Cooper finds evidence for this development of language in the evolution of the written forms of Chinese. The earlier metaphorical form “is well illustrated,” Cooper writes,

by a Chinese character with meanings now like “to retire to rest,” but in ancient texts also “to go busily to and fro.” The character 栖[mù XI], which was “tree” plus “bird’s nest” [“tree” 木 plus “west” 西 in modern usage], illustrated the metaphor lying behind both. Birds “nest” (go to roost) at sunset [from whence the modern usage of “west”] and “nest” (build, go to and from their nests) in spring. Contexts would make it perfectly clear which sense was meant before the notion grew of “words” possessing meanings in themselves.

Thus Cooper’s account of the metaphorical nature of early forms of language seems similar to Margaret Donaldson’s evidence that children first learn utterances and later specific words. This suggests that the rules of grammar may be acquired as utterances become more precise through the use of words with stable meanings, and that they are therefore derived from the examples of a language the child hears.

Other entries on language in the Companion, however, show sharp disagreements with such views, and present a very different conception of language and mind. Take, for example, Noam Chomsky’s entry on his own theory of language, a view that seems exactly contrary to that of W. V. Quine. Quine argues that a child initially learns sentences such as “It’s raining” and “This is red” by conditioning, unaided by auxiliary sentences, and then achieves higher levels of linguistic competence by analogies (“from the apparent role of a word in one sentence he guesses its role in another”) and by noting how sentences are related to each other (“he discovers that people assent to a sentence of some one form only contingently upon assenting to a corresponding sentence of some related form”).

Chomsky, for his part, argues that we are born with genetically determined “mental organs,” among them one specialized in language that contains specific “rule systems” that “cannot be derived from the data of experience by ‘induction,’ ‘abstraction,’ ‘analogy,’ or ‘generalization,’ in any reasonable sense of these terms, any more than the basic structure of the mammalian visual system is inductively derived from experience.”

The brain, in Chomsky’s view, could not use the samples of language a child hears to derive the rules necessary to produce grammatical sentences. The rules must, in some sense, be innate, for the samples are too impoverished for generalization to be possible. The continuity of mental structures such as language—the ways in which past experience with language is related to present and future experience—is established through largely innate mental structures or rules, which are similar for all human beings and which form the basis of a grammar that can generate an infinite number of sentences.

In sharp contrast, again, are the arguments of the behaviorists, who, as David Cohen notes in his entry “Behaviourism,” claimed that “by controlling the rewards and punishments [that] the environment offers in response to particular behaviours, you can shape behaviour.” More recently Gerald Edelman has proposed a theory suggesting that the brain consists of structures that are both constrained by the actions of genes and responsive to selection by experience. In this view the brain is capable of making powerful generalizations—such as extracting a grammar from limited samples of sentences—and can correlate past and present events, ideas, needs, and desires. Generalization in this sense could be the result of acquiring a set of procedures. We can write, or draw, with our right hand, our left hand, our foot, or even a pencil clenched between our teeth. In each case we use different sets of muscles, but the procedures we follow are related.1

Ultimately, whether or not there are mental organs containing innate rules in the brain is a biological question. And if there are innate rules for language there must be innate rules for other mental functions as well.

In fact, Chomsky’s argument receives considerable support from the work of two scientists, David Hubel and Torsten Wiesel, who are responsible for much of our present knowledge of the visual system, and who have described what they consider to be innate connections in the visual system. Cells in the visual system of the brain, Hubel and Wiesel showed, respond to the presence of specific stimuli (a line at a 40-degree angle, for example) in a particular part of the visual field. Such cells, in their view, are innately determined to detect specific visual features (for example, lines with specific orientations), and Hubel and Wiesel suggested that a hierarchical arrangement of feature detectors could be the basis for the formation of more complex visual images in the brain. Innate connections and the rules implicit in them therefore determine the functioning of the visual system; in a larger sense innate mechanisms would be responsible for brain function in general. Just as we learn to see colors and objects of all shapes because of rules that are embedded in the nerve cell connections that make up the visual system of the brain, we learn to understand specific languages because of certain general rules embedded in the language centers of the brain that permit the derivation of the more specific rules of grammar of the language the child actually hears.

In the excellent entry on their work, Horace Barlow writes that Hubel and Wiesel reared newborn kittens with one eye closed and noted that these kittens developed a different pattern of brain connections from that of kittens with the use of both eyes, or of kittens that had one eye only occasionally closed during the early postnatal period. These experiments have been repeatedly confirmed, and they have convinced some scientists that experience may be important in determining the nerve cell connections in the brain’s visual system. But Hubel and Wiesel remain convinced that these connections are innately determined. Barlow notes:

It is interesting that Hubel and Wiesel have consistently argued that experience does no more than preserve innately formed connections…. Later results have shown, however, that immature and inexperienced cortical neurones lack the [functional capacities of]…adult cells, and that the cortex is therefore far from normal. Hubel and Wiesel have thus provided some of the best evidence for the effects of experience on the cortex, while making some of the most dogmatic statements on the predominance of ontogenetic factors in determining its properties.

Other biological entries, however, endorse the presence of innate programs in the brain as if the claim needed neither clarification nor justification. This sense of lively controversy, which emerges throughout The Companion’s superb entries on language, is largely missing from the discussion of the biology of the mind.

Colwyn Trevarthen of the University of Edinburgh, for example, states in “Brain Development” that “the way the human brain parts grow before birth suggests that the interacting nerve-cells might make up and coordinate basic rules for object perception, for purposeful movement patterns, and for motive states, without benefit of experience.” There may be innate rules in the brain, as Trevarthen suggests, but then the critical questions are how they are created genetically and how they are carried out by the complicated circuitry of the brain. The claim that a precisely programmed neural circuitry embodying innate rules is created during the early development and the maturation of an individual has recently been challenged by the work of Gerald Edelman and his colleagues on cell adhesion molecules (CAMs). This work accepts that the general patterns of neural connections are shaped by gene action, but suggests that the exact connections of individual cells are not genetically determined.

While the idea of genetically determined programs goes largely unquestioned in The Companion, one of the most significant neurophysiological discoveries of the last few decades is mentioned but largely unexplained. It has become increasingly evident that the brain maps and remaps stimuli from the body’s various sensory receptors (in the eyes, ears, skin, etc.). These maps are collections of nerve cells that make up thin slices of brain tissue and they are activated by such stimuli as touch or sound, or by other maps. Thus there are maps for frequencies of sound, maps for the place of origin of sounds, maps for the surface of the body that are activated by touch, and so on.

The maps are so pervasive in the brain that they appear crucial to an understanding of its function. Alan Cowey of Oxford, in “Localization of Brain Function and Cortical Maps,” emphasizes this point when he writes, “A computer programmed to recognize patterns does not need within its components anything like a map of the original scene. So why does the brain have [such a map]?” And not one, but many. Why does the brain of the cat have “at least thirteen mapped representations of the retina, the owl monkey at least eight, and the rat…six?” Cowey suggests that the maps are coding many different attributes of, for example, the visual image—color, size, orientation, etc. “If all of this were to be attempted within one map,” he writes,

the local interconnections would again have to be longer and the problem of interconnecting the right cells would increase. By having many maps, each of which is small and contains nerve-cells concerned only with one or a few of the stimulus attributes [color, size, orientation, etc.],…interconnections can be kept as short as possible and the problem of interconnecting the right type of cell is minimized.

A more encompassing explanation of the way the brain organizes its reaction to external stimuli is, as I have already mentioned, Gerald Edelman’s theory of Neural Darwinism, a theory that was first published in 1978. It is not mentioned in The Companion. Unlike the view of Cowey in which the various attributes of a visual scene are grouped together in relatively fixed ways, Neural Darwinism argues that the significance of a stimulus or set of stimuli will be different for the organism at different times. Sounds, for example, may represent speech, noise, or music, or they can be used to locate things in space. The different ways in which stimuli are organized, or categorized, are a consequence of different patterns of interactions among the maps. In general these interactions are not based on a fixed hierarchy. As the animal’s environment changes, according to Edelman, the strengths of connections among the maps change, and so does the nature of the information represented. Any particular pattern of activity in the brain does not have an absolute meaning; its meaning is determined by the immediate environmental setting of the organism and the selection of particular circuits over others at any one time. Mapping, then, is a biological mechanism apparently capable of creating powerful generalizations that are constantly “revised,” or updated, by new experiences, and that constantly generate new ways of behaving without relying exclusively on the precision of innate rules or programs.

Unlike the views of Hubel and Wiesel, the theory of Neural Darwinism does not depend entirely on genetically determined connections between individual nerve cells at the microscopic level, or, for that matter, on the specific activities of any individual nerve cell. Neural Darwinism depends, in contrast, on the ways in which collections of highly variable groups of neurons respond to stimuli. Thus, in Edelman’s theory, certain brain structures are determined by gene action, but rules (such as the rules of grammar) arise by interaction of these structures with the environment. Supporters of theories of detailed innate rules in the brain have still to confront these specific biological claims about the structure and function of the brain.

Our present knowledge of neurophysiology is too limited to decide between these opposing views. Neurophysiologists have been loath to discuss the physiological basis of language. However, as we have seen, Noam Chomsky’s claim that human language depends on innate rules rests on controversial biological assumptions, such as the view that there are innate “mental organs.”2

One of the more important attempts to study what innate rules might be like has been the use of computer simulations to study the visual system. The late David Marr, Tomaso Poggio, and their collaborators at MIT have shown how a set of general rules programmed into a computer can transform a two-dimensional image (similar to the image on the retina) into three dimensions without using any specific knowledge about the nature of the scene being viewed.3 Unfortunately this work is not clearly presented in The Companion, and the editors fail to establish important cross-references. For example, the late Richard Jung’s entry, “Art and Visual Abstraction,” is very like an account of Marr’s and Poggio’s theory of vision, though Jung doesn’t seem to be aware of their work.

Whatever artificial intelligence might contribute to our understanding of the possible structure of innate rules, so far, at least, it hasn’t added anything to our understanding of emotions. Nor, unfortunately, as other reviewers have pointed out, does The Companion.

The understanding of the anatomical basis of emotions dates from 1878 when Paul Broca described a set of structures deep inside the brain—the limbic system—that are, he argued, more developed in lower animals than in higher forms of life, including human beings. (The entry on Broca succinctly describes this work as well as his discovery of the anatomical center for speech articulation.) Subsequently it was found that the limbic system is connected to a small structure at the base of the brain, the hypothalamus, that releases hormones directly into the blood stream and controls the release of other hormones from the pituitary gland. The release of these hormones regulates body rhythms, temperature, patterns of eating, growth, and sexuality. Emotional responses are also associated with the hypothalamus. For example, electrical stimulation of one part of the hypothalamus causes an “anger response” in cats: an arched back, hair standing up, increased heart rate, etc. Destruction of the same area produces placid animals.

In 1937 James Papez at Cornell University (who is not mentioned in The Companion) in a famous paper, “A Proposed Mechanism of Emotion,”4 suggested that the areas of the brain concerned with sensory information (vision, touch, smell, hearing) communicated with the hypothalamus through the complicated circuitry of the limbic system and that, therefore, the limbic system and the hypothalamus were the anatomical substratum of emotional reactions such as fear, anger, love, etc. The argument received some experimental support in 1939 when Heinrich Klüver and Paul Bucy described experiments in which parts of the limbic systems of monkeys were destroyed. The animals lost their fear of humans, indiscriminately smelled and sucked inedible objects, became hypersensitive to any stimulus and were sexually overactive. This reaction, called the Klüver–Bucy syndrome and noted in the entry on Klüver, is now known to be largely the consequence of the destruction of a part of the limbic system known as the amygdala (it resembles an almond in shape), which has extensive connections with the brain centers for the different sensory modalities such as touch, vision, etc.

Mortimer Mishkin and his colleagues at the National Institute of Mental Health in Bethesda, Maryland, have suggested that the monkeys’ bizarre behavior following destruction of the amygdala may be a consequence of the brain’s inability to establish correlations among the various sensory modalities: the monkey “seeing an object, may be unable to recall how that object feels, and even after feeling and smelling it may still not recall its taste.”5 This work suggests that recollection and recognition could, in a deep sense, be connected to emotions. As Mishkin has written: “It is possible that the amygdala not only enables sensory events to develop emotional associations but also enables emotions to shape perception and the storage of memories.”6 None of this research is discussed in The Companion, though an important work of Mishkin’s is referred to in the entry “Brain Function and Awareness.”

The recent reevaluation of the famous experiments by the Canadian neurosurgeon Wilder Penfield is also suggestive of the importance of emotions in recognition and recollection. Beginning in the 1930s, Penfield described the electrical stimulation of conscious patients’ brains during operations for epilepsy. (There is an entry for Penfield, though this work, for which he is best known, is not described.) In some cases the patients had what Penfield called memory “flashbacks.” “The flashback responses to electrical stimulation,” Penfield wrote,

bear no relation to present experience in the operating room. Consciousness for the moment is doubled, and the patient can discuss the phenomenon. If he is hearing music, he can hum in time to it. The astonishing aspect of the phenomenon is that suddenly he is aware of all that was in his mind during an earlier strip of time.7

Penfield’s work gave rise to the popular belief that our brains contain the equivalent of video tapes of our past. But fewer than 10 percent of Penfield’s patients had memory flashbacks and recently it was shown that patients have them only when there is simultaneous electrical activity in the limbic system, again suggesting that emotions such as fear, love, hate, etc., may have some function in determining perception, recognition, and recollection.8

Although The Companion’s entries on memory and the limbic system briefly note claims about the connections between limbic activity and memory, the experimental work I have described is hardly taken into account. For example, in his long entry, “Memory: Biological Basis,” Steven Rose writes,

The relationship between the language used to discuss these phenomena in the brain and that used in the description of the properties of computers and their memory stores is not accidental, for much…is directed—and constrained—by a framework of analogies from computer technology and information theory.

But the appropriateness of the analogy is questionable. Memory does not seem to work in ways analogous to computers, and The Companion’s entries on memory should have, at the very least, noted the recent studies that have explored the possible connections between limbic activity and recollection. For one of the central issues of our understanding of the mind is how the brain can create the different procedures that permit the complex and continual interactions between past and present, emotions and knowledge; and understanding the nature of memory is an essential element in the solution of the problem.

The Oxford Companion to the Mind illustrates the diversity of often penetrating ideas that have been suggested throughout history and the varied sources of those ideas, East and West. To have undertaken such an effort was courageous, and Gregory and his contributors deserve praise for providing a reference work that will have many uses. It is regrettable that the neurophysiological core of the book, as I have tried to show, does not always reflect the controversial nature of its subject.


  1. I have discussed this work in an article, “Neural Darwinism,” The New York Review (October 9, 1986), and in my The Invention of Memory (Basic Books, 1988).

  2. The pattern of language acquisition discussed by Donaldson, in which children first learn the meaning of sentences and only later individual words, suggests that the brain may first be generalizing about the larger phonetic contours of the sentence, then about the phonetic boundaries that establish the individual words, and then about how the words are (grammatically) related to one another. One could imagine how a series of maps might abstract the larger phonetic contours of sentences, and how subsequently similar abstracting procedures might establish word boundaries with other maps abstracting correlations among the words. In this view, language would be specific to human beings because it would depend on their having brains large enough, and with enough maps, to carry out these abstracting procedures, as well as a voice box for producing the necessary combinations of sounds. Whether the maps could account for the generative power of the brain is now a central question, but as Reeke and Edelman show, language need not be determined by preprogrammed rules. See their article in Daedalus (Winter 1988) for a fascinating description of a machine based on the principles of Neural Darwinism in which coherent behavior emerges without specific programs.

  3. See Anya Hurlbert and Tomaso Poggio’s excellent discussion of these matters in “Making Machines (and Artificial Intelligence) See,” Daedalus (Winter 1988).

  4. J.W. Papez, “A Proposed Mechanism of Emotion,” Archives of Neurology and Psychiatry, Vol. 38 (1937), pp. 725–743.

  5. Elisabeth A. Murray and Mortimer Mishkin, “Amygdalectomy Impairs Crossmodal Association in Monkeys,” Science, Vol. 228 (May 3, 1985), pp. 604–606.

  6. Mortimer Mishkin and Tim Appenzeller, “The Anatomy of Memory,” Scientific American (June 1987), pp. 80–89.

  7. Wilder Penfield, “Consciousness, Memory, and Man’s Conditioned Reflexes,” in On the Biology of Learning, Karl H. Pribram, ed. (Harcourt, Brace and World, 1969), p. 152.

  8. For an excellent discussion of emotion from a philosophical perspective that views emotions as a kind of perception, see Ronald de Sousa’s The Rationality of Emotion (MIT Press, 1987).