
Body, Mind, and Machine

Mind in Science: A History of Explanations in Psychology and Physics

by Richard L. Gregory
Cambridge University Press, 641 pp., $29.95

The Question of Animal Awareness: Evolutionary Continuity of Mental Experience

by Donald R. Griffin
William Kaufmann, Inc. (Los Altos, California), 209 pp., $8.95 (paper)

The Universe Within: A New Science Explores the Human Mind

by Morton Hunt
Simon and Schuster, 415 pp., $18.75

The Enlightened Machine: An Analytical Introduction to Neuropsychology

by Daniel N. Robinson
Columbia University Press, 158 pp., $20.00; $8.00 (paper)

Common sense and neurophysiology stubbornly insist on making a sharp distinction between mind and body, despite the efforts of monistic philosophers to make them one. Food in the mind is qualitatively different from food in the mouth. Saliva may flow in response to both stimuli, and the monistic preacher may therefore call the two cases one. But the most mechanistic physiologists—Pavlov, for example—assume a basic difference, and search for different neural mechanisms to explain it. It is mythology, not history, to describe Pavlov as working with “purely objective methods, without any assumptions about unseen processes,” as Morton Hunt does in his popular survey of recent developments in psychology. Physiologists constantly guess at unseen processes, and devise experiments to prove their existence. Especially if they are seeking neural processes that are assumed to underlie particular mental processes, physiologists must always be examining the distinction between “neural” and “mental,” which constantly brings them back to dualism, or at least to an agnostic refusal of monism.

Our puppet bodies can be involuntarily jerked about on neural wires—as when the foot leaps from contact with fire—or, in the absence of fire, our minds can voluntarily move the foot or refrain from moving it, at will. Nerves activate the machinery in both cases, but they are qualitatively different cases. Descartes made that point over three hundred years ago, and is therefore widely credited or blamed for the curious mixture of mechanistic and dualistic assumptions that one finds in neurophysiology. In historical fact the contemporary mixture was created long after Descartes, by experimental scientists in the nineteenth century. Their mode of experimentation required a sharp distinction between neural processes, which lend themselves to mechanistic analysis, and mental phenomena, which do not. Thus they came to see Descartes, in retrospect, as the patron saint of their discipline.1 Two hundred years before them he had hypothesized a neural circuit that jerks the foot reflexively from contact with fire, to be distinguished from the immaterial mind that chooses to activate the foot-moving machinery or to refrain from activating it—and is aware of itself in the process.

Those who dislike a metaphysics with more than one substance, who yearn to know the One-in-all of the scientistic faith, are impatient with neurophysiology. They like to imagine all the brain’s mechanisms discovered, all neural circuits mapped, the body exalted and the mind laid low, or maybe vice versa as in the fantasies of Artificial Intelligence or “the Force” of sci-fi. One way or another—materialist or spiritualist, evasive behaviorist or fantasist—such dreamers imagine the mind-body distinction abolished, “in principle.” In fact, scientists studying actual nervous systems must go on making the distinction, or lose any hope of puzzling out the incredibly complex functioning of living animals.

Seeing, to take the process that Richard Gregory has been analyzing for many years, is broken into component processes: light, which is physical; excitation in the neural network of eye and brain, which is also physical; sensation, which is subjective and resists analysis in strictly physical terms; and perception, which involves cognitive inference from sensation and is thus even less susceptible to strictly physical analysis. I am simplifying, in order to bring out an elementary point. The neuropsychologist may have a personal aversion to metaphysical dualism, as Gregory does, but he cannot avoid distinctions that constantly draw him back toward the belief he is trying to avoid. Thus Gregory poses the question, “How is sensation related to neural activity?” and gives the honest answer: “Unfortunately, we do not know.” He even speaks of “an irreducible gap between physics and sensation which physiology cannot bridge,” or, more generally, of an “impassable gulf between our two realms.”

Scientistic philosophers are constantly disturbed by such heresy where they most expect reinforcement of the faith. V.I. Lenin, for example, rebuked the great neurophysiologist Helmholtz for vacillating between materialism and “physiological idealism.”2 A.J. Ayer, to take a more commonplace example of the monistic preacher, tried to persuade a later generation of neurophysiologists that it makes no sense to speak of mind and body interacting. He was answered by Wilder Penfield, a great neurosurgeon and pioneer of neuropsychology: “The riddle we must try to solve is this: What is the nature of the mind? How is it joined to action within the brain? To declare that these two things are one does not make them so. But it does block the progress of research.”3 Richard Gregory, though demurring from Penfield’s explicit dualism, rejects Ayer’s philosophy, for the same reason that many critics have rejected Lenin’s: it “requires a theory of perception that is not tenable, a Direct Realism.” (Gregory loves to Magnify by Capitalizing.)

For the most part, however, neuropsychologists try to avoid entanglement in metaphysical issues. It is not only a matter of getting on with their proper work. When they write general, meditative books, such as those under review, they show a strong aversion to “immaterial mental essences.” I am quoting Donald R. Griffin, an American physiologist who won fame by showing how bats perceive objects by bouncing sound waves off them. He also drew the bold conclusion that bats must have minds within or behind their sound-wave system of perception. In the present book he goes further. He descends the evolutionary tree even to bees and ants, which (or whom?) he also endows with mental processes, as evinced in their communication or “speech.”

Griffin makes a genial little confession of possible romanticism in his outlook, but turns away unsmiling from explicit dualism—or pluralism, which also shimmers in the possible extension of his line of thought. Below the social insects to whom he attributes mind lie other living things—worms, for example, and amoebas—who (or which?) must still be distinguished from sticks and stones and space. To assemble all in one category, nature, which must also include our distinctively human minds, requires either a pluralistic metaphysics of emergent qualities or some such animistic monism as Wordsworth professed:

To every natural form, rock, fruit or flower,
Even the loose stones that cover the highway,
I gave a moral life: I saw them feel,
Or linked them to some feeling: the great mass
Lay imbedded in a quickening soul, and all
That I beheld respired with inward meaning.
[“The Prelude”]

Romanticism of that kind was becoming extinct even in Wordsworth’s time, as mechanistic science systematically excluded mind from its field of explanation.

Daniel Robinson’s response to the problem in The Enlightened Machine is explicit acceptance of emergent qualities. He is a young neuropsychologist with an exceptionally strong interest in the history and philosophy of his discipline, and a refreshingly bold way of acknowledging its very imperfect state. “Wittingly or otherwise, the neural sciences have been far more effective in creating the impression of success with psychological matters than they have been in fact.” He will not let himself forget that “in research of this kind [correlating neural and psychic processes], no matter how keen and careful the investigator may be, arbitrariness abounds.” Thus he brings the reader to a vivid appreciation of the verbal shuffles that justify the One-in-all faith, the sleight of hand that permits fusion of disparate realities:

When the honey bee returns to the hive, it engages in a “dance,” the pattern of which can be used by other bees to locate where the dancing bee has just been. Thus, a symbolic representation of space has been communicated by one organism, and this received information can be used by others to solve a problem. Shall we admit this behavior into the domain of “language”? Clearly, we must if we define language as symbolic communication that allows the passage of information from a sender to a receiver. The hungry baby who cries for food satisfies the same definition. So does the puppy who learns to give us his paw. Indeed, the clouds reliably report their supersaturation by raining. It should be getting clear that something is wrong with our definition. It does not eliminate those events that we all believe have no place in a discussion of language. We might be willing to call the “dancing” bee or even the puppy a linguist. But clouds?

To avoid such nonsense Robinson follows common sense, and the implicit assumptions of his discipline, to the notion of emergent properties:

Sooner or later, those who have speculated at length on the relationship between brain activity and mind activity have adopted the notion of emergent properties—properties that are not simply reducible to the added influences of their constituents…. Consciousness, that gray figure who has haunted La Mettrie’s machine for two centuries, would seem to be a condition that emerges from the neural mix of which our brains are made. It appears to be more a fact than a thing.

Robinson compares it to “gravity, relativity, time,” which he also conceives as facts rather than things, and then he comes very close to open talk of immaterial essences: such facts “are conditions of nature that transform matter without being matter.” Richard Gregory also approaches those taboo essences, as he ponders the “irreducible gap” or “impassable gulf” between “our two realms.” Neural systems are somehow linked with minds, which somehow transform physical “signals” into meaningful “data.” The neural systems and their signals are physical, while the emergent minds and their meaningful data are “highly peculiar,…outside the physical world, though essential for describing the physical world.”

Those who believe that computer science has fused “signals” and “data,” that information theory and Artificial Intelligence offer final liberation from metaphysical dualism, will be disappointed by all of the authors under review. Gregory and Hunt strive to accept the creed but ultimately admit the victory of reason, as we shall see. Robinson and Griffin are insultingly brief in their dismissal of AI. As neuropsychologists they simply take for granted the difference between information in the everyday sense of the word, which assumes living agents sending and receiving meaningful messages, and information in the technical sense of the engineer analyzing purely physical processes. The pattern of electrical pulses in a telephone wire, and the analogous pattern of sound waves at both ends, can be described in mathematical terms that dispense with the assumption of living agents speaking and hearing meaningful messages. Information theory of that engineering kind has been of enormous help in improving telephone equipment and computers. It has been of some limited use in the analysis of neural systems, but when it is extended to specifically mental phenomena, such as communication of meaning and feeling, paying attention and having consciousness, information theory creates mysteries or nonsense, or both together.

Neuropsychologists have problems enough without taking on those of the AI enthusiasts. Griffin sends them off to argue with the philosopher J.R. Searle,4 adding a disdainful explanation of their “belief pattern”: it “resonates with contemporary enthusiasm for computers; and many of us tend to feel less demeaned by comparison with computer systems than with ‘lower’ animals.” No doubt Griffin would exempt Richard Gregory from that disdainful judgment, for Gregory is quite willing to make downward comparisons, yet he is also mightily intrigued by the analogy between minds and computers. Not that he perceives any great conceptual aid currently flowing from computer science to neuropsychology. He expects such benefit in the future, for computers are most intriguing machines, and he is convinced that human beings have always learned about themselves by analogies with the machines they have invented.

  1. I am repeating the judgment of Ruth Leys, “Background to the Reflex Controversy,” Studies in the History of Biology, Vol. IV (Johns Hopkins University Press, 1980). Cf. the rich accumulation of historical studies cited in her notes.

  2. See Lenin’s Materialism and Empiriocriticism, chapter IV, part 6.

  3. Penfield, The Second Career (Little, Brown, 1964), p. 141.

  4. See his article, “Minds, Brains, and Programs,” in The Behavioral and Brain Sciences, Vol. 3 (Cambridge University Press, 1980), and “The Myth of the Computer,” NYR, April 29, 1982.
