Common sense and neurophysiology stubbornly insist on making a sharp distinction between mind and body, despite the efforts of monistic philosophers to make them one. Food in the mind is qualitatively different from food in the mouth. Saliva may flow in response to both stimuli, and the monistic preacher may therefore call the two cases one. But the most mechanistic physiologists—Pavlov, for example—assume a basic difference, and search for different neural mechanisms to explain it. It is mythology, not history, to describe Pavlov as working with “purely objective methods, without any assumptions about unseen processes,” as Morton Hunt does in his popular survey of recent developments in psychology. Physiologists constantly guess at unseen processes, and devise experiments to prove their existence. Especially if they are seeking neural processes that are assumed to underlie particular mental processes, physiologists must always be examining the distinction between “neural” and “mental,” which constantly brings them back to dualism, or at least to an agnostic refusal of monism.

Our puppet bodies can be involuntarily jerked about on neural wires—as when the foot leaps from contact with fire—or, in the absence of fire, our minds can voluntarily move the foot or refrain from moving it, at will. Nerves activate the machinery in both cases, but they are qualitatively different cases. Descartes made that point over three hundred years ago, and is therefore widely credited or blamed for the curious mixture of mechanistic and dualistic assumptions that one finds in neurophysiology. In historical fact the contemporary mixture was created long after Descartes, by experimental scientists in the nineteenth century. Their mode of experimentation required a sharp distinction between neural processes, which lend themselves to mechanistic analysis, and mental phenomena, which do not. Thus they came to see Descartes, in retrospect, as the patron saint of their discipline.1 Two hundred years before them he had hypothesized a neural circuit that jerks the foot reflexively from contact with fire, to be distinguished from the immaterial mind that chooses to activate the foot-moving machinery or to refrain from activating it—and is aware of itself in the process.

Those who dislike a metaphysics with more than one substance, who yearn to know the One-in-all of the scientistic faith, are impatient with neurophysiology. They like to imagine all the brain’s mechanisms discovered, all neural circuits mapped, the body exalted and the mind laid low, or maybe vice versa as in the fantasies of Artificial Intelligence or “the Force” of sci-fi. One way or another—materialist or spiritualist, evasive behaviorist or fantasist—such dreamers imagine the mind-body distinction abolished, “in principle.” In fact, scientists studying actual nervous systems must go on making the distinction, or lose any hope of puzzling out the incredibly complex functioning of living animals.

Seeing, to take the process that Richard Gregory has been analyzing for many years, is broken into component processes: light, which is physical; excitation in the neural network of eye and brain, which is also physical; sensation, which is subjective and resists analysis in strictly physical terms; and perception, which involves cognitive inference from sensation and is thus even less susceptible to strictly physical analysis. I am simplifying, in order to bring out an elementary point. The neuropsychologist may have a personal aversion to metaphysical dualism, as Gregory does, but he cannot avoid distinctions that constantly draw him back toward the belief he is trying to avoid. Thus Gregory poses the question, “How is sensation related to neural activity?” and gives the honest answer: “Unfortunately, we do not know.” He even speaks of “an irreducible gap between physics and sensation which physiology cannot bridge,” or, more generally, of an “impassable gulf between our two realms.”

Scientistic philosophers are constantly disturbed by such heresy where they most expect reinforcement of the faith. V.I. Lenin, for example, rebuked the great neurophysiologist Helmholtz for vacillating between materialism and “physiological idealism.”2 A.J. Ayer, to take a more commonplace example of the monistic preacher, tried to persuade a later generation of neurophysiologists that it makes no sense to speak of mind and body interacting. He was answered by Wilder Penfield, a great neurosurgeon and pioneer of neuropsychology: “The riddle we must try to solve is this: What is the nature of the mind? How is it joined to action within the brain? To declare that these two things are one does not make them so. But it does block the progress of research.”3 Richard Gregory, though demurring from Penfield’s explicit dualism, rejects Ayer’s philosophy, for the same reason that many critics have rejected Lenin’s: it “requires a theory of perception that is not tenable, a Direct Realism.” (Gregory loves to Magnify by Capitalizing.)

For the most part, however, neuropsychologists try to avoid entanglement in metaphysical issues. It is not only a matter of getting on with their proper work. When they write general, meditative books, such as those under review, they show a strong aversion to “immaterial mental essences.” I am quoting Donald R. Griffin, an American physiologist who won fame by showing how bats perceive objects by bouncing sound waves off them. He also drew the bold conclusion that bats must have minds within or behind their sound-wave system of perception. In the present book he goes further. He descends the evolutionary tree even to bees and ants, which (or whom?) he also endows with mental processes, as evinced in their communication or “speech.”

Griffin makes a genial little confession of possible romanticism in his outlook, but turns away unsmiling from explicit dualism—or pluralism, which also shimmers in the possible extension of his line of thought. Below the social insects to whom he attributes mind lie other living things—worms, for example, and amoebas—who (or which?) must still be distinguished from sticks and stones and space. To assemble all in one category, nature, which must also include our distinctively human minds, requires either a pluralistic metaphysics of emergent qualities or some such animistic monism as Wordsworth professed:

To every natural form, rock, fruit or flower,
Even the loose stones that cover the highway,
I gave a moral life: I saw them feel,
Or linked them to some feeling: the great mass
Lay imbedded in a quickening soul, and all
That I beheld respired with inward meaning.
[“The Prelude”]

Romanticism of that kind was becoming extinct even in Wordsworth’s time, as mechanistic science systematically excluded mind from its field of explanation.

Daniel Robinson’s response to the problem in The Enlightened Machine is explicit acceptance of emergent qualities. He is a young neuropsychologist with an exceptionally strong interest in the history and philosophy of his discipline, and a refreshingly bold way of acknowledging its very imperfect state. “Wittingly or otherwise, the neural sciences have been far more effective in creating the impression of success with psychological matters than they have been in fact.” He will not let himself forget that “in research of this kind [correlating neural and psychic processes], no matter how keen and careful the investigator may be, arbitrariness abounds.” Thus he brings the reader to a vivid appreciation of the verbal shuffles that justify the One-in-all faith, the sleight of hand that permits fusion of disparate realities:

When the honey bee returns to the hive, it engages in a “dance,” the pattern of which can be used by other bees to locate where the dancing bee has just been. Thus, a symbolic representation of space has been communicated by one organism, and this received information can be used by others to solve a problem. Shall we admit this behavior into the domain of “language”? Clearly, we must if we define language as symbolic communication that allows the passage of information from a sender to a receiver. The hungry baby who cries for food satisfies the same definition. So does the puppy who learns to give us his paw. Indeed, the clouds reliably report their supersaturation by raining. It should be getting clear that something is wrong with our definition. It does not eliminate those events that we all believe have no place in a discussion of language. We might be willing to call the “dancing” bee or even the puppy a linguist. But clouds?

To avoid such nonsense Robinson follows common sense, and the implicit assumptions of his discipline, to the notion of emergent properties:

Sooner or later, those who have speculated at length on the relationship between brain activity and mind activity have adopted the notion of emergent properties—properties that are not simply reducible to the added influences of their constituents…. Consciousness, that gray figure who has haunted La Mettrie’s machine for two centuries, would seem to be a condition that emerges from the neural mix of which our brains are made. It appears to be more a fact than a thing.

Robinson compares it to “gravity, relativity, time,” which he also conceives as facts rather than things, and then he comes very close to open talk of immaterial essences: such facts “are conditions of nature that transform matter without being matter.” Richard Gregory also approaches those taboo essences, as he ponders the “irreducible gap” or “impassable gulf” between “our two realms.” Neural systems are somehow linked with minds, which somehow transform physical “signals” into meaningful “data.” The neural systems and their signals are physical, while the emergent minds and their meaningful data are “highly peculiar,…outside the physical world, though essential for describing the physical world.”

Those who believe that computer science has fused “signals” and “data,” that information theory and Artificial Intelligence offer final liberation from metaphysical dualism, will be disappointed by all of the authors under review. Gregory and Hunt strive to accept the creed but ultimately admit the victory of reason, as we shall see. Robinson and Griffin are insultingly brief in their dismissal of AI. As neuropsychologists they simply take for granted the difference between information in the everyday sense of the word, which assumes living agents sending and receiving meaningful messages, and information in the technical sense of the engineer analyzing purely physical processes. The pattern of electrical pulses in a telephone wire, and the analogous pattern of sound waves at both ends, can be described in mathematical terms that dispense with the assumption of living agents speaking and hearing meaningful messages. Information theory of that engineering kind has been of enormous help in improving telephone equipment and computers. It has been of some limited use in the analysis of neural systems, but when it is extended to specifically mental phenomena, such as communication of meaning and feeling, paying attention and having consciousness, information theory creates mysteries or nonsense, or both together.
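
To make the engineer’s sense of “information” concrete, here is a minimal sketch of my own (in Python; the sample sentence, function name, and figures are merely illustrative, not drawn from any of the authors). Shannon’s measure counts only the statistical surprise in a stream of symbols; it is indifferent to whether the stream means anything to anyone.

from collections import Counter
from math import log2
import random

def entropy_per_symbol(text: str) -> float:
    """Estimate Shannon entropy, in bits per symbol, from character frequencies."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * log2(n / total) for n in counts.values())

message = "go to the ant thou sluggard consider her ways and be wise"
scrambled = "".join(random.sample(message, len(message)))  # same letters, meaning destroyed

# The engineer's measure is the same for both strings (roughly four bits per symbol),
# because it depends only on symbol statistics, never on meaning.
print(round(entropy_per_symbol(message), 4))
print(round(entropy_per_symbol(scrambled), 4))

The scrambled string scores exactly as well as the intact sentence; like the pattern of pulses in the telephone wire, the measure does not care whether anyone is saying anything.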

Neuropsychologists have problems enough without taking on those of the AI enthusiasts. Griffin sends them off to argue with the philosopher J.R. Searle,4 adding a disdainful explanation of their “belief pattern”: it “resonates with contemporary enthusiasm for computers; and many of us tend to feel less demeaned by comparison with computer systems than with ‘lower’ animals.” No doubt Griffin would exempt Richard Gregory from that disdainful judgment, for Gregory is quite willing to make downward comparisons, yet he is also mightily intrigued by the analogy between minds and computers. Not that he perceives any great conceptual aid currently flowing from computer science to neuropsychology. He expects such benefit in the future, for computers are most intriguing machines, and he is convinced that human beings have always learned about themselves by analogies with the machines they have invented.

That central thesis is not borne out by Gregory’s history of psychological ideas. Indeed, his huge, sprawling book has surprisingly few examples of man-made machines conceived as metaphors of ourselves, and those few examples are not adequately explored or illuminated. Gregory fails, to take a case that could have been most revealing, to explore the evolving significance of the clock metaphor, beginning with religious visions of the heavens proclaiming the glory of the Clockmaker, changing to the romantic lament that clocklike nature, deprived of gods, is inhumanly unaware of the poet celebrating its (or her) perfection, emerging finally in the present-day use of clockwork to symbolize repulsive dehumanization. The analogous trend in downward comparisons—to borrow Robert Frost’s term for analogies between ourselves and other animals—also escapes the notice of Gregory. The ant, which was once a cheerful symbol of industry (as in Aesop’s fable, and in the Biblical injunction: “Go to the ant, thou sluggard; consider her ways, and be wise”), was transformed during the nineteenth century into a fearful symbol of mechanical sociality. (See Dostoevsky’s rage at “the anthill,” and note how common it was to describe communist China as “the empire of the blue ants,” until our government made them allies in the crusade for freedom.)

Gregory makes a crucial distinction between machines and mechanisms, but does not adequately explore the significance of that distinction. Machines are invented, and therefore exhibit the purposes of their inventors and users. Mechanisms are discovered in nature, which is conceived as essentially without feeling or purpose or consciousness. If mechanisms exhibit that essential quality of nature, and we are a part of nature, our consciousness becomes a delusion to be explained away or an anomaly to be avoided by the prudent scientist. Gregory concludes, “We do not know our place in Nature,” but he fails to reflect on the diversity concealed in that “we.” The neurophysiologist remains within the pale of natural science by sharply restricting his analysis to the neural processes of human beings. The artist and the humanist scholar, not to speak of the ordinary person immersed in the stream of consciousness, are in ridiculous or defiant or sullen exile from nature. The neuropsychologist is in an awkward straddle, for he is trying to correlate physical and mental processes.

That is the historical context that has given rise to the romantic dream of machines as the essential metaphor of monistic fusion, for machines are both mechanistic and embodiments of human qualities. Whether horrifying or humorous, morally elevating or trivial, that romantic view of machines has become an enormous presence in popular culture, while remaining distinctly peripheral to high art and serious thought. Gregory does not examine that anomaly, which suggests some crucial flaw or superficiality in the effort to bridge the gulf between our two realms by viewing machines as metaphors for the minds that invent them.

Let me stress that this critical interpretation of Gregory is intended as praise for an extraordinarily learned and thoughtful scholar. His present book is an enormous grab bag of notes and thoughts on all manner of subjects, gathered in a lifetime of broad reading and heaped into a book with indifference to the virtue of blue-penciled restraint. The reader who is sufficiently patient will find within the author’s extravagantly garrulous vice the higher virtue of utter intellectual honesty. He longs to “use the tools and paradigms of modern physics as ancient myths, magic and machines were invoked at the dawn of written thought, to reveal ourselves in the Universe.” He is honest enough to confess frustration in that grand project. In particular, he wants to believe that computer science shows the way to the monistic fusion of mind and brain, but he admits that the mapping so far has been largely negative: “What AI has done…is to show the inadequacy of our present concepts for explaining even the simplest behavior.” He finds that negative accomplishment “almost sufficient in itself,” for he doubts “whether we can see how the brain works without understanding how it could work.”

In other words, a mathematically imagined possibility of a mind-brain, though contrary to the factual reality of living brains and actual minds, should serve as a guide in the study of the factual reality. That is Gregory’s explicit argument, which draws him repeatedly to the verge of the critical interpretation that I am drawing from his book. He acknowledges that so far computers have revealed the human essence in the way that hammers and saws and all our other tools reveal it. They are invented to do the jobs that our mind-brain-hand finds uncongenial or difficult or impossible, yet nevertheless strives to do. Computers and other machines may therefore be able to show us not what the mind is, but rather what it is not, the endless number of other things that it aspires to create in spite of itself.

To see the matter this way is to leave the worship of technics and approach another form of romanticism, the belief that creative “objectification” is the essential quality of human beings: we create ourselves by making things that embody and transcend our transient selves. In the writings of Friedrich Schiller or the young Karl Marx that romantic theory is an exciting mixture of discontent and aspiration, of despair at what we are and hope of what we may become. In contemporary AI and sci-fi we find a trivialized version of the romance of Homo Faber, a superficial fantasy of escape from the confusions and bewilderment, from the fragmented and alienated consciousness, which are generated by modern science and the common sense of industrial society.

In this respect Morton Hunt’s book is especially revealing. He is a scholarly journalist who has fallen in love with the latest fashion in revitalizing psychology by renaming it. He bubbles with joyful news of “cognitive science,” “a revolutionary reappraisal” of the mind. “In the past twenty years—and for the first time in human history—a scientific discipline devoted to exploring how our minds work has emerged.” The computer is the chief deliverer. “One can ‘build a machine’…that processes information in ways comparable to those of the human intellect; mind is mind, no matter what its physical embodiment.” Yet Hunt is a sufficiently thorough reporter to retail the subversive views of infidels and heretics. Claude Shannon, to take a very notable example, the man who is often called the founder of information theory, finds the ant mind, the product of “only a few hundred nerve cells,” beyond the capacity of computer science: “It seems incredible, because if I had to do [what ants do] with a few hundred relays, I really couldn’t—and I’m pretty good at relays.”

Hunt goes on to note that “only the tiniest part of what the human mind can do” has been simulated by computers, and that “many cognitive scientists” consider it “extremely unlikely” that the mind’s most distinctive characteristics can ever be simulated by the machines it makes. In addition to the hammer-and-saw argument—our tools show what our minds are not, rather than what they are—he retails other, even weightier objections to the AI faith. Computers can be only “algorithmic” or routine in problem solving, not “heuristic” or creative. They are “electronic idiot savants,” which show no signs of consciousness of self, “the essence of what being alive means to us.”

Why then at another point does Hunt respectfully repeat the scientistic sneer at consciousness of self as an illusion, founded on the fantasy of some “homunculus” within our brain? Why does he solemnly declare “information processing theory…a deeply satisfying view of the human intellect”? Enthusiasm for technics and a cheerful eclecticism are the reasons why. They pervade the whole book, giving it its hallelujah tone and its happy indifference to inconsistency. The warring schools of psychological science are thoughtlessly reconciled by a phrase, an uncritical benediction on all: “cognitive science.” The confusion and alienation that have attended efforts to understand human beings as natural beings are airily ignored, resolved by “a splendid paradox” worthy of Dr. Pangloss himself: “Evolution has produced in us an intellect that is imperfect throughout and finds only tolerable solutions to its problems—and is, therefore, more successful than a perfect design would have been.”

Hunt is an industrious reporter and a clear writer who has assembled much useful information, but his reportage misses the dominant mood of contemporary psychology. To mark the hundredth anniversary of that fractious discipline, the psychologist Eliot Hearst assembled a volume of essays,5 which he introduced by recalling the disillusionment of William James. In 1867, aged twenty-five, James sent home from Germany an enthusiastic reaction to the work of Helmholtz and Wundt: “Perhaps the time has come for psychology to become a science—some measurements have already been made in the region lying between the physical changes in the nerves and the appearance of consciousness.” Twenty-five years later, writing his great textbooks, James had become the great debunker of the would-be science: “a string of raw facts; a little gossip and wrangle about opinions; a little classification and generalization on the mere descriptive level…. We don’t even know the terms between which the elementary laws would obtain if we had them. This is no science, it is the hope of a science.”

James nevertheless clung to the original faith: “the Galileo and the Lavoisier of psychology…will surely come, or past successes are no index to the future.” Eliot Hearst, quoting that, goes on to report a deeper disillusion reached by psychologists in their latest cycle:

A large number of contemporary experimental psychologists seriously doubt whether psychology is a field that will ever see the emergence of truly global principles of the kind that Galileo or Lavoisier identified or that Darwin bequeathed to biology…. Bands of skeptics today contend that the scientific approach to psychology has failed to meet reasonable standards of progress in the 100 years it has had to prove itself.

Nevertheless, Hearst and his colleagues persevere, and in their stubborn labors they continue William James’s refusal of any monistic incantation, any “sovereign means for believing what one likes in psychology, and of turning what might become a science into a tumbling-ground for whimsies.” Unfortunately they have turned away from James’s insistence on the importance of metaphysics—“nothing but an unusually obstinate effort to think clearly”—whose separation from psychological science can be only a self-deception.

No contemporary psychologist shows James’s passionate eloquence in confronting the mind-body problem and refusing pseudo solutions, “in which inconsistencies cease from troubling and logic is at rest. It may be a constitutional infirmity, but I can take no comfort in such devices for making a luxury of intellectual defeat. They are but spiritual chloroform. Better live on the ragged edge, better gnaw the file forever!” Such extravagance of spirit has faded from the would-be science of the mind, but Eliot Hearst and his colleagues—including Daniel Robinson, Donald Griffin, and Richard Gregory—still keep their thinking focused on the homelier Jamesian rule: “The only thing then is to use as much sagacity as you possess, and to be as candid as you can.”6
