Five years ago the concepts of “mind” and “consciousness” were virtually excluded from scientific discourse. Now they have come back, and every week we see the publication of new books on the subject—Wet Mind by Stephen Kosslyn, Nature’s Mind by Michael Gazzaniga, Consciousness Explained by Daniel Dennett, The Computational Brain by Patricia Churchland and Terry Sejnowski, to mention only a few of the more distinguished. Reading most of this work, we may have a sense of disappointment, even outrage; beneath the enthusiasm about scientific developments, there is a certain thinness, a poverty and unreality compared to what we know of human nature, the complexity and density of the emotions we feel and of the thoughts we have. We read excitedly of the latest chemical, computational, or quantum theory of mind, and then ask, “Is that all there is to it?”
I remember the excitement with which I read Norbert Wiener’s Cybernetics when it came out in the late 1940s. And then, in the early 1950s, reading the work of Wiener’s younger colleagues at MIT—a galaxy of some of the finest minds in America including Warren McCulloch, Walter Pitts, John von Neumann—and learning about their pioneer explorations of logical automata and nerve nets. I thought, as many of us did, that we were on the verge of computer translation, perception, cognition; a brave new world in which ever more powerful computers would be able to mimic, and even take over, the chief functions of brain and mind. The very titles of the MIT papers were exalted and thrilling—“Machines that Think and Want,” “The Genesis of Social Evolution in the Mindlike Behavior of Artifacts.”
During the 1960s, there was some faltering and questioning: it proved possible to put a man on the moon in this decade but not possible for a computer to achieve a decent translation of a child’s speech, much less a text of any complexity, or to achieve more than the most rudimentary mechanical perception (if indeed “perception” was a legitimate word here). Or was it simply that one needed more computer power, and perhaps different programs or designs? Supercomputers emerged, and, soon, so-called neural networks, which do not consist of actual neurons but computer simulations or models that attempt to mimic the nervous system. Though such networks start with random connections, and learn in a fashion—for example, how to recognize faces or words—they are always instructed what to do, even if they are not instructed how to do it. They are able to recognize in a formal, rule-bound way, not in terms of context and meaning, the way an organism does.
Some of these networks have been developed on the West Coast, under the presiding genius of Francis Crick. And yet Crick himself has expressed fundamental reservations about them—can they, he has asked, really be said to think? Are they, in fact, like minds at all? We must indeed be very cautious before we allow that …