As a rule, a book review in an obscure journal by an unknown scholar rarely attracts attention. Noam Chomsky’s lengthy review of B.F. Skinner’s Verbal Behavior, published in the journal Language in 1959, is a striking exception. At the time Skinner was the most respected experimental psychologist in the world and the leader of the influential behaviorist movement. His comprehensive study of speech and language had been widely anticipated and was receiving respectful attention. In his book, Skinner attempted to demonstrate that the principles of learning which had emerged in decades of work with pigeons and rats could account fully for the oral and written statements produced by human beings.

Chomsky, who had just turned thirty and was already teaching linguistics at MIT, was not impressed. In thirty tightly reasoned and scathing pages, he subjected nearly every facet of Skinner’s book to criticism and much of it to ridicule. While acknowledging the value of some of the findings that had emerged from animal studies, Chomsky argued that they simply did not apply to language. Unaware of the intricate structure of language, Skinner had simply transferred “laws of learning” that governed the behavior of rats running mazes or pigeons pecking at disks to language in ways that did not apply or that revealed nothing. As Chomsky dismissively wrote, a “critical account” of Skinner’s book

must show that with a literal reading (where the terms of the descriptive system have something like the technical meanings given in Skinner’s definitions) the book covers almost no aspect of linguistic behavior, and that with a metaphoric reading, it is no more scientific than the traditional approaches to this subject matter, and rarely as clear and careful.

Skinner, Chomsky concluded, was “play-acting at science.”1

In the final pages of his review, Chomsky hinted at the kind of study he found more promising. He spoke favorably of Karl Lashley, a widely respected behaviorist who had come to appreciate the limits of that explanatory system. Lashley had recently pointed out that utterances cannot be thought of, as Skinner thought of them, simply as words strung together in response to external stimuli; analysis of language had to take into account syntax, the abstract underlying patterns from which particular utterances are generated. Drawing on this insight, and on his own work in his 1957 book Syntactic Structures, Chomsky concluded that

in principle it may be possible to study the problem of determining what the built-in structure of an information-processing (hypothesis-forming) system must be to enable it to arrive at the grammar of a language from the available data in the available time.

By “the available data” he meant the experience of hearing other people speak that a child acquiring a language would have. This succinct formulation suggested the direction of the Chomskian research program of the following decades, a program that continues today.

During the first half of the century, Skinner and other behaviorists had put forward views that became widely accepted throughout the social sciences. They held that one should study only external, measurable stimuli and responses and avoid talk of abstract and covert mental entities like “ideas” or “thoughts.” One should seek to identify principles that could be applied across all species and across all kinds of behaviors, from exploring to communicating to earning a living. By the middle of the 1950s, however, each of these assertions was being challenged. Though the term “cognitive revolution” was not yet in vogue, Chomsky’s review of Verbal Behavior was a major event in the movement that was to topple behaviorism and itself become a new orthodoxy.

While it is tempting to envision a scientific revolution as beginning with a swift and definitive stroke against an Establishment, the origins of cognitive science were more dispersed. Working at the Carnegie Institute of Technology, the pioneering researchers in artificial intelligence, Herbert Simon and Allen Newell, were showing that computers could carry out logical proofs and decipher codes: If electromechanical devices could “manipulate symbols” and “think,” why not human beings? In studies conducted in Switzerland, the biologist-turned-psychologist Jean Piaget was demonstrating that children passed through several distinct stages of cognitive development, in which they move from having a purely physical relationship with the world, to naming and classifying objects, to experimenting with formal logical operations. In Paris, Claude Lévi-Strauss was arguing that variations of the same mental structures produced “savage” as well as “scientific” thought. And in Cambridge, working separately from Harvard’s Skinner and MIT’s Chomsky, the psychologists Jerome Bruner and George Miller set up a Center for Cognitive Studies, where researchers from various disciplines examined the strategies of human thinking and the nature of human information-processing. Taking account of the limits of the type of studies of animal behavior carried out by Skinner, on the one hand, and the powerful new methods and model of the computer, on the other, these scholars and their colleagues soon proved behaviorism to be not so much wrong as irrelevant.


Chomsky’s theories can now be seen as associated with much of the other work on cognition that emerged during the 1950s and 1960s. His own research, however, was quite specifically grounded in linguistics and took a decidedly unusual perspective on human language. Chomsky would gradually distinguish himself not only from the biologists and social scientists who challenged behaviorism but from nearly everyone except his closest students.

Chomsky criticized not only the pervasive behaviorism of the time but also the school of structural linguistics that was dominant within his own field. Most of the linguists preceding him had been content to describe and classify languages and to note regularities within and across them. For Chomsky, however, the study of language must concentrate on the investigation of grammar, or, more specifically, the structures of syntax. The task of the linguist should be to uncover the set of rules or principles that could account for all of the permissible (or grammatical) sentences of the language and none of the impermissible (un-grammatical) ones. This task, closer to a logical-mathematical investigation than to the compiling of a dictionary, involved the painstaking identification of underlying syntactical processes, the writing and rewriting of rules, the search for counter-examples—all in an effort to delineate the nature of the formal system that underlay not only English but all other languages spoken by human beings.

Chomsky was then trying to devise a system that could show why superficially contrasting sets of sentences (“The boy kissed the girl,” “The girl was kissed by the boy,” “Was the girl kissed by the boy?” etc.) capture the same relationships, and why apparently similar sentences (“Sally is eager to please” vs. “Sally is easy to please”) have underlying differences in structure. No existing grammar had successfully met this challenge. According to Chomsky, previous linguists had not even defined the problem correctly.

In his initial explanation, Chomsky posited the existence of two levels of language: an underlying deep structure, which governed the fundamental syntactic relations among such components as noun phrases and verb phrases; and a set of surface structures which were generated by transformations of elements in the deep structure, as in the passive or interrogative statements given above. These surface transformations yielded the sentences which individuals actually utter and comprehend.

Simply to identify the laws of syntax proved to be a formidable task, one which has yet to be completed. Indeed, every decade or so, Chomsky has put forth different formal schemes to do so. For the nonspecialist these schemes seem ever more abstract and complex, making the initial formulation of the late 1950s appear surprisingly elementary. Chomsky himself has downplayed the differences among the several versions of his theory, emphasizing the continuities during forty years of research. It should also be said that, like Freud, Chomsky has a tendency to berate his critics in one decade only to incorporate some of their criticisms sotto voce in the following one. Whatever the eventual judgment of the merit of the several versions of his theory, Chomsky qualifies as a central figure for having brought about a major shift in the study of language.

But Chomsky’s interests and ambitions extended much further: Within a decade of the publication of his notorious review, Chomsky had begun to influence the work of scholars and researchers in a wide range of fields. Philosophers dissatisfied with the reigning logical empiricism (which had fit hand-in-glove with behaviorism) took an interest in the rationalist, Cartesian aspects of Chomsky’s work. Chomsky’s formalistic approach to the seemingly idiosyncratic field of language inspired mathematicians and computer scholars, who detected possible clues to the creation of software that could process or translate language. Experimental psychologists began looking for empirical evidence of the “psychological reality” of syntactic transformations described by Chomsky; for example, they tried to find out whether a sentence generated by more transformations takes a person longer to process than one generated by fewer. (They found that it does not.) Developmental psychologists attempted to write grammars that captured a child’s linguistic competence at different stages. (It turns out that they could.) And as long-time readers of these pages will recall, Chomsky burst onto the American political scene with the publication of “The Responsibility of Intellectuals”2 and other challenging essays on the Vietnam War. Initially a scholar in one of the more obscure disciplines, Chomsky was becoming a major figure both across a variety of fields and outside the academy.

In 1975, Chomsky entered into an intellectual confrontation of another sort. At a symposium at Royaumont, outside Paris, he engaged in a debate lasting several days with Jean Piaget, who was then almost eighty years old. Piaget was not a behaviorist—indeed, he had spent decades in almost solitary opposition to Skinner and his followers—and he accepted many of the same structuralist, formalist, and cognitive ideas as Chomsky did. A conciliator by nature, Piaget seemed quite prepared to have a polite discussion in which he and Chomsky would underscore their many common views and perhaps agree to disagree on a few particulars.


But Chomsky would have none of this. Chomsky debates with great ferocity and delights in not merely distinguishing himself from others but devastating their arguments; he (along with his close colleague, the philosopher Jerry Fodor) proceeded to attack Piaget’s fundamental principles. Piaget, a biologist by training, rejected explanations based on native abilities or “innate ideas” in favor of a perspective which emphasized a progression of more and more complex interactions between human beings and physical objects. For example, virtually all infants eventually learn that objects continue to exist even when they are out of sight, and all schoolchildren eventually understand that the number of objects in a group is independent of the way in which these objects happen to be arranged. According to Piaget, children only come to these understandings after months of experimentation with different objects in different contexts.

To Chomskians, such talk of “interaction” and “context” is beside the point. We know what we know because of our biological heritage. In debate, Chomsky (and Fodor) insisted that human language was only possible because of linguistic structures that derive from the human genome. Where Piaget saw the child’s development as a sequence of qualitatively distinct stages, Chomsky (and Fodor) argued that in explaining the child’s acquisition of language, there was no need for any notion at all of development or learning. Where Piaget believed that language ability depends upon certain experiences in infancy, Chomsky (and Fodor) questioned the need to identify any particular stages on the way to a person’s linguistic competence. Chomsky criticized the followers of Skinner and Piaget for their insensitivity to what was special about language. He challenged their inclination to view language acquisition as part of “general intelligence,” and to assimilate language to other mental representations, such as those involved in visual imagery or the classification of objects.

Chomsky was developing the case for what Fodor called modularity. Instead of treating the mind as if it were an all-purpose computer that deals in the same way with data ranging from linguistic signals to musical tones to visual patterns, Chomsky took the view that the mind consists of a set of quite distinct computational devices. His mission was to lay bare the structure and processes of the “mental organ,” or module, specifically governing language, and to demonstrate that it operated according to its own genetically programmed laws.

At Royaumont Chomsky repeatedly emphasized his view that linguistic rules depend on innate structures which could not conceivably have been induced from experience and therefore must be programmed into the human mind. He noted that in English one converts a declarative into a question by moving a linguistic entity (such as the verb “is”) from the middle to the beginning of a sentence. “The man is here” becomes “Is the man here?” Let us suppose that the mind lacked a built-in rule for converting declaratives into questions. Hearing several examples, the mind would presumably conclude that it should move the first “is” to the beginning of the sentence. The declarative “The man who is here is tall” would then be converted into the incorrect interrogative “Is the man who here is tall?” However, according to Chomsky, the human mind is programmed so that it automatically treats “the man who is here” as a single, indissoluble noun phrase. Thus all speakers of English, including young children who certainly could never have been taught the rule, naturally generate the correct “Is the man who is here tall?”

The empiricist or behaviorist can come up with a number of explanations for such linguistic regularities—for example, imitation, analogy, familiarity, and the melodic contours of spoken phrases. But Chomsky rejects all of these explanations as fundamentally flawed. In his view, only a device that can detect abstract, deep structures and carry out complex computations can acquire a natural language. Human beings, as a species, have been programmed to carry out such computations in language just as they have been programmed to perform the equally distinctive cognitive operations that enable them to recognize a face or find their way around a terrain.


By the 1980s, Chomsky’s once exotic ideas—that human beings are born with specific forms of linguistic knowledge, that the mind is a collection of mental organs, that language stands apart from other cognitive operations—had become a kind of orthodoxy, particularly at strongholds like MIT and the University of Pennsylvania, but increasingly in philosophy, psychology, and linguistics departments throughout the world. Even where such notions were not accepted completely, the Chomskian program had redefined the nature of the debate.

In no field was this more evident than in the study of the way children learn language, a subdiscipline that has come to be known as developmental psycholinguistics. And no scholar has been more prominent in this field in recent years than Steven Pinker, who teaches in the Department of Brain and Cognitive Sciences at MIT. For anyone with even the slightest sympathy for Chomsky’s work, Pinker’s book The Language Instinct is a most impressive achievement. Already much acclaimed for his ingenious research, Pinker demonstrates here a remarkable ability to explain the principal methods and findings of the contemporary study of language. In a manner reminiscent of the work of the paleontologist Stephen Jay Gould, Pinker moves back and forth between the questions and concerns of the intelligent lay reader and those of the specialist.

Pinker begins by citing a number of common-sense ideas, all of which he promises to demolish in the ensuing chapters. These include the notions that different languages construe reality in different ways; that children learn to talk by imitating others; that grammatical expertise is in steady decline in our society; and that English spelling is uniquely illogical. For the most part Pinker’s arguments and examples are convincing, as when he shows that children do not learn to speak simply by imitating adults. In some cultures, he observes, parents do not even address remarks to children, and when parents do monitor their children’s remarks, they correct them for meaning but rarely for grammatical errors. Children use a great many grammatical forms that they never hear from their parents and avoid grammatical errors that might seem likely. Pinker then shows that children’s astonishingly rapid mastery of their native language can be far more easily accounted for if one assumes that they are “wired” to pay attention to certain kinds of phrases (e.g., noun phrases) and to carry out certain kinds of operations (such as always moving them as a whole, as in the above example of the tall man). I expect that many readers will be delighted and informed, if not reformed, by his book.

In fact Pinker’s book is an extended encomium to the work launched forty years ago by Noam Chomsky. Pinker carefully explains the formal methods of linguistic analysis worked out over the decades by Chomsky and his colleagues. While beginning appropriately with the question of syntax, Pinker shows how Chomskian methods can also be applied to problems such as the analysis of the structure of words (morphology), the sounds of a given language (phonology), and systems of versification (prosody). To cite just one example among dozens, Pinker shows that the ways in which human beings create new words strictly follow principles analogous to those discovered by Chomsky in his explorations of syntax. There are specific, testable, and built-in syntactical procedures that explain why we say Darwinian and even Darwinianisms but why we would never say Darwinismian, why baseball batters “flied out” rather than “flew out,” why we listen to “Walkmans” rather than to “Walkmen,” why hockey players are called “Maple Leafs” rather than “Maple Leaves,” and why even three-year-olds know that a monster who likes to eat mice is a “mice-eater” but a monster who likes to eat rats is a “rat-eater” not a “rats-eater.” These chapters emphasize an underappreciated part of Chomsky’s legacy—that his methods have proved useful for many different kinds of linguistic study.

Pinker also attempts to extend Chomsky’s analysis to matters that Chomsky’s followers have not explored. In an ambitious closing chapter called “Mind Design,” Pinker states his personal, politically unfashionable conviction that there is such a thing as universal human nature. He goes on to list fifteen modules, or “families of instincts,” that might eventually be proven by cognitive scientists to make up the universal human mind. The modules may govern such disparate activities as detecting danger, deciding which foods are good to eat, compiling a mental database of different people we encounter, constructing mental maps of large physical territories, and intuitively sensing how things work mechanically—i.e., knowing how objects move and what changes objects can and cannot undergo. Finally, he proposes a module for “mating, including feelings of sexual attraction, love, and intentions of fidelity and desertion.” As he notes, this description of the mind differs dramatically from the one given by standard psychology textbooks, with their predictable chapters on Memory, Attention, Personality, and Decision Making. “I believe that with the exception of Perception and, of course, Language,” he writes, “not a single curriculum unit in psychology corresponds to a cohesive chunk of the mind.”

As one who has himself been much influenced by Chomsky’s ideas and his research, I find myself in substantial agreement with most of Pinker’s conclusions. Nevertheless, in some cases he seems to me not quite fair to the opposition. He is too dismissive of the ways that mothers and others who bring up children help infants to acquire language. While the principles of grammar may indeed be acquired with little help from parents or other caretakers, adults are needed to help children to build a rich vocabulary, master the rules of discourse, and distinguish between culturally acceptable and unacceptable forms of expression.

Pinker is also overly critical of claims deriving from efforts to teach language-like symbol systems to chimpanzees. While chimpanzees have not mastered complex syntactic structures, they have proved to be more skillful communicators than had been supposed; and studies with primates have also sharpened our understanding of deception and other forms of communication that can occur even in the absence of natural language. Pinker too often invokes the superiority of biological science, as when he compares his laboratory research with that of biologists who study the machinery of genes.

Finally, like most Chomskians, Pinker shows little interest in differences among individuals, and, to my way of thinking, insufficient interest in the possibly deep differences among families of languages and the sometimes powerful effects cultural patterns and values can have on the ways languages are used. Pinker shares with his colleagues a preference for unadorned, sometimes even nonsensical sentences, which can be explained simply as the product of a computational linguistic device without reference to who said them and why. (Chomsky’s only entry in Bartlett’s Familiar Quotations is the meaningless but grammatical “Colorless green ideas sleep furiously.”)

While the greatest value of Pinker’s book is in his masterful exposition of the human language faculty, the two most provocative chapters go further. In “The Big Bang” Pinker takes issue with Chomsky’s belief that “a uniquely human language instinct seems to be incompatible with the modern Darwinian theory of evolution.” Pinker argues that language could have evolved through a series of proto-languages in the 350,000 generations since modern Homo sapiens branched off from chimpanzees, the surviving species to which we are most closely related. Disentangling two often conflated lines of argument, he points out the difference between “analogous” traits which (like the wings of birds and those of bees) have a common function but arose independently on different branches of the evolutionary tree, and “homologous” traits which (like the human hand and the bat’s wing) need not share a function but descend from a common ancestor. Then, making an effective analogy to the evolution of the eye, Pinker suggests that a complex adaptation like language could readily have evolved over a period of three to five million years through a “revamping of primate brain circuits that originally had no role in vocal communication, and by the addition of some new ones.”

In “The Language Mavens” (a term borrowed from columnist William Safire) Pinker takes issue with those who purport to be the guardians of the English language. In his view, the mavens are guilty of two sins: an inappropriate disdain for popular linguistic practices that are actually defensible and a lamentable ignorance of linguistics, which renders their explanations flawed or incoherent.

I am quite prepared to believe that William Safire and the late Theodore Bernstein, a New York Times specialist in usage, are drawing on linguistics that is outdated. But I am not persuaded by Pinker’s principal complaints about critics who try to uphold standards of usage. In any culture, there are practices that are widely accepted, others that are considered offensive, and others that are on the margin. If people are to communicate effectively they have to be generally aware of such distinctions. Pinker himself cites words such as “disinterested” and “parameter” that make him shudder when they are misused. But the only clue he gives to his own standards applies to written language. He believes the “clarity and style of written prose” are often defective and should be improved through “practice, instruction, feedback, and—probably most important—intensive exposure to good examples.” But he gives no idea either of his own standards of clarity or of the prose he finds admirable. Arguing against William Safire, he makes a convoluted attempt to justify both Bill Clinton’s “Give Al Gore and I a chance to bring America back” and Barbra Streisand’s description of a tennis star as “very evolved; more than his linear years.” But neither defense is convincing.

Pinker’s failure to address the ways culture shapes language suggests a deficiency in the position of Chomsky, Fodor, and their associates. During the last few years, a number of social scientists have begun to criticize the modular hypothesis, pointing to its possible limitations. From within the field of developmental psycholinguistics, the British researcher Annette Karmiloff-Smith has attempted, in Beyond Modularity, to reconcile Chomsky and Fodor’s nativist position and Piaget’s notion of development, the two views that clashed at Royaumont. Writing as a general commentator on the social sciences, Jerome Bruner has lamented the technical direction taken by the cognitive school of psychology that he helped found. In a series of provocative lectures published under the title Acts of Meaning, he calls instead for a new “cultural psychology.”

Karmiloff-Smith herself studied with Piaget in the 1960s and, while considered a “heretic, both personally and theoretically” among her colleagues (she recalls how she alone refused to call him “Patron”), she subscribed to many of the positions he took at Royaumont. However, Karmiloff-Smith found herself increasingly persuaded by Chomsky’s ideas, and came to believe that language depended more on built-in structures than Piaget had allowed.

Karmiloff-Smith’s studies of language acquisition provide some revealing insights. It had long been observed that children of two or three first use regular and irregular verb forms correctly (he walked, he went, he came). They then go through a phase where they over-generalize the regular past tense (he goed, he comed); at about the age of five they abandon such inventions and go on to use without difficulty two sets of verbs—a large set of regular verbs and a small number of exceptions. Karmiloff-Smith noted that this apparent U-shaped behavior (knowing—forgetting—knowing again) actually reflected a highly constructive enterprise. As children become able to reflect on language, they go on to a more comprehensive and explicit understanding and use of it.

Similarly, when children become aware of the possible dual function of a word, they feel the need to “mark” the specific function that they have in mind. Three-year-old French children comfortably (and correctly) say “une voiture” both for “a car” and for “one car.” Five-year-olds, however, are newly aware of the possible ambiguity in this expression, and so they say “une voiture” when they mean “a car” and “une de voiture” when they mean “one car.” Although grammatically incorrect, the latter phrase has the advantage of conveying unambiguously the child’s meaning of “one” car rather than simply “a” car. By age six or seven, children have given up trying to make this logical but ungrammatical distinction.

Collecting many such examples, Karmiloff-Smith has formulated a more general theory of linguistic and cognitive development. In her view, children initially behave in a way that suggests an implicit, perhaps even instinctual, understanding. But something in the nature of human beings—in contrast to other organisms—stimulates us to go beyond our initial success, to alter our representations and create new knowledge. Any explanation of this fact requires some idea of how children’s thinking develops, an idea which is dismissed by Chomskians. Karmiloff-Smith views development as a set of “representational redescriptions” proceeding from the implicit to the explicit, from unconscious knowledge to conscious knowledge to verbally expressed knowledge.

Having outlined this general process, Karmiloff-Smith describes how it works in five different spheres of mental activity. Her examples include the child as linguist, as in the above examples; the child as physicist; the child as mathematician; the child as psychologist, or “theorist-of-mind”; and the child as a notator, who creates drawings and maps. In each case, Karmiloff-Smith seeks to show that children are able to venture beyond their initial instinctive practices, understandings, and successes. They address inconsistencies and anomalies and end up with a more flexible and more explicit form of understanding.

In Karmiloff-Smith’s treatment of the child as “theorist-of-mind,” she notes that even toddlers of eighteen months or two years are able to pretend; they can treat a stick as a horse or themselves play the role of a dog or a parent. But while very young children can treat objects or people as different from what they actually are, as late as the age of three-and-a-half children do not understand that another person might hold a “false belief,” or incorrect idea. In a typical false belief experiment, a child watches a researcher hide a piece of candy under a box in the presence of a second child. The second child then leaves the room, and the researcher moves the candy to another hiding place. The first child is then asked where the second child will look for the candy upon returning. A three-year-old assumes that the second child must know where the candy has been moved to; the four-year-old, however, understands that he or she knows something the second child does not, and that the second child holds a false belief about the candy. Karmiloff-Smith combines her examples to trace a developmental sequence: a toddler has an implicit understanding that objects can be treated as something else, but only a child who explicitly understands that minds may have access to different information can answer correctly in a false belief experiment.

In putting forward this analysis, Karmiloff-Smith is attempting no less than a reconciliation of the nativism of Chomsky and Fodor and the developmentalism of Piaget. She recognizes that there are in fact separate categories of cognitive activities, each with its own initial structure. And she concedes that Piaget was mistaken in holding that there are general stages of cognitive sophistication which sprawl across and organize a person’s apparently different understandings at a particular moment. But she takes issue with the Chomsky-Fodor claim that modules are necessarily present from the beginning. She makes the intriguing proposal that modularization may perhaps be seen as a product of a child’s development; greater distinctness among the different kinds of mental activity seems to follow from much practice by each “mental muscle.”

Karmiloff-Smith’s central claim, which reveals her fundamental Piagetian outlook, is that the same U-shaped sequence she found in children’s use of language will occur across widely disparate kinds of mental activity. If this is true, she has then indeed discovered a fundamental law of development, one that both rescues the very concept of development and gives support to the notion that quite different mental activities exhibit the same sequential characteristics.

While I admire Karmiloff-Smith’s project, I am not convinced by her specific argument. What works well with her linguistic examples simply does not apply convincingly to other kinds of behavior. She stretches her terminology and twists her examples in order to demonstrate parallel instances of implicit and explicit representation in disparate kinds of mental activity. For example, I find only the loosest analogy between the developmental sequence that takes a child from came to comed and back to came and the sequence that occurs as a child moves from the two-year-old’s pretending that a stick is a horse to the four-year-old’s understanding of others’ false beliefs. The linguistic example involves two versions of the same word or phrase; the account of children’s early ability to pretend and later to perceive false belief lumps together two quite different sequences. In general, Karmiloff-Smith’s analysis works much better for procedures that people gradually master over time, such as playing the piano or using irregular verbs, than for conceptual understandings of the sort arrived at by the very young physicist, mathematician, or theorist of mind.

Indeed, Karmiloff-Smith herself seems to sense the fragile nature of her argument. Her opening examples, designed to introduce the book’s general argument, illustrate what it is like to play a passage on the piano and what it is like to master a Rubik’s Cube. While these examples supposedly describe what happens after one achieves a certain behavioral mastery, they actually point out the many differences between a sequence in which a person masters a procedure, such as playing a passage on the piano, and a sequence, such as playing a logical game, which requires a conceptual analysis.

Despite this central criticism, however, Karmiloff-Smith’s work is important for the questions it raises. For example, following a line of inquiry avoided by Chomskians, she explores the relations between two contrasting aspects of the mind—the internal modules available to all members of the human species (such as those governing language and simple arithmetic), and culturally defined fields of knowledge (such as Newtonian physics or the ability to use Cartesian-style maps). Most significantly, Karmiloff-Smith is trying to mediate between the Piagetian and Chomskian approaches, a necessary and neglected critical task.

When he was the co-director of the influential Center for Cognitive Studies in the 1960s, Jerome Bruner would have been vitally interested in these topics. And as a psychologist familiar with the work of both Piaget and Chomsky, he might well have contributed substantially to the debate. But in Acts of Meaning, a set of lectures delivered in Jerusalem in 1989, Bruner takes a dim view of cognitive science in recent decades. As he sees it, cognitive specialists have become increasingly caught up in making models of tiny machine-like sequences that take place in the brain—models that ape computer programs—as their interest has shifted from “the construction of meaning to the processing of information.” In so doing they have lost sight of the humanist ideas that originally animated the cognitive revolution.

Bruner’s new “cultural psychology” is centered, he says, on “the concept of meaning and the process by which meanings are created and negotiated within a community.” Bruner laments his colleagues’ rejection of commonsense “folk psychology,” his term for “the culturally shaped notions in terms of which people organize their views of themselves, of others, and of the world in which they live.” For too long cognitivists, like the behaviorists before them, have tended to avoid any consideration of human beings as active agents in pursuit of goals and in search of meaning. In a direct challenge to his contemporaries, Bruner writes:

To understand man you must understand how his experiences and his acts are shaped by his intentional states…. The form of these intentional states is realized only through participation in the symbolic systems of the culture. Indeed, the very shape of our lives—the rough and perpetually changing draft of our autobiography that we carry in our minds—is understandable to ourselves and to others only by virtue of those cultural systems of interpretation.

Bruner is certainly right in noting that this way of conceiving the sciences of the mind is remote from the approach most cognitivists take to their work. Bruner calls for a “contextualist” revolution, drawing his single example of a new “cultural psychology” from experience that is equally remote from the work of the modularists and the anti-modularists. Bruner introduces us to the Goodhertzes, a self-described “close” family consisting of two parents and four grown children. George Goodhertz is “a self-made man in his sixties, a heating contractor dedicated to work but just as proud of his role as a trusted man in the community.” His wife Rose is “a second-generation Italian-American, very family oriented, much involved with old friends in the Brooklyn neighborhood where they’ve lived for thirty years, ‘a Catholic and a Democrat.’” All but one of their four grown children live nearby and meet regularly for family meals in Brooklyn.

For a year Bruner and his associates recorded the oral biographies of each family member and also observed the family members together. The deliberately broad investigation was designed to find out what it is like to be an individual Goodhertz and a member of a family that, like most families, is both typical and idiosyncratic.

To give “a sense of how research can be conducted in the spirit of cultural psychology,” Bruner concentrates on a central theme that emerges from his talks with the Goodhertz family. Despite numerous differences, each of the family members unconsciously draws a distinction between “home” and “the real world.” Home is safe, intimate, tolerant, somewhat boring. The real world is dangerous, exciting, unforgiving. Family stories center on the confrontation between these worlds and the interaction between a “legitimizing ‘real Self’ and the instrumental ‘street-smart’ Self that protects them from the ‘real world.’” Bruner sees the “real” Goodhertz self, identified with the home, as typical of the contemporary “privatization of meaning and the self” that has been commented on by many writers. And he claims that a sensitive cultural psychology can draw on narratives such as the ones he extracted from the Goodhertzes to show how the self emerges in different cultural milieus.

Bruner sees himself as returning to the original mission of the cognitive revolution. But as one who has tried to chronicle that revolution, I have difficulty recognizing Bruner’s version of what happened in the 1950s.3 While it is true that Bruner himself always maintained a keen interest in more humanistic studies, the “tilt toward meaning” he refers to as one of the emphases of the early cognitive psychologists was hardly characteristic of the other founding fathers of cognitive science. And even Bruner kept his “softer interests” at a distance. In A Study of Thinking (1956), a work that helped to launch the cognitive revolution, Bruner concentrated on the familiar issue of how human beings categorize and classify objects and forms. While his work was useful for examining the strategies individuals use, it could readily be simulated on a computer and largely ignored issues of meaning and culture. Only occasional essays, such as those reprinted in On Knowing: Essays for the Left Hand (1962, 1979), revealed that a cognitive psychologist like Bruner might speculate about “art as a way of knowing,” or “the conditions of creativity,” or issues of identity in the modern novel.

I see Bruner today as deeply influenced by literary studies of narrative on the one hand and by anthropology on the other. Drawing on the work of such critics as Paul Ricoeur, Bruner proposes the existence of two primary modes of knowing: a logical or scientific mode, usually explored (and epitomized) by scientists; and a narrative mode, of interest not only to artists and writers but to all of us who try to make sense of our daily experiences. From anthropological writings, particularly those of Clifford Geertz, Bruner has been stimulated to investigate the particular cultural milieus in which people like the Goodhertzes have been formed, the conceptions they have of themselves, and the ways in which they think, imagine, and make decisions. Bruner’s writings appeal to readers like me who have long despaired of psychology as a unitary pursuit and who view it instead as a computer-centered cognitive science at one extreme and as a study of different cultural idioms at the other. But it must be noted that Bruner’s vision is as remote from Chomsky’s syntactic structures as the study of the genetic code is from the taxonomy of animal species.

Or is it even more remote? When we speak of biology, we believe we will eventually be able to specify the relation of genes to the species. When we speak of physics, we reflect a belief in a link between subatomic particles and the cosmos. We trust that experts can help us to trace the great chain of being that joins the microscopic with the macroscopic. It would be reassuring to believe that one can trace a path from the kinds of syntactic parsing described by Chomsky to the kinds of learning traced by Karmiloff-Smith to the autobiographical patterns discerned by Bruner. But at present no such path is being traced—the psychological sciences remain a set of feuding schools, and we have not yet had our Newton, Darwin, or Einstein.

In Verbal Behavior Skinner recalls an evening in 1934 when he found himself sitting next to the great philosopher Alfred North Whitehead at dinner. Skinner explained his new ideas enthusiastically to Whitehead, who listened with some sympathy. After a while, Whitehead said, “Let me see you account for my behavior as I sit here saying ‘No black scorpion is falling upon this table.’” Skinner reports that the next morning he drew up plans for the study reported in his book a quarter of a century later. We now know that Skinner’s effort was flawed. What remains to be seen is whether Chomsky, Pinker, Karmiloff-Smith, Bruner, or their associates, individually or collectively, can shed more light on the question Whitehead asked—a question about the sources of human distinctiveness, imagination, and playfulness. Perhaps an entirely different perspective will be required.

This Issue

March 23, 1995