
End of the Revolution

1.

Almost three decades ago I reviewed in these pages a striking development in the study of language that I called “Chomsky’s Revolution in Linguistics.”1 After such a long time it would seem appropriate to assess the results of the revolution. This article is not by itself such an assessment, because to do an adequate job one would require more knowledge of what happened in linguistics in these years than I have, and certainly more than is exhibited by Chomsky’s new book. But this much at least we can say. Judged by the objectives stated in the original manifestoes, the revolution has not succeeded. Something else may have succeeded, or may eventually succeed, but the goals of the original revolution have been altered and in a sense abandoned. I think Chomsky would say that this shows not a failure of the original project but a redefinition of its goals in ways dictated by new discoveries, and that such redefinitions are typical of ongoing scientific research projects.

The research project of the revolution was to work out for each natural language a set of syntactical rules that could “generate” all the sentences of that language. The sense in which the rules generate the sentences is that any speaker, or even a machine, that followed them would produce sentences of the language and, if the rules are complete, could produce any of its potentially infinite number of sentences. The rules require no interpretation and do more than just generate patterns: applied mechanically, they are capable of generating every sentence of the language.

Syntax was regarded as the heart of linguistics and the project was supposed to transform linguistics into a rigorous science. A “grammar,” in the technical sense used by linguists, is a theory of a language, and such theories were called “generative grammars.” Stated informally, some rules of English are that a sentence can be composed of a noun phrase plus a verb phrase, that a verb phrase can consist of a verb plus a noun phrase, and that a noun phrase can be composed of a “determiner” plus a noun; nouns can be “woman,” “man,” “ball,” “chair”…; verbs can be “see,” “hit,” “throw”…; determiners can be “the,” “a”…. Such rules can be represented formally in the theory as a set of instructions to rewrite a symbol on the left side as the symbols on the right side. Thus,

S → NP + VP

VP → V + NP

NP → Det + N

N → man, woman, ball…

V → hit, see, throw…

Det → a, the…

This small fragment of an English grammar would be able to generate, for example, the sentence
> The man hit the ball.
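To see how mechanical the process is, here is a minimal sketch of this fragment in Python (my illustration, not anything from Chomsky or the review): the table transcribes the six rules above, and the generator simply rewrites symbols until only words remain.

```python
import random

# The six rewrite rules above, as a table: each left-hand symbol
# may be rewritten as one of the sequences on the right.
RULES = {
    "S":   [["NP", "VP"]],
    "VP":  [["V", "NP"]],
    "NP":  [["Det", "N"]],
    "N":   [["man"], ["woman"], ["ball"], ["chair"]],
    "V":   [["hit"], ["see"], ["throw"]],
    "Det": [["the"], ["a"]],
}

def generate(symbol="S"):
    """Apply the rewrite rules mechanically; no interpretation is needed."""
    if symbol not in RULES:               # a word: nothing left to rewrite
        return [symbol]
    words = []
    for part in random.choice(RULES[symbol]):
        words.extend(generate(part))
    return words

print(" ".join(generate()))               # e.g. "the man hit the ball"
```

As it stands, of course, this fragment generates only finitely many sentences; the infinite capacity comes from recursive rules, for example one that lets a noun phrase contain a further sentence.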

Such rules are sometimes called “rewrite rules” or “phrase structure rules” because they determine the elementary phrase structure of the sentence. Chomsky argued that such rules are inadequate to account for the complexities of actual human languages like English, because some sentences require that a rule apply to an element not just in virtue of its form, but in virtue of how it got that form, the history of how it was derived. Thus, for example, in the sentence
> The chicken is ready to eat

even though the words are not ambiguous, the sentence as a whole is syntactically ambiguous depending on whether “chicken” is the subject or the object of “eat.” The sentence can mean either that the chicken is ready to eat something, or that the chicken is ready for something to eat it. To account for this ambiguity, Chomsky argued, we have to suppose that the sentence is the surface expression of two different underlying sentences: it is the result of applying rules that transform two different underlying, or deep, structures. Such rules are called transformational rules, and Chomsky’s version of generative grammar was often called “transformational grammar” because of the argument for the necessity of such rules. In the classical versions of the theory, the phrase structure rules determined the “deep structure” of the sentence, the bearer of meaning; the transformational rules converted deep structure into surface structure, something that could be uttered. In the example of the chicken above, there is one surface structure, the sentence I have quoted, and two deep structures, one active, one passive.
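The many-to-one character of the mapping can be sketched in the same toy style. The “deep structures” and the “transformation” below are invented stand-ins for illustration, not Chomsky’s actual formalism:

```python
# Two invented deep structures for the chicken sentence, with the
# understood arguments of "eat" made explicit:
deep_active  = ("the chicken", "eat", "SOMETHING")   # the chicken will eat
deep_passive = ("SOMEONE", "eat", "the chicken")     # the chicken will be eaten

def transform(subject, verb, obj):
    """A stand-in transformation: promote "the chicken" to surface
    subject position and delete the unspecified argument."""
    topic = subject if subject == "the chicken" else obj
    return f"{topic} is ready to {verb}"

# One surface string, two underlying analyses:
print(transform(*deep_active))    # the chicken is ready to eat
print(transform(*deep_passive))   # the chicken is ready to eat
```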

It was a beautiful theory. But the effort to obtain sets of such rules that could generate all and only the sentences of a natural language failed. Why? I don’t know, though I will suggest some explanations later. But seen from outside, a striking feature of the failure is that in Chomsky’s later work even the apparently best-substantiated rules, such as the rule for forming passive sentences from active sentences, have been quietly given up. The relation between “John loves Mary” and “Mary is loved by John” seemed elegantly explained by a transformational rule that would convert the first into the second. Apparently nobody thinks that anymore.
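For the record, the abandoned rule was roughly of the schematic form NP1 - V - NP2 becomes NP2 - be + V-en - by - NP1. A toy rendering, with the participle hand-fed rather than derived morphologically, might look like this:

```python
def passivize(np1, verb_participle, np2):
    """Toy passive transformation: NP1-V-NP2 -> NP2-be-V_en-by-NP1.
    The participle is supplied by hand; real morphology is elided."""
    return f"{np2} is {verb_participle} by {np1}"

print(passivize("John", "loved", "Mary"))  # Mary is loved by John
```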

Another feature of the early days was the conviction that human beings were born with an innate brain capacity to acquire natural human languages. This traditional view—it goes back at least to the seventeenth century—seemed inescapable, given that a normal infant will acquire a remarkably complex system of rules at a very early age with no systematic teaching and on the basis of impoverished and even defective stimuli. Small children pick up a highly competent knowledge of a language even though they get no formal instruction and the utterances they hear are limited and often not even grammatical.

The traditional objection to this “innateness hypothesis” (Chomsky always objected to this term, but it seems reasonable enough) was that languages were too various to be accounted for by a single brain mechanism. Chomsky’s answer was that the surface variety of languages concealed an underlying structure common to all human languages. This common structure is determined, he wrote, by an innate set of rules of Universal Grammar (UG). The innate mechanism in the brain that enables us to learn language is so constituted that it embodies the rules of UG; and those rules, according to Chomsky, are not rules we can consciously follow when we acquire or use language. I think the official reason for the abandonment of the research program was that the sheer complexity of the different rule systems for the different languages was hard to square with the idea that they are really all variations on a single underlying set of rules of UG.

There were, as might be expected, a number of objections to Chomsky’s proposals. I, for one, argued that the innate mechanism that enables the child to acquire language could not be “constituted” by—i.e., made up of—rules. There are no rules of universal grammar of the sort that Chomsky claimed. I argued this on a number of grounds, the chief being that no sense had been given to the idea that there is a set of rules that no one could possibly consciously follow: if you can’t follow them consciously then you can’t follow them unconsciously either. I also argued that, precisely to the extent that the mechanism was innate and applied automatically, it was empty to suppose that its application consisted in rule-governed behavior. No sense, I wrote, had been given to the idea of rules so deeply buried in unconscious brain processes that they were not even the sort of things that could be consciously followed.

Just as a child does not follow a rule of “Universal Visual Grammar” that prohibits it from seeing the infrared or ultraviolet parts of the electromagnetic spectrum, so the child does not follow rules of Universal Linguistic Grammar that prohibit it from acquiring certain sorts of languages but not others. The possibilities of vision and language are already built into the structure of the brain and the rest of the nervous system. Chomsky attempted to answer my arguments in a number of places, including the book under review. But in the case of UG he has given up the idea that there are rules of universal grammar.

In his recent book, as well as in other works (most importantly, The Minimalist Program2), Chomsky advances the following, much more radical, conception of language: the infant is indeed born with an innate language faculty, but it is not made up of any set of rules; rather it is an organ in the brain that operates according to certain principles. This organ is no longer thought of as a device for acquiring language, because in an important sense it does not so much acquire as produce any possible human language in an appropriate environment. Chomsky writes,

We can think of the initial state of the faculty of language as a fixed network connected to a switch box; the network is constituted of the principles of language, while the switches are the options to be determined by experience. When the switches are set one way, we have Swahili; when they are set another way, we have Japanese. Each possible human language is identified as a particular setting of the switches—a setting of parameters, in technical terminology. If the research program succeeds, we should be able literally to deduce Swahili from one choice of settings, Japanese from another, and so on through the languages that humans can acquire. [my italics]

According to this view, the possibility of all human languages is already in the human brain before birth. The child does not learn English, French, or Chinese; rather, its experiences of English set the switches for English and out comes English. Languages are neither learned nor acquired. In an important sense they are already in the “mind/brain” at birth.
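The switch-box metaphor is concrete enough to sketch. In the toy below, the “principles” are one fixed linearization procedure and the “parameters” are two switches. The parameter names (head direction and pro-drop) are real examples from the principles-and-parameters literature, but the implementation is my illustration, not Chomsky’s:

```python
# The fixed "network": one linearization principle that consults
# two binary switches. Only the switch settings vary by language.
def linearize(subject, verb, obj, head_initial, pro_drop):
    verb_phrase = [verb, obj] if head_initial else [obj, verb]
    words = verb_phrase if pro_drop else [subject] + verb_phrase
    return " ".join(words)

# English-like settings: verb before object, subject required.
print(linearize("she", "reads", "books", head_initial=True, pro_drop=False))
# Japanese-like settings: object before verb, subject omissible.
print(linearize("she", "reads", "books", head_initial=False, pro_drop=True))
```

The same procedure, under different settings, yields “she reads books” in one case and the verb-final, subjectless “books reads” in the other; on this view that difference is all a language-particular grammar amounts to.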

What happens, then, to the rules of grammar? Chomsky writes that

This “Principles and Parameters” approach, as it has been called, rejected the concept of rule and grammatical construction entirely: there are no rules for forming relative clauses in Hindi, verb phrases in Swahili, passives in Japanese, and so on. The familiar grammatical constructions are taken to be taxonomic artifacts, useful for informal description perhaps but with no theoretical standing. They have something like the status of “terrestrial mammal” or “household pet.”

The overall conception of language that emerges is this: a language consists of a lexicon (a list of elements such as words) and a set of computational procedures. The computational procedures map strings of lexical elements onto a sound system at one end and a meaning system at the other. But the procedures themselves don’t represent anything; they are purely formal and syntactical. As Chomsky says,

The computational procedure maps an array of lexical choices into a pair of symbolic objects…. The elements of these symbolic objects can be called “phonetic” and “semantic” features, respectively, but we should bear in mind that all of this is pure syntax and completely internalist.
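The shape of that claim can be put as a type signature: a function from an array of lexical choices to a pair of symbol structures, with nothing in the function that “represents” sound or meaning. The feature notation below is invented for illustration:

```python
from typing import List, Tuple

def compute(lexical_array: List[str]) -> Tuple[List[str], List[str]]:
    """Map an array of lexical choices to a (phonetic, semantic) pair.
    Both outputs are just more symbols, produced by formal rules:
    the procedure is, in Chomsky's phrase, pure syntax."""
    phonetic = [f"/{item}/" for item in lexical_array]         # sound-side symbols
    semantic = [f"{item.upper()}'" for item in lexical_array]  # meaning-side symbols
    return phonetic, semantic

print(compute(["the", "man", "hit", "the", "ball"]))
```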

1. The New York Review, June 29, 1972.

2. MIT Press, 1995.
