Calling deafness “one of the most desperate of human calamities,” Dr. Johnson was expressing the classic “pathological” view of deafness: it is a physical defect. Unable to hear or speak, the deaf are usually thought to be cut off from language, and therefore from social life and from the knowledge and culture transmitted from previous generations.
But what if the deaf have their own language—not oral but signed—as rich and expressive as any oral language, and as suitable for discussing science, art, or any other topic? What if this language is central to an independent culture of the deaf, with its own history and traditions, its own art forms and poetry? The deaf would then have to be viewed not as sharing a common pathology, but as a linguistic and cultural minority. This is the “linguistic-cultural” view of deafness. It challenges the view of deafness as a pathological condition to be treated or corrected, and concentrates instead on a community with its own language, traditions, and culture.
The very term “deaf” is rooted in the pathological view of deafness, for it designates people whose status as “deaf” is determined solely by their inability to hear. This includes the “prelingually” deaf, who are born deaf or become deaf before learning to speak, and the “postlingually” deaf, who become deaf after acquiring an oral language. From the standpoint of the linguistic-cultural view of deafness, the crucial distinction is between those who use a sign language as their primary language—who are customarily designated by the word “Deaf”—and those who do not. People who cannot hear but do not sign or participate in the life and culture of a Deaf community are deaf but not Deaf. According to the linguistic-cultural view of deafness, to be deaf but not Deaf is indeed a calamity; inability to sign cuts one off from the Deaf community, just as inability to hear cuts one off from the hearing world. Learning to sign takes one from pathology to membership in a community with a rich culture that is passed on from generation to generation.
In Seeing Voices: A Journey into the World of the Deaf, the neurologist and writer Oliver Sacks takes us on a personal journey from the pathological to the linguistic-cultural view of deafness. In his preface, Sacks states plainly:
I am, I should emphasize, an outsider in this field—I am not deaf, I do not sign, I am not an interpreter or teacher, I am not an expert on child development, and I am neither a historian nor a linguist.
The reader discovers that alongside the world we know there exists a parallel Deaf world—in some ways like the hearing world and in some ways very different.
Dr. Johnson’s view of deafness was not unreasonable before the late eighteenth century. Unable to acquire speech, the deaf were viewed as “dumb” or imbecilic and not recognized as persons under the law. Without literacy or education, they were limited to the most menial work and their economic situation was often desperate. Except in places where there were enough deaf people to make up a community, they lived in isolation, deprived of any means of communicating with their fellows. Their fate attracted the attention of the philosophes, who asked what kept the deaf in this deplorable state. Sacks writes that “to ask this question—never really or clearly asked before—is to grasp its answer.” Without language, intellectual or social development is impossible. But how could the deaf acquire language? Attempts to teach the deaf to speak had proved extremely difficult, succeeding only with those who had some experience of speech—whether through residual hearing or by having learned to speak before becoming deaf. For those born entirely or almost entirely deaf, learning to speak was extremely difficult, if not impossible.
Like many problems that resist solution, this one was rooted in a basic unconscious assumption: the equating of language with speech. It took the Abbé de l’Epée, a French cleric who wanted to bring the deaf the word of God, to discover that there were deaf people in Paris who already had a language—not oral but signed. L’Epée learned their language and approached them through it, developing a method of teaching them to read and write French. For the first time deaf people were able to acquire an education. L’Epée’s school, founded in 1755, trained teachers who by his death in 1789 had established twenty-one schools for the deaf in France and Europe. Sacks describes the results:
This period—which now seems a sort of golden period in deaf history—saw the rapid establishment of deaf schools, usually manned by deaf teachers, throughout the civilized world, the emergence of the deaf from neglect and obscurity, their emancipation and enfranchisement, and their rapid appearance in positions of eminence and responsibility—deaf writers, deaf engineers, deaf philosophers, deaf intellectuals, previously inconceivable, were suddenly possible.
The first liberation of the deaf occurred in the late eighteenth century because, for the first time, the deaf had been approached through sign.
The school founded by the Abbé de l’Epée had much to do with the genesis of American Sign Language (ASL). Laurent Clerc, a Deaf graduate of the school, introduced French Sign Language as the language of instruction in the first American school for the deaf in Hartford, Connecticut, in 1817. ASL emerged from a blending of French Sign Language with the sign languages already in use in America, brought to the school by its pupils. The school became the center of a signing community and sent its graduates to teach in new schools for the deaf, spreading ASL throughout the United States and to most of Canada. Passed from generation to generation for almost two hundred years, ASL is now the language of a community of several hundred thousand.
ASL shares structural features with Irish Sign Language, Swedish Sign Language, Dutch Sign Language, Latvian Sign Language, Swiss Sign Language, Austrian Sign Language, Italian Sign Language, and Spanish Sign Language; these languages are said to be “related” because they all evolved from French Sign Language,1 which was introduced into schools for the deaf in all these countries by graduates of L’Epée’s school and merged with local sign languages already in use. There are also many differences arising from the differences among the local sign languages with which French Sign merged or from independent changes during the past two centuries. ASL and British Sign Language, having no common ancestor, are not related and are mutually unintelligible. Taiwan Sign Language is related to Japanese Sign Language and Korean Sign Language. These Asian sign languages are not related to the sign languages of Europe or their offshoots in America such as ASL.
Some forms of manual communication look like “signing” but are not ASL. One is “fingerspelling,” in which a hand configuration or “handshape” corresponds to each letter of the alphabet. Like the Morse code, this is only a way of representing words written in the Latin alphabet. There are also forms of “signed English” invented by educators to teach English to deaf children. Perhaps the most widely used is Signing Exact English (SEE), which borrows many signs from ASL. A SEE sentence is simply English transposed word for word into signs, following English syntax in every detail. Also transposed into sign are verbal suffixes such as -s, -ed, -en, and -ing, as well as prefixes and suffixes that form derived words in English, e.g. -al (refusal), -ment (amendment), -ship (scholarship), -ous (dangerous), mis- (misunderstand), and so on. SEE is thus a manual representation of English, not an independent language.
Many hearing people confuse these manual communication systems with ASL, now emerging from a long period as a stigmatized “underground language” used by the Deaf at home and in the Deaf community but not with the hearing. While ASL enjoyed relatively wide acceptance through most of the nineteenth century, by the end of the century there was a movement away from signing and in favor of speech training and lip-reading in deaf education. Many educators viewed signing as an imprecise, defective mode of communication. After the International Congress of Educators of the Deaf in Milan in 1880, sign was officially banned from schools for the deaf, first in Europe and then in the United States. The shift from sign to speech in American schools for the deaf brought a shift from Deaf to hearing teachers, many of whom did not know ASL. But ASL continued as a widely used underground language. Despite educational policies that forbade and sometimes punished its use, residential schools for the deaf were vital to the survival and spread of ASL. They created communities of deaf children which included fluent signers who had learned ASL from infancy in Deaf homes and spread ASL to the roughly 90 percent of deaf children with hearing parents.
Beginning in the early 1960s, and accelerating rapidly through the 1970s and 1980s as linguistic research on ASL began to reveal its status as an independent language and as a new generation of Deaf performers and poets began to find new ways to exploit the artistic and poetic potential of the language, the Deaf community began to experience a change in consciousness that made the language and its culture a source of pride.2 This new consciousness burst upon an unsuspecting world in March 1988 through events at Gallaudet University in Washington, DC, the only liberal arts university for the deaf. Angered by the Board of Trustees’ appointment of a new president ignorant of their language and culture and by the chairman’s inflammatory statement that “the deaf are not yet ready to function in the hearing world,” Deaf students mounted a revolt against the pathological view of deafness.
Sacks visited the campus, talked with students, and gives a moving account of the strike, which took the hearing world by surprise. While educators and others had debated the pathological and linguistic-cultural views of deafness for two centuries, most hearing people were unaware that there was anything to debate. If they had thought about deafness at all, the pathological view seemed self-evident. They had never been introduced to the linguistic-cultural view, whose understanding requires a knowledge of the language and culture of Deaf communities which few possess.3
What separates the two views of deafness is the answer they give to a central question: Do the Deaf have a language of their own that is comparable to oral languages such as English, Japanese, Navajo, Yiddish, and others that are transmitted orally from one generation to the next?4 The different answers spring from fundamentally opposed views of language. The pathological view is based on the idea that only speech or its representation in writing is language. It assumes that a sign language is somehow deficient—not a true language but a substitute used by people whose deafness makes language inaccessible. Advocates of the linguistic-cultural view of deafness maintain that sign languages are as effective as oral languages, providing the deaf a means of communication, full membership in a community, and participation as equals in its life and culture.
Sacks’s journey into the Deaf world begins with his encounter with alingualism (languagelessness) in isolated deaf people who acquired neither sign nor speech. Having described the inner worlds of patients with unusual neurological disorders in earlier books, Sacks is fascinated with the differentness of the alingual deaf. What is it like not to have language? Which aspects of cognition depend on language and will fail to develop if language does not? What are the effects of language deprivation on a person’s development? Sacks describes an eleven-year-old boy who had learned neither sign nor speech:
Joseph longed to communicate, but could not. Neither speaking nor writing nor signing was available to him, only gesture and pantomime, and a marked ability to draw. What has happened to him, I kept asking myself? What is going on inside, how has he come to such a pass? He looked alive and animated, but profoundly baffled: his eyes were attracted to speaking mouths and signing hands—they darted to our mouths and hands, inquisitively, uncomprehendingly, and, it seemed to me, yearningly. He perceived that something was “going on” between us, but he could not comprehend what it was—he had, as yet, almost no idea of symbolic communication, of what it was to have a symbolic currency, to exchange meaning.
In striking contrast to the alingual deaf, Sacks describes the Deaf who learn to sign in childhood, remarking on their normal intellectual, emotional, and social development.
Discovering that sign is language, Sacks comes to see Dr. Johnson’s pathological view of deafness as a fallacy: the calamity is not deafness but alingualism, for which sign is the remedy. Like many hearing people, Sacks had assumed that sign is a language substitute—inadequate or inferior in some way. The book describes how he came to reject this view, and with it the pathological view of deafness. More than any other experience, his visit to Gallaudet University changed his mind:
I had never before seen an entire community of the deaf nor had I quite realized (even though I knew this theoretically) that Sign might indeed be a complete language—a language equally suitable for making love or speeches, for flirtation or mathematics. I had to see philosophy and chemistry classes in Sign; I had to see the absolutely silent mathematics department at work; to see deaf bards, Sign poetry, on the campus, and the range and depth of the Gallaudet theater; I had to see the wonderful social scene in the student bar, with hands flying in all directions as a hundred separate conversations proceeded—I had to see all this for myself before I could be moved away from my previous “medical” view of deafness (as a “condition,” a deficit, that had to be treated) to a “cultural” view of the deaf as forming a community with a complete language and culture of its own.
A key aspect of the assumption that sign is a language substitute is the idea that speech is somehow more “natural,” and that sign is used by the Deaf only because speech is inaccessible. This assumption was undermined for Sacks by an encounter with some of the older residents of Martha’s Vineyard in Massachusetts, who, though hearing, had learned to sign as children when parts of the Vineyard had a large Deaf population. Sacks describes a lady in her nineties who would sometimes fall into a peaceful reverie in which she signed to herself. “Even in sleep…the old lady might sketch fragmentary signs on the counterpane—she was dreaming in Sign.” Sacks concludes:
Such phenomena cannot be accounted as merely social. It is evident that if a person has learned Sign as a primary language, his brain/mind will retain this, and use it, for the rest of that person’s life, even though hearing and speech be freely available and unimpaired. Sign, I was now convinced, was a fundamental language of the brain.
What was the deeply ingrained assumption that this experience dislodged? It seems to have been the idea that sign is a substitute for language used only by those who cannot use speech. What convinced Sacks was the use of sign when speech and hearing were available.
Presenting himself as a traveler to a foreign land, Sacks enables readers to experience vicariously his discovery of the contrasting lives of the signing Deaf and the nonsigning deaf. As an outsider, Sacks starts with many of the same assumptions his readers are likely to have, and his book’s greatest contribution is in making accessible to the general reader a sympathetic portrayal of the linguistic-cultural view of deafness and of the discoveries that led him to it.
Sacks’s longest chapter is largely devoted to ASL itself and here a number of difficult questions arise. The cornerstone of the linguistic-cultural view of deafness is the fact that the Deaf have their own language and a culture centered around it. To understand this is to give up some of the widespread misconceptions I have mentioned and to see that ASL is not a manual version of English; that it was not invented or brought to the deaf by hearing people; and that it developed naturally in deaf communities just as oral languages develop in hearing communities. Study of its structure has revealed an independent language with a grammar different from English and the European languages most of us know best, but not dissimilar to what is found in other languages.
A major obstacle to understanding ASL is the tendency to see it as very different simply because it is signed. Superficially, sign languages appear very different from oral languages. Unlike the words of oral languages, many signs have an “iconic” basis, that is, the movement of the hands and body suggests the physical identity of the object the signer wants to refer to. It may even seem that sign languages have no grammar at all. Only when they are analyzed as systems—when sentences and individual signs are broken down into their component parts, revealing the ways those parts are combined—can we see how much they resemble oral languages.
This point can be illustrated with the ASL signs for pronouns, which initially appear very different from the words of oral languages. To say “I” or “me,” I point at myself with my index finger. To say “you,” I use my index finger to point at the person to whom I am signing. Pointing in another direction with the index finger means “he/him” or “she/her.” Unlike the pronouns of oral languages, these signs initially appear to be simple iconic gestures: one simply points at the person referred to.
This makes ASL appear to lack three properties found in oral languages. First, in oral languages the relation between a word and its meaning is conventional and generally arbitrary. Except for familiar words borrowed from other languages and onomatopoeia, when one first hears a word of a foreign language its meaning is not evident; pairings of sound and meaning are specific to a particular language and arbitrary. For example, there is no inherent connection between the sequence of sounds represented by “b,” “e,” and “d” in the English word bed and the concept “bed”; they are paired by an arbitrary convention of English. Other languages have equally arbitrary conventions pairing this concept with other sequences of sounds. In contrast, the pointing gestures that ASL uses as pronouns may seem to be not arbitrary but direct representations of the meaning; they seem understandable with no previous knowledge of the language.
Second, sentences, phrases, and words in oral languages are compositional: they consist of smaller units combined in different ways to convey different meanings. The simple pointing gestures of ASL pronouns appear to lack this compositionality. Third, pronouns in oral languages make grammatical distinctions of person, number, gender, and/or case which ASL’s pointing gestures appear not to make.
An examination of the ASL pronouns as a system, however, reveals all three properties found in oral languages: a conventional relation between form and meaning, compositionality, and grammatical distinctions. The ASL pronoun system is compositional in that it consists of smaller units combined in different ways to convey different meanings. These smaller units are the same types of units of which all signs are composed: movement, handshape, and “orientation,” i.e., the direction in which the hands face when the sign is made.5 Signs that are alike in other respects but differ only in movement differ in meaning. Identical movements performed with different handshapes (e.g., with different fingers extended, with a fist, a flat hand, etc.) are distinct signs with different meanings. Two signs with the same movement and handshape but with a different orientation of the hand or hands are different signs. Movement, handshape, and orientation, which serve to distinguish signs in general, are also the smaller units that comprise ASL pronoun signs.
Although the pronoun signs at first appear to be iconic gestures, they are not; they are the first-, second-, and third-person singular pronouns in an intricate system. If I make the same movement but use a flat hand (with fingers touching) instead of the extended index finger, I change the pronoun’s case to possessive. With the palm of the flat hand oriented toward me the sign means “my”; when it is oriented toward the person being addressed the sign means “your”; and with a third orientation it means “his/her.” The extended index finger and the flat hand are two of ASL’s distinctive handshapes. Here they indicate different grammatical cases. Orientation of the hand indicates the first, second, or third person.
If, instead of a simple pointing movement, I make an arc movement with the extended index finger, I have made a plural pronoun. An arc movement with the index finger extended toward me is first-person plural; toward the person being addressed it is second-person plural; and extended in another direction it is third-person plural.6 If I make the same arc movement with the flat hand, the resulting pronoun is both possessive and plural: facing toward me it is first-person plural possessive “our”; facing the person being addressed it is second-person plural possessive “your”; and oriented in a third direction it is third-person plural possessive “their.”
These signs thus form a system with the essential linguistic properties found in oral languages. The pronouns’ forms are compositional and they make grammatical distinctions: the orientation of the hand indicates grammatical person, the handshape indicates grammatical case, and the type of movement (pointing vs. moving the hand in an arc) indicates grammatical number (singular vs. plural). The relation between form and meaning is conventional; one must know the language to know, for example, that the flat hand indicates a possessive pronoun and an arc movement a plural. Systematic differences in the form of signs yield corresponding differences in meaning.
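The compositional system described above can be sketched in code. The sketch below models the three independent dimensions of the singular and plural pronoun signs (handshape, movement, orientation) and the grammatical value each contributes; the feature names are simplified labels invented for this illustration, not standard linguistic notation.

```python
# Illustrative sketch of the compositional ASL pronoun system described
# above. Each formal component contributes one grammatical feature;
# the labels are simplified stand-ins, not linguistic transcription.

HANDSHAPES = {"index_finger": "plain", "flat_hand": "possessive"}      # case
MOVEMENTS = {"point": "singular", "arc": "plural"}                     # number
ORIENTATIONS = {"toward_signer": "first",                              # person
                "toward_addressee": "second",
                "third_direction": "third"}

def pronoun_meaning(handshape, movement, orientation):
    """Compose a pronoun's grammatical meaning from its three parts."""
    return {
        "case": HANDSHAPES[handshape],
        "number": MOVEMENTS[movement],
        "person": ORIENTATIONS[orientation],
    }

# Changing exactly one formal component changes exactly one feature:
my_sign = pronoun_meaning("flat_hand", "point", "toward_signer")
# {"case": "possessive", "number": "singular", "person": "first"}, i.e. "my"
your_pl = pronoun_meaning("index_finger", "arc", "toward_addressee")
# {"case": "plain", "number": "plural", "person": "second"}, i.e. "you (plural)"
```

The point of the sketch is the independence of the three dimensions: any handshape can combine with any movement and any orientation, and each combination yields a distinct, predictable pronoun.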
The ASL system of pronouns is entirely independent of English, for each language makes grammatical distinctions that the other does not. In addition to possessive forms, English pronouns distinguish nominative forms (he, she, I, they) from objective (him, her, me, them). Like Indonesian, ASL makes no such distinction. English distinguishes gender of pronouns in the third-person singular only (he/him vs. she/her vs. it), while ASL, like Finnish and Hungarian, does not distinguish gender of pronouns. Taiwan Sign Language pronouns, however, distinguish gender by means of handshape; in all three persons, a fist with an erect thumb indicates masculine and a fist with an erect pinky indicates feminine. Both languages use handshape to make grammatical distinctions in pronouns: case in ASL and gender in Taiwan Sign. In both languages the relation between handshape and grammatical notions is conventional and arbitrary. ASL uses an extended index finger in non-possessive pronouns not because these are nonlinguistic pointing gestures, but because the extended index finger is one of the distinctive handshapes of ASL. Like many ASL signs, these signs are probably iconic in origin, but they now function as elements of a distinctively linguistic system.
The ASL system of pronouns makes distinctions that the English system does not. English pronouns distinguish only two grammatical numbers (singular vs. plural), while ASL, like Slovenian, distinguishes three numbers: singular, plural, and dual, in all three persons. In addition, ASL (in the dual) has distinct forms for first-person inclusive (me + you) vs. exclusive (me + someone else)—a distinction also found in Mandarin Chinese. These distinctions in the ASL pronoun system show both that it is independent of English and that it makes distinctively grammatical distinctions.
ASL pronouns illustrate an important additional point. No conclusions can be based on mere observation. It is necessary to analyze the entire system of pronouns to determine the parts of which each is composed and the function of each part in the system. As will be seen below, it has been claimed that ASL has no pronouns, but uses pointing gestures or points in space instead of pronouns. This view does not account for the systematic ways handshapes and arc movements are used in ASL pronouns, or for the fact that the distinctions found in ASL pronouns (singular vs. dual vs. plural; inclusive vs. exclusive; plain vs. possessive) are precisely the kinds of distinctions found in systems of pronouns in oral languages. Similar systematic characteristics can be found in other aspects of ASL grammar.
ASL’s syntax is also independent of English, as can be seen in the syntax of questions. Languages differ with respect to the placement of the question word (who, what, where, etc. or their equivalents). In Georgian and Hungarian, for example, the question word appears immediately before the verb. The equivalent of “What did Tom buy yesterday?” could have several alternative word orders, but always with the question word before the verb: “What bought Tom yesterday?” or “Tom what bought yesterday?” or “Tom yesterday what bought?”7 In English the question word is placed in initial position: “What did Tom buy yesterday?” In ASL, the question word occurs at the end: “Tom buy yesterday what?” Thus, ASL grammar differs both from English and from Georgian and Hungarian with respect to the placement of question words. A second difference between ASL and those languages is that ASL grammar also allows the question word to appear twice—in both initial and final position: “What Tom buy yesterday what?” These differences in the syntax of questions in ASL and English show the two languages to have independent grammars.8
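The three placement patterns just described can be stated schematically. The sketch below applies them to glosses of “What did Tom buy yesterday?”; the word lists are schematic glosses for this illustration, not full sentences in any of the languages.

```python
# Schematic sketch of the question-word placement patterns described
# above. The glosses are illustrative, not real sentences.

def english_order(clause, q_word):
    # English: question word in initial position
    return [q_word] + clause

def asl_order(clause, q_word, doubled=False):
    # ASL: question word in final position, or doubled
    # (both initial and final)
    if doubled:
        return [q_word] + clause + [q_word]
    return clause + [q_word]

clause = ["Tom", "buy", "yesterday"]
english_order(clause, "what")              # ["what", "Tom", "buy", "yesterday"]
asl_order(clause, "what")                  # ["Tom", "buy", "yesterday", "what"]
asl_order(clause, "what", doubled=True)    # ["what", "Tom", "buy", "yesterday", "what"]
```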
The grammatical notions that verbs express are also different in English and ASL. The familiar “principal parts” of English verbs (e.g., sing, sang, sung) have no counterparts in ASL. In addition to the present stem (e.g., sing), English verbs have a past tense form (e.g., sang) and a past participle (e.g., sung) that is used to form perfect tenses (e.g., have sung, had sung) and the passive voice (e.g., was sung). As in Indonesian, verbs in ASL do not express tense. Like Lakhota (a Siouan language spoken in South Dakota) and Choctaw (a Muskogean language spoken in Oklahoma and Mississippi), ASL does not have a passive voice. On the other hand, ASL verbs express grammatical notions that English verbs do not. For example, by varying the manner of movement with which certain verbs are signed, one obtains distinctions in meaning known as “aspectual.” From the ASL verb “look,” for example, one obtains different forms meaning “look continually,” “look repeatedly,” “look for a long time,” and so on.
In general, ASL differs more from English in its grammatical structure than French, Spanish, German, and Russian do. These oral languages are all members of the Indo-European language family. English and German are both Germanic languages, French and Spanish are Romance languages, and Russian is Slavic. These language families can ultimately be traced back to what was once a single Indo-European proto-language. They have many structural features in common, some of which can be traced back to the earlier language from which they developed. English, French, Spanish, German, and Russian all form questions with the question word in the initial position. They all have a passive voice. In all these languages the verb expresses tense. As we have seen, ASL has none of these properties. Its structure differs significantly both from English and from other familiar European languages. At the same time, the structural features that make ASL different from these languages are shared with other oral languages; they cannot be explained by the fact that ASL is a sign language.
The basic structural differences between ASL and English have practical consequences. For native signers, English is a foreign language. Learning to speak English is not just a question of learning to pronounce English words with no auditory feedback (which is itself extremely difficult or impossible for those who have been profoundly deaf from birth). It means learning a grammar that differs radically from their own. If the goal is only to read English, the pronunciation problem may be mitigated somewhat, but the grammatical problems are no less severe. The position of a native signer learning to read English is in some respects like that of an English speaker learning to read ancient Greek or Latin, with two differences: Greek and Latin, as Indo-European languages, share many structural features with English that ASL does not, and a hearing English speaker learning to read Greek or Latin can learn to pronounce the words well enough to grasp elements of the language’s structure that depend on sound and to help learn the vocabulary.
In some respects a native signer learning to read English is in the position of a native speaker of Chinese, Arabic, Japanese, Navajo, or any other non-Indo-European language who tries to learn English. In other respects, however, learning English (or any other oral language) presents special problems for the deaf because aspects of its structure depend on sound. For example, a Deaf college student once asked me to explain the difference in meaning between a and an in English. I was taken aback. “There is no difference in meaning,” I said. “They are variants of the same thing, and which one occurs in a given case is predictable: an occurs before a vowel and a before a consonant.” “No,” she replied. “I have seen things like a horse and an honor.” Of course, she was right. The occurrence of a vs. an is predictable, as I had said, but it depends on the first sound of the next word, not its spelling. A hearing child learning English grasps this regularity. The Deaf student still had not figured it out in her early twenties. How could she, without access to the sounds on which the occurrence of a vs. an depends?
My purpose here is not to advocate one or another method of English instruction for the deaf, but only to point out two realities. First, certain aspects of English structure that present no problem for hearing learners are not obvious to the deaf because they depend on sound. This means that even if deaf children are exposed to English early on, they will not be able to learn it in the same way that hearing children do.
Second, for the Deaf whose first language is ASL, English is a foreign language that presents the usual problems of second-language acquisition for those whose first language differs significantly in structure. Deaf children exposed to ASL learn it in much the same way that hearing children learn oral languages.9 It gives them a means of expression and communication and access to Deaf culture. To read English and thereby gain access to the national culture they must learn another language. Surrounded by English speakers and English texts since childhood, many American Deaf people learn to read English very well and can respond strongly to its literature; but if they become fluent in reading English, it is because they have mastered a language whose structure is radically different from their own.
How can languages’ structures be compared? If we wish to compare the structures of English and Russian, for example, how would we go about it?
One method might be to listen to people speaking English and Russian, and then give our impressions of how the two languages sound. This would not be very revealing, for there are systematic similarities and differences between the two languages that would escape detection. Merely listening to the two languages, for example, would not reveal that they share a structural characteristic: the placement of question words in initial position. Two languages can be similar in structure even though their words sound very different.
A second method of comparing languages’ structures would be to analyze them as systems and see how the systems compare. As with the pronoun system and the syntax of questions in ASL, one needs to make explicit the regularities in each language, which can then be compared with those found in the other. That is, one analyzes the forms in each language and the types of sentences in which they are used, discovering the regularities and seeing how they fit together as a system. This is part of the process of discovering the grammars of the languages, which can then be compared to find their similarities and differences.
In Seeing Voices, Sacks compares ASL with oral languages and comes to the conclusion that “the difference between the most diverse spoken languages is small compared to the difference between speech and Sign.” On what is this conclusion based? Sacks has not made an analysis of the structure of ASL that would allow comparisons with other languages. He concentrates on the fact that ASL is signed, basing his comparison on the first method, analogous to listening to English and Russian, and then giving his impressions of how they sound:
The single most remarkable feature of Sign—that which distinguishes it from all other languages and mental activities—is its unique linguistic use of space. The complexity of this linguistic space is quite overwhelming for the “normal” eye, which cannot see, let alone understand, the sheer intricacy of its spatial patterns.
Watching people sign, Sacks is impressed by this “unique linguistic use of space.” Someone who does not know the language cannot break down the signing stream into the individual signs of which it is composed, just as someone listening to a foreign language for the first time cannot break down the stream of speech into its component words. A non-signer’s inability to see what is going on in fluent signing reveals nothing of the extent to which ASL resembles oral languages.
There is a serious question whether (and if so, to what extent) the fact that ASL is a sign language articulated with the hands, face, and body affects its grammar. Adopting the phrase “spatial grammar” coined by sign language researchers Ursula Bellugi and Edward Klima, Sacks repeatedly speaks of ASL as having a “spatial grammar” unlike anything known in oral languages. However, he does not make explicit what this means. What is the “linguistic use of space,” other than the fact that ASL is signed? What is “spatial grammar”? Do these phrases refer to something ASL has instead of a grammar? Do they refer to properties of the grammar itself that distinguish oral and sign language grammars? If so, what are they?
The closest Sacks comes to giving some meaning to the term “spatial grammar” is when he states that “much of what occurs linearly, sequentially, temporally in speech, becomes simultaneous, concurrent, multileveled in Sign.” This is an interesting claim: that oral languages make grammatical distinctions by sequential means, while sign languages use devices that occur simultaneously with the stem. In Latin, case forms are distinguished by the sequential device of attaching suffixes; distinct case forms of a noun have the same stem, to which different case suffixes are added. For example, the noun meaning “boy” has the case forms pueri, puero, puerum, etc., which consist of the stem puer followed by the case suffixes –i, –o, and –um for the genitive, dative, and accusative cases in the singular. The stem and case suffixes are arranged sequentially: the suffix follows the stem. In ASL pronouns the possessive case is distinguished by handshape, which occurs simultaneously with the stem. This would be an interesting difference between oral and sign language grammars: that the former use “linear, sequential, temporal” means to make grammatical distinctions, while the latter use “simultaneous, concurrent, multileveled” means.
While researchers have noted a strong tendency for sign languages to use “simultaneous” means, this is not an absolute difference that distinguishes sign from oral language grammars. Oral languages can use simultaneous means to indicate grammatical distinctions, as in Maasai, a Nilotic language of Kenya and Tanzania, where nominative vs. accusative case forms of a noun are distinguished by tone (relative pitch level), which is simultaneous with the stem. Sign languages can use sequential means, as in ASL, where nouns of agency meaning “one who does x” (e.g., the equivalents of teacher from teach, analyst from analyze) are derived from verbs by adding a suffix. The arc movement that indicates plurality in ASL pronouns also occurs as a suffix added to verb stems to indicate a plural object. ASL thus makes use of suffixes in addition to “simultaneous” grammatical devices. Sequential devices are not confined to oral languages, and “simultaneous” ones are not limited to sign languages. If the term “spatial grammar” is used to indicate simultaneity rather than sequentiality of grammatical devices, it does not identify a systematic difference between oral and sign language grammars. It certainly provides no basis for the claim that sign language grammars are fundamentally different from those of oral languages. Sacks does not give the term “spatial grammar” any other precise meaning that would make it possible to test that claim.
The obverse of Sacks’s claim that sign languages are nothing like oral languages is the idea that sign languages may be more similar to one another than oral languages are:
The hundreds of sign languages that have arisen spontaneously all over the world are as distinct and strongly differentiated as the world’s range of spoken languages. There is no one universal sign language. And yet there may be universals in signed languages, which help to make it possible for their users to understand one another far more quickly than users of unrelated spoken languages could understand each other…. Signers (especially native signers) are adept at picking up, or at least understanding, other signed languages, in a way which one would never find among speakers (except, perhaps, in the most gifted). Some understanding will usually be established within minutes, accomplished mostly by gesture and mime (in which signers are extraordinarily proficient). By the end of a day, a grammarless pidgin will be established. And within three weeks, perhaps, the signer will possess a very reasonable knowledge of the other sign language, enough to allow detailed discussion on quite complex issues.
What are the universals in sign languages of which Sacks speaks? If he could show, for example, that sign languages all have structural characteristics not shared by oral languages, that would be relevant evidence. Sacks provides none. Instead he offers what he calls “an impressive example”: a newspaper account of a joint production by American and Japanese Deaf theater companies which reported that “by late afternoon during one recent rehearsal it became clear they were already on each other’s wavelengths.” He also briefly recalls a conference he attended at which signers from different countries seemed to have learned quickly to communicate with one another. Highly experienced in communicating with people with whom they do not share a language, many Deaf people are extraordinarily good at communicating with gesture and mime, and with signers of another sign language they quickly establish a kind of pidgin that enables them to communicate at some level, though not with the full range of expression their own sign languages provide. Sacks’s point, however, is that universals unique to sign languages may make them more similar to each other than oral languages are. How does the newspaper report provide evidence for this? There is a big difference between “being on someone’s wavelength” and knowing that person’s language. Deaf people’s ability to communicate by means other than their own language is not, by itself, evidence for putative universal principles unique to sign languages.10
Sacks devotes considerable space to what he calls the “neurological basis” of sign. There are two key issues. First, are the same regions of the brain used for sign and speech? Second, does signing require some special kind of neurological “substrate” or “hardware” that is not needed for speech?
The brain’s left hemisphere plays a greater role in language tasks such as the production and comprehension of speech, while the right hemisphere is dominant for certain nonlinguistic cognitive phenomena, including visual and spatial phenomena. A visual language such as ASL therefore raises an interesting question: will ASL be mediated by the right hemisphere because it is visual and spatial, or by the left hemisphere because it is a language? The key study on which Sacks reports was by Howard Poizner, Edward Klima, and Ursula Bellugi.11 They used a classic method of neurology—studying the effects on patients of damage (due to strokes or other causes) to different regions of the brain. They found that damage to the left hemisphere resulted in significant impairments in signing but relatively intact capacity for processing nonlinguistic visuospatial relations, while patients with right hemisphere damage showed the reverse pattern. They concluded that the left hemisphere is dominant for sign language, as for oral language. Sacks summarizes:
This finding…is both startling and obvious and leads to two conclusions. It confirms, at a neurological level, that Sign is a language and is treated as such by the brain, even though it is visual rather than auditory, and spatially rather than sequentially organized. And as a language, it is processed by the left hemisphere of the brain, which is biologically specialized for just this function.
The demonstration that sign, like oral languages, is mediated by the left hemisphere is an important result and suggests that the “neurological basis” of sign may be the same as for oral languages. By itself, it gives no grounds for concluding that signers are neurologically different from anyone else. Imbued with the idea that ASL has a “spatial grammar” that makes it unlike any oral language, however, Sacks takes the demonstration that the left hemisphere is dominant for sign to mean that signers have an unusual representation of space in the left hemisphere:
The fact that Sign is based in the left hemisphere, despite its spatial organization, suggests that there is a representation of “linguistic” space in the brain completely different from that of ordinary, “topographic” space…. There develops in signers a new and extraordinarily sophisticated way of representing space; a new sort of space, which has no analogue in those of us who do not sign. This reflects a wholly novel neurological development. It is as if the left hemisphere in signers “takes over” a realm of visual-spatial perception, modifies it, sharpens it, in an unprecedented way, giving it a new, highly analytical and abstract character, making a visual language and visual conception possible.
Here Sacks claims that signers’ brains represent something absent from nonsigners’ brains: a different kind of space, one that he says reflects “a wholly novel neurological development.” Poizner, Klima, and Bellugi’s demonstration that signers’ brains are like those of oral language users in showing left-hemisphere dominance for language is interpreted by Sacks as evidence for a difference between signers’ and nonsigners’ brains.
What is the evidence that signers’ brains have a new and different representation of space? What Sacks calls “a remarkable and startling confirmation” comes from a patient with right hemisphere damage whose ability to deal with topographic space was profoundly defective but whose signing was still intact. But to account for this one need not conclude that this patient has a special representation of space in the left hemisphere. On the contrary, the fact that her signing was impervious to a breakdown of spatial functions is entirely expected if ASL does not have a special “spatial grammar” but is fundamentally like oral languages, and if signers’ brains consequently have no novel representation of space. Damage to regions of the brain specialized for spatial or other nonlinguistic functions would therefore not affect signing any more than speech, since both are linguistic phenomena.
Sacks also cites studies showing that signers do better than nonsigners at certain visual tasks, but such behavioral studies do not provide direct evidence for one or another kind of neurological organization. For that some kind of direct neurological evidence would be needed. Sacks cites the research results of Helen Neville and her colleagues at the Salk Institute in La Jolla, California, who designed an experiment to test whether the allocation of functions between the brain’s two hemispheres is genetically determined, or whether it can be altered by experience—in particular, by early language experience (learning and using ASL as a first language). They discovered both behavioral and neurological differences between nonsigning hearing subjects and native signers (both deaf and hearing) who had learned ASL as their first language from their Deaf parents.12
The task for the subjects in Neville’s experiment was to detect direction of motion in both foveal (central) and peripheral visual stimuli. For stimuli in the foveal visual field there were no significant differences between the two groups. For peripheral stimuli, however, nonsigning hearing subjects detected direction of motion better in the left visual field, while native signers (both deaf and hearing) performed better for right visual field targets. “Event-related brain potentials” (ERPs), which measure effects of attention on different regions of the brain, showed greater increases in attention in the right hemispheres of Neville’s hearing subjects during the experimental task, but in the left hemispheres of native signers (both deaf and hearing).
Why did native signers perform better for the right visual field? Why did ERPs show greater increases in attention in the left hemisphere? These results are consistent, since stimuli in the right visual field are mediated by the left hemisphere, which is also dominant for language. Since early acquisition of a sign language means early exposure to specifically linguistic visual stimuli (mediated by the left hemisphere because they are linguistic), Neville’s results suggest that this may enhance the visual processing abilities of the left hemisphere for nonlinguistic visual stimuli as well.
A question remains: Why did native signers and nonsigners show different results for peripheral but not foveal stimuli? The answer lies in the way sign language is used. The participants in a signed conversation do not watch each other’s hands; they maintain eye contact throughout, paying attention to the facial expressions that are an integral part of ASL and picking up hand movements through peripheral vision. This fact strengthens the argument that language experience is the relevant variable distinguishing native signers’ performance and ERPs from those of nonsigning subjects. The left hemisphere’s enhanced ability to process peripheral linguistic stimuli in native signers seems to extend to peripheral nonlinguistic stimuli. Neville concludes that cortical specialization in humans is not completely determined genetically; she attributes the different pattern of cortical specialization in native signers to plasticity of the brain in early childhood when ASL was learned.
These are fascinating results because they bear on the important question of what is genetically determined and what results from experience. But they do not support Sacks’s claim that signers’ brains represent a novel sort of space distinct from ordinary, “topographic” space. Neville’s neurological results are more modest: they show greater left-hemisphere involvement in native signers responding to peripheral visual stimuli.13
Recall Sacks’s bewilderment at his inability to “see” the distinct movements in rapid, fluent signing. He attempts to link the idea that signers’ and nonsigners’ brains are different with his idea that sign languages are fundamentally different from oral languages, speculating that the languages’ differentness may have a neurological explanation: a nonsigner cannot even imagine “grammaticizing space,” and this inability may be physiological. He notes that the observer who expects sign to be like a pantomime will soon find it unintelligible, and goes on to comment as follows (my italics):
One must…wonder whether there is not also an intellectual (and almost physiological) difficulty here. It is not easy to imagine a grammar in space (or a grammaticization of space)…. Our extraordinary difficulty in even imagining a spatial grammar, a spatial syntax, a spatial language—imagining a linguistic use of space—may stem from the fact that “we” (the hearing, who do not sign), lacking any personal experience of grammaticizing space ourselves (and lacking, indeed, any cerebral substrate for it) are physiologically unable to imagine what it is like (any more than we can imagine having a tail or seeing infrared).
What enables signers to “grammaticize space”? Later in the book, Sacks seeks an explanation in the idea that their brains are different:
What goes on in the mind and brain of a three-year-old signer, or any signer, that makes him such a genius at Sign, makes him able to use space, to “linguisticize” space, in this astonishing way? What sort of hardware does he have in his head? One would not think, from the “normal” experience of speech and speaking, or from the neurologist’s understanding of speech and speaking, that such spatial virtuosity could occur. It may indeed not be possible for the “normal” brain—i.e. the brain of someone who has not been exposed early to Sign.
The references in these passages to the brain and to a “cerebral substrate,” a “physiological” inability, and “hardware” make their content clear: even to imagine a “spatial language,” to be able to “linguisticize” space and exhibit “such spatial virtuosity,” it may be necessary to have a different kind of brain. The second passage reveals what presumably gives rise to such a brain: early exposure to sign. Dazzled by the signs that flash before his uncomprehending eyes with breathtaking speed, Sacks interprets his inability to “see” the individual movements in fluent signing as an inability to “linguisticize” or “grammaticize” space. He speculates that signers can do this because their brains are different, presumably in the sense made explicit earlier: their left hemispheres have “a new and extraordinarily sophisticated way of representing space” as a result of early exposure to sign.
There is, of course, an alternative explanation: signers differ from Sacks not in having a special “cerebral substrate” or “hardware,” but in knowing the language. A nonsigner with no special neurological “hardware” can learn ASL and then use it, which for Sacks means “grammaticizing” space. As he recognizes in a footnote accompanying the passage quoted above,14 and elsewhere in the book, this is possible after early childhood, and in fact it happens all the time. In addition to Deaf parents’ children who learn and use it from infancy, who learns and uses ASL? Many deaf people with hearing parents use ASL. Some learn it in childhood when they attend schools for the deaf with children from signing homes, some in their teens, and some even later. Some individuals who become deaf later in life learn and use it. Some hearing people learn to use it with Deaf friends or relatives. Increasingly, as courses in ASL are offered in high schools and colleges, hearing people are learning and using ASL as a second language. All these people use ASL; in Sacks’s terms, they “linguisticize” space. They learn ASL and begin to use it at different ages and, as with any language, they achieve different degrees of proficiency. These people all have “normal” brains in Sacks’s sense: they have not been exposed early to sign. Signers can handle ASL’s “spatial virtuosity” that Sacks finds so daunting not because they have a different kind of brain, but because they know the language.
As with any language, those who learn ASL later in life do not achieve the same kind of proficiency as those who learn it from infancy. In the last quotation above, Sacks fails to distinguish between the two types of proficiency: that which a “three-year-old signer” can develop and the ability of “any signer” to “linguisticize” space. Which type of proficiency may require a special kind of brain? Sacks is attempting to give a neurological explanation for what he views as signers’ “astonishing” ability to “linguisticize” space—i.e. to use ASL. This is the proficiency of “any signer.” The neurological explanation of this kind of proficiency fails, since it is based on the idea that the neurological differences that explain signers’ ability to “linguisticize” space result from early exposure to a sign language; yet all signers, in using the language, “linguisticize” space, regardless of the age of first exposure.15
On the other hand, perhaps Sacks’s point is that only the proficiency of those who learn ASL from infancy may require a special kind of brain unique to signers. This would fail to explain what he views as the “spatial virtuosity” of “any signer.” The quotation above that juxtaposes the proficiency of a “three-year-old signer” and that of “any signer” is accompanied by the footnote mentioned earlier, which emphasizes the proficiency of those who learn ASL from infancy:
It has been shown by Elissa Newport and Ted Supalla…that late learners of Sign…though competent enough, never master its full subtleties and intricacies, are not able to “see” some of its grammatical complexities. It is as if the development of special linguistic-spatial ability, of a special left hemisphere function, is only fully possible in the first years of life. This is also true for speech. It is true for language in general. If Sign is not acquired in the first five years of life, but is acquired later, it never has the fluency and grammatical correctness of native Sign: some essential grammatical apparatus has been lost.
While this quotation and its context bring out the parallels between children’s acquisition of sign and speech, Sacks assumes that signers have a “special linguistic-spatial ability” as a “special left-hemisphere function” (presumably based on the special representation of nontopographic space he attributes to their left hemispheres). He speculates that, like speech, it can develop only in the first years of life. But it is precisely the existence of such a “special left-hemisphere function” that is at issue. Newport and Supalla’s research results provide no evidence for this.
Newport and Supalla studied three groups of signers with only limited skills in English: “native learners” exposed to ASL from infancy, “early learners” who were first exposed to ASL between the ages of four and six, and “late learners” first exposed to ASL after the age of twelve. All subjects had learned ASL as their first language and had used it as their primary language for at least thirty years. Newport and Supalla found interesting differences in proficiency that were correlated not with length of experience with ASL, but with age at first exposure. Although late learners were fluent signers, they were significantly less able to use and understand certain complex forms and combinations of forms that signers who had learned ASL from infancy could use correctly and effortlessly.16 Late learners of ASL are in this respect similar to those who learn English as a second language after childhood. They may speak and understand English well but never fully master certain aspects of English structure that come naturally to those exposed to English from birth. An important difference is that ASL is the late signers’ first language. Newport and Supalla argue that the relevant variable is not whether language is acquired as a first or second language but the age at which it is acquired. They interpret their results as evidence for Eric Lenneberg’s idea that humans have a biologically determined “critical period” for language acquisition in early childhood after which language learning does not occur in the same way or with the same results.
Newport and Supalla found that the differences in proficiency between native and late learners of ASL are like those between native and late learners of oral languages.17 Their results do not support Sacks’s idea of “a special linguistic-spatial ability” as “a special left hemisphere function” distinct from the linguistic abilities that underlie speech. On the contrary, they support the alternative hypothesis: that the proficiency of those exposed to ASL from infancy arises not from a special brain unique to signers, but rather from the ability of all humans to master whatever language they encounter in early childhood. Sacks himself recognizes that “despite the differences in modality, the acquisition of ASL by deaf children bears remarkable similarities to the acquisition of spoken language by a hearing child,” adding that “the acquisition of grammar seems identical,” and that it “occurs at the same age…and in the same way, whether the child is speaking or signing.” Like the proficiency of native speakers of oral languages, native signers’ proficiency in ASL can be accounted for as a result of the normal process of language acquisition by children. It provides no evidence for a special representation of nontopographic space or a “special left hemisphere function” in native signers distinct from what enables hearing children to master an oral language.
In brief, one of the outstanding results of the research on the “neurological basis” of sign that Sacks cites has been the discovery of strong parallels (rather than differences) between oral and sign languages. Poizner, Klima, and Bellugi’s results argue that the left hemisphere is dominant for both. Newport and Supalla have found the same “critical period” effects for acquisition of both oral and sign languages. Neville’s results alone show neurological differences between native signers and nonsigners, but these are limited to her finding of increased attention in native signers’ left hemispheres in response to peripheral visual stimuli. None of these studies—or any others that Sacks cites—provide evidence either for his conclusion that signers’ left hemispheres have “a new and extraordinarily sophisticated way of representing space,” “a new sort of space,” that is “different from ordinary, ‘topographic’ space,” or for his speculations that such a left hemisphere may be needed to “linguisticize” space or to display what he views as “spatial virtuosity.” Neville’s results are extremely interesting and important, and behavioral studies showing that native signers do better than nonsigners at certain visual tasks are highly suggestive. Additional research may indeed reveal interesting neurological differences between signers and nonsigners. Sacks’s conclusions and speculations, however, go well beyond what has been shown by the research he cites.
Although Sacks has performed a valuable service in introducing the general reader to the linguistic-cultural view of deafness, much of what he says about ASL is seriously flawed. I have concentrated on the idea that “the difference between the most diverse spoken languages is small compared to the difference between speech and Sign” and the idea that “there may be universals in signed languages, which help to make it possible for their users to understand one another far more easily than users of unrelated spoken languages could understand each other,” as well as the idea that signers’ left hemispheres have “a new and extraordinarily sophisticated way of representing space” which “reflects a wholly novel neurological development” and which might explain what Sacks sees as signers’ “astonishing” ability to “linguisticize” space. Sacks does not present the kind of evidence that would be needed to establish these points. He realizes that sign language is central to the lives of the Deaf and he grasps the intellectual interest of the fact that language exists in sign, but depends too much on what he can see with his own eyes. The result is a strange blend of personal reactions to signing and speculations that run the risk of creating new myths to replace those he has helped to dispel.
Unfortunately, Sacks’s speculations about ASL are likely to be repeated and remembered by others as fact. This can be seen in the review by the well-known philosopher Ian Hacking,18 which assumes the truth of Sacks’s speculation that sign languages “do not have anything much like the structure of any spoken language.” According to Hacking, sign is so different because “space is everything, and it is a marvel that the human eye can take in what are, for nonsigners, movements too fast even to see, let alone distinguish.” “There are no pronouns,” Hacking says, “but the function of pronouns is well served by space.” Perceptions such as these, apparently rooted in Sacks’s speculations, lead Hacking to the conclusion that “philosophy of mind and of language, we hope, will never be the same again.” Recognition of sign languages may indeed alter current conceptions of language, but any such changes must be based on serious analysis of their structure. Each of Sacks’s speculations may seem innocuous by itself. Taken together, they can yield a misleading picture with far-reaching consequences, as Hacking’s conclusions show.
Just as he was fascinated with the inner worlds of the neurological patients he described in earlier works, Sacks is fascinated with what he perceives to be the startlingly different world of the Deaf. This perception of differentness remains remarkably constant throughout his book. At first he is struck by the differentness of the alingual deaf. One might think that the discovery that the signing Deaf have language would radically change Sacks’s perception of differentness, but he sees in ASL a “spatial virtuosity” that he finds “astonishing.” He then seeks an explanation for the perceived differentness of the language in a differentness rooted in the brains of the Deaf. An alternative view—better supported by the evidence—is that the difference between the Deaf and the rest of us need not be described as a fundamental one: they have a different language that serves as the medium of a distinctive culture.
Astounded by its “unique linguistic use of space,” Sacks saw ASL as a miracle different in kind from any oral language. Analysis of its structure could have made this perception of differentness give way to a deeper appreciation of the ways it is like oral languages. Nonetheless, Sacks’s sense of wonder and awe was not entirely misplaced. Sign is indeed a miracle. But so is speech. X-ray cinematography, which makes it possible to see the activity of the articulators (the lips, tongue, velum, pharynx, and larynx) in the vocal tract during speech, reveals a “virtuosity” every bit as impressive as that of the face, hands, and body in sign.
Imagine the wonderful social scene in a crowded bar under X-ray cinematography, with tongues flying in all directions as a hundred different conversations proceed. What is happening as each speaker’s tongue gyrates wildly, as the lips open and close, the velum rises and falls, the pharynx expands and contracts, and the jaw moves up and down? Adjectives are placed beside the nouns they modify; subjects, verbs, and objects are aligned in the right order; question words are placed in initial position; pronouns are put in the nominative or objective case according to their function in the sentence; verbs are put in a form that expresses tense and (in the present tense) whether or not the subject is third person singular. These and many other conventions of English grammar are being followed so that meaning can pass from one mind to another. Through a complex chain of events, these grammatical conventions result in movements of the tongue and other articulators whose “virtuosity” can be appreciated under X-ray cinematography. In exactly the same way, sign languages’ grammatical conventions result in the articulations of the hands, face, and body that Sacks called “spatial virtuosity.” In speech the articulators’ “virtuosity” occurs in the vocal tract, where it is hidden from view. In sign it is out in the open—in “space”—where it can command the attention of those who are unaware of what goes on in speech. There is a strong parallel between the two phenomena. What seems extraordinary in sign may lead us to appreciate how extraordinary speech is. The miracle is neither sign nor speech per se. The miracle is language.
What the Deaf have to teach us—above all—is that either sign or speech can serve as the vehicle of language. It may take us a long time to assimilate the implications of this simple fact. And it may teach us that there are more ways than we realized of being fully human.
March 28, 1991
William C. Stokoe, “Classification and Description of Sign Languages,” in Thomas A. Sebeok, ed., Current Trends in Linguistics, Vol. 12 (Mouton, 1974).
This process is beautifully described and analyzed in Carol Padden and Tom Humphries, Deaf in America: Voices From a Culture (Harvard University Press, 1988), Chapter 5.
On Deaf culture, the American Deaf community, and the experience of Deafness, see Padden and Humphries, Deaf in America.
Writing is not a separate modality on a par with speaking and signing, but is derivative of oral language.
Location is a fourth component of signs, but it need not concern us here. William Stokoe’s pioneering discovery that signs consist of different combinations of movement, handshape, and location units (the significance of orientation was discovered later) marked the beginning of the scientific study of sign language structure and of the recognition of sign languages as languages. See William Stokoe, “Sign Language Structure: An Outline of the Visual Communication Systems of the American Deaf,” Studies in Linguistics, Occasional Papers No. 8, Department of Anthropology and Linguistics, University of Buffalo (1960).
At a deeper level of analysis than need concern us here, there may be a difference between the arc movement in the first-person plural and that in the second and third persons.
To abstract from details of individual languages, I use English translations of foreign-language words and signs.
Two matters need not concern us here: some additional question patterns in ASL, and further differences between a question and the corresponding declarative in English.
For an overview of research results on this subject, see Elissa L. Newport and Richard P. Meier, “The Acquisition of American Sign Language,” in Dan I. Slobin, ed., The Cross-Linguistic Study of Language Acquisition (Lawrence Erlbaum, 1985).
Sacks’s anecdotal evidence of how quickly Deaf people learn each other’s languages is suspect as well: if they communicate in a grammarless pidgin—distinct from both signers’ languages—how does either get enough exposure to the other’s language to learn it in three weeks?
Howard Poizner, Edward S. Klima, and Ursula Bellugi, What the Hands Reveal about the Brain (MIT Press, 1987).
On the results reported here, see Helen J. Neville and Donald Lawson, “Attention to central and peripheral visual space in a movement detection task. III. Separate effects of auditory deprivation and acquisition of a visual language,” Brain Research 405 (1987), pp. 284–294.
Neville and her colleagues tested Deaf parents’ adult hearing children who had learned ASL as their first language precisely in order to distinguish effects of congenital deafness from effects of learning ASL from infancy. With respect to the results reported above, hearing children of Deaf parents patterned with other native signers, showing the effects to be due to early acquisition of ASL. On the other hand, congenitally deaf subjects differed both from hearing children of Deaf parents and from nonsigning hearing subjects in showing greater attention effects during the visual task in regions of the brain usually regarded as purely auditory in function. This must therefore be an effect of congenital deafness, showing a functional reallocation of brain areas that usually process auditory information as a result of auditory deprivation from birth. This, too, is important evidence that bears on (in Sacks’s words) “the extent to which the cerebral cortex is fixed by inborn genetic constraints and to what extent it is plastic and may be modified by the particularities of sensory experience.” See Neville and Lawson’s article for discussion.
See the passage about the work of Newport and Supalla quoted below.
Alternatively, Sacks might mean that by learning sign at any age one undergoes neurological changes that make it possible to “linguisticize” space. This, however, contradicts his idea that the “spatial virtuosity” involved in signing may not be possible for the brain of someone without early exposure to sign. Further, there is at present no evidence that people who learn ASL after early childhood undergo neurological changes as a result. (In an addition to the paperback edition, Sacks echoes Neville’s suggestion that her experiments be repeated with “late learners” of ASL.)
Early learners’ proficiency was much closer to that of native learners, but not quite at the same level.
For discussion of research results concerning native vs. late acquisition of both ASL and English, see Elissa Newport, “Maturational Constraints on Language Learning,” Cognitive Science 14 (1990), pp. 11–28, and the references cited there.
Ian Hacking, “Signing,” The London Review of Books, April 5, 1990.