By the end of the nineteenth century neurologists were convinced that seeing and understanding were two distinct, anatomically separate brain functions; seeing was passive and understanding active. The evidence seemed clear: patients with damage in one part of the brain became blind, whereas patients with damage in another part of the brain were able to see but could not understand what they were seeing. They could not recognize objects. The German neurologist Hermann Munk called this second condition “mind-blindness” (Seelenblindheit) and today it is known as “agnosia,” a term we owe to Sigmund Freud. These neurological findings led to the widely accepted idea that the eye was like a camera: the eye took photographs and transmitted them to the visual center of the brain; the photographs were then interpreted and understood by other cortical centers. When the cortical centers were damaged and could no longer interpret them, the result was agnosia.
During the past thirty-odd years, however, neurophysiological and clinical studies, as well as attempts to simulate animal and human vision on computers, have given us a considerably more sophisticated view of the nature of perception. Since the world we see is always changing and the retina receives a constant flow of different kinds of visual information, the brain must be able to select visual properties of objects and surfaces in order to give them coherence. In acquiring this ability, the brain has developed specialized functions for the analysis of different properties, such as color, shape, and movement. It creates our visual worlds, including the colors of the rainbow, the illusion of motion on the movie screen, and the perception of depth in a flat painting.
Semir Zeki, a professor of neurobiology at University College, London, has made some of the more important discoveries that have radically changed our view of how we see and understand the world around us. His book A Vision of the Brain, published in 1993, has become a classic introduction to vision and the brain sciences, showing how recent neurophysiological studies support the view that our visual worlds are a product of complex processes rather than a simple reflection of the world around us. Though the different colors we see are determined by the different frequencies of light reflected off surfaces, the particular sensation of, for example, the color red is a product of our brains. The perception of color is one of the ways the brain makes us aware of the physical characteristics of the external world, in this case various frequencies of light. The universe, as Isaac Newton noted long ago, is colorless: “Rays, to speak properly, have no Colour. In them there is nothing else than a certain power and disposition to stir up a sensation of this Colour or that.”
In Inner Vision, Zeki is less concerned with what neurophysiology can tell us about art than with what art can tell us about neurophysiology. Painters have always depended on the functional organization of the visual brain, without being aware of that dependence. The portrait painter, Zeki points out, relies on the specialized areas in the brain that allow us to see and identify faces. So, too, the attempts of Fauve painters such as Matisse and Derain to liberate color from form could succeed only because color and form are separate and independent brain functions. And the Cubists, Zeki writes, were able to make explicit our simultaneous awareness of multiple perspectives because specific brain mechanisms can analyze the two-dimensional images on our retinas as three-dimensional objects.
Zeki’s new book fortuitously appears at the same time as Visual Intelligence by Donald Hoffman, professor of the cognitive sciences at the University of California, Irvine, which gives a remarkably clear summary of what we know, and don’t know, about the way the brain creates what we see. Hoffman has done important research suggesting that in order to create visual perceptions the brain must be using a set of rules comparable to the rules of transformational grammar that Noam Chomsky has found to be universal characteristics of language. His book describes implicit rules that make it possible for us to see lines, depth, colors, form, and motion. Taken together, the books by Zeki and Hoffman give the most perceptive general account we have of the workings of human vision.
In A Vision of the Brain, Zeki recalled that in 1888 a Swiss ophthalmologist, Louis Verrey, published a study of a sixty-year-old woman who had a stroke affecting her left hemisphere. When she looked straight ahead, everything to her right (her “right visual field”) appeared gray. The post-mortem examination of her brain revealed localized brain damage and Verrey therefore concluded that there was a subdivision of the visual cortex that was specifically devoted to color. “The implications of Verrey’s conclusion,” Zeki wrote,
were momentous, indeed so momentous that even he failed to see them. For, if color vision could be specifically and separately compromised, this would suggest that color is separately represented in the brain. From this it would follow that functional specialization is a much more widespread phenomenon, extending to the submodalities of a modality, in this case the sensation of color within vision. It would also follow that the cerebral processes involved in vision are not unitary, as our unitary experience of the visual world might suggest. Many profound questions about the brain and about vision would have followed. It is no wonder that this finding was too revolutionary for many.
The full implications of Verrey’s work became clear in 1973 when Zeki discovered the area in the rhesus monkey’s brain that is specialized for seeing colors. When this area is destroyed in the monkey or human brain, neither the monkey nor the human can see colors. At the time of Zeki’s discovery neurophysiologists were beginning to establish the outlines of a new view of the visual cortex, suggesting that there are specific functional units for seeing form, motion, and colors. Instead of interpreting “pictures” in the head, the brain is performing a variety of highly specialized visual subtasks. It is creating pictures by combining the operations of the different specialized regions into a unified visual image, though just how the brain does this is still not understood. The brain, Zeki writes, “is no mere passive chronicler of the external physical reality but an active participant in generating the visual image, according to its own rules and programs.”
Hoffman explicitly compares these rules and programs to Chomsky’s approach to language. “The rules of universal grammar,” Hoffman writes,
allow a child to acquire the specific rules of grammar for one or more specific languages…. Similarly, the rules of universal vision allow a child to acquire specific rules for constructing visual scenes. These specific rules are at work when the child, having learned to see, looks upon and understands specific visual scenes.
Consider the well-known image in which figure and ground can be reversed: either one sees two faces or one sees a goblet. The two possibilities are governed, according to Hoffman, by the following rule: the points of greatest curvature in the concave region of a curve determine what we take to be the “parts” of the image. A point of greatest curvature lies in the region “inside” the curve where the angle of change is greatest. (For example, in a curve that shows a sudden drop in the Dow Jones average followed by a complete recovery one day later, the point at which the Dow Jones is at its lowest is where the angle of the curve is greatest, representing as it does the Dow Jones going from high to low and then back to high.) Thus if we see the goblet as a “figure,” the points of greatest curvature “inside” the curve create the lip, the bowl, the stem, and the base; whereas if we see the two faces as a “figure,” the points of greatest curvature “inside” the curve are the forehead, nose, lips, and chin. What is the “inside” of the curve for the goblet is the “outside” of the curve for the face. Hoffman writes that we still do not understand why, at any given moment, the viewer hits on one set of curves rather than another.
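The rule can be made concrete. The short sketch below is my own illustration, not taken from Hoffman's book: it samples a closed contour, computes its signed curvature, and picks out the concave points of locally greatest curvature, which are the candidate "part" boundaries the rule describes. The toy contour, the function names, and the counterclockwise orientation are all assumptions made for the example.

```python
import numpy as np

def signed_curvature(x, y):
    """Signed curvature of a sampled planar curve (x(t), y(t))."""
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

def part_boundaries(x, y):
    """Indices of concave points of locally greatest curvature.

    For a counterclockwise contour, negative signed curvature marks
    concavities (regions "inside" the curve); the sharpest of these
    are the candidate part boundaries named by the rule above.
    """
    k = signed_curvature(x, y)
    return [i for i in range(1, len(k) - 1)
            if k[i] < 0 and k[i] < k[i - 1] and k[i] < k[i + 1]]

# A toy wavy contour whose four dents play the role of the goblet's
# lip, bowl, stem, and base (or the face's forehead, nose, lips, chin).
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
r = 1.0 + 0.3 * np.cos(4 * t)
x, y = r * np.cos(t), r * np.sin(t)
print(part_boundaries(x, y))   # indices near the four concavities
```

Which side counts as "inside" depends on which region is taken as figure, which is why the same curve yields goblet parts or face parts but not both at once.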
What has become clear, however, is that the brain creates our visual world by connecting and then transforming stimuli. For example, motion pictures give us a sense of continual movement by means of a series of static images presented in rapid succession. Our visual experience is not of one static image followed by another; instead we see motion because the brain—specifically that part of the brain that is specialized in the analysis of visual motion—unconsciously relates one static image to the next. We comprehend motion when the static images are presented at a rate of twenty-four frames per second. At any slower rate we see only a succession of static frames, which suggests that the brain establishes visual motion from stimuli occurring about a twenty-fourth of a second apart. Without this activity of connecting, we would merely perceive a sequence of unrelated stimuli from one moment to the next.
The catastrophic consequence of the inability to see movement is described in a famous case, recounted in Hoffman’s book, of a woman who, following her recovery from a stroke, found that she could no longer see things move. She had great difficulty pouring tea from a teapot because the fluid in the pot appeared to her to be frozen:
In addition, she could not stop pouring at the right time since she was unable to perceive the movement in the cup (or a pot) when the fluid rose. Furthermore the patient complained of difficulties in following a dialogue because she could not see the movements of the face and, especially, the mouth of the speaker. In a room where more than two other people were walking she felt very insecure and unwell, and usually left the room immediately, because “people were suddenly here and there but I have not seen them moving.” [She had the] same problem but to an even more marked extent in crowded streets or places, which she therefore avoided as much as possible. She could not cross the street because of her inability to judge the speed of a car, but she could identify the car itself without difficulty. “When I’m looking at the car first, it seems far away. But then, when I want to cross the road, suddenly the car is very near.”
The failure, then, to see or judge motion is a failure to establish relations that depend, in Hoffman’s view, on the rules that are part of a universal grammar of vision:
No one teaches you these rules. Instead, you acquire them early in life in a genetically predetermined sequence that requires, for its unfolding, visual experience…. And just as an adult, using rules of grammar, can understand countless sentences (in principle, if not in practice), so also an adult, using rules of vision, can understand countless images (again, in principle if not in practice).
An example of such a rule might be: a sudden change in the amount of light reflected from one part of a surface usually means there is a border, or change of contour, at that part of the surface. Other rules would refine this rule, determining, among other things, if the change in reflected light was caused by a shadow.
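As a rough illustration of how such a rule might be stated computationally (my own sketch, not Hoffman's formalism), a scan line of luminance values can simply be tested for abrupt jumps; the threshold and the sample values below are arbitrary assumptions.

```python
import numpy as np

def candidate_borders(luminance, threshold=0.2):
    """Flag places where the reflected light changes abruptly.

    A crude stand-in for the rule in the text: a large jump in
    luminance between neighboring samples is taken as a possible
    border or change of contour. Telling a true border from a
    shadow edge would require further rules (for instance, checking
    whether the jump is the same across all wavelengths).
    """
    jumps = np.abs(np.diff(luminance))
    return np.nonzero(jumps > threshold)[0]

# A one-dimensional "scan line": a bright surface, then a darker one.
scan = np.array([0.82, 0.81, 0.80, 0.79, 0.35, 0.34, 0.33, 0.32])
print(candidate_borders(scan))   # [3]: the border lies between samples 3 and 4
```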
Nonetheless there are limits to the brain’s ability to acquire the necessary rules for seeing and creating a coherent visual world. More than two hundred years ago an English surgeon, William Cheselden, removed cataracts from a young boy who had been born blind. The operation attracted considerable attention because it seemed to answer a question the philosopher William Molyneux had asked John Locke: whether a man born blind who had recovered his sight later in life would be able to “distinguish between a cube and a sphere of the same metal.” Locke believed that he would not be able to distinguish the shapes visually because “he has not yet obtained the [visual] experience.”
Locke’s view appeared to be confirmed following the operation. Cheselden reported that his patient was unable to judge the shapes or sizes of objects. His room appeared to him as large as his house, though he knew this did not make any sense. He could see portraits hanging on the walls, but was surprised to discover that the faces were flat. He could recognize objects in two-dimensional drawings, but, again, their flatness did not make sense to him, since the objects they depicted appeared to be in three dimensions. Cheselden’s patient was unable to acquire the rules that are essential for understanding drawings of three-dimensional objects on two-dimensional surfaces. Similar surgical operations on people born with congenital cataracts have been performed some twenty times since, always with unfortunate results, though the patients’ reactions differ in some ways from those of Cheselden’s patient. Recently, one patient reported, following an operation that restored her sight, that she had great difficulty making sense of whatever she was looking at. She was in a constant state of depression. Among the distressing problems she described was her inability to see individual colors as a part of objects; she saw the colors floating in separate planes before her eyes. Zeki cites the case of a French surgeon, Moreau, who in 1913 had enthusiastically looked forward to restoring the vision of an eight-year-old patient with cataracts. “The deception was great,” Moreau wrote. He concluded:
It would be an error to suppose that a patient whose sight has been restored to him by surgical intervention can thereafter see the external world. The eyes have certainly obtained the power to see, but the employment of this power…still has to be acquired from the very beginning. The operation itself has no more value than that of preparing the eye to see; education is the most important factor. The [visual cortex] can only register and preserve the visual impressions after a process of learning…. To give back his sight to a congenitally blind patient is more the work of an educationalist than that of a surgeon.
Unfortunately, however, just as children who have never learned any language before the age of puberty cannot acquire the grammatical rules essential for language, people born blind who later have their vision restored cannot acquire the rules essential for seeing and recognizing objects and people.
Such cases of people who are born blind and later acquire sight seem to show that our visual recognition of distance, size, and three-dimensional form is acquired early in life, when visual stimuli are integrated with our sense of touch, a sense that generally depends on movement. Our capacities to recognize distance, size, and dimensions derive from the set of rules or procedures the brain uses to recognize and make sense of the visual world. The rules required for creating the abstract idea of a two-dimensional representation, for example, must be acquired very young. Apparently infants can acquire the ability to correlate their sense of touch and movement with visual stimuli, whereas older children cannot. After infants undergo operations to remove congenital cataracts, they have normal vision; following the same operations, children of some twelve years or older are hardly able to make out objects clearly. Once again, Hoffman makes it clear that we do not understand why this is the case, just as we do not understand why most children learn new languages easily and most adults do not.
Artists have always known, at least intuitively, that the brain makes possible the creation of our visual worlds, since representational art uses materials on a flat surface to create the illusion of faces, objects, and scenes. For Zeki, artists were unwitting neurologists long before neurology came into being, at times exploring the limits of our visual intelligence. He argues that in simultaneously showing objects from several different perspectives Cubist works make explicit the perceptions that we know implicitly and that are essential to our faculties of recognition.
Brain damage can destroy this implicit knowledge, making the recognition of objects and drawings virtually impossible. Zeki describes a patient who, following a stroke, had great difficulty seeing objects. He was asked to make a drawing of St. Paul’s Cathedral in London. After considerable effort, he eventually completed the drawing. But when he had finished, Zeki writes, “he could not recognize the Cathedral in his own drawing; he could not combine the elements of which the drawing was made into a whole. But he could see the individual details, describing correctly the orientation of the lines in various parts of his drawing.”
Another brain-damaged patient reported: “Generally, I find moving objects much easier to recognize, presumably because I see different and changing views…. For that reason the TV screen enables me to comprehend far more of an outdoor scene than, for example, the drawings on my living room walls.” Here movement provides the information that most of us implicitly understand when we are viewing a face or an object. The case of this patient shows it is possible to be directly aware of stationary objects and yet have difficulty imagining what they are like.
Just as we rely on implicit knowledge in our perception of objects and faces, so too, the brain creates a sense of “color constancy”: no matter what the lighting conditions—bright sunlight, filtered sunlight, or artificial lighting—colors remain more or less the same. This remarkable ability is not fully understood, but it depends, in part, on the brain’s comparing the amount of light reflected in the long (red), middle (green), and short (blue) wavelengths coming from different parts of a given scene. In a famous experiment performed by Edwin Land, the inventor of the Polaroid camera, a multicolored surface was illuminated by three projectors of colored light; one projector emitted red light, one green light, and one blue light. The intensity of the light emitted by each projector could be adjusted. The people being tested were asked to view, for example, a green patch. The projectors were then turned on and the green patch was illuminated with 30 units of red light, 60 of green, and 10 of blue. All the subjects of the experiment agreed that the patch was green. Of course this is hardly surprising, since most of the reflected light (60 units) was green. In the following trial, however, the multicolored surface was illuminated by 60 units of red light, 30 units of green light, and 10 units of blue light. Even though the surface was now illuminated by twice as much red light as in the first trial, the subjects of the experiment agreed that the patch they had declared to be green in the first trial remained green.
This is an example of color constancy. The brain compares the wavelength composition of the light coming from the green surface with that of the light coming from surrounding surfaces. The green patch will reflect proportionately more green-wavelength light than the surrounding areas no matter what the actual mix of red, green, and blue light coming from the projectors. Hence surfaces appear to have basically the same color under very different lighting conditions. Furthermore, since colors are created when the brain compares the ratios of reflected wavelengths across neighboring surfaces, colors establish borders; even in chaotic and turbulent paintings there are always unavoidable distinctions from one shade of color to another.
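The logic of the comparison can be captured in a few lines. The sketch below is my own schematic illustration, not Land's actual procedure or Zeki's model: the reflectances of the "green patch" and of its surround are invented numbers, and the two lighting trials reuse the units from the experiment described above. Because the illumination cancels when patch and surround are compared band by band, the ratios come out identical in both trials.

```python
def reflected(reflectance, illumination):
    """Light reaching the eye in each band: reflectance x incident light."""
    return [r * i for r, i in zip(reflectance, illumination)]

def band_ratios(patch, surround, illumination):
    """Patch-to-surround ratio of reflected light, band by band."""
    p = reflected(patch, illumination)
    s = reflected(surround, illumination)
    return [pi / si for pi, si in zip(p, s)]

green_patch = [0.10, 0.60, 0.15]   # reflects mostly middle-wave ("green") light
surround    = [0.40, 0.30, 0.30]   # assumed average reflectance of the surround

trial_1 = [30, 60, 10]             # units of red, green, blue light (first trial)
trial_2 = [60, 30, 10]             # twice the red light (second trial)

print(band_ratios(green_patch, surround, trial_1))
print(band_ratios(green_patch, surround, trial_2))
# Both trials print [0.25, 2.0, 0.5]: relative to its surround the patch
# still reflects proportionately more green light, so it still looks green.
```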
Carbon monoxide poisoning can cause the loss of color constancy and with it a separation of the sense of color and form. One patient Zeki describes complained that though he could see colors, they were “wrong”; careful study revealed what he meant. If a green surface reflected more red wavelength light than green or blue, he saw the surface as red; if the same surface were made to reflect more green wavelength light than red or blue, he saw the surface as green. In other words, unlike the normal subjects in the experiment I have already described, this patient’s brain did not compare the amount of red wavelength light being reflected by a given surface with the amount of red wavelength light being reflected by surrounding surfaces. For this patient, green leaves did not remain green from morning to evening. His brain was measuring the frequency of the light reflected from the leaves throughout the day without comparing the frequency reflected from the leaves to the frequency from the surrounding areas. The leaves became red in red light, blue in blue light, and green in green light. He had no way of knowing if the leaves had any permanent, invariant color.
And yet if pathology has told us much about how the brain works, artists have in their own way discovered aspects of brain function that had previously been unknown. Indeed, Zeki writes, brain-scanning techniques have revealed that abstract, representational, and Fauve paintings cause very different patterns of brain activity in normal individuals. Abstract art activates principally two areas in the visual cortex of the brain—V1 and V4—that are apparently essential for creating the perception of colors. Representational art activates, in addition to areas V1 and V4, areas believed to be concerned with memory and learning, including the hippocampus. The Fauves painted people and objects with unnatural colors. Zeki found that Fauve art creates a pattern of activity in the brain that is quite different from the pattern created by traditional representational art.1 It is remarkable that abstract and representational art should cause such diverse patterns of brain activity; different schools of art each seem to have their own neurological basis.
Our enjoyment of portrait painting, moreover, is dependent on our specialized faculty for face recognition, which, Zeki writes, is an apparently irreducible neurophysiological unit. When that unit is damaged, the capacity to see a portrait may be impaired, as in the case of the patient who said, when looking at his wife, “I can certainly see a face, with eyes, nose and mouth etc. but somehow it is not familiar; it really could be anybody.” A relatively small lesion in a specific part of the brain is responsible for these difficulties, causing an inability to synthesize the visual elements of faces into coherent, recognizable images. Equally striking is another form of brain damage, which does not destroy a patient’s ability to recognize faces, but leaves him unable to recognize facial expressions of emotions such as fear. Our interest in portraiture obviously derives not just from recognition but also from our ability to read facial expressions and their psychological implications.
Our visual worlds, as the books by Hoffman and Zeki demonstrate, are creations that depend upon complex interactions within the visual cortex. Color, form, and motion are the brain’s solutions to making sense of the constant flux of visual sensations that are registered on the retina. We are beginning to uncover the ways in which the brain divides up its visual tasks, but, as Zeki and Hoffman both make clear, we have few details of the neurophysiology of these rule-following processes.
Indeed, there is still much to learn. We will only truly understand vision when we have unraveled the mysteries of consciousness and memory and the relations among recognition, remembering, and awareness. We have yet to learn how we become aware of our recollections and perceptions, and how we can consciously recognize and recall people, places, and things. Not very long ago, for example, it was a generally accepted truth that the human brain never produces new neurons after birth; therefore it was widely believed that memories had to be imprinted and permanently stored in fixed brain structures for recognition, thought, and action to be possible. Last year researchers discovered that new neurons appear in areas of the brain concerned with learning and memory. If these findings are confirmed, it is very probable that even our present limited understanding of perception and brain function will undergo radical, if not revolutionary, changes—just as Hoffman and Zeki have shown us how our present views of perception are radically different from what they were a few decades ago.2
This Issue
September 21, 2000
1. Fauve art activates areas V1 and V4, and scans show a pattern of frontal lobe activity different from that of traditional representational painting, with no activity in the hippocampus.

2. See Elizabeth Gould et al., “Neurogenesis in the Neocortex of Adult Primates,” Science, October 15, 1999, pp. 548–552. The first reports of neurogenesis (the appearance of new neurons) in mammalian brains go back to 1965; see J. Altman et al., Journal of Comparative Neurology (1965), p. 319. F. Nottebohm has long studied neurogenesis in avian brains; see S.A. Goldman and F. Nottebohm, Proceedings of the National Academy of Sciences (1983), p. 2390. See Gould’s article for more recent papers by Nottebohm and others.