
What Can’t the Computer Do?

The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics

by Roger Penrose
Oxford University Press, 466 pp., $24.95

The year 1989 marked a milestone in the history of artificial intelligence. A computer program, Deep Thought, defeated several chess grandmasters. The Russian grandmaster Garri Kasparov, it is true, defeated Deep Thought, but who can feel confident that in ten years’ time the then world champion will be able to defeat the best program? If a computer can succeed at the most beautiful and creative of games, what limits are there to the achievements of artificial intelligence? The question becomes more compelling if one accepts the current “strong AI” philosophy. According to this view, the human brain is only a large, if somewhat inaccurate, digital computer: consciousness is a necessary property of matter organized in such a way as to carry out complex computations; it follows, therefore, that when we construct computers as complex as the human brain, they too will be conscious. Computers will think as we do, and be aware of what they are doing.

Roger Penrose’s book was written to combat this strong AI philosophy. There are, he suggests, two other points of view open to you, even if you accept that the brain is the organ of thought, and hope for a scientific account of how it works. The first is that the brain is indeed a digital computer, but that it is conscious only because it is composed of living material. A digital computer made of transistors might perform identical computations, in identical ways, but it would not be conscious, because it was made of transistors and not of neurons. Penrose ascribes this view to John Searle, a philosopher who is critical of the strong AI view. I am not sure whether this correctly reflects Searle’s views: if not, no doubt he will find an opportunity to reply. In any case, Penrose does not find the argument convincing. For reasons I will give in a moment, I agree with him. Penrose’s own opposition to the strong AI view is differently based. The brain, he argues, is not a digital computer.

Before explaining why he thinks this, I must say a few words about the other line of argument, that computers are made of the wrong stuff to be conscious. As a geneticist, I am prejudiced against the idea that the peculiarities of living organisms arise because of the special nature of living material. This was a popular view when I was a boy, and was commonly used as a defense of religion against the inroads of science. I remember being told, by a schoolmaster who was also a parson, that scientists had shown that one could bring together all the chemical substances, carbon, oxygen, nitrogen, and so on, in the same proportions as are present in a seed, yet the seed would not grow into a plant, because it lacked the breath of life. Today we have a very satisfactory explanation of one of the two most fundamental properties of life—heredity—in chemical terms, so I hope that no schoolmaster is so foolish as to use this argument to stave off incipient atheism in his pupils.

The other property, consciousness, still escapes us, but it seems sensible to try to explain it in terms of physical law. I can see little sense in claiming that consciousness, any more than heredity, resides in single atoms of carbon or nitrogen. Presumably, then, it must reside in the way in which the matter is arranged. Of course, it is logically possible that only computers made of neurons can be conscious, but the idea is unattractive. It seems more plausible that any “computer” that is formally similar to a brain will be conscious. The idea of “formal similarity” is crucial, and is the topic of the rest of this review.

I think that Penrose would agree with the ideas expressed above. His essential point is different: it is that the brain is not a digital computer. He spends some time explaining what this means. Digital computers are “Universal Turing Machines.” That is, they are examples of a class of computing machines first defined by the mathematician Alan Turing. For the purposes of this review, it is sufficient to understand that a Turing machine is algorithmic: it reaches its conclusions by following a precisely defined set of rules, an “algorithm.” An example of an algorithm would be the following rule for deciding on your lead against a no-trump contract at bridge: “lead the fourth highest card of your longest suit” (an additional rule would be required to tell you what to do if you had two equal longest suits—for example, “lead from the higher-ranking suit”). The important point is that the rules must be precise, and of a kind whose consequences can be calculated unambiguously.
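To see just how mechanical such a rule is, it can be written out for a machine in a few lines of Python. The card encoding, the suit ranking, and the function name below are illustrative choices of mine, not anything taken from Penrose; the point is only that every step is unambiguous, so the same hand always yields the same lead.

    # A sketch of the "fourth highest of your longest suit" lead rule.
    # Cards are (suit, rank) pairs, rank 2..14 with 14 = ace; this encoding
    # is an illustrative assumption, not anything from the book.

    SUIT_RANKING = {"spades": 4, "hearts": 3, "diamonds": 2, "clubs": 1}

    def opening_lead(hand):
        """Choose the lead against a no-trump contract."""
        suits = {}
        for suit, rank in hand:
            suits.setdefault(suit, []).append(rank)
        # Longest suit; a tie goes to the higher-ranking suit, as in the text.
        chosen = max(suits, key=lambda s: (len(suits[s]), SUIT_RANKING[s]))
        ranks = sorted(suits[chosen], reverse=True)
        return (chosen, ranks[3])   # the fourth highest card of that suit

    hand = [("spades", 14), ("spades", 11), ("spades", 8), ("spades", 4),
            ("spades", 3), ("hearts", 13), ("hearts", 7), ("diamonds", 10),
            ("diamonds", 6), ("clubs", 12), ("clubs", 9), ("clubs", 5),
            ("clubs", 2)]
    print(opening_lead(hand))       # ('spades', 4)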

At this point, a distinction must be made between “producing the same answer” and “producing the answer in the same way.” For example, imagine you are playing an invisible opponent at chess. You do not know whether your opponent is the computer program Deep Thought or a human grandmaster. Either way, you will probably lose—certainly so unless you are a better player than I am. But it would be hard, and perhaps impossible, to tell whether your opponent was machine or human: both would produce the same answer. But the calculations performed would be quite different. The computer would examine many millions of positions, and choose the best line by a “minimax” procedure, which selects the best line of play against an opponent who is also playing as well as possible. Indeed, the depressing thing about chess programs is that they are no cleverer than the programs that I and others imagined back in the 1940s. The difference is that computers have become much faster. In contrast, the human grandmaster would examine only a minute subset of the possible lines of play: we still find it impossible to write an exact rule, or algorithm, specifying which lines he would, or should, choose to examine.
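The machine’s side of that contrast is easy to make precise. A minimax search can be sketched in a few lines; in the toy version below, a hand-built tree of positions stands in for a real engine’s move generator and evaluation function, which is where all of a program like Deep Thought’s actual effort would go. This is a sketch of the general procedure, not a description of any particular program.

    # Bare-bones minimax: examine every line of play and assume the opponent
    # also chooses the reply that is worst for us.  The toy "game" is a
    # hand-built tree with scores at the leaves; it stands in for a chess
    # engine's move generator and position evaluator.

    GAME_TREE = {
        "start": ["a", "b"],
        "a": ["a1", "a2"],
        "b": ["b1", "b2"],
    }
    LEAF_SCORES = {"a1": 3, "a2": -2, "b1": 5, "b2": -8}

    def minimax(position, maximizing):
        children = GAME_TREE.get(position, [])
        if not children:                       # leaf: return its static score
            return LEAF_SCORES[position]
        scores = [minimax(child, not maximizing) for child in children]
        return max(scores) if maximizing else min(scores)

    # The first player should choose "a": the opponent's best reply there
    # costs only -2, whereas after "b" it costs -8.
    print(minimax("start", True))              # -2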

Let me give a more illuminating example of two ways of reaching the same conclusion. I give you a large-scale map of England and a box of matches, and ask you how far it is from London to Brighton. One method would be to read off the map references (say 10, 20 and 40, 60) and find the answer by the method of Pythagoras (d² = (40-10)² + (60-20)², or d = 50). An elegant alternative would be to note that the length of a match corresponded to, say, five miles on the map, and that ten matches arranged end to end would reach from London to Brighton. These two methods give the same answer, but they do so in different ways. Both methods could, in principle, be mechanized. The Pythagoras method is the one that would be adopted by anyone programming a computer. There would, in fact, be nothing in the computer analogous to the “map.” The geographical information would merely be a list of places with their map references. There would be no physical object inside the computer that was a two-dimensional representation of the world. It would also be possible, though tricky, to devise a machine in which magnetized needles arranged themselves in a row linking “London” and “Brighton,” which themselves would be represented by north and south magnetic poles on a two-dimensional map.
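Both routes can be set out mechanically. The sketch below uses the figures from my example (map references 10, 20 and 40, 60, and a match worth five miles); the loop at the end is of course only a crude stand-in for physically laying matches on the map, not a claim about how any real device works.

    import math

    # The "digital" route: compute the distance from the grid references alone.
    london, brighton = (10, 20), (40, 60)
    distance = math.hypot(brighton[0] - london[0], brighton[1] - london[1])
    print(distance)                 # 50.0 map units

    # The "analog" route, crudely simulated: lay five-mile "matches" end to
    # end along the line and count how many are needed.
    match_length = 5
    matches = 0
    covered = 0.0
    while covered < distance:
        covered += match_length
        matches += 1
    print(matches)                  # 10 matches, i.e. the same 50 miles

The magnetized-needle machine imagined above would do the same job physically, rather than by arithmetic on a list of coordinates.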

The latter would be an “analog” computer. Today, the word analog means, to a computer scientist, continuously varying, as opposed to “digital,” which means existing in one of two discrete states. Originally, however, an analog computer was any device that made use of a physical analogue. For example, one can analyze the stresses in a beam by looking at soap bubbles, and the modes of vibration of a complex structure such as an airplane’s wing by measuring the current in an analogous electrical circuit. Today, analog computers are out of fashion, because the astonishing ability of digital computers to do arithmetic has rendered them obsolete. But this does not prove that the brain is not an analog device.

In fact there are reasons to think that the brain may not be only a digital computer. Optically, the eye is a device that throws a two-dimensional image of the world onto the retina. But the problem of vision is to explain how this image is translated into “there is a car approaching me on the wrong side of the road,” or “my friend Joe is smiling at me.” It is known that several 2-D representations of the retina are present in the brain, with particular points on the retina connected to corresponding points in the brain. These representations are presumably used in performing the calculations that interpret the information on the retina. But does the brain make use of the 2-D nature of the representation? It would in principle be possible to store the information from the retina in a form that bore no geometrical similarity to the image. Indeed, a digital computer could store the information, but it would not do so in a set of units arranged in a 2-D array. We have seen, in the example of estimating the distance from London to Brighton, that the geometric information can be made use of by an analog device, but that the same answer can be produced if the information is stored simply as a list of map references.

If the brain does make use of the 2-D pattern when computing, how might it do so? Clearly, the brain does not contain matchsticks or magnetic needles. But the distance between two points could easily be estimated by the time it takes for a message to travel from one to the other. Since most of the time would be taken up in transmitting the message across the synapses that connect one neuron to another, this method of measuring would be closely analogous to arranging matchsticks in a row. It is at least possible that much of visual perception depends on analog computation.
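To make the suggestion concrete, here is a toy illustration of the idea. The delay per synapse and the spacing of neurons are invented numbers, chosen only to show how arrival time could itself serve as the measure of distance; nothing here is meant as a claim about actual cortical wiring.

    # Toy illustration: if each synapse adds a roughly constant delay, the
    # time a signal takes to cross the cortical map is itself an analog
    # measure of distance.  Both constants below are invented for the example.

    DELAY_PER_SYNAPSE_MS = 1.0      # assumed delay at each synapse
    NEURONS_PER_MAP_UNIT = 3        # assumed neurons spanning one map unit

    def distance_from_travel_time(travel_time_ms):
        """Infer how far apart two points on the map are from arrival time."""
        synapses_crossed = travel_time_ms / DELAY_PER_SYNAPSE_MS
        return synapses_crossed / NEURONS_PER_MAP_UNIT

    print(distance_from_travel_time(30.0))   # 10.0 map units for a 30 ms delay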

A proponent of digital computing, however, could argue that the 2-D representations in the brain, of tactile and auditory as well as visual information, exist because that is a convenient way to construct the brain during development, and not because the representation is used in analog computing. This is a valid objection, but there are several reasons for thinking that the 2-D representations exist because they need to be that way. Owls can form a picture of the world by using their ears, as well as their eyes. Both pictures, auditory and visual, have a 2-D representation in the cortex, and these two representations are superimposed on one another, so that a point in the external world is represented by a group of neurons in the auditory map, and by a group of neurons in the visual map, and these two groups lie over one another. It is hard to see why this arrangement exists unless it is used in an analog computation.
