This is the most illuminating book that has yet come my way on the topic of artificial intelligence. One of its great merits is that it does not confine itself to the sterile question whether machines can properly be said to think but provides, as its title indicates, a succinct account of the ways in which modern computers work and of the social implications of their use. Mr. Bolter is an offspring of the marriage of C.P. Snow’s “Two Cultures.” A professor of classics, he also holds a master’s degree in computer science. The greater part of the book is devoted to matters of technological detail, but its originality and principal interest lie in the way in which it embeds the developments of technology in their cultural setting.

Alan Mathison Turing, to whom this book owes its title, was an English mathematician who was born in 1912 and died in 1954, most probably by his own hand. He was harried by his homosexuality, the practice of which, even between consenting adults, was still illegal in England at that date. During the war he had rendered great service to his country as a cryptographer. The paper that made him famous, “On Computable Numbers, with an Application to the Entscheidungsproblem (problem of decidability),” was published in the Proceedings of the London Mathematical Society as early as 1936. The paper was a contribution to symbolic logic but Turing was led to speak of machines by his concern with the question how far the exercise of logic resembles a mechanical procedure, and accordingly supplied computer scientists with what has become the standard concept of a Turing machine.

The definition of a Turing machine, as set out in the Collins English Dictionary, is that it is “a hypothetical universal computing machine able to modify its original instructions by reading, erasing, or writing a new symbol on a moving tape of fixed length that acts as its program.” This is misleading only insofar as it suggests that the tape is required to be of some specific length. Clearly the length of any actual tape must be finite, but no limit is set to the length of the tape in the concept of the machine.

In explaining the nature of the Turing machine, Mr. Bolter hits upon the useful analogy of a game. Like a good game, a Turing machine is self-contained. The point of the game is to reach an end result by performing a series of operations upon an initial set of data by the successive application of a finite number of rules. There is a rule that determines when the end has been achieved. Both the data and the rules are encoded in the machine. The information is stored on a tape on which the machine also writes its output. In a simple example, supplied by Mr. Bolter, the tape is divided into cells, each of which contains just one symbol, either “0,” “1,” or a blank, with a marker to show which cell is being “inspected” at any given moment. The machine can be in one or other of two logical states and the simple game consists in its moving in a systematic way from one to the other. It is played in obedience to a rule which requires “writing a symbol on the cell designated by the marker, moving the marker one cell right or left and changing the current state.” The game continues in this fashion until a rule is brought into play which tells the machine to stop. The symbols displayed on the tape at that point will then show that the purpose of the game has been achieved.
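
It may help the reader to see the game actually played out. The following sketch is mine, not anything taken from Mr. Bolter’s book: it sets such a machine down in a few lines of Python, where the rule table maps the pair formed by the current state and the symbol under the marker onto the symbol to be written, the direction in which the marker is to move, and the state to be entered next. The sample table is the familiar two-state “busy beaver,” a game that stops of its own accord after six moves, leaving four “1”s on its tape.

```python
# A minimal Turing machine simulator: a sketch of my own, not Mr. Bolter's example.
# The rule table maps (current state, symbol under the marker) to
# (symbol to write, direction to move the marker, next state).

def run(rules, state="A", max_steps=10_000):
    tape = {}                      # sparse tape: cells never written read as the blank "0"
    marker = 0                     # the cell currently being "inspected"
    for _ in range(max_steps):
        symbol = tape.get(marker, "0")
        if (state, symbol) not in rules:        # no applicable rule: the game stops
            break
        write, move, state = rules[(state, symbol)]
        tape[marker] = write                    # write a symbol on the designated cell
        marker += 1 if move == "R" else -1      # move the marker one cell right or left
        if state == "HALT":                     # a rule has told the machine to stop
            break
    return "".join(tape[cell] for cell in sorted(tape))

# A two-state game: the well-known "busy beaver" with states A and B,
# which halts after six moves with four "1"s written on the tape.
busy_beaver = {
    ("A", "0"): ("1", "R", "B"),
    ("A", "1"): ("1", "L", "B"),
    ("B", "0"): ("1", "L", "A"),
    ("B", "1"): ("1", "R", "HALT"),
}

print(run(busy_beaver))            # prints 1111
```

It is worth noticing that the simulating program itself never changes; only the rule table does, which is in miniature the point about a single universal machine made in the paragraph that follows.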

A game of such simplicity is not worth playing. Its importance lies in its serving as an illustration of the way in which all Turing machines operate. The measure of their capacity is to be found in the character of the data that is fed into them, the multiplicity of the rules, and the information that is sought. But whatever the task may be, even, to use one of Mr. Bolter’s own examples, something so complex as computing the thrust needed to put a spacecraft into orbit, the method is the same. Indeed, one universal Turing machine can be designed that can perform any task within the compass of any individual Turing machine. The logician is thus spared the labor of having to design a new machine to accommodate each program that he devises.

As Mr. Bolter himself points out, the use of the very word “machine” by Turing is to some extent misleading. The Turing machine generates no power: “it merely moves its marker back and forth along its tape—examining, erasing, and writing symbols as it applies its rules of operation.” Neither did Turing actually construct anything that could be called a machine, even by analogy. It was not until 1949 that the first electronic stored-program computer was built. This was largely owing to the work of John von Neumann, a mathematician of Hungarian origin, who flourished in the United States and is perhaps even better known for his contribution to the theory of games.

It is of interest that Turing’s main purpose, in his original paper, was to exhibit not the positive capacity of his machine but its limitations. Like those other great logicians Kurt Gödel and Alonzo Church, he was concerned to show that the surface of formal logic was not perfectly smooth: he was in search of propositions for which there was no mechanical decision procedure. It was only much later, in 1950, in an article in Mind, “Computing Machinery and Intelligence,” which Mr. Bolter does not list in his bibliography though he alludes to it in his text, that Turing disturbed philosophers by putting up his machine as a contender for human intelligence. Notoriously he predicted that by the year 2000 it would be possible to program a machine so adept at giving true and false answers to the questions put to it that in about one case out of three the questioner, going only by its responses, would mistake it for a human being. When it came to mathematics, the machine would have to practice deception insofar as it would need to curb the speed at which its answers were produced.

In spite of the extraordinary advance that has taken place in electronics in the last thirty-five years, I do not think it probable that anyone who set about trying to vindicate Turing’s prediction would actually succeed. My reason for saying this is no more than that the machine would need the resources of an encyclopedia, as well as a stock of cunning, which together would, as I see it, make a heavier demand upon the efficiency of engineers than could be met within the comparatively short space of time that Turing allowed.

In any case no sensible person would go to the trouble and expense of actually manufacturing an all-purpose computer such as Turing envisaged in his philosophical essay. It is more rewarding to produce specialized computers that regulate traffic, or replace filing clerks, or supply a proof that no more than four colors are needed to color the regions of any map so that no two adjoining regions share a color, or excel at checkers, or play a much better than average game of chess. One might wonder why a computer should not be invincible at chess, but here one has to be reminded that when it comes to actual computers there are physical as well as logical factors to be taken into account.

In the case of chess, Mr. Bolter supplies a pertinent quotation from the work of David Levy:

If the number of feasible chess games were not so enormous, a computer would be able to play perfect chess. It could analyze the initial position out to mate or to a mandatory drawn position at the termination of every line of look-ahead analysis. But the number of possible games (more than 10¹²⁰) far exceeds the number of atoms in the universe and the time taken to calculate just one move in the perfect game would be measured in millions of years.

As it is, the most powerful chess computers take an unconscionable time over their moves, and it is because they are bound, even so, to overlook some possible lines of development that grand masters, with special gifts of intuition, are able to beat them.
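
The figure Levy cites is easily reconstructed by the back-of-envelope reckoning usually credited to Claude Shannon, and the few lines below make the arithmetic explicit. The round numbers are conventional assumptions of my own choosing rather than Levy’s workings: roughly thirty legal moves for each side in a typical position, and a typical game of some forty moves by each player.

```python
import math

# A back-of-envelope reconstruction of the figure Levy cites, along the lines
# usually credited to Claude Shannon. The round numbers are assumptions of my
# own choosing, not Levy's workings.
choices_per_side = 30          # rough count of legal moves in a typical position
move_pairs = 40                # a typical game: forty moves by White and forty by Black

games = (choices_per_side ** 2) ** move_pairs   # roughly 10^118 possible games
atoms = 10 ** 80                                # common estimate of atoms in the observable universe

print(f"possible games ~ 10^{math.log10(games):.0f}")          # about 10^118
print(f"games per atom ~ 10^{math.log10(games // atoms):.0f}") # a surplus of about 10^38
```

The exact total varies with the assumptions, but any reasonable choice lands within an order of magnitude or two of Levy’s 10¹²⁰, and in any case far beyond the 10⁸⁰ atoms commonly estimated for the observable universe, which is all that the quoted passage requires.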

It may have been noted that it was on physical rather than logical grounds that I discounted the probability of anybody’s winning Turing’s wager. I am not committed to denying that the Turing machine supplies us with an adequate model of the human brain. I do not think that we yet know enough about the human brain to decide whether this is so or not. Even if it were so, we should still fall short of Leibniz’s ideal of a characteristica universalis, or philosophical language, that would so mirror the structure of the world that when any vexed question arose all that the disputants would need to do would be to come together and say, “Let us calculate.” Mr. Bolter repeatedly lays stress upon the fact that the capacity of a Turing machine is finite. I attach less weight to this than he does, since for all the immense complexity of the human brain, it contains only a finite number of cells. What is more to the purpose, in my opinion, is that for all the labor of the philosopher Rudolf Carnap on induction, we have no adequate formal theory of inductive reasoning. For the benefit of those who have been captivated by Professor Karl Popper, the same point can be made by saying that there is, in the strict sense, no such thing as a logic of discovery.

A question for philosophers is whether the machine understands the information which is fed to it, and that which it offers in response. I implied that this question was sterile because I do not know where we are to look for a criterion of understanding if it does not consist in the ability to pass something like the Turing test. The fact that the machine is programmed is not an insuperable obstacle, for could not the same be said of ourselves? Even if it is only metaphorical to speak of our being equipped at birth with a genetic code, we have to learn to speak and read and write a language; we are supplied with information of all sorts, we are taught how to evaluate it. What difference does it make, in this context, that machines are artifacts? I do not see that it need make any. The idea of a child’s emerging from a test tube does not strain our credulity. Nor need there be anything in his behavior as an adult to differentiate him from the ordinary run of human beings. Yet he too would be an artifact.

Even as I write this I feel that I am disposing of the problem too easily. What I find difficult, and here I believe that I am speaking not only for myself, is not so much to include machines in the extension of my concept of intelligence as to allow them an inner life, to credit them with feelings and emotions, to treat them as moral agents. This is, indeed, a special case of the stubborn philosophical problem of one’s knowledge of other minds. Perhaps the reason why we are disinclined to treat machines as persons in this fuller sense is that their behavior does not satisfy the criteria that we use in judging that our fellow human beings undergo experiences analogous to those of which we are conscious in ourselves. Admittedly, these criteria are not demonstrative that other human beings have such experiences, but the recognition of this fact does not lead me to believe, or even to suspect, that I alone am really sentient. The explanatory power of the hypothesis that those whom I take to be other persons are also literally conscious is so great that I do not seriously think of questioning it. I may often be mistaken about the specific character of their beliefs or feelings, but that is not a sufficient ground for my hesitating to credit them with any such mental states at all.

Suppose now, what is not so far the case, that machines were to satisfy the criteria on the basis of which we ascribe experiences to one another. I regard this supposition as improbable but not as logically impossible. If it became true, then our only reason for denying them the full possession of consciousness would be on the score of their origin or their physical composition. I do not see why they should be disqualified on either of those grounds. I think that we should treat them as persons with a peculiar constitution. I am suggesting not only that this would actually be our reaction but also, more hesitantly, that it would be justified.

If I read him correctly, Mr. Bolter has no hesitations on this score. Indeed, he sometimes writes as if the assimilation of human beings to Turing machines were an accomplished fact. In the introduction to his book, where he first introduces his concept of Turing’s man, he explains that he attaches it to those who accept a view of man and nature which is based on their faith in the potentiality of the Turing machine. I think that he would have to admit that there are still many people to be found who are not Turing’s men, in this special sense, but he maintains that “we are all liable to become Turing’s men, if our work with the computer is intimate and prolonged and we come to think and speak in terms suggested by the machine.”

A consequence of our becoming Turing’s men would be, in Mr. Bolter’s view, “the most complete integration of humanity and technology, of artificer and artifact, in the history of the Western cultures.” I have no quarrel with this judgment. On the contrary, I am repeatedly impressed by Mr. Bolter’s grasp of the relation between the technology of an age and its conception of the world and of man’s place in it. Thus he shows, in a very convincing way, how strongly the philosophies of Plato and Aristotle bear the imprint of the crafts of weaving and pottery, the imposition of form on matter, which flourished in ancient Greece. In similar fashion, he traces the influence of the invention of the mechanical clock on the metaphysics of Descartes and of those who followed him. As Alfred North Whitehead showed in his Science and the Modern World, the price paid for the seventeenth-century scientific renaissance was the acquiescence in what Whitehead called the bifurcation of nature. Mr. Bolter might fairly claim that this division is in the process of being overcome. If so I agree with him that this is a happy issue, even though I am in my own person too old to be one of Turing’s men. A proper emphasis upon our kinship with nature should lead not to a narrowing but to an enlargement of our sympathies.
