
How Smart Are Computers?

What Computers Can’t Do: A Critique of Artificial Reason

by Hubert L. Dreyfus
Harper & Row, 259 pp., $8.95

Electronic machines of the kind generically called “computers” can now do a number of things at least as well as human beings, and in some cases better. Many of these tasks are boring, such as finding addresses or counting things. Immunity to boredom is one thing that helps to give computers the edge over human beings in some tasks. Another is speed of operation: only a computer could do the calculations necessary for landing a module on the moon, since only a computer could do the sums in less time than it takes the module to get there.

In some cases, the computer’s program guarantees an answer to the problem in hand. Whether this is so depends on several things: first, whether the problem is one for whose solution a determinate procedure (called an algorithm) can be specified. An algorithm is a set of instructions which when carried out is bound eventually to yield what is required. Looking things up in lists and doing addition are two among many tasks for which there exist algorithms, and computers spend most of their time on just such things.
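As a minimal sketch (my own illustration, not the reviewer's), one of those list-lookup algorithms might be written out as follows; the function name and the toy directory are assumptions made for the example:

```python
# A minimal illustration of an algorithm: a determinate procedure that,
# carried out step by step, is bound eventually to yield what is required.

def find_address(directory, name):
    """Linear search: inspect each entry in turn; guaranteed to finish
    after at most len(directory) steps, with a definite answer."""
    for entry_name, address in directory:
        if entry_name == name:
            return address
    return None  # a determinate answer either way: the address, or "absent"

directory = [("Dreyfus", "Berkeley"), ("Minsky", "MIT")]
print(find_address(directory, "Minsky"))  # -> MIT
```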

But even if a task can be specified in an algorithm, there remain vitally important questions of whether a machine could complete the task in an acceptable time or within the limits of the amount of information it can process. These restrictions are so important that the question of whether there is an algorithm for a given task may be of little practical interest. Thus, in principle, there could be programs for playing checkers that involved counting out all possible future combinations of moves and countermoves (though even this would not by itself provide the way to choose the best moves). But assuming that at any given point five moves on the average are possible, the number of possibilities twenty moves ahead is greater than the number of microseconds in a year—which forecloses that way of going about it.
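The arithmetic behind that claim can be checked directly. The following lines (my own working, using only the figures the review states) confirm that five possibilities per move, counted twenty moves ahead, outrun the microseconds in a year:

```python
# With 5 legal moves per position on average, counting out every line of
# play 20 moves ahead means examining 5**20 combinations.

positions = 5 ** 20                                  # ~9.5e13
microseconds_per_year = 365 * 24 * 60 * 60 * 10**6   # ~3.15e13

print(positions)                           # 95367431640625
print(microseconds_per_year)               # 31536000000000
print(positions > microseconds_per_year)   # True: exhaustive counting is foreclosed
```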

For most interesting tasks either there is no algorithm or it is not a practicable one. So machines must be programmed not to grind through the task but to proceed “heuristically”—to search intelligently (as we would put it), to show some insight into what is relevant and promising, and to learn what is useful and what is not. Such programs, of course, are in themselves as determinate as the others, and the machine’s states are still determined by the program and its earlier states: the difference is that the program does not contain instructions which lead inevitably and by exhaustion to a solution, but rather is designed to throw up routines and strategies which should prove fruitful in finding a solution.
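A hedged sketch of the distinction drawn above: the program below is fully determinate, yet instead of grinding exhaustively through every line it ranks alternatives by a heuristic estimate of promise. The toy problem and all the names are my own illustration, not drawn from the book:

```python
# Best-first search: expand the state judged most promising first.
# No guarantee of success by exhaustion; only a strategy designed
# to "prove fruitful" in finding a solution.

import heapq

def best_first(start, goal, neighbours, estimate):
    frontier = [(estimate(start, goal), start)]
    seen = {start}
    while frontier:
        _, state = heapq.heappop(frontier)  # most promising state first
        if state == goal:
            return True
        for nxt in neighbours(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (estimate(nxt, goal), nxt))
    return False

# Toy problem: reach a target integer from 1 by steps of +1 or *2.
print(best_first(1, 96,
                 neighbours=lambda n: [n + 1, n * 2] if n <= 96 else [],
                 estimate=lambda n, g: abs(g - n)))  # -> True
```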

In talking of “computers” here I have in mind, as Dreyfus has throughout his book, digital machines, that is to say, machines that represent all the information they handle by combinations of elements each of which can be either of two states (on or off, for instance), and are thus machines which conduct their procedures in a series of discrete steps. In saying that a digital machine goes in discrete steps one is saying something about how it represents information; one is not saying that it goes from one physical state to another by instantaneous magic, without going through places in between, but only that the places in between do not have any significance in representing information. When a digital device such as an adding machine is contrasted with an analogue device such as a slide rule, the point is not that one clicks and the other creeps, but that any points that the latter creeps through, however close together, do represent something.
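The contrast can be made concrete with a small sketch (again my own, under assumed names): a digital device maps a magnitude onto one of finitely many discrete codes, so the positions in between the codes represent nothing, whereas an analogue reading lets every intermediate point stand for something:

```python
# Digital versus analogue representation of a magnitude in [0, 1).

def digital(x, bits=4):
    """Quantize x onto one of 2**bits discrete codes; values falling
    between codes are not distinguished and represent nothing."""
    code = int(x * 2**bits)
    return format(code, f"0{bits}b")

def analogue(x):
    """A slide-rule-like reading: every point, however close to its
    neighbours, is itself a representation."""
    return x

print(digital(0.500))   # 1000
print(digital(0.531))   # 1000 -- the same code; the difference is lost
print(analogue(0.531))  # 0.531 -- the creep itself carries information
```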

Some still hope, and more fear, that foreseeable developments of these techniques will yield digital machines which over an impressive and growing range of human competence will equal or surpass human ability. In this book Dreyfus aims to refute these expectations and to allay these fears by showing on general philosophical grounds that the aim is impossible and that the research which seeks to develop machine intelligence so that it can solve interesting and really complex problems is doomed to failure, except perhaps in solving some very narrow problems. He starts with the record and claims that there has been over the last fifteen years a repeated pattern of initial minor success, grandiose promises and predictions, then disappointments, diminishing returns, and finally silence as the research bogs down and the promises remain unfulfilled.

Similar patterns can be seen in various fields of computer research. Following Minsky, a leading worker whom he extensively criticizes, Dreyfus distinguishes two main branches of inquiry that have seemed at one time or another very promising. One is Cognitive Simulation (CS), which takes tips from the ways humans actually solve problems and from the short cuts they use, and seeks to provide methods for machines which will significantly reproduce some psychological features of intelligent human behavior in solving mathematical problems, for example, or translating from one language to another. The other, Artificial Intelligence (AI), is, in Minsky’s words,

…an attempt to build intelligent machines without any prejudice toward making the system simple, biological, or humanoid…one might first need experience with working intelligent systems (based if necessary on ad hoc mechanisms) if one were eventually to be able to design more economical schemes. [Quoted by Dreyfus, p. 43]

With both these approaches, initial small successes led to overconfidence: Dreyfus rehearses numerous instances of this. But there is a real question about how significant much of this now aging material is. Even if early predictions of computers’ chess competence were wildly overoptimistic, it really is not very interesting to be told again that a certain chess program was beaten in 1960 by a ten-year-old: it is even less interesting than the fact (a trifle gracelessly admitted by Dreyfus) that more recently a program called MacHack beat Dreyfus.

Artificial Intelligence has gone through a sober process of realizing that human beings are cleverer than it supposed. It has turned to a more cautious and diversified strategy of accumulating “know-how” rather than mounting frontal assaults. These developments—of which one will gather little from Dreyfus—are more appropriate to the sort of phenomenon intelligence is, and the boasts and disappointments which Dreyfus tirelessly rehearses are of decreasing relevance to assessing prospects now.

The more important part of Dreyfus’s case lies not in the references to past history and slow progress, but in the general considerations which, he claims, show that the failures are inevitable and that it is to be expected that relatively trivial initial success will run out when one tries something more complex. The predictions of large success on the basis of small are not just examples of technologists’ euphoria or grant collectors’ publicity, but rely on a principle which is itself fundamental to the kind of analysis that goes into CS and AI, namely that moving from simple to complex is just moving from less to more—that the development of more of the same can be expected, in one way or another, to crack the problem. This principle Dreyfus rejects.

Dreyfus cites a number of features of human experience in problem solving that he claims are essential to problem solving and could not conceivably be reproduced or imitated by a computer. They are all of what might, very loosely, be called a “Gestalt-ish” kind, and include the phenomena of “fringe consciousness” (as when one is dimly aware of the relevance of some ill-defined factor), “zeroing in” (as when a problem-situation “organizes itself around” a promising approach), and tolerance of ambiguity, under which, for example, the mind can succeed in disregarding in a certain context a possible significance of a word which in another context would be the one to present itself. In general, the human mind can seize on what is essential in a given situation and mentally organize the whole problem in the light of this understanding.

It is I, and not Dreyfus, who have assembled this set of requirements for problem solving in such short order, though it must be said that he himself runs through them briskly and successively in his own exposition. But when they are brought together, one gets the first glimpse of a problem which grows throughout Dreyfus’s book, that is, the exact status he assigns to such phenomena. Dreyfus tends to present them as though they were special ways human beings have of going about solving problems, ways not employable by computers but which have to be used if problems are going to be solved. But it is not clear that the requirements are all this, or indeed that they all have any one kind of relevance to problem solving. Thus an ability to distinguish the essential from the inessential does not provide a special way of solving problems, available to humans and lacking to machines: solving a complex problem is itself an exercise in telling the essential from the inessential, and to say that machines cannot do this is not to uncover a deep reason why machines cannot solve that kind of problem, but is just to say that they cannot. Dealing with ambiguity seems to be similar; and it certainly is, if we assume that one aim of the exercise must be to produce machines that can handle natural language.

“Zeroing in,” on the other hand, seems to be of a different, though perhaps rather ambiguous, status. It could just refer to the human ability to arrange the data of a problem in a way conducive to a solution, seeing the relevant ways to use the data, and so on, in which case this ability seems once more logically indistinguishable from an ability to solve the problem, or at least (what is wanted) to solve it economically. But it could, as characterized by Dreyfus, refer to a certain kind of experience, of a Gestalt character, in which the data “turn round” and “structure themselves” and “present themselves” in a relevant way. It is the sort of experience, perhaps, that may be helpful to humans in problem solving. There are many reasons for wondering whether any machine could have an experience of that sort; but also, there are few reasons for supposing that it would have to in order to solve problems.

The confusion here is encouraged, I think, by Dreyfus’s own philosophy, which does the best it can to obliterate distinctions between the problem situation itself and how it seems to the problem solver—a distinction without which the whole issue and some of Dreyfus’s own assertions become unintelligible. He presents certain capacities as at once indispensable to problem solving and inconceivable for the machine. But, on inspection, these items tend to dissolve into some things that are certainly essential to problem solving (as being indeed in various degrees restatements of what problem solving is) but which are not shown to be inconceivable for the machine, and, on the other hand, things of the Gestalt-experience kind that may well be inconceivable for the machine but are not shown to be in themselves essential to problem solving—at least, for machines.
