Electronic machines of the kind generically called “computers” can now do a number of things at least as well as human beings, and in some cases better. Many of these tasks are boring, such as finding addresses or counting things. Immunity to boredom is one thing that helps to give computers the edge over human beings in some tasks. Another is speed of operation: only a computer could do the calculations necessary for landing a module on the moon, since only a computer could do the sums in less time than it takes the module to get there.

In some cases, the computer’s program guarantees an answer to the problem in hand. Whether this is so depends on several things: first, whether the problem is one for whose solution a determinate procedure (called an algorithm) can be specified. An algorithm is a set of instructions which, when carried out, is bound eventually to yield what is required. Looking things up in lists and doing addition are two among many tasks for which there exist algorithms, and computers spend most of their time on just such things.
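
To make the notion concrete, here is a minimal sketch in Python (an illustration added here, not anything from the book): both procedures are algorithms in the sense just defined, since each is bound, after at most a fixed number of steps, to yield what is required.

```python
# Two everyday algorithms: looking something up in a list and adding.
# Each is a determinate procedure that must terminate with the answer.

def find_address(directory, name):
    """Check each (name, address) entry in turn; after at most
    len(directory) steps this either returns the address or reports
    that the name is absent."""
    for entry_name, address in directory:
        if entry_name == name:
            return address
    return None  # name not in the directory

def add(numbers):
    """Fold each number into a running total; the total after the
    last number is, by definition, the required sum."""
    total = 0
    for n in numbers:
        total += n
    return total

directory = [("Jones", "14 Elm St"), ("Smith", "3 Oak Ave")]
print(find_address(directory, "Smith"))  # -> 3 Oak Ave
print(add([7, 12, 23]))                  # -> 42
```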

But even if a task can be specified in an algorithm, there remain vitally important questions of whether a machine could complete the task in an acceptable time or within the limits of the amount of information it can process. These restrictions are so important that the question of whether there is an algorithm for a given task may be of little practical interest. Thus, in principle, there could be programs for playing checkers that involved counting out all possible future combinations of moves and countermoves (though even this would not by itself provide the way to choose the best moves). But assuming that at any given point five moves on the average are possible, the number of possibilities twenty moves ahead is greater than the number of microseconds in a year—which forecloses that way of going about it.
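
The arithmetic behind that claim is easy to reproduce; the sketch below (again an illustration added here) confirms that a tree with an average of five moves at each point, counted out twenty moves ahead, already outruns the microseconds in a year.

```python
# Reproducing the arithmetic above: five moves on average at each
# point, counted out twenty moves ahead.

positions = 5 ** 20                                  # 95,367,431,640,625
microseconds_per_year = 365 * 24 * 60 * 60 * 10**6   # 31,536,000,000,000

print(f"{positions:,} positions vs {microseconds_per_year:,} microseconds")
print(positions > microseconds_per_year)  # True: exhaustion is foreclosed
```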

For most interesting tasks either there is no algorithm or it is not a practicable one. So machines must be programmed not to grind through the task but to proceed “heuristically”—to search intelligently (as we would put it), to show some insight into what is relevant and promising, and to learn what is useful and what is not. Such programs, of course, are in themselves as determinate as the others, and the machine’s states are still determined by the program and its earlier states: the difference is that the program does not contain instructions which lead inevitably and by exhaustion to a solution, but rather is designed to throw up routines and strategies which should prove fruitful in finding a solution.
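
The difference can be made concrete with a toy search (an illustration added here, not any program the book discusses). The procedure below is steered toward promising states by an estimate of distance to the goal instead of grinding through every possibility; yet the program itself remains perfectly determinate, and, unlike an algorithm, it carries no guarantee of success.

```python
import heapq

def best_first(start, goal, neighbors, estimate):
    """Expand whichever known state the heuristic estimate judges
    most promising; determinate, but with no guarantee of success."""
    frontier = [(estimate(start), start, [start])]
    seen = {start}
    while frontier:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (estimate(nxt), nxt, path + [nxt]))
    return None  # the heuristic may simply fail to find a solution

# Example: walk along the integers from 0 to 9. The estimate
# (distance to the goal) steers the search rightward at once,
# rather than fanning out over every reachable number.
path = best_first(0, 9,
                  neighbors=lambda n: [n - 1, n + 1],
                  estimate=lambda n: abs(9 - n))
print(path)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```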

In talking of “computers” here I have in mind, as Dreyfus has throughout his book, digital machines, that is to say, machines that represent all the information they handle by combinations of elements each of which can be in either of two states (on or off, for instance), and are thus machines which conduct their procedures in a series of discrete steps. In saying that a digital machine goes in discrete steps one is saying something about how it represents information; one is not saying that it goes from one physical state to another by instantaneous magic, without going through places in between, but only that the places in between do not have any significance in representing information. When a digital device such as an adding machine is contrasted with an analogue device such as a slide rule, the point is not that one clicks and the other creeps, but that any points that the latter creeps through, however close together, do represent something.
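
The contrast can be put in a small sketch (an illustration of the point, not a description of any particular machine): in the digital encoding below only the discrete codes carry any representational significance, whereas every position a slide-rule cursor creeps through represents some number.

```python
value = 0.7303  # a quantity a slide-rule cursor might sit at

# Analogue: the representation is a continuous position, so even a
# minutely different position represents a (minutely different) number.
nearby = value + 1e-6  # still a meaningful representation

# Digital: the value is held in eight two-state elements, and only
# the 256 discrete codes mean anything; "positions" between two
# codes have no representational significance at all.
levels = 2 ** 8
code = round(value * (levels - 1))   # -> 186
bits = format(code, "08b")           # -> '10111010'
decoded = code / (levels - 1)        # -> ~0.7294, the nearest code

print(bits, decoded)
```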

Some still hope, and more fear, that foreseeable developments of these techniques will yield digital machines which over an impressive and growing range of human competence will equal or surpass human ability. In this book Dreyfus aims to refute these expectations and to allay these fears by showing on general philosophical grounds that the aim is impossible and that the research which seeks to develop machine intelligence so that it can solve interesting and really complex problems is doomed to failure, except perhaps in solving some very narrow problems. He starts with the record and claims that there has been over the last fifteen years a repeated pattern of initial minor success, grandiose promises and predictions, then disappointments, diminishing returns, and finally silence as the research bogs down and the promises remain unfulfilled.

Similar patterns can be seen in various fields of computer research. Following Minsky, a leading worker whom he extensively criticizes, Dreyfus distinguishes two main branches of inquiry that have seemed at one time or another very promising. One is Cognitive Simulation (CS), which takes tips from the ways humans actually solve problems and from the short cuts they use, and seeks to provide methods for machines which will significantly reproduce some psychological features of intelligent human behavior in solving mathematical problems, for example, or translating from one language to another. The other, Artificial Intelligence (AI), is, in Minsky’s words,

…an attempt to build intelligent machines without any prejudice toward making the system simple, biological, or humanoid…one might first need experience with working intelligent systems (based if necessary on ad hoc mechanisms) if one were eventually to be able to design more economical schemes. [Quoted by Dreyfus, p. 43]

With both these approaches, initial small successes led to overconfidence: Dreyfus rehearses numerous instances of this. But there is a real question about how significant much of this now aging material is. Even if early predictions of computers’ chess competence were wildly overoptimistic, it really is not very interesting to be told again that a certain chess program was beaten in 1960 by a ten-year-old: it is even less interesting than the fact (a trifle gracelessly admitted by Dreyfus) that more recently a program called MacHack beat Dreyfus.

Artificial Intelligence has gone through a sober process of realizing that human beings are cleverer than it supposed. It has turned to a more cautious and diversified strategy of accumulating “know-how” rather than mounting frontal assaults. These developments—of which one will gather little from Dreyfus—are more appropriate to the sort of phenomenon intelligence is, and the boasts and disappointments which Dreyfus tirelessly rehearses are of decreasing relevance to assessing prospects now.

The more important part of Dreyfus’s case lies not in the references to past history and slow progress, but in the general considerations which, he claims, show that the failures are inevitable and that it is to be expected that relatively trivial initial success will run out when one tries something more complex. The predictions of large success on the basis of small are not just examples of technologists’ euphoria or grant collectors’ publicity, but rely on a principle which is itself fundamental to the kind of analysis that goes into CS and AI, namely that moving from simple to complex is just moving from less to more—that the development of more of the same can be expected, in one way or another, to crack the problem. This principle Dreyfus rejects.

Dreyfus cites a number of features of human experience in problem solving that he claims are essential to problem solving and could not conceivably be reproduced or imitated by a computer. They are all of what might, very loosely, be called a “Gestalt-ish” kind, and include the phenomena of “fringe consciousness” (as when one is dimly aware of the relevance of some ill-defined factor), “zeroing in” (as when a problem-situation “organizes itself around” a promising approach), and tolerance of ambiguity, under which, for example, the mind can succeed in disregarding in a certain context a possible significance of a word which in another context would be the one to present itself. In general, the human mind can seize on what is essential in a given situation and mentally organize the whole problem in the light of this understanding.

It is I, and not Dreyfus, who have assembled this set of requirements for problem solving in such short order, though it must be said that he himself runs through them briskly and successively in his own exposition. But when they are brought together, one gets the first glimpse of a problem which grows throughout Dreyfus’s book, that is, the exact status he assigns to such phenomena. Dreyfus tends to present them as though they were special ways human beings have of going about solving problems, ways not employable by computers but which have to be used if problems are going to be solved. But it is not clear that the requirements are all this, or indeed that they all have any one kind of relevance to problem solving. Thus an ability to distinguish the essential from the inessential does not provide a special way of solving problems, available to humans and lacking to machines: solving a complex problem is itself an exercise in telling the essential from the inessential, and to say that machines cannot do this is not to uncover a deep reason why machines cannot solve that kind of problem, but is just to say that they cannot. Dealing with ambiguity seems to be similar; and it certainly is, if we assume that one aim of the exercise must be to produce machines that can handle natural language.

“Zeroing in,” on the other hand, seems to be of a different, though perhaps rather ambiguous, status. It could just refer to the human ability to arrange the data of a problem in a way conducive to a solution, seeing the relevant ways to use the data, and so on, in which case this ability seems once more logically indistinguishable from an ability to solve the problem, or at least (what is wanted) to solve it economically. But it could, as characterized by Dreyfus, refer to a certain kind of experience, of a Gestalt character, in which the data “turn round” and “structure themselves” and “present themselves” in a relevant way. It is the sort of experience, perhaps, that may be helpful to humans in problem solving. There are many reasons for wondering whether any machine could have an experience of that sort; but also, there are few reasons for supposing that it would have to in order to solve problems.

The confusion here is encouraged, I think, by Dreyfus’s own philosophy, which does the best it can to obliterate distinctions between the problem situation itself and how it seems to the problem solver—a distinction without which the whole issue and some of Dreyfus’s own assertions become unintelligible. He presents certain capacities as at once indispensable to problem solving and inconceivable for the machine. But, on inspection, these items tend to dissolve into some things that are certainly essential to problem solving (as being indeed in various degrees restatements of what problem solving is) but which are not shown to be inconceivable for the machine, and, on the other hand, things of the Gestalt-experience kind that may well be inconceivable for the machine but are not shown to be in themselves essential to problem solving—at least, for machines.

If one lays aside the covert appeal to Gestalt experience, most of Dreyfus’s arguments look thin. He may well be right in claiming that many tasks that are simple for human beings would need systems quite undreamed of in practice for their machine simulation. But his claim to have proved the limitations of computers is exaggerated. If one shakes together the considerations that Dreyfus brings forward, one can extract, I think, three kinds of arguments for his conclusion; and all, it seems to me, leave the issue still open.

First, there is the general “anti-Platonic” argument. This is not so much one argument as a class of considerations, the general upshot of which is that both machine simulations of human skills (such as CS) and machine reproductions by other means of human skills (such as AI) depend on an assumption which is pervasive throughout Western, modern, or at least technological thought, namely, that rationality consists in reducing experience to discrete atomistic elements and handling them by determinate rules of procedure which can be clearly and discursively spelled out.

This assumption Dreyfus repeatedly calls the “Platonic” assumption, thus making a historical claim which it would be tedious to go on about, but which in some of its applications at least is downright amazing; as in the apparent suggestion (pp. 123-124) that Plato had the quasi-technological ambition of reducing the empirical world to determinate rule-governed order—something which the historical Plato repeatedly claimed to be impossible. (Though Dreyfus frequently quotes Heidegger, he does not in this historical connection; but the picture of Western intellectual history of course comes from Heidegger, who holds that Socrates and Plato with their clanging inhuman essences scared off the pre-Socratics, those shepherds of Being.)

Dreyfus contests what he calls the “Platonic” assumption on several planes. At the psychological level, his point seems to be that human beings do not in fact think about things, and could not solve problems, just by a “Platonic” style of step-by-step discursive thought. This seems true, but of doubtful relevance; it may tell against some machine men who claim to be guided by actual psychological data, but beyond that it seems to loop back into the Gestalt-experience consideration. Even when it is the aim of a machine intelligence researcher to construct a program which will solve problems “the way we do,” it cannot be a requirement that the program should have the same sort of shape as our experience of solving problems—indeed, it is not in the least clear what such a requirement would mean. Admittedly—and here Dreyfus makes a good bit of capital—the idea of a machine solving problems “the way we do” is itself very unclear; but, as we shall see, Dreyfus has neglected to look in the obvious direction from which content might be given to that idea.

In any case, for AI researchers the aim is not to get a machine to solve problems “the way we do,” but just to get a machine to solve problems. Against them also Dreyfus applies his “anti-Platonic” argument, on the ground that their assumptions about the possibility of modeling intelligent activity on a digital machine involve the “Platonic” assumption, not this time about the processes of human thought, but about what the world is like and what an explanation or theory about the world, and about intelligent behavior, must be like. A theory about intelligent activity might be “Platonic” without that activity itself being “Platonic,” as the movement of the planets can be described in differential equations without the planets themselves solving differential equations; but, Dreyfus argues, there is no reason to believe that such a theory, which might enable one to model the behavior in a digital machine, is possible, and it is only the “Platonic” assumption that makes people think that it is possible.

The trouble in evaluating Dreyfus’s argument here is that he leaves it unclear exactly how a theory can be said to be “Platonic,” and how strong a restriction on the theory this is. His argument requires at least that any theory which can be modeled in a digital machine is “Platonic”; but he himself mentions the important result that any analogue theory can, if it is sufficiently precise, be modeled in a digital machine as well. He has an argument apparently designed to get round this fact, but I have not been able to understand it. It is not even clear how far round it he wants to get.
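
The result he mentions can at least be illustrated (a sketch of the general idea added here, not anything in the book): a continuous theory, here the differential equation x'' = -x of a harmonic oscillator, is modeled on a digital machine by discrete steps, to a precision limited only by the step size; and, as with the planets, nothing in the system being modeled is itself solving equations.

```python
import math

# Discrete (digital) modeling of a continuous (analogue) theory:
# integrate x'' = -x step by step from x(0) = 1, v(0) = 0.
dt = 0.0001            # step size: the smaller, the more precise the model
x, v = 1.0, 0.0
t = 0.0
while t < math.pi:     # follow the motion over half a period
    x, v = x + v * dt, v - x * dt   # one discrete Euler step
    t += dt

print(x)                  # ~ -1.0
print(math.cos(math.pi))  # -1.0: the continuous theory's exact value
```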

Dreyfus’s rejection of the “Platonic” assumption seems to boil down to the usual antimechanist, antiphysicalist, or antideterminist claim that intelligent behavior cannot be scientifically understood, which it will be no surprise to find is rejected by AI researchers. I do not think that Dreyfus wants it just to come down to that, but I have not found enough in his characterization of the “Platonic”—once the Gestalt-experience element is laid aside—to stop it from just coming down to that.

Dreyfus’s second general argument might be called the “all or nothing” or “form of life” argument. The clearest application of this is in the centrally important matter of understanding natural language, and the ability of speakers to cope with a high level of ambiguity, to catch on to what is relevant, and so forth. The sad failure of the project, much vaunted at one time, of constructing programs to translate from one natural language to another has revealed among other things how much information about the world a machine has to have available to it in order to make sense of even very simple human communications; and also how flexible and open-ended the deployment of that information has to be if the computer program, even when it seems to be getting along quite well, is not to collapse into breathtaking idiocy.
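
A toy disambiguator (an illustration added here, not the book’s) shows the shape of the difficulty: it picks a sense of “pen” from hand-listed context cues and gets along quite well on the sentences it was built for; but a sentence like Bar-Hillel’s famous “The box was in the pen” calls on knowledge of the relative sizes of boxes, playpens, and fountain pens that no such list supplies, and the program answers confidently all the same.

```python
SENSE_CUES = {
    "writing instrument": {"ink", "write", "paper", "signed"},
    "enclosure":          {"pig", "sheep", "fence", "farm"},
}

def sense_of_pen(sentence):
    """Score each sense by how many of its cue words occur nearby."""
    words = set(sentence.lower().replace(".", "").split())
    scores = {sense: len(cues & words) for sense, cues in SENSE_CUES.items()}
    return max(scores, key=scores.get)

print(sense_of_pen("He signed the letter with his pen."))  # writing instrument
print(sense_of_pen("The pig escaped from its pen."))       # enclosure
print(sense_of_pen("The box was in the pen."))             # no cue matches:
# the program still answers, and its confident guess is exactly the
# collapse into "breathtaking idiocy" described above.
```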

Dreyfus rightly emphasizes the importance of this sort of consideration but his treatment of the point is exaggerated. On matters of fact, he fails to acknowledge the extent to which current work on natural language programs shows greatly increased sensitivity to context; and on questions of principle, his treatment tends to distract attention from some of the most interesting questions that might be asked. He claims that the ever-present need of interpretation, and the indefinite range of knowledge and understanding that the human has to call upon, mean that we have to take a human being’s “world” as a whole; Wittgenstein’s celebrated remark about a language being a “form of life” is used here to suggest how much a whole way of being would have to be given to a machine if it were really to deal intelligently with its environment and properly understand what was said to it.

But might not this just be overdoing it? For one thing—to mention an issue which is hardly ever touched on by Dreyfus—even if the human world of understanding has this vast and indivisible complexity, perhaps the intelligent activity of some simpler organism might be adequately simulated? Some of Dreyfus’s other arguments would no doubt aim to exclude that as well; but so far as this argument goes, it is as well to remember that machine intelligence would already have had a vast triumph if it could simulate on any substantial scale the intelligent activity of creatures less culturally elaborated than man.

Another, and more important, point is that Dreyfus’s exclusive insistence on the ways in which human abilities, when all are present and working, are tightly interrelated, and on the extent to which information of different types is brought to bear on interpretative questions in various and plastic ways, distracts attention from the question of the extent to which abilities might be separable from one another, and of the kinds of simplifications which might yield a recognizable fragment of human intelligent behavior. Dreyfus seems, indeed, to think that the idea of such a fragment is nonsense; but we need more than phenomenological descriptions of the experience (or “world”) of the functioning human being to convince us of that. We need detailed theoretical and experimental study of simpler animals and partial abilities, and Dreyfus’s argument, which is all about a world both whole and human, neither proves its impossibility nor anticipates its results.

Dreyfus seems to make the demand that machines, in order to be intelligent at all, should be unfailingly at least as intelligent as human beings sometimes are. That demand is absurd; and, further, there is little reason to believe that the ways in which machines will succeed and fail in displaying intelligence will be just the same as with human beings or other animals. We should expect neither that machines will make no mistakes, nor that their mistakes will necessarily be related in a familiar human way to their successes.

The third general argument, or type of argument, is the “infinite regress” argument, which rests on the idea that the rules or principles of understanding that human beings use need interpretation and that their application is in various ways relative to context. Now rules or principles of a higher order can be used, which determine the application of the ones of a lower rank, and sort contexts into different kinds; but then these rules will themselves need interpretation. If every rule needs another rule, this leads to a vicious regress; it has to be stopped. It cannot be stopped, Dreyfus argues, by an appeal to rules that are intrinsically independent of the context they are to be applied to, or are self-interpreting—there are no such things, and the idea that there are is a fiction of “Platonic” thinkers. Rather, the regress stops, as Wittgenstein (once more) claimed, in certain concrete facts of human life and practice: we do just “go on” in certain ways, “catch on” to some things rather than others; justifications come to an end in a shared form of life.

I find this argument difficult because it is not clear to me what exactly these concrete facts are, and in particular whether they apply to the species as a whole, or culturally differ between societies, to mention only the crudest alternatives—a difficulty which I also have with Wittgenstein’s own account of them. The impression often given by such arguments is that these facts—which no doubt exist—are in some basic way inexplicable: an impression much helped by the very fact that one is given no adequate directions about the level (cultural, psychological, zoological) on which one should look for them, and hence for their explanation. But while facts of this kind no doubt exist, there is no reason at all to expect them to be inexplicable. Until it is made clear why they have to remain inexplicable, it is likely to remain unclear why knowledge of them, or of some weak but adequate version of them, cannot be modeled into a machine.

I have already mentioned Dreyfus’s own philosophy, which provides the basis for some of his criticism, and which he even claims is capable of producing explanations of intelligent and purposive behavior more adequate than any available to “Platonic” theories. The philosophy in question is a type of phenomenology owing much to Heidegger and to Merleau-Ponty. It is not, at least in this offering, very easy to take it seriously, or even patiently. One of its characteristics is its reliance on terms which sound explanatory, but which in fact conceal in their ambiguity many of the real questions that need to be asked:

But what if the work of the central nervous system depends on the locomotive system, or to put it phenomenologically, what if the “higher,” determinate, logical, and detached forms of intelligence are necessarily derived from and guided by global and involved “lower” forms? [Pp. 148-149]

What indeed? “Derived from” and “guided by” here are sheer bluff, and will remain so unless some more “Platonic” work is done.

Another trait—almost definitional of the method—is to offer a graphic, and often pointedly inaccurate, description of a perceived situation which allegedly reveals its nature:

Thus, in ordinary situations, we say we perceive the whole object, even its hidden aspects, because the concealed aspects directly affect our perception. [P. 153]

That contains at least two straightforward falsehoods, and unhelpful ones: to suggest that such characterizations could helpfully replace scientific investigation of why and how things look solid under certain conditions is absurd.

Such untruths have a built-in defense mechanism: they are so obviously untrue that anyone who protests of their literal falsehood can be accused of having missed their nonliteral point. But that mechanism is not enough to keep them alive in a world of hard questions; nor, incidentally, to justify their use by Dreyfus, who earlier in the book has gone in for a great deal of donnish nit-picking against formulations, loose but often adequately intelligible, offered by people on the other side.

Another characteristic feature of phenomenology that deeply affects Dreyfus’s argument is its traditional tendency, despite heroic efforts on the part of its leading exponents, to slide inexorably in the direction of idealism, the view that the world can only be coherently regarded as the-world-as-it-seems-to-us, or, worse still, the-world-as-it-seems-to-me. Dreyfus constantly uses formulae that present the world which men perceive and act in as already constituted by their experiences and perceptions: thus at page 136 he seems to accept that “only in terms of situationally determined relevance are there any facts at all”; and at page 184 writes, “We are at home in the world and can find our way about in it because it is our world produced by us as the context of our pragmatic activity…[our] activity has produced the world.” Even my personal memories are “inscribed in the things around me” (p. 178).

Of course, there are ways of taking these sayings. But it seems that Dreyfus wants to take them in such a way that the whole idea of a scientific theory which regarded the objects of the human world—such as trees, for instance—in abstraction from human interests would be absurd. From that, I am inclined to think, he derives his most general “anti-Platonic” opinions: if the objects of the human world cannot be regarded in abstraction from human perception, activities, and interests, then there can be no scientific account which takes such objects and human beings, and inquires how they interact.

But if that argument is going to work, it looks as though the idealistic premise from which it has to be derived must be taken in an enormously strong and indeed lunatic way. It has to be taken, in fact, in such a form as to imply that, since trees are “produced” and so forth by human interests, a world in which there were no humans would be a world without trees—indeed, it would not be a world at all. If one objects to this, naïvely, by saying that it must be possible for there to be a world without humans, because there used to be a world without humans (including, however, trees), the reply will be that one has misunderstood.

But the use that Dreyfus makes of these idealist formulae for his purposes seems to me precisely to require the crude, indeed laughable, interpretation which he would immediately reject. For if we can conceive of trees without humans, why cannot we scientifically investigate their interactions with humans? If the sense in which the world is “produced” by humans is just the less hectic sense in which the world must be described from a human point of view, why is it impossible that among the things described from that point of view are the causal relations involved in the human perception of trees?

Dreyfus does not, however, surrender everything to the realm of production by human experience; apart from our experience, and really “there” in some sense, is a flux of energy, atomic particles, things as described by physics. Indeed, he admits that man is a physical system interacting like others with his physical environment, and that “inputs of energy of various frequencies are correlated with the same perceptual experience” (p. 95). Moreover, according to Dreyfus, the impossibility of digital simulation is not supposed to exclude the possibility of artificial organisms, if these are conceived of in terms of analogue systems, no doubt embodied in biological materials. But Dreyfus makes these concessions very lightly and clearly regards the levels of physical or neurophysiological explanation, which he is happy to concede, as something quite detached from the possibility, which he rules out, of the simulation of intelligent activity by digital means.

But Dreyfus does not see how much his offhand concessions to science may have cost him. For if we can gain enough physical knowledge to construct an artificial organism, and if construct it is what we do, as opposed to growing it in vitro from ready-made biological materials, then we understand it. And if we understand it so that we can construct it to behave in certain ways, then we understand the relation of its physical structure to its possibilities of behavior, in the sense at least that we understand what kinds of physical differences underlie what kinds of differences in behavior.

Moreover, there is no a priori reason why the possibility of yielding certain behavior should be restricted to structures in one given sort of material; it would rather prove, perhaps, to be indeed the structure of a system which provided the required potentiality. And if we got to that stage, then even if it were an analogue system that we had uncovered, it is unclear why in principle it should be impossible to model it in a digital machine. I see nothing in Dreyfus’s frequent complaints against those who have confused physical and psychological levels that blocks this road to the positions he supposes himself to have cut off. This is the direction in which content can be found for the notion that a machine physically very different from us might solve problems or do other things “as we do.” It also provides the sense—the only interesting sense for these investigations—to the question “How does man, or another animal, produce a given kind of behavior?”

Dreyfus says at one point (p. 144) of the question “How does man produce intelligent behavior?” that

…the notion of “producing” behavior…is already colored by the [Platonic] tradition. For a product must be produced in some way; and if it isn’t produced in some definite way, the only alternative seems to be that it is produced magically.

Well, one ambiguity, rather tediously, has to be removed: of course it isn’t necessary that a given sort of behavior must on every occasion be produced in the same definite way, nor would a machine have to produce it always in the same way. But if the thought is that a given piece of behavior can appear on a given occasion, and not be produced on that occasion in some definite way—then yes, indeed, it would be produced magically. That is the magic Dreyfus is calling us to from his counter-Platonic cavern. But however depressed we may sometimes be by the threats and promises of the machine men, we are not forced in there yet.
