
Is the Universe a Computer?

Wolfram even tackles the old conflict between belief in a deterministic view of nature and in the existence of free will. He suggests that free will is an illusion that arises from the apparent unpredictability of the complex behavior produced by those simple rules of biology that he imagines to govern the human organism. This is odd, because we certainly don’t attribute free will to other unpredictable complex phenomena like earthquakes or thunderstorms. Surely the impression of free will arises instead from our personal experience of actually deciding what to do, an experience that we are unwilling to suppose is enjoyed by earthquakes or thunderstorms. This is not to say that I have any enlightenment to offer about free will either, because I have never been able to understand the inconsistency that other people find between free will and a completely deterministic view of nature. Free will to me means only that we sometimes decide what we do, and we know that this is true by the same sort of mental experience that convinced Descartes that he existed, but we have no mental experience that tells us that our decisions are not inevitable consequences of past conditions and the laws of nature.

3.

Other scientists like myself study phenomena that may not be intrinsically very interesting, because they think that studying these phenomena will help them to understand the laws of nature, the rules that govern all phenomena. In such work, we tend to study the simplest possible phenomena, because it is in these cases that we can most easily calculate what our theories predict, and compare the results with experimental data to decide whether our theories are right or wrong. Wolfram makes it seem that physicists choose simple rather than complex phenomena to study because of long habit or mathematical flabbiness, but in seeking the laws of nature it is the essence of the art of science to avoid complexity.

My own work has been mostly on the theory of elementary particles, but I have never found these particles very interesting in themselves. An electron is featureless, and much like every other electron. Most of those of us who study elementary particles do so because we think that at this moment in the history of science it is the best way to discover the laws that govern all natural phenomena. Because of this special motivation, we don’t generally care whether we can calculate everything that happens to elementary particles in complicated situations, only whether we can calculate enough to check the validity of our theories. In collisions of elementary particles at moderate energy the energy of the collision goes into complex showers of particles. No one can predict the details of these showers, even where the underlying theory is known, and hardly anyone cares. At higher energies things are simpler: the energy goes into well-defined jets, each containing particles that travel in the same direction, in a way that can be calculated theoretically and compared with experiment to test our theories of elementary particles. It is the laws, not the phenomena, that interest us.

Unlike elementary particles, planets have historically seemed interesting for religious and astrological reasons. But it was the simplicity of planetary motions that allowed Newton to discover the laws of motion and gravitation. The planets move in empty space and to a good approximation under the influence of a single motionless body, the sun. Newton would never have discovered his laws by studying turbulence or snowflakes.

Wolfram himself is a lapsed elementary particle physicist, and I suppose he can’t resist trying to apply his experience with digital computer programs to the laws of nature. This has led him to the view (also considered in a 1981 article by Richard Feynman) that nature is discrete rather than continuous. He suggests that space consists of a network of isolated points, like cells in a cellular automaton, and that even time flows in discrete steps. Following an idea of Edward Fredkin, he concludes that the universe itself would then be an automaton, like a giant computer. It’s possible, but I can’t see any motivation for these speculations, except that this is the sort of system that Wolfram and others have become used to in their work on computers. So might a carpenter, looking at the moon, suppose that it is made of wood.

There is another reason for studying complex phenomena—not because the phenomena are interesting, which they sometimes are, or because studying complex phenomena is a good way to learn the laws of nature, which it isn’t, but because complexity itself is interesting. Maybe there is a theory of complexity waiting to be discovered, that says simple things about complex behavior in general, not just about the rule 110 cellular automaton or turbulent air flow or the human nervous system.

There are other examples of what I like to call free-floating theories, theories that are applicable in a wide (though not unlimited) variety of very different contexts. The theory of chaos, which has captured the public imagination, deals with systems, from the weather to the pebbles in the rings of Saturn, whose behavior exhibits an exquisite sensitivity to initial conditions. Thermodynamics, the science of heat, is a less trendy example. Concepts of thermodynamics like temperature and entropy are applicable to black holes as well as to steam boilers. A less familiar example is the theory of broken symmetry. Many very different substances, including superconductors, magnetized iron, and liquid helium, are governed by equations that have some symmetry, in the sense that the equations look the same from certain different points of view, and yet the substances exhibit phenomena that do not respect this symmetry.

There is a low-intensity culture war going on between scientists who specialize in free-floating theories of this sort and those (mostly particle physicists) who pursue the old reductionist dream of finding laws of nature that are not explained by anything else, but that lie at the roots of all chains of explanation. The conflict usually comes to public attention only when particle physicists are trying to get funding for a large new accelerator. Their opponents are exasperated when they hear talk about particle physicists searching for the fundamental laws of nature. They argue that the theories of heat or chaos or complexity or broken symmetry are equally fundamental, because the general principles of these theories do not depend on what kind of particles make up the systems to which they are applied. In return, particle physicists like me point out that, although these free-floating theories are interesting and important, they are not truly fundamental, because they may or may not apply to a given system; to justify applying one of these theories in a given context you have to be able to deduce the axioms of the theory in that context from the really fundamental laws of nature.

This debate is unfortunate, for both kinds of science are valuable, and they often have much to teach each other. My own work in elementary particle physics has benefited tremendously from the idea of broken symmetry, which originated in the study of the solid state but turned out to be the key both to understanding reactions involving particles called pi mesons at low energy and to the unification of some of the forces acting on elementary particles. The theory of complexity might also have lessons for elementary particle theory (or vice versa) but it is not likely to be fundamental in the same sense as elementary particle physics.

Lately particle physicists have been having trouble holding up their end of this debate. Progress toward a fundamental theory has been painfully slow for decades, largely because the great success of the “Standard Model” developed in the 1960s and 1970s has left us with fewer puzzles that could point to our next step. Scientists studying chaos and complexity also like to emphasize that their work is applicable to the rich variety of everyday life, where elementary particle physics has no direct relevance.

Scientists studying complexity are particularly exuberant these days. Some of them discover surprising similarities in the properties of very different complex phenomena, including stock market fluctuations, collapsing sand piles, and earthquakes. Often these phenomena are studied by simulating them with cellular automata, such as Conway’s Game of Life. This work is typically done in university physics departments and in the interdisciplinary Santa Fe Institute. Other scientists who call themselves complexity theorists work in university departments of computer science and mathematics and study the way that the number of steps in a computer calculation of the behavior of various systems increases with the size of the systems, often using automata like the Turing machine as specific examples of computers. Some of the systems they study, such as the World Wide Web, are quite complex. But all this work has not come together in a general theory of complexity. No one knows how to judge which complex systems share the properties of other systems, or how in general to characterize what kinds of complexity make it extremely difficult to calculate the behavior of some large systems and not others. The scientists who work on these two different types of problem don’t even seem to communicate very well with each other. Particle physicists like to say that the theory of complexity is the most exciting new thing in science in a generation, except that it has the one disadvantage of not existing.

It is here I think that Wolfram’s book may make a useful contribution. Wolfram and his co-workers have been able to show that numerous simple “class four” automata that produce complex behavior, like the rule 110 cellular automaton, are able to emulate each other. That is, by setting up a suitable input pattern of black and white cells in the rule 110 cellular automaton, one can produce the same complex pattern that would be produced by other class four automata, and vice versa. (In this emulation, blocks of cells in one automaton represent a single cell in the automaton being emulated.) What makes this particularly interesting is that one of the automata that can be emulated in this way is the universal Turing machine.3

The Turing machine is the most important automaton in the history of computer science, and the forerunner of today’s digital computers. It was invented in 1936 by Alan Turing, who in World War II became one of Britain’s ace cipher-breakers and was later the hero of Hugh Whitemore’s play Breaking the Code. Turing’s purpose was to answer a classic question of mathematical logic known as the Decision Problem: given some deductive mathematical system like arithmetic or Euclidean geometry or symbolic logic, is there any logical method that, when applied mechanically to any statement of that system, is guaranteed to decide whether that statement can be proved by following the rules of that system?4

To answer this question, the Turing machine was designed to capture the essence of mechanical logical methods. Just as a person going through a mathematical proof works with a string of symbols, focusing on just one at a time, the Turing machine works on a one-dimensional sequence of cells, each containing a symbol taken from some finite list, with only one “active” cell that can be read and possibly changed at each step. Also, to correspond to the fact that a person working out a proof would keep some memory of previous steps, Turing gave his machine a memory register, which can be in any one of a finite number of “conditions.”

Each type of Turing machine obeys a fixed rule that tells it at each step how to change the symbol in the active cell, how to change the condition of the memory register, and whether to move the active cell one step to the left or right, according to the symbol in the active cell and the condition of the memory register. It makes no difference what symbols we choose to use or what conditions are possible for the memory register; their significance arises only from the rules of the machine. The problem to be solved and the data to be used are fed into the machine as an initial string of symbols, and the answer appears as the string of symbols found when the memory register reaches a condition that tells the machine to stop.

Turing never actually built such a machine (though he did go on to build some special-purpose computers), but if you like you can think of the cells in a Turing machine as forming a paper tape, with the symbols just a sequence of colored dots on the tape, read and written by a scanning device that moves up or down the tape from one active cell to another. In this example the memory register is a simple mechanical pointer which can take any of two or more positions. The decision how to change the color of the active cell and the position of the memory register and how to move the tape is made according to the color of the cell being read and the position of the pointer, following rules that are hard-wired into the machine. A specific Turing machine is entirely characterized by the number of possible colors on the cells of the tape, the number of possible positions of the pointer, and by the rules wired into the machine.
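
To make the mechanism concrete, here is a minimal sketch of such a machine in Python. The symbols, the rule table, and the two-condition "memory register" are invented purely for illustration; they do not correspond to any particular machine of Turing's or Wolfram's.

```python
# A minimal sketch of the tape-and-pointer picture described above.
# The symbols (0 and 1) and the rule table are illustrative inventions,
# not any machine discussed by Turing or Wolfram. The "condition" plays
# the role of the memory register in the text.

def run_turing_machine(rules, tape, condition, halt_condition, max_steps=1000):
    """rules[(condition, symbol)] = (new_symbol, new_condition, move),
    where move is -1 (left) or +1 (right). Returns the final tape."""
    cells = dict(enumerate(tape))      # sparse tape; blank cells read as 0
    pos = 0
    for _ in range(max_steps):
        if condition == halt_condition:
            break
        symbol = cells.get(pos, 0)
        new_symbol, condition, move = rules[(condition, symbol)]
        cells[pos] = new_symbol
        pos += move
    return [cells[i] for i in sorted(cells)]

# Example: scan right, turning blank (0) cells black (1), and halt on the
# first cell that is already black.
rules = {
    ("scan", 0): (1, "scan", +1),
    ("scan", 1): (1, "halt", 0),
}
print(run_turing_machine(rules, [0, 0, 0, 1], "scan", "halt"))   # -> [1, 1, 1, 1]
```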

The important thing about Turing machines is that some of them are universal. Turing was able to prove that any of these universal Turing machines could calculate or prove anything that could be calculated or proved by any other Turing machine. For instance, at least one of the huge number (96 to the power 48) of the possible Turing machines that use two colors and have a memory register with twenty-four positions is universal. Further, from the way that Turing machines were designed to imitate the way humans mechanically do calculations, Turing argued that universal Turing machines could calculate or prove anything that could be calculated or proved by any purely mechanical procedure. This is often called the Church-Turing thesis, because at about the same time the Princeton mathematician Alonzo Church reached similar conclusions about a more abstract but equivalent mathematical method of his own. Incidentally, Turing and Church found that the answer to the Decision Problem in most mathematical or logical systems is “no”: there is no mechanical procedure that is guaranteed to decide whether any given statement can be proved by following the rules of that system.
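
One natural way to arrive at that figure, assuming every entry of the rule table independently specifies a new color, a new register condition, and a direction of motion, is the following count:

```latex
% Counting Turing machines with two colors and twenty-four register conditions
\underbrace{2}_{\text{new color}} \times \underbrace{24}_{\text{new condition}} \times \underbrace{2}_{\text{move left or right}} = 96
\quad \text{choices per table entry},
\qquad
96^{\,24 \times 2} = 96^{48}
\quad \text{possible machines}.
```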

Since universal Turing machines can be emulated by the rule 110 cellular automaton, it follows that this cellular automaton, and all the other automata that can emulate it, are also universal—they can do any computation that can be done by any computer. The program for the calculation and the data to be used would be fed into a rule 110 cellular automaton as a pattern of black cells in the top row, and the answer would appear as a pattern on a lower row. Wolfram says that all these automata are computationally equivalent, but that is only true in a limited sense. The simpler the design of a universal computer, the more steps it takes to emulate each single step of a practical computer. This is why Dell and Compaq do not sell Turing machines or rule 110 cellular automata.
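
As a concrete illustration of what "running" such an automaton means, here is a minimal sketch in Python of the rule 110 update, starting from a single black cell. The row width, the number of steps, and the treatment of cells beyond the edges as white are arbitrary choices made for the sketch.

```python
# A minimal sketch of the rule 110 cellular automaton: each row is computed
# from the one above it, one cell at a time, using the update table that the
# rule number 110 encodes in binary.

RULE = 110

def step(row, rule=RULE):
    """Each cell's new color depends on its left neighbor, itself, and its
    right neighbor; cells beyond the edges are treated as white (0)."""
    padded = [0] + row + [0]
    return [(rule >> ((padded[i - 1] << 2) | (padded[i] << 1) | padded[i + 1])) & 1
            for i in range(1, len(padded) - 1)]

width, steps = 32, 16
row = [0] * (width - 1) + [1]      # a single black cell at the right edge
for _ in range(steps):
    print("".join("#" if cell else "." for cell in row))
    row = step(row)
```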

Wolfram goes on to make a far-reaching conjecture, that almost all automata of any sort that produce complex structures can be emulated by any one of them, so they are all equivalent in Wolfram’s sense, and they are all universal. This doesn’t mean that these automata are computationally equivalent (even in Wolfram’s sense) to systems involving quantities that vary continuously. Only if Wolfram were right that neither space nor time nor anything else is truly continuous (which is a separate issue) would the Turing machine or the rule 110 cellular automaton be computationally equivalent to an analog computer or a quantum computer or a brain or the universe. But even without this far-reaching (and far-out) assumption, Wolfram’s conjecture about the computational equivalence of automata would at least provide a starting point for a theory of any sort of complexity that can be produced by any kind of automaton.

The trouble with Wolfram’s conjecture is not only that it has not been proved—a deeper trouble is that it has not even been stated in a form that could be proved. What does Wolfram mean by complex? If his conjecture is not to be a tautology, then we must have some definition of complex behavior independent of the notion of universality. The pattern produced by the rule 110 cellular automaton certainly looks complex, but what criterion for complexity can we use that would tell us that it is complex enough for Wolfram’s conjecture to apply?

There is a well-known parallel problem in defining randomness. The most common precise definition of the randomness of a string of digits or of a sequence of black and white cells on a tape is that it is random if there is no way of describing it with a string of shorter length. The trouble is that according to this definition the string of digits in a number like the square root of two would not qualify as random, because it can be described very simply—it is the square root of two—even though it surely looks random. (Mathematica gives the first thirty digits as 1.41421356237309504880168872420.) In the same way, it wouldn’t do to define the output (shown in Figure B) of a cellular automaton like rule 110 with a single black cell in the top row as complex only if it can’t be described in simple terms, because it can be described in simple terms—it is the output of rule 110 with a single black cell in the top row. There are other definitions of randomness, such as the absence of correlations: the digits in the square root of two can be said to be random because, as far as is known, being given one digit at an unidentified decimal place tells you nothing at all about what the next digit is likely to be. Wolfram has not even begun to formulate a similar definition of complexity.
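
For what it is worth, the thirty digits quoted above can be checked without Mathematica; a sketch using Python's standard decimal module, truncating (rather than rounding) at thirty digits:

```python
# Reproduce the first thirty digits of the square root of two quoted in the
# text, using Python's standard-library decimal module instead of Mathematica.
from decimal import Decimal, getcontext

getcontext().prec = 40     # compute with a few digits to spare
digits = str(Decimal(2).sqrt())
print(digits[:31])         # "1." plus 29 decimal digits: 1.41421356237309504880168872420
```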

In fact, as he admits, for Wolfram the real test of the complexity of a pattern is that it should look complex. Much of his discussion of complexity is anecdotal, relying on pictures of the patterns produced by specific automata that he has known. In this, Wolfram is allying himself with one side in the ancient struggle between what (with much oversimplification) one might call cultures of the image and cultures of the word. In our own time it has surfaced in the competition between television and newspapers and between graphical user interfaces and command line interfaces in computer operating systems.

The culture of images has had the better of it lately. For a while the culture of the word had seemed to have scored a victory with the introduction of sound into motion pictures. In Sunset Boulevard, Norma Desmond recalls that in silent films, “We didn’t need dialogue. We had faces.” But now movies can go on for long stretches with no words, only the thunk of cars running into each other and the sizzle of light sabers. The ascendancy of the culture of the image has been abetted by computers and the study of complexity, which have made possible the simulation of complex visual images.

I am an unreconstructed believer in the importance of the word, or its mathematical analogue, the equation. After looking at hundreds of Wolfram’s pictures, I felt like the coal miner in one of the comic sketches in Beyond the Fringe, who finds the conversation down in the mines unsatisfying: “It’s always just ‘Hallo, ’ere’s a lump of coal.’”

Wolfram’s classification of the patterns produced by cellular automata dates from the early 1980s, and the discovery that the rule 110 elementary cellular automaton is a universal computer was made in the early 1990s. Since then, none of this work has had much of an impact on the research of other scientists, aside from Wolfram’s employees. The strongest reaction I have seen by scientists to this new book has been outrage at Wolfram’s exaggeration of the importance of his own contributions to the study of complexity. Wolfram’s survey of the complex patterns produced by automata may yet attract the attention of other scientists if it leads to some clear and simple mathematical statement about complexity. I doubt if even Wolfram cares what picture is produced by the rule 110 cellular automaton after a billion steps. But if Wolfram can give a precise statement of his conjecture about the computational equivalence of almost all automata that produce complex patterns and prove that it is true, then he will have found a simple common feature of complexity, which would be of real interest. In the study of anything outside human affairs, including the study of complexity, it is only simplicity that can be interesting.

  3. Wolfram says that the main elements of the proof were found in 1994 by one of his assistants, Matthew Cook, and he gives an unreadable updated version in this book, along with an admission that a few errors may still remain. I gather that the proof has not been published in a refereed journal. An article in Nature by Jim Giles titled “What Kind of Science Is This?” (May 16, 2002) reports that when Cook left his job with Wolfram in 1998, he gave a talk on his work at the Santa Fe Institute, but the talk did not appear in the conference proceedings; Wolfram took legal action against Cook, arguing that Cook was in breach of agreements that prevented him from publishing until after the publication of Wolfram’s book.

  4. Turing took pains to point out that this issue had not been settled by the famous 1931 theorem of Kurt Gödel, which states that there are statements in the general system of mathematics presented in the Principia Mathematica of Bertrand Russell and Alfred North Whitehead that can be neither proved nor disproved by following the rules of that system.
