The front cover of Keith Devlin’s Penguin paperback is a dazzling computer image reproduced from a Springer Verlag book, The Beauty of Fractals. If it looks familiar it’s because you’ve seen it on the front cover of James Gleick’s recent best seller, Chaos, published by Viking. Two books with identical covers are rare enough, but to make things worse, Viking is now owned by Penguin. I take this blunder to reflect the chaos that develops when big publishing houses merge.

A research mathematician at the University of Lancaster, Devlin is best known in England for his column on mathematics in The Guardian, and for his television appearances. This new book (his ninth) is an effort to introduce general readers to some of the more exciting recent breakthroughs in what he calls “mankind’s most impenetrable subject.” Devlin’s choice of material is excellent, and he is to be praised for the clarity and accuracy with which he presents it.

The book’s first topic is prime numbers: integers greater than 1 that have no divisors except themselves and 1. The largest known prime—it has 65,050 digits—is obtained by raising 2 to the power of 216,091, then subtracting 1. Primes of this form are called Mersenne primes. Only thirty-one are known, and no one has yet proved whether there are infinitely many or only a finite number.
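The form Devlin describes is easy to play with. Here is a minimal Python sketch (my illustration, not Devlin's) that checks small Mersenne exponents with the standard Lucas–Lehmer test and computes the digit count of the record prime without ever building it:

```python
import math

def lucas_lehmer(p):
    """Standard Lucas-Lehmer test: for odd prime p, is 2**p - 1 prime?"""
    m = 2**p - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# 2^3-1 = 7, 2^5-1 = 31, 2^7-1 = 127, 2^13-1 = 8191 are prime;
# 2^11-1 = 2047 = 23 * 89 is not.
print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])

# Digit count of the record prime 2^216,091 - 1:
print(int(216091 * math.log10(2)) + 1)   # 65050
```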

For the past decade there has been an intensive search for faster ways to factor composites—numbers that are not prime. One reason, Devlin tells us, is that improved computer methods of factoring could destroy a widely used cipher known as the RSA system after the initials of its three MIT inventors. It is called a public-key cryptosystem because the method of encoding secret messages can be published and used by anyone. Decoding is something else. It is possible only if one knows the two factors of a gigantic composite, obtained by multiplying two primes, which serves as the cipher’s secret key. When the RSA system was first proposed twelve years ago, prime numbers of eighty digits were recommended because there were then no known ways to factor the product of two such primes in “reasonable” computer time—say ten years. The RSA system is still secure, but because of recent improvements in factoring speed, primes of one hundred digits are now used to make two-hundred-digit keys.
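The mechanism is simple enough to show in miniature. Below is a toy sketch of the RSA idea with two tiny primes (real keys use primes of about a hundred digits, as noted above); the numbers are illustrative only:

```python
p, q = 61, 53                 # the two secret primes
n = p * q                     # public modulus everyone may see
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public encryption exponent
d = pow(e, -1, phi)           # private decryption exponent (Python 3.8+)

message = 65
cipher = pow(message, e, n)   # anyone can encrypt with (n, e)
plain = pow(cipher, d, n)     # only the holder of d can decrypt
print(plain == message)       # True
```

Breaking the cipher amounts to recovering p and q from n alone—trivial for n = 3233, but infeasible when n has two hundred digits.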

Here are quick rundowns on Devlin’s other topics. A chapter on infinite sets includes Kurt Gödel’s famous proof that even in systems as simple as arithmetic there are statements that can be neither proved nor disproved within the system. A discussion of different kinds of numbers centers on an amazing property of 163. A chapter on fractals—structures that always look the same, either exactly or in a statistical sense, as you endlessly enlarge portions of them—introduces the famous Mandelbrot set. It is named after IBM’s Benoit Mandelbrot, who coined the term “fractal” and pioneered the study of these marvelous patterns.

In his book on chaos theory Gleick calls the Mandelbrot set the most mysterious object in geometry. When a computer zooms in on its infinite structure, progressively magnifying regions, there is an endless sequence of surprises. A periodical devoted entirely to investigations of the M-set, as it is called, even runs occasional science fiction based on the set’s unfathomable depths. How mathematicians who pretend that mathematical structure is not “out there,” independent of human minds, can view successive enlargements of the M-set and preserve their cultural solipsism is hard to comprehend.
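The zooming computers perform rests on a few lines of arithmetic. Here is the standard escape-time test for M-set membership (my sketch, not Devlin’s or Gleick’s code): iterate z → z² + c and watch whether the orbit stays bounded:

```python
def in_mandelbrot(c, max_iter=100):
    """Escape-time test: does the orbit of 0 under z -> z*z + c stay bounded?"""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:        # once |z| > 2 the orbit must escape to infinity
            return False
    return True               # presumed bounded after max_iter steps

print(in_mandelbrot(0))       # True  (orbit stays at 0)
print(in_mandelbrot(-1))      # True  (orbit cycles 0, -1, 0, -1, ...)
print(in_mandelbrot(1))       # False (0, 1, 2, 5, 26, ... escapes)
```

Coloring each pixel by how fast its orbit escapes produces the images on both book covers.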

A chapter on simple groups (abstract algebraic structures based on such symmetry operations as ways of rotating a cube so that it returns to its original state) introduces the concept of symmetry. The concept is fundamental in modern physics, especially in particle theory and the new TOEs (Theories of Everything). Not until 1980 were all the finite simple groups (finite here means that a group has a finite number of symmetry operations) identified. Proof that they are now all classified followed the discovery of the Monster, a group whose number of symmetry operations is 8 followed by fifty-three digits. The proof runs to about 15,000 pages in some five hundred papers by more than one hundred mathematicians. Devlin calls it the longest proof in mathematical history.
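The cube example can be made concrete. The sketch below (my illustration, not Devlin's) generates every rotation of a cube from just two quarter-turns, acting on its six faces, and confirms the group has 24 elements:

```python
# Faces: 0=up, 1=down, 2=front, 3=back, 4=left, 5=right.
# A permutation is a tuple t where t[i] is the image of face i.
TURN_VERTICAL = (0, 1, 4, 5, 3, 2)    # quarter-turn about the up-down axis
TURN_SIDEWAYS = (2, 3, 1, 0, 4, 5)    # quarter-turn about the left-right axis

def compose(p, q):
    """Apply q, then p (composition of permutations)."""
    return tuple(p[q[i]] for i in range(6))

group = {(0, 1, 2, 3, 4, 5)}          # start with the identity
frontier = list(group)
while frontier:                        # close the set under both generators
    g = frontier.pop()
    for gen in (TURN_VERTICAL, TURN_SIDEWAYS):
        h = compose(gen, g)
        if h not in group:
            group.add(h)
            frontier.append(h)

print(len(group))                      # the cube has 24 rotations
```

The Monster is built the same way in principle—closure under generators—but its 8-followed-by-fifty-three-digits elements cannot be listed by any computer.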

A discussion of a famous algebraic problem posed by David Hilbert in 1900, and solved in 1970, leads into Diophantine analysis—finding integral solutions to equations. Devlin gives an up-to-date account of efforts to crack the greatest of all unsolved Diophantine problems—proving Fermat’s conjecture that aⁿ + bⁿ = cⁿ has no solutions when n is higher than 2. (If n is 2 there is an infinity of solutions, starting with 3² + 4² = 5².) Lest you be tempted to search for a counterexample, know that if one exists n must be greater than 125,000. The media gave widespread coverage early in 1988 to a supposed proof by a Japanese mathematician, but like hundreds of earlier claims it turned out to be flawed. The most recent development on Fermat’s conjecture is a proof that for every n greater than 2 there are at most a finite number of solutions.¹
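The contrast between the two cases is easy to see by brute force (my illustration, not Devlin's): for n = 2 small solutions abound, while a search for n = 3 turns up nothing:

```python
def solutions(n, limit):
    """All (a, b, c) with a <= b < c <= limit and a**n + b**n == c**n."""
    return [(a, b, c)
            for a in range(1, limit + 1)
            for b in range(a, limit + 1)
            for c in range(b + 1, limit + 1)
            if a**n + b**n == c**n]

print(solutions(2, 15))   # [(3, 4, 5), (5, 12, 13), (6, 8, 10), (9, 12, 15)]
print(solutions(3, 50))   # []  -- no small counterexample, as Fermat claimed
```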

“Hard Problems About Complex Numbers,” the book’s toughest chapter, discusses three more conjectures: the Riemann hypothesis (unsolved), the Mertens conjecture (proved false in 1983), and the Bieberbach conjecture (proved true in 1984). The solving in 1976 of the notorious four-color theorem—the conjecture that all maps can be colored with four colors—is the topic of another chapter. The proof hides in a computer printout so massive that only computers can check it, and to this day some mathematicians are not convinced it is flawless.
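The Mertens conjecture is a cautionary tale about numerical evidence: it says |M(n)| < √n, where M(n) sums the Möbius function, and it holds for every n anyone has ever computed—yet it is false, the first violation being known only to be astronomically large. A small check (standard number theory, my sketch rather than the book's notation):

```python
def moebius_sieve(limit):
    """mu[k] for 0 <= k <= limit, by a simple sieve."""
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for k in range(p, limit + 1, p):
                if k > p:
                    is_prime[k] = False
                mu[k] = -mu[k]          # one more distinct prime factor
            for k in range(p * p, limit + 1, p * p):
                mu[k] = 0               # squared factor: mu vanishes
    return mu

mu = moebius_sieve(10_000)
mertens = 0
for n in range(1, 10_001):
    mertens += mu[n]
    if n > 1:
        assert abs(mertens) < n**0.5    # the conjecture survives this range
print("no violation below 10,000")
```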

The latest proof of this sort was announced last December by Clement Lam and his associates at Concordia University, Montreal. Does a finite projective plane of order 10 exist? This longstanding combinatorial question would take too much space to explain, but the point is that a Cray supercomputer, after running for several thousand hours, answered no. Can we be sure this answer is correct? Some philosophers contend that proofs so lengthy and complex that no human mind can “see” their validity are turning mathematics more and more into an empirical science where results are only probable to varying degrees.

Knot theory, a whimsical branch of topology (the study of properties that are unchanged when an object is continuously deformed), is currently enjoying a revival. It even has applications in organic chemistry, where protein molecules can take knotted forms. Devlin’s chapter on knot theory is the most entertaining in his book.

A final chapter on computer algorithms (procedures) discusses a class of closely related problems known as NP-complete, for which no reasonable-time algorithms have been found and probably none exist. (There is a good chapter on NP-complete tasks in Poundstone’s book.) The best-known example is the task of finding the shortest route for a salesman who must visit each of n cities just once and return to where he started. Karmarkar’s algorithm, a recent breakthrough in speeding the computer solution of linear-programming problems, is explained. Because the algorithm is based on the structures of solids in higher dimensions, Devlin sees it as “an exemplary blend of the pure and abstract with the world we live in and an excellent place to bring to an end a survey of mathematics’ New Golden Age.” Modern computers have given mathematicians a tool of incredible power. It is the book’s recurrent theme that these mindless machines are not only solving old problems that could never be solved without them; they are also opening up vast new regions, of great beauty and utility, that mathematicians are just starting to explore.
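The salesman's predicament can be seen in a brute-force sketch (my illustration, with a hypothetical distance table): trying every tour takes (n − 1)! steps, the exponential blowup that makes the problem intractable as n grows:

```python
from itertools import permutations

# Hypothetical symmetric distances between four cities, numbered 0..3.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]

def shortest_tour(dist):
    """Exhaustive search: try all (n-1)! tours starting from city 0."""
    n = len(dist)
    best_len, best_tour = None, None
    for rest in permutations(range(1, n)):
        tour = (0,) + rest
        length = sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))
        if best_len is None or length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

print(shortest_tour(dist))
```

For four cities this is six tours; for thirty it is more tours than atoms in the observable universe, which is why faster exact methods matter so much.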

William Poundstone’s first book, Big Secrets, annoyed professional magicians because it exposed the workings of some of their most cherished stage illusions. He followed it with Bigger Secrets and The Recursive Universe, a book centered on the philosophical implications of a computer game called Life. Labyrinths of Reason, his fourth and best book, is a lively, freewheeling discussion of the most bewildering and most profound of many paradoxes still hotly debated in philosophical journals.

“Paradox” has a broader meaning than “fallacy.” You can seemingly prove, as Poundstone does, that 1=2, but not without making a subtle mathematical error. A paradox is any line of reasoning that strongly contradicts common sense. Fallacies are trivial. Paradoxes can be far from trivial.

Poundstone’s first example is a paradox put forth by Carl Hempel, a distinguished philosopher of science, to show how little we understand what is meant by “confirming evidence.” Everyone agrees that each time we observe a black crow it confirms the conjecture “All crows are black.” A logically equivalent way to say the same thing is “No nonblack object is a crow.” We see a red apple. Surely it confirms “No nonblack object is a crow.” If so, it must also confirm the equivalent “All crows are black.” I once summarized this seemingly impeccable reasoning with a parody of Gelett Burgess:

I never saw a purple cow,
But if I ever see one,
Will the probability crows are black
Have a better chance to be 1?

Something clearly is wrong, but just what? Poundstone skillfully capsules the various positions philosophers have taken in recent years on this mystifying question.

Sometimes what seems to be a confirming instance turns out to be the opposite. Poundstone cites Paul Berent’s amusing paradox of the 99-foot man. Consider the hypothesis “All men are less than 100 feet tall.” Each observation of a man less than 100 feet tall surely confirms the conjecture. Suppose you encounter a man 99 feet tall. Strictly speaking, he confirms the conjecture, but in a stronger way he disconfirms it, because you now have good reason to believe that men can grow close to 100 feet. To get around the paradox one must consider Rudolf Carnap’s important proviso: degrees of confirmation must rest on all the evidence that bears on a conjecture, not on the conjecture alone.

Other chapters in Poundstone’s racy volume cover such mindbenders as Nelson Goodman’s “grue-bleen” paradox of induction; Henri Poincaré’s claim that if everything instantly doubled in size overnight we would be unable to detect the change (or would we?); infinity machines, such as a lamp that turns on and off in time intervals given by the halving series of one minute, one-half minute, one-quarter minute, and so on (at the end of two minutes, the sum of the series, is the lamp on or off?); John Searle’s much discussed “Chinese Room,” designed to attack the notion that computers “understand” their instructions; and Hilary Putnam’s “Twin Earth,” intended to show that the meanings of words are linked to the external world. “Cut the pie any way you like,” Putnam is quoted as saying, “‘meanings’ just ain’t in the head!” If not, Poundstone asks, where are they?

The paradox of the unexpected hanging, the topic of another chapter, continues to generate bizarre debates. A judge tells a prisoner, “You will be hanged at noon on a day next week, but there is no way you can know in advance what day it will be.” The prisoner reasons: “They can’t hang me Saturday, the last day of the week, because on Friday afternoon I’ll be able to deduce that my hanging will be Saturday. So Saturday cannot be the day. Nor can Friday. On Thursday afternoon, knowing that Saturday has been eliminated, I will know Friday is my day of doom. This again contradicts what the judge told me, so Friday has to be eliminated also.”

Of course such reasoning will extend backward through the week to Sunday. Does this prove that, assuming the judge spoke truly, the prisoner cannot be hanged? No, because on, say, Tuesday morning the judge orders the hanging at noon. This comes as a total surprise to the prisoner. What was wrong with his logic?2

Newcomb’s paradox, the topic of the book’s last chapter, has produced more confusing argument among philosophers than any other paradox since physicist William Newcomb thought of it in 1960. Ten years later the Harvard philosopher Robert Nozick introduced it to his colleagues in a book of essays honoring Hempel. When I gave it still wider currency in my Scientific American column, it brought more than a thousand letters. I passed them over to Nozick, who, after summarizing them in a guest column, regretfully concluded that the paradox remains inscrutable. (Both columns are reprinted in my Knotted Doughnuts and Other Mathematical Amusements.)

Imagine, said Newcomb, that in front of you are two closed boxes, A and B. A is transparent. Inside you see a thousand-dollar bill. B is opaque. It is either empty or it holds a million dollars. You must make an irrevocable choice between taking both boxes or taking only B. In both cases you keep what is inside.

On the day before the test, a Superbeing from another planet (God if you like) has scanned your brain to determine how you will decide. If he thinks you will take only B he puts a million in the box. If he thinks you will greedily take both boxes, he puts nothing in B. He then leaves the scene. You know exactly what the conditions are. Moreover, you have seen this test performed a hundred times before with others, and in every case the Superbeing predicted accurately. The greedy who took both boxes got only the visible thousand dollars. Those who took just the opaque box got a million.

What is your best decision? From a pragmatic point of view you are impressed by the Superbeing’s past accuracy. (Incidentally, the paradox holds even if the accuracy is only slightly better than 50 percent.) Because all of science, as well as your daily behavior, rests on induction from past experience, should you not take just B and be almost certain of joining the millionaire’s club? About half the philosophers who have written about the paradox see this as “obviously” the best way to maximize your take.

Unfortunately, an equal number find it just as obvious that you should make a “logical” decision and take both boxes. Assume B is empty. If you take only B you get nothing, but if you take both boxes you get at least a thousand. Assume B is not empty. If you take it alone you get a million, but if you take both boxes you get a million plus a thousand. In each case, taking both boxes will increase your gain by a thousand. Because nothing can change the fixed state of the boxes, why not take everything there?
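The pragmatic argument can be put in numbers (my framing, not Poundstone's). With predictor accuracy p, one-boxing pays p × $1,000,000 on average, while two-boxing pays (1 − p) × $1,000,000 plus the guaranteed $1,000; the break-even point is p = 0.5005, which is why accuracy only slightly better than 50 percent suffices:

```python
def expected(p):
    """Expected payoffs under predictor accuracy p, in dollars."""
    one_box = p * 1_000_000                 # paid only if correctly predicted
    two_box = (1 - p) * 1_000_000 + 1_000   # million left only by a mistake
    return one_box, two_box

for p in (0.5, 0.501, 0.9):
    one, two = expected(p)
    print(p, "one-box" if one > two else "two-box")
```

The two-boxers' reply, of course, is that expected value is beside the point once the boxes are already fixed.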

The paradox leads into deep questions about decision theory—questions that bear on decisions in economics and politics, perhaps even on metaphysical questions about free will and determinism. As Nozick insists, the paradox is not resolved by giving a vigorous defense of either strategy. You must show why the other strategy is inferior.

Scattered throughout Poundstone’s book are many lesser paradoxes and puzzles. Here is the most amusing:

A man gets an unsigned letter telling him to go to the local graveyard at midnight. He does not generally pay attention to such things, but complies out of curiosity. It is a deathly still night, lighted by a thin crescent moon. The man stations himself in front of his family’s ancestral crypt. He is about to leave when he hears scraping footsteps. He yells out, but no one answers. The next morning, the caretaker finds the man dead in front of the crypt, a hideous grin on his face.

“Did the man vote for Teddy Roosevelt in the 1904 US presidential election?”

Answer: A crescent moon is visible at midnight only in regions near the poles. The man, therefore, must have lived in northern Alaska. Because Alaska was still a territory, not a state, in 1904, its residents could not vote in presidential elections, so he could not have voted for Teddy Roosevelt.

This Issue

March 16, 1989