When people talk about “the infinite,” they usually mean the infinitely great: inconceivable vastness, world without end, boundless power, the Absolute. There is, however, another kind of infinity that is quite different from these, though just as marvelous in its own way. That is the infinitely small, or the infinitesimal.
In everyday parlance, “infinitesimal” is loosely used to refer to things that are extremely tiny by human standards, too small to be worth measuring. It tends to be a term of contempt. In his biography of Frederick the Great, Carlyle tells us that when Leibniz offered to explain the infinitely small to Queen Sophia Charlotte of Prussia, she replied that on that subject she needed no instruction: the behavior of her courtiers made her all too familiar with it. (About the only nonpejorative use of “infinitesimal” I have come across occurs in Truman Capote’s unfinished novel Answered Prayers, when the narrator is talking about the exquisite vegetables served at the tables of the really rich: “The greenest petits pois, infinitesimal carrots…” Then there are the abundant malapropisms. Some years back, The New Yorker reprinted a bit from an interview with a Hollywood starlet in which she was describing how she took advantage of filming delays on the set to balance her checkbook, catch up on her mail, and so forth. “If you really organize your time,” she observed, “it’s almost infinitesimal what you can accomplish.” To which The New Yorker ruefully added: “We know.”)
Properly speaking, as all the books under review agree, the infinitesimal is every bit as remote from us as the infinitely great is. Pascal, in the seventy-second of his Pensées, pictured nature’s “double infinity” as a pair of abysses between which finite man is poised. The infinitely great lies without, at the circumference of all things; the infinitesimal lies within, at the center of all things. These two extremes “touch and join by going in opposite directions, and they meet in God and God alone.” The infinitely small is even more difficult for us to comprehend than the infinitely great, Pascal observed: “Philosophers have much oftener claimed to have reached it, [but] they have all stumbled.”
Nor, one might add, has the poetical imagination been much help. There have been many attempts in literature to envisage the infinitely great: Father Arnall’s sermon on eternity in A Portrait of the Artist as a Young Man, Borges’s infinite “Library of Babel.” For the infinitesimal, though, there is only vague talk from Blake about an infinity you can hold “in the palm of your hand,” or, perhaps more helpful, these lines from Swift: “So, naturalists observe, a flea/Hath smaller fleas that on him prey;/And these have smaller fleas to bite ’em,/And so proceed ad infinitum.”
From the time it was conceived, the idea of the infinitely small has been regarded with deep misgiving, even more so than that of the infinitely great. How can something be smaller than any given finite thing and not be simply nothing at all? Aristotle tried to ban the notion of the infinitesimal on the grounds that it was an absurdity. David Hume declared it to be more shocking to common sense than any priestly dogma. Bertrand Russell scouted it as “unnecessary, erroneous, and self-contradictory.”
Yet for all the bashing it has endured, the infinitesimal has proved itself to be the most powerful device ever deployed in the discovery of physical truth, the key to the scientific revolution that ushered in the Enlightenment. And, in one of the more bizarre twists in the history of ideas, the infinitesimal—after being stuffed into the oubliette seemingly for good at the end of the nineteenth century—was decisively rehabilitated in the 1960s. It now stands as the epitome of a philosophical conundrum fully resolved. Only one question about it remains open: Is it real?
Ironically, it was to save the natural world from unreality that the infinitesimal was invoked in the first place. The idea seems to have appeared in Greek thought sometime in the fifth century BCE, surfacing in the great metaphysical debate over the nature of being. On one side of this debate stood the monists—Parmenides and his followers—who argued that being was indivisible and that all change was illusion. On the other stood the pluralists—including Democritus and his fellow Atomists, as well as the Pythagoreans—who upheld the genuineness of change, which they understood as a rearrangement of the parts of reality.
But when you start parsing reality, breaking up the One into the Many, where do you stop? Democritus held that matter could be analyzed into tiny units—“atoms”—that, though finite in size, could not be further cut up. But space, the theater of change, was another question. There seemed to be no reason why the process of dividing it up into smaller and smaller bits could not be carried on forever. Therefore its ultimate parts must be smaller than any finite size.
This conclusion got the pluralists into a terrible bind, thanks to Parmenides’ cleverest disciple, Zeno of Elea. Irritated (according to Plato) by those who ridiculed his master, Zeno composed no fewer than forty dialectical proofs of the oneness and changelessness of reality. The most famous of these are his four paradoxes of motion, two of which—the “dichotomy” and “Achilles and the Tortoise”—attack the infinite divisibility of space. Take the dichotomy paradox. In order to complete any journey, you must first travel half the distance. But before you can do that, you must travel a quarter of the distance, and before that an eighth, and so on. In other words, you must complete an infinite number of subjourneys in reverse order. So you can never get started.
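The modern reply to the dichotomy is that Zeno's infinitely many subjourneys, though endless in number, add up to a finite total: the series 1/2 + 1/4 + 1/8 + … converges to 1. A minimal sketch (the function name `partial_sum` is mine, for illustration):

```python
# Partial sums of the dichotomy's subjourneys: 1/2 + 1/4 + 1/8 + ...
# Each term covers half the remaining distance; the partial sums creep
# "as near as we please" to 1 without ever exceeding it.
def partial_sum(n):
    """Sum of the first n terms of the geometric series 1/2 + 1/4 + ..."""
    return sum(0.5 ** k for k in range(1, n + 1))

for n in (1, 4, 10, 30):
    print(n, partial_sum(n))
```

The gap remaining after n terms is exactly (1/2)ⁿ, which can be made smaller than any finite tolerance, though it vanishes at no finite stage.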
A story has it that when Zeno told this paradox to Diogenes the Cynic, Diogenes “refuted” it by getting up and walking away. But Zeno’s paradoxes are far from trivial. Bertrand Russell called them “immeasurably subtle and profound,” and even today there is doubt among philosophers whether they have been completely resolved. Aristotle dismissed them as fallacies, but he was unable to disprove them; instead he tried to block their conclusions by denying that there could be any actual infinity in nature. You could divide up space as finely as you pleased, Aristotle said, but you could never reduce it to an infinite number of parts.
Aristotle’s abhorrence of the actual infinite came to pervade Greek thought, and a century later Euclid’s Elements barred infinitesimal reasoning from geometry. This was disastrous for Greek science. The idea of the infinitely small had offered to bridge the conceptual gap between number and form, between the static and the dynamic. Consider the problem of finding the area of a circle. It is a straightforward matter to determine the area of a figure bounded by straight lines, such as a square or triangle. But how do you proceed when the boundary of the figure is curvilinear, as with a circle? The clever thing to do is to pretend the circle is a polygon made up of infinitely many straight line segments, each of infinitesimal length. It was by approaching the problem in this way that Archimedes, late in the third century BCE, was able to establish the modern formula for circular area involving π. Owing to Euclid’s strictures, however, Archimedes had to disavow his use of the infinite. He was forced to frame his demonstration as a reductio ad absurdum—a double reductio, no less—in which the circle was approximated by finite polygons with greater and greater numbers of sides. This cumbersome form of argument became known as the method of exhaustion, because it involved “exhausting” the area of a curved figure by fitting it with a finer and finer mesh of straight-edged figures.
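The exhaustion idea can be sketched numerically. A regular n-gon inscribed in a circle of radius r decomposes into n isoceles triangles of total area (n/2)·r²·sin(2π/n), which approaches πr² as n grows (the function name is mine; Archimedes worked with polygons of up to 96 sides):

```python
import math

# Area of a regular n-gon inscribed in a circle of radius r:
# n isoceles triangles, each with apex angle 2*pi/n at the center,
# each of area (1/2) * r^2 * sin(2*pi/n).
def inscribed_polygon_area(n, r=1.0):
    return 0.5 * n * r**2 * math.sin(2 * math.pi / n)

# As n grows, the polygon "exhausts" the circle: the area tends to pi*r^2.
for n in (6, 24, 96, 6144):
    print(n, inscribed_polygon_area(n))
print("circle:", math.pi)
```

For the unit circle, the 96-gon already pins the area down to within about a thousandth of π, which is how Archimedes obtained his famous bounds on the ratio of circumference to diameter.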
For static geometry, the method of exhaustion worked well enough as an alternative to the forbidden infinitesimal. But it proved sterile in dealing with problems of dynamics, in which both space and time must be sliced to infinity. An object falling to earth, for example, is being continuously accelerated by the force of gravity. It has no fixed velocity for any finite interval of time, even one as brief as a thousandth of a second; every “instant” its speed is changing. Aristotle denied the meaningfulness of instantaneous speed, and Euclidean axiomatics could get no purchase on it. Only full-blooded infinitesimal reasoning could make sense of continuously accelerated motion. Yet that was just the sort of reasoning the Greeks fought shy of, because of the horror infiniti that was Zeno’s legacy. Thus was Greek science debarred from attacking phenomena of matter in motion mathematically. Under Aristotle’s influence, physics became a qualitative pursuit, and the Pythagorean goal of understanding the world by number was abandoned. The Greeks may have amassed much particular knowledge of nature, but their love of rigor held them back from discovering a single scientific law.
Though ostracized by Aristotle and Euclid, the infinitesimal did not entirely disappear from Western thought. Thanks to the enduring influence of Plato—who, unlike Aristotle, did not limit existence to what is found in the world of the senses—the infinitesimal continued to have a murky career as the object of transcendental speculation. Neo-Platonists like Plotinus and early Christian theologians like Saint Augustine restored the infinite to respectability by identifying it with God. Medieval philosophers spent even more time engaged in disputation over the infinitely small than over the infinitely great.
With the revival of Platonism during the Renaissance, the infinitesimal began to creep back into mathematics, albeit in a somewhat mystical way. For Johannes Kepler, the infinitely small existed as a divinely given “bridge of continuity” between the curved and the straight. Untroubled by logical niceties—“Nature teaches geometry by instinct alone, even without ratiocination,” he wrote—Kepler employed infinitesimals in 1612 to calculate the ideal proportions for, of all things, a wine cask. And his calculation was correct.
Kepler’s friendliness toward the infinitesimal was shared by Galileo and Fermat. All three were edging away from the barren structure of Euclidean geometry toward a fertile, if freewheeling and unrigorous, science of motion, one that represented bodies as moving through infinitely divisible space and time. But there was a certain theological nettle to be grasped by these natural philosophers, as Michel Blay observes in the introduction to his Reasoning with the Infinite: “How could one conceive of a real infinite, present in the world, when it was exactly the conception of the infinite that was supposed to be reserved to the Creator of the world—when speaking the name of the infinite was reserved to God alone?” It was Blaise Pascal who was most galvanized by this question. None of his contemporaries embraced the idea of the infinite more passionately than did Pascal. And no one has ever written with more conviction of the awe that the infinite vastness and minuteness of nature can evoke. Nature proposes the two infinities to us as mysteries “not to understand, but to admire,” Pascal wrote—and to use in our reasoning, he might have added. For Pascal was also a mathematician, and he freely introduced infinitely small quantities into his calculations of the areas of curvilinear forms. His trick was to omit them as negligible once the desired finite answer was obtained. This offended the logical sensibilities of contemporaries like Descartes, but Pascal replied to criticism by saying, in essence, that what reason cannot grasp the heart makes clear.
Although Pascal’s work prefigured the new science of nature, he (like Fermat and Galileo) never fully broke with the Euclidean tradition. But geometry alone was not up to the task of taming the infinitesimal; and if motion was to be understood quantitatively, the infinitesimal had to be tamed. This feat was finally achieved by Newton and Leibniz in the 1660s and 1670s with their more or less simultaneous invention of the “calculus of infinitesimals”—which we now know simply as the calculus. Reasoning with the Infinite furnishes a philosophically sophisticated account of how the failed “geometrization” of nature gave way to its wildly successful “mathematization,” in the form of the calculus. Its author, the director of research at the Centre National de la Recherche Scientifique in France, describes how a “new order of meaning” emerged as the old philosophical perplexities about the infinitesimal were replaced by sheer wonder at its scientific fecundity.
And in Newton’s hands, it scarcely could have been more fecund. Although his rival Leibniz worked out a more elegant formalism for the infinitesimal calculus—the same one in use today, in fact—it was Newton who used this new tool to bring a sense of harmony to the cosmos. Having framed his laws of motion and of gravity, he set out to deduce from them the exact nature of the orbit of a planet around the sun. This was a daunting task, given the continuous variation in a planet’s velocity and distance from the sun. Instead of trying to arrive at the shape of the orbit all at once, Newton had the inspired idea of breaking it up into an infinite number of segments and then summing up the effects of the sun’s gravitational force on the velocity of the planet in each infinitesimal segment.
Instantaneous velocity—a concept that had baffled Newton’s predecessors—was defined as the ratio of two vanishingly small quantities: the infinitesimal distance traveled in an infinitesimal amount of time. From his calculations Newton deduced that the planets should move in elliptical orbits with the sun at one focus—precisely the empirical law that Kepler had already formulated based on the voluminous sixteenthcentury astronomical observations of Tycho Brahe. By dint of the infinitesimal calculus, Newton had managed to unify celestial and terrestrial motion.
Newton’s demonstration of the law of ellipses was the single greatest achievement of the Scientific Revolution. The seeming implication—that nature obeys reason—made its discoverer the patron saint of the Enlightenment. Voltaire, after attending Newton’s royal funeral in 1727, wrote, “Not long ago a distinguished company were discussing the trite and frivolous question: ‘Who was the greatest man, Caesar, Alexander, Tamerlane, or Cromwell?’ Someone answered that without doubt it was Isaac Newton. And rightly: for it is to him who masters our minds by the force of truth, not to those who enslave them by violence, that we owe our reverence.” At a stroke Newton had transformed Aristotle’s teleology-ridden cosmos into an orderly and rational machine, one that could serve the philosophes as a model for remaking human society. By elevating natural law to the status of objective fact, the Newtonian world view inspired Thomas Jefferson’s proposition that under the law of nature a broken contract authorized the Americans to rebel against George III.
Behind this triumph of human reason, however, lay an idea that still struck many as occult and untrustworthy. Newton himself was more than a little qualmish. In presenting his proof of the law of ellipses in the Principia, he purged it insofar as possible of the infinitesimal calculus; the resulting exposition, cast in a Euclidean mold, is impossible to follow.^{1} In his later writings, Newton was careful never to consider infinitesimals in isolation but only in ratios, which were always finite. By the end of his life he had renounced the idea of the infinitely small altogether.
Leibniz, too, had misgivings about the infinitesimals. On the one hand, they appeared to be required by his metaphysical principle natura non facit saltus (“Nature does not make leaps”); without these amphibia traveling between existence and nonexistence, the transition from possibility to actuality seemed inconceivable. On the other hand, they resisted all attempts at rigorous definition. The best Leibniz could do was to multiply analogies, comparing, for instance, a grain of sand to the earth, and the earth to the stars. But when his pupil Johann Bernoulli cited the tiny creatures then being seen for the first time under the microscope (newly invented by Leeuwenhoek), Leibniz bridled, objecting that these animalcules were still of finite, not infinitesimal, size. Finally, he decided that infinitely small quantities were merely fictiones bene fundatae (“wellfounded fictions”): they were useful to the art of discovery and did not lead to error, but they enjoyed no real existence.
For Bishop Berkeley, however, this was not good enough. In 1734 the philosopher published a devastating attack on the infinitesimal calculus entitled The Analyst, Or a Discourse Addressed to an Infidel Mathematician. What motivated Berkeley was the threat to orthodox Christianity posed by the growing prestige of mechanistic science. (The “infidel mathematician” addressed is generally supposed to have been Newton’s friend Edmond Halley.) As contrary to reason as the tenets of Christian theology might sometimes appear, Berkeley submitted, they were nowhere near so arcane and illogical as the linchpin of the new science, the infinitesimal. Defenders of the calculus were made to confront the following dilemma: either infinitesimals are exactly zero—in which case calculations involving division by them make no sense; or they are not zero—in which case the answers must be wrong. Perhaps, Berkeley derisively concluded, we are best off thinking of infinitesimals as “ghosts of departed quantities.”
On the continent, Voltaire, for one, was unbothered by scruples about the infinitely small, breezily describing the calculus as “the art of numbering and measuring exactly a thing whose existence cannot be conceived.” As an instrument of inquiry it was simply too successful to be doubted. In the late eighteenth century, mathematicians like Lagrange and Laplace were using it to clear up even the difficult bits of celestial mechanics that had confounded Newton. The power of the calculus was matched by its versatility. It made possible the quantitative handling of all varieties of continuous change. The differential calculus showed how to represent the rate of change as a ratio of infinitesimals. The integral calculus showed how to sum up an infinite number of such changes to arrive at a global picture of the phenomenon in question. And the “fundamental theorem of calculus” linked these two operations in a rather beautiful way, by establishing that one was, logically speaking, the mirror image of the other.
During this golden age of discovery, scientists treated the infinitesimal as they would any other number, until it became convenient in their calculations to set it to zero. This cavalier attitude toward the infinitely small is captured by the advice of the French mathematician Jean le Rond d’Alembert: Allez en avant, et la foi vous viendra. (“Go forward, and the faith will come to you.”)
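The working habit of that golden age—compute with an infinitesimal ε as though it were a number, discard its higher powers as negligible, and read off the finite answer—can be made mechanical. A minimal sketch using what modern mathematicians call "dual numbers," where ε² is set to zero by fiat (this is an illustration of the habit, not a historical method; the class and function names are mine):

```python
class Dual:
    """A number a + b*eps, where eps is 'infinitesimal': eps * eps = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b  # finite part, coefficient of eps
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, the eps^2 term discarded
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f(x + eps) and read the rate of change off the eps part."""
    return f(Dual(x, 1.0)).b

print(derivative(lambda x: x * x, 3.0))      # slope of x^2 at x = 3
print(derivative(lambda x: x * x * x, 2.0))  # slope of x^3 at x = 2
```

Squaring x + ε gives x² + 2xε, and dropping the "negligible" ε² term—exactly Pascal's trick—leaves the rate of change 2x as the coefficient of ε.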
Still, there remained those who felt it a scandal that the edifice of modern science was being erected on such metaphysically shaky foundations. Throughout the eighteenth century there were many efforts to answer the charges against the infinitesimal put by critics like Berkeley, and to find a logical set of rules for its use. None was successful; some were simply fatuous.^{2} One of the more philosophically appealing attempts (discussed at length by Blay in Reasoning with the Infinite) was that of Bernard de Fontenelle, who tried to rationalize the infinitesimal by characterizing it as the reciprocal of the infinitely large. Though Fontenelle was ultimately defeated by formal difficulties, he was prescient in arguing that the reality of objects like the infinitesimal rested ultimately on their logical coherence, not on their existence in the natural world.
In the nineteenth century—by which time Hegel and his followers were seizing on confusions about the infinitesimal to support their contention that mathematics was selfcontradictory—a way was finally found to get rid of this troublesome notion without sacrificing the wonderful calculus that was based on it. In 1821, the great French mathematician Augustin Cauchy took the first step by exploiting the mathematical notion of a “limit.” The idea, which had been hazily present in the thought of Newton, was to define instantaneous velocity not as a ratio of infinitesimals but as the limit of a series of ordinary finite ratios; the members of this series, though never reaching the limit, come “as near as we please” to it. In 1858, the German mathematician Karl Weierstrass supplied a logically precise meaning to “as near as we please.” Then in 1872, Richard Dedekind, another German, showed how the continuum, previously thought to be held together by the glue of the infinitesimal, could be resolved into an infinity of rational and irrational numbers, no two of which actually touched.
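Cauchy's limit can be seen at work on the falling body that had defeated the Greeks. Taking s(t) = ½gt² for the distance fallen (a standard textbook example, not drawn from Cauchy; the names and the value of g are mine), the average velocity over a shrinking finite interval comes "as near as we please" to a fixed value, with no infinitesimal in sight:

```python
# Instantaneous velocity as the limit of ordinary finite ratios.
# For a falling body, s(t) = 0.5 * g * t**2, so the average velocity over
# [t, t+h] works out to g*t + 0.5*g*h -- which approaches g*t as h shrinks,
# though at no stage is h ever an "infinitesimal."
g = 9.8  # gravitational acceleration, m/s^2

def s(t):
    return 0.5 * g * t ** 2

def average_velocity(t, h):
    return (s(t + h) - s(t)) / h

t = 2.0
for h in (1.0, 0.1, 0.001, 1e-6):
    print(h, average_velocity(t, h))
print("limit:", g * t)
```

Weierstrass's contribution was to make "as near as we please" exact: for any tolerance ε, there is a threshold δ such that whenever h is smaller than δ, the finite ratio differs from g·t by less than ε.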
All of these developments were highly technical, and not a little painful to absorb. (They still are, as students of freshman calculus, made to struggle through mysterious “delta-epsilon” limit proofs, will tell you.) Taken together, they had three momentous consequences. First, they signaled the seemingly final ousting of the infinitely small from orthodox scientific thought. “There was no longer any need to suppose that there was such a thing,” observed Bertrand Russell with relief. Second, they meant a return to Euclidean rigor for mathematics, and its formal separation from physics after a heady era of discovery when the two were virtually indistinguishable. Third, they helped work a transformation in the prevailing philosophical picture of the world. If there is no such thing as the infinitesimal, then, as Russell observed, notions like “the next moment” and “state of change” become meaningless. Nature is rendered static and discontinuous, since there is no smooth transitional element to blend one event into the next. In a rather abstract sense, things no longer “hang together.”
The first banishment of the infinitely small led to the decline of Greek science. Its reintroduction led to the Newtonian revolution and the Enlightenment. Could it be that its second banishment led to the birth of modernism? That, stretching the point a bit, is William R. Everdell’s contention in The First Moderns. For Everdell, the achievement of Cauchy, Weierstrass, and Dedekind put paid to “that grail of nineteenth-century metaphor, smooth change.” The rejection of the infinitesimal, if not the cause of the cultural lurch we label modernism, was at least its beginning, he submits. Drawing together such disparate manifestations as Seurat’s pointillism, Muybridge’s stop-motion photography, the poetry of Whitman, Rimbaud, and Laforgue, the tone rows of Schoenberg, and the novels of Joyce, the author makes an engrossing and persuasive case for his claim that “the heart of Modernism is the postulate of ontological discontinuity.”
A certain nostalgia for the infinitely small persisted among a few philosophical mavericks. Around the turn of the century, the French philosopher Henri Bergson argued that the new “cinematographic” conception of change falsified our prereflective experience, in which infinitesimal moments of time glided smoothly one into the next. In the United States, C.S. Peirce, one of the founders of pragmatism, similarly insisted on the primacy of our intuitive grasp of continuity. Peirce railed against the “antique prejudice against infinitely small quantities,” arguing that the subjective now only made sense if interpreted as an infinitesimal. Meanwhile, in the mathematical world, the infinitesimal might have been expunged from “highbrow” mathematics, but it continued to be popular among “lowbrow” practitioners; physicists and engineers still found it an invaluable heuristic device in their workaday calculations—one that, for all its supposed muddledness, reliably led them to the right answer.
After all, despite the strictures of Aristotle, Berkeley, and Russell, the infinitesimal had never been formally shown to be inconsistent. And with advances in logic during the early part of this century, a new understanding of consistency, and its relation to truth and existence, had begun to emerge. The prime mover was the Austrian-born logician Kurt Gödel (1906–1978). Today Gödel is most famous for his “incompleteness theorem” of 1930, which says, roughly speaking, that no system of axioms is capable of generating all the truths of mathematics. In his doctoral thesis the year before, though, Gödel had proved a result of perhaps equal importance, which, somewhat confusingly, is known as the “completeness theorem.” It has a very interesting corollary. Take any set of statements couched in the language of logic that you please. Then as long as those statements are mutually consistent—that is, as long as no contradiction can be deduced from them—the completeness theorem guarantees that there exists an abstract structure in which they all come out true.
Gödel’s findings helped to inaugurate the field of logic called model theory, which studies the relationship between formal languages and their interpretations, or “models.” The most dramatic discovery that has been made by model theorists is that a theory in a formal language is usually incapable of pinning down the unique reality that it is intended to describe. And no one did more to exploit this fascinating indeterminacy than the subject of Joseph Dauben’s biography, Abraham Robinson: The Creation of Nonstandard Analysis, a Personal and Mathematical Odyssey.
For a logician, Abraham Robinson led a turbulent, yet urbane and even glamorous, life. Born in 1918 in the Silesian mining village of Waldenburg (now Walbrzych in Poland), he fled Nazi Germany with his family as a teenager. As a refugee in Palestine, Robinson joined the illegal Jewish militia called the Haganah while studying mathematics and philosophy at Hebrew University. A scholarship to the Sorbonne brought him to Paris shortly before it was taken by the Germans. Narrowly escaping, he managed to get to London during the Blitz and served as a sergeant for the Free French and then as a technical expert for the British Air Force. While pursuing pure mathematics and logic during the chaos of the war, Robinson also did brilliant work for the military in aerodynamics and “wing theory.”
After the war, Robinson and his wife, a talented actress and fashion photographer from Vienna, could be found attending the haute couture collections together in Paris. Following teaching stints at the University of Toronto and Hebrew University, he was given Rudolf Carnap’s old chair in philosophy and mathematics at UCLA at the beginning of the 1960s. Attracted by the lure of Hollywood, Robinson and his wife lived in a Corbusier-style villa in Mandeville Canyon, becoming friendly with the actor Oskar Werner. While doing work that made him one of the supreme mathematical logicians in the world, Robinson was also a convivial bon vivant as well as an early and vocal opponent of the Vietnam War. In the late 1960s he moved to Yale, helping to transform it into a world center for logic before dying of pancreatic cancer in 1974, at the age of 55.
All the biographical facts have been commendably gathered by Dauben, a historian of science at the City University of New York, and they are competently related—if we overlook, that is, the frequent dangling participles and cringe-making exclamation points (“And one morning, sitting on the floor of their room at the Miyako Hotel, they had a fish soup breakfast!”). We are told, in great and sometimes excessive detail, of the academic conferences Robinson attended, the cigars he smoked, the little red sports car he drove, and the time his wife slipped on some cow dung in Katmandu. Yet when it comes to explaining Robinson’s intellectual achievements, Dauben’s book is disappointing. It is replete with impenetrable sentences like “Indeed, in a nonstandard way, Robinson showed that every locally finitely generated subsheaf of the sheaf of germs of holomorphic functions on a given domain is coherent (Oka’s theorem).” But somehow the author never gets around to giving a clear account of Robinson’s greatest feat of genius: his single-handed redemption of the infinitely small.
Robinson achieved this by thinking of the language of mathematics as an object, one that could be investigated and manipulated by logic. He proceeded in two steps. First, he added to the ordinary theory of numbers a new symbol, which I’ll call i for “infinitesimal,” along with axioms saying that i was smaller than any finite number and yet not zero. These axioms were of the form “i is bigger than zero but less than 1/2,” “i is bigger than zero but less than 1/3,” “i is bigger than zero but less than 1/4,” and so on. Then he showed that this enriched theory of numbers was consistent, assuming the ordinary theory of finite numbers was. How did he do this? Well, suppose that the enriched theory was inconsistent—that is, suppose a contradiction could be deduced from it. The demonstration of this contradiction would, by definition, involve a finite number of steps, and hence only a finite number of the new axioms—none beyond, let’s say, “i is bigger than zero but less than 1/137.” But then, by interpreting i as any plain old fraction less than 1/137—like 1/138, for instance—you would have on your hands a contradiction in the ordinary theory of numbers. Therefore, if the ordinary theory is free from inconsistency, the enriched theory must also be. The usual paradoxes associated with the infinitesimal are evaded by this axiomatization because no single statement in it can express that i is smaller than all positive numbers.
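The finite-witness step of the argument can even be checked mechanically: for any finite batch of axioms "i is bigger than zero but less than 1/n" with n running up to some N, the perfectly ordinary fraction 1/(N+1) satisfies every one of them. A minimal sketch (the function name is mine, for illustration):

```python
from fractions import Fraction

# Any FINITE batch of Robinson-style axioms "0 < i < 1/n" (n = 2..N) is
# satisfied by an ordinary rational witness, here 1/(N+1). Only the full
# infinite list of axioms, taken together, forces i to be infinitesimal --
# and no single statement in the list says "i is less than every 1/n."
def finite_witness(N):
    i = Fraction(1, N + 1)
    assert all(0 < i < Fraction(1, n) for n in range(2, N + 1))
    return i

print(finite_witness(137))  # 1/138 satisfies every axiom up to "i < 1/137"
```

Since any supposed contradiction would use only finitely many axioms, and those axioms visibly have an ordinary witness, no contradiction can arise unless ordinary arithmetic itself is contradictory.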
There is more. If the enriched theory is consistent, then, by Gödel’s completeness theorem, there is some mathematical model that the theory truly describes. This model will be “nonstandard,” in the sense that it can be shown to contain all sorts of exotic entities in addition to the ordinary finite numbers. Among the entities living in this nonstandard universe are infinitely small numbers. They surround each finite number in a tight little cloud that Robinson, in a nod to Leibniz, dubbed a “monad.”
Robinson’s epiphany about the infinitesimal came to him one day in 1961 as he walked into Fine Hall at Princeton, where he was visiting during a sabbatical. Four years later he published Nonstandard Analysis, in which he elaborated on the mathematical potential of his discovery.^{3} Curiously, adding infinitesimals to the universe of mathematics in no way alters the properties of ordinary finite numbers. Anything that can be proved about them using infinitesimal reasoning can, as a matter of pure logic, also be proved by ordinary methods. Yet this scarcely means that Robinson’s approach is sterile. By restoring the intuitive methods that Newton and Leibniz pioneered, nonstandard analysis yields proofs that are shorter, more insightful, and less ad hoc than their standard counterparts. Indeed, Robinson used it early on to solve a major open problem in the theory of linear spaces that had frustrated other mathematicians.^{4} Nonstandard analysis has since found many adherents among mathematicians, especially in France, and has been fruitfully applied to probability theory, physics, and economics, where it is well suited to model, say, the infinitesimal impact that a single trader has on prices.
Beyond his achievement as a mathematical logician, Robinson must be credited with bringing about one of the great reversals in the history of ideas. More than two millennia after the idea of the infinitely small had its dubious conception, and nearly a century after it had been got rid of seemingly for good, he managed to remove all taint of contradiction from it. Yet he did so in a way that left the ontological status of the infinitesimal completely open. There are those, of course, who believe that any mathematical object that does not involve inconsistency has a reality which transcends the world of our senses. Robinson himself subscribed to such a Platonistic philosophy early in his career, but he later abandoned it in favor of Leibniz’s view that infinitesimals were merely “wellfounded fictions.” What is certain is that, whatever reality the infinitesimal might have, it has no less reality than the ordinary numbers—positive, negative, rational, and irrational—do. When we talk about numbers, modern logic tells us, our language simply cannot distinguish between a nonstandard universe brimming with infinitesimals and a standard one that is devoid of them. Thus when the writer David Berlinski claims, in his popular primer A Tour of the Calculus,^{5} that “there are no infinitely large or infinitely small numbers,” he is simply talking nonsense.
It remains a meaningful question, however, whether the infinitely small is part of the architecture of nature: meaningful, but perhaps irresoluble. Might matter, space, and time be infinitely divisible? In this century matter has been analyzed into atoms, which then turned out to consist of protons and neutrons, which in turn seem to be made up of smaller particles called quarks. Is that as far as it goes? There is some evidence that quarks too have an internal structure, but probing it may require greater energies than physicists will ever be able to muster. As for space and time, according to current speculative theories they too could well have a discontinuous, foamlike structure on the tiniest scale, with the minimum length being 10⁻³³ centimeters and the minimum time 10⁻⁴³ seconds (exactly the time, it has been observed, that it takes a New York cabbie to honk after the light turns green). Again, though, proponents of infinite divisibility can always argue that with greater energies even smaller spacetime scales could be detected, further worlds within worlds. They might also point to the “singularity” from which our universe was born in the big bang, an infinitely tiny point of energy. What better than the infinitesimal to serve as a principle of becoming, the ontological intermediary between being and nothingness?
Our most vivid sense of the infinitely small, however, may spring from our own finitude in the face of eternity, the thought of which can be at once humbling and ennobling. This idea, and its connection to the infinitely small, was expressed in a poignant way by Scott Carey, the protagonist of the 1950s film The Incredible Shrinking Man, as he seemed to be dwindling into nonexistence, at the end of the movie, owing to the effect of some weird radiation: “I was continuing to shrink, to become—what?—the infinitesimal,” he meditates, in a Pascalian vein, under the starry skies.
So close, the infinitesimal and the infinite. But suddenly I knew they were really the two ends of the same concept. The unbelievably small and the unbelievably vast eventually meet, like the closing of a gigantic circle. I looked up, as if somehow I could grasp the heavens. And in that moment I knew the answer to the riddle of the infinite. I had thought in terms of man’s own limited dimensions. I had presumed upon nature. That existence begins and ends is man’s conception, not nature’s. And I felt my body dwindling, melting, becoming nothing. My fears melted away. And in their place came acceptance. All this vast majesty of creation—it had to mean something. And then I meant something too. Yes, smaller than the smallest, I meant something too. To God, there is no zero. I still exist.
And so, one feels, does the infinitesimal.
This Issue
May 20, 1999

1
Even the Nobel laureate Richard Feynman lost his way in the middle of Newton’s argument and had to make up his own conclusion when presenting it in a lecture. See Feynman’s Lost Lecture, edited by David L. and Judith R. Goodstein (Norton, 1996).

2
Well into the next century, Karl Marx would try his hand at this problem, leaving nearly a thousand posthumous pages devoted to it.

3
Robinson aptly chose the book’s epigraph from Voltaire’s Micromégas: “Je vois plus que jamais qu’il ne faut juger de rien sur sa grandeur apparente. O Dieu! qui avez donné une intelligence à des substances qui paraissent si méprisables, l’infiniment petit vous coûte autant que l’infiniment grand.” (“I see more than ever that nothing should be judged by its apparent size. O God, who have given intelligence to substances that seem so contemptible, the infinitely small costs you as much as the infinitely great.”)

4
It is thus unfair to say, as Richard Morris does in his Achilles and the Quantum Universe: The Definitive History of Infinity (Henry Holt, 1997), that though nonstandard analysis is “an interesting theory, it appears that it has not yet produced any important new mathematical results.”

5
Pantheon, 1995.