Y2K specter; drawing by David Levine

1.

The Year 2000 computer problem originated in the 1950s and 1960s, when programmers decided to use two rather than four digits to represent a year. The date of Apollo 11’s launch to the moon, for instance, was registered in NASA programs as 07/16/69, rather than 07/16/1969. It was obvious to programmers that they needed to reserve two digits each for the day and month of a date—07 rather than just 7 for July. But there seemed every reason to economize on the year.

One reason was that computer memory was then so expensive. Random-access memory, the sort a computer uses to store the instructions and data for the program it is actively running, cost roughly one million times as much, per unit of storage, in 1960 as it does today. One economist has contended that, even if all the current end-of-the-century reprogramming problems had been foreseen, sticking with two digits in the 1950s would still have been rational because of the compounded value of the early savings in memory cost. But less purely rational considerations of habit and convenience were also important. Before the era of early UNIVAC-style computers, businesses and government agencies had fed data into tabulating machines by using punch cards. The standard card had room for only eighty columns of data, a punch in each column indicating a digit or letter, so it was sensible to confine the year to two columns rather than four. The early computer programmers merely continued this practice. Such abbreviation also fit the way people usually talk and think. They speak of the Roaring Twenties, the Gay Nineties, the Spirit of Seventy-Six, and don’t bother with the century.

Once the two-digit standard was established, it naturally persisted, even as the price of memory fell. The great headache of the computer industry is “backwards compatibility,” or dealing with “legacy code.” It would be easy for companies to introduce faster computers, and more effective programs, far more frequently than they do—if they didn’t have to worry about compatibility with all the computers and programs that already exist. Inserting “19” at the beginning of each year would be a trivial matter for, say, word-processing documents, since a change in one memo does not necessarily affect the ability to understand the next. But for database programs, like those used to manage a payroll or keep track of airline reservations, the size of each data field—the number of spaces allotted to store each name, date, or other unit of information—is part of the basic structure of the program. Changing a field from two digits to four can be a major undertaking, with ripple effects on other programs and databases that must also be changed to remain compatible. Programmers who could work with a more or less free hand, like the creators of the Macintosh in the early 1980s, could build in four-digit dates from the beginning (though not all did—for reasons of habit, two-digit dates turned up in brand-new programs well into the 1990s). But a typical company whose business was humming along fine with two-digit dates had every reason not to tamper with its programs, until the end of the century came in sight.
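
To see why widening a field is so disruptive, consider a minimal sketch of the kind of fixed-width record old database programs relied on (the layout here is invented for illustration):

```python
# A fixed-width record: 20 characters of name, then a date stored as MMDDYY.
record = "ARMSTRONG NEIL      071669"

name = record[0:20].rstrip()
month, day, yy = record[20:22], record[22:24], record[24:26]
print(name, month, day, yy)  # ARMSTRONG NEIL 07 16 69

# Widening the year to four digits changes the record length, so every
# program that slices this layout by position must be found and changed too.
```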

As that time approached, programmers had to reckon with a variety of difficulties that collectively make up the “Y2K problem” or “Millennium Bug.” Some highly computerized functions are not date-sensitive, and therefore can roll into the next century unchanged. VCRs, alarm clocks, automatic coffee makers, and other small appliances are controlled by computer chips—but they have no way of monitoring the real date or time. This is why they so often show a flashing “12:00,” and why they should not be affected on January 1. Modern cars contain hundreds of computer chips controlling the engine and brakes and calculating when maintenance is next due. But usually they reckon time by counting the number of days or hours the car is running, rather than referring to an absolute calendar. They know to signal “Servicing Due” when the odometer reaches 30,000 miles, but they generally don’t know when a new year has started. The same is true of even the most advanced airplanes, which are full of computerized chips but whose systems generally keep track of accumulated operating time rather than the actual hour, day, month, and year.

Programs that do refer to an absolute calendar could well have problems the first time they carry out a comparison operation next year. Comparing one value to another, and then taking different actions depending on which value is larger, is one of the fundamental elements of any computer program. The comparisons will be flawed next year in any system using two-digit dates, because events next year—year 00—will seem to come before events this year, year 99.

Most often this will produce nonsense results that the program will be unable to interpret. For instance, in calculating whether a credit card is still valid or a bank loan is due, the computer subtracts the issue date from the current date to see how many years have passed. If an issue date in year 95 is subtracted from a current date in year 00 to calculate elapsed time, the result will be a negative number (-95 years). Depending on how a program is written, a negative value for elapsed time could either produce a cascade of erroneous results or simply stop the program. In either case, the computer won’t do what the user intends. There are endless variations on the ways by which two-digit years can stop programs or make them report false data, all of them arising from the fact that events in the future appear to be in the past.
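
A minimal sketch (the systems in question were mostly written in COBOL; the variable names here are invented) shows both failure modes the article describes:

```python
issue_year, current_year = 95, 0   # "95" and "00" as two-digit integers

# Comparison: year 00 wrongly appears to precede year 95.
print(current_year < issue_year)   # True, so 2000 "comes before" 1995

# Subtraction: elapsed time comes out negative.
elapsed = current_year - issue_year
print(elapsed)                     # -95, instead of the intended 5
```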


One special complication that adds to the Y2K problem involves leap year calculations. Years evenly divisible by four are, of course, leap years. Surprisingly enough, years divisible by 100 are not generally leap years—1800 and 1900 had only 365 days apiece. (This adjustment is necessary to reconcile the modern Gregorian calendar to the actual timing of the earth’s annual orbits around the sun.) But years divisible by 400 are leap years (a further calendar adjustment). So the year 2000, evenly divisible by 400, will have 366 days. If a computer, reading 2000 as “00,” assumes that the year is 1900, it will expect it to have only 365 days, and might therefore refuse to accept next February 29 or December 31 (the 366th day of 2000) as legitimate values.
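
The full Gregorian rule the article describes fits in a single line of code; a sketch, using Python for illustration:

```python
def is_leap(year: int) -> bool:
    # Divisible by 4, except century years, except centuries divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(2000))  # True: 2000 is divisible by 400
print(is_leap(1900))  # False: a system reading "00" as 1900 applies this
                      # result and rejects February 29, 2000 as a date
```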

Conceptually, the solution to the Y2K problem is straightforward: find all two-digit date fields, convert them to four digits, and insert “19” or “20,” as appropriate. In practical terms this is a huge and tedious undertaking, since date fields can be buried in so many obscure ways in so many kinds of programs. Nearly every major software company has posted information about how to begin Y2K repair efforts. A glance at these gives an idea of the magnitude of the task.1
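
In practice, many repair teams used a shortcut known as "windowing": rather than widening the field, the program infers the century from a pivot year. A sketch, with an illustrative pivot of 30:

```python
PIVOT = 30  # illustrative: two-digit years below 30 are taken as 20xx

def expand_year(yy: int) -> int:
    """Infer the century for a two-digit year using a fixed window."""
    return (2000 if yy < PIVOT else 1900) + yy

print(expand_year(99))  # 1999
print(expand_year(5))   # 2005
```

The window only defers the problem: once real years reach the pivot, the inference breaks again, which is one reason full four-digit expansion is the more thorough repair.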

And beyond the effort to debug programs themselves, where the main obstacle is limited time, is the near impossibility of determining how many embedded chips might be coded in a way that causes problems next year. Worldwide, something like 50 billion microchips are now built into medical devices, industrial plant controls, transportation and navigation equipment, heating and cooling systems, and virtually every other underpinning of modern life. Some small fraction—most estimates say 1 to 2 percent—of devices and systems were built with internal calendar functions that may produce errors in the next century. Chips in the most vital systems, those controlling power generation, emergency room care, aviation, and so on, have largely been tested and repaired. But no one will know how many other chips are vulnerable until they begin failing.

Everyone who has written or spoken about the problem agrees that some computer systems will malfunction next year, simply because there is too much code in too many places to find or correct in time. What is striking is the very wide difference in expectations about how serious the consequences of failure will be.

The first striking aspect of this debate is the almost ostentatious nonchalance of the technician class. Those who are deepest into the computer culture seem most relaxed about what will happen on January 1. In six months of inquiry, I have not found a software executive or professional programmer who is willing to express serious concerns about the consequences of Y2K. All admit the inevitability of some disruption; all dismiss the possibility of widespread or significant breakdowns. When asked where he placed himself on a ten-point scale of concern, Charles Simonyi, a much-respected programmer at Microsoft, said, “I am at Zero. It will be like the ‘Skylab is falling’ scare.” The head of Y2K preparations at a major software company said, “I am doing nothing special, personally, to prepare for serious bad things to happen on January 1.” Eric Schmidt, CEO of the software company Novell, compared the hubbub over Y2K to that over Monica Lewinsky: “Everyone will be very excited about it, but in the end it won’t matter very much.” Experts have been wrong about their fields before, and it is possible that the professionals’ confidence will look like hubris a few months from now. But two main reasons seem to lie behind this outlook.

One is that programmers in particular and the software industry in general, although they usually miss deadlines, believe that they can meet them when they have to. A truism often cited in Y2K discussions is that most software development projects take far longer than promised. Therefore, however close the experts say they’ll be to correcting Y2K problems, the reality of next January 1 would seem likely to be worse.2 The programmers reply that there is an enormous difference between the schedule for a normal commercial product, such as the latest release of Lotus’s popular Notes software, and that for completing Y2K work. If an ordinary program reaches the market a month or even a year later than planned, it’s a nuisance but not a disaster. The new “R5” version of Notes was late—but so was its main competition, the latest Office software from Microsoft. Often the delay reflects a deliberate tradeoff, to allow new features to be designed and fitted in.


The deadline for Y2K work, in contrast, is entirely real, and programmers approach it with an emergency room mentality of fixing the truly urgent problems now and leaving niceties for later. Normal software development is like book publishing, in that deadlines are targets rather than imperatives, whereas Y2K repair work is like the evening TV news. If you extrapolated from missed deadlines in the book business, you’d think TV news could never get out on time—but it does, because the deadlines are real.

Beyond asserting that their colleagues can get the most important work done in time, programmers go on to claim that the software industry has already done so. The Y2K transition, they say, won’t happen like a thunderclap next New Year’s Eve, as many disaster scenarios assume. Indeed, it is already well underway, through the countless revised computer programs that must deal with dates in the future—and so far the transition has been surprisingly smooth.

Four-year colleges enrolled members of the Class of 2000 three years ago; for at least that long, their grading, billing, and enrollment systems have had to recognize that someone from the class of ’00 should graduate one year later, not ninety-nine years earlier, than someone from the class of ’99. (According to a report from the President’s Commission on Year 2000 Conversion, released in August, American schools in general have been slow to make Y2K preparations. Only 28 percent of the 3,500 school districts surveyed said they had converted all their computer systems. However, the report said the problems were unlikely to have a direct impact on teaching and learning.) In 1996, because of uncertainty about how its computers would handle dates in the next century, the Visa network instructed member banks not to issue cards with expiration dates later than 12/99. By late 1997, Visa was confident enough about revisions in its software to authorize cards with post-1999 expiration dates. When the first of these ’00 cards were used, early in 1998, there were scattered reports of trouble, but for more than a year they’ve been working with no more than the routine level of credit card problems. Most people reading this article possess ATM cards, driver’s licenses, credit cards, or other documents showing expiration dates of ’00 and afterward.

Many industrial and financial systems are already living with the Y2K transition, and doing so without significant difficulty. Airline reservation systems work 330 days into the future. Since April, they’ve had to recognize that a trip booked in year 99 might take place in year 00. Bonds and CDs have been issued with maturity dates in the next century. On March 1, the governments of New York State and of Japan began their 1999-2000 fiscal years, without notable Y2K mishap. On April 9 of this year computer systems around the world functioned without problems, which is significant because that was the 99th day of 1999. Programmers sometimes use numbers like 99, 999, or 9999 as dummy values signaling the end of an operation. A simple program might say, “For n=1 to 9999, do…,” meaning that a certain process should be carried out 9999 times and then stop. A program that rendered April 9 internally as 99/99 or 9999 might therefore have problems—but few apparently did. The next such sensitive date comes on September 9—9/9/99. Until it arrives, it is impossible to prove that it will also pass uneventfully, but the evidence points that way—as it does for the beginning of the federal government’s 1999-2000 fiscal year, on October 1.3
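
The worry about 9/9/99, sketched with an invented record format: some old programs treated a run of nines as an end-of-data marker rather than a date.

```python
END_MARKER = "9999"   # sentinel meaning "no more records," an old convention

def process(dates):
    for d in dates:
        if d == END_MARKER:    # intended as a sentinel, not a real date
            print("end of data")
            return
        print("processing", d)

# A program that rendered September 9, 1999 as "9999" would stop early:
process(["0401", "9999", "1015"])
```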

Recent Y2K tests of the nation’s electric power grid, telephone networks, air traffic control system, and financial institutions have been reassuring. The head of the Federal Aviation Administration, Jane Garvey, has announced that when the clocks turn she will be on a commercial flight from Washington, D.C., to Dallas. On January 1 of this year, European financiers began the switch from national currencies to the euro, which required software reengineering comparable to that of Y2K. Predictions about its impact were dire but so far have not been borne out.

There have been enough moderate-scale difficulties, in enough parts of the world, to make it clear that some disruptions will continue until and after the new year.4 Two years ago, Chrysler ran a Y2K drill at one of its production plants, setting the internal clocks of its computers to the year 2000 and seeing if they still worked. Most of the car-making machinery functioned, but the building’s security system failed, and employees could not open the doors for several hours. Something similar happened to the mayor of Los Angeles this May, when, during a Y2K test, City Hall lost electric power and he was stuck in an elevator. On January 1 of this year, the passport-reading software at Sweden’s main airports failed; so did the credit card-reading system for Norway’s gas stations. Each was fixed within hours.

Such breakdowns will become more numerous and for that reason more important at the turn of the year. “The most serious problem areas will hit organizations and individuals where there is an intersection between dependence on embedded systems and a lack of Y2K readiness programs,” Jeanne Sheldon, a testing expert who is head of Y2K compliance for Microsoft’s Office software, told me. “If you don’t know that your widget assembly plant equipment measures the speed of the assembly line by calculating widgets-per-elapsed-time, then your line will shut down when the elapsed time becomes a negative number.” Localized failures of this sort, she points out, could be corrected simply by turning off the computers controlling the assembly line and powering them back up. (This would reset the elapsed time counter so that all its values were in year 00, avoiding the problem of negative elapsed time.)
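
Sheldon’s widget example, in a minimal sketch (all names invented): with two-digit years the elapsed time goes negative, and restarting the counter in year 00 clears the fault.

```python
def elapsed_days(start_year, start_day, now_year, now_day):
    # Naive elapsed time with two-digit years, ignoring leap days.
    return (now_year - start_year) * 365 + (now_day - start_day)

# Counting began Dec. 30, 1999 (year 99); it is now Jan. 2, 2000 (year 00).
print(elapsed_days(99, 364, 0, 2))  # -36497: the rate calculation fails

# Power-cycling restarts the count in year 00, so elapsed time is
# positive again and the line can run:
print(elapsed_days(0, 1, 0, 2))     # 1
```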

In the dozen hearings it has conducted this year, the US Senate’s Special Committee on the Year 2000 Technology Problem has heard repeatedly from witnesses that the most severe Y2K problems are likely to occur outside the United States. The worldwide Internet is designed explicitly to keep functioning even if some computers or local networks connected to it break down. But how much more slowly and spottily it might function if overseas systems, or crucial network connections within the US, malfunction can only be guessed now. The most alarming hypothesis presented at the Senate hearings came from Lawrence Gershwin, a national intelligence officer for the CIA. He described the ways in which electric power failures in the former Soviet empire might lead to catastrophic failures of old Soviet-designed nuclear power plants, including one reactor still functioning at Chernobyl.5

These projections of possible disasters assume a series of linked failures, like falling dominoes. First the electric supply shuts off, because the Russians have not located all the Y2K problems in their power grid. A power failure would normally cause the plant simply to turn itself off—but in certain circumstances it could conceivably disable the equipment necessary for a safe shutdown. For instance: Gershwin said that in a certain type of water-cooled reactor, called RBMK, a power failure or computer failure would activate the safety system and begin turning the reactor off. But a complete shutdown requires cooling pumps to operate for several days, removing extra heat from the nuclear core. The pumps run on electric power, which in an emergency might come from diesel generators at the nuclear plants. But unless the plants had enough diesel fuel on hand to keep the generators running for days, the result could be a core meltdown.

Gershwin has declined to take phone calls about these possibilities. In July a CIA press official said that he stood by his March statement but “continues to follow” the situation. The dangers that can be identified ahead of time—like shortages of diesel fuel—can presumably be averted. But the possibility of meltdown, though unlikely, cannot be ruled out until the time comes.

Gershwin raised an even worse prospect: that Y2K failures in strategic defense systems might somehow lead Russia or the United States to launch nuclear missiles. Later comment by other officials, following concentrated international attention to the problem, makes this seem less threatening. “The specter of nuclear missiles firing themselves because of Y2K problems is unlikely,” Jack Gribben, spokesman for the President’s Council on Year 2000 Conversion, told me. “Unlikely?” I inquired. He elaborated: “It will not happen. It requires human intervention. They cannot fire themselves. There are authorizations of all sorts that need to occur—and the default position if they fail is to have them shut down rather than go off.” Even during the Kosovo fighting, the US military was working with its Russian counterparts to station observers in each side’s headquarters by December 31, keeping them in phone contact to make sure no sneak launch was underway.

The most worrying and plausible outcome of Y2K is loss of electric power in a city or region. This is thought to be all but certain in some of the poorest nations, where facilities for reprogramming computers are limited, and will in some cases be used only after the failure actually occurs. The unfortunate reality is that the same societies often can’t count on electric power in the best of circumstances—and they are not connected to international power grids that would let them bring down other nations’ systems. The results of Y2K failure may be harsh for them, but they will not differ in their effects from the breakdowns in electrical service that take place fairly often already.

2.

The most significant aspect of Y2K will probably not turn out to be whatever happens in January. Instead it will involve the actions taken in anticipation of that moment. Even if the computer disruptions prove to be as mild as most programmers now expect, the effort to forestall disruption has turned Y2K into a major commercial, political, and cultural event.

Like fears of terrorism, anticipation of Y2K has spawned its own industry of consultants and advisors, ranging from real experts to hucksters. The first boom was among veteran programmers familiar with the old mainframe languages, notably COBOL, in which the problem got its start. Of the estimated $600 billion that will be spent worldwide on Y2K corrections, most will go to software companies in one form or another, largely for new systems that allow the client to avoid the nuisance of debugging old ones. Because computer repair fees recirculate within the high-tech economy, Y2K has so far had a stimulative, or at worst redistributive, effect on the economy, rather than acting as a drag. By contrast (as Mitch Ratcliffe, of the online publication ZDNet, has pointed out), the collapse of Asian markets took $3.5 trillion out of global assets during six months in 1998, and that was dead loss.6

The $600 billion estimate of the total costs of Y2K corrections comes from the Gartner Group, based in Stamford, Connecticut, which has made itself the most visible and controversial consulting firm with Y2K expertise. The controversy involves potential overestimates of the cost and impact of software repairs. The more apocalyptic the prediction of what Y2K might bring, the more attention it has received in the press. This also holds true for the consulting firms making the predictions—as Barnaby Feder noted in The New York Times, Morgan Stanley Dean Witter has gotten virtually no attention for its prediction that worldwide costs would be about one seventh as much as Gartner’s estimate7—and for individual experts as well. The Internet is home base for about a dozen Y2K authorities who have become local celebrities, in the fashion of talk-radio hosts. One of them—the Texan Gary North—is so extreme in his apocalyptic vision as to have provoked a substantial debunking counterreaction. North (www.garynorth.com) maintains that failure to fix even one of the world’s computers or electric systems will eventually bring them all down, so modern industrial society is at an end. A counter-website is called the “Gary North is a Big Fat Idiot Page” (http://206.28.81.29/).

But just short of this level of despair are many other authorities whose prominence seems to depend on their pessimism. The most frequently cited of them is Edward Yardeni, chief economist of the Deutsche Bank, who for the last two years has been predicting a “severe worldwide recession” beginning next January. Yardeni first said there was a 70 percent chance of a Y2K-related collapse similar to the one that followed the oil shocks of the early 1970s; recently he’s reduced the estimate to 60 percent. The cause, he said, would be disruption of “the global just-in-time manufacturing system,” by which components arrive precisely at the moment when they will be used, thus avoiding many costs. (His site is www.yardeni.com. Like North’s, it has a collection of valuable links to other Y2K information.)

The financial-world consensus has become much more sanguine than Yardeni’s view. For instance, in July, Merrill Lynch Global Research released a report saying that the “real” Y2K problem, that of repairing computer systems, was being solved; the only significant financial risk, it said, might come from panicky behavior by consumers or investors as the date drew near. If next January passes without incident, Edward Yardeni will have some explaining to do. But his explanation could well be that he and his fellow Cassandras saved the world. Because the alarmists got attention, they stampeded companies and governments into repair efforts while there was still time.

The similarly named Y2K expert Edward Yourdon has already been through the “explaining to do” stage. As of the mid-1990s he was a well-established computer expert—author of many authoritative technical books, a graduate of MIT. Then he began warning of Y2K problems, and, as he put it on his website (www.yourdon.com), the issue “has occupied nearly every waking moment since the summer of 1997.” He has given scores of speeches, testified before Congress, posted almost daily updates on his website, and published several books, the best known of which (written with his daughter Jennifer) is Time Bomb 2000. This book lays out scenarios ranging from a decade-long world depression to a minor blip lasting a couple of days, but it favors the dire possibilities. Yourdon moved from New York City to New Mexico to avoid the urban chaos likely to follow Y2K, and he published a celebrated article predicting that programmers in general would have to flee the big cities in 1999 to escape the public’s wrath.

As Yourdon’s prominence rose, he attracted critics and rebuttals. Suddenly, at the end of May, he published a statement headed “Sayonara, Y2K” on his website, announcing that he would have nothing more to say on this subject, that he was changing his e-mail address, that he was retreating from the field. It was hard to avoid the impression of a technical expert who had stumbled into the nasty world of political debate and had not bargained for what he encountered.

Another part of the Y2K industry is the publishing business, which has produced more than 100 books on the subject. Mainly these concern personal financial advice, which is aimed at already hard-pressed Americans but offers them the hope that they can do well out of the nation’s distress. Get Rich with Y2K begins on a down note: “We are about to face possibly the most terrifying and challenging time of our lives.” On the other hand,

the opportunity for acquiring financial riches is mind boggling…. In a time when job security is threatened, unemployment is on the rise, businesses are going into bankruptcy, and foreclosures skyrocketing, you can become rich!… You will be able to live like a king….

The Get Rich strategy mainly involves buying distressed properties at the foreclosure auctions that are sure to follow Y2K and then turning them into rental housing. This and other books offer sensible tips: keep paper records of bank, stock, and real estate transactions near the end of the year; don’t be flustered if bills at the beginning of the year are messed up; and lay in modest additional supplies of anything you can’t do without, from prescription medicine to cash. Still, as with so many self-improvement and pyramid schemes, it is impossible to avoid the impression that the people getting rich will be the ones selling the advice.

The Federal Reserve Board, aware of the cash-supply part of such counsel, is building its own additional hoard of cash for year-end purposes. A spokesman said that the demand for cash is always high during the end-of-the-year holiday season, so the Fed typically has a cushion of $200 billion in currency available then. The plan this year is to have at least $50 billion more on hand. That cushion comes to about $1,000 per American—seemingly not much, in the event of a genuine bank run. But Alan Greenspan has been Zen-like in his public unconcern about Y2K, and the Fed spokesman said, “If need be, the Bureau of Engraving and Printing could go onto overtime. We’re confident we can get out the money to meet the demand.”

3.

As a political event, Y2K is becoming an unanticipated nightmare for the Democrats. For Al Gore, it can bring only bad news. If the New Year comes and goes without incident, no one will use that as a reason to vote for him. But if there are real problems, not only will he lose the ability to run on the Clinton prosperity; he will also become the technology expert blindsided by a technology failure.

The more immediate problem for the Democrats is this year’s negotiation over the right to sue for Y2K-related commercial damages. In July, by lopsided margins, the House and Senate passed a bill that would make it harder to sue computer or software companies if their systems fail next year—as some undoubtedly will. Under the bill, the companies would have a ninety-day grace period to try to fix problems before any suit could be filed; plaintiffs would be obliged to do their best to mitigate damages, rather than rush straight to court; class action suits would be limited; and companies with fewer than fifty employees could be sued for no more than $250,000 in punitive damages. (There would not be a limit on normal compensatory damages.) The bill originated with Republicans but was backed by virtually every business in the high-tech economy. It put the Clinton administration in an impossible position, because it pitted one bloc the Democrats hope to woo (the high-tech moguls) against another that has been extremely loyal in fund-raising (the trial lawyers’ association, which bitterly opposed the bill). After attacking the bill as anti-consumer and emboldening a handful of Democrats to vote against it, the President announced in July that he would sign it. Once again he has left the hapless Gore in a no-win predicament: the high-tech industry isn’t grateful for the late conversion, and the lawyers and consumer groups feel betrayed.

Y2K has also left a mark on politics in the broad cultural sense. Just as those closest to computer programming seem calmest about Y2K’s impact, those closest to centers of governmental, commercial, or social power seem least concerned about how society will change with the millennium. The people who claim we face deep consequences tend to be far from power.

Some countercultural groups on the left, for example, say that the coming collapse of national power networks should be a chance to break out of the mass-consumption, mass-culture modern life style. The editors of The Simple Living Journal say in their Y2K Preparation Guide that getting ready for the millennium can be a way to build a new spirit of community-level mutual trust.

The far more prominent voices are from the right, illustrated by two recent polemical novels. (Both were originally published by Xlibris Press, a vanity house.) Y2K: The Millennium Crisis, by Bradford Morgan, is at first glance overtly political. Its villains are an American president and vice president drawn as caricatures of Clinton and Gore. They deliberately engineer a worse-than-necessary worldwide computer collapse on January 1 as a pretext for imposing martial law, suspending elections, and making themselves rulers for life. (In the case of the Clinton figure, that’s not very long. His seemingly loyal vice president has him poisoned before his own demise.)

But the real theme of the book is religious. The band of heroes that struggles to repair computer systems and preserve democracy benefits at crucial points from divine intervention. (Amazingly, most of these heroes are federal bureaucrats.) The book’s climax comes not so much when the national power grid starts running as when an important character embraces Christianity. There is also a laugh-out-loud scene in which the Clinton figure literally goes to Hell, which will satisfy the fantasies of his enemies who believe that he’s escaped punishment in this world.

The religious concept of a millennial end-of-days is an important part of the alarmist Y2K movement. Religious stations on cable TV have heavily emphasized this theme over the last two years. Gary North, the most extreme Y2K pessimist, is the son-in-law of the major theorist of the “Christian Reconstructionist” group, which believes that current society must be toppled and succeeded by an era of strict biblical rule before the return of Jesus to earth. This may explain the jaunty tone of North’s predictions, for instance: “I think [Y2K] will wipe out every national government in the West. Not just modify them—destroy them. I honestly think the Federal government will go under. I think the USA will break up the way the USSR did. Call me a dreamer. Call me an optimist. That’s what I think.”

The other Xlibris novel, Y2K: The Millennium Bug, by Don Tiggre, is right wing in a harder-core, survivalist sense. The theme of his book is that anyone who relies on established, mainstream institutions to supply food, water, heat, power, and physical safety will perish as the century begins. The surviving protagonists are the ones who hole up in the Rockies, on heavily fortified Dollar Ranch, where they have stockpiled food and use special (Y2K-compliant) computer-guided artillery to mow down the hungry masses who come to pillage. (Mormons in general also survive, in the newly independent state of Deseret.) The book offers this projection of the spectacle next New Year’s Eve, as CNN broadcasts shots from orbiting satellites:

Occasionally, as midnight moved westward through a gap in the cloud cover, cities and other light sources would go out. This had become quite pronounced as Y2K hit Europe, especially when the country of Belgium disappeared from sight. You couldn’t miss Belgium’s disappearance; virtually all of their highways were lit up at night, making Belgium one of the brightest places on nocturnal earth, as seen from space. When its power grid had gone down, it was as though the cold waters of the North Sea had poured into the heart of Europe and obliterated the country.

Now it was the United States’ turn.

Many other books and magazines convey the same idea: that to rely on the Establishment is to invite doom. A newsstand book called Y2K Crisis: Doomsday 2000! is full of advertisements for dried food, water purification systems, guns and ammo, and guides to living off the land. (Why dried food? Because the just-in-time inventory system for grocery stores will break down and the shelves will be bare.) The book advertises more than a dozen survival books by Ragnar Benson, including Ragnar’s Ten Best Traps: “With these little-known traps, you’ll nab every imaginable fish, fowl and beast, including bobcat, fox, coyote, deer, mink, coon, duck and the craftiest of all—man.”

The unwillingness to trust modern computers seems to be a specific case of a mistrust of modern life in general. And this attitude is the one thing that the computer experts say could in fact turn the inevitable localized problems next January into broader economic and technical failures. Jeanne Sheldon, Microsoft’s testing supervisor and Y2K authority, said she was not laying in food or making other efforts to prepare for problems. “To do otherwise would be socially irresponsible,” she said:

I worry a lot about the social and economic impact of panic. Confidence in the effectiveness of computer technology is part of what makes it effective. The funny part of the concerns about cascading failures is that the people expressing the concerns believe that they have solved everything within their domain. It’s just those other interconnected systems that might fail. I don’t see it as much different from realizing that not all cars have working brakes. Good drivers are aware of that. But they don’t stop using public roads because of it.

On this view, the Y2K “crisis” lies in whether or not, as December 31 approaches, enough people actually see it as a crisis and become possessed by sudden, overwhelming fears that cause them to sell property, pile up cash, and otherwise behave irrationally. The chances of such outbreaks of panic have been getting smaller and smaller, but they are still there.

This Issue: September 23, 1999