Investigating the Impact of the Year 2000 Problem
The Simple Living Journal's Y2K Preparation Guide: 110 Ways to Create a Sustainable Life
Crisis or Not
Y2K: The Millennium Bug
Y2K: The Millennium Crisis
Get Rich with Y2K: How to Cash in on the Financial Crisis in the Year 2000
The Year 2000 computer problem originated in the 1950s and 1960s, when programmers decided to use two rather than four digits to represent a year. The date of Apollo 11’s launch to the moon, for instance, was registered in NASA programs as 07/16/69, rather than 07/16/1969. It was obvious to programmers that they needed to reserve two digits each for the day and month of a date—07 rather than just 7 for July. But there seemed every reason to economize on the year.
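To make the arithmetic concrete, here is a minimal sketch, not drawn from any actual legacy system, of how a two-digit year breaks the moment dates straddle the century boundary:

```python
# Legacy-style date arithmetic: every year is stored as two digits,
# on the silent assumption that all dates fall in the 1900s.
def years_elapsed(start_yy, end_yy):
    return end_yy - start_yy

# Within one century the shortcut is harmless.
# Apollo 11's launch year, stored as 69, to 1999, stored as 99:
print(years_elapsed(69, 99))   # 30 -- correct

# But the year 2000 is stored as 00, and the subtraction goes negative.
print(years_elapsed(69, 0))    # -69, instead of the correct 31
```

A program computing an interest period, an age, or an expiration date this way does not crash; it quietly produces a nonsensical negative number, which is what made the bug so hard to audit for.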
One reason was that computer memory was then so expensive. Random-access memory, the sort a computer uses to store the instructions and data for the program it is actively running, cost roughly one million times as much, per unit of storage, in 1960 as it does today. One economist has contended that, even if all the current end-of-the-century reprogramming problems had been foreseen, sticking with two digits in the 1950s would still have been rational because of the compounded value of the early savings in memory cost. But less purely rational considerations of habit and convenience were also important. Before the era of early UNIVAC-style computers, businesses and government agencies had fed data into tabulating machines by using punch cards. The standard card had room for only eighty columns of data, a punch in each column indicating a digit or letter, so it was sensible to confine the year to two columns rather than four. The early computer programmers merely continued this practice. Such abbreviation also fit the way people usually talk and think. They speak of the Roaring Twenties, the Gay Nineties, the Spirit of Seventy-Six, and don’t bother with the century.
Once the two-digit standard was established, it naturally persisted, even as the price of memory fell. The great headache of the computer industry is “backwards compatibility,” or dealing with “legacy code.” It would be easy for companies to introduce faster computers, and more effective programs, far more frequently than they do—if they didn’t have to worry about compatibility with all the computers and programs that already exist. Inserting “19” at the beginning of each year would be a trivial matter for, say, word-processing documents, since a change in one memo does not necessarily affect the ability to understand the next. But for database programs, like those used to manage a payroll or keep track of airline reservations, the size of each data field—the number of spaces allotted to store each name, date, or other unit of information—is part of the basic structure of the program. Changing a field from two digits to four can be a major undertaking, with ripple effects on other programs and databases that must also be changed to remain compatible. Programmers who could work with a more or less free hand, like the creators of the Macintosh in the early 1980s, could build in four-digit dates from the beginning (though not…
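The ripple effect can be illustrated with a hypothetical fixed-width record layout (invented here for illustration, not taken from any real payroll system), sketched with Python's standard `struct` module:

```python
import struct

# Hypothetical fixed-width payroll record: a 10-byte name followed by
# two bytes each for day, month, and year. Every program that reads or
# writes this file must agree on exactly these offsets and sizes.
OLD_RECORD = struct.Struct("10s2s2s2s")   # 16 bytes per record

record = OLD_RECORD.pack(b"ARMSTRONG ", b"16", b"07", b"69")
name, day, month, year = OLD_RECORD.unpack(record)
print(year)   # b'69' -- but which century?

# Widening the year to four digits changes the size of every record,
# so every reader, writer, index, and downstream program that assumed
# 16-byte records must be changed in step -- the "ripple effect."
NEW_RECORD = struct.Struct("10s2s2s4s")   # now 18 bytes per record
```

Nothing in the old file marks where one field ends and the next begins; the boundaries exist only in the programs' shared assumptions, which is why a two-byte change could not be made in one place.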