Is the Internet Good for Democracy?
The story of technology is largely the story of people who guess wrong about which problems will be easy to solve and which will be hard. For example, less than a decade before the Wright Brothers’ flight, Lord Kelvin announced that “heavier than air flying machines are impossible.” Scientific American concluded in 1909 that “the automobile has practically reached the limit of its development,” on evidence “that during the past year no improvements of a radical nature have been introduced.” Thomas Watson of IBM famously said in the 1940s that “there is a world market for maybe five computers.”1 In The Road Ahead, published in 1996, Bill Gates of Microsoft said that his—or anyone’s—predictive writings about technology were bound to be judged harshly ten years after publication. “What I’ve said that turned out to be right will be considered obvious, and what was wrong will be humorous.”
People in the computer industry have already criticized Gates for one such “humorous” error. In the early 1980s, Microsoft made its historic deal to be the sole supplier of operating-system software for the first IBM Personal Computer. This was like being the sole engine supplier for the first mass-produced automobiles, and it was the foundation of Microsoft’s later dominance. But the software that Microsoft provided, known as MS-DOS, had various limitations that frustrated its early users. One quote from Gates became infamous as a symbol of the company’s arrogant attitude about such limits. It concerned how much memory, measured in kilobytes or “K,” should be built into a personal computer. Gates is supposed to have said, “640K should be enough for anyone.” The remark became the industry’s equivalent of “Let them eat cake” because it seemed to combine lordly condescension with a lack of interest in operational details. After all, today’s ordinary home computers have one hundred times as much memory as the industry’s leader was calling “enough.”
It appears that it was Marie Thérèse, not Marie Antoinette, who greeted news that the people lacked bread with qu’ils mangent de la brioche. (The phrase was cited in Rousseau’s Confessions, published when Marie Antoinette was thirteen years old and still living in Austria.) And it now appears that Bill Gates never said anything about getting along with 640K. One Sunday afternoon I asked a friend in Seattle who knows Gates whether the quote was accurate or apocryphal. Late that night, to my amazement, I found a long e-mail from Gates in my inbox, painstakingly laying out the reasons why he had always believed the opposite of what the notorious quote implied. His main point was that the 640K limit in early PCs was imposed by the design of processing chips, not by Gates’s software, and that he had been pushing to raise the limit as hard and as often as he could. Yet despite Gates’s convincing denial, the quote is unlikely to die. It’s too convenient an expression of the computer industry’s sense that no one can be sure what will happen next.
There are many examples showing how hard it is to predict the speed of technological advance or its effect on social or commercial life. Space travel. Cloning. Cures for cancer. The search for clean or renewable energy sources. The sense of apprehension in the last days of 1999 arose from the fact that while most experts believed the “Y2K bug” would not shut down computer systems, no one really could be sure.
The most dramatic recent demonstration of this problem involves the Internet. As a matter of pure technology, the Internet has worked far better than almost anyone dared hope. A respected engineer and entrepreneur, Robert Metcalfe, who invented the networking standard called Ethernet, predicted in 1995 that as more and more people tried to connect, the Internet would “go spectacularly supernova and in 1996 catastrophically collapse.” (Metcalfe later had the grace to literally eat his words, pureeing a copy of the column containing his prediction and choking it down before an amused audience.) In fact, the Internet has become both faster and less crash-prone the larger it has grown. This is partly because of improved hardware but mainly because of the brilliance of the “distributed processing” model by which it operates, which automatically steers traffic away from any broken or congested part of the network.
The financial assumptions surrounding the Internet have of course changed radically in a very short time. It was only three years ago that Lawrence Summers, then secretary of the treasury, joked that Brazil could solve its debt problems by changing its name to Brazil.com, since venture capital would then be sure to flow in. Until the summer of that year, any dot-com was assumed (by financiers) to be a winner, whether or not it had a plan for making profits. Since the spring of last year, any dot-com has been assumed to be a loser, even if it is in fact becoming profitable.
On-line sales continue to grow, despite the general recession and the depression among dot-com companies. In 2001, sales at “normal” retail stores were flat, but sales by on-line merchants rose by 20 percent. The two most celebrated dot-com bankruptcies in 2001 were DrKoop.com, a medical advice site and on-line drugstore, and WebVan, a grocery-delivery service that in 2000 had a market value of $1.2 billion. There was nothing preposterous about either of their business concepts, only about the lavishness with which they were carried out. Sooner or later some company will make money letting customers fill prescriptions or order staple groceries on line. The successful companies will probably be branches of established drugstore or grocery chains. The hit film Startup.com, released a year ago, told the story of the rise and embarrassing fall of GovWorks.com, which was intended to let state and local governments do part of their business on line. This, too, is a sound concept, one that seems destined to spread. Few people who have the choice to register a car on line will want to trudge downtown to do it the normal way. But for now, the overcorrection and rush from dot-com investments leave good ideas as underfunded as bad ideas were overfunded before.
While assessments of the Internet’s economic prospects have gone through manic swings, interpretations of its political and social effects have displayed a kind of stable schizophrenia. That is, through the last half-dozen years of the Internet’s explosive rise, observers have agreed that it would do something significant to society, but have disagreed about whether the effect would be good or bad.
The utopian view, strongest in the Internet’s early days but still heard now, boils down to the idea that the truth will make people free. Elections will become more about “issues,” as voters can more easily investigate each candidate’s position. Government will become more honest, as the role of money is exposed. People in different cultures will become more tolerant, as they build electronic contacts across traditional borders. Tyrants will lose their grip, as the people they oppress gain allies in the outside world and use the Internet to circumvent censorship. Liberal democracies will govern with a lighter hand, as information technology makes them more efficient and responsive. The recent struggles of dot-com companies have dampened the enthusiasm of some of the strongest exponents of these views, and have postponed estimates of when the desired effects might occur. But the concepts have not gone away.
The opposing, dystopian view shares the assumption that the Internet will weaken traditional power structures, but it emphasizes all the ways in which that will be bad. Families and communities will be weakened, as each member spends more time with “virtual” friends and associates. Childhood may be destroyed, because of the lure of pornography—of which a huge supply is available on the Internet—and of addictive on-line games and the threat of sexual predators. If the Internet ultimately erodes the barriers among people and parts of the world, then any culture, community, or institution that requires a sense of separate identity to survive is threatened. A variant of this concern has been the strongest international complaint about the Internet: that it would be another means of promoting American values and the English language.
The Internet’s effect on language already seems to be evolving in an unexpected way. Initially nearly all the Internet’s users were native English speakers, and nearly all Web pages were in English. Now well under half of all users are native English speakers, and the proportion can only fall. But the proportion of English-language pages is falling more slowly. It was 85 percent five years ago and about 60 percent now, as pages in Japanese, German, Chinese, Spanish, French, and other languages have been added rapidly. But in many parts of the non-Anglophone world, users are posting pages in English along with their own language, or instead of it, to make the sites comprehensible to the broadest possible group of readers. One recent academic study of this trend, called Internet—Flagship of Global English?, concludes that the Internet will cement the role of English as the universal lingua franca. The study was carried out at the University of Lecce, in Italy, and the results were posted in English.2
Previous big, modern innovations have made a significant difference in family, community, and national life. Antibiotics and immunization dramatically cut childhood mortality in the developed countries, which in turn altered family patterns and the place of women. So too for electricity, the telephone, automobiles, air travel, radio and television, and modern techniques of farming. The Internet could have effects as profound as any of these—but they won’t be clear all at once.
The discussion that has surrounded Cass Sunstein’s republic.com is a useful way to begin considering these long-term effects. I emphasize the discussion as much as the book, for it highlights some of the issues in the book in an unusual way. republic.com was published last spring. In it Sunstein, a highly regarded law professor and First Amendment specialist at the University of Chicago, addressed two different subjects with different degrees of authority and success.
One of his subjects was the connection between information flow and democratic government. His argument was that in a democracy “free expression” must mean something more than mere absence of censorship. Instead, a “well-functioning system of free expression,” one adequate to equip citizens for self-government, had additional tests to meet:
First, people should be exposed to materials that they would not have chosen in advance. Unplanned, unanticipated encounters are central to democracy itself…. I do not suggest that government should force people to see things that they wish to avoid. But I do contend that in a democracy deserving the name, people often come across views and topics that they have not specifically selected.
Second, many or most citizens should have a range of common experiences. Without shared experiences, a heterogeneous society will have a much more difficult time in addressing social problems…. Common experiences, emphatically including the common experiences made possible by the media, provide a form of social glue.
1. The Experts Speak, by Christopher Cerf and Victor Navasky (Villard, 1998), is a collection of “authoritative” predictions that were quickly disproved.