Google plays such a large part in so many of our “digital lives” that it can be startling to learn how much of the company’s revenues come from a single source. Almost everyone online relies on one Google service or another; personally I make semiregular use of
Google Search, Gmail, Google Chat, Google Voice, Google Maps, Google Documents, Google Calendar, Google Buzz, Google Earth, Google Chrome, Google Reader, Google News, YouTube, Blogspot, Google Profiles, Google Alerts, Google Translate, Google Book Search, Google Groups, Google Analytics, and Google 411.
Yet few of these services support themselves (YouTube alone has lost hundreds of millions of dollars per year). In 2008 the advertising on Google’s search engine was responsible for 98 percent of the company’s $22 billion in revenue, and while Google refuses to provide more recent percentages, the company’s 2009 revenue of $23.6 billion suggests that little has changed.
The reason Google’s search engine remains its single largest source of revenue—and why that revenue exceeds that of any other website—can most easily be understood by studying the company’s history. In 1998, when the Stanford graduate students Sergey Brin and Larry Page launched Google, the existing search engines were so inadequate that only one was capable of finding itself when queried with its own name; a search for “cars” on Lycos, one of the better search engines, returned more pornography sites than sites about cars.
Google’s vast improvement on other search engines is usually attributed to a new algorithm, called PageRank, that made use of the links between sites to more accurately determine relevancy. In contrast to other search engines, which ranked results according to the number of times a searched-for word was used, Google ranked its results based on the number of links a site received, a method that revealed the “wisdom of crowds.” The pages to which many people link were, by Google’s model, listed higher than pages that were less popular, and if a particularly popular site linked to a page, that link would be given greater weight in determining relevancy. But Google’s success was not due primarily to its technical ingenuity; other search engines, including Lycos, used a similar ranking technology.
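The link-weighting idea described above can be sketched in a few lines of code. What follows is a minimal illustration of the general principle, not Google's actual implementation: the tiny three-page "web," the damping factor, and the iteration count are all hypothetical choices made for the example.

```python
# A minimal sketch of link-based ranking in the spirit of PageRank.
# Each page's score is spread among the pages it links to, so a page
# that receives links from popular pages accumulates a higher score.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        # Every page starts each round with a small baseline score.
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            # A page divides its current score among its outgoing links,
            # so a link from a high-ranked page carries more weight.
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

# Three pages: "a" and "b" both link to "c", so "c" ranks highest.
web = {"a": ["c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # prints "c"
```

The point of the toy example is the one the paragraph makes: "c" wins not because of the words on its page but because two other pages vouch for it with links.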
Google’s advantage over Lycos only became apparent when Page and Brin, still intent on pursuing their Ph.D.s, tried to sell their technology to another search engine, Yahoo. As Ken Auletta writes in Googled:
[The Yahoo founders] were impressed with [Google’s] search engine. Very impressed, actually; their concern was that it was too good…. The more relevant the results of a search were, the fewer [pages] users would experience before leaving Yahoo. Instead of ten pages, they might see just a couple, and that would deflate the number of page views Yahoo sold advertisers.
In the early years of the Web, a search engine was considered only one of many attractions on sites like Yahoo and Lycos, which were attempting to become, in the language of the time, a “portal,” or a site that served as an entry point to the Web and provided links to various kinds of content. Most of this content was organized and displayed on different Web pages that were part of the site, thus leading users to remain on the site and look at page after page. As with newspapers, an advertiser might pay a premium to guarantee that an ad would appear on a site’s front or “home” page. But on the Web, aside from the home page, no equivalent of a newspaper’s “A” section exists; websites, unlike print newspapers, do not proceed in a linear fashion.
Hence the emphasis on “page views.” Instead of a promise of an ad in a newspaper’s “A” section, sites like Yahoo and Lycos sold advertising based on how many times each page on their site was viewed (a statistic easily tracked online). This “page view” approach has since been adopted by newspapers themselves in their online versions: the ads that readers see—e.g., for Moviefone and the Red Cross in a recent New York Times story about Haiti’s upcoming election—are most often sold at rates based on the number of times they are viewed, rather than on where they appear on a given site.
Page and Brin decided to continue improving Google’s search algorithms, while disdaining the efforts of Yahoo, Lycos, and other portals to maximize the number of pages—and hence ads—that visitors might see. More than any innovation, this decision allowed Google to become the best search engine available. But it also left Google with almost no source of revenue, since users did not see many different pages and the site consequently could not compete in selling ads based on page views.
It wasn’t until Google grew desperate for funding during the dot-com bust of the early 2000s that Page, Brin, and their colleagues began to see that the technical advantage they had gained over other search engines might translate into an economic advantage as well. Aside from page views, one of the few easily measured statistics on the early Web was “click-throughs,” the number of times visitors to a site found an ad displayed enticing enough to click on it, and were then taken to the advertiser’s own website, where the product or service in question might be purchased or used. Most websites, including those of other search engines, found that they could earn more by charging small amounts each time an ad was seen (page views) than by charging a larger amount for the far less frequent occasions when a visitor clicked on an ad (click-throughs) and visited the advertiser’s own website.
Google’s executives realized that ads on search engines reach users at a singularly receptive time: unlike readers browsing through articles on a news website, users of search engines are often looking for something very specific. A user who asks a search engine, for instance, “Where can I find the best car insurance?” would be a more promising potential customer than a visitor to a news website, because by searching for car insurance a user signals that he or she is, at that moment, in the market for car insurance. A car insurance ad programmed to appear next to the results of such a search would allow the advertiser to target its most desirable audience.
This approach, a form of what’s known as a “cost-per-click” advertising system, charges advertisers for each time a user clicks on an ad that is displayed next to related search engine results. To implement it, Google developed a program to link specific ads to millions of different search terms that prospective customers might use (from “car insurance” to “French horn” to “cat grooming in New York”), as well as a program to ensure that the ads sold through this system would be priced fairly (the program uses a simple bidding system, vetted by economists).
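The "simple bidding system" mentioned above belongs to the family of second-price auctions (Google's actual ad auction, a generalized second-price auction with quality scores, is considerably more elaborate). A minimal sketch of the basic mechanism, with entirely hypothetical advertiser names and bid amounts:

```python
# A minimal sketch of a sealed-bid second-price auction, the kind of
# mechanism the cost-per-click system is built on. This is illustrative
# only, not Google's actual auction logic.

def second_price_auction(bids):
    """bids maps each advertiser to its per-click bid in dollars.
    The highest bidder wins the ad slot but pays only the runner-up's
    bid, which makes bidding one's true value the best strategy."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner = ranked[0][0]
    price_paid = ranked[1][1]  # the second-highest bid
    return winner, price_paid

# Hypothetical bids on the search term "car insurance":
bids = {"acme_insurance": 2.50, "budget_autos": 1.75, "cheap_cars": 1.10}
print(second_price_auction(bids))  # prints ('acme_insurance', 1.75)
```

The design choice the economists vetted is visible even at this scale: because the winner pays the runner-up's price rather than its own, advertisers have no incentive to game the system by shading their bids downward.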
You can see how this looks today: a user searching for “pet food,” for example, is greeted not only by a ranked list of sites containing information about pet food, but also by three “sponsored” links at the very top of the page, as well as a column of pet food ads in the right margin. These “sponsored” links and ads in the margin are paid for by online pet food stores and related ventures.
Back in the early 2000s, though, another company, Overture, which was far more focused than Google on making money online, had already developed such a system. Google copied much of the system from Overture in 2001; Overture sued Google in 2002; and Overture was itself bought by Yahoo in 2003. Yahoo settled the lawsuit against Google out of court for $275 million in 2004, and the system, modified over time, still provides the vast majority of Google’s billions of dollars in revenues.
Much of this story has been told before. Ken Auletta, like other reporters, emphasizes the luck of Google’s early success; unlike other reporters, he never forgets that luck as the company’s profits begin to pile up. Had Google’s revenues derived from a hundred diverse sources, one could plausibly credit the company, as some business writers do, with a “management breakthrough.” Auletta instead quotes Steve Ballmer, the CEO of Microsoft, who famously called Google “a one-trick pony” in 2007. “They have one product that makes all their money,” Ballmer explained—referring to the sale of search-based ads—“and it hasn’t changed in five years.”
But while Auletta approaches his subject more judiciously than other reporters, he still tends to withhold judgment. A more critical approach to Google comes from Nicholas Carr, the technology-expert-turned-skeptic who recently expanded his widely discussed 2008 article in The Atlantic, “Is Google Making Us Stupid?,” into a full-length book, The Shallows.
Google, Carr suggests, is the new home of Taylorism, the management philosophy of “perfect efficiency” developed by Frederick Winslow Taylor for factory production lines toward the end of the nineteenth century. It is an unexpectedly easy argument to make. Eric Schmidt, the CEO of Google, has proudly said that the company is “founded around the science of measurement,” and Google executive Marissa Mayer has argued that “because you can measure so precisely [online], you can actually find small differences and mathematically learn which one is right.” This “scientific” emphasis leads Carr to conclude that the Internet, partly because of Google’s influence, has been built to process information efficiently, rather than to encourage deep understanding.
Carr laments the hyperlinks that dominate much of what we read online, arguing that instead of adding context, they turn the Internet into an “interruption system, a machine geared for dividing attention.” He cites studies of how links affect the brain, though the results of these experiments remain ambiguous; more persuasive evidence comes from the many bloggers who, partly as a result of Carr’s writing, have begun to admit to an inability to read entire books or even magazine articles.
For Carr, then, the Web, ostensibly built to help us process information, leads to confusion and distraction among the very people it purports to serve: in their drive to increase the efficiency with which we move through pages (and so the number of ads seen and the profits made), companies present us with more information than we can process, let alone understand.
Carr’s analysis is often suggestive, but he tends to ignore much of how Google approaches the Web. He argues that Google considers information a commodity that should be “mined and processed” and dispensed in highly efficient “snippets” to searchers. But while Google has done everything it can to “mine” more information, the company always tries to make it possible for searchers to view entire texts. Google does, it is true, show only small “snippets” from books when forced by copyright law to limit the text it can provide. But the company has spent millions of dollars in court fighting to allow users to see more of these books.