Sergey Brin and Larry Page; drawing by John Springs

Tweets Alain de Botton, philosopher, author, and now online aphorist:

The logical conclusion of our relationship to computers: expectantly to type “what is the meaning of my life” into Google.

You can do this, of course. Type “what is th” and faster than you can find the e Google is sending choices back at you: what is the cloud? what is the mean? what is the american dream? what is the illuminati? Google is trying to read your mind. Only it’s not your mind. It’s the World Brain. And whatever that is, we know that a twelve-year-old company based in Mountain View, California, is wired into it like no one else.

Google is where we go for answers. People used to go elsewhere or, more likely, stagger along not knowing. Nowadays you can’t have a long dinner-table argument about who won the Oscar for that Neil Simon movie where she plays an actress who doesn’t win an Oscar; at any moment someone will pull out a pocket device and Google it. If you need the art-history meaning of “picturesque,” you could find it in The Book of Answers, compiled two decades ago by the New York Public Library’s reference desk, but you won’t. Part of Google’s mission is to make the books of answers redundant (and the reference librarians, too). “A hamadryad is a wood-nymph, also a poisonous snake in India, and an Abyssinian baboon,” says the narrator of John Banville’s 2009 novel, The Infinities. “It takes a god to know a thing like that.” Not anymore.

The business of finding facts has been an important gear in the workings of human knowledge, and the technology has just been upgraded from rubber band to nuclear reactor. No wonder there’s some confusion about Google’s exact role in that—along with increasing fear about its power and its intentions.

Most of the time Google does not actually have the answers. When people say, “I looked it up on Google,” they are committing a solecism. When they try to erase their embarrassing personal histories “on Google,” they are barking up the wrong tree. It is seldom right to say that anything is true “according to Google.” Google is the oracle of redirection. Go there for “hamadryad,” and it points you to Wikipedia. Or the Free Online Dictionary. Or the Official Hamadryad Web Site (it’s a rock band, too, wouldn’t you know). Google defines its mission as “to organize the world’s information,” not to possess it or accumulate it. Then again, a substantial portion of the world’s printed books have now been copied onto the company’s servers, where they share space with millions of hours of video and detailed multilevel imagery of the entire globe, from satellites and from its squadrons of roving street-level cameras. Not to mention the great and growing trove of information Google possesses regarding the interests and behavior of, approximately, everyone.

When I say Google “possesses” all this information, that’s not the same as owning it. What it means to own information is very much in flux.

In barely a decade Google has made itself a global brand bigger than Coca-Cola or GE; it has created more wealth faster than any company in history; it dominates the information economy. How did that happen? It happened more or less in plain sight. Google has many secrets but the main ingredients of its success have not been secret at all, and the business story has already provided grist for dozens of books. Steven Levy’s new account, In the Plex, is the most authoritative to date and in many ways the most entertaining. Levy has covered personal computing for almost thirty years, for Newsweek and Wired and in six previous books, and has visited Google’s headquarters periodically since 1999, talking with its founders, Larry Page and Sergey Brin, and, as much as has been possible for a journalist, observing the company from the inside. He has been able to record some provocative, if slightly self-conscious, conversations like this one in 2004 about their hopes for Google:

“It will be included in people’s brains,” said Page. “When you think about something and don’t really know much about it, you will automatically get information.”

“That’s true,” said Brin. “Ultimately I view Google as a way to augment your brain with the knowledge of the world. Right now you go into your computer and type a phrase, but you can imagine that it could be easier in the future, that you can have just devices you talk into, or you can have computers that pay attention to what’s going on around them….”

…Page said, “Eventually you’ll have the implant, where if you think about a fact, it will just tell you the answer.”

In 2004, Google was still a private company, five years old, already worth $25 billion, and handling about 85 percent of Internet searches. Its single greatest innovation was the algorithm called PageRank, developed by Page and Brin when they were Stanford graduate students running their research project from a computer in a dorm room. The problem was that most Internet searches produced useless lists of low-quality results. The solution was a simple idea: to harvest the implicit knowledge already embodied in the architecture of the World Wide Web, organically evolving.

The essence of the Web is the linking of individual “pages” on websites, one to another. Every link represents a recommendation—a vote of interest, if not quality. So the algorithm assigns every page a rank, depending on how many other pages link to it. Furthermore, all links are not valued equally. A recommendation is worth more when it comes from a page that has a high rank itself. The math isn’t trivial—PageRank is a probability distribution, and the calculation is recursive, each page’s rank depending on the ranks of pages that depend…and so on. Page and Brin patented PageRank and published the details even before starting the company they called Google.
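
To make the recursion concrete, here is a minimal power-iteration sketch of the PageRank idea in Python. It is an illustration only: the toy link graph, the damping factor of 0.85, and the fixed iteration count are assumptions for the example, not a description of Google's production system.

```python
# A minimal sketch of the PageRank idea via power iteration (illustrative only;
# the toy graph, damping factor, and iteration count are assumptions).
def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}                 # start from a uniform distribution
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:                # each link passes on a share of rank
                    new_rank[target] += share
            else:                                      # a page with no links spreads its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
        rank = new_rank
    return rank                                        # a probability distribution over pages

# A link from a page that itself ranks highly is worth more than one from an obscure page.
toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
print(pagerank(toy_web))
```

On this toy graph, page C ends up with the highest rank because three pages point to it, including the comparatively well-linked A, while D, which nobody links to, sinks to the bottom.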

Most people have already forgotten how dark and unsignposted the Internet once was. A user in 1996, when the Web comprised hundreds of thousands of “sites” with millions of “pages,” did not expect to be able to search for “Olympics” and automatically find the official site of the Atlanta games. That was too hard a problem. And what was a search supposed to produce for a word like “university”? AltaVista, then the leading search engine, offered up a seemingly unordered list of academic institutions, topped by the Oregon Center for Optics.

Levy recounts a conversation between Page and an AltaVista engineer, who explained that the scoring system would rank a page higher if “university” appeared multiple times in the headline. AltaVista seemed untroubled that the Oregon center did not qualify as a major university. A conventional way to rank universities would be to consult experts and assess measures of quality: graduation rates, retention rates, test scores. The Google approach was to trust the Web and its numerous links, for better and for worse.

PageRank is one of those ideas that seem obvious after the fact. But the business of Internet search, young as it was, had fallen into some rigid orthodoxies. The main task of a search engine seemed to be the compiling of an index. People naturally thought of existing technologies for organizing the world’s information, and these were found in encyclopedias and dictionaries. They could see that alphabetical order was about to become less important, but they were slow to appreciate how dynamic and ungraspable their target, the Internet, really was. Even after Page and Brin flipped on the light switch, most companies continued to wear blindfolds.

The Internet had entered its first explosive phase, boom and then bust for many ambitious startups, and one thing everyone knew was that the way to make money was to attract and retain users. The buzzword was “portal”—the user’s point of entry, like Excite, Go.com, and Yahoo—and portals could not make money by rushing customers into the rest of the Internet. “Stickiness,” as Levy says, “was the most desired metric in websites at the time.” Portals did not want their search functions to be too good. That sounds stupid, but then again how did Google intend to make money when it charged users nothing? Its user interface at first was plain, minimalist, and emphatically free of advertising—nothing but a box for the user to type a query, followed by two buttons, one to produce a list of results and one with the famously brash tag “I’m feeling lucky.”

The Google founders, Larry and Sergey, did everything their own way. Even in the unbuttoned culture of Silicon Valley they stood out from the start as originals, “Montessori kids” (per Levy), unconcerned with standards and proprieties, favoring big red gym balls over office chairs, deprecating organization charts and formal titles, showing up for business meetings in roller-blade gear. It is clear from all these books that they believed their own hype; they believed with moral fervor in the primacy and power of information. (Sergey and Larry did not invent the company’s famous motto—“Don’t be evil”—but they embraced it, and now they may as well own it.)

As they saw it from the first, their mission encompassed not just the Internet but all the world’s books and images, too. When Google created a free e-mail service—Gmail—its competitors were Microsoft, which offered users two megabytes of storage of their past and current e-mail, and Yahoo, which offered four megabytes. Google could have trumped that with six or eight; instead it provided 1,000—a gigabyte. It doubled that a year later and promised “to keep giving people more space forever.”

They have been relentless in driving computer science forward. Google Translate has achieved more in machine translation than the rest of the world’s artificial intelligence experts combined. Google’s new mind-reading type-ahead feature, Google Instant, has “to date” (boasts the 2010 annual report) “saved our users over 100 billion keystrokes and counting.” (If you are seeking information about the Gobi Desert, for example, you receive results well before you type the word “desert.”)

Somewhere along the line they gave people the impression that they didn’t care for advertising—that they scarcely had a business plan at all. In fact it’s clear that advertising was fundamental to their plan all along. They did scorn conventional marketing, however; their attitude seemed to be that Google would market itself. As, indeed, it did. Google was a verb and a meme. “The media seized on Google as a marker of a new form of behavior,” writes Levy.

Endless articles rhapsodized about how people would Google their blind dates to get an advance dossier or how they would type in ingredients on hand to Google a recipe or use a telephone number to Google a reverse lookup. Columnists shared their self-deprecating tales of Googling themselves…. A contestant on the TV show Who Wants to Be a Millionaire? arranged with his brother to tap Google during the Phone-a-Friend lifeline…. And a fifty-two-year-old man suffering chest pains Googled “heart attack symptoms” and confirmed that he was suffering a coronary thrombosis.

Google’s first marketing hire lasted a matter of months in 1999; his experience included Miller Beer and Tropicana and his proposal involved focus groups and television commercials. When Doug Edwards interviewed for a job as marketing manager later that year, he understood that the key word was “viral.” Edwards lasted quite a bit longer, and now he’s the first Google insider to have published his memoir of the experience. He was, as he says proudly in his subtitle to I’m Feeling Lucky, Google employee number 59. He provides two other indicators of how early that was: so early that he nabbed the e-mail address doug@google.com; and so early that Google’s entire server hardware lived in a rented “cage.”

Less than six hundred square feet, it felt like a shotgun shack blighting a neighborhood of gated mansions. Every square inch was crammed with racks bristling with stripped-down CPUs [central processing units]. There were twenty-one racks and more than fifteen hundred machines, each sprouting cables like Play-Doh pushed through a spaghetti press. Where other cages were right-angled and inorganic, Google’s swarmed with life, a giant termite mound dense with frenetic activity and intersecting curves.

Levy got a glimpse of Google’s data storage a bit later and remarked, “If you could imagine a male college freshman made of gigabytes, this would be his dorm.”

Not anymore. Google owns and operates a constellation of giant server farms spread around the globe—huge windowless structures, resembling aircraft hangars or power plants, some with cooling towers. The server farms stockpile the exabytes of information and operate an array of staggeringly clever technology. This is Google’s share of the cloud (that notional place where our data live) and it is the lion’s share.

How thoroughly and how radically Google has already transformed the information economy has not been well understood. The merchandise of the information economy is not information; it is attention. These commodities have an inverse relationship. When information is cheap, attention becomes expensive. Attention is what we, the users, give to Google, and our attention is what Google sells—concentrated, focused, and crystallized.

Google’s business is not search but advertising. More than 96 percent of its $29 billion in revenue last year came directly from advertising, and most of the rest came from advertising-related services. Google makes more from advertising than all the nation’s newspapers combined. It’s worth understanding precisely how this works. Levy chronicles the development of the advertising engine: a “fantastic achievement in building a money machine from the virtual smoke and mirrors of the Internet.” In The Googlization of Everything (and Why We Should Worry), a book that can be read as a sober and admonitory companion, Siva Vaidhyanathan, a media scholar at the University of Virginia, puts it this way: “We are not Google’s customers: we are its product. We—our fancies, fetishes, predilections, and preferences—are what Google sells to advertisers.”

The evolution of this unparalleled money machine piled one brilliant innovation atop another, in fast sequence:

  1. Early in 2000, Google sold “premium sponsored links”: simple text ads assigned to particular search terms. A purveyor of golf balls could have its ad shown to everyone who searched for “golf” or, even better, “golf balls.” Other search engines were already doing this. Following tradition, they charged according to how many people saw each ad. Salespeople sold the ads to big accounts, one by one.
  2. Late that year, engineers devised an automated self-service system, dubbed AdWords. The opening pitch went, “Have a credit card and 5 minutes? Get your ad on Google today,” and suddenly thousands of small businesses were buying their first Internet ads.
  3. From a short-lived startup called GoTo (by 2003 absorbed into Yahoo) came two new ideas. One was to charge per click rather than per view. People who click on an ad for golf balls are more likely to buy them than those who simply see an ad on Google’s website. The other idea was to let advertisers bid for keywords—such as “golf ball”—against one another in fast online auctions. Pay-per-click auctions opened a cash spigot. A click meant a successful ad, and some advertisers were willing to pay more for that than a human salesperson could have known. Plaintiffs’ lawyers seeking clients would bid as much as fifty dollars for a single click on the keyword “mesothelioma”—the rare form of cancer caused by asbestos.
  4. Google—monitoring its users’ behavior so systematically—had instant knowledge of which ads were succeeding and which were not. It could view “click-through rates” as a measure of ad quality. And in determining the winners of auctions, it began to consider not just the money offered but the appeal of the ad: an effective ad, getting lots of clicks, would get better placement. (A rough numerical illustration of this ranking rule follows the list.)
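
Here is that illustration: a minimal sketch in which ads are ordered by bid multiplied by estimated click-through rate. The advertisers, bids, and rates are invented for the example, and the simple bid-times-CTR score is a deliberate simplification, not Google's actual auction formula.

```python
# Illustrative only: rank ads by bid * estimated click-through rate (CTR).
# The advertisers, bids, and CTRs below are invented numbers, and this scoring
# rule is a simplification; it is not Google's actual auction.
ads = [
    {"advertiser": "DiscountGolfBalls", "bid": 2.00, "ctr": 0.010},
    {"advertiser": "ProShopGolf",       "bid": 1.25, "ctr": 0.030},
    {"advertiser": "GenericSportsAds",  "bid": 3.00, "ctr": 0.002},
]

for ad in ads:
    ad["score"] = ad["bid"] * ad["ctr"]   # expected revenue per impression

# The highest bidder (GenericSportsAds) loses the top slot to a cheaper but far
# more clickable ad: the appeal of the ad counts alongside the money offered.
for ad in sorted(ads, key=lambda a: a["score"], reverse=True):
    print(f'{ad["advertiser"]}: bid ${ad["bid"]:.2f}, CTR {ad["ctr"]:.1%}, score {ad["score"]:.4f}')
```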

Now Google had a system of profitable cycles in place, positive feedback pushing advertisers to make more effective ads and giving them data to help them do it and giving users more satisfaction in clicking on ads, while punishing noise and spam. “The system enforced Google’s insistence that advertising shouldn’t be a transaction between publisher and advertiser but a three-way relationship that also included the user,” writes Levy. Hardly an equal relationship, however. Vaidhyanathan sees it as exploitative: “The Googlization of everything entails the harvesting, copying, aggregating, and ranking of information about and contributions made by each of us.”

By 2003, AdWords Select was serving hundreds of thousands of advertisers and making so much money that Google was deliberately hiding its success from the press and from competitors. But it was only a launching pad for the next brilliancy.

  5. So far, ads were appearing on Google’s search pages, discreet in size, clearly marked, at the top or down the right side. Now the company expanded its platform outward. The aim was to develop a form of artificial intelligence that could analyze chunks of text—websites, blogs, e-mail, books—and match them with keywords. With two billion Web pages already in its index and with its close tracking of user behavior, Google had exactly the information needed to tackle this problem. Given a website (or a blog or an e-mail), it could predict which advertisements would be effective.

This was, in the jargon, “content-targeted advertising.” Google called its program AdSense. For anyone hoping to—in the jargon—“monetize” their content, it was the Holy Grail. The biggest digital publishers, such as The New York Times, quickly signed up for AdSense, letting Google handle growing portions of their advertising business. And so did the smallest publishers, by the millions—so grew the “long tail” of possible advertisers, down to individual bloggers. They signed up because the ads were so powerfully, measurably productive. “Google conquered the advertising world with nothing more than applied mathematics,” wrote Chris Anderson, the editor of Wired. “It didn’t pretend to know anything about the culture and conventions of advertising—it just assumed that better data, with better analytical tools, would win the day. And Google was right.” Newspapers and other traditional media have complained from time to time about the arrogation of their content, but it is by absorbing the world’s advertising that Google has become their most destructive competitor.
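
The matching itself can be caricatured in a few lines. The sketch below scores each ad by how often its keywords appear in a page's text; the ads, keyword lists, and page text are invented, and real content targeting involves far more sophisticated language analysis than this word-overlap toy.

```python
# Toy illustration of content-targeted ad matching: score each ad by how often
# its keywords appear in the page text. All data here is invented, and the
# word-overlap scoring is a caricature of the language analysis described above.
import re
from collections import Counter

ads = {
    "golf-balls-ad":   ["golf", "balls", "driver", "course"],
    "mesothelioma-ad": ["mesothelioma", "asbestos", "lawyer"],
    "travel-ad":       ["flights", "hotel", "vacation"],
}

page_text = """Our club reviews the season's new golf balls and the best
drivers for a windy course."""

# Count lowercase word tokens appearing on the page.
words = Counter(re.findall(r"[a-z]+", page_text.lower()))

def score(keywords):
    """Sum the occurrences of each ad keyword in the page."""
    return sum(words[k] for k in keywords)

best_ad = max(ads, key=lambda name: score(ads[name]))
print(best_ad)   # the golf ad wins on this page
```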

Like all forms of artificial intelligence, targeted advertising has hits and misses. Levy cites a classic miss: a gory New York Post story about a body dismembered and stuffed in a garbage bag, accompanied on the Post website by a Google ad for plastic bags. Nonetheless, anyone could now add a few lines of code to their website, automatically display Google ads, and start cashing monthly checks, however small. Vast tracts of the Web that had been free of advertising now became Google partners. Today Google’s ad canvas is not just the search page but the entire Web, and beyond that, great volumes of e-mail and, potentially, all the world’s books.

Search and advertising thus become the matched edges of a sharp sword. The perfect search engine, as Sergey and Larry imagine it, reads your mind and produces the answer you want. The perfect advertising engine does the same: it shows you the ads you want. Anything else wastes your attention, the advertiser’s money, and the world’s bandwidth. The dream is virtuous advertising, matching up buyers and sellers to the benefit of all. But virtuous advertising in this sense is a contradiction in terms. The advertiser is paying for a slice of our limited attention; our minds would otherwise be elsewhere. If our interests and the advertisers’ were perfectly aligned, they would not need to pay. There is no information utopia. Google users are parties to a complex transaction, and if there is one lesson to be drawn from all these books it is that we are not always witting parties.

Seeing ads next to your e-mail (if you use Google’s free e-mail service) can provide reminders, sometimes startling, of how much the company knows about your inner self. Even without your e-mail, your search history reveals plenty—as Levy says, “your health problems, your commercial interests, your hobbies, and your dreams.” Your response to advertising reveals even more, and with its advertising programs Google began tracking the behavior of individual users from one Internet site to the next. They observe our every click (where they can) and they measure in milliseconds how long it takes us to decide. If they didn’t, their results wouldn’t be so uncannily effective. They have no rival in the depth and breadth of their data mining. They make statistical models for everything they know, connecting the small scales with the large, from queries and clicks to trends in fashion and season, climate and disease.

It’s for your own good—that is Google’s cherished belief. If we want the best possible search results, and if we want advertisements suited to our needs and desires, we must let them into our souls.

The Google corporate motto is “Don’t be evil.” Simple as that is, it requires parsing.

It was first put forward in 2001 by an engineer, Paul Buchheit, at a jawboning session about corporate values. “People laughed,” he recalled. “But I said, ‘No, really.’” (At that time the booming tech world had its own elephant in the room, and many Googlers understood “Don’t be evil” explicitly to mean “Don’t be like Microsoft”; i.e., don’t be a ruthless, take-no-prisoners monopolist.)

Often it is misquoted in stronger form: “Do no evil.” That would be a harder standard to meet.

Now they’re mocked for it, but the Googlers were surely sincere. They believed a corporation should behave ethically, like a person. They brainstormed about their values. Taken at face value, “Don’t be evil” has a finer ring than some of the other contenders: “Google will strive to honor all its commitments” or “Play hard but keep the puck down.”

“Don’t be evil” does not have to mean transparency. None of these books can tell you how many search queries Google fields, how much electricity it consumes, how much storage capacity it owns, how many streets it has photographed, how much e-mail it stores; nor can you Google the answers, because Google values its privacy.

It does not have to mean “Obey all the laws.” When Google embarked on its program to digitize copyrighted books and copy them onto its servers, it did so in stealth, deceiving publishers with whom it was developing business relationships. Google knew that the copying bordered on illegal. It considered its intentions honorable and the law outmoded. “I think we knew that there would be a lot of interesting issues,” Levy quotes Page as saying, “and the way the laws are structured isn’t really sensible.”

Who, then, judges what is evil? “Evil is what Sergey says is evil,” explained Eric Schmidt, the chief executive officer, in 2002.

As for Sergey: “I feel like I shouldn’t impose my beliefs on the world. It’s a bad technology practice.” But the founders seem sure enough of their own righteousness. (“‘Bastards!’ Larry would exclaim when a blogger raised concerns about user privacy,” recalls Edwards. “‘Bastards!’ they would say about the press, the politicians, or the befuddled users who couldn’t grasp the obvious superiority of the technology behind Google’s products.”)

Google did some evil in China. It collaborated in censorship. Beginning in 2004, it arranged to tweak and twist its algorithms and filter its results so that the native-language Google.cn would omit results unwelcome to the government. In the most notorious example, “Tiananmen Square” would produce sightseeing guides but not history lessons. Google figured out what to censor by checking China’s approved search engine, Baidu, and by accepting the government’s supplementary guidance.

Yet it is also true that Google pushed back against the government as much as any other American company. When results were blocked, Google insisted on alerting users with a notice at the bottom of the search page. On balance Google clearly believed (and I think it was right, despite the obvious self-interest) that its presence benefited the people of China by increasing information flow and making clear the violation of transparency. The adventure took a sharp turn in January 2010, after organized hackers, perhaps with government involvement, breached Google’s servers and got access to the e-mail accounts of human rights activists. The company shut down Google.cn and now serves China only from Hong Kong—with results censored not by Google but by the government’s own ongoing filters.

So is Google evil? The question is out there now; it nags, even as we blithely rely on the company for answers—which now also means maps, translations, street views, calendars, video, financial data, and pointers to goods and services. The strong version of the case against Google is laid out starkly in Search & Destroy, by a self-described “Google critic” named Scott Cleland. He wields a blunt club; the book might as well have been titled Google: Threat or Menace?! “There is evidence that Google is not all puppy dogs and rainbows,” he writes.

Google’s corporate mascot is a replica of a Tyrannosaurus Rex skeleton on display outside the corporate headquarters. With its powerful jaws and teeth, T-Rex was a terrifying predator. And check out the B-52 bomber chair in Google Chairman Eric Schmidt’s office. The B-52 was a long range bomber designed to deliver nuclear weapons.

Levy is more measured: “Google professed a sense of moral purity…but it seemed to have a blind spot regarding the consequences of its own technology on privacy and property rights.” On all the evidence Google’s founders began with an unusually ethical vision for their unusual company. They believe in information—“universally accessible”—as a force for good in and of itself. They have created and led teams of technologists responsible for a golden decade of genuine innovation. They are visionaries in a time when that word is too cheaply used. Now they are perhaps disinclined to submit to other people’s ethical standards, but that may be just a matter of personality. It is well to remember that the modern corporation is an amoral creature by definition, obliged to its shareholder financiers, not to the public interest.

The Federal Trade Commission issued subpoenas in June in an antitrust investigation into Google’s search and advertising practices; the European Commission began a similar investigation last year. Governments are responding in part to organized complaints by Google’s business competitors, including Microsoft, who charge, among other things, that the company manipulates its search results to favor its friends and punish its enemies. The company has always denied that. Certainly regulators are worried about its general “dominance”—Google seems to be everywhere and seems to know everything and offends against cherished notions of privacy.

The rise of social networking upends the equation again. Users of Facebook choose to reveal—even to flaunt—aspects of their private lives, to at least some part of the public world. Which aspects, and which part? On Facebook the user options are notoriously obscure and subject to change, but most users share with “friends” (the word having been captured and drained bloodless). On Twitter, every remark can be seen by the whole world, except for the so-called “direct message,” which former Representative Anthony Weiner tried and failed to employ. Also, the Library of Congress is archiving all tweets, presumably for eternity, a fact that should enter the awareness of teenagers, if not members of Congress.

Now Google is rolling out its second attempt at a social-networking platform, called Google+. The first attempt, eighteen months ago, was Google Buzz; it was an unusual stumble for the company. By default, it revealed lists of contacts with whom users had been chatting and e-mailing. Privacy advocates raised an alarm and the FTC began an investigation, quickly reaching a settlement in which Google agreed to regular privacy audits for the next twenty years. Google+ gives users finer control over what gets shared with whom. Still, one way or another, everything is shared with the company. All the social networks have access to our information and mean to use it. Are they our friends?

This much is clear: We need to decide what we want from Google. If only we can make up our collective minds. Then we still might not get it.

The company always says users can “opt out” of many of its forms of data collection, which is true, up to a point, for savvy computer users; and the company speaks of privacy in terms of “trade-offs,” to which Vaidhyanathan objects:

Privacy is not something that can be counted, divided, or “traded.” It is not a substance or collection of data points. It’s just a word that we clumsily use to stand in for a wide array of values and practices that influence how we manage our reputations in various contexts. There is no formula for assessing it: I can’t give Google three of my privacy points in exchange for 10 percent better service.

This seems right to me, if we add that privacy involves not just managing our reputation but protecting the inner life we may not want to share. In any case, we continue to make precisely the kinds of trades that Vaidhyanathan says are impossible. Do we want to be addressed as individuals or as neurons in the world brain? We get better search results and we see more appropriate advertising when we let Google know who we are. And we save a few keystrokes.