It does not have to mean “Obey all the laws.” When Google embarked on its program to digitize copyrighted books and copy them onto its servers, it did so in stealth, deceiving publishers with whom it was developing business relationships. Google knew that the copying bordered on illegal. It considered its intentions honorable and the law outmoded. “I think we knew that there would be a lot of interesting issues,” Levy quotes Page as saying, “and the way the laws are structured isn’t really sensible.”
Who, then, judges what is evil? “Evil is what Sergey says is evil,” explained Eric Schmidt, the chief executive officer, in 2002.
As for Sergey: “I feel like I shouldn’t impose my beliefs on the world. It’s a bad technology practice.” But the founders seem sure enough of their own righteousness. (“‘Bastards!’ Larry would exclaim when a blogger raised concerns about user privacy,” recalls Edwards. “‘Bastards!’ they would say about the press, the politicians, or the befuddled users who couldn’t grasp the obvious superiority of the technology behind Google’s products.”)
Google did some evil in China. It collaborated in censorship. Beginning in 2004, it arranged to tweak and twist its algorithms and filter its results so that the native-language Google.cn would omit results unwelcome to the government. In the most notorious example, “Tiananmen Square” would produce sightseeing guides but not history lessons. Google figured out what to censor by checking China’s approved search engine, Baidu, and by accepting the government’s supplementary guidance.
Yet it is also true that Google pushed back against the government as much as any other American company. When results were blocked, Google insisted on alerting users with a notice at the bottom of the search page. On balance Google clearly believed (and I think it was right, despite the obvious self-interest) that its presence benefited the people of China by increasing information flow and by making the government's censorship visible. The adventure took a sharp turn in January 2010, after organized hackers, perhaps with government involvement, breached Google’s servers and got access to the e-mail accounts of human rights activists. The company shut down Google.cn and now serves China only from Hong Kong—with results censored not by Google but by the government’s own ongoing filters.
So is Google evil? The question is out there now; it nags, even as we blithely rely on the company for answers—which now also means maps, translations, street views, calendars, video, financial data, and pointers to goods and services. The strong version of the case against Google is laid out starkly in Search & Destroy, by a self-described “Google critic” named Scott Cleland. He wields a blunt club; the book might as well have been titled Google: Threat or Menace?! “There is evidence that Google is not all puppy dogs and rainbows,” he writes.
Google’s corporate mascot is a replica of a Tyrannosaurus Rex skeleton on display outside the corporate headquarters. With its powerful jaws and teeth, T-Rex was a terrifying predator. And check out the B-52 bomber chair in Google Chairman Eric Schmidt’s office. The B-52 was a long range bomber designed to deliver nuclear weapons.
Levy is more measured: “Google professed a sense of moral purity…but it seemed to have a blind spot regarding the consequences of its own technology on privacy and property rights.” On all the evidence Google’s founders began with an unusually ethical vision for their unusual company. They believe in information—“universally accessible”—as a force for good in and of itself. They have created and led teams of technologists responsible for a golden decade of genuine innovation. They are visionaries in a time when that word is too cheaply used. Now they are perhaps disinclined to submit to other people’s ethical standards, but that may be just a matter of personality. It is well to remember that the modern corporation is an amoral creature by definition, obliged to its shareholder financiers, not to the public interest.
The Federal Trade Commission issued subpoenas in June in an antitrust investigation into Google’s search and advertising practices; the European Commission began a similar investigation last year. Governments are responding in part to organized complaints by Google’s business competitors, including Microsoft, who charge, among other things, that the company manipulates its search results to favor its friends and punish its enemies. The company has always denied that. Certainly regulators are worried about its general “dominance”—Google seems to be everywhere and seems to know everything and offends against cherished notions of privacy.
The rise of social networking upends the equation again. Users of Facebook choose to reveal—even to flaunt—aspects of their private lives, to at least some part of the public world. Which aspects, and which part? On Facebook the user options are notoriously obscure and subject to change, but most users share with “friends” (the word having been captured and drained bloodless). On Twitter, every remark can be seen by the whole world, except for the so-called “direct message,” which former Representative Anthony Weiner tried and failed to employ. Also, the Library of Congress is archiving all tweets, presumably for eternity, a fact that should enter the awareness of teenagers, if not members of Congress.
Now Google is rolling out its second attempt at a social-networking platform, called Google+. The first attempt, eighteen months ago, was Google Buzz; it was an unusual stumble for the company. By default, it revealed lists of contacts with whom users had been chatting and e-mailing. Privacy advocates raised an alarm and the FTC began an investigation, quickly reaching a settlement in which Google agreed to regular privacy audits for the next twenty years. Google+ gives users finer control over what gets shared with whom. Still, one way or another, everything is shared with the company. All the social networks have access to our information and mean to use it. Are they our friends?
This much is clear: We need to decide what we want from Google. If only we can make up our collective minds. Then we still might not get it.
The company always says users can “opt out” of many of its forms of data collection, which is true, up to a point, for savvy computer users; and the company speaks of privacy in terms of “trade-offs,” to which Vaidhyanathan objects:
Privacy is not something that can be counted, divided, or “traded.” It is not a substance or collection of data points. It’s just a word that we clumsily use to stand in for a wide array of values and practices that influence how we manage our reputations in various contexts. There is no formula for assessing it: I can’t give Google three of my privacy points in exchange for 10 percent better service.
This seems right to me, if we add that privacy involves not just managing our reputation but protecting the inner life we may not want to share. In any case, we continue to make precisely the kinds of trades that Vaidhyanathan says are impossible. Do we want to be addressed as individuals or as neurons in the world brain? We get better search results and we see more appropriate advertising when we let Google know who we are. And we save a few keystrokes.