
Edward Gorey Charitable Trust

Drawing by Edward Gorey

Early this April, when researchers at Washington University in St. Louis reported that a woman with a host of electrodes temporarily positioned over the speech center of her brain was able to move a computer cursor on a screen simply by thinking but not pronouncing certain sounds, it seemed like the Singularity—the long-standing science fiction dream of melding man and machine to create a better species—might have arrived. At Brown University around the same time, scientists successfully tested a different kind of brain–computer interface (BCI) called BrainGate, which allowed a paralyzed woman to move a cursor, again just by thinking. Meanwhile, at USC, a team of biomedical engineers announced that they had successfully used carbon nanotubes to build a functioning synapse—the junction at which signals pass from one nerve cell to another—which marked the first step in their long march to construct a synthetic brain. On the same campus, Dr. Theodore Berger, who has been on his own path to make a neural prosthetic for more than three decades, has begun to implant a device into rats that bypasses a damaged hippocampus in the brain and works in its place.

The hippocampus is crucial to memory formation, and Berger’s invention holds the promise of overcoming problems related to both normal memory loss that comes from aging and pathological memory loss associated with diseases like Alzheimer’s. Similarly, the work being done at Brown and Washington University suggests the possibility of restoring mobility to those who are paralyzed and giving voice to those who have been robbed by illness or injury of the ability to communicate. If this is the Singularity, it looks not just benign but beneficent.

Michael Chorost is a man who has benefited from a brain–computer interface, though the kind of BCI implanted in his head after he went deaf in 2001, a cochlear implant, was not inserted directly into his brain, but into each of his inner ears. The result, after a lifetime of first being hard of hearing and then shut in complete auditory solitude, as he recounted in his memoir, Rebuilt: How Becoming Part Computer Made Me More Human (2005), was dramatic and life-changing. As his new, oddly jejune book, World Wide Mind: The Coming Integration of Humanity, Machines, and the Internet, makes clear, he is now a cheerleader for the rest of us getting kitted out with our own, truly personal, in-brain computers. In Chorost’s ideal world, which he lays out with the unequivocal zeal of a convert, we will all be connected directly to the Internet via a neural implant, so that the Internet “would become seamlessly part of us, as natural and simple to use as our own hands.”

The debate between repair and enhancement is long-standing in medicine (and sports, and education, and genetics), though it gets louder and more complicated as technology advances. Typically, repair, like what those Brown, USC, and Washington University research teams are aiming to do for people who have suffered stroke, spinal cord and other injuries, neurodegeneration, dementia, or mental illness, is upheld as something good and necessary and worthy. Enhancement, on the other hand—as with performance drugs and stem cell line manipulation—is either reviled as a threat to our integrity and meaning as humans or conflated with repair until the distinction becomes meaningless.1

Chorost bounces over this debate altogether. While the computer in his head was put there to fix a deficit, the fact that it is there at all is what seems to convince him that the rest of us should become cyborgs. His assumption—it would be too generous to call it an argument—is that because it worked for him, it will work for us. “My two implants make me irreversibly computational, a living example of the integration of humans and computers,” he writes. “So for me the thought of implanting something like a BlackBerry in my head is not so strange. It would not be so strange for a lot of people, I think.”

More than a quarter-century ago, a science writer named David Ritchie published a book that I’ve kept on my bookshelf as a reminder of what the post-1984 world was supposed to bring. Called The Binary Brain, it extolled “the synthesis of human and artificial intelligence” via something he called a “biochip.” “The possibilities are marvelous to contemplate,” he wrote.

You could plug into a computer’s memory banks almost as easily as you put on your shoes. Suddenly, your mind would be full of all the information stored in the computer. You could instantly make yourself an expert in anything from Spanish literature to particle physics…. With biochips to hold the data, all the information in the MIT and Harvard libraries might be stuffed into a volume no greater than that of a sandwich. All of Shakespeare in a BB-sized module…. You may see devices like this before this century ends.

“Remember,” he says gravely, “we are talking here about a technology that is just around the corner, if not here already. Biochips would lead to the development of all manner of man-machine combinations….”


Twenty-six years later, in the second decade of the new millennium, here is Chorost saying almost the same thing, and for the same reason: our brains are too limited to sufficiently apprehend the world.2 “Some human attributes like IQ appear to have risen in the twentieth century,” he writes, “but the rate of increase is much slower than technology’s. There is no Moore’s Law for human beings.” (Moore’s Law is the much-invoked thesis, now elevated to metaphor, that the number of components that can be placed on an integrated circuit doubles roughly every two years.) Leaving aside the flawed equivalences—that information is knowledge and facts are intelligence—Chorost’s “transmog” dream is rooted in a naive, and common, misperception of the Internet search engine, particularly Google’s, which is how most Internet users navigate through the fourteen billion pages of the World Wide Web.

Most of us, I think it’s safe to say, do not give much thought to the algorithm that produces the results of a Google search. Ask a question, get an answer—it’s a straightforward transaction. It seems not much different from consulting an encyclopedia, or a library card catalog, or even an index in a book. Books, those other repositories of facts, information, and ideas, are the template by which we understand the Web, which is like a random, messy, ever-expanding volume of every big and little thing. A search is our way into and through the mess, and when it’s made by using Google, it’s relying on the Google algorithm, a patented and closely guarded piece of intellectual property that the company calls PageRank, composed of “500 million variables and 2 billion terms.”

Those large numbers are comforting. They suggest an impermeable defense against bias, a scientific objectivity that allows the right response to the query to bubble up from the stew of so much stuff. To an extent it’s a self-perpetuating system, since it uses popularity (the number of links) as a proxy for importance, so that the more a particular page is linked to, the higher its PageRank, and the more likely it is to appear near the top of the search results. (This is why companies have not necessarily minded bad reviews of their products.) Chorost likens this to Hebbian learning—the notion that neurons that fire together, wire together, since

a highly ranked page will garner more page views, thus strengthening its ranking. [In this way] pages that link together “think” together. If many people visit a page over and over again, its PageRank will become so high that it effectively becomes stored in the collective human/electronic long-term memory.

Even if this turns out to be true, the process is anything but unbiased.
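
To make the mechanism concrete, here is a toy version of link-based ranking in Python: a sketch that assumes an invented four-page web and a simple iterative calculation, not Google’s actual algorithm with its “500 million variables and 2 billion terms.”

```python
# Toy PageRank over an invented four-page link graph (illustration only):
# a page's score is fed by the pages that link to it, iterated until stable.

links = {
    "home.html":    ["news.html", "shop.html"],
    "news.html":    ["home.html"],
    "shop.html":    ["home.html", "news.html"],
    "obscure.html": ["home.html"],   # links out, but nothing links to it
}

def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            share = rank[page] / len(outgoing)   # a page splits its vote among its links
            for target in outgoing:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda item: -item[1]):
    print(f"{page:14s} {score:.3f}")
```

Even at this miniature scale the self-perpetuating quality is visible: the page everything links to floats to the top, the page nothing links to never climbs, and popularity quietly stands in for importance.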

A Google search—which Chorost would have us doing in our own technologically modified heads—“curates” the Internet. The algorithm is, in essence, an editor, pulling up what it deems important, based on someone else’s understanding of what is important. This has spawned a whole industry of search engine optimization (SEO) consultants who game the system by reconfiguring a website’s code, content, and keywords to move it up in the rankings. Companies have also been known to pay for links in order to push themselves higher up in the rankings, a practice that violates Google’s guidelines and that the company periodically penalizes. Even so, results rise to the top of a search because an invisible hand is shepherding them there.

It’s not just the large number of search variables, or the intervention of marketers, that shapes the information we’re shown by bringing certain pages to our attention while others fall far enough down in the rankings to be kept out of view. As Eli Pariser documents in his chilling book The Filter Bubble: What the Internet Is Hiding from You, since December 2009, Google has aimed to contour every search to fit the profile of the person making the query. (This contouring applies to all users of Google, though it takes effect only after the user has performed several searches, so that the results can be tailored to the user’s tastes.)

The search process, in other words, has become “personalized,” which is to say that instead of being universal, it is idiosyncratic and oddly peremptory. “Most of us assume that when we google a term, we all see the same results—the ones that the company’s famous PageRank algorithm suggests are the most authoritative based on other pages’ links,” Pariser observes. With personalized search, “now you get the result that Google’s algorithm suggests is best for you in particular—and someone else may see something entirely different. In other words, there is no standard Google anymore.” It’s as if we looked up the same topic in an encyclopedia and each found different entries—but of course we would not assume they were different since we’d be consulting what we thought to be a standard reference.
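
What “there is no standard Google anymore” means in practice can be suggested with a purely hypothetical sketch: the same list of results, re-scored against invented per-user profile weights. The signals, weights, and result titles below are made up for illustration and are not drawn from Google’s system.

```python
# Hypothetical sketch of personalized re-ranking: one result list, re-scored
# against invented per-user profile weights. None of this is Google's code.

base_results = [
    {"title": "IPCC summary of the climate evidence", "topics": {"science"}},
    {"title": "Op-ed: the case against climate regulation", "topics": {"politics", "industry"}},
    {"title": "How drilling supports local economies", "topics": {"industry"}},
]

profiles = {
    "environmental_activist": {"science": 2.0, "politics": 0.5, "industry": 0.2},
    "oil_company_executive":  {"science": 0.5, "politics": 1.0, "industry": 2.0},
}

def personalize(results, profile):
    """Order the same results differently depending on the user's profile."""
    def score(result):
        return sum(profile.get(topic, 0.0) for topic in result["topics"])
    return sorted(results, key=score, reverse=True)

for user, profile in profiles.items():
    top = personalize(base_results, profile)[0]["title"]
    print(f"{user}: {top}")
```

Two people type the same query; the corpus is identical, the ordering is not, and neither sees what the other sees.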


Among the many insidious consequences of this individualization is that by tailoring the information you receive to the algorithm’s perception of who you are, a perception that it constructs out of fifty-seven variables, Google directs you to material that is most likely to reinforce your own worldview, ideology, and assumptions. Pariser suggests, for example, that a search for proof about climate change will turn up different results for an environmental activist than it would for an oil company executive and, one assumes, a different result for a person whom the algorithm understands to be a Democrat than for one it supposes to be a Republican. (One need not declare a party affiliation per se—the algorithm will prise this out.) In this way, the Internet, which isn’t the press, but often functions like the press by disseminating news and information, begins to cut us off from dissenting opinion and conflicting points of view, all the while seeming to be neutral and objective and unencumbered by the kind of bias inherent in, and embraced by, say, The Weekly Standard or The Nation.

Edward Gorey Charitable Trust

Why this matters is captured in a study in the spring issue of Sociological Quarterly, which echoes Pariser’s concern that when ideology drives the dissemination of information, knowledge is compromised. The study, which examined attitudes toward global warming among Republicans and Democrats in the years between 2001 and 2010, found that in those nine years, as the scientific consensus on climate change coalesced and became nearly universal, the percentage of Republicans who said that the planet was beginning to warm dropped precipitously, from 49 percent to 29 percent. For Democrats, the percentage went up, from 60 percent to 70 percent. It was as if the groups were getting different messages about the science, and most likely they were. The consequence, as the study’s authors point out, was to stymie any real debate on public policy. This is Pariser’s point exactly, and his concern: that by having our own ideas bounce back at us, we inadvertently indoctrinate ourselves. “Democracy requires citizens to see things from one another’s point of view, but instead we’re more and more enclosed in our own bubbles,” he writes. “Democracy requires a reliance on shared facts; instead we’re being offered parallel but separate universes.”

It’s not difficult to see where this could lead—how easily anything with an agenda (a lobbying group, a political party, a corporation, a government) could flood the echo chamber with information central to its cause. (This, in fact, is what has happened, on the right, with climate change.) Who would know? Certainly not Michael Chorost, whose blind allegiance to Google—which he believes is the central part of the “nascent forebrain, hippocampus, and long-term declarative memory store” of the coming World Wide Mind—is matched by his stunning political naiveté. A government “that used the World Wide Mind for overt control would have to be more ominously totalitarian than any government in existence today (except perhaps North Korea),” he writes. “The push-pull dynamic of evolution tends to weed out totalitarian societies because they are, in the long run, inefficient and wasteful.” Contrast this to the words of the man who invented the World Wide Web, Sir Timothy Berners-Lee, writing not long ago in Scientific American:

The Web as we know it is being threatened…. Some of its most successful inhabitants have begun to chip away at its principles…. Governments—totalitarian and democratic alike—are monitoring people’s online habits, endangering important human rights.

One of the most significant changes in the Internet since the release in 1993 of the first graphical browser, Mosaic, which was built on the basis of Berners-Lee’s work, has been the quest to monetize it. In its inaugural days, the Web was a strange, eclectic collection of personal homepages, a kind of digital wall art that bypassed traditional gatekeepers, did not rely on mainstream media companies or corporate cash, and was not driven by commercial interests. The computer scientist and musician Jaron Lanier was there at the creation, and in his fierce, coruscating manifesto, You Are Not a Gadget,3 remembers it like this:

The rise of the web was a rare instance when we learned new, positive information about human potential. Who would have guessed (at least at first) that millions of people would put so much effort into a project without the presence of advertising, commercial motive, threat of punishment, charismatic figures, identity politics, exploitation of the fear of death, or any of the other classic motivators of mankind. In vast numbers, people did something cooperatively, solely because it was a good idea, and it was beautiful.

But then commerce moved in, almost by accident, when Larry Page and Sergey Brin, the duo who started Google, reluctantly paired small ads with their masterful search engine as a way to fund it. It was not their intent, at first, to create the largest global advertising platform in the history of the world, or to move marketing strategy away from pushing products toward consumers to pulling individual consumers toward specific products and brands. But that is what happened. Write the word “blender” in an e-mail, and the next set of ads you’re likely to see will be for Waring and Oster.4 Search for information on bipolar disorder, and drug ads will pop up when you’re reading baseball scores. Use Google Translate to read an abstract of a journal article and an ad for Spanish translation software will appear when you are using an online English dictionary. (All this activity leads to a question that will not be rhetorical if Chorost’s World Wide Mind comes to fruition: Will our thoughts have corporate sponsors, too?)

Targeted ads (even when they are generated by what may have appeared to have been a private communication) may seem harmless enough—after all, if there is going to be advertising, isn’t it better if it is for products and services that might be useful? But to pull you into a transaction, companies believe they need to know not only your current interests, but what you have liked before, how old you are, your gender, where you live, how much education you have, and on and on. There are something like five hundred companies that are able to track every move you make on the Internet, mining the raw material of the Web and selling it to marketers. (“Stop calling yourself a user,” Lanier warns. “You are being used.”) That you are overweight, have diabetes, have missed a car payment or two, read historical novels, support Republicans, use a cordless power drill, shop at Costco, and spend a lot of time on airplanes is not only known to people other than yourself, it is of great monetary value to them as well. So, too, where you are and where you’ve been, as we recently learned when it was revealed that both Apple and Google have been tracking mobile phone and tablet users and storing that information as well.

Even reading devices like Amazon’s Kindle pay attention to what users are doing: highlight a passage in a Kindle book and the passage is sent back to Amazon. Clearly, the potential for privacy and other civil liberty abuses here is vast. While the FBI, for instance, needs a warrant to search your computer, Pariser writes that “if you use Yahoo or Gmail or Hotmail for your e-mail, you ‘lose your constitutional protections immediately,’ according to a lawyer for the Electronic Frontier Foundation.” At least one arrest has been made by law enforcement officers using Apple location data. And this past April, the Supreme Court heard arguments in Sorrell v. IMS Health, in which IMS Health, in challenging Vermont’s statutory restriction on the sale of patients’ prescription information to data-mining companies, argued that harvesting and selling medical records data is a First Amendment right. Clearly, data tracking and mining give new meaning to the words “computer monitor.”

In the commercial sphere, marketers are also looking beyond facts and bits of information, in order to determine not just what you have bought, but what kinds of pitches appealed to you when you did. Once they have compiled your “persuasion profile,” they will refine those targeted ads even further. And if marketing companies can do this, why not political candidates, the government, or companies that want to sway public opinion? “There are undoubtedly times and places and styles of argument that make us more susceptible to believe what we’re told,” Pariser observes.

One thing that we—the denizens of the Internet—have come to accept without much thought is that commerce is a really cool aspect of the Web’s shift into social networking. The very popular Foursquare, Loopt, and Groupon sites, for example, make shopping and branding the basis of the social encounter. People on Foursquare vie to become the “mayor” of bakeries and clothing stores by visiting them more than anyone else. They proudly display “badges” that they’ve “earned” by patronizing certain businesses, as if they were trophies celebrating excellence. Facebook users who click on the “like” button for a product may trigger the appearance of an ad for that product on the pages of their “friends.” Companies like Twitalyzer and Klout analyze data from Twitter, Facebook, and LinkedIn to determine who has the most influence online—these can be celebrities or ordinary people with significant followings—and sell that information to businesses that then entice the influencers to pitch their products or “evangelize their brand.” This, according to The Wall Street Journal, has “ignited a race among social-media junkies who, eager for perks and bragging rights, are working hard to game the system and boost their scores.”5 As Lanier points out, “The only hope for social networking sites from a business point of view is for a magic formula to appear in which some method of violating privacy and dignity becomes acceptable.” That magic, it seems, is already in play.

The paradox of the personalization and self-expression promoted by the Internet through Twitter, Facebook, and even Chatroulette is that they simultaneously diminish the value of personhood and individuality. Read the comments that accompany many blog posts and articles, and it is overwhelmingly evident that violating dignity—someone else’s and, therefore, one’s own—is a cheap and widely circulated currency. This is not only true for subjects that might ordinarily incite partisanship and passion, like sports or politics, but for pretty much anything.6

The point of ad hominem attacks is to take a swipe at someone’s character, to undermine their integrity. Chorost suggests that the reason the Internet as we now know it does not foster the kind of empathy he sees coming in the Web of the future, when we will “feel people’s inner lives electronically,” is that it is not yet an integral part of our bodies, but Lanier’s explanation is more convincing. The “hive mind” created through our electronic connections necessarily obviates the individual—indeed, that’s what makes it a collective consciousness. Anonymity, which flourishes where there is no individual accountability, is one of its key features, and behind it, meanness, antipathy, and cruelty have a tendency to rush right in. As the sociologist Sherry Turkle observes:

Networked, we are together, but so lessened are our expectations of each other that we can feel utterly alone. And there is the risk that we come to see others as objects to be accessed—and only for the parts that we find useful, comforting, or amusing.7

Here is Chorost describing the wonders of a neural-networked friendship:

Having brainlike computers would greatly simplify the process of extracting information from one brain and sending it to another. Suppose you have such a computer, and you’re connected with another person via the World Wide Mind…. You see a cat on the sidewalk in front of you. Your rig…sees activity in a large percentage of the neurons constituting your brain’s invariant representation of a cat. To let your friend know you’re seeing a cat, it sends three letters of information—CAT—to the other person’s implanted rig. That person’s rig activates her brain’s invariant representation of a cat, and she sees it. Or rather, to be more accurate, she sees a memory of a cat that is taken from her own neural circuitry….

Now, many important details would be missing. The cat’s breed, its color, its posture, what it’s doing, and so forth…. But it would convey a key piece of information: your friend would know that you are seeing a cat.

Of course, if you called or texted or e-mailed your friend, she would also know that you were seeing a cat, and she’d know what it looked like, and what it was doing, and that it was a significant enough event in your life that you were telling her about it. Do we want to know every time someone we know sees a cat?

It’s easy to make fun of this, just as it is easy to dismiss the Singularity as a silly science fiction fantasy, but that would be even sillier. Of course, one of the groups of people most drawn to science fiction is the engineers who write code and build robots and have, in less than a generation, changed the way we do research and medicine and read books and communicate with each other and pay the bills and on and on. (In a 2004 interview, Larry Page envisioned a future where one’s brain is “augmented” by Google, so that when you think of something, “your cell phone whispers the answer into your ear.”) As Lanier points out:

We [the engineers] make up extensions to your being, like remote eyes and ears (webcams and mobile phones) and expanded memory (the world of details you can search for online). These become the structures by which you connect to the world and other people…. We tinker with your philosophy by direct manipulation of your cognitive experience…. It takes only a tiny group of engineers to create technology that can shape the entire future of human experience with incredible speed.

Moore’s Law is predicted to hit a wall around 2015, when it will be impossible to squeeze more circuitry onto a silicon chip without it overheating. By then, though, computers may have switched over to magnetic random access memory, chips that operate with subatomic circuitry. One of the main creators of MRAM, Stuart Wolf, developed it at DARPA, the agency that invented ARPANET, the precursor to the Internet as we know it. A few years ago, in an interview with Fortune, Wolf, envisioning the future of computing, imagined that before too long we’ll be wearing a headband that feeds directly into the brain and lets us, among other things, talk without speaking, see around corners, and drive by thinking.8

Another branch of DARPA is pouring millions of dollars into the development of a battlefield “thought helmet” that will let soldiers in the field communicate wordlessly by translating brain waves, which will be “read” by sensors embedded in the helmet and arrayed around the scalp, into audible radio messages. (One researcher called it a “radio without a microphone.”)9 As early as 2000, Sony began work on a patented way to beam video games directly into the brain using ultrasound pulses to modify and create sensory images for an immersive, thoroughly inescapable gaming experience.10 More recently, computer scientists at the Freie Universität in Berlin got a jump on Stuart Wolf’s vision of a car operated solely by thought. Using commercially available electroencephalogram (EEG) sensors, they first decoded the brain wave patterns for “right,” “left,” “brake,” and “accelerate,” then connected those sensors to a computer-controlled vehicle, so that a driver “was able to control the car with no problem—there was only a slight delay between the envisaged commands and the response of the car,” according to one of the lead researchers.11
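
How modest the software side of such a demonstration can be is suggested by the following sketch of the step from decoded command to vehicle behavior; the classifier here is an invented placeholder rather than the Berlin team’s decoder, and every name in it is assumed for illustration.

```python
import random

# Illustrative only: a made-up classifier standing in for a trained EEG
# decoder, mapped onto toy driving commands. Not the Berlin team's code.

COMMANDS = ["left", "right", "accelerate", "brake"]

def classify_eeg_window(window) -> str:
    """Placeholder for the trained decoder: labels a window of brain activity."""
    return random.choice(COMMANDS)      # a real decoder would inspect the signal

def apply_command(state: dict, command: str) -> dict:
    """Translate a decoded command into a (toy) change of vehicle state."""
    if command == "accelerate":
        state["speed"] += 1
    elif command == "brake":
        state["speed"] = max(0, state["speed"] - 1)
    elif command == "left":
        state["heading"] -= 15
    elif command == "right":
        state["heading"] += 15
    return state

state = {"speed": 0, "heading": 0}
for window in range(5):                  # five imagined EEG windows
    cmd = classify_eeg_window(window)
    state = apply_command(state, cmd)
    print(f"decoded '{cmd}' -> {state}")
```

Everything downstream of the decoding is ordinary control logic; the hard part, and presumably the source of that “slight delay,” is turning raw EEG signals into a reliable label in the first place.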

Moreover, a group at the University of Southampton in England has developed a BCI—a brain–computer interface—that enables people to communicate with each other brain to brain without speaking, or, as the developers call it, B2B. Again a kind of EEG cap is involved: one person thinks of “left” (represented by a zero) or “right” (represented by a one), and that digit is sent to a second person, also wired with electrodes and connected to a computer that receives it; once the digit is understood, the second person flashes it back to the sender by way of a light-emitting diode (LED), which is “read” by the sender’s visual cortex. It’s not quite the soundless, wordless, almost thoughtless integration of our thoughts that Chorost envisions, but it is a fourth or fifth step toward a future that is becoming increasingly visible.
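
Stripped of the neuroscience, the Southampton exchange is a one-bit protocol. A minimal sketch, with invented function names standing in for the EEG decoding and the LED display that the real experiment performs, might look like this:

```python
# A rough sketch of the one-bit B2B exchange described above. The function
# names are invented stand-ins for the EEG decoding and LED display that
# the Southampton experiment actually performs.

def decode_intent(imagined_direction: str) -> int:
    """Stand-in for the sender's EEG decoding: 'left' -> 0, 'right' -> 1."""
    return 0 if imagined_direction == "left" else 1

def flash_led(bit: int) -> None:
    """Stand-in for the LED flash 'read' by a visual cortex at the other end."""
    print(f"LED flashes the pattern for bit {bit}")

def exchange(imagined_direction: str) -> int:
    bit = decode_intent(imagined_direction)   # sender thinks 'left' or 'right'
    flash_led(bit)                            # receiver's rig displays the digit
    return bit                                # echoed back to confirm receipt

if __name__ == "__main__":
    for thought in ["left", "right", "right", "left"]:
        exchange(thought)                     # one laborious bit at a time
```

One laborious bit at a time is, for now, what that “fourth or fifth step” amounts to.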

Jaron Lanier is right: you are not a gadget—yet.