“As smoking gives us something to do with our hands when we aren’t using them, Time gives us something to do with our minds when we aren’t thinking,” Dwight Macdonald wrote in 1957. With smartphones, the issue never arises. Hands and mind are continuously occupied texting, e-mailing, liking, tweeting, watching YouTube videos, and playing Candy Crush.
Americans spend an average of five and a half hours a day with digital media, more than half of that time on mobile devices, according to the research firm eMarketer. Among some groups, the numbers range much higher. In one recent survey, female students at Baylor University reported using their cell phones an average of ten hours a day. Three-quarters of eighteen-to-twenty-four-year-olds say that they reach for their phones immediately upon waking up in the morning. Once out of bed, we check our phones 221 times a day—an average of every 4.3 minutes—according to a UK study. That number may actually be too low, since people tend to underestimate their own mobile usage. In a 2015 Gallup survey, 61 percent of people said they checked their phones less frequently than others they knew.
Our transformation into device people has happened with unprecedented suddenness. The first touchscreen-operated iPhones went on sale in June 2007, followed by the first Android-powered phones the following year. Smartphones went from 10 percent to 40 percent market penetration faster than any other consumer technology in history. In the United States, adoption hit 50 percent only three years ago. Yet today, not carrying a smartphone indicates eccentricity, social marginalization, or old age.
What does it mean to shift overnight from a society in which people walk down the street looking around to one in which people walk down the street looking at machines? We wouldn’t always be clutching smartphones if we didn’t believe they made us safer, more productive, and less bored, and that they were useful in all of the ways that a computer in your pocket can be useful. At the same time, smartphone owners describe feeling “frustrated” and “distracted.” In a 2015 Pew survey, 70 percent of respondents said their phones made them feel freer, while 30 percent said their phones felt like a leash. Nearly half of eighteen-to-twenty-nine-year-olds said they used their phones to “avoid others around you.”
It is the troubling aspects of social and mobile media that Sherry Turkle attends to in her wise and observant new book, Reclaiming Conversation. A clinical psychologist and sociologist who teaches at MIT, Turkle is by no means antitechnology. But after a career examining relations between people and computers, she blends her description with advocacy. She presents a powerful case that a new communication revolution is degrading the quality of human relationships—with family and friends, as well as colleagues and romantic partners. The picture she paints is both familiar and heartbreaking: parents who are constantly distracted on the playground and at the dinner table; children who are frustrated that they can’t get their parents’ undivided attention; gatherings where friends who are present vie for attention with virtual friends; classrooms where professors gaze out at a sea of semiengaged multitaskers; and a dating culture in which infinite choice undermines the ability to make emotional commitments.
Turkle finds the roots of the problem in the failure of young people absorbed in their devices to develop fully independent selves, a topic she began to explore in Alone Together (2011). In that book, she examined the way interaction with robotic toys and “always on” connections affects adolescent development. She argued that phones and texting disrupt the ability to separate from one’s parents and raise other obstacles to adulthood. Curating a Facebook profile alters the presentation of self. Absorption in a gaming avatar can become a flight from the difficulties of real life. Young people face new anxieties around the loss of privacy and the persistence of online data.
In her new book, she expresses a version of those concerns that is as much philosophic as psychiatric. Because they aren’t learning how to be alone, she contends, young people are losing their ability to empathize. “It’s the capacity for solitude that allows you to reach out to others and see them as separate and independent,” Turkle writes. Without an ability to look inward, those locked into the virtual worlds of social media develop a sensibility of “I share, therefore I am,” crafting their identities for others. Continuous digital performance leaves teenagers experiencing what ought to be the satisfactions of solitude only as “disconnection anxiety.”
As in her earlier work, Turkle considers this loss of empathy as both a clinician and an ethnographer. She culls from hundreds of interviews she has done since 2008, the first year many high school and college students became armed with smartphones. Unhappy teachers at one private middle school in upstate New York describe students who don’t make eye contact or respond to body language, who have trouble listening and talking to teachers, and can’t see things from another’s point of view, recognize when they’ve hurt someone, or form friendships based on trust. “It is as though they all have some signs of being on an Asperger’s spectrum,” one teacher tells her. Turkle even seeks to quantify the damage, repeatedly citing a study that shows a 40 percent decline in empathy among college students over the past twenty years as measured by standard psychological tests.
For young people, she observes, the art of friendship is increasingly the art of dividing your attention successfully. Speaking to someone who isn’t fully present is irritating, but it’s increasingly the norm. Turkle has already noticed considerable evolution in “friendship technologies.” At first, she saw kids investing effort into enhancing their profiles on Facebook. More recently, they’ve come to prefer Snapchat, known for its messages that vanish after being viewed, and Instagram, where users engage with one another around a stream of shared photos, usually taken by phone. Both of these platforms combine asynchronicity with ephemerality, allowing you to compose your self-presentation, while looking more casual and spontaneous than on a Facebook profile. It’s not the indelible record that Snapchat’s teenage users fear. It’s the sin of premeditated curating—looking like you’re trying too hard.
More worrying to Turkle is that social media offer respite from the awkwardness of unmediated human relationships. Apple’s FaceTime feature hasn’t taken off because, as one college senior explains, “You have to hold it [the phone] in front of your face with your arm; you can’t do anything else.” Then again, some younger teens, presumably with an ordinary number of arms, are using FaceTime as an alternative to spending time with one another in person. The advantage is that “you can always leave” and “you can do other things on social media at the same time.”
The thing young people never do on their smartphones is actually speak to one another. Their comments about live conversation are telling: “I never really learned how to do a good job with talking in person.” “Even when I’m with my friends, I’ll go online to make a point…. I’m more at home.” An Ivy League–bound high school student worries that college is going to require “a fair amount of on-the-spot talking.” Collectively, teens “make it clear that the back-and-forth of unrehearsed ‘real-time’ conversation is something that makes you ‘unnecessarily’ vulnerable,” Turkle writes. Reading these accounts, one is caught between dismay at the flight from personal contact and admiration for human ingenuity in devising new modes of communication. One group of students explains that when they get together physically, they “layer” online conversations on top of face-to-face ones, with people who are in the same room.
Family relations are evolving new digital modes and mores as well. Conflict has in many cases evolved into what Turkle calls “fighting by text.” One of the stories she tells is about a young man she calls Colin, who is at odds with his parents about his and his siblings’ failure to meet their expectations. He finds that moving the conflict to Gchat “makes things smoother.”
But when he pauses to ask if something might be lost, a question as much directed to himself as to me, Colin responds with a business metaphor: “What would be the value proposition of disagreeing with each other face-to-face?”
He can’t think of an answer. His family takes care of conflict by cooling it down online. Colin thinks they are now more “productive” as a family.
Needless to say, Turkle the psychotherapist does not see “productivity” as a healthy way to think about one’s family. Parents choose to manage conflict digitally in order to control their emotions, to get rid of the “messy and irrational” parts of fighting. “But to say to a child, partner, or spouse, ‘I choose to absent myself from you in order to talk to you,’ suggests many things that may do their own damage,” she writes.
Being able to be enough in control of our feelings to listen to another person is a requirement for empathy. If a parent doesn’t model this—if you go directly to a text or email—a child isn’t going to learn it, or see it as a value.
The application of texting and chat as a romantic buffer seems just as pernicious. Turkle devotes several pages to the story of Adam, a thirty-six-year-old architect who can’t get over the end of a long-term relationship. Adam feels he was able to be his “better self” with his girlfriend Tessa, the more open and less defensive man that she needed him to be. Communicating with her through electronic messaging rather than the phone gave him a chance to “pause and get it right” in their exchanges. He remains obsessed with the digital archive of the romance, dozens of texts a day sent over a period of three years:
He pulls up a text he sent Tessa after a fight. Adam says that after this quarrel he was frightened, afraid of what would happen next. But in his text he lessened the tension by sending a photo of his feet, beneath which he wrote, “Try to control your sexual passion in seeing me in Crocs and socks.” In person, Adam says that his anxiety would have led him to try to corner Tessa into forgiving him. His panic would have made things worse. Online, he used humor to signal confidence in their enduring connection. So what the text communicated is not the “real” Adam; it’s the Adam he wants to be.
In the Spike Jonze film Her, the romantic partner constituted through artificial intelligence provides emotional support without the demands of a real person. Here, the real person thinks that the modulated self he presents in disembodied conversation is more appealing. This turns the goal of affective computing on its head; instead of getting machines to seem more like people, it’s something closer to a man imitating a robot. Turkle comments that digital media put people in a “comfort zone,” where they believe they can share “just the right amount” of themselves. But this feeling of control is an illusion—a “Goldilocks fallacy.” In a romantic relationship, there is no ideal distance to be maintained over time. As she sums up her case: “Technology makes us forget what we know about life.”
Why might too much digital participation be corroding empathy, whether online or offline? Turkle is at her weakest on this connection, which sends her scurrying to Thoreau for homilies about the value of solitude. For a better answer, it makes sense to consider how humans interact in their purely digital relations. That is the implicit concern of Joseph M. Reagle Jr., a communications professor at Northeastern University. In Reading the Comments, he focuses on the way people relate to one another through the digital genre that he defines as social, reactive, short, asynchronous, and pervasive. To him, this “bottom of the Web” includes everything from Facebook sharing to bulletin board systems (BBS) to user-generated product reviews on Amazon.
Reagle surveys this varied landscape in pursuit of a goal he calls “intimate serendipity,” his term for successful online communities, places where people are able to express themselves electronically in a civilized way. He finds constructive criticism in a few surprising places, such as “beta readers” who offer feedback on one another’s fan fiction—composition in the mode of a favorite writer. He also finds some gems of crowd-wisdom culture, such as the Amazon review of a carbon monoxide detector headed “Saved our son’s life—4/5 stars.” But in the main, the Web conversation Reagle considers suffers from tendencies similar to the ones Turkle identifies: narcissism, disinhibition, and the failure to care about the feelings of others. It’s a world devoid of empathy.
Anonymous comments are the worst, leading to vicious mob behavior. But flamers, cyberbullies, and trolls (who all rely on insults) ruin even identity-based, moderated conversation. No one has figured out how to prevent the operation of Godwin’s law, which holds that the longer an online discussion continues, the more likely it is to devolve into comparisons to the Nazis. Worse still is the hate and harassment that attend any discussion of feminism, or simply any expression by women. Menacing phenomena include “doxing,” exposing the personal information of anonymous users, like someone’s home address or children’s photos, in order to intimidate them. Another form of abuse is “image-based harassment and visual misogyny,” which involves manipulating photos and pornographic images in a threatening way. Threats of rape and violence may arrive at the rate of fifty an hour with the formation of a “trollplex,” which Reagle defines as an uncoordinated attack on a target in a digital venue.
He diagnoses this casual cruelty as stemming in part from a male urge to quantify female attractiveness, reminding us that the origin of Facebook was Mark Zuckerberg’s dorm-room project Facemash, created as a way to rate Harvard students on their hotness. Twitter has been no better. “We suck at dealing with abuse and trolls on the platform and we’ve sucked at it for years,” Dick Costolo wrote in an internal company forum last year, shortly before he was pushed out as the company’s CEO. A newer, campus-based social platform called Yik Yak seems purposefully designed for students to denigrate their teachers anonymously and share bullying gossip. But despite all of the ugliness he documents, Reagle doesn’t want to abandon comments, as publications including Reuters, Tablet, and USA Today’s online sports section have done recently. “Comment is with us, and we must find ways to use it effectively,” he writes.
Reagle is right that to give up on free comment means abandoning the democratic promise of the Web. Yet his alternative that we “find ways to develop a robust self-esteem that can handle ubiquitous comment” is no solution. We can’t just deal with the emotional toll of brutality on the Web by toughening up. We need a Web that is less corrosive to our humanity.
If so much of what we do on the Internet is harmful to us, and harmful to one another, perhaps we should do less of it. But that turns out to be not so simple. There’s no clear boundary between a hard-to-quit behavior and a compulsive one. The idea of “Internet addiction disorder” first surfaced in a parody of an academic article in 1995. A year later, it was seriously proposed for inclusion in the DSM-IV. At that stage, compulsive digital behavior required you to be attached to a desk or laptop, which was a limiting factor. In the late 1990s, however, the combination of e-mail and mobile technology made an immoderate relationship with technology as familiar as the seductive blinking light of the BlackBerry at the bedside.
The further combination of mobile technology and social media has made digital excess familiar even to people too young for BlackBerrys and untempted by e-mail. The simplest habitual activities are checking for updates in one’s social streams and affirming the contributions posted by friends. You do this by tapping on various permutations of the “like” button that Facebook launched in 2009: they include +1s on Google+, pins on Pinterest, hearts on Instagram, and, on Twitter, first stars and now hearts. The most successful mobile apps create distinctive, repetitive hand movements, like swiping on Tinder (left to reject), double-tapping on Instagram (to indicate approval), pressing down to view imploding doodles on Snapchat, and pulling back a slingshot to launch birds in Angry Birds.
When Turkle writes that “the Net teaches us to need it,” she is speaking metaphorically. But while the Internet itself may lack intentions, those designing our interactions with it have a purpose very much like the one she describes. Twenty years ago, the hottest jobs for college graduates were at Goldman Sachs or Morgan Stanley. Today, students at Stanford, Caltech, and Harvard aspire to work in product management or design at social media companies. The disciplines that prepare you for such a career are software architecture, applied psychology, and behavioral economics—using what we know about human vulnerabilities in order to engineer compulsion.
Some of Silicon Valley’s most successful app designers are alumni of the Persuasive Technology Lab at Stanford, a branch of the university’s Human Sciences and Technologies Advanced Research Institute. The lab was founded in 1998 by B.J. Fogg, whose graduate work “used methods from experimental psychology to demonstrate that computers can change people’s thoughts and behaviors in predictable ways,” according to the center’s website. Fogg teaches undergraduates and runs “persuasion boot camps” for tech companies. He calls the field he founded “captology,” a term derived from an acronym for “computers as persuasive technology.” It’s an apt name for the discipline of capturing people’s attention and making it hard for them to escape. Fogg’s behavior model involves building habits through the use of what he calls “hot triggers,” like the links and photos in Facebook’s newsfeed, made up largely of posts by one’s Facebook friends.
One of Fogg’s students, Nir Eyal, offers a practical guide in his book Hooked: How to Build Habit-Forming Products. A former game designer and professor of “applied consumer psychology” at Stanford’s Graduate School of Business, Eyal explains why applications like Facebook are so effective. A successful app, he writes, creates a “persistent routine” or behavioral loop. The app both triggers a need and provides the momentary solution to it. “Feelings of boredom, loneliness, frustration, confusion, and indecisiveness often instigate a slight pain or irritation and prompt an almost instantaneous and often mindless action to quell the negative sensation,” he writes. “Gradually, these bonds cement into a habit as users turn to your product when experiencing certain internal triggers.”
The financial value of an app is largely determined by how much time consumers spend using it, on the assumption that usage translates into advertising revenue. For US users of Facebook, the average “time spent” figure is an extraordinary forty minutes a day. What compels this level of immersion? As Eyal writes, Facebook’s trigger is FOMO, fear of missing out. The social network relieves this apprehension with feelings of connectedness and validation, allowing users to summon recognition. On Facebook, one asserts one’s social status and quantifies its increase through numbers of likes, comments, and friends. According to Eyal, checking in delivers a hit of dopamine to the brain, along with the craving for another hit. The designers are applying basic slot machine psychology. The variability of the “reward”—what you get when you check in—is crucial to the enthrallment.
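The loop Eyal describes, and the slot-machine variability at its center, can be made concrete in a toy simulation. The sketch below is purely illustrative: the class name, the probabilities, and the “rewards” are invented for this example and come from neither Hooked nor any real app.

```python
import random

class HookLoop:
    """Toy model of Eyal's cycle: internal trigger -> action ->
    variable reward -> investment. Everything here is invented
    for illustration; no real product works exactly this way."""

    def __init__(self, reward_probability=0.3, seed=None):
        self.rng = random.Random(seed)
        self.reward_probability = reward_probability
        self.investment = 0   # sessions the user has sunk into the app
        self.rewards = 0      # likes/comments actually received

    def run_session(self, trigger):
        # 1. Trigger: a negative feeling (boredom, FOMO) prompts a check.
        # 2. Action: the minimal behavior, e.g. pull-to-refresh.
        # 3. Variable reward: the payoff is unpredictable by design --
        #    the slot-machine element that makes checking compulsive.
        rewarded = self.rng.random() < self.reward_probability
        if rewarded:
            self.rewards += 1
        # 4. Investment: each session deepens the user's stake,
        #    making the next trigger more effective.
        self.investment += 1
        return rewarded

loop = HookLoop(seed=42)
for feeling in ["boredom", "FOMO", "loneliness", "boredom", "FOMO"]:
    loop.run_session(feeling)
print(loop.investment)  # five sessions logged; the rewards vary
```

The point of the sketch is the asymmetry: the user's investment grows on every pass through the loop, while the reward arrives only intermittently, which is exactly what variable-ratio reinforcement schedules exploit.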
Eyal thinks the photo-sharing app Instagram is an even itchier trigger. “Instagram is an example [of the work] of an enterprising team—conversant in psychology as much as technology—that unleashed a habit-forming product on users who subsequently made it part of their daily routines,” he writes. Its genius, in his view, is moving beyond generalized FOMO to create angst around “the fear of losing a special moment.” Posting a photo to Instagram assuages that unease. Facebook’s 2012 acquisition of Instagram, a startup with thirteen employees, for the bargain price of $1 billion, “demonstrates the increasing power of—and immense monetary value created by—habit-forming technology.” In other words, Instagram was so damned addictive that Facebook had to have it.
Of course, posting to Facebook or Instagram also contributes to the global accumulation of FOMO. What Eyal describes, without seeming fully to appreciate it in human terms, is a closed cycle of anxiety creation and alleviation. What are others doing? What do they think of me? What do I think of them? In the last part of his book, Eyal raises ethical considerations and says developers ought to peddle only products that they believe in. But in the main, his book reads like one of those tobacco industry documents about manipulating nicotine levels in cigarettes. Designers can hook users through the application of psychological phenomena such as investment bias—once you’ve put time into personalizing a tool, you’re more likely to use it. But an app, Eyal writes, should ask for investment only after offering rewards, such as tidbits of interesting information. Another tool is rationalization, the feeling that if one is spending a lot of time doing something, it must be valuable.
Turkle argues against using the term “addiction” because it implies that “you have to discard the addicting substance,” and we aren’t very well “going to ‘get rid’ of the Internet.” But in describing what they’re doing, many of her subjects fall naturally into the language of substance abuse, abstention, and recovery. People colloquially describe sessions online as getting a fix, or refer to disconnection from social media as detoxing or going cold turkey. The industry can’t help talking that way either, about “users” and “devices.” The toll of technology is emotional rather than physical. But the more you read about it, the more you may come to feel that we’re in the middle of a new Opium War, in which marketers have adopted addiction as an explicit commercial strategy. This time the pushers come bearing candy-colored apps.
Despite the picture she paints of digital damage to nearly every kind of human relationship, Turkle remains optimistic that we can gain control of technology or, as her book’s title has it, reclaim conversation. Even teenagers who don’t remember a time before social media express nostalgia for life without it. One place they still experience friendship without divided attention is at device-free summer camps, where they return after six weeks more thoughtful and empathetic—only to plunge back into the “machine zone.”
How can we enjoy the pleasures and benefits of mobile and social media while countering their self-depleting and antisocial aspects? Turkle keeps her discussion of remedies general, perhaps because there aren’t many good solutions at the moment. She thinks we should consciously unitask, cultivate face-to-face conversation, and set limits on ourselves, like keeping devices away from the family dinner table. She suggests reading Walden.
As consumers, we can also pressure technology companies to engineer apps that are less distracting. If product design has a conscience at the moment, it may be Tristan Harris, a former B.J. Fogg student at Stanford who worked until recently as an engineer at Google. In several lectures available on YouTube, Harris argues that an “attention economy” is pushing us all to spend time in ways we recognize as unproductive and unsatisfying, but that we have limited capacity to control. Tech companies are engaged in “a race to the bottom of the brain stem,” in which rewards go not to those that help us spend our time wisely, but to those that keep us mindlessly pulling the lever at the casino.
Harris wants engineers to consider human values like the notion of “time well spent” in the design of consumer technology. Most of his proposals are “nudge”-style tweaks and signals to encourage more conscious choices. For example, Gmail or Facebook might begin a session by asking you how much time you want to spend with it that day, and reminding you when you’re nearing the limit. Messaging apps might be reengineered to privilege attention over interruption. iTunes could downgrade games that are frequently deleted because users find them too addictive.
These are helpful suggestions—more thoughtful apps, and apps to control our apps. They also seem wildly inadequate to the problem. Aspirations for humanistic digital design have been overwhelmed so far by the imperatives of the startup economy. As long as software engineers are able to deliver free, addictive products directly to children, parents who are themselves compulsive users have little hope of asserting control. We can’t defend ourselves against the disciples of captology by asking nicely for less enticing slot machines.