A demonstrator dressed as a security camera at an anti-surveillance protest, Berlin, November 2017 (Axel Schmidt/Reuters)

In her seminal work The Managed Heart: Commercialization of Human Feeling (1983), the sociologist Arlie Russell Hochschild described what she called “emotional labor” and the regimes of “emotional labor management” built around it. Hochschild was studying the extreme kinds of emotional labor that airline stewardesses, bill collectors, and shop assistants, among others, had to perform in their daily routines. They were obliged, in her words, “to induce or suppress feeling in order to sustain the outward countenance that produces the proper state of mind in others.” In the case of airline stewardesses, the managers and human resources staff of the airline companies relied on reports from passengers or management spies to make sure that stewardesses kept up their cheerful greetings and radiant smiles no matter what.

The stewardesses Hochschild studied were working under a regime of “scientific management,” a workplace control system conceived in the 1880s and 1890s by the engineer Frederick Winslow Taylor. Workers subject to such regimes follow precise, standardized routines drawn up by managers and undergo rigorous monitoring to ensure that these routines are followed to the letter. Taylor’s practice is often associated with such factory workplaces as the early Ford Motor plants or today’s Amazon “fulfillment centers,” where workers must perform their prescribed tasks on a strict schedule.

Hochschild showed that regimes of scientific management could be applied virtually anywhere. Her airline company managers aspired to control every aspect of their employees’ emotional conduct. What kept them from doing so was that they weren’t actually present in plane cabins during flights and so had to rely on haphazard reporting to confirm that the stewardesses were always behaving as they should. But in the twenty-first century, new technologies have emerged that enable companies as varied as Amazon, the British supermarket chain Tesco, Bank of America, Hitachi, and the management consultants Deloitte to achieve what Hochschild’s managers could only imagine: continuous oversight of their workers’ behavior.

These technologies are known as “ubiquitous computing.” They yield data less about how employees perform when working with computers and software systems than about how they behave away from the computer, whether in the workplace, the home, or in transit between the two. Many of the technologies are “wearables,” small devices worn on the body. Consumer wearables, from iPhones to smart watches to activity trackers like Fitbit, have become a familiar part of daily life; people can use them to track their heart rate when they exercise, monitor their insulin levels, or regulate their food consumption.

The new ubiquity of these devices has “raised concerns,” as the social scientists Gina Neff and Dawn Nafus write in their recent book Self-Tracking—easily the best book I’ve come across on the subject—“about the tremendous power given to already powerful corporations when people allow companies to peer into their lives through data.” But the more troubling sorts of wearables are those used by companies to monitor their workers directly. This application of ubiquitous computing belongs to a field called “people analytics,” or PA, a name made popular by Alex “Sandy” Pentland and his colleagues at MIT’s Media Lab.

Pentland has given PA a theoretical foundation and has packaged it in corporate-friendly forms. His wearables rely on many of the same technologies that appear in Self-Tracking, but also on the sociometric badge, which the book does not cover. Worn around the neck and attached to microphones and sensors, the badges record their subjects’ frequency of speaking, tone of voice, facial expressions, and body language. In Sociometric Badges: State of the Art and Future Applications (2007), Pentland and his colleague Daniel Olguín Olguín explained that the badges “automatically measure individual and collective patterns of behavior, predict human behavior from unconscious social signals, identify social affinity among individuals…and enhance social interactions by providing feedback.”

The badges and their associated software are being marketed by Humanyze, a Boston company cofounded by Pentland, Olguín Olguín, and Ben Waber among others (Waber was formerly one of Pentland’s researchers at MIT and is now the company’s CEO). Under its original name, Sociometric Solutions, the company got early commissions from the US Army and Bank of America. By 2016 Humanyze had among its clients a dozen Fortune 500 companies and Deloitte. In November 2017 it announced a partnership with HID Global, a leading provider of wearable identity badges, which allows HID to incorporate Humanyze’s technologies into its own products and so expands the use of such badges by US businesses.

The main tool in Humanyze’s version of PA is a digital diagram in which people wearing sociometric badges are represented by small circles arrayed around the circumference of a circle, rather like the table settings for diners at a banquet. Each participant is linked to every other one by a straight line, the thickness of which depends on what the system considers the “quality” of their relationship based on the data their badges collect.
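In formal terms, the diagram described above is a weighted complete graph: every participant is connected to every other, and the weight of each edge is derived from badge data. A minimal sketch of the idea, using invented names and invented interaction counts in place of real badge readings, might look like this:

```python
from itertools import combinations

# Hypothetical badge data: seconds of recorded face-to-face
# interaction between each pair of participants (data invented).
interaction_seconds = {
    ("Ana", "Ben"): 540,
    ("Ana", "Chloe"): 120,
    ("Ben", "Chloe"): 900,
}

def line_thickness(pair, data, max_width=10.0):
    """Scale a pair's interaction time to a drawing width,
    relative to the most-interacting pair in the group."""
    peak = max(data.values())
    return max_width * data.get(pair, 0) / peak

# Every pair gets a line; thin lines mark "weak" relationships.
for pair in combinations(["Ana", "Ben", "Chloe"], 2):
    print(pair, round(line_thickness(pair, interaction_seconds), 1))
```

On this toy data, Ana and Chloe would be joined by a line about one eighth as thick as the one joining Ben and Chloe, which is precisely the kind of visual signal that prompted the managerial nudging in the Japanese meeting Pentland described.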


In a 2012 essay for the Harvard Business Review, Pentland described how this method was used to evaluate the performance of employees at a business meeting in Japan.1 The PA diagram for Day One showed that the lines emanating from two members of an eight-person team, both of whom happened to be Japanese, were looking decidedly thin. But by Day Seven, the diagrams were showing that the “Day 1 dominators” had “distributed their energy better” and that the two Japanese members were “contributing more to energy and engagement.” Evidently some determined managerial nudging had taken place between Days One and Seven. In a June 2016 interview with MEL Magazine, Waber claimed that little escapes the gaze of the sociometric badge and its associated technologies: “Even when you’re by yourself, you’re generating a lot of interesting data. Looking at your posture is indicative of the kind of work and the kind of conversation you’re having.”2

In a 2008 article Pentland commended his PA systems for being more rational and dependable than their human counterparts.3 But the “intelligence” of his and Waber’s PA systems is not that of disembodied artificial intelligence—whatever that may look like—but of corporate managers with certain ideas about how their subordinates should behave. The managers instruct their programmers to create algorithms that in turn embed these managerial preferences in the operations of the PA systems. Pentland and Waber’s PA regime is in fact a late variant of scientific management and descends directly from the “emotional labor management” Hochschild discussed in The Managed Heart. But these twenty-first-century systems have powers of surveillance and control that the HR managers of the airline companies thirty years ago could only dream of.

Not all PA systems depend on wearable devices. Some target landlines and cell phones. Behavox, a PA company financed by Citigroup, specializes in the surveillance of employees in financial services. “Emotion recognition and mapping in phone calls is increasingly something that banks really want from us,” Erkin Adylov, the company’s CEO, told a reporter in 2016.4 Behavox’s website advertises that its systems give “real-time and automatic tracking” of aspects of employee conversation like the “variability in the timing of replies, frequency in communications, use of emoticons, slang, sentiment and banter.” The company, in the words of a recent Bloomberg report,

scans petabytes of data, flagging anything that deviated from the norm for further investigation. That could be something as seemingly innocuous as shouting on a phone call, accessing a work computer in the middle of the night, or visiting the restroom more than colleagues.5

“If you don’t know what your employees are doing,” Adylov told another reporter in 2017, “then you’re vulnerable.”
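Behavox does not publish its methods, but the logic of “flagging anything that deviated from the norm” can be sketched generically: compute a statistical baseline for some behavioral count, then flag anyone who falls too many standard deviations away from it. The data, the metric, and the threshold below are all invented for illustration.

```python
import statistics

# Invented example: number of middle-of-the-night logins per
# employee over a month.
logins_per_night = {"emp_01": 0, "emp_02": 1, "emp_03": 0, "emp_04": 9}

def flag_outliers(counts, threshold=1.5):
    """Flag anyone whose count lies more than `threshold` standard
    deviations above the group mean -- a crude deviation-from-the-norm
    test of the kind surveillance vendors describe."""
    values = list(counts.values())
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []
    return [name for name, v in counts.items()
            if (v - mean) / stdev > threshold]

print(flag_outliers(logins_per_night))  # ['emp_04']
```

The arbitrariness is the point: whether a restroom visit or a late-night login gets “flagged for further investigation” depends entirely on a threshold the employer chooses.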

Most PA software providers rely on combinations of wearables and computer-based technologies to monitor and control workplace behavior. These companies boast that their systems can find out virtually everything there is to know about employees, both in the workplace and outside it. “Thanks to modern technology,” in the words of Hubstaff, a PA company based in Indianapolis, “companies can monitor almost 100 percent of employee activity and communication.”6

Max Simkoff, the cofounder of San Francisco’s Evolv Corporation (now taken over by Cornerstone, another Humanyze competitor), has said that his PA systems can analyze more than half a billion employee data points across seventeen countries and that “every week we figure out more things to track.” Kronos Incorporated, a management software firm based in Lowell, Massachusetts, claims that its workforce management systems are used daily by “more than 40 million people” and offer “immediate insight into…productivity metrics at massive scale.”7

Microsoft entered the PA market when it acquired the Seattle-based company Volometrix in 2015. It inherited Volometrix’s “Network Efficiency Index” (NEI), which measures how efficiently employees build and maintain their “internal networks.” The index is calculated by dividing “the total number of hours spent emailing and meeting with other employees” by the number of “network connections” an employee manages to secure. The NEI’s recognition of an employee’s network connection depends on whether encounters with coworkers have met both a “frequency of interaction threshold” and “an intimacy of interaction threshold,” the latter of which is satisfied when there are “2 or more interactions per month which include 5 or fewer people total.”8
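The NEI arithmetic described above is simple enough to state directly. The sketch below follows the formula and the two thresholds as the text gives them; the log entries themselves are invented.

```python
# Hypothetical one-month log for one employee. Each entry:
# (contact, interactions that month, typical group size, hours spent).
log = [
    ("colleague_a", 6, 3, 4.0),   # frequent, small-group: qualifies
    ("colleague_b", 1, 2, 0.5),   # too infrequent: does not qualify
    ("colleague_c", 3, 12, 2.5),  # groups too large: does not qualify
    ("colleague_d", 4, 4, 3.0),   # qualifies
]

def network_efficiency_index(entries, min_interactions=2, max_group=5):
    """Total emailing/meeting hours divided by the number of
    'network connections' that clear both the frequency threshold
    (2+ interactions per month) and the intimacy threshold
    (5 or fewer people per interaction)."""
    total_hours = sum(hours for _, _, _, hours in entries)
    connections = sum(
        1 for _, n, group, _ in entries
        if n >= min_interactions and group <= max_group
    )
    return total_hours / connections if connections else float("inf")

print(network_efficiency_index(log))  # 10.0 hours / 2 connections = 5.0
```

On this invented log, only two of the four coworkers count as connections, so the employee is charged five hours per connection; an employee who spent the same hours but cleared more thresholds would score as more “efficient.”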

When workers fail to meet these thresholds, other workplace technologies can be enlisted to give them a nudge. One Humanyze client created a robotic coffee machine that responded to data collected from sociometric badges worn by nearby employees. By connecting to Humanyze’s Application Programming Interface (API), the coffee machine could assess when a given group of workers needed to interact more; it would then wheel itself to wherever it could best encourage that group to mingle by dispensing lattes and cappuccinos.9
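Humanyze's actual API is not documented in the sources cited here, so the sketch below invents the endpoint data, the metric, and the threshold; only the decision logic follows the account above: poll badge data for a team, and dispatch the machine when measured interaction falls too low.

```python
# Hypothetical sketch of the coffee-robot logic. The field names,
# the cohesion metric, and the threshold are all invented.

def cohesion_score(badge_data):
    """Average pairwise interaction minutes for a team."""
    pairs = badge_data["pairwise_minutes"]
    return sum(pairs.values()) / len(pairs) if pairs else 0.0

def should_visit(badge_data, threshold=5.0):
    """Wheel over and dispense lattes when a team's measured
    interaction falls below the threshold."""
    return cohesion_score(badge_data) < threshold

team = {"pairwise_minutes": {("a", "b"): 2.0, ("b", "c"): 1.0, ("a", "c"): 3.0}}
print(should_visit(team))  # True: an average of 2.0 minutes is below 5.0
```

However whimsical the robot, the underlying pattern is the one that runs through all of PA: a sensor-derived metric, a management-set threshold, and an automatic intervention when workers fall short.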


When American managers want to install PA surveillance systems, employees rarely manage to stop them. In Britain, an exception to this trend occurred in January 2016, when journalists at the London office of the Daily Telegraph came to work one Monday and found that management had affixed small black boxes on the undersides of their desks that used heat and motion sensors to track whether or not they were busy at any given time. Seamus Dooley of the UK National Union of Journalists told The Guardian that “the NUJ will resist Big Brother–style surveillance in the newsroom.” The boxes were removed.10

The Telegraph’s journalists were right to act as they did. A 2017 paper by the National Workrights Institute in Washington, D.C.,11 cites a wealth of academic research on the physical and psychological costs that intrusive workplace monitoring can impose on employees. A study by the Department of Industrial Engineering at the University of Wisconsin has shown that the introduction of intense employee monitoring at seven AT&T-owned companies led to a 27 percent increase in occurrences of pain or stiffness in the shoulders, a 23 percent increase in occurrences of neck pressure, and a 21 percent increase in back pain. Other research has suggested that the psychological effects of these technologies can be equally severe. Many of Bell Canada’s long-distance and directory assistance employees have to meet preestablished average work times (AWTs). Seventy percent of the workers surveyed in one study reported that they had “difficulty in serving a customer well” while “still keeping call-time down,” which they said contributed to their feelings of stress to “a large or very large extent.”

How have the corporate information-technology community and its academic allies justified these practices and the violations of human dignity and autonomy they entail? Among economists, Erik Brynjolfsson at MIT is perhaps the leading counsel for the defense. With Andrew McAfee, also of MIT, he has published two books to this end, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies (2014) and Machine, Platform, Crowd: Harnessing Our Digital Future (2017), the latter clearly written with a corporate audience in mind.

In the opening chapter of Machine, Platform, Crowd, they write that “our goal for this book is to help you.” The “you” in question is a corporate CEO, CIO, or senior executive who might be saddled with obsolete technologies—in Brynjolfsson and McAfee’s words, “the early-twenty-first-century equivalent of steam engines.” Each subsequent chapter ends with a series of questions aimed at such readers: “Are you systematically and rigorously tracking the performance over time of your decisions?”

Although the use of information technology in the workplace is a dominant theme of Brynjolfsson and McAfee’s two books, the authors say nothing about the surveillance powers of People Analytics or its predecessors, whose existence cannot easily be reconciled with the glowing vision they describe in the opening chapters of The Second Machine Age. There are, for instance, eighteen references to Amazon in The Second Machine Age and Machine, Platform, Crowd. All of them are to technological breakthroughs like the company’s “recommendation engine,” which reduces search costs so that “with a few clicks over two million books can be found and purchased.”

From Brynjolfsson and McAfee one would never know that among large US corporations Amazon has relied perhaps most heavily on a combination of surveillance systems to control both its shop floor and its middle management workforce, and to push the performance of both to the limit. It tags its shop floor employees with micro-computers that constantly measure how long they take to load, unload, and shelve packages at Amazon depots. If the timings set by management are not met, even by a few seconds, the computer starts beeping and the employee gets rebuked. Commenting on a November 2013 BBC documentary about the conditions under which Amazon’s shop floor employees work, filmed clandestinely at a “fulfillment center” at Swansea, UK, the public health expert Michael Marmot of University College London noted that such practices had been shown to cause “increased risk of mental illness and physical illness.”12

Amazon has also relied on a program called “Anytime Feedback Tool” to achieve comparable levels of surveillance over its middle managers, who are encouraged to send their bosses anonymous evaluations of their co-workers without giving the subject the chance to respond. A manager’s regular monthly performance review may run to fifty or sixty pages; each year Amazon managers with the weakest performance record are in danger of being fired. In the words of a report by Jodi Kantor and David Streitfeld of The New York Times, “many workers called it a river of intrigue and scheming” in which cliques of managers could gang up on a colleague and use the system to demote him or her in the performance ratings, thereby protecting themselves from the management cull.13

None of this appears in either of Brynjolfsson and McAfee’s books. Despite their academic credentials, Brynjolfsson and McAfee are not acting in these books as eminent scholars conveying new research to a nonspecialist audience. They are acting as propagandists, arming their business audience with their own rationale for using digital technologies in the workplace. Both authors take refuge in a kind of techno-determinism. “We need,” they write in The Second Machine Age, “to let the technologies of the second machine age do their work and find ways of dealing with the challenges they will bring with them.”

When Brynjolfsson and McAfee do discuss recent developments in workplace technologies, it is with a kind of fatalism. In this account, the people involved—the CIOs, the system designers, and the programmers—are simply expediting an inevitable transition to a digital-intensive workplace where, as even the authors admit, “some people, even a majority of them, can be made worse off.” (They have particularly in mind those whose labor is “relatively unskilled.”) The techno-managerial elite may perform its tasks with varying degrees of efficiency, but the parameters within which it operates are highly circumscribed.

The managers are not to blame, in this determinist view, for the human consequences of the “second machine age”: jobs are outsourced, while employees are laid off, deskilled, relentlessly monitored, and forced to settle for precarious and poorly paid jobs. The responsibility for dealing with these casualties is dumped onto the state. But by airbrushing out the decisions corporate managers can—and do—make over how to use technologies like Pentland’s PA systems, Brynjolfsson and McAfee are effectively keeping employees in the dark about the forces that lower their quality of life and their standard of health.

Digital technologies have any number of possible uses in the workplace, and not all of them involve subjecting workers to heightened monitoring and machine-generated feedback. Tasks that Brynjolfsson and McAfee write off as “routine” and thus as fair game for total or partial automation may turn out to be just the opposite under regimes designed to support rather than displace them. The Cornell scholar Virginia Doellgast has shown in detail what an employee-friendly use of new workplace technologies can look like.14

She has done detailed research on call centers in the German telecommunications industry, where unions are strong and works councils have statutory rights of “codetermination” (Mitbestimmung) over the use of workplace technologies. There, she discovered, employees had managed to negotiate limits on how far management could go in using surveillance systems to set the pace of their work and track their real-time performance. Doellgast found this level of employee activism surprising in what is usually considered a “peripheral service industry.”

Once they had set those limits, the workers could exercise the advanced workforce skills provided by the German apprentice training system on their own terms. They often belonged to self-managed work teams, which could influence the size and content of the daily workload, the pace of work, and the nature of communications with customers, sparing employees the indignities of the mandated digital scripts widely used in US call centers. Compared with US workplaces and German ones without employee representation, these “high involvement” workplaces had lower levels of employee stress, lower rates of employee turnover, higher pay, and better service.

Arrangements like this, in which management and labor share power, barely exist in corporate America. Union membership in the US private sector fell to 6.5 percent in 2017, and many of its industries are becoming virtually union-free. I couldn’t find a single reference to labor unions in either of Brynjolfsson and McAfee’s books. In the deunionized US workplace, digital technologies are being deployed in ways that both increase labor’s productivity and diminish its earning power: the more workers have to meet preestablished output targets and respond to real-time analysis of their performance, the fewer opportunities they have to widen their earning power by refining their judgment, experience, and skills.

When the output of labor rises and its real earnings stagnate or decline, as they have in the US for at least the past thirty years, then, other things being equal, the cost of labor per unit of output will fall and the share of profits in GDP rise, as they have again consistently done during this period. From a corporate perspective this is a rosy scenario—especially since the compensation of top managers is frequently linked to corporate stock prices, which tend to rise with profits, as they did in the first year of the Trump presidency. But it has done nothing to narrow income inequality and much to widen it.
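The arithmetic behind the paragraph above is worth making explicit: unit labor cost is simply the wage divided by output per hour, so when productivity rises and pay stays flat, the cost of labor per unit falls and the residual accrues as profit. The numbers below are invented for illustration.

```python
# Worked example of the unit-labor-cost arithmetic (numbers invented):
# output per hour rises while the hourly wage stays flat, so the
# labor cost per unit of output falls.

def unit_labor_cost(hourly_wage, units_per_hour):
    return hourly_wage / units_per_hour

before = unit_labor_cost(hourly_wage=20.0, units_per_hour=10)  # $2.00 per unit
after = unit_labor_cost(hourly_wage=20.0, units_per_hour=16)   # $1.25 per unit
print(before, after)
```

A 60 percent rise in output per hour with no rise in pay cuts the employer’s labor cost per unit by more than a third, which is the mechanism by which stagnant wages and rising profit shares can coexist for decades.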

Few world leaders have had much to say about the relationship between the misuse of technology and the human damage it can inflict. Pope Francis is one who has: “Only by educating people to a true solidarity,” he said in 2017, “will we be able to overcome the ‘culture of waste,’ which doesn’t only concern food and goods but, first and foremost, the people who are cast aside by our techno-economic systems which, without even realizing it, are now putting products at their core, instead of people.”15