It wasn’t always this way for Mark Zuckerberg. These days, we take almost for granted that when there’s organized political violence in the world, there will be an apparent Facebook connection—whether it involves groups and pages that played a part in fomenting the Capitol insurrection in Washington, D.C., or the social media incitement of genocide in Myanmar, where the military, which backed that effort to eliminate the Rohingya, has now seized power in a coup. These days, the platform is routinely associated in the public mind with secretive extremist hate groups, divisive foreign propaganda, sick conspiracy theories with cultlike followings, and forums for plotting armed uprisings against legitimate governments. Many critics have come to see Facebook itself as a threat to liberal democracy.
So it seemed to make a superficial sort of sense for Facebook to hire, in 2018, a liberal democrat—in fact, an actual Liberal Democrat, the former leader of that UK centrist party—to become the company’s chief public defender. But even as Facebook adopted Sir Nicholas Clegg, Britain’s deputy prime minister from 2010 to 2015, as a global communications adviser, the company has also tried to downplay its influence on politics. In late January this year, coinciding with Facebook’s quarterly earnings report, Zuckerberg was anxious to minimize the furor over Facebook accounts’ stoking “Stop the Steal” messages ahead of the Capitol riot, and said he was considering steps “to reduce the amount of political content in News Feed.” In early February, Zuckerberg announced trials in Canada, Brazil, and Indonesia of this less political news feed, claiming that users don’t want “politics and fighting” to dominate their experience of the platform.
But after Facebook’s share price took a heavy hit following the January 6 insurrection, senior executives have continued to be skittish about possible moves by lawmakers to impose greater regulation on platforms like theirs. And so the company’s former professional politician, Nick Clegg, has embarked on a round of public diplomacy, publishing on Medium on March 31 a prolix, meandering, and evasive response to the criticisms, while also touting the plans to have the platform’s algorithms promote less political content.
Facebook uses a machine-learning model to determine what content is political, and it hopes that by limiting this content, it can make people focus more on family and friends, and less on political divisiveness. But regardless of what algorithms Facebook employs and what policies it adopts, the platform has such wide and deep reach into so many people’s lives as a primary source of information that increasingly it does determine political outcomes. The very idea of the platform’s neutrality, premised on a disavowal of its position as a distributor of information (what traditional media called being a publisher), is clearly a failure to admit social responsibility, since doing so would compromise the company’s profitability.
In a previous piece, published in 2020, Clegg repeated the canard that “platforms like Facebook hold up a mirror to society,” saying that “everything that is good, bad and ugly in our societies will find expression on our platform.” This is disingenuous: Facebook’s algorithms and policies profoundly influence what we do and don’t see in the mirror. Facebook has recently announced that it is lifting the temporary ban on political advertising it imposed before the 2020 US presidential election. Since, at the very same time, the company says it is aiming to reduce political content in its users’ news feeds, paid political content will inevitably come to predominate. In other words, the company’s business model is in direct contradiction with the notion that political speech on its platform merely “mirrors” the views of society. The machine learning on the site that counts is the software helping to target content designed to manufacture and manipulate political opinion.
The question Facebook needs to address transparently is what forms of manipulation take place on its platform, so that the public can determine whether such activity is compatible with liberal democracy. Clegg’s recent piece on Medium insists that Facebook users have agency, that their interaction with algorithms shapes their experience on the platform, and that greater transparency about how that process works can empower people further. He calls for more openness, but it’s hard to accept that any of this is offered in good faith. Clegg was hired at a time when Mark Zuckerberg and Chief Operating Officer Sheryl Sandberg had for months been refusing to answer legitimate questions from Clegg’s own Parliament. That tussle culminated in Zuckerberg’s threatening to pull the company’s UK investment, as a FOIA request later revealed.
Those questions had arisen following the Cambridge Analytica scandal that engulfed Facebook in March 2018. Cambridge Analytica had bought data scraped from millions of Facebook users, without their knowledge, to develop the “micro-targeting” of political messaging on behalf of the Trump campaign, exploiting personality characteristics inferred from that data. As governments around the world formed investigative committees to figure out exactly what was going on, Zuckerberg defied subpoenas, misled House and Senate committees in the US (these misstatements have been assiduously documented on Twitter by Jason Kint, CEO of Digital Content Next and a dedicated critic of Facebook), and allowed other senior executives to promote conflicting and misleading narratives about what had happened. Aleksandr Kogan, the data scientist who developed the app that scraped the data, became the scapegoat (even as his partner in the enterprise, Joseph Chancellor, was hired by Facebook).
Joel Kaplan, a former lobbyist and now Facebook’s vice president of public policy, admitted before an Irish parliamentary committee in 2018 that Facebook knew as early as 2015 that Cambridge Analytica had this data. When Simon Milner, also a vice president of public policy, was interrogated by the UK’s parliamentary committee in February 2018, he denied that Facebook knew Cambridge Analytica had the data. In April 2018, Zuckerberg claimed that Kogan had violated the company’s policies when he sold the data to Cambridge Analytica but, under questioning from Senator Richard Blumenthal, had to admit that under its terms of service Facebook allowed Kogan to sell such data.
And so on. In sum, Facebook has strenuously resisted all attempts to force the company to be more transparent about this episode. Doing so would expose to unwanted scrutiny its wider business model, which relies on aggregating data from many online sources in order to offer its advertisers micro-targeting services.
This resistance to accountability in the face of elected officials involves the deliberate flouting of democratic political procedures. Recently, in Australia, where lawmakers were proposing new legislation that would require tech corporations like Facebook and Google to pay for news content, Facebook’s response was to unilaterally erase from the platform all such material, from international headlines to local weather reports. In the process, the company’s action also swept up content from sites offering health information, domestic violence support, and community group contacts. The content was restored only after the Australian government made concessions and modified the bill. So much for merely holding up a mirror to society: this bullying of elected governments is a direct threat to liberal-democratic norms and institutions.
This is all a long way from the commencement address Zuckerberg gave in May 2017 at Harvard’s 366th graduation ceremony. His face beaming through the day’s rain with the euphoria and hope of a religious visionary, he exhorted the students to go out and find ways of giving purpose and meaning to the lives of others. To some, it seemed like preposterous patrician entitlement, but to others, he looked and sounded like a progressive candidate for office, perhaps even a future president.
We first had a glimpse of the expanding scale of his aspirations in 2015, when he posted as his book of the month Henry Kissinger’s World Order (2014). Zuckerberg said he was reading it because it was about “foreign relations and how we can build peaceful relationships throughout the world.” He explained further: “This is important for creating the world we all want for our children, and that’s what I’m thinking about these days.” Like the commencement speech, these remarks, in so far as they were political at all, seemed benign.
But in fact, this vague aspirationalism is what facilitates Facebook’s political duplicity. Zuckerberg as Facebook avatar aligns perfectly with the ethos of the platform he created. His own face, notably symmetrical, with that smooth forehead, incomprehensible hairline, and inexpressive eyes, has the same slightly confusing quality as an earlier generation of AI-generated faces. His refusal to embrace any governing principles for that platform, beyond smiling assurances and empty promises, has carried him a long way in the unbridled pursuit of money and power. In this, Nick Clegg is his perfect counterpart.
Clegg was for five years the junior partner as deputy prime minister in a coalition government led by the Conservative David Cameron. He attained this position through what turned out to be a Faustian bargain that traded his party’s independence and integrity for personal position and the trappings, if not the reality, of power.
At the 2009 party conference, Clegg said, “the choice before people is the choice between fake, phony change from David Cameron’s Conservatives, and real change the Liberal Democrats offer.” So, when the May 2010 general election resulted in a hung parliament, and the Labour Party was scrambling to form a coalition with various smaller parties, it came as a shock to many that Clegg undercut them by taking his centrist but traditionally progressive party into partnership with Cameron’s Tories. By October that year, Liberal Democrat supporters got their first clear notion of what that “real change” would look like, when Clegg reneged on the party’s longstanding commitment not to raise university tuition fees by going along with the hike Cameron’s party ordered.
At the following year’s party conference, Clegg tried to retrieve the situation by insisting that the Liberal Democrats were still the party of the “radical center,” one defined by its rejection of “the tribalism of left and right.” But this attempt to turn moral and political ambiguity into a virtue did not go over well with voters, and by 2012 Clegg was forced into a humiliating attempt at self-rehabilitation by releasing a mea culpa video statement about the tuition fees debacle. Its heavily scripted straight talk—with lines like “I’d like to take this opportunity to put a few things straight,” “It was a promise made with the best of intentions,” “I owe it to you to be upfront about it,” and “There’s no easy way to say I’m sorry”—soon received the ultimate satirical put-down of being released as an autotune remix on iTunes.
At the 2015 general election, the Liberal Democrats lost forty-nine of their 2010 total of fifty-seven parliamentary seats; admitting that these “catastrophic” results were “the most crushing blow” in his party’s history, Clegg resigned as leader the same day. The coalition partnership had consigned the Liberal Democrats to an electoral oblivion from which they have yet to emerge.
Clegg appears to have learned from this experience not to stake Facebook’s reputation on mea culpas. In the aftermath of the January 6 Capitol insurrection that Trump was accused of inciting, Facebook, along with other social media companies, took the decision to bar the president temporarily from its site. While Twitter permanently banned Trump from its site on January 8, Facebook merely suspended Trump’s account—first, for twenty-four hours from the evening of January 6, and then, from the following day, for an indefinite period, at least through the end of his presidency. Clegg went on record saying that “whilst it was a controversial decision because he was the president of the United States, it actually wasn’t a particularly complicated one to take,” since the link between Trump’s rhetoric and the violence was “crystal clear.”
But this uncharacteristically out-in-front decisiveness from Facebook obscured an underlying ambiguity in the company’s position. Clegg had earlier helped to establish a Facebook Oversight Board (approved by Zuckerberg in November 2018), a body described by some as its “Supreme Court,” to review major decisions. The board comprises a panel of journalists, academics, former politicians, and leaders of NGOs from around the world, all of whom are paid six-figure sums by an independent trust set up by Facebook.
The idea of the Oversight Board as a Supreme Court is a grandiose conceit. The actual Supreme Court has a clear mandate: to uphold the rule of law under the United States Constitution, according to which it sits at the apex of the third branch of the US government, running a legal system that, imperfect as it may be, has been refined over centuries to be as coherent and consistent as possible in the administration of justice. By contrast, Facebook, as a private corporation, has fiduciary obligations to shareholders, and contractual ones with its advertisers, its workers, and its other commercial clients (such as, for example, defense and intelligence agencies that have funded specific projects with the company). It also defines its own terms of service with its 2.8 billion monthly active users worldwide. But terms of service are not a bill of rights, and while Facebook’s platform demonstrably has the power to swing elections, the company is not accountable to voters. Yet the Supreme Court analogy is designed to create the impression of a legal and judicial institution, one that confers quasi-constitutional legitimacy on the whole enterprise.
It is too early to assess how this will work in practice, but the outlines are beginning to come into focus. On January 21, the day after Biden’s inauguration, Facebook referred its indefinite suspension of Trump’s account to the Oversight Board. Clegg has said he hopes the board, which has yet to rule on the matter, will permanently ban Trump. But the board’s decision, while binding, is restricted to whether the suspension conformed to Facebook’s own rules and policies. So, if the board lifts the ban, Facebook gains automatic plausible deniability for reactivating Trump’s account—an action sure to cause outrage among many Americans who already drew the conclusion from the Capitol riot that Trump was a menace to democracy. The whole operation encapsulates precisely the sort of constructive ambiguity at which Clegg excels.
It seems Clegg has perfectly repurposed the experience of his past failed political career for his new corporate one: in Mark Zuckerberg he has found his new David Cameron, but in this position there is no downside to being the junior partner because there are no voters to punish you at the ballot box. Instead of providing cover for a right-wing UK government bent on imposing austerity, he is now providing cover for a platform that has helped power the rise of the US far right.
The uncomfortable truth is that the same forces behind Trump’s populist politics—of hate and divisiveness, fueled by misinformation—also impel engagement on Facebook. In July 2020, Clegg published an open letter in which he wrote: “I want to be unambiguous: we do not profit from hate.” But the word “profit” there was an expert contrivance of constructive ambiguity: it could be taken as an active denial that hate speech brings financial rewards to Facebook, or it could mean, much more vaguely, “it doesn’t do any of us any good.” The gauzy title of the published letter was “Facebook Does Not Benefit From Hate,” which seemed to affirm that the latter interpretation was the correct one. In fact, there is a clear financial incentive in allowing hate to prosper on the platform. Back in 2018, an internal memo admitted that Facebook’s algorithms “exploit the human brain’s attraction to divisiveness.” Despite knowing this, Facebook did not alter its algorithm to minimize this effect—because executives knew, as an unambiguous fact, that feeding divisiveness would drive engagement, and engagement would drive profits.
Clegg insists that, in the long run, there is a self-correcting financial disincentive for the platform to host hate and extremism because users, and therefore advertisers, don’t like this divisiveness. But in the long run, Facebook also wants to grow its databases globally, and even divisive engagement that alienates some users serves that purpose. It’s reasonable to assume that if the trade-off were not worthwhile, Facebook would long since have taken more decisive action. Although some big-name advertisers like Adidas and Coca-Cola did join a boycott of the platform last year over its failure to police hate speech, the vast majority of Facebook’s advertising revenue comes from small vendors who can’t afford to rely on traditional media and are now dependent on the online advertising duopoly that has been created by Facebook and Google.
The right has inhibited the condemnation of widespread hate speech and bigotry by creating moral panic about freedom of speech. Conservative lawmakers and activists are attempting to foment a culture war that uses fear-mongering myths about “cancel culture,” “trigger warnings,” and “safe spaces” to accuse the “woke” left of repressing freedom of speech in America and abroad. In his recent Medium piece, Clegg also plays on this anxiety, saying, “Implicit in the arguments made by many of social media’s critics is an assumption that people can’t be trusted with an extensive right to free speech.” It is difficult to believe this claim is made in good faith. Whether these unnamed critics even exist is, in any case, hardly relevant to whether a private corporation like Facebook should be spreading hate and lies for profit.
It would not be the first time that Facebook executives had wielded expedient, instrumental arguments about censorship and free speech. In December 2016, Facebook’s security engineers shared with the company’s leadership a report on misinformation and disinformation during the 2016 election that identified many pages promulgating “fake news.” Kaplan, the public policy VP, reportedly argued against shutting those pages down right away, on grounds that the action would disproportionately affect conservatives. This bizarre argument apparently conceded that conservatives spread more lies and misinformation, while insisting that they should be free to do so—because, though it went unstated, this was good for business.
After the Capitol insurrection, Facebook paused its political donations so it could review its policies in light of efforts by Republican Party officials and politicians to obstruct or overturn states’ election results. After that attempt to subvert the people’s will was thwarted when Congress certified the Electoral College count in the early hours of January 7, Republican lawmakers swiftly turned to pushing a raft of voter suppression measures through state legislatures (such as those recently passed in Georgia), supported by the Republican State Leadership Committee. Those issues should have been at the top of Facebook’s agenda in any serious review. Instead, as Judd Legum at Popular Information revealed, Facebook covertly funneled $50,000 to the RSLC.
The most nakedly Trumpian part of Clegg’s recent article—one that could just as easily have been written by Steve Bannon—describes Facebook engagement as if it were a popular uprising against an establishment elite. Clegg says the platform has enabled a “dramatic and historic democratization of speech,” even though its business model relies on the promotion of paid speech and the constant manipulation of our attention. He goes on: “like any democratizing force, it challenges existing power structures. Political and cultural elites are confronting a raucous online conversation that they can’t control, and many are understandably anxious about it.” This attack on “elites” who want Facebook regulated is of a piece with Zuckerberg’s threats to the British and Australian governments. The “raucous” extremist voices on Facebook—like those who spread the “Stop the Steal” disinformation—are thus recast as free speech advocates with a popular mandate for disregarding and undermining legitimate political authority.
The idea of Facebook as a neutral platform on which the authentic freedom-loving people can come together to defy elected politicians is the cheapest of populist lies. Facebook is not a neutral platform; it is an extraordinarily powerful political actor. Everything it either does or refrains from doing now has an effect on every polity around the world where it operates. And yet Facebook’s leadership has yet to even acknowledge any need to use this power responsibly.
Although liberal democracies aspire to the broadest tolerance of views they can accommodate, that’s only possible within the constraints of an agreed set of fundamental values, including the rule of law, acceptance of legitimate election results, peaceful transitions of power, and tolerance itself. If the last four years have taught us anything it’s that we shouldn’t refrain from acting forcefully to defend these values and institutions. I want to be unambiguous: if Facebook can’t adapt its business model to avoid undermining liberal democracy itself, our legislators must insist that such a business model doesn’t belong in a liberal democracy.