1.

We are living in an age in which the behavioral sciences have become inescapable. The findings of social psychology and behavioral economics are being employed to determine the news we read, the products we buy, the cultural and intellectual spheres we inhabit, and the human networks, online and in real life, of which we are a part. Aspects of human societies that were formerly guided by habit and tradition, or spontaneity and whim, are now increasingly the intended or unintended consequences of decisions made on the basis of scientific theories of the human mind and human well-being.

Amos Tversky and Daniel Kahneman, Stanford, California, 1970s (Barbara Tversky)

The behavioral techniques that are being employed by governments and private corporations do not appeal to our reason; they do not seek to persuade us consciously with information and argument. Rather, these techniques change behavior by appealing to our nonrational motivations, our emotional triggers and unconscious biases. If psychologists possessed a systematic understanding of these nonrational motivations, they would have the power to influence the smallest aspects of our lives and the largest aspects of our societies.

Michael Lewis’s The Undoing Project seems destined to be the most popular celebration of this ongoing endeavor to understand and correct human behavior. It recounts the complex friendship and remarkable intellectual partnership of Daniel Kahneman and Amos Tversky, the psychologists whose work has provided the foundation for the new behavioral science. It was their findings that first suggested we might understand human irrationality in a systematic way. When our thinking errs, they claimed, it does so predictably. Kahneman tells us that thanks to the various counterintuitive findings—drawn from surveys—that he and Tversky made together, “we now understand the marvels as well as the flaws of intuitive thought.”

Kahneman presented their new model of the mind to the general reader in Thinking, Fast and Slow (2011), where he characterized the human mind as the interrelated operation of two systems of thought: System One, which is fast and automatic, including instincts, emotions, innate skills shared with animals, as well as learned associations and skills; and System Two, which is slow and deliberative and allows us to correct for the errors made by System One.

Lewis’s tale of this intellectual revolution begins in 1955 with the twenty-one-year-old Kahneman devising personality tests for the Israeli army and discovering that accuracy improved when the tests removed, as far as possible, the gut feelings of the tester. The testers were employing “System One” intuitions that skewed their judgment and could be avoided if tests were designed and administered in ways that disallowed any role for individual judgment and bias. This is an especially captivating episode for Lewis, since his best-selling book, Moneyball (2003), told the analogous tale of Billy Beane, general manager of the Oakland Athletics baseball team, who used new forms of data analytics to override the intuitive judgments of baseball scouts in picking players.

The Undoing Project also applauds the story of the psychologist Lewis Goldberg, a colleague of Kahneman and Tversky in their days in Eugene, Oregon, who discovered that a simple algorithm could more accurately diagnose cancer than highly trained experts who were biased by their emotions and faulty intuitions. Algorithms—fixed rules for processing data—unlike the often difficult, emotional human protagonists of the book, are its uncomplicated heroes, quietly correcting for the subtle but consequential flaws in human thought.

The most influential of Kahneman and Tversky’s discoveries, however, is “prospect theory,” since this has provided the most important basis of the “biases and heuristics” approach of the new behavioral sciences. They looked at the way in which people make decisions under conditions of uncertainty and found that their behavior violated expected utility theory—a fundamental assumption of economic theory that holds that decision-makers reason instrumentally about how to maximize their gains. Kahneman and Tversky realized that they were not observing a random series of errors that occurred when people attempted to do this. Rather, they identified a dozen “systematic violations of the axioms of rationality in choices between gambles.” These systematic errors make human irrationality predictable.
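
To make the kind of violation they documented concrete, consider a minimal sketch of loss aversion, using the value function at the heart of prospect theory with the parameter estimates Tversky and Kahneman later reported for cumulative prospect theory (and setting aside their probability-weighting function for simplicity):

\[
v(x) =
\begin{cases}
x^{\alpha}, & x \ge 0,\\
-\lambda\,(-x)^{\alpha}, & x < 0,
\end{cases}
\qquad \alpha \approx 0.88,\ \lambda \approx 2.25.
\]

On these estimates, a fifty-fifty gamble to win $150 or lose $100 is valued at roughly \(0.5(150^{0.88}) - 0.5(2.25)(100^{0.88}) \approx -24\), and so is declined, even though its expected monetary value is +$25. A decision-maker reasoning only about expected gains would accept the bet; the gap between those two calculations is the kind of systematic pattern their surveys elicited.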

Lewis describes, with sensitivity to the political turmoil that constantly assailed them in Israel, the realization by Kahneman and Tversky that emotions powerfully influence our intuitive analysis of probability and risk. We particularly aim, on this account, to avoid negative emotions such as regret and loss. Lewis tells us that after the Yom Kippur War, Israelis deeply regretted having to fight at a disadvantage as a result of being taken by surprise. But they did not regret Israel’s failure to take the action that both Kahneman and Tversky thought could have avoided war: giving back the territorial gains from the 1967 war. It seemed to Kahneman and Tversky that in this case as in others people regretted losses caused by their actions more than they regretted inaction that could have averted the loss. And if this were generally the case it would regularly inform people’s judgments about risk.

That research eventually yielded the heuristics, or rules of thumb, and the associated biases that have now become well-known shorthand expressions for specific flaws in our intuitive thinking. Some of these seem to be linked by a shared emotional basis: the “endowment effect” (overvaluation of what we already have), “status quo bias” (an emotional preference for maintaining the status quo), and “loss aversion” (the tendency to attribute much more weight to potential losses than to potential gains when assessing risk) are all related to an innate conservatism about what we feel we have already invested in.

Many of these heuristics are easy to recognize in ourselves. The “availability heuristic” describes our tendency to think that something is much more likely to occur if we happen to be, for contingent reasons, strongly aware of the phenomenon. After September 11, for instance, fear of terrorism was undoubtedly disproportionate to the probability of its occurrence relative to car crashes and other causes of death that were not flashing across our TV screens night and day. We find it hard to tune out information that should, strictly speaking, not be of high relevance to our judgment.

2.

But in spite of revealing these deep flaws in our thinking, Lewis supplies a consistently redemptive narrative, insisting that this new psychological knowledge permits us to compensate for human irrationality in ways that can improve human well-being. The field of behavioral economics, a subject pioneered by Richard Thaler and rooted in the work of Kahneman and Tversky, has taken up the task of figuring out how to turn us into better versions of ourselves. If the availability heuristic encourages people to insure against very unlikely occurrences, “nudges” such as providing vivid reminders of more likely bad outcomes can be used to make their judgments of probability more realistic. If a bias toward the status quo means that people tend not to make changes that would benefit them, for instance by failing to choose between retirement plans, we can make the more beneficial option the default by automatically enrolling people in a plan, with the option to withdraw if they choose.
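
The mechanics of such a default are simple enough to sketch. What follows is a hypothetical illustration in Python, not drawn from any actual retirement system: the set of options is unchanged and anyone can still opt out; the only thing the choice architect alters is the starting state for people who never make an active decision.

    # Hypothetical sketch of a default-enrollment nudge (illustration only,
    # not modeled on any real retirement plan or agency system).
    from dataclasses import dataclass

    @dataclass
    class Employee:
        name: str
        enrolled: bool  # set initially by whichever default the employer chooses

        def opt_out(self) -> None:
            self.enrolled = False  # leaving the plan remains a single step

        def opt_in(self) -> None:
            self.enrolled = True

    def hire(names, auto_enroll: bool):
        # The "choice architecture": identical options, a different starting point.
        return [Employee(n, enrolled=auto_enroll) for n in names]

    staff = hire(["Ana", "Ben", "Chloe", "Dev"], auto_enroll=True)
    staff[1].opt_out()  # one employee actively declines; the rest never decide
    print([(e.name, e.enrolled) for e in staff])
    # [('Ana', True), ('Ben', False), ('Chloe', True), ('Dev', True)]

Because most people never touch that setting, whichever default is chosen quietly determines the outcome for the majority; on Thaler and Sunstein’s account some default has to be set either way, which is why they regard choice architecture as unavoidable.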

This is exactly what Cass Sunstein did when he oversaw the Office of Information and Regulatory Affairs in the Obama White House (Obama subsequently created a Social and Behavioral Sciences Team). He devised “choice architectures” or “nudges” that would work with the intuitive apparatus people have in order to guide their choices. In Lewis’s hands, the potential for doing good through such means can be a kind of magic, stealing like moonlight through the homes of sleeping Americans:

Millions of US corporate and government employees had woken up one day during the 2000s and found they no longer needed to enroll themselves in retirement plans but instead were automatically enrolled.

Sunstein and Thaler have described the political philosophy of such interventions as Libertarian Paternalism. It is “libertarian” because they do not impose mandates that narrow people’s choices, but merely frame choices or provide incentives that tend to make people “better off, as judged by themselves.” Their claim is that this form of influence, albeit often unconscious, is not manipulative or coercive because the possibility of a person choosing differently is not closed down. Lewis’s book ends with an uncomplicated celebration of this form of guided but purportedly free choice.

Lewis does not discuss the ways in which the same behavioral science can be used quite deliberately for the purposes of deception and manipulation, though this has been one of its most important applications. Frank Babetski, a CIA Directorate of Intelligence analyst who also holds the Analytical Tradecraft chair at the Sherman Kent School of Intelligence Analysis at the CIA University, has called Kahneman’s Thinking, Fast and Slow a “must read” for intelligence officers.

Babetski has described the use of behavioral science for deceptive practices that are part of the intelligence officer’s trade.1 He is envisaging this use as constrained by law and by intelligence goals that are ultimately determined by democratic governments. But in doing so he also reveals the potential for coercion that is implicit in these tools for anyone willing to wield it.

The deeper concern that Lewis’s happy narrative omits entirely is that behavioral scientists claim to have developed the capacity to manipulate people’s emotional lives in ways that shape their fundamental preferences, values, and desires. In Kahneman’s recent work he has developed the idea, originally set out in one of his papers with Tversky (who died in 1996), that we are not good judges of our own well-being. Our intuitions are unstable and conflicting. We may retrospectively judge an experience more enjoyable than our subjective reports suggested at the time. Kahneman, working with others in the field of positive psychology, has helped to establish a new subfield, hedonic psychology, which measures not just pleasure but well-being in a broader sense, in order to establish a more objective account of our condition than our subjective reflection can afford us.

This new subfield has led the way in combining research in behavioral science with “big data,” a further development that is beyond the scope of Lewis’s book, but one that has tremendously expanded the potential applications of Kahneman and Tversky’s ideas. Psychologists at the World Well-Being Project, at the University of Pennsylvania, have collaborated with Michal Kosinski and David Stillwell, computational psychologists from the Psychometrics Centre at the University of Cambridge and developers of myPersonality, a Facebook application that allowed users to take psychometric tests and gathered six million test results and four million individual profiles. Scores on these tests could be combined with enormous amounts of data from the user’s Facebook environment. The application has been used in conjunction with personality measures such as the “big five,” also known as the OCEAN model, which purportedly measures openness, conscientiousness, extraversion, agreeableness, and neuroticism. Words such as “apparently” and “actually,” for example, are taken to correlate with a higher degree of neuroticism. The architects of myPersonality claim that these tests, in conjunction with other data, permit the prediction of individual levels of well-being.
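
The statistical kernel of such claims can be sketched in a few lines. The example below uses invented posts and invented questionnaire scores, nothing drawn from myPersonality or the World Well-Being Project, simply to show the sort of correlation between word usage and a trait score on which this research rests; at scale the same idea is applied to millions of users and thousands of language features.

    # Hypothetical illustration: correlate how often users employ certain words
    # with a questionnaire-based neuroticism score. All data below are invented.
    import numpy as np

    posts_and_scores = [
        ("apparently nobody actually reads my posts", 4.1),
        ("great hike today, feeling wonderful", 1.8),
        ("I actually cannot believe this keeps happening to me", 3.9),
        ("lovely dinner with friends tonight", 2.0),
        ("apparently I was wrong again, as usual", 4.4),
    ]
    target_words = {"apparently", "actually"}

    def usage_rate(text: str) -> float:
        words = text.lower().split()
        return sum(w in target_words for w in words) / len(words)

    rates = np.array([usage_rate(text) for text, _ in posts_and_scores])
    scores = np.array([score for _, score in posts_and_scores])

    # Pearson correlation between target-word usage and the trait score
    r = np.corrcoef(rates, scores)[0, 1]
    print(f"r = {r:.2f}")

A correlation of this kind, estimated over enough people and enough language features, is what allows a model to assign trait scores to users who have never taken any test at all, the step that turns a psychometric curiosity into a targeting tool.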

The guiding idea for the World Well-Being Project is that we need not rely on our faulty subjective judgments about what will make us happy or what path in life will give us a sense of meaning.2 But if those studying behavioral influence are targeting the form of well-being that we value and the kind of happiness we seek, then it is harder to see how people’s being “better off, as judged by themselves” genuinely preserves their freedom. And this concern is not a purely academic one. The manipulation of preferences has driven the commercialization of behavioral insights and is now fundamental to the digital economy that shapes so much of our lives.

In 2007, and again in 2008, Kahneman gave a masterclass in “Thinking About Thinking” to, among others, Jeff Bezos (the founder of Amazon), Larry Page (Google), Sergey Brin (Google), Nathan Myhrvold (Microsoft), Sean Parker (Facebook), Elon Musk (SpaceX, Tesla), Evan Williams (Twitter), and Jimmy Wales (Wikipedia).3 At the 2008 meeting, Richard Thaler also spoke about nudges, and in the clips we can view online he describes choice architectures that guide people toward specific behaviors but that can be reversed with one click if the subject doesn’t like the outcome. In Kahneman’s talk, however, he tells his assembled audience of Silicon Valley entrepreneurs that “priming”—creating a suitable atmosphere—is one of the most important areas of psychological research, a technique that involves exposing people to cues they cannot consciously detect (for instance, flashing smiley faces on a screen at a speed that makes them undetectable) in order to influence their mood and behavior. He insists that there are predictable and coherent associations that can be exploited by this sort of priming. If subjects are unaware of this unconscious influence, the freedom to resist it begins to look more theoretical than real.

The Silicon Valley executives clearly saw the commercial potential in these behavioral techniques, since they have now become integral to that sector. When Thaler and Sunstein last updated their nudges.org website in 2011, it contained an interview with John Kenny, of the Institute of Decision Making, in which he says:

You can’t understand the success of digital platforms like Amazon, Facebook, Farmville, Nike Plus, and Groupon if you don’t understand behavioral economic principles…. Behavioral economics will increasingly be providing the behavioral insight that drives digital strategy.

And Jeff Bezos of Amazon, in a letter to shareholders in April 2015, declared that Amazon sellers have a significant business advantage because “through our Selling Coach program, we generate a steady stream of automated machine-learned ‘nudges’ (more than 70 million in a typical week).” It is hard to imagine that these 70 million nudges leave Amazon customers with the full freedom to reverse, after conscious reflection, the direction in which they are being nudged.

Facebook, too, has embraced the behavioral insights described by Kahneman and Thaler, having received wide and unwanted publicity for researching priming. In 2012 its Core Data Science Team, along with researchers at Cornell University and the University of California at San Francisco, experimented with emotional priming on Facebook, without the awareness of the approximately 700,000 users involved, to see whether manipulation of their news feeds would affect the positivity or negativity of their own posts. When this came to light in 2014 it was generally seen as an unacceptable form of psychological manipulation. But Facebook defended the research on the grounds that its users’ consent to their terms of service was sufficient to imply consent to such experiments.

Nathan Myhrvold, the former chief technology officer of Microsoft who attended Kahneman’s masterclasses in 2007, went on to become an adviser to Kahneman’s own consulting firm, TGG Group, chaired by the former Citigroup CEO Vikram Pandit. This group aims, according to its website, to “unpack the knowledge hidden in big data,” “design…choice architectures,” and “reduce noise in decision-making” (that is, to eliminate inconsistencies created by conflicting subjective judgments in organizations).

The website does not list any of TGG’s clients, though early articles mention its pitching Deutsche Bank. In conjunction with big data, behavioral science has become an extraordinarily powerful tool in the world of business and finance, and Kahneman has not shied away from these applications. Lewis’s book ends with the thrill of the phone ringing in Kahneman’s living room on an October morning in 2002, as we anticipate the announcement that he has won the Nobel Prize for his work with Tversky. But the story of their ideas silently transforming our social world, in conjunction with data we supply, has only just begun.

Since the electoral surprise of November 8, 2016, the magical tale of behavioral science making the world a better place has been replaced by a darker story in the public mind. It has been widely reported that Trump’s team, as adviser Jared Kushner puts it, “played Moneyball” with the election. News outlets have claimed that although Obama’s and Clinton’s teams both used social media, data analytics, and finely grained targeting to promote their message, Trump’s team, according to Forbes, “delved into message tailoring, sentiment manipulation and machine learning.”4 If this sinister level of manipulation seems far-fetched, it nevertheless reflects the boasts of Cambridge Analytica, the company they employed to do this for them, a subsidiary of the British-based SCL Group.

The company, whose board has included Trump’s chief strategist, Steve Bannon, has also been held responsible by the press for the outcome of the Brexit vote of June 2016. Its CEO, Alexander Nix, claims in a presentation entitled “The Power of Big Data and Psychographics” (which can be found on YouTube5) that Cambridge Analytica has used OCEAN personality tests in combination with data mined from social media to produce “psychographic profiles”—models that predict personality traits—for every adult in America. It did so without the consent of Kosinski and Stillwell, who developed the technique. Nix claims that they possess between four and five thousand data points on every potential voter, after combining the personality test results with “attitudinal” data, such as credit card spending patterns, consumer preferences, Facebook likes, and civic and political engagement.

There is an interesting slippage in the presentation between Nix saying that hundreds of thousands of people have filled out Cambridge Analytica’s questionnaires and his claiming they have this amount of data on every American adult. It is either an empty boast or there is a disturbing story to be told about how they acquired this information. Nix nevertheless claims that they can use their data in combination with tracking cookies, data from cable companies, and other media tools to target very specific audiences with messages that are persuasive because they are informed by behavioral science.

Alexander Nix, the CEO of Cambridge Analytica, which did data analysis and message targeting for the Trump campaign, New York City, September 2016 (Bryan Bedder/Getty Images for Concordia Summit)

In describing their “behavioral” methods of persuasion, Nix gives the example of a private beach owner who wishes to keep the public out. He might, Nix says, put up an “informational” sign that seeks to inform attitudes, such as: “Public beach ends here: private property.” Or he could seek “to probe an altogether much more powerful, underlying motivation” by putting up a sign that says “Warning: shark sighted.” The threat of being eaten by a shark, Nix claims, will be more effective. Similarly, in videos made by Cambridge Analytica’s research wing, the Behavioral Dynamics Institute, the group describes strategies for appealing directly to people’s underlying fears and desires in ways that are continuous with the insights of behavioral economics, but that seem less scrupulous about employing lies or half-truths to influence System One motivations.

This “behavioral microtargeting” is what Nix claims to have used when Cambridge Analytica worked on the Cruz campaign. But it is important to remember that this much-discussed video is a sales pitch.

Behavioral techniques, microtargeting, and data analysis are not new to political campaigns, as Sasha Issenberg has shown in The Victory Lab: The Secret Science of Winning Campaigns (2012). Accurate and detailed psychographic profiles are a product that everyone in this business wants, so that’s what Nix claims to be selling. Doubts have been raised about whether the Trump team in fact employed these techniques, though the Cambridge Analytica website has posted articles asserting that they did. There has also been some skepticism about whether the psychographic techniques Nix describes actually work.6

It is impossible to test the claims of organizations such as Cambridge Analytica, since there can be no control group, only the kind of ambiguous observational data that can be attained in a very “noisy” environment. But this doesn’t mean that there is no threat to democracy once we start relying less on information that can be critically scrutinized and more on unconscious manipulation.

Whatever the truth of Cambridge Analytica’s claims, the very existence of such companies tells us something important about the weight that unconscious influence, relative to reasoned argument, now plays in political campaigns. Kahneman’s TGG Group is not involved in the business of political influence. But according to Issenberg, in 2006, a private group at the University of California, Los Angeles, called the Consortium of Behavioral Scientists, which was run by psychologist Craig Fox and included Kahneman and Thaler, began to persuade Democrats that they needed to employ behavioral science. The secrecy of the group was a result of qualms about how such initiatives would be perceived. By now, behavioral strategies are in the open and are ubiquitous. The term “propaganda” has been replaced by “a behavioral approach to persuasive communication with quantifiable results.”

Companies such as SCL Group claim to have the weapons to win large-scale ideological struggles. We can watch online a video of Nigel Oakes, the head of SCL Group, delivering a presentation to the US Department of State on behalf of SCL Defence, one of its subsidiaries. He points out that traditional advertisers who appeal to individuals and capture 0.6 percent of their market are considered very successful. Strategic communication, however, requires group communication: “There’s no point in having .6 percent of Syrians supporting you or .6 percent of al-Qaeda…. We’ve got to convince the entire communities.”7 The part of the pitch in which he describes his methods is not available for public viewing.

The claim that SCL can deliver this is an extraordinary one, even for a company that has experience in the field through psychological operations led by Steve Tatham, a former commander in the British navy. He worked, for example, with Andrew Mackay, the commander of the British armed forces in Afghanistan, in order to “win” areas that had been “flattened by kinetic activity” through persuasive techniques derived from behavioral economics and refined “in theater.”8

Many of the relevant techniques were suggested directly by Kahneman and Tversky in their 1995 essay “Conflict Resolution: A Cognitive Perspective.” Tatham and Mackay, in a book on their initiatives, Behavioral Conflict: Why Understanding People and Their Motivations Will Prove Decisive in Future Conflict (2011), describe how they were used in the Afghan war. They employed prospect theory, for example, to think about motivations, realizing that the avoidance of further losses was more important to local populations than the potential realization of gains. The reconstruction of the Kajaki Dam in Helmand, while strategically important, was too remote an incentive to limit insurgent activity around the dam. More immediate incentives had to be created. Kahneman and Tversky’s insight into the “wisdom of crowds” was employed when thinking about decision-making in an Afghan shura, or assembly, where the British sought to empower those individuals who had “the right ideas but the least amount of authority.”

We cannot, however, gather data on the successes of these initiatives, since the psychological factors involved are opaque and the counterfactuals impossibly complex. When the party wishing to persuade a population arrives with tanks, guns, and drones, and the population itself is internally divided, we cannot easily determine the extent to which cooperation with the occupying forces is the result of behavioral techniques. There is as yet no scientific evidence of how the military can noncoercively influence group behavior on a large scale in zones of conflict. And claims about winning over the majority of a population in any given state are entirely untested.

Nevertheless, SCL Group, which claims to have mastered behavioral influence both online and in the field, recently signed a $500,000 contract with the State Department and, according to The Washington Post, is in negotiations with the Trump administration to “help the Pentagon and other government agencies with ‘counter radicalization’ program.”9 They claim to have offers for their services from all over the world. This in turn will doubtless engender competition.

The idea of Libertarian Paternalism, in which the tools of the new behavioral sciences remain in the hands of benign liberal mandarins, has come to seem hopelessly quaint. In a more combative and unstable environment there must clearly be greater concern about our capacity to regulate the uses of behavioral science, the robustness of the fundamental research, and the political or financial motivations of any behavioral initiatives to be employed or countered.

3.

Nonrational forms of persuasion are clearly nothing new. But many social psychologists credit Kahneman and Tversky with a profoundly original theory of the human mind, one that exposes systematic, unconscious sources of irrationality, just as Freud’s idea of the unconscious was taken to do by previous generations of psychologists. The view that social psychology and behavioral economics are rooted in robust fundamental research of this kind lends the imprimatur of cutting-edge science to the millions of behavioral initiatives now being undertaken across the world.

When Kahneman’s Thinking, Fast and Slow was published in 2011, it elicited comparisons to the innovations of Descartes, Darwin, and Freud. But philosophers have long had qualms about the two-systems model Kahneman sets out there. In 1981, L. Jonathan Cohen published a paper entitled “Can Human Irrationality Be Experimentally Demonstrated?” In it he developed various lines of criticism of Kahneman and Tversky’s work, but the one to which Kahneman was particularly moved to respond was the idea that we cannot easily separate intuition from other cognitive functions, that we in fact have no choice but to rely on intuition in our reasoning.

Kahneman rejected the idea that there can be a realm of intuition that cannot be rationally evaluated because “people often find inconsistent intuitions appealing.”10 If our intuitions conflict, rational deliberation will have to be called upon to adjudicate the disagreement. However, in his ongoing defense of this position he has failed to take into account what Cohen and other philosophers mean by “intuition,” and so failed to engage the sense in which intuitions are necessary for deliberation.

In Thinking, Fast and Slow, Kahneman characterizes System One intuitions as fast and automatic, whereas System Two reasoning is slow and deliberate. In other words, he characterizes our intuitive judgments phenomenologically, by describing the speed and effortlessness with which they come to us. They are, in effect, snap judgments. When philosophers describe our reliance on intuition, however, they are not concerned with the phenomenology of judgments per se but with the architecture of justification.

We have to rely on intuition, they contend, where our discursive justifications come to an end, for instance in the fundamental laws of logic, such as the principle of noncontradiction, or basic rules of inference. We cannot justify our belief in these laws in ways that don’t beg further questions. Our justification for employing them rests on our finding them self-evident. We cannot deliberate rationally without them. Since they are the necessary basis for any deliberative thought, we cannot characterize mental functions as straightforwardly belonging to an intuitive System One or a deliberative System Two.

A further problem arises when we try to assign errors to a particular set of systematic biases, or attribute them to specific flawed heuristics. If we wish to accuse someone employing the word “probable” or “likely” of making a false probabilistic judgment, we need to be sure that they are employing the very same concept of probability that is the object of analysis in probability theory. If we wish to accuse someone of making false probabilistic judgments because they are employing a faulty heuristic, we need to be sure that the correct explanation isn’t that certain people have some complicating beliefs in the background, in luck or fate or God, for instance.

Similarly, when people’s judgments appear to be affected by irrelevant stimuli, for example when a reminder of our mortality seems to make us more risk-averse (a so-called priming effect), a very large number of potential causal factors would have to be ruled out before such irrational biases could be confidently described as features intrinsic to System One. If it is not a simple task to divide thinking into two separate systems, it will not be easy to reduce the complex interactions between unconscious biases, background beliefs, and deliberation in any given case to an identifiable and systematic error.

These objections, if correct, would suggest that many of the psychological experiments Kahneman cites in Thinking, Fast and Slow would be impossible to replicate. And indeed, the very year that it was published, a replicability crisis emerged in psychology, most severely in social psychology. The psychologist Ulrich Schimmack has recently created a Replicability Index that analyzes the statistical significance of published results in psychology. He and his collaborators, Moritz Heene and Kamini Kesavan, have applied it to the studies cited in Thinking, Fast and Slow to predict how replicable they will be, assigning letter grades to each chapter. Kahneman and Tversky’s own work gets good grades, but many other studies fare very poorly. The chapter on priming, for example, gets an F.11 As reported in Slate, the overall grade of the chapters assessed so far is a C-.12 Kahneman has posted a gracious response to their findings, regretting that he cited studies that used such small sample sizes.13

This seems to represent a serious challenge to the “biases and heuristics” approach to persuasion. Psychologists have not yet uncovered the fundamental mechanisms governing human thought or finally found the secret key to mind control. Since the human mind is not straightforwardly a mechanism (or we are at least far from proving that it is) and its workings remain unfathomably complex, they may never succeed in that venture. Some of the biases they have identified can easily be redescribed in ways that don’t make them seem like irrational biases at all; some are not transferable across different environments. The fundamental assumption of two discrete systems cannot be sustained.

But this does not mean we can disregard the propaganda initiatives derived from Kahneman and Tversky’s work. Many of the persuasive techniques being employed in these efforts have been known intuitively for centuries. They have been used by governments, religions, and the arts.14 Now, however, these techniques are being extensively tested and combined with sophisticated data analysis. The two-systems view has managed to lend the appearance of legitimacy to techniques that might otherwise appear coercive. Experts, algorithms, and nudges may be presented as a form of collective rationality, assisted institutionally by markets and governments, stealthily undoing the knots of irrationality in which individuals have inevitably entangled themselves.

On this model, it appears that System Two, implemented from above, can liberate us from the flaws of System One. If we reject the distinction between these two supposedly separate psychological systems and instead pay attention to what can and cannot be rationally justified, it will be more evident that behavioral change imposed on us through nonrational means not only is more coercive than that which comes about through the rational evaluation of justifications, but also erodes our capacity to reflect rationally and critically on our social world. The sources of influence that shape social behavior, markets, and politics increasingly become invisible and rationally inscrutable.

Comparatively little attention has been paid to overcoming the biases that psychologists have identified, except insofar as this might serve the national security objective of discouraging extremism through the introduction of measures to combat effects such as confirmation bias.15 It is still possible to envisage behavioral science playing a part in the great social experiment of providing the kind of public education that nurtures the critical faculties of everyone in our society. But the pressures to exploit irrationalities rather than eliminate them are great and the chaos caused by competition to exploit them is perhaps already too intractable for us to rein in. In The Undoing Project, Lewis tells a story full of promise about the unraveling of obsolete assumptions. But Kahneman and Tversky’s ideas have escaped the confines of their troubled friendship and we have yet to see how much will be undone.