This past June The Wall Street Journal ran a piece with the headline “Magic Mushrooms. LSD. Ketamine. The Drugs That Power Silicon Valley.” Though titillating, it should not have been surprising. For years, the tech elite’s predilection for brain hacking has been part of their mystique; more recently, Elon Musk touted ketamine as a better antidepressant than anything in Big Pharma’s armamentarium. Since the 2018 publication of Michael Pollan’s How to Change Your Mind and its subsequent Netflix adaptation, conversations about psychedelics have gone mainstream.

So has their use. Search on the Internet for “moms microdosing mushrooms” and a long roster of articles will appear. (Microdosing is taking small, barely perceptible amounts of a drug.) According to a 2020 survey, annual use of psychedelics “reached its highest level since 1982 among college students.” For a hefty fee and a perfunctory medical assessment, adults can sit in zero-gravity chairs listening to playlists curated for their calming effect as they are infused, legally, with ketamine, whose reputation as a date-rape drug precedes its popular use as a treatment for a host of psychological and physiological ailments, including anxiety and headaches. Last year, Colorado decriminalized psilocybin, legalizing its use in controlled settings, following Oregon, which did so two years earlier. It is estimated that more than five million American adults use hallucinogens.

For Nita Farahany, a professor of law and philosophy at Duke Law School, all this activity is evidence that an increasing number of people are exercising what she calls their “cognitive liberty.” Her provocative and, at times, chilling recent book, The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology, is an argument for the creation of a new international right to cognitive and mental privacy, based in large part on measures already codified in the Universal Declaration of Human Rights, updated to protect our neural data.

In Farahany’s estimation, cognitive liberty includes, among other things, mental privacy, freedom of thought, and the right to self-determination. It would give individuals significant, but not absolute, control over data collected from their brains via various neurotechnology devices, such as those that capture brain wave data, as well as the freedom to decide for oneself, with minimal government interference, which kinds of mind-altering drugs to take and what brain modifications to make. According to Farahany, “To the extent that smart drugs and devices improve our focus, motivation, attention, concentration, memory, we ought to celebrate rather than prohibit them.” It’s a radical position, especially in its broadest interpretation. It suggests, for example, that taking an ADHD drug such as Adderall to write a college paper, or microdosing psychedelics to get an edge at work, is not just acceptable but worthwhile.

Is this a defensible position? Only if self-determination outweighs science. There have been few double-blind, placebo-controlled studies of psychedelics, especially those taken in tiny increments, because, as the authors of a 2019 paper published in PLOS One observed, “the current legal and bureaucratic climate makes direct empirical investigation of the effects of psychedelics difficult.” Instead, they queried 263 microdosers about their experiences, looking for an “expectancy bias” between what participants imagined would happen and what actually did. What the researchers found was that

all participants believed that microdosing would have large and wide-ranging benefits in contrast to the limited outcomes reported by actual microdosers. Notably, the effects believed most likely to change were unrelated to the observed pattern of reported outcomes.

Another study, published last year in the journal Translational Psychiatry, found that “low doses of psilocybin mushrooms can result in noticeable subjective effects and altered EEG rhythms, but without evidence to support enhanced well-being, creativity and cognitive function.” A survey study in the International Journal of Neuropsychopharmacology noted that the negative effects of microdosing “may be underrepresented.” (To be clear, these are drugs taken without medical supervision, unlike MDMA, which has been shown to reduce PTSD when paired with psychotherapy, or psychedelics administered in a controlled setting by trained professionals.)

Similarly, there is little evidence that medications meant to help people with attention disorders actually improve brain function in those without these conditions. Certainly, people think they do, but the research—which is thin at best—is fuzzy. A study from 2018, in which it was hypothesized that Adderall would improve cognition in healthy students, found, to the researchers’ surprise, that the medication failed to improve either reading comprehension or fluency and actually impaired working memory. A more recent study of Adderall and two other popular “smart” drugs, Ritalin and modafinil, by researchers at the University of Cambridge and the University of Melbourne, found that while these drugs increase motivation, “a reduction in quality of effort, crucial to solve complex problems, annuls this effect.”

Because both Adderall and Ritalin are stimulants, they do increase wakefulness and raise heart rates, which can make them seem as though they are also boosting cognition. And because they increase dopamine levels, there is some evidence that, with regular use, they are addictive. Modafinil, a drug developed to treat narcolepsy and used by military pilots to overcome fatigue, is considered safer than the others because it works on a different neural pathway. While it definitely increases wakefulness and attention to tasks that may otherwise be considered boring, its effects on cognition are mixed.

An informal review of twenty years of data, published in 2020 and updated two years later, concluded that

there are definitely positive effects of using modafinil as a smart drug, especially in improving the attention span and memory of individuals with cognitive deficiencies. In students or adults working in a fast-paced environment, however, taking modafinil has not been shown to improve general performance.

Much like with Adderall, it appears that modafinil improves cognition for people “at the lower end of the spectrum” and “impair[s] people who are at the optimum level of cognitive function,” according to researchers affiliated with the University of Nottingham.

If the goal is cognitive liberty, then none of this matters: taking drugs that are understudied and may have negative consequences is one’s right, much like the choice not to wear a motorcycle helmet while speeding down the highway, even though it is demonstrably safer to wear one. Farahany also argues that, in the interest of equity, brain-altering substances should be equally available to everyone. While it is not clear how this would work—would they be covered by insurance, sold over the counter, distributed without a prescription to healthy college students?—this only levels the playing field if equity simply means equal access, since these drugs will affect users differently.

The playing field, in fact, provides an object lesson. Lance Armstrong, Marion Jones, and Russian Olympians from at least as far back as 2008, among many others, took advantage of performance-enhancing drugs to beat their competition. They were caught, stripped of their medals, and branded as cheaters. In Farahany’s estimation, this was an appropriate response, because using performance-enhancing drugs in sports breaks the rules of the game. “Games are self-imposed systems; their rules and components define them,” she writes. “If it’s in the rules, it’s part of the game. If it’s not, it isn’t! It’s just that simple.” For this reason, she argues, sports are not an appropriate analogy to what she is proposing:

Athletics and chess are artificial, man-made systems. But life is not, and the right to self-determination over our brains and mental experiences in our everyday lives is much more important than any game.

Yet one thing we learned from the various doping scandals is that it is very hard to resist using performance-enhancing drugs when others are using them. Life may be more important than a game, but that doesn’t mean people don’t experience the stress of competition in their everyday lives and look for ways to gain an advantage or, at least, to relieve the stress. Taking away the stigma of cheating by making these medications widely available would increase their use—a public benefit, it would seem, following Farahany’s logic—because people, especially students, will not want to feel disadvantaged or left behind. Peer pressure, parental pressure, work pressure, and the pressure of time itself are real and powerful forces. Under their sway, an individual’s ability to act autonomously may be compromised, so that, paradoxically, the freedom to take smart drugs becomes an impediment to self-determination.

Farahany rejects this argument, too, citing a 2014 study of German students that found that “individuals are no more willing to take cognitive enhancers [CE] when others do so, but they are more likely to avoid them when others disapprove of them.” Yet the same study concluded that

prevalent CE-drug use among peers increased willingness, whereas a social environment that strongly disapproved of these drugs decreased it. Regarding the respondents’ characteristics, pronounced academic procrastination, high cognitive test anxiety, low intrinsic motivation, low internalization of social norms against CE-drug use, and past experiences with CE-drugs increased willingness.

In 2011 Farahany’s home institution, Duke University, amended its honor code to classify the unauthorized use of smart drugs as cheating.

Farahany is on more familiar, less contested ground when she writes about the many novel ways that corporations and governments are monitoring our minds, intruding on our interior lives, and compromising our cognitive liberty. By now it is well established that digital privacy exists only at the discretion of the companies that mediate our online engagement. But what happens when those companies, an employer, school administrators, or the government have access to our thoughts—or what they interpret to be our thoughts—before they are articulated or shared? What if they can “see” into our brains?

This is not wholly speculative. Apple, for example, has a patent to integrate sensors that track brain wave activity into its earbuds; these sensors can determine if someone is alert and paying attention, or dragging and disengaged. This sounds benign until you consider who, in addition to yourself, might want access to that data.

Farahany reports that an American neurotech company, BrainCo, has been selling EEG headsets to schools in China, where they are used to track student engagement. Farahany also suggests that the Chinese government may be using brain data to predict people’s political beliefs. Brainwave Science, a company that included retired general and Trump conspiracist Michael Flynn on its board of directors, has sold its products to governments around the world. One of these products, iCognative, ostensibly “extracts information from people’s brains” and has been used successfully to prosecute murder cases in Dubai. In one case, investigators showed potential suspects pictures of the crime while monitoring their brain waves with an EEG headset. “Purportedly,” Farahany writes, “a photo of the murder weapon triggered a characteristic ‘recognition’ pattern” in the brain of one of the suspects, who then confessed.

Closer to home, corporate wellness programs, which are often exempt from HIPAA privacy regulations, have been integrating EEG brain wave technology into their offerings, typically without revealing to users what data is being collected and what is being done with that data. This means that it can be sold to data brokers, who then sell it to whoever is willing to pay for it. Companies can use the data to screen potential new hires. Insurers can use it to determine premiums. Colleges might use it to aid admissions decisions. Brain-computer-interface (BCI) devices, such as the one being developed by Elon Musk’s company Neuralink, promise to implant sensors directly into the brain in order to merge human intelligence with artificial intelligence. Since this could open our bodies to malware, it suggests a whole new way of losing our minds.

It is also likely to give advertisers insights into our subconscious, affording them an ever more specific way of targeting us. Farahany quotes Howard Chizeck, a professor of electrical engineering at the University of Washington, explaining how BCI devices will soon be used to play online games that can be designed to extract personal proclivities. The game operator “could flash pictures of [gay and straight] couples and see which ones you react to,” he told her. “And going through a logic tree, I could extract your sexual orientation. I could show political candidates and begin to understand your political orientation, and then sell that to pollsters.” Last year, the cosmetics conglomerate L’Oréal partnered with a neurotechnology company called Emotiv to guide consumers through a “unique fragrance consultation” by connecting “neuro responses to fragrance preferences through a multi-sensor EEG-based headset.” A press release explains:

The headset uses machine learning algorithms that interpret EEG, while consumers experience proprietary scent families, to provide the ability to accurately sense and monitor behavior, preferences, stress, and attention in real-world contexts. The first-of-its-kind experience helps consumers determine their perfect scent suited to their emotions.

Even neurotechnology that serves a public good, such as EEG sensors worn by truck drivers or air traffic controllers to monitor fatigue, can be used in insidious ways, especially to surveil and subordinate workers. In the United States, privacy laws covering consumer data are already notoriously weak, and none of them anticipate a technology that harvests our neural activity and uses it against us. This is where Farahany’s proposed right to cognitive liberty, grounded in human rights law, is both prescient and urgent. As she envisions it, in the workplace,

the human right to mental privacy would prohibit unauthorized access to employees’ neural data without an explicit law justifying doing so, based on a compelling societal interest in so doing, in a manner that is narrowly tailored to balance the interests of the employee with those of society.

Even then, employers

should be limited to collecting data for the specific purpose it serves, and employers should be prohibited from mining brain data for other insights about the employee to ensure that the impact on employee privacy is proportional to the societal benefit it affords.

Yet finding that balance in what Shoshana Zuboff, a professor emerita at Harvard Business School, has called “the age of surveillance capitalism” will be difficult, since data has become a valuable commodity that tips the scales in favor of commercial interests. In the halls of Congress, at least, the interests of Big Tech have so far prevailed over the protection of individual privacy.

For Farahany, cognitive liberty also requires an individual right “to obtain and record your own brain activity,” which should only be shared if the individual explicitly opts to do so. This seems sensible and unassailable—they’re our brains, after all—but it’s hard to imagine how, once that activity is recorded on our digital devices, the brain does not become a Pandora’s box of all sorts of insights, real and construed. Though Farahany is concerned that, once we know that others know what we are thinking, we will self-censor our thoughts or be targeted by governments for our beliefs, she still presents this as an opportunity:

Just as we exchanged access to our web search history for free and powerful internet browsers, we will have reasons to want to share the brain data these devices collect.

It’s a dubious calculus: the freedom to trade one’s neural activity—which, as Farahany points out, “powerful machine learning algorithms are getting better and better at translating…into what we are feeling, seeing, imagining, or thinking”—for access to websites, other digital assets, or monetary gain may be a slippery slope at the bottom of which we find that we have inadvertently relinquished control over the very thing that makes us sentient, unique individuals. Rights do not protect us from ourselves if we are willing to waive them.

One can imagine Daniel Barron cheering this idea of Farahany’s. At the moment, he writes in Reading Our Minds: The Rise of Big Data Psychiatry, Google and other tech companies have enormous amounts of information about our interior lives that, if shared, would be of tremendous value to psychiatrists such as himself. Barron, the medical director of the Pain Intervention and Digital Research Program at Brigham and Women’s Hospital, is keen for his profession to cast off the old ways of diagnosis and treatment in favor of data derived, in part, from patients’ digital activities. He is especially disheartened that psychiatry relies on imprecise impressions of a patient’s condition, while other medical specialties have been able to routinize their practice through quantification, using standardized measures such as blood pressure and lung function and urine output, to distinguish the pathological from the normal:

Unlike in heart disease, where a cardiologist will repetitively measure blood pressure to ensure that an anti-hypertensive is having the desired effect, I have no quantitative measures to determine whether and how much my treatments work: An anti-psychotic “works” if a patient looks and feels less psychotic. And because I can’t connect the way my patient looks and feels with specific neurons or receptors, I can’t prescribe treatments to alter those neurons and receptors with any reasonable precision.

Talking—so often a defining feature of psychiatric diagnosis and treatment—with its reliance on vague and subjective language, cannot come close to revealing the inner workings of a patient’s brain, Barron suggests, or to guiding the physician in dosing a drug with something approaching exactitude.

Barron’s solution, which he hopes to see more widely adopted by his colleagues, is to find data that can be quantified in such a way that it gives psychiatrists accurate, real-time insights into a patient’s behavior and mood. This approach is already being employed with some success. Barron cites the work of researchers who used accelerometers to measure when children with ADHD were most hyperactive during the school day (math and reading) and when they were not (recess) in order to find the smallest clinically effective dose of Adderall needed to improve behavior. (Accelerometers used to be relatively large devices; now they are commonly embedded in phones and watches.) Barron also points to work by scientists who have predicted psychotic episodes with surprising accuracy by using computers to analyze speech patterns from one-hour, open-ended interviews “far too subtle for a clinician to detect.” And he points to studies by Dr. Munmun De Choudhury, who has been using social media posts to predict postpartum depression and other mental illnesses.

In his own practice, Barron believes he might extract meaning from patients’ social media posts—not just from their content but from their frequency. “Because social media is digitized and time-stamped,” he writes, “we can use it to measure, trace, and quantitatively predict not only someone’s baseline behavior, but whether they are moving away from that baseline.” As an example, he tells the story of a young woman he calls Irene, whose mother is concerned by her increasing detachment from friends and family, and whom Barron diagnoses with schizophrenia:

Although I stated that Irene was delusional, looking for clandestine messages on the internet, I didn’t bother to look at her browser history and see how often and for how long she visited Urban Dictionary or Spotify or the astrology pages or whether this pattern of internet use differed from her pattern six months ago. I could have measured how many hours Irene spent alone by seeing how often her geolocation was her room. By consulting her call and text log, I could have measured if she was indeed more isolated from her friends than she was one, three, or six months ago.

Imagine how differently my conversation with Irene and her mother might have gone. Instead of telling Irene, “I’m sorry to hear that you’ve distanced yourself from family and friends,” I could have sat with Irene in front of a computer screen and, looking at a summary of her call and text log, said something like, “Irene…I can see here that three months ago, you texted your friends fifty times per day and then—around the same time your internet use began to climb—it looks like you texted them progressively less until you stopped texting them entirely.”

He adds:

This is not just a nifty thing to do with Big Data. Measuring relevant symptoms is the very nuts-and-bolts of clinical work. Without measuring Irene’s clinical problem, I cannot say with confidence whether or to what extent my intervention has improved that problem.

I’ve quoted Barron at length because it is here that his quest to find quantifiable measures to confirm his otherwise subjective conclusion seems more descriptive than scientific. Even if we agree that texting a lot and then texting not at all is a sign of someone’s social disengagement, it is not clear how it is a normative measure of illness. Would texting five times a day as a baseline, rather than fifty, and then ceasing to text at all demonstrate less disengagement and less psychosis? Or is it that the baseline numbers aren’t actually useful, since the real point is that over time they declined to zero? If that is true, then the diagnosis is not grounded in hard numbers so much as it is in an assumption that an overall decline, whatever its size, is meaningful. Arguably, then, quantifying the activity is less important than observing and being able to point to a change in behavior. When numbers have no agreed-upon, scientifically derived, extrinsic meaning, quantification is unavailing. Finding that meaning is the hard and necessary task that must be undertaken before Big Data can be used to diagnose and assess individual cases.

Inadvertently, Barron tells a cautionary tale when he writes about Dr. Tom Insel, who left the National Institute of Mental Health, where he was director, to work at Verily Life Sciences, an offshoot of Google devoted to creating “precision healthcare” derived from Big Data. Earlier this year, Verily laid off two hundred employees, shut down its health care analytics software division, and changed its focus to delivering what it calls “precision risk insurance.” By then Insel was gone, having joined Mindstrong, a start-up that aimed to diagnose depression and other conditions by passively analyzing users’ typing speeds, scrolling patterns, and typos. According to a report in STAT News, in 2018 health care workers involved in a pilot program “expressed concern that Mindstrong’s predictive technology didn’t work.” There were also concerns about participants’ privacy. Eventually, the company moved away from that technology and became an app-based mental health provider. In March the company shut down.

It would be easy—and shortsighted—to interpret these washouts to mean there is little or no possibility that data can assist (not replace) mental health professionals. But finding ways to harness data, big and small, will demand the same devotion to trial and error that makes drug development, where 90 percent of new medications fail, slow and painstaking. And it should require a renewed commitment to patient privacy that goes beyond the inadequacies of HIPAA.

At the end of his book, Barron suggests that “it’s clear that quality mental health care is something that people want and will pay a great deal of money for”—yet the reason cited for Mindstrong’s demise was that it could not provide cost-effective care. As Roy Perlis, the associate chief of research at Massachusetts General Hospital’s Department of Psychiatry and a professor at Harvard Medical School, observed after the company announced it was shuttering, “Americans value mental health extremely highly until they have to pay for it.”

Even so, Barron, like Farahany, foresees a sunny future where our neural data will have financial as well as social value. “Measure-based care could very well be the ultimate form of healthcare capitalism,” he writes, “a position where the interests of companies and patients are aligned to produce the best outcome.” The battle for your brain is just beginning.


A previous version of this article used an old name for what is now the Pain Intervention and Digital Research Program.