The article prompted Nature to conduct a straw poll of its readers. One in five of the survey’s respondents admitted to using Ritalin, modafinil, or beta-blockers to aid their focus, concentration, or memory.55
Of course, one downside of this academic brain-hacking is that none of these substances are risk-free. Making the decision to use one of these “Professor’s little helpers” to get ahead of your peers requires some careful balancing of short-term gains against potential downsides. These could include headaches, diarrhea, agitation, sleeplessness, odd behavior,56 hair loss, and the need for increasing doses to get the same effect.
Because the side effects of off-label prescription drug use aren’t widely tracked, it’s hard to tell just how safe or risky this use is, although the indications are that moderate or occasional use isn’t likely to lead to serious or lasting problems. But this uncertainty has led to experimentation around less restricted—and often less studied—substances in the quest for the perfect cognitive enhancer, the one that boosts your brain’s abilities without any unwanted downsides.
In 1973, the Romanian researcher and medical doctor Corneliu Giurgea published an article on a new drug called piracetam.57 What was unusual about piracetam was its seeming inertness compared to other pharmaceutical drugs. According to Giurgea, even at high doses, it showed “no sedation or tranquilization, no stimulation, no interference with synaptic transmitters, no acute or long-term toxicity…no cortical or subcortical EEG changes, no interference with limbic after-discharges, reticular sensory or direct arousal threshold” and “no changes of the cardiovascular, respiratory, gastrointestinal systems.” In other words, it did pretty much nothing. Except that, based on Giurgea’s research, it protected against severe brain hypoxia (oxygen deprivation), and it enhanced learning and memory.
To Giurgea, piracetam represented a unique class of drug, one that enhanced the integration of evolutionarily important brain functions like memory and learning without obviously deleterious side effects. He considered this class of drug so unique that he coined a new term for it, from the Greek roots “noos,” meaning “mind,” and “tropein,” meaning “to bend or turn toward.” And so “nootropics” were born.
Since then, the term nootropics has been used to cover pretty much all types of substances that purportedly enhance brain function. But, increasingly, purists are going back to Giurgea’s roots and using it to describe cocktails and “stacks” that improve function without unwanted side effects. To them, this means discounting those off-label prescription drugs.
Piracetam remains a popular nootropic and can be readily purchased in many countries (although it occupies a legal gray zone in some), and there’s a growing body of research on its use and effects. A quick search on Google Scholar pulls up over 19,000 papers and articles on the substance. That said, its benefits to healthy adults remain ambiguous. But this doesn’t stop people from using it to, in the words of one supplier, “give you a serious cognitive edge without putting your health at risk.”
This is just the tip of the cognitive-enhancement iceberg though. Increasingly, advocates like George Burke and others are experimenting with esoteric cocktails of substances to boost their brains and to tap into what they believe is their full potential. And it’s not hard to see why. If your livelihood and ambitions depend on your ability to squeeze every last ounce of performance out of your brain, why wouldn’t you try everything possible to make sure you were running at peak performance?
This, of course, assumes that most people aren’t running on all four cylinders in the smarts department in the first place, and that our brains have the capacity to work better than they do. In Limitless, the plot depends on the old myth that we’re only using 10–20 percent of our brains, and that chemical enhancement can unlock the rest of our presumably unused potential. Sadly, while this works as a plot device, it’s pure scientific bunkum. Despite the tenacity of the myth, research has shown that we use all of our brain. Admittedly, we still don’t know precisely what different parts of it are doing at any given time, or why they do what they do. But we do know that we don’t typically have unused cognitive capacity just waiting to be jump-started.
What’s more interesting, and potentially more relevant, is the idea developed in Limitless that we could chemically enhance memory storage and recall, and our ability to make sense of the seemingly disparate pieces of information we all have tucked away in our heads. Certainly, I struggle with memory and recall, and my ability to make sense of and act on new information suffers as a result. It’s easy for me to fantasize about how much smarter I’d be if everything I’ve experienced or learned was always at my fingertips, just waiting to be combined in a flash of genius. And while I may be using 100 percent of my brain, it doesn’t take much to convince me that 90 percent of this is, at times, a dysfunctional mess.
As someone who depends on their brain for a living, I must confess that the idea of clearing the fog and making things work better is attractive. Surely, with better recall and data processing, I’d be better at what I do. And maybe I would. But there’s a danger to thinking of our brains as computers, which of course is where these ideas of memory and data processing come from. It’s tempting to conflate what’s important in our heads with what we think is important in our computers, including more memory, faster recall, and more efficient data processing. If we follow this pathway, we run the risk of sacrificing possibly essential parts of ourselves for what we mistakenly think is important.
Unfortunately, we don’t know enough about the human brain yet to understand the benefits and dangers of how we think about human intelligence and success, although we do know that comparing what’s in our head to a computer is probably a bad idea.58 More than this, though, we also have a tendency to conflate achievements that we associate with intelligence, with success. But what if we’re using the wrong measures of success here? What if our urge to make more money, to publish more papers, or to be famous, leads to us ultimately risking what makes us who we are? And does this even matter?
To many people, I suspect it doesn’t. And this leads into the ethics of smart drugs, regardless of what they can or cannot do for us.
If You Could, Would You?
On April 1, 2008, a press release was published announcing that the US National Institutes of Health (NIH) was launching a new initiative to fight the use of brain-enhancing drugs by scientists. Spurred on by a perceived need to prevent pill-induced academic advantages, it claimed that:
While “doping” is now accepted as a problem among athletes, it is less widely known that so-called “brain doping” has been affecting the competitive balance in scientific research as well.
The release went on to announce the formation of the new World Anti-Brain Doping Authority, or WABDA.
It should have been apparent from its publication date that the press release was an elaborate April Fool’s joke. It was the brainchild of Jonathan Eisen of the University of California, Davis,59 and it played into a growing interest in the use of nootropics and other cognitive enhancers in academia and the ethical questions that this raises.
A few days after the press release hit the internet, the journal Nature published the results of its informal survey of 1,400 people on their academic smart-drug habits. The survey was an open, global online survey, and so at best provides only a rough indication of what academics were doing at the time. There was no control over who completed it, or how honest they were. Yet it still provided a fascinating insight into what, up to then, had been the stuff of rumor and conjecture.
The survey asked participants whether they had ever used Ritalin, modafinil, or beta-blockers for non-medical purposes. Those that had were then asked a number of additional questions about their usage habits. Around one in five respondents said they had used one or more of these drugs to increase their focus, concentration, or memory. Ritalin was the most frequently used substance, and respondents between eighteen and twenty-five years old were the most prevalent users (with an interesting spike for those between fifty-five and sixty-five, suggesting a fear of late-career performance inadequacy). What was even more interesting to me was that 69 percent of the respondents said they’d risk mild side effects to take these drugs themselves, and 80 percent thought that healthy adults should be free to use them if they wanted to.
In stark contrast to competitive sports, these respondents were remarkably indifferent to their fellow scientists getting a drug-induced leg up.60 It seems—at least from this somewhat qualitative sample—that there’s an ambivalence around using brain enhancements to succeed academically that we don’t see in other areas.
This is an attitude I’ve also come across in talking to colleagues, and it’s one that I must confess surprises me. Academia is deeply competitive, as are most professions that depend on mental skills. And yet, I find it hard to detect much concern over others getting a competitive advantage through what they imbibe. That doesn’t mean we shouldn’t be concerned, though.
In his 2004 commentary on Cosmetic Neurology, Anjan Chatterjee asked five questions of readers that were designed to test their ethical boundaries. These included:
1. Would you take a medication with minimal side effects half an hour before Italian lessons if it meant that you would learn the language more quickly?
2. Would you give your child a medication with minimal side effects half an hour before piano lessons if it meant that they learned to play more expertly?
3. Would you pay more for flights whose pilots were taking a medication that made them react better in emergencies? How much more?
4. Would you want residents to take medications after nights on call that would make them less likely to make mistakes in caring for patients because of sleep deprivation?
5. Would you take a medicine that selectively dampened memories that are deeply disturbing? Slightly disturbing?
These were designed to get people thinking about their own values when considering cognition-enhancing drugs. To this list, I would add five more questions:
1. Would you take a smart drug to help pass a professional exam?
2. Would you take a smart drug to outshine the competition in a job interview?
3. Would you take a smart drug to increase your chances of winning a lucrative grant?
4. Would you use a smart drug to help win a business contract?
5. Would you use a smart drug to help get elected?
On the face of it, Chatterjee’s questions focus on personal gains that either don’t adversely impact others, or that positively impact them. For instance, learning a language or the piano can be seen as personal enrichment and as developing a socially useful skill. And ensuring that pilots and medical professionals are operating to the best of their abilities can only be a good thing, right?
It’s hard to argue against these benefits of taking smart drugs. But there’s a darker side to these questions, and that is what happens if enhancement becomes the norm, and there is mounting social pressure to become a user.
For instance, should you be expected to take medication to keep up with your fellow students? Should you feel you have to dose your child up so they don’t fall behind their piano-playing peers? Should medical staff be required to be on meds, with a threat of legal action if they make an error while not dosed-up?
The potential normalization of nootropic use raises serious ethical questions around autonomy and agency, even where the arguments for using them seem reasonable.61 And because of this, there should probably be more consideration given to their socially responsible use. This is not to say that they should be banned or discouraged, and academics like Henry Greely and colleagues actively encourage their responsible use.62 But we should at least be aware of the dangers of stepping onto a slippery slope that marginalizes anyone who doesn’t feel comfortable self-medicating each day to succeed, or who feels pressured into medicating their kids for fear that they’ll flunk out otherwise. And this is where the issue flips from the “would you be OK” in Chatterjee’s questions to the “would you do this” in my five follow-up questions.
In each of these additional questions, taking a cognitive enhancer gives the user a professional advantage. In some of these cases, I can imagine one-off use being enough to get someone over a career hurdle—outperforming the competition in a job interview, for example. In others, there’s a question of whether someone will only be able to do their job if they continue to self-medicate. Is it appropriate, for instance, if someone uses cognitive enhancers to gain a professional qualification, a teaching qualification, say, and then can only deliver on expectations through continued use?
In all of these questions, there’s the implicit assumption that, by using an artificial aid to succeed, someone else is excluded from success. And this is where the ethics get really tricky.
To understand this better, we need to go back to the Nature survey and the general acceptance among academics of using smart drugs. For most academics, success depends on shining brighter than their peers by winning more grants, making bigger discoveries, writing more widely cited papers, or gaining celebrity status. Despite the collegiality of academia (and by and large we are a highly collegial group), things can get pretty competitive when it comes to raising funds and getting promoted, or even securing a lucrative book deal. As a result, if your competitors are artificially boosting their intellectual performance and you are not, you’re potentially at a disadvantage.
As it is, the pressure to do more and to do it better is intense within academic circles. Many academics regularly work sixty- to seventy-hour weeks, and risk sacrificing their health and personal lives in order to be seen as successful. And believe me, if you’re fraying at the edges to keep up with those around you and you discover that they’ve been using artificial means to look super-smart, it’s not likely to sit easily with you, especially if you’re then faced with the choice of either joining the smart-drug crowd, or burning out.
In most places, things aren’t this bad, and nootropic use isn’t so overtly prevalent that it presents a clear and present pressure. But this is a path that self-centered usage risks leading us down.
To me, this is an ethically fraught pathway. The idea of being coerced into behaviors that you don’t want to engage in, simply in order to succeed, doesn’t sit comfortably with me. But beyond my personal concerns, it raises broader questions around equity and autonomy. These concerns don’t necessarily preclude the use of cognitive enhancers. Rather, they mean that, as a society, we need to work out what the rules, norms, and expectations of responsible use should be because, without a shadow of a doubt, there are going to be occasions where their use is likely to benefit individuals and the communities that they are a part of.
What puts an even finer point on these ethical and social questions is the likely emergence of increasingly effective nootropics. In the US and Europe, there are currently intense efforts to map out and better understand how our brains work.63 And as this research begins to extend the limits of what we know, there is no reason to think that we won’t find ways to develop more powerful nootropics. We may not get as far as a drug like NZT, but I see no reason why we won’t be able to create increasingly sophisticated drugs and drug combinations that substantially increase a user’s cognitive abilities.
As we proceed down this route, we’re going to need new thinking on how, as a society, we use and regulate these chemical enhancers. And part of this is going to have to include making sure this technology doesn’t end up increasing social disparities between people who can afford the technology and those who cannot.
Privileged Technology
One of the perennial challenges of new technologies is their potential to exacerbate social divides between people who can afford them, and as a consequence get the benefits from them, and those who cannot. Over time, technologies tend to trickle down through society, which is how so many people are able to afford cars these days, or own a cell phone. Yet it’s too easy to assume that technology trickle-down is a given, and to ignore some of the more egregious ways in which innovations can line the pockets of the rich at the expense of the poor (a theme we will come back to with the movie Elysium in chapter six). The relationship here between technological innovation and social disparity is complex, especially when enterprising entrepreneurs work out how to open new markets by slashing the cost of access to new tech. Yet it’s hard to avoid the reality that some technologies make it easier for the wealthy to succeed in life and, as a result, put poorer people at a disadvantage. And perhaps nowhere is this more apparent than when wealthy individuals have access to technologies that address their deficiencies or enhance their capabilities, in the process creating a positive feedback loop that further divides the rich and the poor.
Limitless’ Eddie provides an interesting case here. When we first meet him, he’s a failure. Compared to those around him—his soon-to-be-ex girlfriend in particular—he’s not performing particularly well. In fact, it’s fair to say that he has an ability and a lifestyle deficit.
We’re left in no doubt that Eddie’s lack of ability puts him at a disadvantage compared to others. And, while we don’t know whether this is due to his personal choices or the cards he was dealt in life, let’s assume for the sake of argument that this deficit is not his fault. If this is the case, does he have the right to do something about it?
If Eddie’s lack of success was due to a clearly diagnosed disease or disability, I suspect that the consensus would be “yes.” As a society, we’ve developed a pretty strong foundation of medical ethics around doing no harm (non-maleficence), doing good (beneficence), not being coerced into decisions (autonomy), and spreading the burdens and benefits of treatments across all members of society (justice). As long as a course of action didn’t lead to unacceptable harm, it would be easy to argue that Eddie should have access to treatments that would address what he’s lacking.