Blue Dreams

by Lauren Slater


  If I touch my temple I can feel the thread of a pattering pulse. But the actual brain itself has no nerves, no pain receptors whatsoever, which I find odd: the seat of all emotion, all sensation, is itself purely numb. Before neuroscience stepped in, there was phrenology, reading the person by the telltale lumps and bumps on the human head. The phrenologist would close his eyes and move his hands around your skull, maybe muttering to himself as he went. Here a rise, there a small swell, here a dent downward, all of it suggesting something, but what? What?

  We have supposedly progressed far from that, with our fMRIs and PET scans glowing gold and green. But still today, when it comes to the persistent use of these chemicals cooked up by men who hope to help (and reap the rewards of that help), we ask, What? What? It’s the same question I ask of my psychopharmacologist as he scrawls my next script, the handwriting, as always, illegible and full of signs and symbols that mean nothing to me. Even all these years later, I take each script and make of it an origami plane or plant, or, my favorite, a tiny white swan with wings enfolded, with a miniature beak, perched gracefully on my palm and delivered to the pharmacist, who, seemingly with a snap of his fingers, a wave of his wand, turns my bird into a bottle of pills.

  4

  SSRIs

  The Birth of Prozac

  The First SSRI

  In the 1950s and ’60s, scientists were beginning at last to unpack the black box of the brain. The postmortem studies of those rabbits that had been given reserpine continued to be an important benchmark, showing as they did that reserpine lowered serotonin while the tricyclics raised it. In the mid-1960s, Joseph Schildkraut, a psychiatrist and researcher at Harvard, cemented the theory that evolved into the monoamine hypothesis of depression. Monoamines, remember, are neurotransmitters such as dopamine, norepinephrine, epinephrine, and serotonin. Schildkraut, building from a growing consensus among scientists, theorized that depression was the result of a deficit of some or all of these neurotransmitters. Norepinephrine, he thought, was related to alertness and energy as well as to anxiety, attention, and interest in life; serotonin to anxiety, obsessions, and compulsions; and dopamine to attention, motivation, pleasure, and reward. Schildkraut and other proponents of the monoamine hypothesis recommended that psychopharmacologists choose an antidepressant on the basis of the patient’s most prominent symptoms: anxious or irritable patients should be treated with norepinephrine reuptake inhibitors, while patients displaying loss of energy or lack of enjoyment in life would do best on drugs that increase dopamine.

  This theory predominated for roughly a decade and was then further refined by the Swedish researcher Arvid Carlsson, an eventual Nobel winner. In 1972 Carlsson, funded by the pharmaceutical firm Astra, patented zimelidine (brand name Zelmid), the world’s first selective serotonin reuptake inhibitor (SSRI), a class of drugs that increase the amount of serotonin in the synaptic cleft—the space across which nerve impulses are transmitted—by hindering its absorption, or reuptake. Carlsson’s Zelmid, born in Sweden and disseminated throughout Europe, suggested that the critical monoamine in depression was serotonin. Within months of its arrival on the market, however, Zelmid started making some people ill with strange flu-like symptoms and, still more worrisome, Guillain-Barré syndrome, a neurological condition that can be fatal.

  Astra quickly pulled its drug from the pharmacies, but not before pharmaceutical giant Eli Lilly had caught a glimpse of all the sunniness and smiles, a glimpse that prompted its researchers to take another look at their own serotonin compound, which had been stalled for several years under nothing more than a numerical label, LY-110140, a drug they had created but had not even bothered to name because they hadn’t yet decided what to do with it. Serotonin, after all, is present outside the brain. In fact it’s omnipresent in the body, playing a role in sleep, digestion, and blood pressure, among other things. Given this potentially wide range of application for LY-110140, Lilly had solicited the opinions of leading scientists as to its possible uses. Perhaps it could be a weight-loss drug or an antihypertensive, both of which seemed more lucrative to Lilly, at that time, than a drug for depression, which one scientist had suggested. Lilly executives were quick to shoot down that last idea because they were not convinced that their compound would actually work as an antidepressant, or that there would be a significant market for it. With no decision made, LY-110140 had languished in the shadows until Zelmid came along, proving that serotonin-specific drugs could indeed improve and regulate mood, if only their unfortunate side effects could be curtailed.

  Eli Lilly is located in Indianapolis, on a gracious campus with gleaming buildings of steel and stone. It was here, in the 1970s, that Ray Fuller, Bryan Molloy, and David T. Wong, working from the compound LY-110140, created Prozac. After the success of Zelmid in treating depression, they knew at every step that what they were seeking was a chemical that would increase the amount of serotonin in the brain. The antidepressants that preceded Prozac came to be considered “dirty drugs” because they worked on multiple neurotransmitter systems at once and therefore caused a host of unpleasant somatic side effects. By targeting serotonin alone, the inventors of Prozac sought to cure depression while sparing patients the blurred vision, the dry mouth, the excessive sweating, the sluggishness, and the weight gain that were part and parcel of prior antidepressant treatment. In 1975 the manufacturer finally gave its creation a name, fluoxetine, which would eventually be known as Prozac.

  From Nerves to Tears

  Drugs, however, are not mere chemical concoctions. They are capsules, tablets, liquids, what have you, released into a culture that will, inevitably, bestow meaning on them. In the 1930s, ’40s, ’50s, and beyond, the culture was largely one of anxiety. When people suffered, they attributed it to their “nerves,” while psychoanalysis posited anxiety to be the root cause of almost all neurotic problems. Depression was seen as a fringe condition, and a deadly serious one to boot. The first Diagnostic and Statistical Manual of Mental Disorders, published in 1952, lists four kinds of depression, three of which include psychotic features. The depressed were often patients of the back ward, lost to light and hope. This doesn’t mean that there weren’t milder forms of the disorder; it’s just that people were far more prone then than now to understand their wayward moods as a bad case of the jitters.

  Then along came Roland Kuhn and Nathan Kline. Kline wasn’t merely a showboat. He also made it his mission to educate the public about depression, visiting family doctors and counseling them to diagnose the disorder when presented with a patient who had psychosomatic complaints. Slowly word spread that the country was suffering not from nerves but from numbness. What had been a fringe illness very gradually became commonplace as the culture let go of Freud and his theories. There is no definitive point at which this occurred; it was a slow process, with the new antidepressants and their inventors contributing to the change. When Kline won the much-coveted Lasker Prize again for his discovery of the MAOIs (he is the only person ever to win it twice), he declared that “more human suffering has resulted from depression than from any other single disease.”

  Not long after that, Freudian adherent Aaron T. Beck broke with psychoanalytic tradition and created what is called cognitive behavioral therapy, which taught patients to identify flawed or maladaptive patterns in their thinking and behavior, and to replace these defective patterns with ones more prudent and less conducive to despair. It was a mode of treatment especially suited to disorders of mood. Nervous illness waned as patients learned, through CBT, that their depressions were borne on the back of self-critical thinking and that, by reframing negative self-talk, they could lift their sunken spirits. The therapy grew and grew in popularity, until it now has millions of adherents.

  Some might say that the MAOIs and the tricyclics caused the interest in and the awareness of depression, that Kline and Kuhn manufactured a disorder for the new drugs to treat. The antidepressants of the 1950s and ’60s, however, were never superstar chemicals, partly because, unlike the antipsychotic Thorazine, which was pushed for a multitude of off-label uses, they were never directly advertised to consumers. Their range was narrower from the beginning. Furthermore, while they did not cause Thorazine’s tardive dyskinesia, they had whole rafts of side effects of their own, some of which were merely extremely unpleasant, while others were downright dangerous. An advertisement in a medical journal for the tricyclic Elavil in 1965 suggests that the drug might replace electroconvulsive therapy, underscoring the seriousness of the condition it was meant to treat. But although the first antidepressants may not have been household names, they nevertheless started a subterranean cultural shift in our understanding of ourselves, priming us for Prozac, so that when the drug was finally approved for release in 1987, we were at last ready to see ourselves as sad.

  Specificity?

  In the mass-marketing campaign that accompanied Prozac’s eventual release, Lilly touted the supposed specificity of its drug, likening it to a magic bullet, or a missile that lands with programmed precision on millimeters of neural tissue. This, however, is misleading. Although Prozac is called an SSRI, in reality the phrase “selective serotonin reuptake inhibitor” does more to conceal than to reveal. The truth is that there is really no way to have a serotonin-specific drug, because serotonin casts a wide net over the whole of the human brain; is intricately tied up with our other neurotransmitter systems; is, furthermore, found throughout the human corpus, especially in the gut; and beyond that, as noted earlier, is implicated in dozens of physiological functions, from sleep and appetite to pain perception and sensory integration, to name just a few. Indeed, serotonin is one of the oldest neurotransmitters on the planet. It was present on the earth millions of years ago and is found in life forms as diverse as birds, lizards, wasps, jellyfish, mollusks, and earthworms. Given serotonin’s wide net, not just across species but within the human body and brain, it is virtually impossible to create a drug that acts on it alone, because serotonin not only touches so many systems but also is so intimately tied up with dopamine and norepinephrine and acetylcholine and all sorts of other neurotransmitters that flicker inside our skulls.

  Still, this didn’t stop Lilly from celebrating its brand-new compound as a site-specific drug that, given its putative ability to home in on a tiny target, would cause few to no side effects. Within six months of Prozac’s January 1988 release, doctors had written more than a million prescriptions for it in this country alone. Annual sales reached $350 million in the first year. Two years later it appeared on the covers of both Time and Newsweek as the long-coveted cure for depression. It seemed like everyone was either talking about Prozac or taking it and, indeed, feeling fine.

  Depression on the Rise

  And yet something strange was happening. If Prozac was really the cure for depression, then why did the number of depressed patients suddenly start to rise in concert with the drug’s release? When anti-tubercular drugs were discovered, tuberculosis rates dropped off sharply and then finally almost disappeared altogether. When antibiotics were invented, deaths from infections became less frequent. Vaccinations all but wiped out dreaded illnesses like measles and tetanus. Each of these treatments clearly contributed to a healthier society. The opposite happened with Prozac. The drug was offered to society and society just got sicker, and with precisely the illness the drug was created to treat. In 1955, one in 468 Americans was hospitalized for mental illness. By 1987, one in every 184 Americans was receiving disability payments for mental illness. Two decades after Prozac’s release, there were almost 4 million disabled mentally ill citizens on the rolls of the SSI (Supplemental Security Income) and SSDI (Social Security Disability Insurance) programs. And while in 1955 comparatively few people, 50,937, were hospitalized for depression and bipolar disorder in state and county mental hospitals, today approximately 1.4 million people receive state and federal funding for affective disorders. In fact, the reported incidence of depression has increased a thousandfold since the introduction of antidepressants. A cynic might say the pill to cure depression was in fact causing it.

  There are multiple theories to account for the astounding rise in the diagnosis of depression and its odd timing with the release of a supposedly superior antidepressant designed to treat it. The most obvious explanation is that depression has always been as terribly common as it is now, but that in past decades it was also terribly stigmatized, and that it took Prozac, the drug that became a household word, to lift that stigma and allow floods of people to come forward and claim their cure. This theory, however, cannot explain why now, thirty years after Prozac’s release, the rates of depression have only continued to rise. Surely the stigma is gone by now, and depression is a disorder it’s almost hip to have.

  Perhaps it makes more sense to look first at the society into which Prozac was released. The drug debuted in the late Reagan years and became a blockbuster even before 1993, when Peter Kramer published his famous Listening to Prozac, claiming that the drug made us better than well and that cosmetic psychopharmacology had finally arrived. The eighties were a time of fierce individuality in a country that had always prided itself on autonomy, and now even more so. Our president was something of a Marlboro Man who cut funding to social service agencies and admonished American citizens to get off their couches and earn a living, acquire a skill, do something, anything, with the ultimate goal of creating a self capable of surviving in a bubble. Money for welfare was slashed; mothers with young children were told to find day care and a job, or if not a job, then job skills training at centers set up for such a purpose. Nursing homes, day care centers, afterschool programs, homeless shelters—all institutions geared toward maintaining the fabric of a cohesive and helpful society—lost their federal funds and dwindled in size.

  I remember it well. In my mid-twenties, I was the director of a small community mental health center serving SPMI patients, those with “severe and persistent mental illness,” schizophrenic people felled not only by this dread disease but also by the added burdens of poverty and homelessness, the kind of street people you find muttering in alleyways or talking to invisible angels. I watched as our agency’s state and federal support was halved, and then quartered, as therapy sessions that had been unlimited were reduced under Reagan’s rule to just six, as though that were adequate for penniless patients haunted by visions and voices. But meanwhile Wall Street boomed and the stock market more than doubled during Reagan’s two terms. The images of the 1980s were sleek black limousines and sleek silver skyscrapers, with the money pooling at the upper end of the social spectrum while the rest lost what little they had.

  What does this have to do with Prozac? you might wonder. Everything, really, if you take a sociological view of what is usually understood as a deeply individualistic experience: depression. For a moment, step back and scan the horizon. Study after study has shown that rates of depression rise as societies become more isolative. For the upper class, the Reagan years may have been lucrative, but for those who depended on a web of social services, Reagan’s presidency was difficult, if not destructive. Help went away. There were no more handouts, and thus, for some people, no more helping hands. Schizophrenics and others with mental illness lost their access to treatment providers.

  I remember my patient Amy Wilson, a thirty-one-year-old woman with a red seam of a scar across her face where a boyfriend had broken her nose with a bat and left her beautiful features slightly askew. She had glitter-green eyes, her lashes coated with mascara thick as tar, her tapered nails painted a carmine red. Despite a stunning façade, Amy struggled with devastating depression and relied on her twice-weekly therapy sessions for succor and perspective. When her Medicaid was cut and our six sessions ran out, there was nothing I could do. I met her by accident in the supermarket one day, her three toddlers jammed into a cart filled with Cheetos and Cheez Whiz, her face as pale as a pillow. Amy is just one of the thousands, maybe millions, of people who suffered in the avid “do-it-yourself” society that marked the Reagan years.

  It would be overly simplistic, however, if not absurd, to target Reagan as the sole cause of social breakdown and the rise in depression that may have resulted from it. Reagan, after all, came to the presidency in a culture that had been steadily moving toward the sort of isolative individualism that breeds widespread depression. He accelerated the process, but its provenance lies in the history of this country, as far back, perhaps, as the nineteenth century, when Tocqueville, coming from France to watch Americans at work and play, remarked on the rampant and insistent autonomy that undergirds so much of what we strive for. In Asia and Africa it is not unusual for whole families to share a bedroom and a bed, while in this country we fetishize Richard Ferber, who admonishes us to let our children cry it out in their own cribs in dark rooms. We know that infant animals separated from their mothers secrete the stress hormone cortisol and that high levels of cortisol, while not causative, are implicated in depression.

 
