Saving Normal: An Insider's Revolt Against Out-of-Control Psychiatric Diagnosis, DSM-5, Big Pharma, and the Medicalization of Ordinary Life (9780062229274)


by Allen Frances


  Lucky again that Freud happened to be around to fill this gap. People usually associate Freud with treatment, not diagnosis—but he did as much to figure out the classification of the outpatient conditions as Kraepelin had done for the inpatient. Interestingly, Freud had also become a classifier only because he too was very short on cash, in his case to get married and start a family. Early in his career, Freud had been a very promising neuroscientist, one of the pioneers in understanding the importance of the neuronal synapse in brain functioning. But unable to land a university job, he was forced out of the laboratory and into the less exalted private practice of neurology. Accepting this career reversal with considerable regret and reluctance, Freud never abandoned his original ambition to achieve scientific recognition. Instead he switched his object of investigation from slides to people. Soon he became the Darwin of the consulting room, using astute clinical observations to make strikingly accurate guesses on how unconscious, inborn instincts play a central role in who we are, what we feel, how we think, and what we do—both in sickness and in health. Modern cognitive science and brain imaging have provided convincing confirmation of Freud’s most profound insights—even if some of his other guesses now seem quaintly off the wall.

  More pertinent here, Freud also opened up the new profession of outpatient psychiatry and provided it with a way of classifying its new patients. The milder symptom presentations that are now the bread and butter of psychiatry were then the province of neurologists, who had named them “neuroses” in the belief they were caused by nerve disease. In developing the altogether new field of psychoanalysis, Freud reconceptualized “neurosis” as being due to psychological conflict—conditioned by the biology of the brain, but not a simple brain disease. And then he proceeded to classify the neuroses—separating mourning from melancholia; panic disorder from phobias and generalized anxiety; and describing obsessive-compulsive disorder, the sexual disorders, and the personality disorders. Freud was a trained neurologist who had spent only a few months studying psychiatry. Paradoxically, he was largely ignored by the neurologists but soon came to be very much adored by the psychiatrists.

  The early psychiatrists were few in number, worked exclusively in inpatient asylums, and were burdened with the unfortunate title “alienists.” But following Freud, this all changed and changed quickly. The specialty of psychiatry switched its focus from very sick inpatients to not-so-sick outpatients. Psychiatrists left hospitals in droves to establish outpatient office practices; whereas in 1917, only 10 percent of psychiatrists practiced outside hospitals, now most do.13 The number of psychoanalysts in the United States was also swelled by prominent refugees who were fleeing the Nazis and by mental health clinicians from the burgeoning new professions of psychology and social work. A quickly expanding group of therapists was treating a much larger, but also much less ill, outpatient population, using the new set of milder outpatient diagnoses derived from psychoanalysis.

  Simultaneously, the world wars were broadening the boundaries of psychiatry and bringing it into the mainstream. Psychiatric illness was identified as a major threat to the war effort—a frequent cause of unfitness for duty; a common form of combat casualty; and a source of continuing disability in those who returned home. Existing classifications, designed for severely ill hospitalized inpatients, were not up to the task of diagnosing what ailed the troops. Psychiatrists were called in to refine the system and figure out how to keep our soldiers ready for combat. Many rose to high military rank (including one general) and had extraordinary influence in decisions regarding recruitment, retention, and combat treatment.14 A new and expanded diagnostic classification was devised by the army, revised by the Veterans Administration and revised again by the American Psychiatric Association as the Diagnostic and Statistical Manual I, published in 1952.15

  DSM-III Saves Psychiatry

  Psychiatry blossomed after World War II—having proven its wartime mettle, it gained a newly prominent role in civilian life. Separate psychiatry departments were created for the first time at all the medical schools, and new psychiatric units were opened in most general hospitals. The predominant model was psychoanalytic; the focus was on treatment; and the attitude was can-do professional confidence. Meanwhile, psychiatric diagnosis enjoyed none of this renaissance—it was a quiet and insignificant backwater completely ignored in all the excitement. DSM-I (published in 1952) and DSM-II (published in 1968) were unread, unloved, and unused.16, 17

  Then suddenly, in the early 1970s, diagnosis was exposed as the Achilles’ heel that might bring psychiatry down. Two widely publicized papers posed an existential threat to its recently acquired credential as a full-fledged medical specialty. First shocker: a cross-national British/American study found that psychiatrists on the different sides of the pond differed radically in their diagnostic conclusions, even when evaluating the same patients on videotape.18 Second shocker: a clever psychologist showed how easy it was to lure psychiatrists into providing not only inaccurate diagnoses but also wildly inappropriate treatment. Several of his graduate students went to different emergency rooms stating they were hearing voices. Every single one was promptly admitted to a psychiatric hospital despite thereafter acting in a perfectly normal manner, and each was kept for several weeks to several months. Psychiatrists looked like unreliable and antiquated quacks, unfit to join in the research revolution just then about to modernize the rest of medicine.

  Without Robert Spitzer, psychiatry might have become increasingly irrelevant, drifting back to its prewar obscurity. It is rare that one man saves a profession, but psychiatry badly needed saving, and Bob was a rare man. Then a young researcher at Columbia University, he had already embarked on what was to become an almost fanatic, lifelong quest to make psychiatric diagnosis systematic and reliable. Think Ahab relentlessly chasing Moby Dick.

  Bob had been among the pioneers in creating the checklists of the Research Diagnostic Criteria—a criteria-based method of sorting symptoms into disorders that increased the diagnostic agreement of raters participating in research studies.19 He had also developed semi-structured interview instruments that controlled the vagaries of evaluation by suggesting a uniform sequence and wording of the questions used to assess the presence or absence of each symptom.20 Early findings using Bob’s methods were encouraging—raters achieved reasonably good agreement if they asked the same questions and used the same rules of the road in going from symptom counts to diagnosis. This met the challenge posed by the cross-national study. More important, a reliable diagnostic system provided psychiatric research with the means to employ the incredible new tools of molecular biology, genetics, brain imaging, multivariate statistics, and placebo-controlled clinical trials. Suddenly research in psychiatry became a darling, no longer the stepchild, of medical research. The budget of the National Institute of Mental Health grew rapidly, and in most medical schools, psychiatry became the second biggest source of research funding—just after the department of internal medicine and far ahead of all the other basic science and clinical departments. Drug companies also began pouring in loads of research money as they raced to develop profitable new psychiatric medicines.

  Spitzer had laid the foundations for the psychiatric research enterprise. Many people might have been content, but Bob was a man of restless spirit and soon realized that there were much bigger fish to fry. If the criteria-based method of diagnosis worked so well in research studies, why not also apply it to everyday clinical practice? This was an outrageously audacious ambition, but the American Psychiatric Association offered Bob the perfect opportunity to realize it. In 1975, he was asked to chair the DSM-III Task Force and given wide authority to set his own goals, choose methods, and pick collaborators. Spitzer was energetic, determined, stubborn, and indomitable—an enthusiastic true believer in whatever he was doing. His goal was nothing less than to transform psychiatric practice as performed everywhere in the world and by all the mental health disciplines. As Bob himself put it at the time, “They gave me the ball and I ran with it.”21 DSM-III would end the diagnostic anarchy, would focus attention on careful diagnosis as a necessary prerequisite to more precise and specific treatment selection, and would also form a much-needed bridge between clinical research and clinical psychiatry.22

  The development of DSM-III faced one great handicap. There was very limited scientific evidence then available to guide any of its decisions—that is, which disorders should be included in the manual and which symptoms should be chosen to describe each disorder. Bob filled the huge gaps by bringing together small groups of experts on each disorder and picking their brains to thrash out how best to define the criteria sets.

  The process wasn’t pretty to watch—it had the feel more of virtuoso performance art than scientific deliberation. The meetings all followed a remarkably uniform pattern. A group of about eight or ten experts would be virtually locked down in a room and would not emerge until they could come to an agreement. The mornings were loud and unruly—with experts shouting out what they thought were the best symptoms, often disagreeing vociferously with one another. Their passionate views were argued with the fierce determination that comes from lived experience, rather than scientific data, and there seemed to be no rational way of choosing among their differing suggestions. Bob would be mostly quiet—typing fast and furiously in a corner, trying to get it all down. After a few anarchic hours, a big tray of terrific deli food would arrive. The experts would finally quiet down as they instead worked over the sandwiches, slaw, pickles, and cream soda. Bob would keep typing furiously, totally focused and seemingly oblivious to the food or his surroundings. Miraculously, by lunch’s end, Bob would have digested the morning’s chaos into a draft criteria set that neatly condensed all the divergent suggestions into one coherent definition. The afternoon would usually be much calmer, the drowsy experts fine-tuning Bob’s compromise product. Whenever controversy did persist, the advantage went to whoever was most loud, confident, stubborn, senior, or spoke to Bob last. This was a terrible way to develop a diagnostic system, subject to all sorts of biases, but it was the best way available at the time. And the surprise is that it worked as well as it did. The product was surely flawed but also remarkably useful.

  Bob’s colleagues in developing these criteria sets were mostly the young Turks of psychiatry (and a few psychologists)—the newly emerging and closely knit cohort of biologically oriented researchers who saw themselves as a vanguard pushing the field toward the rest of medicine and away from the previously dominant psychoanalytic and social models. DSM-III was advertised as atheoretical in regard to etiology and equally applicable to the biological, psychological, and social models of treatment. This was true on paper but not in fact. It was true in that the criteria sets were based on surface symptoms and said nothing about causes or treatments. But the surface symptoms method fit very neatly with a biological, medical model of mental disorder and greatly promoted it. The rejection of more inferential psychological constructs and social context severely disadvantaged these other models and put psychiatry into something of a reductionistic straitjacket. DSM-III tried to make up for this by introducing an innovative “multiaxial” system—patients were rated not just on Axis I psychiatric symptoms but also on Axis II personality disorders, Axis III medical illness, Axis IV social stressors, and Axis V, overall level of functioning. Unfortunately, the multiaxial system was mostly ignored. For a time Bob proposed a “let a hundred flowers bloom” project that would have highlighted all the factors beyond descriptive diagnosis that should contribute to a complete evaluation—but this never got off the ground. Proponents of psychological or social models felt left out of the game and have steadily lost status and influence since the publication of DSM-III.

  Revolutions are never easy or complete. Bob was an irresistible force wrestling with hundreds of thousands of seemingly immovable objects. Clinicians in those days hated to be herded (they have since been mostly tamed). DSM-III lumped patients based on surface similarities, ignoring their individual differences. In contrast, psychologically minded clinicians preferred to rely on empathy and creative intuition to understand each patient’s complex life story, unconscious motivations, and social context. They didn’t want to be boxed in, mindlessly following rote rules imposed impersonally from without. The simpleminded DSM-III approach was absolutely necessary if psychiatrists were to agree on diagnosis—but it seemed to leave out almost everything that was most interesting about the patient. Bob was providing a lingua franca, but not one that was very attractive to most of the people who would have to use it. He was turning the poetry of individual patients into DSM-III prose.

  I started out a strong DSM-III skeptic and was a late and only partial convert.

  Bob and I went back ten years before DSM-III. He had been a teacher of mine in the late 1960s—someone I liked very much personally but had largely discounted professionally because he focused all his attention on what I thought were superficial and dumb diagnostic questions. What I cared about then was learning what motivations made people tick and how I could help them get on a bit better in life through psychotherapy. When, a few years later, Bob began working on DSM-III, I was slightly older, not much wiser, and not the least bit interested. My job then was running the outpatient department at Cornell–New York Hospital; my expensive hobby was completing psychoanalytic training at Columbia, where I would occasionally run into Bob in the hallways. During one of our brief chats, I made what now seems a screwball suggestion—that DSM-III include a masochistic personality disorder to describe people who, for unconscious masochistic reasons, repetitively sabotage their opportunities for happiness in life. This was an idea I was studying in class and thinking about as a possible topic for a psychoanalytic paper. Bob rightly nixed it, just as I did later when someone else suggested it for DSM-IV. He explained that it was far too inferential to ever be assessed reliably and that all psychiatric disorder is inherently self-defeating. But Bob’s approach was ecumenical, and he would instinctively recruit to the DSM-III team anyone willing to spend time feeding his voracious and insatiable appetite for diagnostic discussion. Soon I was busily at work on DSM-III, assigned the tasks of editing the personality disorders section and also of explaining and justifying the new DSM-III methods to my skeptical colleagues in the several different psychoanalytic associations.

  As I gradually came to know DSM-III really well, I could much better appreciate its necessity, but also better understand its inherent limitations. My impression then, which still feels right now, is that DSM-III was absolutely essential but also greatly oversold and overbought. It was the salvation of a scientifically based psychiatry but also truncated the purview of the field and triggered harmful diagnostic inflation. DSM-III was essential because it brought system to the diagnosis and treatment of mental disorders. Previously psychiatry was pure art form—sometimes brilliant, usually idiosyncratic, and always chaotic. There still remains much that is usefully artful in psychiatry, but now there is standardization and a firmer scientific foundation.

  But the much-trumpeted reliability of DSM-III was oversold because the level of diagnostic agreement obtained under ideal circumstances in research settings can never be achieved in the rough-and-tumble of average clinical practice. DSM-III was also wildly overbought—both literally and metaphorically. To everyone’s surprise, it became a perennial best seller with hundreds of thousands of copies sold every year, many more than there are mental health workers. DSM-III was the victim of its own success—it became the “bible” of psychiatry to the exclusion of other aspects of the field that should not have been, but were, cast beneath its shadow. Diagnosis should just be one part of a complete evaluation, but instead it became dominant. Understanding the whole patient was often reduced to filling out a checklist. Lost in the shuffle were the narrative arc of the patient’s life and the contextual factors influencing symptom formation. This was not an inherent flaw of DSM-III—rather it came from DSM-III being given far too much authority by clinicians, teachers and their students, researchers, insurance companies, school systems, disability agencies, and the courts. And by the public. DSM-III diagnosis quickly replaced psychoanalysis as a topic of cocktail party chatter, and people seemed eager to find a neat fit for their problems (or their boss’s problems) in its pages.

  Diagnostic inflation has been the worst consequence of DSM-III. Part of the fault lies in how DSM-III was written, much of it in how it has been misused, particularly under the influence of drug company disease mongering. DSM-III was a splitter’s dream and a lumper’s nightmare—and splitting inherently leads to diagnostic inflation. To increase the likelihood that clinicians would agree on a given diagnosis, DSM-III divided the diagnostic pie into many very small and easily digested slices—but this also increased the likelihood that many more people would be diagnosed. Add to this that DSM-III was also overly inclusive—with many new mental disorders describing mild symptom presentations at the populous boundary with normality. The fact that DSM-III was suddenly so interesting to clinicians and patients also stimulated more diagnosing.

  Given the conditions in 1980 when it was published, DSM-III probably struck a pretty fair balance between sensitivity (its errors in missing people who needed a diagnosis) and specificity (its errors in mislabeling people who didn’t need diagnosis). At the time, no one understood that this seesaw would soon be weighed down so heavily on the side of overdiagnosis. The seeds of diagnostic inflation that had been planted by DSM-III would soon become giant beanstalks when nourished by drug company marketing.

 
