Saving Normal: An Insider's Revolt Against Out-of-Control Psychiatric Diagnosis, DSM-5, Big Pharma, and the Medicalization of Ordinary Life (9780062229274)
The hype for prevention has been everywhere. Breakthroughs in medical science are breathlessly announced daily. New tests are constantly being devised and the thresholds of abnormality of old tests lowered—creating hordes of new patients. Doctors order expensive batteries of every conceivable test on every patient, just to be on the safe side. Advertisements promote the benefits of screening and the terrors of letting disease go unfettered. The screening scare tactics have been an enormous financial success for their promoters, but the evidence shows that with few exceptions (e.g., screening for lung cancer in smokers or colon cancer in everyone), the testing is often not good for the patients—not really improving outcomes, while further burdening them with aggressive, expensive, and unnecessary treatments. And the waste to society runs to hundreds of billions of dollars a year that could be better used treating really sick people who are currently not insured. Preventive medicine is a terrific goal gone badly astray because it became industrialized and enslaved by profit and hype.
Sanity is beginning to prevail. Recently, nine professional societies have initiated a “Choosing Wisely” campaign, publishing a list of forty-five previously heralded tests and procedures that had been vastly oversold.2 Prostate cancer screening is no longer recommended—it failed to save lives and resulted in much needlessly aggressive surgery. Breast cancer screening has been much truncated. No more CT scans for headaches or X-rays for back pain. And it turns out that bronchodilators and oxygen don’t work for most people with chronic obstructive pulmonary disease.3 The list is long and telling. Evidence-based medicine is demonstrating that the push to prevention has been excessive, premature, and not evidence based.
Early identification of illness suffers from the “needle in the haystack” problem. Screening tests routinely set their bar low so as not to miss people who need identifying but in the process inevitably wind up mislabeling lots of people who don’t.4,5 The benefits to the few, if any, are outweighed by the harms to the many. Some of the misleading hype for early intervention comes from the good faith enthusiasm of medical researchers and practitioners eager to help patients fight disease. But best advice for them comes from the White Rabbit in Alice in Wonderland: “Don’t just do something, stand there.”
And the profit motive also plays a part. Fifty years ago, President Eisenhower presciently predicted the economic and social damage that would be caused by a too-powerful military-industrial complex.6 In a parallel development, we have witnessed the explosive growth of a too-powerful medical-industrial complex comprising Big Pharma, insurance companies, testing laboratories, equipment and device makers, hospitals, and doctors—all eager to expand the market by creating a new reservoir of allegedly “about-to-be-sick” well people who need testing and treatment to avoid ever becoming sick in the future.
The United States spends almost twice as much per capita on medical care as the rest of the world.7 This is a terrific drain on our economy, and our $2 trillion investment yields paltry returns. We get mediocre medical outcomes, excessively test and treat those without need, and fail to provide adequate care for many in great need. You probably couldn’t design a less efficient or less equitable system if you tried hard to do so.
Meanwhile we neglect the best forms of prevention—promoting exercise, proper diet, moderation in alcohol use, and abstention from tobacco and drugs. These extremely useful and remarkably cheap prevention measures aren’t profitable for the medical-industrial complex and therefore lack its powerful and well-financed sponsorship. The biggest improvement in the health of our country in the last thirty-five years came from the relatively inexpensive campaign to reduce smoking—not from the enormously expensive efforts of the medical-industrial complex. A similar campaign to reduce overtesting and overtreatment would save us money and make us healthier. Let’s hope that “choosing wisely” helps to correct the excesses of preventive medicine.
And let’s hope that the snake oil of premature preventive medicine doesn’t spread to psychiatry. Those who promote the value of wider boundaries for psychiatric disorder make the argument that identifying and treating the mildly mentally ill will help them avoid later becoming the severely mentally ill, drawing support from the presumed glowing success that has been achieved by medical screening and early intervention.8 But there is a serious fly in this ointment—early intervention in medicine is mostly a flop and provides a terrible model. Psychiatry is wrongheadedly copycatting the very worst aspects of American medicine—the combination of harmful excess for some, combined with heartless neglect for others.
Is Our Stressful Society Making Us Sicker?
One theory says that rates of mental illness are rising because we live under extreme pressures from a speeded-up, stressful society. Perhaps it is hard to be normal because our modern world is driving us crazy. This suggestion is difficult to disprove, but I find it completely unconvincing. Among the hundreds of thousands of generations of our ancestors who have ever walked this earth, we are undoubtedly the luckiest—extraordinarily privileged to live now and to live here. Previous generations (as well as people currently living in less favored parts of our crowded globe) suffer daily catastrophes that are unimaginable to most of us. Life has always been, and will always be, enormously stressful in one or another way. Indeed, our mental discomforts can preoccupy us as much as they do only because most of us don’t have to worry about our next meal or the threat of being eaten by a passing tiger.
A second variant of the toxic environment hypothesis is that rates of mental illness have been driven up by physical, rather than emotional, stresses. The most popular version of this is the completely discredited, but ever lingering, belief that vaccination causes autism.9 Other environmental causes seem equally implausible—fluctuations in diagnostic rates follow a time course much more consonant with fashion than with toxin.
The only environmental pollutants to have a proven substantial impact on mental disorder are alcohol and drugs. These hit the brain with a huge wallop that can mimic virtually all the psychiatric symptoms in the book. But alcohol and drugs can account for only a small part of diagnostic inflation. Tellingly, it is the childhood disorders not much affected by substances that have recently expanded the most.10
A third theory has it that we are not sicker than before, just better able to spot previously missed sickness. Some part of diagnostic inflation is surely desirable—picking up previously missed cases. But only a part, and probably a small one. Diagnostic labels can’t be applied with surgical precision to accurately distinguish those who truly need a diagnosis from those who don’t. At the extremes of severe illness and complete health, the distinction is indeed obvious. But the boundary between mental disorder and normality is so fuzzy that whenever we quickly expand the use of psychiatric labels to identify some few people who do need help, we misidentify many others who don’t.
Human nature is stable and resilient. There has been no real epidemic of mental illness, just a much looser definition of sickness, making it harder for people to be considered well. The people remain the same; the diagnostic labels have changed and are too elastic. Problems that used to be an expected and tolerated part of life are now diagnosed and treated as mental disorder. The application, or withholding, of a sickness label in these boundary situations determines how we see ourselves as individuals and as a society. If we create an overly broad definition and apply it liberally, we readily recruit an army of new “patients,” many of whom will have been much better left to their own devices. We are not a sicker society in any real sense—even if we see ourselves that way.
Societal stress is not causing more real mental illness, but there are other societal trends that do promote the sense that we are getting sicker. Our world is homogenizing—we have less and less tolerance for individual difference or eccentricity and instead tend to medicalize it into illness. The youngest boy in the class isn’t the most active because he is just a young boy—instead he must have ADHD and should be put on a pill.11 And our society is becoming increasingly perfectionistic. Falling short of complete happiness or failing to have a worry-free life is too often translated into mental illness. Our goals are set too high and our expectations are unrealistic—especially when it comes to our kids.
New Fads Promote Diagnostic Inflation
Fashions in psychiatric diagnosis have recently become almost as fickle as the popularity of rock stars, trendy restaurants, and travel destinations. Because there are no biological tests or clear definitions that distinguish normal from mental disorder, everything in psychiatric diagnosis depends on very easily influenced subjective judgments. Whenever rates of a mental disorder jump explosively, the safe bet is always on fad. Assume that many, if not most, of the newly identified “patients” are really “normal enough.” They have been mislabeled and will likely be overtreated.
Psychiatric fads start when a powerful authority gives them force and legitimacy. The DSM system, and the “experts” who fashioned it, have been the main fashion setters—the driving force in identifying new mental disorders and defining milder forms of those that had been previously described. Unfortunately most experts suffer from an intellectual conflict of interest that biases them toward diagnostic inflation. Focused on their specialized research, they miss the big picture—always worrying so much about not having a diagnosis for a patient who needs one that they ignore the risk of mislabeling someone who doesn’t. There is also an emotional element. Experts become true believers who really come to love their pet diagnoses and want to see them grow. While each one presses for only a small expansion, their aggregate pressure blows up the inflationary balloon. In my thirty-five years of herding experts, not once has anyone ever suggested raising the bar to narrow the scope of his pet area.12
The media and the Internet feed on and feed fads. In the wired modern world, false epidemics can spread like wildfire, fueled by 24/7 coverage. Some of the spotlight is extremely valuable—leading to better public understanding and acceptance of mental disorder—but many stories breathlessly hype diagnostic inflation. “Autism is one in eighty!!!” “The test and cure for Alzheimer’s are just around the corner!!!!” “Does your child have ADHD??” “Bipolar is underdiagnosed says Harvard doctor!!!!” And the Internet provides wonderful support, social interaction, information, and destigmatization for people with psychiatric symptoms—but also undercuts normality, as essentially healthy people incorrectly self-identify as sick in order to gain the comforts that come with admittance to the group. Celebrities also play their part as exemplars of diagnoses and endorsers of treatments.
Of course, the biggest promoter of recent fads has been drug company marketing. But that is a sad story in itself that we will get to soon.
DSM Becomes Too Important for Its Own Good
Human nature being what it is, the prevalence of any psychiatric diagnosis will rise artificially whenever it is a gatekeeper to something valuable. In a simpler world, psychiatric diagnosis was once based only on perceived clinical need. But now that it has gained powerful (and unwelcome) influence on many administrative and financial decisions, these decisions have also reciprocally obtained a powerful influence on the rates of diagnosis. Diagnostic inflation is promoted whenever a physician provides an “up-diagnosis” to help a patient gain access to something valuable—like disability benefits or school services. If autism, ADHD, or pediatric bipolar disorder is a prerequisite to being admitted to a small class with lots of individual attention, equivocal cases get shoehorned into these categories, and soon an epidemic is born.
In like fashion, “mental disorder” increases whenever there is high unemployment. Some of the people laid off will get a new diagnosis because they have developed symptoms, others because it will make them eligible for disability. Because veterans’ benefits require a diagnosis of PTSD, PTSD gets overdiagnosed. There is a paradox—trying to help by providing a diagnosis may wind up hurting. Many returning vets from Iraq and Afghanistan are having trouble landing jobs because of the stigma associated with their diagnosis of PTSD. And overdiagnosis distorts allocations across the system, reducing resources and benefits for those who most need them.
The most senseless driver of diagnostic inflation is the way medical insurance works in the United States. To get paid, the doctor must make an approved diagnosis. This is intended to prevent frivolous visits. But the unintended effect is just the opposite of prudent cost control. A premature rush to a reimbursable psychiatric diagnosis often results in unnecessary, potentially harmful, and often costly treatment for problems that would have disappeared on their own. It would be a lot cheaper and better for insurance to reimburse the doctor for watchful waiting and counseling, rewarding him for not jumping to diagnostic conclusions that are very costly in the long run. This perfectly sensible solution is the policy in the rest of the world.
Epidemiology Miscounts
Every so often, the newspaper will report that rates of psychiatric disorder are climbing, sometimes dramatically. The best current examples are autism and attention deficit disorder. Don’t believe the numbers. The “rates” have been generated by psychiatric epidemiologists, using a method that is inherently flawed and systematically biased in the direction of overreporting.
How can an entire field of scientific endeavor have gone so far astray? It comes down to simple dollars-and-cents considerations. Epidemiological studies have to sample huge numbers of people in the general population, usually using telephone interviews. It would be too expensive to employ clinicians in so extensive an endeavor—so the studies rely on the cheap labor provided by lay interviewers who have no clinical experience and no discretion in judging whether symptoms are clinically meaningful. They make their diagnoses of psychiatric disorders based on symptom counts alone, with no consideration of whether the symptoms are severe or enduring enough to really warrant diagnosis or treatment.
This results in rates that are always greatly inflated. Psychiatric symptoms in mild form are widely distributed in the general population—from time to time, almost everyone will have some sadness or anxiety, and others may have difficulty concentrating or be a bit eccentric. But isolated or mild symptoms alone do not define psychiatric disorder—they must cohere over time in a specified way and also cause significant distress or impairment. Epidemiologic studies routinely ignore these crucial requirements. They mistakenly diagnose as psychiatric disorder symptoms that are mild, transient, and lacking in clinical significance.13
Results generated in this rough-and-ready way are no more than an upper limit on the prevalence of any given mental disorder. They should never be taken at face value as a true reflection of the real extent of illness in the community. Unfortunately, the exaggerated rates are always reported without proper caveat and are accepted as if they are an accurate reflection of the real prevalence of psychiatric disorder. Disraeli exaggerated only a tad when he said: “There are three kinds of lies: lies, damned lies, and statistics.”
The epidemiologists are good bean counters, but they are not clinicians and probably don’t know any better. Pharma is less innocent—the results are used to promote the misleading notion that psychiatric disorder is everywhere. The National Institute of Mental Health also likes high rates because they support budget requests to Congress—if mental disorder is everywhere, we should be spending a lot more to research its causes.14
Easy-to-Use Drugs Make Excessive Drug Use Far Too Easy
Before the 1950s, the psychotropic drug business was small, and the available drugs were terrible. The opiates and the barbiturates were popular with patients but were nonspecific in their effects and caused big-time problems with addiction and overdose. Bromides, paraldehyde, chloral hydrate, and Miltown were all pretty useless and had hard-to-take side effects.
By the time I began prescribing psychiatric drugs in the 1960s, these old medicines had been mostly superseded by the newly discovered and specific wonder drugs in psychiatry—Thorazine for psychosis, lithium for mania, and Elavil and Nardil for depression. But giving these medicines to patients was still a relatively new thing and a big deal. I trained on the first unit in the United States to use lithium, and we were frankly scared to death of it—an overdose could kill patients or destroy their kidneys, and we were not yet completely sure what were the most effective doses and the safest blood levels. It turned out that the Thorazine doses we were using were way too high and transformed our agitated patients into drugged zombies. The antidepressants available at the time were all extremely risky for use with suicidal outpatients—just a week’s worth of pills could be lethal. And they made life miserable for many of the patients taking them—mouth forever parched, bowel movements few and far between, and fainting on standing up a frequent risk. Because the medicines could cause arrhythmias, a fancy cardiac workup had to precede their initiation. Nardil required extremely strict dietary precautions because it interacted dangerously with many foods and with red wine—a little blue cheese, fava beans, or Chianti could be deadly. All of the first psychotropic drugs were so risky and unpleasant to take that only the sickest patients received them, and only well-trained psychiatrists felt comfortable prescribing them.