Real psychiatric disorders require prompt diagnosis and active treatment—they don’t get better by themselves and become harder to treat the longer they are allowed to persist. In contrast, the unavoidable everyday problems of life are best resolved through our natural resilience and the healing powers of time. We are a tough species, the successful survivors of ten thousand generations of resourceful ancestors who had to make their precarious daily living and avoid ever-present dangers far beyond our coddled imagining. Our brains and our social structures are adapted to deal with the toughest of circumstances—we are fully capable of finding solutions to most of life’s troubles without medical meddling, which often muddles the situation and makes it worse. As we drift ever more toward the wholesale medicalization of normality, we lose touch with our strong self-healing capacities—forgetting that most problems are not sickness and that only rarely is popping a pill the best solution.
But writing this book carries a serious risk, one I would not have assumed if the risks of not writing it were not even greater. My nightmare scenario is that some people will do a selective reading and draw the completely incorrect and unintended conclusion that I am against psychiatric diagnosis and treatment. They may be overly influenced by my criticisms of the field when it is practiced poorly and miss my very strong endorsement of psychiatry when it is practiced well. My DSM-IV experience taught me that any word that can possibly be misused or misunderstood likely will be, and that authors have to worry about the consequences that can follow not only from the proper use of their writings but also from the predictable distortions. I am already widely and misleadingly quoted by Scientologists and other groups that are rabidly opposed to psychiatry. This book can be similarly misused by them to discourage people who desperately need help from getting it. Suppose this hypothetical chain of events: a basic misunderstanding of my message induces some people who need medicine to stop it precipitously, causing a relapse into illness accompanied by suicidal or violent behavior. Although not directly responsible, I would still have good reason to feel terrible.
Despite these realistic concerns, I decided to go ahead and write the book because the current massive overuse of psychotropic drugs in our country presents a much greater and more immediate danger. My hope is to simultaneously serve two purposes—first to alert people who don’t need treatment to avoid it, but equally to encourage those who do need treatment to seek it out and to stick with it. My critique is directed only against the excesses of psychiatry, not its heart or soul. “Saving normal” and “saving psychiatry” are really two sides of the very same coin. Psychiatry needs to be saved from rushing in where it should fear to tread. Normal needs to be saved from the powerful forces trying to convince us that we are all sick.
PART I
Normality Under Siege
CHAPTER 1
What’s Normal and What’s Not?
The pool of normality is shrinking into a mere puddle.
TIL WYKES
BEFORE WE BEGIN saving normal, we need to figure out what it is. You might expect normal to be an approachable sort of word, confident in its popularity, safe in its preponderance over abnormal. Defining normal should be easy, and being normal should be a modest ambition. Not so. Normal has been badly besieged and is already sadly diminished. Dictionaries can’t provide a satisfying definition; philosophers argue over its meaning; statisticians and psychologists measure it endlessly but fail to capture its essence; sociologists doubt its universality; psychoanalysts doubt its existence; and doctors of the mind and body are busily nipping away at its borders. Normal is losing all purchase—if only we look hard enough perhaps everyone will eventually turn out to be more or less sick. My task in this book will be to try to stop this steady, inexorable encroachment—to help save normal.
How Does the Dictionary Define Normal?
The word normal plays in many different arenas. It started its life in Latin as a carpenter’s square and is still used in geometry to describe right angles and perpendiculars. Not surprisingly, normal then took on any number of right-minded connotations denoting the regular, standard, usual, routine, typical, average, run-of-the-mill, expected, habitual, universal, common, conforming, conventional, correct, or customary. From this, a short jump had normal describing good biological and psychological functioning—not physically sick, not mentally sick.1
The dictionary definitions of normal are all entirely and beguilingly tautological. To know what is normal you have to know what is abnormal. And guess how abnormal is defined in the dictionaries—those things that are not normal or regular or natural or typical or usual or conforming to a norm. Talk about circular tail chasing—each term is defined exclusively as the negative of the other; there is no real definition of either, and no meaningful definitional line between them.
The dichotomous terms “normal” and “abnormal” inspire a sense of instant recognition and false familiarity. We instinctively intuit what they mean in a general way but find it hard to pin them down in any specific case. There is no universal and transcendental definition that works operationally to solve real-world sorting problems.
What Does Philosophy Say?
Surprisingly little. Philosophy has exerted itself endlessly to understand the deeper meanings of big concepts like reality and illusion, how we know things, the nature of human nature, truth, morality, justice, duty, love, beauty, greatness, goodness, evil, mortality, immortality, natural law, and on and on. Normal generally got lost in the philosophical shuffle—perhaps too ordinary and uninteresting to warrant deep philosophical thought.
This neglect finally ended with the Enlightenment attempt to apply philosophy to the more mundane problems of day-to-day living. Utilitarianism provided the first, and remains the only practical, philosophic guidance on how and where to set a boundary between “normal” and “mental disorder.” The guiding assumptions are that “normal” has no universal meaning and can never be defined with precision by the spinning wheels of philosophical deduction—it is very much in the eye of the beholder and is changeable over time, place, and cultures. From this it follows that the boundary separating “normal” from “mental disorder” should be based not on abstract reasoning, but rather on the balance between the positive and the negative consequences that accrue from different choices. Always seek the “greatest good for the greatest number.”2 Make decisions depending on what measurably works best.
But there are also undeniable uncertainties in being a practical utilitarian, and even worse there are dangerous value land mines. “The greatest good for the greatest number” sounds great on paper, but how do you measure the quantities and how do you decide what’s the good? It is no accident that utilitarianism is currently least popular in Germany, where Hitler gave it such an enduringly bad name. During World War II, it was statistically normal for the German population to act in barbaric ways that would be deemed decidedly abnormal before or since—all justified at the time on utilitarian grounds as necessary to provide for the greatest good of the master race. Statistical “normal” (based on the frequency of what is) temporarily, and disastrously, trumped injunctive “normal” (the world as it should be or as it customarily had been).
Granted that, in the wrong hands, utilitarianism can be blind to good values and twisted by bad ones, it still remains the best or only philosophical guide when we embark on the difficult task of setting boundaries between the mentally “normal” and the mentally “abnormal.” This is the approach we used in DSM-IV.
Can Statistics Dictate Normal?
Having previously befuddled linguistics and philosophy, normal next defeated statistics. This may be surprising. Statistics would seem perfectly poised to define normal by switching the method of analysis from playing with words to playing with numbers. The answer could come from the beautifully symmetrical shape of a normal bell curve. Whenever you measure things, there is never one absolutely perfect and replicable right answer. There is always a greater or lesser error of measurement that prevents us from reproducing the same answer every time—no matter how carefully we try and how wonderful our measuring rod. It is inherently impossible for us to pin down the nature of anything with absolute precision. But a truly remarkable thing happens if we go to the trouble of taking enough measurements. Even though no one reading is perfectly accurate or predictable, the aggregate of all readings sorts into the most perfectly predictable and loveliest of curves. At the curve’s peak is the most popular measure; then the successively less likely ones trail down on both sides as we move away from this golden mean.
The bell curve explains a great deal about how life works—most things about nature and about people follow its shape and deviate predictably around the mean. The distributions of every conceivable characteristic in the universe have been measured in enormous and painstakingly collected data sets. Miraculously, the same wonderful “normal curve” always emerges from what might otherwise appear to be a jumble of numbers. The curve provides remarkably precise predictive power on virtually everything that matters to our species and to the world.
Human beings are diverse in every one of our physical, emotional, intellectual, attitudinal, and behavioral characteristics, but our diversity is not at all random. We are “normally” distributed on a bell-shaped curve on any given trait that is continuously distributed in our population. IQ, height, weight, personality traits all cluster around a golden mean with the outliers sorting symmetrically on both sides.
The best way of summarizing this economically and systematically is the standard deviation (SD)—a technical term used in statistics to describe the way measures arrange themselves with reliable regularity around the mean. Being within one SD of the mean for height (which is 5´10˝ for men in the United States with an SD of three inches) puts you in very popular territory joined by 68 percent of the population—34 percent will be just somewhat taller than the completely average guy (up to 6´1˝) and 34 percent will be somewhat shorter (down to 5´7˝). As you get much taller or much shorter you become more and more the rare bird—it gets lonelier and lonelier out there on either side of the further reaches of the bell curve. Only 5 percent of the population drifts off by more than two SDs—in this remote region we have the 2.5 percent of really tall men (over 6´4˝) and the 2.5 percent of short men (under 5´4˝). This is the region of the bell curve at the extreme right and the extreme left, far away from the popular golden mean. Suppose we take it even further and go out to three SDs—here we are in the really rarefied territory with the very few men who are over 6´7˝ or under 5´1˝.3
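The percentages in this height example fall directly out of the normal distribution’s cumulative curve. As a minimal sketch (using the text’s illustrative figures of a 5´10˝ mean and a three-inch SD, converted to inches; these are round teaching numbers, not a precise survey):

```python
from math import erf, sqrt

def normal_cdf(x, mean, sd):
    """Cumulative probability of a normal distribution up to x."""
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

def fraction_between(lo, hi, mean, sd):
    """Fraction of a normally distributed population falling between lo and hi."""
    return normal_cdf(hi, mean, sd) - normal_cdf(lo, mean, sd)

# Illustrative figures from the text: mean 70 inches (5'10"), SD 3 inches.
MEAN, SD = 70.0, 3.0

within_one_sd = fraction_between(67, 73, MEAN, SD)       # 5'7" to 6'1"
beyond_two_sds = 1 - fraction_between(64, 76, MEAN, SD)  # under 5'4" or over 6'4"

print(f"within one SD:  {within_one_sd:.1%}")   # 68.3%
print(f"beyond two SDs: {beyond_two_sds:.1%}")  # 4.6%
```

The exact figures are about 68.3 percent within one SD and about 4.6 percent beyond two SDs; the text’s “68 percent” and “5 percent” are the customary round versions.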
This brings us to the question of the moment—can we use statistics in some simple and precise way to define mental normality? Can the bell curve provide a scientific guide in deciding who is mentally normal and who is not? Conceptually, the answer is “why not,” but practically the answer is “hell no.” In theory, we could arbitrarily decide that the most troubled among us (5 percent, or 10 percent, or 30 percent or whatever) would be defined as mentally ill and that the rest of us are normal. Then we could develop survey instruments, score everyone, draw up the curves, set the boundary line, and thus label the sick. But in practice it just doesn’t work that way. There are just too many statistical, contextual, and value judgments that perplex a simple statistical solution.
First, people on the immediate opposite sides of whatever boundary is set will look almost exactly alike—how silly to call one sick, the other healthy. People who are 6´3˝ and 6´4˝ are both tall. And what percentage to choose? If there are just a few mental health clinicians in a developing country, only the most severely disturbed will appear mentally disordered—so perhaps the boundary will be set so that only 1 percent are not normal. In a New York City saturated with therapists, the level required for disorder is radically defined down and perhaps the boundary would be set at 30 percent or more. It gets completely arbitrary and the pretty curve has no way of telling us where to draw the line.
We must reconcile ourselves to the fact that there is no simple standard for deciding how many of us are abnormal. The normal curve tells us a great deal about the distribution of everything from quarks to koalas, but it doesn’t dictate where normal ends and abnormal begins. A ranting psychotic is far enough away from the mean to be recognized as mentally sick by your aunt Tilly, but how do you decide when everyday anxiety or sadness is severe enough to be considered mental disorder? One thing does seem perfectly clear. On the statistical face of it, it is ridiculous to stretch disorder so elastically that the near-average person can qualify. Shouldn’t most people be normal?
What Does the Doctor Say About Normal?
Until the late 1800s, medicine was ruled by the dogma that health and illness were determined by the relative quantities of the four bodily humors—blood, phlegm, and yellow and black bile. This seems quaint and daffy now, but it was one of mankind’s most enduring ideas (outlasting by far the dogma that the sun revolves around the Earth). Humoral theory was the commonly held belief of a hundred generations of the smartest people in the world and governed medical practice for four millennia. The ticket to normality was achieving perfect balance and harmony of the bodily fluids—nothing in excess, nothing lacking. Only in the late nineteenth century did dramatic advances in physiology, pathology, and neuroscience finally relegate humoral theory to the dusty closet of atavistic medical curiosities.4
But, despite all of its acknowledged wonders, modern medical science has never provided a workable definition of “health” or “illness”—in either the physical or the mental realms. Many have tried and all have failed. Take, for example, the World Health Organization’s definition5: “Health is a state of complete physical, mental, and social well-being and not merely the absence of infirmity.” Who among us would dare claim health if it requires meeting this impossibly high standard? Health loses value as a concept when it is so unobtainable that everyone is at least partly sick. The definition also exudes culture- and context-sensitive value judgments. Who gets to define what is “complete” physical, mental, and social well-being? Is someone sick because his body aches from hard work or he feels sad after a disappointment or is in a family feud? And are the poor inherently sicker because they have fewer resources to achieve the complete well-being required of “health”?
More realistic modern definitions of health focus not on the perfectibility of life, but on the lack of definable disease. This is better, but there is no bright-line definition of physical disease and certainly nothing that works across time, place, and culture. How do we decide what is normal in continuum situations like blood pressure, or cholesterol, or blood sugar, or bone density? Is a slow-growing prostate cancer in an old person best diagnosed and treated aggressively as disease—or left alone because neglect may be much less dangerous than treatment? Is the average expectable forgetting that occurs in old age the disease of dementia or the unavoidable degenerative destiny of a brain grown old? Is a very short child just short or in need of hormone injections?6
Why No Lab Tests to Define Normal in Psychiatry
The human brain is by far the most complicated thing in the known universe. The brain has 100 billion neurons, each of which is connected to 1,000 other neurons—making for a grand total of 100 trillion synaptic connections. Every second, an average of 1,000 signals cross each of these synapses; each signal is modulated by 1,500 proteins and mediated by one or more of dozens of neurotransmitters.7 Brain development is even more improbable—a miracle of intricately choreographed sequential nerve cell migration. Each nerve cell has to somehow find just the right spot and make just the right connections. Given all the many steps involved and all the possible things that can go wrong, you might want to place your bet on Murphy’s Law and chaos theory—the odds seem to be stacked against normal brain functioning. The weird and wonderful thing is that we work as well as we do—the improbable result of exquisitely wrought DNA engineering that has to accomplish trillions and trillions of steps. But any supercomplicated system will have its occasional chaotic glitches. Things can and do go wrong in many different ways to produce each disease, which makes it hard for medical science to take giant steps.
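The scale of these numbers is easy to verify. A trivial back-of-the-envelope check (the inputs are the text’s own round estimates, not precise anatomical measurements):

```python
# Rough arithmetic check of the figures quoted above; the inputs are
# the text's round estimates, not precise anatomical measurements.
neurons = 100e9              # 100 billion neurons
synapses_per_neuron = 1_000  # each connected to roughly 1,000 others

total_synapses = neurons * synapses_per_neuron
print(f"{total_synapses:.0e} synaptic connections")   # 1e+14, i.e. 100 trillion

signals_per_second = total_synapses * 1_000  # ~1,000 signals per synapse per second
print(f"{signals_per_second:.0e} signals per second")  # 1e+17
```

That is on the order of a hundred quadrillion signal events per second, which gives some sense of why no simple lab test has fallen out of this machinery.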
The two most exciting advances in the entire history of biology are unraveling the workings of the human brain and breaking the genetic code. No one could have predicted that we would have come so far and so fast. But there has also been a great disappointment. Although we have learned a great deal about brain functioning, we have not yet figured out ways of translating basic science into clinical psychiatry. The powerful new tools of molecular biology, genetics, and imaging have not yet led to laboratory tests for dementia or depression or schizophrenia or bipolar or obsessive-compulsive disorder or for any other mental disorders. The expectation that there would be a simple gene or neurotransmitter or circuitry explanation for any mental disorder has turned out to be naive and illusory.
We still do not have a single laboratory test in psychiatry. Because there is always more variability in the results within the mental disorder category than between it and normal or other mental disorders, none of the promising biological findings has ever qualified as a diagnostic test. The brain has provided us no low-hanging fruit—thousands of studies on hundreds of putative biological markers have so far come up empty. Why the gaping disconnect—so much knowledge and so little practical utility? As Roger Sperry put it in his Nobel Prize in Medicine acceptance speech: “The more we learn, the more we recognize the unique complexity of any one individual intellect, the stronger the conclusion becomes that the individuality inherent in our brain networks makes that of fingerprints or facial features gross and simple by comparison.”8 Teasing out the heterogeneous underlying mechanisms of mental disorder will be the work of lifetimes. There will not be one pathway to schizophrenia; there may be dozens, perhaps hundreds or thousands.
Saving Normal : An Insider's Revolt Against Out-of-control Psychiatric Diagnosis, Dsm-5, Big Pharma, and the Medicalization of Ordinary Life (9780062229274) Page 2