
The Best American Science and Nature Writing 2012


Edited by Dan Ariely


  The “representativeness” heuristic makes us think something is probable if it is part of a known set of characteristics. John wears glasses, is quiet, and carries a calculator. John is therefore . . . a mathematician? An engineer? His attributes taken together seem to fit the common stereotype.

  But all of those mental rules of thumb and biases banging around our brain are only part of a larger risk-perception system. The affect heuristic, a type of unified-field theory of risk perception, encompasses those biases and also hints at other factors researchers are slowly untangling, factors that may be even more influential in the way we make choices. Slovic, who recently edited The Feeling of Risk: New Perspectives on Risk Perception, calls affect a “faint whisper of emotion” that creeps into our decisions. In fact, just reading words like radiation or head trauma creates a split second of emotion that can subconsciously influence us. His research and studies by others have shown that positive feelings associated with a choice tend to make us think it has more benefits; negative feelings make us think an action is riskier. One study by Slovic showed that when people decide to start smoking despite years of exposure to antismoking campaigns, they hardly ever think about the risks. Instead, it’s all about the short-term “hedonic” pleasure. The good outweighs the bad, which they never fully expect to experience.

  Our fixation on illusory threats at the expense of real ones influences more than just our personal lifestyle choices. Public policy and mass action are also at stake. The Office of National Drug Control Policy reports that prescription drug overdoses have killed more people than crack and heroin combined did in the 1970s and 1980s. Law enforcement and the media were obsessed with crack, yet it was only recently that prescription drug abuse merited even an after-school special.

  Despite the many obviously irrational ways we behave, social scientists have only just begun to systematically document and understand this central aspect of our nature. In the 1960s and 1970s, many still clung to the Homo economicus model. They argued that releasing detailed information about nuclear power and pesticides would convince the public that these industries were safe. But the information drop was an epic backfire and helped spawn opposition groups that exist to this day. Part of the resistance stemmed from a reasonable mistrust of industry spin. Horrific incidents like those at Love Canal and Three Mile Island did not help. Yet one of the biggest obstacles was that industry tried to frame risk purely in terms of data, without addressing the fear that is an instinctual reaction to their technologies.

  The strategy persists even today. In the aftermath of Japan’s nuclear crisis, many nuclear-energy boosters were quick to cite a study commissioned by the Boston-based nonprofit Clean Air Task Force. The study showed that pollution from coal plants is responsible for 13,000 premature deaths and 20,000 heart attacks in the United States each year, while nuclear power has never been implicated in a single death in this country. True as that may be, numbers alone cannot explain away the cold dread caused by the specter of radiation. Just think of all those alarming images of workers clad in radiation suits waving Geiger counters over the anxious citizens of Japan. Seaweed, anyone?

  At least a few technology promoters have become much more savvy in understanding the way the public perceives risk. The nanotechnology world in particular has taken a keen interest in this process, since even in its infancy it has faced high-profile fears. Nanotech, a field so broad that even its backers have trouble defining it, deals with materials and devices whose components are often smaller than 100 nanometers, about one ten-millionth of a meter. In the late 1980s, the book Engines of Creation by the nanotechnologist K. Eric Drexler put forth the terrifying idea of nanoscale self-replicating robots that grow into clouds of “gray goo” and devour the world. Soon gray goo was turning up in video games, magazine stories, and delightfully bad Hollywood action flicks (see, for instance, the last G.I. Joe movie).

  The odds of nanotechnology’s killing off humanity are extremely remote, but the science is obviously not without real risks. In 2008 a study led by researchers at the University of Edinburgh suggested that carbon nanotubes, a promising material that could be used in everything from bicycles to electrical circuits, might interact with the body the same way asbestos does. In another study, scientists at the University of Utah found that nanoscopic particles of silver used as an antimicrobial in hundreds of products, including jeans, baby bottles, and washing machines, can deform fish embryos.

  The nanotech community is eager to put such risks in perspective. “In Europe, people made decisions about genetically modified food irrespective of the technology,” says Andrew Maynard, director of the Risk Science Center at the University of Michigan and an editor of the International Handbook on Regulating Nanotechnologies. “People felt they were being bullied into the technology by big corporations, and they didn’t like it. There have been very small hints of that in nanotechnology.” He points to incidents in which sunblock makers did not inform the public that they were including zinc oxide nanoparticles in their products, stoking the skepticism and fears of some consumers.

  For Maynard and his colleagues, influencing public perception has been an uphill battle. A 2007 study conducted by the Cultural Cognition Project at Yale Law School and coauthored by Paul Slovic surveyed 1,850 people about the risks and benefits of nanotech. Even though 81 percent of participants knew nothing or very little about nanotechnology before starting the survey, 89 percent of all respondents said they had an opinion on whether nanotech’s benefits outweighed its risks. In other words, people made a risk judgment based on factors that had little to do with any knowledge about the technology itself. And as with public reaction to nuclear power, more information did little to unite opinions. “Because people with different values are predisposed to draw different factual conclusions from the same information, it cannot be assumed that simply supplying accurate information will allow members of the public to reach a consensus on nanotechnology risks, much less a consensus that promotes their common welfare,” the study concluded.

  It should come as no surprise that nanotech hits many of the fear buttons in the psychometric paradigm: it is a man-made risk; much of it is difficult to see or imagine; and the only available images we can associate with it are frightening movie scenes, such as a cloud of robots eating the Eiffel Tower. “In many ways, this has been a grand experiment in how to introduce a product to the market in a new way,” Maynard says. “Whether all the up-front effort has gotten us to a place where we can have a better conversation remains to be seen.”

  That job will be immeasurably more difficult if the media—in particular cable news—ever decide to make nanotech their fear du jour. In the summer of 2001, if you switched on the television or picked up a newsmagazine, you might think the ocean’s top predators had banded together to take on humanity. After eight-year-old Jessie Arbogast’s arm was severed by a seven-foot bull shark on the Fourth of July weekend while the child was playing in the surf off Santa Rosa Island, near Pensacola, Florida, cable news put all its muscle behind the story. Ten days later, a surfer was bitten just six miles from the beach where Jessie had been mauled. Then a lifeguard in New York claimed he had been attacked. There was almost round-the-clock coverage of the “Summer of the Shark,” as it came to be known. By August, according to an analysis by the historian April Eisman of Iowa State University, it was the third-most-covered story of the summer until the September 11 attacks knocked sharks off the cable news channels.

  All that media coverage created a sort of feedback loop. Because people were seeing so many sharks on television and reading about them, the “availability” heuristic was screaming at them that sharks were an imminent threat.

  “Certainly anytime we have a situation like that where there’s such overwhelming media attention, it’s going to leave a memory in the population,” says George Burgess, curator of the International Shark Attack File at the Florida Museum of Natural History, who fielded thirty to forty media calls a day that summer. “Perception problems have always been there with sharks, and there’s a continued media interest in vilifying them. It makes a situation where the risk perceptions of the populace have to be continually worked on to break down stereotypes. Anytime there’s a big shark event, you take a couple steps backward, which requires scientists and conservationists to get the real word out.”

  Then again, getting out the real word comes with its own risks—like the risk of getting the real word wrong. Misinformation is especially toxic to risk perception because it can reinforce generalized confirmation biases and erode public trust in scientific data. As scientists studying the societal impact of the Chernobyl meltdown have learned, doubt is difficult to undo. In 2006, twenty years after reactor number 4 at the Chernobyl nuclear power plant was encased in cement, the World Health Organization (WHO) and the International Atomic Energy Agency released a report compiled by a panel of one hundred scientists on the long-term health effects of the level-7 nuclear disaster and future risks for those exposed. Of the 600,000 recovery workers and local residents who received a significant dose of radiation, the WHO estimates that up to 4,000, or 0.7 percent, will develop a fatal cancer related to Chernobyl. For the 5 million people living in less contaminated areas of Ukraine, Russia, and Belarus, radiation from the meltdown is expected to increase cancer rates by less than 1 percent.

  Even though the percentages are low, the numbers are little comfort for the people living in the shadow of the reactor’s cement sarcophagus who are literally worrying themselves sick. In the same report, the WHO states that “the mental health impact of Chernobyl is the largest problem unleashed by the accident to date,” pointing out that fear of contamination and uncertainty about the future have led to widespread anxiety, depression, hypochondria, alcoholism, a sense of victimhood, and a fatalistic outlook that is extreme even by Russian standards. A recent study in the journal Radiology concludes that “the Chernobyl accident showed that overestimating radiation risks could be more detrimental than underestimating them. Misinformation partially led to traumatic evacuations of about 200,000 individuals, an estimated 1,250 suicides, and between 100,000 and 200,000 elective abortions.”

  It is hard to fault the Chernobyl survivors for worrying, especially when it took twenty years for the scientific community to get a grip on the aftereffects of the disaster, and even those numbers are disputed. An analysis commissioned by Greenpeace in response to the WHO report predicts that the Chernobyl disaster will result in about 270,000 cancers and 93,000 fatal cases.

  Chernobyl is far from the only chilling illustration of what can happen when we get risk wrong. During the year following the September 11 attacks, millions of Americans opted out of air travel and slipped behind the wheel instead. While they crisscrossed the country, listening to breathless news coverage of anthrax attacks, extremists, and Homeland Security, they faced a much more concrete risk. All those extra cars on the road increased traffic fatalities by nearly 1,600. Airlines, on the other hand, recorded no fatalities.

  It is unlikely that our intellect can ever paper over our gut reactions to risk. But a fuller understanding of the science is beginning to percolate into society. Earlier this year, David Ropeik and others hosted a conference on risk in Washington, DC, bringing together scientists, policymakers, and others to discuss how risk perception and communication impact society. “Risk perception is not emotion and reason, or facts and feelings. It’s both, inescapably, down at the very wiring of our brain,” says Ropeik. “We can’t undo this. What I heard at that meeting was people beginning to accept this and to realize that society needs to think more holistically about what risk means.”

  Ropeik says policymakers need to stop issuing reams of statistics and start making policies that manage our risk-perception system instead of trying to reason with it. Cass Sunstein, a Harvard law professor who is now the administrator of the White House Office of Information and Regulatory Affairs, suggests a few ways to do this in Nudge: Improving Decisions about Health, Wealth, and Happiness, the 2008 book he coauthored with the economist Richard Thaler. He points to the organ donor crisis, in which thousands of people die each year because others are too fearful or uncertain to donate organs. People tend to believe that doctors won’t work as hard to save donors or that donors won’t be able to have an open-casket funeral (both false). And the gory mental images of organs being harvested from a body give a definite negative affect to the exchange. As a result, too few people focus on the lives that could be saved. Sunstein suggests—controversially—“mandated choice,” in which people must check “yes” or “no” to organ donation on their driver’s license application. Those with strong feelings can decline. Some lawmakers propose going one step further and presuming that people want to donate their organs unless they opt out.

  In the end, Sunstein argues, by normalizing organ donation as a routine medical practice instead of a rare, important, and gruesome event, the policy would short-circuit our fear reactions and nudge us toward a positive societal goal. It is this type of policy that Ropeik is trying to get the administration to think about, and it is the next step in risk perception and risk communication. “Our risk perception is flawed enough to create harm,” he says, “but it’s something society can do something about.”

  DAVID DOBBS

  Beautiful Brains

  FROM National Geographic

  ALTHOUGH YOU KNOW your teenager takes some chances, it can be a shock to hear about them. One fine May morning not long ago my oldest son, seventeen at the time, phoned to tell me that he had just spent a couple hours at the state police barracks. Apparently he had been driving “a little fast.” What, I asked, was “a little fast”? Turns out this product of my genes and loving care, the boy-man I had swaddled, coddled, cooed at, and then pushed and pulled to the brink of manhood, had been flying down the highway at 113 miles an hour.

  “That’s more than a little fast,” I said.

  He agreed. In fact, he sounded somber and contrite. He did not object when I told him he’d have to pay the fines and probably a lawyer. He did not argue when I pointed out that if anything happens at that speed—a dog in the road, a blown tire, a sneeze—he dies. He was in fact almost irritatingly reasonable. He even proffered that the cop did the right thing in stopping him, for, as he put it, “We can’t all go around doing 113.”

  He did, however, object to one thing. He didn’t like it that one of the several citations he had received was for reckless driving.

  “Well,” I huffed, sensing an opportunity to finally yell at him, “what would you call it?”

  “It’s just not accurate,” he said calmly. “‘Reckless’ sounds like you’re not paying attention. But I was. I made a deliberate point of doing this on an empty stretch of dry interstate, in broad daylight, with good sight lines and no traffic. I mean, I wasn’t just gunning the thing. I was driving.

  “I guess that’s what I want you to know. If it makes you feel any better, I was really focused.”

  Actually, it did make me feel better. That bothered me, for I didn’t understand why. Now I do.

  My son’s high-speed adventure raised the question long asked by people who have pondered the class of humans we call teenagers: What on earth was he doing? Parents often phrase this question more colorfully. Scientists put it more coolly. They ask, What can explain this behavior? But even that is just another way of wondering, What is wrong with these kids? Why do they act this way? The question passes judgment even as it inquires.

  Through the ages, most answers have cited dark forces that uniquely affect the teen. Aristotle concluded more than 2,300 years ago that “the young are heated by Nature as drunken men by wine.” A shepherd in William Shakespeare’s The Winter’s Tale wishes “there were no age between ten and three-and-twenty, or that youth would sleep out the rest; for there is nothing in the between but getting wenches with child, wronging the ancientry, stealing, fighting.” His lament colors most modern scientific inquiries as well. G. Stanley Hall, who formalized adolescent studies with his 1904 Adolescence: Its Psychology and Its Relations to Physiology, Anthropology, Sociology, Sex, Crime, Religion and Education, believed this period of “storm and stress” replicated earlier, less civilized stages of human development. Freud saw adolescence as an expression of torturous psychosexual conflict; Erik Erikson, as the most tumultuous of life’s several identity crises. Adolescence: always a problem.

  Such thinking carried into the late twentieth century, when researchers developed brain-imaging technology that enabled them to see the teen brain in enough detail to track both its physical development and its patterns of activity. These imaging tools offered a new way to ask the same question—what’s wrong with these kids?—and revealed an answer that surprised almost everyone. Our brains, it turned out, take much longer to develop than we had thought. This revelation suggested both a simplistic, unflattering explanation for teens’ maddening behavior—and a more complex, affirmative explanation as well.

  The first full series of scans of the developing adolescent brain—a National Institutes of Health (NIH) project that studied over a hundred young people as they grew up during the 1990s—showed that our brains undergo a massive reorganization between our twelfth and twenty-fifth years. The brain doesn’t actually grow very much during this period. It has already reached 90 percent of its full size by the time a person is six, and a thickening skull accounts for most head growth afterward. But as we move through adolescence, the brain undergoes extensive remodeling, resembling a network and wiring upgrade.

 
