The Best American Science and Nature Writing 2012


Edited by Dan Ariely


  This approach to information delivery is a radical departure from how our health care system usually works. Conventional wisdom holds that medical information won’t be heeded unless it sets off alarms. Instead of glowing orbs, we’re pummeled with FDA cautions and Surgeon General warnings and front-page reports, all of which serve to heighten our anxiety about our health. This fear-based approach can work—for a while. But fear, it turns out, is a poor catalyst for sustained behavioral change. After all, biologically our fear response girds us for short-term threats. If nothing threatening actually happens, the fear dissipates. If this happens too many times, we end up simply dismissing the alarms.

  It’s worth noting here how profoundly difficult it is for most people to improve their health. Consider: self-directed smoking-cessation programs typically work for perhaps 5 percent of participants, and weight-loss programs are considered effective if people lose as little as 5 percent of their body weight. Part of the problem is that so much in our lives—the foods we eat, the ads we see, the things our culture celebrates—is driven by negative feedback loops that sustain bad behaviors. A positive feedback loop offers a chance to counterprogram this onslaught and dramatically increase our odds of changing course.

  Though GlowCaps improved compliance by an astonishing 40 percent, feedback loops more typically improve outcomes by about 10 percent compared to traditional methods. That 10 percent figure is surprisingly persistent; it turns up in everything from home energy monitors to smoking cessation programs to those YOUR SPEED signs. At first glance, 10 percent may not seem like a lot. After all, if you’re 250 pounds and obese, losing 25 pounds is a start, but your BMI (body mass index) is likely still in the red zone. But it turns out that 10 percent does matter. A lot. An obese forty-year-old man would spare himself three years of hypertension and nearly two years of diabetes by losing 10 percent of his weight. A 10 percent reduction in home energy consumption could reduce carbon emissions by as much as 20 percent (generating energy during peak demand periods creates more pollution than off-peak generation). And those YOUR SPEED signs? It turns out that reducing speeds by 10 percent, from forty to thirty-five miles an hour, would cut fatal injuries by about half.
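
  To see why even a 10 percent change matters, here is a minimal worked example in Python. It is only illustrative: the 5-foot-10 height is an assumption, not a figure from the article, and BMI is simply weight in kilograms divided by height in meters squared.

```python
# Illustrative arithmetic only; the 5'10" height is an assumed value, not from the article.
LB_TO_KG = 0.453592
IN_TO_M = 0.0254

def bmi(weight_lb: float, height_in: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return (weight_lb * LB_TO_KG) / (height_in * IN_TO_M) ** 2

start_lb = 250
height_in = 70                 # assumed 5'10" adult
after_lb = start_lb * 0.90     # the 10 percent reduction discussed above

print(f"Before: BMI {bmi(start_lb, height_in):.1f}")  # ~35.9, well above the obesity cutoff of 30
print(f"After:  BMI {bmi(after_lb, height_in):.1f}")  # ~32.3 -- better, but still in the red zone
```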

  In other words, 10 percent is something of an inflection point, where lots of great things happen. The results are measurable, the economics calculable. “The value of behavior change is incredibly large: nearly five thousand dollars a year,” says David Rose, citing a CVS pharmacy white paper. “At that rate, we can afford to give every diabetic a connected glucometer. We can give the morbidly obese a Wi-Fi–enabled scale and a pedometer. The value is there; the savings are there. The cost of the sensors is negligible.”

  So feedback loops work. Why? Why does putting our own data in front of us somehow compel us to act? In part, it’s that feedback taps into something core to the human experience, even to our biological origins. Like any organism, humans are self-regulating creatures, with a multitude of systems working to achieve homeostasis. Evolution itself, after all, is a feedback loop, albeit one so elongated as to be imperceptible to an individual. Feedback loops are how we learn, whether we call it trial and error or course correction. In so many areas of life, we succeed when we have some sense of where we stand and some evaluation of our progress. Indeed, we tend to crave this sort of information; it’s something we viscerally want to know, good or bad. As Stanford’s Bandura put it, “People are proactive, aspiring organisms.” Feedback taps into those aspirations.

  The visceral satisfaction and even pleasure we get from feedback loops is the organizing principle behind GreenGoose, a startup being hatched by Brian Krejcarek, a Minnesota native who wears a near-constant smile, so enthusiastic is he about the power of cheap sensors. His mission is to stitch feedback loops into the fabric of our daily lives, one sensor at a time.

  As Krejcarek describes it, GreenGoose started with a goal not too different from Shwetak Patel’s: to measure household consumption of energy. But the company’s mission took a turn in 2009, when he experimented with putting one of those ever-cheaper accelerometers on a bicycle wheel. As the wheel rotated, the sensor picked up the movement, and before long Krejcarek had a vision of a grander plan. “I wondered what else we could measure. Where else could we stick these things?” The answer he came up with: everywhere. The GreenGoose concept starts with a sheet of stickers, each containing an accelerometer labeled with a cartoon icon of a familiar household object—a refrigerator handle, a water bottle, a toothbrush, a yard rake. But the secret to GreenGoose isn’t the accelerometer; that’s a less-than-a-dollar commodity. The key is the algorithm that Krejcarek’s team has coded into the chip next to the accelerometer that recognizes a particular pattern of movement. For a toothbrush, it’s a rapid back-and-forth that indicates somebody is brushing her teeth. For a water bottle, it’s a simple up-and-down that correlates with somebody taking a sip. And so on. In essence, GreenGoose uses sensors to spray feedback loops like atomized perfume throughout our daily life—in our homes, our vehicles, our backyards. “Sensors are these little eyes and ears on whatever we do and how we do it,” Krejcarek says. “If a behavior has a pattern, if we can calculate a desired duration and intensity, we can create a system that rewards that behavior and encourages more of it.” Thus the first component of a feedback loop: data gathering.
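
  As a concrete illustration of that kind of pattern matching, here is a toy sketch in Python. It is not GreenGoose’s actual algorithm, which isn’t described in detail here; the sampling rate and thresholds are invented for illustration. The idea is simply that brushing produces a rapid back-and-forth, so a crude signature is a high rate of sign changes in one accelerometer axis over a short window.

```python
# Hypothetical sketch of movement-pattern detection; not GreenGoose's real firmware.
# Assumes the gravity component has been removed, so readings are centered on zero.
# Brushing strokes oscillate at a few hertz, so one crude signature is how often
# the reading for a single accelerometer axis changes sign within a short window.

def zero_crossing_rate(samples: list[float], sample_hz: float) -> float:
    """Sign changes per second across a window of readings from one axis."""
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if (a < 0) != (b < 0))
    return crossings * sample_hz / max(len(samples) - 1, 1)

def looks_like_brushing(samples: list[float], sample_hz: float = 50.0) -> bool:
    """Flag a window whose oscillation rate falls in a plausible brushing band."""
    rate = zero_crossing_rate(samples, sample_hz)
    return 4.0 <= rate <= 14.0  # thresholds are illustrative guesses
```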

  Then comes the second step: relevance. GreenGoose converts the data into points, with a certain amount of action translating into a certain number of points, say, thirty seconds of teeth brushing for two points. And here Krejcarek gets noticeably excited. “The points can be used in games on our web site,” he says. “Think FarmVille but with live data.” Krejcarek plans to open the platform to game developers, who he hopes will create games that are simple, easy, and sticky. A few hours of raking leaves might build up points that can be used in a gardening game. And the games induce people to earn more points, which means repeating good behaviors. The idea, Krejcarek says, is to “create a bridge between the real world and the virtual world. This has all got to be fun.”
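
  The conversion itself is nothing more than a rate applied to measured activity. In the hypothetical sketch below, only the thirty-seconds-of-brushing-for-two-points rate comes from the text; the raking rate and everything else are invented for illustration.

```python
# Hypothetical points conversion; only the 2-points-per-30-seconds brushing rate
# comes from the text above. The raking rate is made up for illustration.
POINTS_PER_SECOND = {
    "toothbrush": 2 / 30,   # thirty seconds of brushing earns two points
    "rake": 1 / 60,         # invented: a minute of raking earns one point
}

def points_earned(activity: str, seconds: float) -> int:
    """Convert a measured duration of an activity into whole game points."""
    return int(seconds * POINTS_PER_SECOND.get(activity, 0.0))

print(points_earned("toothbrush", 120))  # four 30-second brushings -> 8 points
```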

  As powerful as the idea appears now, not long ago it seemed like a fading pipe dream. Then based in Cambridge, Massachusetts, Krejcarek had nearly run out of cash—not just for his company but for himself. During the day, he was working on GreenGoose in an office building near the MIT campus—and each night, he’d sneak into the building’s air shaft, where he’d stashed an air mattress and some clothes. Then, in late February 2011, he went to the Launch conference in San Francisco, a two-day event where select entrepreneurs get a chance to demo their company to potential funders. Krejcarek hadn’t been selected for an onstage demo, but when the conference organizers saw a crowd eyeing his product on the exhibit floor, he was given four minutes to make a presentation. It was one of those only-in-Silicon Valley moments. The crowd “just got it,” he recalls. Within days, he had nearly $600,000 in new funding. He moved to San Francisco, rented an apartment—and bought a bed. GreenGoose will release its first product, a kit of sensors that encourage pet owners to play and interact with their dogs, with sensors for dog collars, pet toys, and dog doors, sometime this fall.

  Part of the excitement around GreenGoose is that the company is so good at “gamification,” the much-blogged-about notion that game elements like points or levels can be applied to various aspects of our lives. Gamification is exciting because it promises to make the hard stuff in life fun—just sprinkle a little video-game magic and suddenly a burden turns into bliss. But as happens with fads, gamification is both overhyped and misunderstood. It is too often just a shorthand for badges or points, like so many gold stars on a spelling test. But just as no number of gold stars can trick children into thinking that yesterday’s quiz was fun, game mechanics, to work, must be an informing principle, not a veneer.

  With its savvy application of feedback loops, though, GreenGoose is on to more than just the latest fad. The company represents the fruition of a long-promised technological event horizon: the Internet of Things, in which a sensor-rich world measures our every action. This vision, championed by Kevin Ashton at Belkin, Sandy Pentland at MIT, and Bruce Sterling in the pages of Wired, has long had the whiff of vaporware, something promised by futurists but never realized. But as GreenGoose, Belkin, and other companies begin to use sensors to deploy feedback loops throughout our lives, we can finally see the potential of a sensor-rich environment. The Internet of Things isn’t about the things; it’s about us.

  For now, the reality still isn’t as sexy as the visions. Stickers on toothbrushes and plugs in wall sockets aren’t exactly disappearing technology. But maybe requiring people to do a little work—to stick accelerometers around their house or plug a device into a wall socket—is just enough of a nudge to get our brains engaged in the prospect for change. Perhaps it’s good to have the infrastructure of feedback loops just a bit visible now, before they disappear into our environments altogether, so that they can serve as a subtle reminder that we have something to change, that we can do better—and that the tools for doing better are rapidly, finally, turning up all around us.

  JASON DALEY

  What You Don’t Know Can Kill You

  FROM Discover

  IN MARCH 2011, as the world watched the aftermath of the Japanese earthquake/tsunami/nuclear near-meltdown, a curious thing began happening in West Coast pharmacies. Bottles of potassium iodide pills used to treat certain thyroid conditions were flying off the shelves, creating a run on an otherwise obscure nutritional supplement. Online prices jumped from ten dollars a bottle to upward of two hundred dollars. Some residents in California, unable to get the iodide pills, began bingeing on seaweed, which is known to have high iodine levels.

  The Fukushima disaster was practically an infomercial for iodide therapy. The chemical is administered after nuclear exposure because it helps protect the thyroid from radioactive iodine, one of the most dangerous elements of nuclear fallout. Typically, iodide treatment is recommended for residents within a ten-mile radius of a radiation leak. But the people in the United States who were popping pills were at least five thousand miles away from the Japanese reactors. Experts at the Environmental Protection Agency estimated that the dose of radiation that reached the western United States was equivalent to 1/100,000 the exposure one would get from a round-trip international flight.

  Although spending two hundred dollars on iodide pills for an almost nonexistent threat seems ridiculous (and could even be harmful—side effects include skin rashes, nausea, and possible allergic reactions), forty years of research into the way people perceive risk shows that it is par for the course. Earthquakes? Tsunamis? Those things seem inevitable, accepted as acts of God. But an invisible, man-made threat associated with Godzilla and three-eyed fish? Now that’s something to keep you up at night. “There’s a lot of emotion that comes from the radiation in Japan,” says the cognitive psychologist Paul Slovic, an expert on decision making and risk assessment at the University of Oregon. “Even though the earthquake and tsunami took all the lives, all of our attention was focused on the radiation.”

  We like to think that humans are supremely logical, making decisions on the basis of hard data and not on whim. For a good part of the nineteenth and twentieth centuries, economists and social scientists assumed this was true too. The public, they believed, would make rational decisions if only it had the right pie chart or statistical table. But in the late 1960s and early 1970s, that vision of homo economicus—a person who acts in his or her best interest when given accurate information—was kneecapped by researchers investigating the emerging field of risk perception. What they found, and what they have continued teasing out since the early 1970s, is that humans have a hell of a time accurately gauging risk. Not only do we have a system that gives us conflicting advice from two powerful sources—logic and instinct, or the head and the gut—but we are also at the mercy of deep-seated emotional associations and mental shortcuts.

  Even if a risk has an objectively measurable probability—like the chances of dying in a fire, which are 1 in 1,177—people will assess the risk subjectively, mentally calibrating it based on dozens of subconscious calculations. If you have been watching news coverage of wildfires in Texas nonstop, chances are you will assess the risk of dying in a fire higher than will someone who has been floating in a pool all day. If the day is cold and snowy, you are less likely to think global warming is a threat.

  Our hard-wired gut reactions developed in a world full of hungry beasts and warring clans, where they served important functions. Letting the amygdala (part of the brain’s emotional core) take over at the first sign of danger, milliseconds before the neocortex (the thinking part of the brain) was aware that a spear was headed for our chest, was probably a very useful adaptation. Even today those nano-pauses and gut responses save us from getting flattened by buses or dropping a brick on our toes. But in a world where risks are presented in parts-per-billion statistics or as clicks on a Geiger counter, our amygdala is out of its depth.

  A risk-perception apparatus permanently tuned for avoiding mountain lions makes it unlikely that we will ever run screaming from a plate of fatty mac ’n’ cheese. “People are likely to react with little fear to certain types of objectively dangerous risk that evolution has not prepared them for, such as guns, hamburgers, automobiles, smoking, and unsafe sex, even when they recognize the threat at a cognitive level,” says the Carnegie Mellon University researcher George Loewenstein, whose seminal 2001 paper, “Risk as Feelings,” debunked theories that decision making in the face of risk or uncertainty relies largely on reason. “Types of stimuli that people are evolutionarily prepared to fear, such as caged spiders, snakes, or heights, evoke a visceral response even when, at a cognitive level, they are recognized to be harmless,” he says. Even Charles Darwin failed to break the amygdala’s iron grip on risk perception. As an experiment, he placed his face up against the puff adder enclosure at the London Zoo and tried to keep himself from flinching when the snake struck the plate glass. He failed.

  The result is that we focus on the one-in-a-million bogeyman while virtually ignoring the true risks that inhabit our world. News coverage of a shark attack can clear beaches all over the country, even though sharks kill a grand total of about one American annually, on average. That is less than the death count from cattle, which gore or stomp 20 Americans per year. Drowning, on the other hand, takes 3,400 lives a year, without a single frenzied call for mandatory life vests to stop the carnage. A whole industry has boomed around conquering the fear of flying, but while we down beta-blockers in coach, praying not to be one of the 48 average annual airline casualties, we typically give little thought to driving to the grocery store, even though there are more than 30,000 automobile fatalities each year.

  In short, our risk perception is often at direct odds with reality. All those people bidding up the cost of iodide? They would have been better off spending ten dollars on a radon testing kit. The colorless, odorless, radioactive gas, which forms as a byproduct of natural uranium decay in rocks, builds up in homes, causing lung cancer. According to the Environmental Protection Agency, radon exposure kills 21,000 Americans annually.

  David Ropeik, a consultant in risk communication and the author of How Risky Is It, Really? Why Our Fears Don’t Always Match the Facts, has dubbed this disconnect the perception gap. “Even perfect information perfectly provided that addresses people’s concerns will not convince everyone that vaccines don’t cause autism, or that global warming is real, or that fluoride in the drinking water is not a Commie plot,” he says. “Risk communication can’t totally close the perception gap, the difference between our fears and the facts.”

  In the early 1970s, the psychologists Daniel Kahneman, now at Princeton University, and Amos Tversky, who passed away in 1996, began investigating the way people make decisions, identifying a number of biases and mental shortcuts, or heuristics, on which the brain relies to make choices. Later Paul Slovic and his colleagues Baruch Fischhoff, now a professor of social sciences at Carnegie Mellon University, and psychologist Sarah Lichtenstein began investigating how these leaps of logic come into play when people face risk. They developed a tool, called the psychometric paradigm, that describes all the little tricks our brain uses when staring down a bear or deciding to finish the eighteenth hole in a lightning storm.

  Many of our personal biases are unsurprising. For instance, the optimism bias gives us a rosier view of the future than current facts might suggest. We assume we will be richer ten years from now, so it is fine to blow our savings on a boat—we’ll pay it off then. Confirmation bias leads us to prefer information that backs up our current opinions and feelings and to discount information contradictory to those opinions. We also have tendencies to conform our opinions to those of the groups we identify with, to fear man-made risks more than we fear natural ones, and to believe that events causing dread—the technical term for risks that could result in particularly painful or gruesome deaths, like plane crashes and radiation burns—are inherently more risky than other events.

  But it is heuristics—the subtle mental strategies that often give rise to such biases—that do much of the heavy lifting in risk perception. The “availability” heuristic says that the easier a scenario is to conjure, the more common it must be. It is easy to imagine a tornado ripping through a house; that is a scene we see every spring on the news and all the time on reality TV and in movies. Now try imagining someone dying of heart disease. You probably cannot conjure many breaking-news images for that one, and the drawn-out process of atherosclerosis will most likely never be the subject of a summer thriller. The effect? Twisters feel like an immediate threat, although we have only a 1-in-46,000 chance of being killed by a cataclysmic storm. Even a terrible tornado season like the one last spring typically yields fewer than 500 tornado fatalities. Heart disease, on the other hand, which eventually kills 1 in every 4 people in this country, and 800,000 annually, hardly even rates with our gut.

 
