How Change Happens

by Cass R. Sunstein


  Because inconvenience can be a real problem and because higher rates might hit people especially hard, overdraft protection might well be in the interest of many or most of the people who end up opting in. Note in this regard that state-level regulation of payday lenders has led consumers to resort to equally expensive sources of credit.22 This finding strongly suggests that if people cannot use overdraft protection, they might simply go elsewhere.

  With this point in mind, we might even think that the Federal Reserve’s policy has been a significant success. People are no longer automatically enrolled in overdraft protection, and the vast majority of customers no longer have such protection, which may well be saving them money. At the same time, those who want such protection, or need it, have signed up for it. That sounds like a success for nudging, all things considered. Is there really a problem? That question can be asked whenever institutions succeed in convincing people to opt out of a default rule. Such institutions might be self-interested, but they might also be producing mutually advantageous deals. To know, we need to investigate the details. Who is getting what? Who might be losing?

  The same points might be made if people reject a default rule in favor of protection of privacy in online behavior. Perhaps that form of privacy does not much matter to people. Perhaps those who want them to waive it can offer something to make doing so worth their while. A default rule gives people a kind of entitlement, and entitlements have prices. If people are willing to give up an entitlement for what seems to be a satisfactory price, where is the objection? A counternudge might be quite welcome.

  Nudge Better

  Choice architects might have started with a hypothesis, which is that a nudge—say, disclosure of information—will change behavior in a desirable direction. Perhaps the hypothesis was plausible, but it turns out to be wrong. Once people are given more information, they keep doing exactly what they have been doing.

  Again, the failure of the hypothesis does not, by itself, tell us whether something else should be done. Perhaps people are given a warning about the risks associated with some anticancer drug; perhaps they continue to use the drug in spite of the warning. If so, there might be no problem at all. The goal of the warning is to ensure that choices are informed, not that they are different. If people’s behavior does not change after they receive information about the risks associated with certain activities (say, football or boxing), nothing might have gone wrong.

  Suppose, however, that the underlying problem is significant, and that once people are informed, we have every reason to think at least some of them should do something different. Perhaps people are not applying for benefits for which an application would almost certainly be in their interest. Perhaps they are failing to engage in behavior that would much improve their economic situation or their health (say, taking prescription medicine or seeing a doctor). If so, then other nudges might be tried and tested instead—for example, a clearer warning, uses of social norms, or a default rule. By itself, information might not trigger people’s attention, and some other approach might be superior. And if a default rule fails, it might make sense to accompany it with information or with warnings. There may well be no alternative but to experiment and to test—to identify a way to nudge better.

  Consider a few possibilities. We have seen that if the goal is to change behavior, choice architects should “make it easy”; in the case of an ineffective nudge, the next step might be to “make it even easier.” Information disclosure might be ineffective if it is complex but succeed if it is simple. A reminder might fail if it is long and poorly worded but succeed if it is short and vivid. We have also seen that people’s acts often depend on their social meaning, which can work like a subsidy or a tax; if a nudge affects meaning, it can change a subsidy into a tax or vice versa. For example, certain kinds of information disclosure, and certain kinds of warnings, can make risk-taking behavior seem silly, stupid, or uncool. A nudge might be altered so that it affects social meaning in the desired direction. Publicizing social norms might move behavior, but only if the norms are those of a particular community, not of the nation as a whole. If publicizing national norms does not work, it might make sense to focus on norms that have sway in the relevant community.

  Freedom Failed

  In some cases, freedom of choice itself might be an ambiguous good. For behavioral or other reasons, an apparently welcome and highly effective counternudge, leading consumers or employees in certain directions, might turn out to be welfare reducing. In extreme cases, it might ruin their lives. People might suffer from present bias, optimistic bias, or a problem of self-control. The counternudge might exploit a behavioral bias of some kind. What might be necessary is some kind of counter-counternudge—for example, a reminder or a warning to discourage people from opting into a program that generally is not in their interest.

  In the case of overdraft protection programs, some of those who opt in and who end up receiving that protection are probably worse off as a result. Perhaps they do not understand the program and its costs; perhaps they were duped by a behaviorally informed messaging campaign. Perhaps they are at risk of overdrawing their accounts not because they need a loan, but because they have not focused on those accounts and on whether they are about to go over. Perhaps they are insufficiently informed or attentive. To evaluate the existing situation, we need to know a great deal about the population of people who opt in. In fact, this is often the key question, and it is an empirical one. The fact that they have opted in is not decisive.

  The example can be taken as illustrative. If a default rule or some other nudge is well-designed to protect people from their own mistakes and it does not stick, then its failure is nothing to celebrate. The fact of its ineffectiveness is a tribute to the success of a self-interested actor seeking to exploit behavioral biases. The counternudge is a form of manipulation or exploitation, something to counteract rather than to celebrate. Perhaps the counternudge encourages people to smoke or to drink excessively; perhaps it undoes the effect of the nudge, causing premature deaths in the process.

  The same point applies to strong antecedent preferences, which might be based on mistakes of one or another kind. A GPS device is a defining nudge, and if people reject the indicated route on the ground that they know better, they might end up lost. The general point is that if the decision to opt out is a blunder for many or most, then there is an argument for a more aggressive approach. The overdraft example demonstrates the importance of focusing not only on default rules but also on two other kinds of rules, operating as counter-counternudges: altering rules and framing rules.23

  Altering rules establish how people can change the default. If choice architects want to simplify people’s decisions, and if they lack confidence about whether a default is suitable for everyone, they might say that consumers can opt in or opt out by making an easy phone call (good) or by sending a quick email (even better). Alternatively, choice architects, confident that the default is right for the overwhelming majority of people, might increase the costs of departing from it. For example, they might require people to fill out complex forms or impose a cooling-off period. They might also say that even if people make a change, the outcome will “revert” to the default after a certain period (say, a year), requiring repeated steps. Or they might require some form of education or training, insisting on a measure of learning before people depart from the default.

  Framing rules establish and regulate the kinds of “frames” that people can use when they try to convince people to opt in or opt out. We have seen that financial institutions enlisted loss aversion in support of opting in. Behaviorally informed strategies of this kind could turn out to be highly effective. But that is a potential problem. Even if they are not technically deceptive, they might count as manipulative, and they might prove harmful. Those who believe in freedom of choice but seek to avoid manipulation or harm might want to constrain the permissible set of frames—subject, of course, to existing safeguards for freedom of speech. Framing rules might be used to reduce the risk of manipulation.

  Consider an analogy. If a company says that its product is “90 percent fat-free,” people are likely to be drawn to it, far more so than if the company says that its product is “10 percent fat.” The two phrases mean the same thing, and the 90 percent fat-free frame is legitimately seen as a form of manipulation. In 2011, the American government allowed companies to say that their products are 90 percent fat-free—but only if they also say that they are 10 percent fat. We could imagine similar constraints on misleading or manipulative frames aimed at getting people to opt out of the default. Alternatively, choice architects might use behaviorally informed strategies of their own, supplementing a default rule with, say, uses of loss aversion or social norms to magnify its impact.24

  To the extent that choice architects are in the business of choosing among altering rules and framing rules, they can take steps to make default rules more likely to stick, even if they do not impose mandates. They might conclude that mandates and prohibitions would be a terrible idea, but that it makes sense to make it harder for people to depart from default rules. Sometimes that is the right conclusion. The problem is that when choice architects move in this direction, they lose some of the advantages of default rules, which have the virtue of easy reversibility, at least in principle. If the altering rules are made sufficiently onerous, the default rule might not be all that different from a mandate.

  There is another possibility: Choice architects might venture a more personalized approach. They might learn that one default rule suits one group of people and that another suits a different group; by tailoring default rules to diverse situations, they might have a larger effect than they would with a mass default rule.25 Or they might learn that an identifiable subgroup is opting out, either for good reasons or for bad ones. (Recall that aggregate effectiveness data might disguise very large effects or very small ones for relevant subgroups.) If the reasons do not seem good, choice architects might adopt altering rules or framing rules as safeguards, or they might enlist, say, information and warnings. If they can be made to work well, more personalized approaches have the promise of preserving freedom of choice while simultaneously increasing effectiveness.

  But preserving freedom of choice might not be a good idea. Indeed, we can easily imagine cases in which a mandate or ban might be justified on behavioral or other grounds. Most democratic nations have mandatory social security systems, based in part on a belief that “present bias” is a potent force and a conclusion that some level of compulsory savings is justified on welfare grounds. Food-safety regulations forbid people from buying goods that pose risks that reasonable people would not run. Such regulations might be rooted in a belief that consumers lack relevant information (and it is too difficult or costly to provide it to them), or they might be rooted in a belief that people suffer from limited attention or optimistic bias. Some medicines are not allowed to enter the market, and for many others a prescription is required; people are not permitted to purchase them on their own.

  Many occupational safety and health regulations ultimately have a paternalistic justification, and they take the form of mandates and bans, not nudges. Consider, for example, the domains of fuel economy and energy efficiency. To be sure, requirements in those domains reduce externalities in the form of conventional air pollutants, greenhouse gases, and energy insecurity. But if we consider only those externalities, the benefits of those requirements are usually lower than the costs. The vast majority of the monetized benefits accrue to consumers, in the form of reduced costs of gasoline and energy.

  On standard economic grounds, those benefits should not count in the analysis of costs and benefits, because consumers can obtain them through their own free choices; if they are not doing so, it must be because the relevant goods (automobiles, refrigerators) are inferior along some dimension. The US government’s current response is behavioral; it is that in the domain of fuel economy and energy efficiency, consumers are making some kind of mistake, perhaps because of present bias, perhaps because of a lack of sufficient salience. Some people contest this argument. But if it is correct, the argument for some kind of mandate is secure on welfare grounds. (I return to this issue in chapter 10.)

  The same analysis holds, and more simply, if the interests of third parties are involved. Default rules are typically meant to protect choosers, but in some cases, third parties are the real concern. For example, a green default rule, designed to prevent environmental harm, is meant to reduce externalities and to solve a collective action problem, not to protect choosers as such. A nudge, in the form of information disclosure or a default rule, is not the preferred approach to pollution (including carbon emissions). If a nudge is to be used, it is because it is a complement to more aggressive approaches or because such approaches are not feasible (perhaps for political reasons). But if a default rule proves ineffective—for one or another of the reasons sketched here—there will be a strong argument for economic incentives, mandates, and bans.

  Further Considerations

  Default rules are often thought to be the most effective nudge—but for two reasons, they might not have the expected impact. The first involves strong antecedent preferences. The second involves the use of counternudges by those with an economic or other interest in convincing choosers to opt out.

  These two reasons help account for the potential ineffectiveness of nudges in many other contexts. Information, warnings, and reminders will not work if people are determined to engage in the underlying behavior (smoking, drinking, texting while driving, eating unhealthy foods). And if, for example, cigarette companies and sellers of alcoholic beverages have opportunities to engage choosers, they might be able to weaken or undo the effects of information, warnings, and reminders.

  It is important to observe that nudges may be ineffective for independent reasons. Consider five.

  1. If a nudge is based on a plausible but inaccurate understanding of behavior and of the kinds of things to which people respond, it might have no impact. This point is especially important; it suggests the immense value of testing apparently reasonable behavioral hypotheses. We might believe, for example, that because people are loss averse, a warning about potential losses will change behavior. But if such a warning frightens people or makes them think that they cannot engage in self-help, and essentially freezes them, then we might see little or no change. We might believe that people are not applying for important services because of excessive complexity and that simplification will make a big difference. But perhaps it will not; skepticism, fear, or inertia might be the problem, and simplification might not much help.

  Or we might hypothesize that an understanding of social norms will have a large effect on what people do. But if the target audience is indifferent to (general) social norms and is happy to defy them, then use of social norms might have no impact. Some nudges seem promising because of their effects on small or perhaps idiosyncratic populations. Whether their impact will diminish or be eliminated when applied elsewhere is a question that needs to be tested. It is true that the failure of a behavioral hypothesis should pave the way to alternative or more refined hypotheses, including a specification of the circumstances in which the original hypothesis will or will not hold.

  2. If information is confusing or complex to process, people might be unaffected by it. There is a lively debate about the general effectiveness of disclosure strategies; some people are quite skeptical. What seems clear is that the design of such strategies is extremely important. Disclosure nudges, or educative nudges in general, may have far less impact than one might think in the abstract.26

  3. People might show reactance to some nudges, rejecting an official effort to steer because it is an official effort to steer. Most work on the subject of reactance explores how people rebel against mandates and bans because they wish to maintain control.27 Default rules are not mandates, and hence it might be expected that reactance would be a nonissue. But as for mandates, so for defaults: they might prove ineffective if people are angry or resentful that they have been subjected to them. So too, an effort to invoke social norms might not work if people do not care about social norms or if they want to defy them. We are at the early stages of learning about the relationship between reactance and nudges, and thus far it seems safe to say that, for the most part, reactance is not likely to be an issue, simply because autonomy is preserved. But in some cases, it might prove to be important (see chapter 9 for more details).

  4. Nudges might have only a short-term effect.28 If people see a single reminder, they might pay attention to it—but only once. If people receive information about health risks, their behavior might be influenced—but after a time, that information might become something like background noise or furniture. It might cease to be salient or meaningful. Even a graphic warning might lose its resonance after a time. By contrast, a default rule is more likely to have a persistent effect, because people have to work to change it—but after a while, its informational signal might become muted, or inertia might be overcome. We continue to learn about the circumstances in which a nudge is likely to produce long-term effects.29

  5. Some nudges might have an influence on the desired conduct but also produce compensating behavior, nullifying the overall effect. Suppose, for example, that smart cafeteria design can lead high school students to eat healthy foods. Suppose too that such students eat unhealthy foods at snack time, at dinner, or after school hours. If so, the nudge will not improve public health. There is a general risk of a “rebound effect”—as, for example, when fuel-efficient cars lead people to drive more, reducing and potentially nullifying the effects of interventions designed to increase fuel efficiency. Perhaps a nudge will encourage people to exercise more than they now do, but perhaps they will react by eating more.

 
