How Change Happens

by Cass R. Sunstein


  Availability

  The availability heuristic is particularly important for purposes of understanding people’s fear and their interest in precautions.17 When people use the availability heuristic, they assess the magnitude of risks by asking whether examples can readily come to mind. If people can easily think of such examples, they are far more likely to be frightened than if they cannot. In fact, a belief in the benevolence of nature often stems from the availability heuristic, as people recall cases in which “tampering” resulted in serious social harm.

  Furthermore, “a class whose instances are easily retrieved will appear more numerous than a class of equal frequency whose instances are less retrievable.”18 Consider a simple study showing participants a list of well-known people of both sexes and asking whether the list contains more names of women or more names of men. In lists in which the men were especially famous, people thought that there were more names of men; in lists in which the women were more famous, people thought that there were more names of women.19

  This is a point about how familiarity can affect the availability of instances. A risk that is familiar, like that associated with smoking, will be seen as more serious than a risk that is less familiar, like that associated with sunbathing. But salience is important as well. “For example, the impact of seeing a house burning on the subjective probability of such accidents is probably greater than the impact of reading about a fire in the local paper.”20 So too, recent events will have a greater impact than earlier ones. The point helps explain much risk-related behavior, including decisions to take precautions. Whether people will buy insurance for natural disasters is greatly affected by recent experiences.21 If floods have not occurred in the immediate past, people who live on flood plains are far less likely to purchase insurance. In the aftermath of an earthquake, purchases of earthquake insurance rise sharply—but they decline steadily from that point as vivid memories recede. Note that the use of the availability heuristic, in these contexts, is hardly irrational. Both insurance and precautionary measures can be expensive, and what has happened before seems, much of the time, to be the best available guide to what will happen again. The problem is that the availability heuristic can lead to serious errors, in terms of both excessive fear and neglect.

  The availability heuristic helps to explain the operation of the precautionary principle for a simple reason: sometimes a certain risk, said to call for precautions, is cognitively available, whereas other risks, including the risks associated with regulation itself, are not. For example, it is easy to see that arsenic is potentially dangerous; arsenic is well known as a poison and supplies the first word of the title of a famous movie about poisoning, Arsenic and Old Lace. By contrast, the judgment that arsenic regulation might lead people to use less safe alternatives requires a relatively complex mental operation. In many cases in which the precautionary principle seems to offer guidance, the reason is that some of the relevant risks are available while others are barely visible. And when people seek to protect nature against human intervention, it is often because the dangers of intervention are visible and familiar while the dangers of nonintervention are not.

  Precautions, Uncertainty, and Irreversibility

  Some of the central claims on behalf of the precautionary principle involve four legitimate points: (1) uncertainty, (2) learning over time, (3) irreversibility, and (4) the need for epistemic humility on the part of scientists. With the help of these points, we might make our way toward more refined and defensible understandings of the principle. A central question involves the appropriate approach to “worst-case” thinking. This is not the place for a full analysis, which would require investigation of some complex issues in decision theory,22 but three points should be uncontroversial (bracketing hard questions about quantification).

  First, if a product or activity has modest or no benefits, the argument for taking precautions is far stronger than if the benefits are significant. Second, if a product or activity imposes a trivially small risk (taking into account both the probability and the magnitude of a bad outcome), then the product or activity should not be banned or regulated (including through labels) if it also promises significant benefits. Third, if a product creates a small (but not trivial) risk of catastrophe, there is a strong argument for banning or regulating it (including through labels) if the benefits are very modest and thus do not justify running that risk.

  Some of the most difficult cases arise when (1) a product or activity has significant benefits and (2) (a) the probability of a bad outcome is difficult or impossible to specify (creating a situation of uncertainty rather than risk),23 and (b) the bad outcome is catastrophic, or (c) the harms associated with the bad outcome cannot be identified (creating a situation of ignorance). In such difficult cases, it is not simple to balance the two sides of the ledger, and there is a real argument for eliminating the worst-case scenario.24 And if a risk is irreversible (a concept that requires independent analysis), the argument for addressing it is fortified. A more refined precautionary principle, or several such principles, might be justified in this light; consider the irreversible harm precautionary principle, or the catastrophic harm precautionary principle.25
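  To see what “eliminating the worst-case scenario” involves in practice, consider a minimal sketch, not part of Sunstein’s text, with purely hypothetical payoffs (written in Python). Under Knightian uncertainty no probabilities can be assigned, so an expected-value calculation is unavailable; a maximin rule of the sort discussed in note 23 simply compares each option’s worst outcome and selects the option whose worst case is least bad.

    # Hypothetical net outcomes (arbitrary welfare units) for two policy
    # options across three possible states of the world. The numbers are
    # invented solely for illustration.
    outcomes = {
        "ban the product": {"benign": -10, "moderate": -10, "catastrophic": -10},
        "allow the product": {"benign": 50, "moderate": 20, "catastrophic": -1000},
    }

    # Maximin: rank each option by its worst-case outcome and pick the
    # option whose worst case is least bad. No probabilities are needed,
    # which is why the rule is attractive under genuine uncertainty.
    best = max(outcomes, key=lambda option: min(outcomes[option].values()))
    print(best)  # "ban the product": a worst case of -10 beats -1000

  The sketch settles nothing about when maximin is the right rule; it only makes explicit what eliminating the worst case amounts to once probabilities cannot be assigned.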

  Wider Viewscreens

  I have not suggested any particular substitute for the precautionary principle. But none of the arguments here supports the views of Aaron Wildavsky, an acute and influential political scientist with a special interest in risk regulation, who also rejects the precautionary principle.26 In Wildavsky’s view, the notion of precaution should be abandoned and replaced with a principle of resilience, based on an understanding that nature and society are quite able to adjust to even strong shocks and that the ultimate dangers are therefore smaller than we are likely to fear. It would follow from Wildavsky’s resilience principle that people should be less concerned than many now are with the risks associated with, for example, climate change, foodborne illness, and the destruction of the ozone layer.

  Unfortunately, the principle of resilience is no better than that of precaution. Some systems, natural and social, are resilient, but some are not. Whether an ecosystem or a society is resilient cannot be decided in the abstract. In any case, resilience is a matter of degree. Everything depends on the facts. The resilience principle should be understood as a heuristic, or mental shortcut, one that favors inaction in the face of possibly damaging technological change. Like most heuristics, the resilience principle will work well in many circumstances, but it can also lead to systematic and even deadly errors.

  A better approach would be to acknowledge that a wide variety of adverse effects may come from inaction, regulation, and everything between. Such an approach would attempt to consider all of those adverse effects, not simply a subset. When existing knowledge does not allow clear assessments of the full range of adverse effects, such an approach would develop rules of thumb, helping to show the appropriate course of action in the face of uncertainty. When societies face risks of catastrophe, it is appropriate to act, not to stand by and merely to hope. A sensible approach would attempt to counteract, rather than to embody, the various cognitive limitations that people face in thinking about risks. An effort to produce a fair accounting of the universe of dangers on all sides should also help to diminish the danger of interest-group manipulation.

  To be sure, public alarm, even if ill-informed, is itself a harm, and it is likely to lead to additional harms, perhaps in the form of large-scale ripple effects. A sensible approach to risk will attempt to reduce public fear even if it is baseless. My goal here has been not to deny that point, but to explain the otherwise puzzling appeal of the precautionary principle and to isolate the strategies that help make it operational.

  At the individual level, these strategies are hardly senseless, especially for people who lack much information or who do the best they can by focusing on only one aspect of the situation at hand. But for governments, the precautionary principle is not sensible, for the simple reason that once the viewscreen is widened, it becomes clear that the principle provides no guidance at all. Rational nations should certainly take precautions, but they should not adopt the precautionary principle.

  Notes

  1. The literature is vast. See, for general discussion, The Precautionary Principle in the 20th Century: Late Lessons from Early Warnings (Poul Harremoes et al. eds., 2002); Arie Trouwborst, Evolution and Status of the Precautionary Principle in International Law (2002); Interpreting the Precautionary Principle (Tim O’Riordan & James Cameron eds., 2002); Precaution, Environmental Science and Preventive Public Policy (Joel Tickner ed., 2002); Protecting Public Health and the Environment: Implementing the Precautionary Principle (Carolyn Raffensperger & Joel Tickner eds., 1999).

  2. Benny Joseph, Environmental Studies 254 (2005).

  3. Final Declaration of the First European “Seas At Risk” Conference, Annex 1, Copenhagen, 1994.

  4. Alan McHughen, Pandora’s Picnic Basket (2000).

  5. See Ling Zhong, Note, Nuclear Energy: China’s Approach toward Addressing Global Warming, 12 Geo. Int’l Envtl. L. Rev. 493 (2000). Of course, it is reasonable to urge that nations should reduce reliance on either coal-fired power plants or nuclear power and move instead toward environmentally preferred alternatives, such as solar power. For general discussion, see Renewable Energy: Power for a Sustainable Future (Godfrey Boyle ed., 1996); Allan Collinson, Renewable Energy (1991); Dan E. Arvizu, Advanced Energy Technology and Climate Change Policy Implications, 2 Fla. Coastal L. J. 435 (2001). But these alternatives pose problems of their own, involving feasibility and expense.

  6. See Testimony of Vice Admiral Charles W. Moore, Deputy Chief of Naval Operations for Readiness and Logistics, before the House Resources Committee, Subcommittee on Fisheries Conservation, Wildlife and Oceans, June 13, 2002.

  7. Paul Rozin & Carol Nemeroff, Sympathetic Magical Thinking: The Contagion and Similarity “Heuristics,” in Heuristics and Biases: The Psychology of Intuitive Judgment (Thomas Gilovich, Dale Griffin, & Daniel Kahneman eds., 2002).

  8. Id.

  9. See Paul Slovic, The Perception of Risk 291 (2000).

  10. See James P. Collman, Naturally Dangerous (2001).

  11. See Daniel B. Botkin, Adjusting Law to Nature’s Discordant Harmonies, 7 Duke Envtl. L. & Pol’y F. 25, 27 (1996).

  12. Id., 33.

  13. See Collman, supra note 10.

  14. Id., 31.

  15. See Richard H. Thaler, The Psychology of Choice and The Assumptions of Economics, in Quasi-rational Economics 137, 143 (1991) (arguing that “losses loom larger than gains”); Daniel Kahneman, Jack L. Knetsch, & Richard H. Thaler, Experimental Tests of the Endowment Effect and the Coase Theorem, 98 J. Pol. Econ. 1325, 1328 (1990); Colin Camerer, Individual Decision Making, in The Handbook of Experimental Economics 587, 665–670 (John H. Kagel & Alvin E. Roth eds. 1995).

  16. See Slovic, supra note 9, at 140–143.

  17. See Amos Tversky & Daniel Kahneman, Judgment under Uncertainty: Heuristics and Biases, in id., 3, 11–14.

  18. Id., 11.

  19. Id.

  20. Id.

  21. Slovic, supra note 9, at 40.

  22. For an especially good discussion of this point, see Daniel Steel, Philosophy and the Precautionary Principle: Science, Evidence, and Environmental Policy (2014).

  23. See Frank H. Knight, Risk, Uncertainty, and Profit 19–20 (1921/1985) (distinguishing measurable uncertainties, or “‘risk’ proper,” from unknowable uncertainties, called uncertainty); Paul Davidson, Is Probability Theory Relevant for Uncertainty? A Post Keynesian Perspective, 5 J. Econ. Persp. 129, 129–131 (1991) (describing the difference between true uncertainty and risk); Cass R. Sunstein, Irreversible and Catastrophic, 91 Cornell L. Rev. 841, 848 (2006) (noting that for risk, “probabilities can be assigned to various outcomes,” while for uncertainty, “no such probabilities can be assigned”). For a technical treatment of the possible rationality of maximin, see generally Kenneth J. Arrow & Leonid Hurwicz, An Optimality Criterion for Decision-Making under Ignorance, in Uncertainty and Expectations in Economics: Essays in Honor of G.L.S. Shackle 1 (C. F. Carter & J. L. Ford eds., 1972). For a nontechnical overview, see Jon Elster, Explaining Technical Change app. 1 at 185–207 (1983).

  24. A distinctive argument, ventured by Nassim Nicholas Taleb et al., refers to a “ruin” problem, involving a low probability of catastrophically high costs. Nassim Nicholas Taleb et al., The Precautionary Principle (with Application to the Genetic Modification of Organisms) 10 (Extreme Risk Initiative—NYU Sch. of Eng’g Working Paper Series, 2014), http://www.fooledbyrandomness.com/pp2.pdf (arguing that by “manipulat[ing] large sets of interdependent factors at the same time,” GMOs have the potential to upset the entire food system).

  25. See Cass R. Sunstein, Worst-Case Scenarios (2009); Cass R. Sunstein, Irreparability as Irreversibility, 2018 Supreme Court Review 93.

  26. See Aaron Wildavsky, But Is It True? 433 (1995).

  14

  Moral Heuristics

  Pioneering the modern literature on heuristics in cognition, Amos Tversky and Daniel Kahneman contended that “people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations.”1 But the relevant literature has only started to investigate the possibility that in the moral and political domain, people also rely on simple rules of thumb that often work well but that sometimes misfire.2 The central point seems obvious. Much of everyday morality consists of simple, highly intuitive rules that generally make sense but that fail in certain cases. It is wrong to lie or steal, but if a lie or a theft would save a human life, lying or stealing is probably obligatory. Not all promises should be kept. It is wrong to try to get out of a longstanding professional commitment at the last minute, but if your child is in the hospital, you may be morally required to do exactly that.

  One of my major goals here is to show that heuristics play a pervasive role in moral, political, and legal judgments and that they sometimes produce significant mistakes. I also attempt to identify a set of heuristics that now influence both law and policy and try to make plausible the claim that some widely held practices and beliefs are a product of those heuristics. Often moral heuristics represent generalizations from a range of problems for which they are indeed well-suited,3 and most of the time, such heuristics work well. The problem comes when the generalizations are wrenched out of context and treated as freestanding or universal principles, applicable to situations in which their justifications no longer operate. Because the generalizations are treated as freestanding or universal, their application seems obvious, and those who reject them appear morally obtuse, possibly even monstrous. I want to urge that the appearance is misleading and even productive of moral mistakes. There is nothing obtuse or monstrous about refusing to apply a generalization in contexts in which its rationale is absent.

  Because Kahneman and Tversky were dealing with facts and elementary logic, they could demonstrate that the heuristics sometimes lead to errors. Unfortunately, that cannot easily be demonstrated here. In the moral and political domains, it is hard to come up with unambiguous cases in which the error is both highly intuitive and, on reflection, uncontroversial—cases in which people can ultimately be embarrassed about their own intuitions. Nonetheless, I hope to show that whatever one’s moral commitments, moral heuristics exist and indeed are omnipresent. We should treat the underlying moral intuitions not as fixed points for analysis, but as unreliable and at least potentially erroneous. In the search for reflective equilibrium, understood as coherence among our judgments at all levels of generality,4 it is important to see that some of our deeply held moral beliefs might be a product of heuristics that sometimes produce mistakes.

  If moral heuristics are in fact pervasive, then people with diverse foundational commitments should be able to agree not that their own preferred theories are wrong, but that they are often applied in a way that reflects the use of heuristics. Utilitarians ought to be able to identify heuristics for the maximization of utility; deontologists should be able to point to heuristics for the proper discharge of moral responsibilities; those uncommitted to any large-scale theory should be able to specify heuristics for their own, more modest normative commitments. And if moral heuristics exist, blunders are highly likely not only in moral thinking but in legal and political practice as well. Conventional legal and political arguments are often a product of heuristics masquerading as universal truths. Hence I will identify a set of political and legal judgments that are best understood as a product of heuristics and that are often taken, wrongly and damagingly, as a guide to political and legal practice even when their rationale does not apply.

  Ordinary Heuristics and an Insistent Homunculus

  Heuristics and Facts

  The classic work on heuristics and biases deals not with moral questions but with issues of fact. In answering hard factual questions, those who lack accurate information use simple rules of thumb. How many words, in four pages of a novel, will have ing as the last three letters? How many words, in the same four pages, will have n as the second-to-last letter? Most people will give a higher number in response to the first question than in response to the second5—even though a moment’s reflection shows that this is a mistake: every word ending in ing also has n as its second-to-last letter, so the second class must be at least as large as the first. People err because they use an identifiable heuristic—the availability heuristic—to answer difficult questions about probability. When people use this heuristic, they answer a question of probability by asking whether examples come readily to mind. Lacking statistical knowledge, people try to think of illustrations. For people without statistical knowledge, it is far from irrational to use the availability heuristic; the problem is that this heuristic can lead to serious errors of fact in the form of excessive fear of small risks and neglect of large ones.
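  Because the first class is contained in the second, a direct count of any passage bears the point out. The following is a minimal sketch in Python, not anything from the original studies; the sample sentence and the word-splitting rule are invented for the illustration.

    import re

    # Any passage of prose would do; this sample sentence is invented
    # purely for the illustration.
    text = ("Walking along the winding lane, she kept wondering whether the "
            "morning rain would return and ruin the evening they were planning.")

    # Lowercase words with punctuation stripped (a simplifying assumption).
    words = re.findall(r"[a-z]+", text.lower())

    # Class 1: words whose last three letters are "ing".
    ing_words = [w for w in words if w.endswith("ing")]

    # Class 2: words whose second-to-last letter is "n". Every word in
    # class 1 also belongs to class 2, so this count can never be smaller.
    n_second_last = [w for w in words if len(w) >= 2 and w[-2] == "n"]

    print(len(ing_words), len(n_second_last))

  Running the sketch on any real passage should give a second number at least as large as the first, even though the ing words are easier to call to mind.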

 
