How Change Happens


by Cass R. Sunstein


  I believe that in cases of this kind, the underlying moral intuitions ordinarily work well, but when they are wrenched out of familiar contexts their reliability, for purposes of moral and legal analysis, is unclear. Consider the following rule: do not kill an innocent person, even if doing so is necessary to save others. (I put to one side the contexts of self-defense and war.) In all likelihood, a society does much better if most people have this intuition, if only because judgments about necessity are likely to be unreliable and self-serving. But in a hypothetical case, in which it really is necessary to kill an innocent person to save twenty others, our intuitions might well turn out to be unclear and contested—and if our intuitions about the hypothetical case turn out to be very firm (do not kill innocent people, ever!), they might not deserve to be so firm simply because they have been wrenched out of the real-world context, which is where they need to be to make sense.

  The use of exotic cases has been defended not on the ground that they are guaranteed to be correct but as a means of eliciting the structure of our moral judgments in a way that enables us to “isolate the reasons and principles” that underlie our responses.69 But if those responses are unreliable, they might not help to specify the structure of moral judgments, except insofar as those judgments are ill-informed and unreflective. For isolating reasons and principles that underlie our responses, exotic cases might be positively harmful.

  In short, I believe that some philosophical analysis, based on exotic moral dilemmas, is inadvertently and even comically replicating the early work of Kahneman and Tversky: uncovering situations in which intuitions, normally quite sensible, turn out to misfire. The irony is that while Kahneman and Tversky meant to devise cases that would demonstrate the misfiring, some philosophers develop exotic cases with the thought that the intuitions are likely reliable and should form the building blocks for sound moral judgments. An understanding of the operation of heuristics offers reason to doubt the reliability of those intuitions, even when they are very firm.

  Now it is possible that the firmness of the underlying intuitions is actually desirable. Social life is almost certainly better, not worse, because of the large number of people who treat heuristics as moral rules and who believe, for example, that innocent people should never be killed. If the heuristic is treated as a universal and freestanding principle, perhaps some mistakes will be made, but only in highly unusual cases, and perhaps people who accept the principle will avoid the temptation to depart from it when the justification for doing so appears sufficient but really is not. In other words, a firm rule might misfire in some cases, but it might be better than a more fine-grained approach, which, in practice, would misfire even more. Those who believe that you should always tell the truth may do better, and be better, all things considered, than those who believe that truth should be told only on the basis of case-specific, all-things-considered judgments in its favor.

  To the extent that moral heuristics operate as rules, they might be defended in the way that all rules are—better than the alternatives even if productive of error in imaginable cases. I have noted that moral heuristics might show a kind of “ecological rationality,” working well in most real-world contexts; recall the possibility that human beings live by simple heuristics that make us good. My suggestion is not that the moral heuristics, in their most rigid forms, are socially worse than the reasonable alternatives. It is hard to resolve that question in the abstract. I am claiming only that such heuristics lead to real errors and significant confusion. A great deal of experimental work remains to be done on this question; existing research has only scratched the surface.

  Within philosophy, there is a large body of literature on the role of intuitions in moral argument, much of it devoted to their role in the search for reflective equilibrium.70 In John Rawls’ influential formulation, people’s judgments about justice should be made via an effort to ensure principled consistency between their beliefs at all levels of generality.71 Rawls emphasizes that during the search for reflective equilibrium, all beliefs are revisable in principle. But as Rawls also emphasizes, some of our beliefs, about particular cases and more generally, seem to us especially fixed, and it will take a great deal to uproot them. It is tempting to use an understanding of moral heuristics as a basis for challenging the search for reflective equilibrium, but I do not believe that anything said here supports that challenge. Recall that in Rawls’ formulation, all of our intuitions are potentially revisable, including those that are quite firm.

  What I am adding here is that if moral heuristics are pervasive, then some of our apparently fixed beliefs might result from them. We should be aware of that fact in attempting to reach reflective equilibrium. Of course some beliefs that are rooted in moral heuristics might turn out, on reflection, to be correct, perhaps for reasons that will not occur to people who use the heuristics mechanically. I am suggesting only that judgments that seem most insistent, or least revisable, may result from overgeneralizing intuitions that work well in many contexts but also misfire in others.

  If this is harder to demonstrate in the domain of morality than in the domain of facts, it is largely because we are able to agree, in the relevant cases, about what constitutes factual error, but are often less able to agree about what constitutes moral error. With respect to the largest disputes about what morality requires, it may be too contentious to argue that one side is operating under a heuristic, whereas another side has it basically right. But I hope that I have said enough to show that in particular cases, sensible rules of thumb lead to demonstrable errors not merely in factual judgments, but in the domains of morality, politics, and law as well.

  Notes

  1. Amos Tversky & Daniel Kahneman, Judgment under Uncertainty: Heuristics and Biases, 185 Science 1124 (1974).

  2. See Jonathan Baron, Nonconsequentialist Decisions, 17 Behav. & Brain Sci. 1 (1994); Jonathan Baron, Judgment Misguided: Intuition and Error in Public Decision Making (1998); David Messick, Equality as a Decision Heuristic, in Psychological Perspectives on Justice (B. Mellers & J. Baron eds. 1993).

  3. See Jonathan Baron, Nonconsequentialist Decisions, 17 Behav. & Brain Sci. 1 (1994).

  4. John Rawls, A Theory of Justice (1971); Norman Daniels, Justice and Justification: Reflective Equilibrium in Theory and Practice (1996).

  5. Daniel Kahneman & Amos Tversky, Choices, Values, and Frames, 39 Am. Psychol. 341 (1984).

  6. See Daniel Kahneman & Shane Frederick, Representativeness Revisited: Attribute Substitution in Intuitive Judgment, in Heuristics and Biases: The Psychology of Intuitive Judgment 49, 62 (Thomas Gilovich, Dale Griffin, & Daniel Kahneman eds. 2002); Barbara Mellers, Ralph Hertwig, & Daniel Kahneman, Do Frequency Representations Eliminate Conjunction Effects?, 12 Psychol. Sci. 469 (2001).

  7. Stephen J. Gould, Bully for Brontosaurus: Reflections in Natural History 469 (1991).

  8. Kahneman & Frederick, supra note 6, at 63.

  9. See D. G. Myers, Intuition: Its Powers and Perils (2002).

  10. Paul Slovic et al., The Affect Heuristic, in Heuristics and Biases: The Psychology of Intuitive Judgment 397 (Thomas Gilovich, Dale Griffin, & Daniel Kahneman eds. 2002).

  11. Kahneman & Frederick, supra note 6.

  12. Joshua D. Greene & Jonathan Haidt, How (and Where) Does Moral Judgment Work?, 6 Trends in Cognitive Sci. 517 (2002); Jonathan Haidt & Matthew Hersh, Sexual Morality: The Cultures and Emotions of Conservatives and Liberals, 31 J. Applied Soc. Psychol. 191 (2001). Compare David A. Pizarro & Paul Bloom, The Intelligence of the Moral Intuitions: Comment on Haidt (2001), 110 Psychol. Rev. 193 (2003).

  13. Jonathan Haidt et al., Moral Dumbfounding: When Intuition Finds No Reason, unpublished manuscript, University of Virginia (2004).

  14. Jeremy Bentham, An Introduction to the Principles of Morals and Legislation 12 (J. H. Burns & H. L. A. Hart eds. 1970).

  15. See John Stuart Mill, Utilitarianism 28–29 (1861/1971); Henry Sidgwick, The Methods of Ethics 199–216 (1874); R. M. Hare, Moral Thinking: Its Levels, Method and Point (1981); J. J. C. Smart, An Outline of a System of Utilitarian Ethics, in Utilitarianism: For and Against (J. J. C. Smart & B. Williams eds. 1973).

  16. See Mill, supra note 15, at 29. In a widely held view, a primary task of ethics is to identify the proper general theory and to use it to correct intuitions in cases in which they go wrong. B. Hooker, Ideal Code, Real World: A Rule-Consequentialist Theory of Morality (2000). Consider here the provocative claim that much of everyday morality, nominally concerned with fairness, should be seen as a set of heuristics for the real issue, which is how to promote utility. See J. Baron, Judgment Misguided: Intuition and Error in Public Decision Making (1998), https://www.sas.upenn.edu/~baron/vbook.htm. To the same general effect, with numerous examples from law, see L. Kaplow & S. Shavell, Fairness versus Welfare (2002).

  17. Amartya Sen, Fertility and Coercion, 63 U. Chi. L. Rev. 1035, 1038 (1996).

  18. Frans de Waal, Good Natured: The Origins of Right and Wrong in Humans and Other Animals (1996); Elliot Sober & David Sloan Wilson, Unto Others: The Evolution and Psychology of Unselfish Behavior (1999); Leonard D. Katz, Evolutionary Origins of Morality: Cross-Disciplinary Perspectives (2000).

  19. Ethics and Evolution, in The MIT Encyclopedia of the Cognitive Sciences (R. A. Wilson & F. C. Keil eds. 2001).

  20. See Cass R. Sunstein, Why Societies Need Dissent (2003).

  21. See Brad Hooker, Ideal Code, Real World: A Rule-Consequentialist Theory of Morality (2000).

  22. Gerd Gigerenzer et al., Simple Heuristics that Make Us Smart (1999).

  23. Daniel Kahneman & Amos Tversky, Choices, Values, and Frames, 39 Am. Psychol. 341 (1984).

  24. Amos Tversky & Daniel Kahneman, Loss Aversion in Riskless Choice: A Reference-Dependent Model, 106 Q. J. Econ. 1039 (1991).

  25. See Daniel Kahneman, Jack L. Knetsch, & Richard H. Thaler, Fairness as a Constraint on Profit-Seeking: Entitlements in the Market, 76 Am. Econ. Rev. 728 (1986).

  26. Craig R. McKenzie, Framing Effects in Inference Tasks—and Why They Are Normatively Defensible, 32 Memory & Cognition 874 (2004).

  27. Note also that loss aversion is quite robust in the real world. Colin Camerer, Prospect Theory in the Wild: Evidence from the Field, in Choices, Values and Frames (Daniel Kahneman & Amos Tversky eds. 2000); Shlomo Benartzi & Richard H. Thaler, Myopic Loss Aversion and the Equity Premium Puzzle, in Choices, Values and Frames (Daniel Kahneman & Amos Tversky eds. 2000). And it has not been shown to be solely or mostly a result of the speaker’s cues. Note also that the nature of the cue, when there is one, depends on the speaker’s appreciation of the existence of framing effects; otherwise, the cue would be ineffective.

  28. See Shane Frederick, Measuring Intergenerational Time Preference: Are Future Lives Valued Less?, 26 J. Risk & Uncertainty 39 (2003).

  29. Richard Revesz, Environmental Regulation, Cost–Benefit Analysis, and the Discounting of Human Lives, 99 Colum. L. Rev. 941 (1999); Edward R. Morrison, Comment, Judicial Review of Discount Rates Used in Regulatory Cost–Benefit Analysis, 65 U. Chi. L. Rev. 1333 (1998).

  30. Maureen L. Cropper et al., Preferences for Life-Saving Programs: How the Public Discounts Time and Age, 8 J. Risk & Uncertainty 243 (1994).

  31. Shane Frederick, Measuring Intergenerational Time Preference: Are Future Lives Valued Less?, 26 J. Risk & Uncertainty 39 (2003).

  32. For a similar result, see Jonathan Baron, Can We Use Human Judgments to Determine the Discount Rate?, 20 Risk Analysis 861 (2000). Here, too, the frame may indicate something about the speaker’s intentions, and subjects may be sensitive to the degree of certainty in the scenario (assuming, for example, that future deaths may not actually occur). I strongly suspect that these explanations are not complete; see Frederick, supra note 28. I mean not to reject them, but only to suggest the susceptibility of intuitions to frames. For skeptical remarks, see Frances Kamm, Moral Intuitions, Cognitive Psychology, and the Harming-versus-Not-Aiding Distinction, 108 Ethics 463 (1998).

  33. See Cass R. Sunstein, Lives, Life-Years, and Willingness to Pay, 104 Colum. L. Rev. 205 (2004).

  34. See W. Kip Viscusi, Corporate Risk Analysis: A Reckless Act?, 52 Stan. L. Rev. 547, 547, 558 (2000).

  35. See id.; Philip E. Tetlock et al., The Psychology of the Unthinkable: Taboo Trade-Offs, Forbidden Base Rates, and Heretical Counterfactuals, 78 J. Personality & Soc. Psychol. 853 (2000).

  36. See Viscusi, supra note 34.

  37. Philip E. Tetlock et al., The Psychology of the Unthinkable: Taboo Trade-Offs, Forbidden Base Rates, and Heretical Counterfactuals, 78 J. Personality & Soc. Psychol. 853–870 (2000).

  38. I am grateful to Jonathan Haidt for this suggestion.

  39. Frank Ackerman & Lisa Heinzerling, Priceless: On Knowing the Price of Everything and the Value of Nothing (2004).

  40. Cass R. Sunstein, Risk and Reason: Safety, Law, and the Environment (2002).

  41. Michael Sandel, It’s Immoral to Buy the Right to Pollute, New York Times (December 15, 1997); see also Steven Kelman, What Price Incentives? Economists and the Environment (1981).

  42. See Jonathan J. Koehler & Andrew D. Gershoff, Betrayal Aversion: When Agents of Protection Become Agents of Harm, 90 Org. Behav. & Hum. Decision Processes 244 (2003).

  43. See id.

  44. Id. at 244.

  45. Id.

  46. Ilana Ritov & Jonathan Baron, Reluctance to Vaccinate: Omission Bias and Ambiguity, 3 J. Behav. Decision Making 263 (1990).

  47. John Darley et al., Incapacitation and Just Deserts as Motives for Punishment, 24 L. & Hum. Behav. 659 (2000); Kevin M. Carlsmith et al., Why Do We Punish? Deterrence and Just Deserts as Motives for Punishment, 83 J. Personality & Soc. Psychol. 284 (2002).

  48. See Daniel Kahneman, David Schkade, & Cass R. Sunstein, Shared Outrage and Erratic Awards: The Psychology of Punitive Damages, 16 J. Risk & Uncertainty 49 (1998); Cass R. Sunstein et al., Punitive Damages: How Juries Decide (2002).

  49. See Kahneman & Frederick, supra note 6, at 63.

  50. Ilana Ritov & Jonathan Baron, Reluctance to Vaccinate: Omission Bias and Ambiguity, 3 J. Behav. Decision Making 263 (1990).

  51. See Jonathan Baron, Morality and Rational Choice 108, 123 (1993).

  52. Id.

  53. Jonathan Baron & Ilana Ritov, Intuitions about Penalties and Compensation in the Context of Tort Law, 7 J. Risk & Uncertainty 17 (1993).

  54. A. Mitchell Polinsky & Steven Shavell, Punitive Damages: An Economic Analysis, 111 Harv. L. Rev. 869 (1998).

  55. Cass R. Sunstein, David Schkade, & Daniel Kahneman, Do People Want Optimal Deterrence?, 29 J. Legal Stud. 237, 248–249 (2000).

  56. L. Kass, The Wisdom of Repugnance, in The Ethics of Human Cloning 17–19 (L. Kass & J. Q. Wilson eds. 1998).

  57. Id.

  58. P. Rozin, Technological Stigma: Some Perspectives from the Study of Contagion, in Risk, Media, and Stigma: Understanding Public Challenges to Modern Science and Technology 31, 38 (J. Flynn, P. Slovic, & H. Kunreuther eds. 2001).

  59. Id.

  60. Id.

  61. A. McHughen, Pandora’s Picnic Basket (2000).

  62. E. Schlosser, Fast Food Nation: The Dark Side of the All-American Meal (2002).

  63. J. Haidt & M. Hersh, Sexual Morality: The Cultures and Emotions of Conservatives and Liberals, 31 J. Applied Soc. Psychol. 191–221 (2001).

  64. Haidt et al., supra note 13.

  65. See Washington v. Glucksberg, 521 U.S. 702, 724–725 (1997).

  66. J. Baron & I. Ritov, Intuitions about Penalties and Compensation in the Context of Tort Law, 7 J. Risk & Uncertainty 17–33 (1993).

  67. B. Williams, A Critique of Utilitarianism, in Utilitarianism: For and Against (J. J. C. Smart & B. Williams eds. 1973).

  68. J. J. Thomson, The Trolley Problem, in Rights, Restitution and Risk: Essays in Moral Theory 31 (J. J. Thomson & W. Parent eds. 1986).

  69. F. Kamm, Morality, Mortality, Vol. 1: Death and Whom to Save from It 8 (1993); see generally R. Sorensen, Thought Experiments (1992).

  70. B. Hooker, Ideal Code, Real World: A Rule-Consequentialist Theory of Morality (2000); J. Raz, The Relevance of Coherence, in Ethics in the Public Domain 277–326 (J. Raz ed. 1994).

  71. J. Rawls, A Theory of Justice (1971).

  15

  Rights

  In political, moral, and legal theory, many of the largest debates pit consequentialists against deontologists. Recall that consequentialists believe that the rightness of actions turns on their consequences, which are to be measured, aggregated, and ranked. (Utilitarianism is a species of consequentialism.) By contrast, deontologists believe that some actions are wrong even if they have good consequences. Many deontologists think that it is wrong to torture people or to kill them even if the consequences of doing so would be good. Many deontologists also think that you should not throw someone in the way of a speeding train even if that action would save lives on balance; that you should not torture someone even if doing so would produce information that would save lives; that slavery is a moral wrong regardless of the outcome of any utilitarian calculus; that the protection of free speech does not depend on any such calculus; that the strongest arguments for and against capital punishment turn on what is right, independent of the consequences of capital punishment.

 
