How Change Happens

by Cass R Sunstein


  Welfare and Experience

  For consumer goods, the central question (putting externalities to one side) is which choice will improve the welfare of choosers. In cases subject to preference reversals, the problem is that in separate evaluation, some characteristic of an option is difficult or impossible to evaluate—which means that it will not receive the attention that it may deserve. A characteristic that is important to welfare or actual experience might be ignored. In joint evaluation, the problem is that the characteristic that is evaluable may receive undue attention. A characteristic that is unimportant to welfare or to actual experience might be given great weight.

  Sellers can manipulate choosers in either separate evaluation or joint evaluation, and the design of the manipulation should now be clear. In separate evaluation, the challenge is to show choosers a characteristic that they can evaluate, if it is good (intact cover), and to show them a characteristic that they cannot evaluate, if it is not so good (0.01 total harmonic distortion). In joint evaluation, the challenge is to allow an easy comparison along a dimension that is self-evidently important, even if the difference along that dimension matters little or not at all to experience or to what matters.
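The evaluability mechanism described above can be rendered as a toy sketch. All numbers and attribute names below are hypothetical, chosen only to illustrate how a characteristic that is meaningless in isolation (a total harmonic distortion figure) but decisive in a side-by-side comparison can reverse a ranking:

```python
# Toy model of joint vs. separate evaluation (hypothetical utilities).
# One speaker has better sound (low distortion) but a scratched cover;
# the other has worse sound but an intact cover.

def perceived_value(option, evaluable_attrs):
    """Sum attribute utilities, counting only attributes the chooser can evaluate."""
    return sum(u for attr, u in option.items() if attr in evaluable_attrs)

speaker_a = {"sound_quality": 9, "cover_intact": 2}   # low distortion, scratched cover
speaker_b = {"sound_quality": 5, "cover_intact": 5}   # higher distortion, intact cover

# Separate evaluation: a raw distortion number is hard to assess on its own,
# so only the cover registers -- and the worse speaker looks better.
separate = {"cover_intact"}
assert perceived_value(speaker_b, separate) > perceived_value(speaker_a, separate)

# Joint evaluation: the side-by-side comparison makes sound quality evaluable,
# and the ranking reverses in favor of the better speaker.
joint = {"sound_quality", "cover_intact"}
assert perceived_value(speaker_a, joint) > perceived_value(speaker_b, joint)
```

The sketch also shows the manipulation risk noted above: whoever controls which attributes are evaluable controls which option wins.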

  If external observers had perfect information, they could of course decide what the chooser should do. The problem is that external observers usually have imperfect information. Among other things, they tend to lack a full sense of the chooser’s preferences and values. Nonetheless, we can find easy cases in which separate evaluation is best and easy cases in which joint evaluation is best, and we are now in a position to understand why some cases are hard. The larger lesson is that separate evaluation and joint evaluation have serious and characteristic defects.

  The problem of discrimination can be analyzed in broadly similar terms. The general idea is that a bias may have more weight in separate evaluation than in joint evaluation if (1) people are made explicitly aware, in joint evaluation, that the only way to make a certain decision is to show bias and (2) they are ashamed or otherwise troubled about doing that. On different assumptions, joint evaluation could increase discrimination. Everything depends on people’s reflective judgments about their own propensity to discriminate.

  In the context of punitive damages, the problem is that people generate their own frame of reference, typically limited to the category that the case spontaneously brings to mind. Joint evaluation solves that problem. It does not follow, however, that joint evaluation produces sensible awards. From the standpoint of optimal deterrence, it will not do that. From the retributive point of view, joint evaluation does seem better because it better reflects people’s moral judgments. The problem is that joint evaluation is not global evaluation. If a single case from a single category is added to another, there is a risk of manipulation.

  Contingent valuation studies and evaluations of nudges can be analyzed similarly. Separate evaluation creates serious risks because people’s judgments might be category-bound or because they might neglect important characteristics of options. Joint evaluation also creates serious risks because it is inevitably selective or because it accentuates a characteristic of options that does not deserve much weight. If it is feasible, something in the direction of global evaluation is best. If it is not feasible, a choice between separate and joint evaluation must depend on an independent judgment about whether a characteristic, ignored in the former but potentially decisive in the latter, really deserves weight.

  We can find cases in which joint evaluation solves some kind of problem; discrimination, under the stated assumptions, is an example. But the largest points lie elsewhere. Separate evaluation can create serious trouble because of the challenge of evaluability or its cousin, category-bound thinking. Joint evaluation can create serious trouble because it focuses people on a characteristic of a product, a person, or a context that does not deserve the attention that they give it. Sellers, doctors, lawyers, and politicians can easily enlist these points to achieve their goals. But both forms of trouble should be avoided. For purposes of producing good decisions and good outcomes, we need to look beyond both separate and joint evaluation and instead design structures that avoid the problems and pathologies associated with each.

  Notes

  1. The initial discovery was in 1992. See Max Bazerman et al., Reversals of Preference in Allocation Decisions: Judging an Alternative versus Choosing among Alternatives, 37 Admin. Sci. Q. 220 (1992). For a valuable overview, see Christopher Hsee et al., Preference Reversals between Joint and Separate Evaluations of Options, 125 Psychol. Bull. 576 (1999). For a recent treatment with helpful additional complexity, see Yin-Hui Cheng et al., Preference Reversals between Joint and Separate Evaluations with Multiple Alternatives and Context Effects, 120 Psychol. Rep. 1117 (2017). There is an extensive literature on other kinds of preference reversals not discussed here. See, for example, Amos Tversky & Richard Thaler, Preference Reversals, 4 J. Econ. Persp. 201 (1990); Amos Tversky et al., The Causes of Preference Reversal, 80 Am. Econ. Rev. 204 (1990). The best explanation of some such reversals—“scale compatibility”—belongs, I think, in the same general family as those explored here, though I cannot establish that proposition in this space.

  2. Christopher Hsee, The Evaluability Hypothesis: An Explanation for Preference Reversals between Joint and Separate Evaluations of Alternatives, 67 Organizational Behav. & Hum. Decision Processes 247, 248 (1996).

  3. See id.; Christopher K. Hsee, Attribute Evaluability: Its Implications for Joint-Separate Evaluation Reversals and Beyond, in Choices, Values, and Frames 543–565 (Daniel Kahneman & Amos Tversky eds. 2000). Other explanations are explored in Max H. Bazerman et al., Explaining How Preferences Change across Joint versus Separate Evaluation, 39 J. Econ. Behav. & Org. 41 (1999).

  4. Compare the important finding of “comparison friction” in Jeffrey R. Kling et al., Comparison Friction: Experimental Evidence from Medicare Drug Plans, 127 Q. J. Econ. 199 (2012). In brief, Kling and his coauthors describe comparison friction as “the wedge between the availability of comparative information and consumers’ use of it.” Id. at 200. They find that the wedge is significantly larger than people think, in the sense that even when information is readily available, people do not use it. There is a clear relationship between comparison friction and preference reversals of the kind on which I focus here; in real markets, and in politics, it is because of comparison friction that in separate evaluation, people do not obtain the information that they could readily obtain.

  5. See Shane Frederick et al., Opportunity Cost Neglect, 36 J. Consumer Res. 553 (2009).

  6. See Ted O’Donoghue & Matthew Rabin, Doing It Now or Later, 89 Am. Econ. Rev. 103 (1999).

  7. Hsee, supra note 2, at 253.

  8. Id.

  9. John A. List, Preference Reversals of a Different Kind: The “More Is Less” Phenomenon, 92 Am. Econ. Rev. 1636 (2002).

  10. Id. at 1641.

  11. George Loewenstein, Exotic Preferences: Behavioral Economics and Human Motivation 261 (2007).

  12. A methodological note: At several points, I will offer some speculations about what imaginable groups would do, without collecting data. My hope is that the speculations will be sufficiently plausible, a logical necessity (given the assumptions), or even self-evident, so that the absence of data is not a problem. But I emphasize that some of the speculations are only that, and that data would be much better.

  13. For evidence in this vein, see Cheng et al., supra note 1.

  14. See Christopher Hsee et al., Magnitude, Time, and Risk Differ Similarly between Joint and Single Evaluations, 40 J. Consumer Res. 172 (2013).

  15. See Max H. Bazerman et al., Explaining How Preferences Change across Joint versus Separate Evaluation, 39 J. Econ. Behav. & Org. 41 (1999).

  16. See Mark Kelman, Yuval Rottenstreich, & Amos Tversky, Context-Dependence in Legal Decision-Making, 25 J. Legal Stud. 287 (1996).

  17. See Christopher Hsee & Jiao Zhang, Distinction Bias: Misprediction and Mischoice Due to Joint Evaluation, 86 J. Personality and Social Psych. 680 (2004).

  18. See Daniel Kahneman & Richard Thaler, Utility Maximization and Experienced Utility, 20 J. Econ. Persp. 221 (2006); Daniel Kahneman et al., Back to Bentham? Explorations of Experienced Utility, 112 Q. J. Econ. 376 (1997).

  19. Timothy Wilson & Daniel Gilbert, Affective Forecasting, 35 Advances Experimental Soc. Psychol. 345 (2003), http://homepages.abdn.ac.uk/c.n.macrae/pages/dept/HomePage/Level_3_Social_Psych_files/Wilson%26Gilbert(2003).pdf.

  20. See Hsee & Zhang, supra note 17.

  21. This suggestion can be found in Max Bazerman et al., Negotiating With Yourself and Losing: Making Decisions With Competing Internal Preferences, 23 Acad. Mgmt. Rev. 225, 231 (1998).

  22. See Max Bazerman et al., Joint Evaluation as a Real-World Tool for Managing Emotional Assessments of Morality, 3 Emotion Rev. 290 (2011).

  23. See Max Bazerman et al., Explaining How Preferences Change across Joint versus Separate Evaluation, 39 J. Econ. Behav. & Org. 41 (1999).

  24. I borrow here from id. at 46, which uses a VCR instead of a cell phone.

  25. Xavier Gabaix & David Laibson, Shrouded Attributes, Consumer Myopia, and Information Suppression in Competitive Markets, 121 Q. J. Econ. 505 (2006).

  26. True, this is based on a personal mistake. I love the old 11-inch MacBook Air, but I bought the new MacBook, with its terrific screen and its awful keyboard. (I am writing this on the former. The latter is in some drawer somewhere.)

  27. Max Bazerman et al., Explaining How Preferences Change across Joint versus Separate Evaluation, 39 J. Econ. Behav. & Org. 41 (1999).

  28. See Hsee, supra note 2.

  29. Iris Bohnet et al., When Performance Trumps Gender Bias: Joint vs. Separate Evaluation, 62 Mgmt. Sci. 1225 (2015).

  30. Id.

  31. State Farm Mutual Automobile Insurance Co. v. Campbell, 538 US 408 (2003); TXO Production Corp. v. Alliance Resources, 509 US 443 (1993).

  32. Cass R. Sunstein, Daniel Kahneman, Ilana Ritov, & David Schkade, Predictably Incoherent Judgments, 54 Stan. L. Rev. 1153 (2002).

  33. Id.

  34. Id.

  35. See Daniel Kahneman et al., Shared Outrage and Erratic Awards: The Psychology of Punitive Damages, 16 J. Risk & Uncertainty 49 (1998).

  36. See Daniel Kahneman & Cass R. Sunstein, Cognitive Psychology of Moral Intuitions, in Neurobiology of Human Values: Research and Perspectives in Neurosciences 91 (Jean-Pierre Changeux et al. eds. 2005).

  37. See Kahneman et al., supra note 35.

  38. See David Schkade et al., Do People Want Optimal Deterrence?, 29 J. Legal Stud. 237 (2000).

  39. See Netta Barak-Corren et al., If You’re Going to Do Wrong, At Least Do It Right: Considering Two Moral Dilemmas At the Same Time Promotes Moral Consistency, 64 Mgmt. Sci. 1528 (2017).

  40. See Cass R. Sunstein, How Do We Know What’s Moral?, N.Y. Rev. Books (Apr. 24, 2014).

  41. See J. A. Hausman, Contingent Valuation: A Critical Assessment (2012); Handbook on Contingent Valuation (Anna Albertini & James Kahn eds. 2009).

  42. See, e.g., Peter A. Diamond, Contingent Valuation: Is Some Number Better Than No Number?, 8 J. Econ. Persp. 45 (1994).

  43. Daniel Kahneman, Ilana Ritov, & David Schkade, Economic Preferences or Attitude Expressions? An Analysis of Dollar Responses to Public Issues, 19 J. Risk & Uncertainty 220 (1999).

  44. Id.

  45. Id.

  46. Janice Y. Jung & Barbara A. Mellers, American Attitudes toward Nudges, 11 Judgment & Decision Making 62 (2016).

  47. See id.; Cass R. Sunstein, The Ethics of Influence (2016).

  48. See Jung & Mellers, supra note 46.

  49. See Shai Davidai & Eldar Shafir, Are Nudges Getting A Fair Shot? Joint Versus Separate Evaluations, 3 Behav. Pub. Pol’y (forthcoming 2018).

  50. Id.

  III

  Excursions

  12

  Transparency

  There is a distinction between two kinds of transparency: output transparency and input transparency. Suppose that the Department of Transportation has completed a detailed study of what kinds of policies help to reduce deaths on the highways or that the Department of Labor has produced an analysis of the health risks associated with exposure to silica in the workplace. Or suppose that the Environmental Protection Agency produces a regulation to curtail greenhouse gas emissions from motor vehicles or adopts a policy about when it will bring enforcement actions against those who violate its water-quality regulations. All these are outputs.

  The government might also become aware of certain facts—for example, the level of inflation in European nations, the number of people who have died in federal prisons, the apparent plans of terrorist organizations, or levels of crime and air pollution in Los Angeles and Chicago. For the most part, facts also should be seen as outputs, at least if they are a product of some kind of process of information acquisition.

  In all of these cases, transparency about outputs can be a nudge. It might be designed to influence the private sector—for example, by prompting companies to do better on safety or by helping consumers and workers to avoid risks. Transparency can also nudge government, by showing officials, and those whom they serve, that they are not doing as well as they might.

  Now suppose that officials on the staffs of the Department of Energy and the Environmental Protection Agency have exchanged views about what form a greenhouse gas regulation should take or that political appointees within the Department of Labor have had heated debates about the risks associated with silica in the workplace and about how those risks are best handled. The various views are inputs.

  To be sure, there are intermediate cases. The EPA might conclude that a substance is carcinogenic, and in a sense that conclusion is an output—but it might also be an input into a subsequent regulatory judgment. The Department of Transportation might reach certain conclusions about the environmental effects of allowing a highway to be built, which seem to be an output, but those conclusions might be an input into the decision whether to allow the highway to be built. The National Environmental Policy Act can be seen as a requirement that agencies disclose outputs in the form of judgments about environmental effects—but those outputs are, by law, mere inputs into ultimate decisions about what to do. Some outputs are inputs, and in the abstract it would be possible to characterize them as one or the other or as both. As we shall see, the appropriate characterization depends in part on whether and how the public would benefit from disclosure.

  Acknowledging the existence of hard intermediate cases, I offer two claims here. The first is that for outputs, the argument on behalf of transparency is often exceptionally strong. If the government has information about levels of crime in Boise; about water quality in Flint, Michigan; about security lines at LaGuardia Airport; about the hazards associated with certain toys; or about the effects of driverless cars, it should usually disclose that information—certainly on request and, if people stand to gain from it, even without request. (The latter point is especially important.) In all of these cases, the benefits of transparency are significant. Sometimes members of the public can use the information in their daily lives, and output transparency can promote accountability. Most of the time, the costs of output transparency are trivial. All over the world, governments should offer much more in the way of output transparency. In particular, they should make outputs freely available to the public as a matter of course—at least if the public could or would benefit from them and unless there is a particular reason why they need to remain confidential.

  But input transparency is a much more complicated matter. The costs of disclosure are often high and the benefits may be low, and in any case they are qualitatively different from those that justify output transparency. There are strong reasons to protect processes of internal deliberation, above all to ensure openness, candor, and trust. In addition, it is often unclear that the public would gain much from seeing inputs, not least because of their massive volume (and usual irrelevance to anything that matters). Outside of unusual circumstances, the public would usually gain little or nothing (except perhaps something like gossip). Another way to put the point is that while those who seek to attract eyeballs or to embarrass their political opponents often like input transparency, the public usually does not much benefit from it.

  To be sure, transparency about inputs can be informative, and inputs may have keen historical interest. If the public learns that the deputy secretary of transportation had a different view from that of the secretary on the content of a fuel-economy regulation, it knows something; internal disagreement paints a different picture from internal unanimity. But how much, exactly, does the public learn, and why is it important for the public to learn it? It should be acknowledged that in some cases input transparency is a good idea, especially under circumstances of corruption (or anything close to it) and when relevant inputs have genuine historic importance (and when their disclosure can reduce mistakes). Nations need catalogs. But the argument for input transparency is much different from the argument for output transparency, and it often stands on weaker ground.

  It should be clear from these remarks that my approach to this topic is insistently and unabashedly welfarist: What are the benefits of transparency and what are the costs? It is true that the benefits and the costs may not be easy to quantify, but some kind of assessment of both is, I suggest, indispensable to an evaluation of when transparency is most and least necessary. For those who are not comfortable with talk of costs and benefits in this context, it might be useful to understand those terms as an effort not to create some kind of arithmetic straitjacket, but to signal the importance of asking concrete questions about the human consequences of competing approaches. At least for difficult problems, those questions are (I suggest) far more productive than abstractions about “legitimacy” and “the right to know.”

 
