
How Change Happens


by Cass R. Sunstein


  Closely related experiments support that expectation.46 In deciding whether to vaccinate their children against risks of serious diseases, people show a form of “omission bias.” Many people are more sensitive to the risk of the vaccination than to the risk from diseases—so much so that they will expose their children to a greater risk from “nature” than from the vaccine. (There is a clear connection between omission bias and trust in nature and antipathy to “playing God,” as discussed below.) The omission bias, I suggest, is closely related to people’s special antipathy to betrayals. It leads to moral errors in the form of vaccination judgments, and undoubtedly others, by which some parents increase the fatality risks faced by their own children.

  Morality and Punishment

  Pointless Punishment

  In the context of punishment, moral intuitions are sometimes disconnected from the consequences of punishment, suggesting that a moral heuristic may well be at work.47 Suppose, for example, that a corporation has engaged in serious wrongdoing. People are likely to want to punish the corporation as if it were a person.48 They are unlikely to inquire into the possibility that the consequences of serious punishment (say, a stiff fine) will not be to “hurt” corporate wrongdoers but instead to decrease wages, increase prices, or produce lost jobs. Punishment judgments are rooted in a simple heuristic, to the effect that penalties should be a proportional response to the outrageousness of the act. We have seen that in thinking about punishment, people use an outrage heuristic.49 According to this heuristic, people’s punishment judgments are a product of their outrage. This heuristic may produce reasonable results much of the time, but in some cases it seems to lead to systematic errors—at least if we are willing to embrace weak consequentialism.

  Consider, for example, an intriguing study of people’s judgments about penalties in cases involving harms from vaccines and birth control pills.50 In one case, subjects were told that the result of a higher penalty would be to make companies try harder to make safer products. In an adjacent case, subjects were told that the consequence of a higher penalty would be to make the company more likely to stop making the product, with the result that less-safe products would be on the market. Most subjects, including a group of judges, gave the same penalties in both cases. “Most of the respondents did not seem to notice the incentive issue.”51 In another study, people said that they would give the same punishment to a company that would respond with safer products and one that would be unaffected because the penalty would be secret and covered by insurance (the price of which would not increase).52 Here too the effects of the punishment did not affect judgments by a majority of respondents.

  A similar result emerged from a test of punishment judgments that asked subjects, including judges and legislators, to choose penalties for dumping hazardous waste.53 In one case, the penalty would make companies try harder to avoid waste. In another, the penalty would lead companies to cease making a beneficial product. Most people did not penalize companies differently in the two cases. Most strikingly, people preferred to require companies to clean up their own waste, even if the waste did not threaten anyone, instead of spending the same amount to clean up far more dangerous waste produced by another, now-defunct company.

  How could this preference make sense? Why should a company be asked to engage in a course of action that costs the same but that does much less good? In these cases, it is most sensible to think that people are operating under a heuristic, mandating punishment that is proportional to outrageousness and requiring that punishment be based not at all on consequential considerations. As a general rule, of course, it is plausible to think that penalties should be proportional to the outrageousness of the act; utilitarians will accept the point as a first approximation, and retributivists will insist on it. But it seems excessively rigid to adopt this principle whether or not the consequence would be to make human beings safer and healthier. Weak consequentialists, while refusing to reject retributivism, will condemn this excessive rigidity. Those who seek proportional punishments might well disagree in principle. But it would be worthwhile for them to consider the possibility that they have been tricked by a heuristic—and that their reluctance to acknowledge the point is a product of the insistent voice of their own homunculus.

  Probability of Detection

  Now turn to closely related examples from the domain of punishment. On the economic account, the state’s goal, when imposing penalties for misconduct, is to ensure optimal deterrence.54 To increase deterrence, the law might increase the severity of punishment or instead increase the likelihood of punishment. A government that lacks substantial enforcement resources might impose high penalties, thinking that it will produce the right deterrent “signal” in light of the fact that many people will escape punishment altogether. A government that has sufficient resources might impose a lower penalty but enforce the law against all or almost all violators. These ideas lead to a simple theory in the context of punitive damages for wrongdoing: the purpose of such damages is to make up for the shortfall in enforcement. If injured people are 100 percent likely to receive compensation, there is no need for punitive damages. If injured people are 50 percent likely to receive compensation, those who bring suit should receive a punitive award that is twice the amount of the compensatory award. The simple exercise in multiplication will ensure optimal deterrence.
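The multiplication exercise described above can be made concrete. The sketch below is my illustration, not part of Sunstein’s text: under the optimal-deterrence account, if wrongdoing is detected with probability p, total damages are scaled to the compensatory amount divided by p, so that the wrongdoer’s expected payout always equals the harm caused. The punitive component is simply the top-up over compensation.

```python
# Sketch of the optimal-deterrence arithmetic (an illustration, not
# Sunstein's own formula): scaling total damages to compensatory / p
# keeps the expected payout equal to the harm, whatever the detection
# probability p.

def total_damages(compensatory: float, p_detection: float) -> float:
    """Total award (compensatory plus punitive) under optimal deterrence."""
    if not 0.0 < p_detection <= 1.0:
        raise ValueError("detection probability must be in (0, 1]")
    return compensatory / p_detection

harm = 100_000.0
for p in (1.0, 0.5, 0.25):
    total = total_damages(harm, p)
    punitive = total - harm  # the punitive "top-up" over compensation
    # Expected payout p * total always equals the harm:
    assert abs(p * total - harm) < 1e-9
    print(f"p={p:.2f}: total={total:,.0f}, punitive component={punitive:,.0f}")
```

At full detection (p = 1.0) the punitive component is zero; at p = 0.5 the total award doubles, exactly the case the text describes.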

  But there is a large question whether people accept this account and, if not, why not. (For the moment, let us put to one side the question whether they should accept it in principle.) Experiments suggest that people reject optimal deterrence and that they do not believe that the probability of detection is relevant to punishment. The reason is that they use the outrage heuristic. I participated in two experiments designed to cast light on this question.55 In the first, subjects were given cases of wrongdoing, arguably calling for punitive damages, and also were provided with explicit information about the probability of detection. Different subjects saw the same case, with only one difference: the probability of detection was substantially varied. Subjects were asked about the amount of punitive damages that they would choose to award. The goal was to see if subjects would impose higher punishments when the probability of detection was low. In the second experiment, subjects were asked to evaluate judicial and executive decisions to reduce penalties when the probability of detection was high and to increase penalties when the probability of detection was low. Subjects were asked whether they approved or disapproved of varying the penalty with the probability of detection.

  The findings were simple and straightforward. The first experiment found that varying the probability of detection had no effect on punitive awards. Even when people’s attention was explicitly directed to the probability of detection, they were indifferent to it. The second experiment found that strong majorities of respondents rejected judicial decisions to reduce penalties because of a high probability of detection—and rejected executive decisions to increase penalties because of a low probability of detection. In other words, people did not approve of an approach to punishment that would make the level of punishment vary with the probability of detection. What apparently concerned them was the extent of the wrongdoing and the right degree of moral outrage—not optimal deterrence.

  To be sure, many people have principled reasons for embracing retributivism and for rejecting utilitarian accounts of punishment. And some such people are likely to believe, on reflection, that the moral intuitions just described are correct—that what matters is what the defendant did, not whether his action was likely to be detected. But if we embrace weak consequentialism, we will find it implausible to suggest that the aggregate level of misconduct is entirely irrelevant to punishment. We will be unwilling to ignore the fact that if a legal system refuses to impose enhanced punishment on hard-to-detect wrongdoing, then it will end up with a great deal of wrongdoing. People’s unwillingness to take any account of the probability of detection suggests the possibility that a moral heuristic is at work, one that leads to real errors. Because of the contested nature of the ethical issues involved, I cannot demonstrate this point. But those who refuse to consider the probability of detection might consider the possibility that System 1 has gotten the better of them.

  Playing God: Reproduction, Nature, and Sex

  Issues of reproduction and sexuality are prime candidates for the operation of moral heuristics. Consider human cloning, which most Americans reject and believe should be banned. Notwithstanding this consensus, the ethical and legal issues here are not so simple. To make progress, it is necessary to distinguish between reproductive and nonreproductive cloning; the first is designed to produce children, whereas the second is designed to produce cells for therapeutic use. Are the ethical issues different in the two cases? In any case, it is important to identify the grounds for moral concern. Do we fear that cloned children would be means to their parents’ ends and, if so, why? Do we fear that they would suffer particular psychological harm? Do we fear that they would suffer from especially severe physical problems?

  In a highly influential discussion of new reproductive technologies—above all, cloning—ethicist Leon Kass points to the “wisdom in repugnance.”56 He writes:

  People are repelled by many aspects of human cloning. They recoil from the prospect of mass production of human beings, with large clones of look-alikes, compromised in their individuality, the idea of father-son or mother-daughter twins; the bizarre prospects of a woman giving birth to and rearing a genetic copy of herself, her spouse or even her deceased father or mother; the grotesqueness of conceiving a child as an exact replacement for another who has died; the utilitarian creation of embryonic genetic duplicates of oneself, to be frozen away or created when necessary, in case of need for homologous tissues or organs for transplantation; the narcissism of those who would clone themselves and the arrogance of others who think they know who deserves to be cloned or which genotype any child-to-be should be thrilled to receive; the Frankensteinian hubris to create human life and increasingly to control its destiny; man playing God. … We are repelled by the prospect of cloning human beings not because of the strangeness or novelty of the undertaking, but because we intuit and feel, immediately and without argument, the violation of things that we rightfully hold dear. … Shallow are the souls that have forgotten how to shudder.57

  Kass is correct to suggest that revulsion toward human cloning might be grounded in legitimate concerns, and I mean to be agnostic here on whether human cloning is ethically defensible. But I want to suggest that moral heuristics, and System 1, are responsible for what Kass seeks to celebrate as “we intuit and feel, immediately and without argument.” Kass’s catalog of alleged errors seems to me an extraordinary exercise in the use of such heuristics. Availability operates in this context, not to drive judgments about probability but to call up instances of morally dubious behavior (e.g., “mass production of human beings, with large clones of look-alikes, compromised in their individuality”). The representativeness heuristic plays a similar role (e.g., “the Frankensteinian hubris to create human life and increasingly to control its destiny”). But I believe that Kass gets closest to the cognitive process here with three words: “man playing God.”

  We might well think that “do not play God” is the general heuristic here, with different societies specifying what falls in that category and with significant changes over time. As we saw in chapter 13, a closely related heuristic plays a large role in judgments of fact and morality: do not tamper with nature. This heuristic affects many moral judgments, though individuals and societies often become accustomed to various kinds of tampering (consider in vitro fertilization). An antitampering heuristic helps explain many risk-related judgments. For example, “human intervention seems to be an amplifier in judgments on food riskiness and contamination,” even though “more lives are lost to natural than to man-made disasters in the world.”58 Studies show that people overestimate the carcinogenic risk from pesticides and underestimate the risks of natural carcinogens.59 People also believe that nature implies safety, so much so that they will prefer natural water to processed water even if the two are chemically identical.60

  The moral injunction against tampering with nature plays a large role in public objections to genetic engineering of food, and hence legal regulation of such engineering is sometimes driven by that heuristic rather than by a deliberative, System 2 encounter with the substantive issues. For genetic engineering, the antitampering heuristic drives judgments even when the evidence of risk is slim.61 In fact, companies go to great lengths to get a “natural” stamp on their products,62 even though the actual difference between what counts as a natural additive and an artificial additive bears little or no relation to harms to consumers. So too in the domains of reproduction and sexuality, in which a pervasive objection is that certain practices are “unnatural.” (With respect to sex, it is especially difficult to say what is “natural,” and whether what is natural is good.) And for cloning, there appears to be a particular heuristic at work: do not tamper with natural processes for human reproduction. It is not clear that this heuristic works well—but it is clear that it can misfire.

  Issues at the intersection of morality and sex provide an obvious place for the use of moral heuristics. Such heuristics are peculiarly likely to be at work in any area in which people are likely to think, “That’s disgusting!” Any examples here will be contentious, but return to the incest taboo. We can easily imagine incestuous relationships—say, between first cousins or second cousins—that ought not give rise to social opprobrium but that might nonetheless run afoul of social norms or even the law.63 The incest taboo is best defended by reference to coercion, psychological harm, and risks to children who might result from incestuous relationships. But in many imaginable cases, these concrete harms are not involved.

  It is plausible to say that the best way to defend against these harms is by a flat prohibition on incest, one that has the disadvantage of excessive generality but the advantage of easy application. Such a flat prohibition might have evolutionary origins; it might also have strong rule-utilitarian justifications. We would not like to have family members asking whether incest would be a good idea in individual cases, even if our underlying concern is limited to coercion and psychological harm. So defended, however, the taboo stands unmasked as a moral heuristic. Recall the phenomenon of moral dumbfounding—moral judgments that people “feel” but are unable to justify.64 In the domain of sex and reproduction, many taboos can be analyzed in similar terms.

  Acts and Omissions

  To say the least, there has been much discussion of whether and why the distinction between acts and omissions might matter for morality, law, and policy. In one case, for example, a patient might ask a doctor not to provide life-sustaining equipment, thus ensuring the patient’s death. In another case, a patient might ask a doctor to inject a substance that will immediately end the patient’s life. Many people seem to have a strong moral intuition that a decision not to provide life-sustaining equipment, and even the withdrawal of such equipment, is acceptable and legitimate—but that the injection is morally abhorrent. And indeed, American constitutional law reflects judgments to exactly this effect: people have a constitutional right to withdraw equipment that is necessary to keep them alive, but they have no constitutional right to physician-assisted suicide.65 But what is the morally relevant difference?

  It is worth considering the possibility that the act-omission distinction operates as a heuristic for a more complex and difficult assessment of the moral issues at stake. From the moral point of view, harmful acts are generally worse than harmful omissions, in terms of both the state of mind of the wrongdoer and the likely consequences of the wrong. A murderer is typically more malicious than a bystander who refuses to come to the aid of someone who is drowning; the murderer wants his victim to die, whereas the bystander need have no such desire. In addition, a murderer typically guarantees death, whereas a bystander may do no such thing. (I put to one side some complexities about causation.) But in terms of either the wrongdoer’s state of mind or the consequences, harmful acts are not always worse than harmful omissions.

  The moral puzzles arise when life, or a clever interlocutor, comes up with a case in which there is no morally relevant distinction between acts and omissions, but moral intuitions (and the homunculus) strongly suggest that there must be such a difference. As an example, consider the vaccination question discussed earlier; many people show an omission bias, favoring inaction over statistically preferable action.66 Here an ordinarily sensible heuristic, favoring omissions over actions, appears to produce moral error.

  In such cases, we might hypothesize that moral intuitions reflect an overgeneralization of principles that usually make sense—but that fail to make sense in a particular case. Those principles condemn actions but permit omissions—a difference that is often plausible in light of relevant factors but that, in hard cases, cannot be defended. I believe that the persistent acceptance of withdrawal of life-saving equipment, alongside persistent doubts about euthanasia, is a demonstration of the point. There is no morally relevant difference between the two; the act-omission distinction makes a difference apparent or even clear when it is not real.

  Exotic Cases, Moral Judgments, and Reflective Equilibrium

  Some of these examples will seem more contentious than others. But taken as a whole, they seem to me to raise serious doubts about the wide range of work that approaches moral and political dilemmas by attempting to uncover moral intuitions about exotic cases of the kind never or rarely encountered in ordinary life. Should you shoot an innocent person if that is the only way to save twenty innocent people?67 What is the appropriate moral evaluation of a case in which a woman accidentally puts cleaning fluid in her coffee, and her husband, wanting her dead, does not provide the antidote, which he happens to have handy?68 If Martians arrived and told you that they would destroy the world unless you tortured a small child, should you torture a small child? Is there a difference between killing someone by throwing him into the path of a train and killing someone by diverting the train’s path to send it in his direction?

 
