Note that default rules of these kinds might be objectionable to both welfarists and nonwelfarists. Welfarists might want to focus on people’s subjective feelings: people’s belief that they are being treated as children, and their objection to that treatment, would count in the assessment. Nonwelfarists would insist that the offense to dignity is objectionable even if it has some kind of welfarist justification. (There is a question whether and when nonwelfarists would be willing to allow welfarist considerations to override the objection.)
In extreme situations, default rules could indeed be a serious affront to dignity. If so, there should be a strong presumption against them (whatever our foundational commitments). But it would be a mistake to use extreme situations, or imaginable cases, as a reason to challenge default rules in general. People are not treated disrespectfully if an institution adopts a double-sided default for printing or if they are automatically enrolled in health insurance or retirement plans. The objection from dignity has far more force in the abstract than in the context of the vast majority of real-world cases in which default rules are at work. Admittedly, the objection must be taken seriously in some real-world contexts.
Manipulation
To deal with this objection, we need to say something about the complex idea of manipulation. An initiative does not count as manipulative merely because it is an effort to alter people’s behavior. If you warn a driver that he is about to drive into a ditch or get into a crash, you are not engaging in manipulation. The same is true if you remind someone that a bill is due or that a doctor’s appointment is upcoming.
It is not clear that the idea of manipulation can be subject to a simple definition or a statement of necessary and sufficient conditions. It might be an umbrella concept for an assortment of related practices. But we might begin by saying that manipulation exists when a choice architect fails to respect people’s capacity for reflective choice, as, for example, by trying to alter people’s behavior in a covert way, or by exploiting their weaknesses. If someone has persuaded you to buy an expensive new cell phone by focusing your attention on a pointless but apparently exciting feature of the product, you have been manipulated. Or if someone has persuaded you to buy a useless exercise machine with videos of fit people getting ever fitter, you can claim to have been manipulated.
A lie can be seen as an extreme example of manipulation. Deceptive behavior can be counted as manipulation as well, even if no one has actually spoken falsely. If you imply that certain food is unhealthy to eat when it is not, you are engaging in manipulation.
An action might generally be counted as manipulative if it lacks transparency—if the role or the motivation of the choice architect is hidden or concealed. In the pivotal scene in The Wizard of Oz, the wizard says, “Pay no attention to the man behind the curtain.” The man behind the curtain is of course a mere human being who is masquerading as the great wizard. If choice architects conceal their own role, it seems fair to charge them with being manipulative.
An action also can be counted as manipulative if it attempts to influence people subconsciously or unconsciously in a way that undermines their capacity for conscious choice. Consider the suggestion that “manipulation is intentionally and successfully influencing someone using methods that pervert choice.”8 Of course the term pervert choice is not self-defining; it might well be taken to refer to methods that do not appeal to or produce conscious deliberation. If so, the objection to manipulation is that it “infringes upon the autonomy of the victim by subverting and insulting their decision-making powers.”9 The objection applies to lies, which attempt to alter behavior by appealing to falsehoods rather than truth (where falsehoods would enable people to decide for themselves). In harder cases, the challenge is to concretize the ideas of subverting and insulting.
Subliminal advertising may be deemed manipulative and insulting because it operates behind the back of the person involved, without appealing to his conscious awareness. People’s decisions are affected in a way that bypasses their own deliberative capacities. If this is the defining problem with subliminal advertising, we can understand why involuntary hypnosis would also count as manipulative. But most people do not favor subliminal advertising, and, to say the least, the idea of involuntary hypnosis does not have much appeal. The question is whether taboo practices can shed light on interventions that can command broader support.
On one view, nudges generally or frequently count as manipulative. Sarah Conly suggests that when nudges are at work, “rather than regarding people as generally capable of making good choices, we outmaneuver them by appealing to their irrationality, just in more fruitful ways. We concede that people can’t generally make good decisions when left to their own devices, and this runs against the basic premise of liberalism, which is that we are basically rational, prudent creatures who may thus, and should thus, direct themselves autonomously.”10 This is a strong charge, and it is not fairly leveled against most kinds of nudges. Recall some examples: disclosure, reminders, warnings, default rules, simplification. Some forms of choice architecture are rooted in an acknowledgment that human beings suffer from bounded rationality, but they do not appeal to “irrationality” or reflect a judgment that “people can’t generally make good decisions when left to their own devices.”
But consider some testing cases in which Conly’s charge is not self-evidently misplaced:
1. Choice architects might choose a graphic health warning on the theory that an emotional, even visceral presentation might have significant effects.
2. Choice architects might be alert to framing effects and present information accordingly. They might enlist loss aversion, suggesting that if people decline to engage in certain action, they will lose money, rather than suggesting that if they engage in certain action, they will gain money. They might be aware that a statement that a product is 90 percent fat-free has a different impact from a statement that a product is 10 percent fat, and they might choose the frame that has the desired effect.
3. They might make a strategic decision about how to present social norms, knowing that the right presentation—for example, emphasizing behavior within the local community—could have a large impact on people’s behavior.
4. They might organize options—say, in a cafeteria or on a form—to make it more likely that people will make certain choices.
It is an understatement to say that none of these cases involves the most egregious forms of manipulation. There is no lying and no deceit. But is there any effort to subvert or to insult people’s decision-making powers? I have said that government should be transparent about what it is doing. It should not hide its actions or its reasons for those actions. Does transparency eliminate the charge of manipulation? In cases of this kind, the answer is not self-evident.
Perhaps a graphic health warning could be counted as manipulative if it is designed to target people’s emotions, rather than to inform them of facts. But what if the warning is explained, in public, on exactly that ground? What if a warning is introduced and justified as effective because it appeals to people’s emotions and thus saves lives? What if it is welcomed by the relevant population—say, smokers—for exactly that reason? Similar questions might be asked about strategic uses of framing effects, social norms, and order effects. T. M. Wilkinson contends, plausibly, that it is too crude to say that manipulation infringes upon autonomy, because “manipulation could be consented to. If it were consented to, in the right kind of way, then the manipulation would at least be consistent with autonomy and might count as enhancing it.”11
If government is targeting System 1—perhaps through framing, perhaps through emotionally evocative appeals—it may be responding to the fact that System 1 has already been targeted, and to people’s detriment. In the context of cigarettes, for example, it is plausible to say that a range of past and current manipulations—including advertising and social norms—have influenced people to become smokers.
If this is so, perhaps we can say that public officials are permitted to meet fire with fire. But some people might insist that two wrongs do not make a right—and that if the government seeks to lead people to quit, it must treat them as adults and appeal to their deliberative capacities. There is no obvious answer to the resulting debates. Some people are committed to welfarism—understood, very roughly, as an approach that attempts to maximize social welfare, taken in the aggregate. Other people are committed to some form of deontology—understood, very roughly, as an approach that is committed to certain principles, such as respect for persons, regardless of whether those principles increase social welfare. Welfarists and deontologists might have different answers to the question when and whether it is acceptable to target System 1 or to manipulate people.
It is not implausible to say that even with full transparency, at least some degree of manipulation is involved whenever a choice architect is targeting emotions or seeking a formulation that will be effective because of its interaction with people’s intuitive or automatic thinking. But there are degrees of manipulation, and there is a big difference between a lie and an effort to frame an alternative in an appealing light.
In ordinary life, we would not be likely to accuse our friends or loved ones of manipulation if they offered a smile or a frown when we said that we were seriously considering a particular course of action. It would be an expansive understanding of the word manipulation if we used it to cover people who characterized one approach as favored by most members of our peer group or who emphasized the losses that might accompany an alternative that they abhor. Actions that are conceivably characterized as manipulative fall along a continuum, and if a doctor or a lawyer uses body language to support or undermine one or another alternative, it would be pretty fussy to raise objections about “subverting” or “perverting” the deliberative processes of a patient or client.
We should be able to agree that most nudges are not manipulative in any relevant sense. But to the extent that some of them can be counted as such, the force of the objection or concern depends on the degree of any manipulation. We might well insist on an absolute or near-absolute taboo on lying or deception on government’s part, for welfarist or nonwelfarist reasons. But surely we should be more lenient toward emotional appeals and framing. One question is whether such approaches produce significant welfare gains. If a graphic health warning saves many lives, is it unacceptable if and because it can be counted as a (mild) form of manipulation? A welfarist would want to make an all-things-considered judgment about the welfare consequences.
It is true that some people, focused on autonomy as an independent good, would erect a strong and perhaps conclusive presumption against the defining cases of manipulation. But I hope that I have said enough to show that the modest forms discussed here strain the boundaries of the concept—and that it would be odd to rule them off-limits.
Learning
Choice-making is a muscle, and the ability to choose can be strengthened through exercise. If nudges caused the muscle to atrophy, we would have an argument against them. Here too, it is necessary to investigate the particulars.
Active choosing and prompted choice hardly impede learning. Nor do information and reminders. On the contrary, they promote learning. Here the point is plain and the evidence is compelling: nudges of this kind exercise the choice-making muscle, rather than the opposite.
With respect to learning, the real problem comes from default rules. It is possible to say that active choosing is far better than defaults simply because choosing promotes learning. Consider, for example, the question whether employers should ask employees to make active choices about their retirement plans or should instead default people into plans that fit their situations. The potential for learning might well count in favor of active choosing. If people are defaulted into certain outcomes, they do not add to their stock of knowledge, and that may be a significant lost opportunity.
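To make the structure of a default rule concrete for readers who think in code, here is a minimal sketch of the retirement-plan example. It is illustrative only: the Employee class, the enrolled function, and the default parameter are invented for this purpose and are not drawn from any real enrollment system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Employee:
    """A chooser; wants_plan is None when no preference has been expressed."""
    name: str
    wants_plan: Optional[bool] = None

def enrolled(employee: Employee, default: bool = True) -> bool:
    """Return whether the employee ends up in the plan.

    An explicit choice always wins; the default governs only
    when the employee has expressed no preference at all.
    """
    if employee.wants_plan is not None:
        return employee.wants_plan  # active choice overrides the default
    return default  # the default rule decides for the silent chooser

# Opting out defeats the default, so freedom of choice is preserved:
assert enrolled(Employee("A", wants_plan=False)) is False
# A chooser who never decides gets the default outcome:
assert enrolled(Employee("B")) is True
```

The sketch mirrors the point in the surrounding text: the default determines the outcome only for those who make no choice, which is why it preserves freedom while also forgoing the learning that an active choice would have prompted.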
The argument for learning depends on the setting. For most people, it is not important to become expert in the numerous decisions that lead to default settings on computers and cell phones, and hence the use of such settings is not objectionable. The same point holds in many other contexts in which institutions rely on defaults rather than active choosing. To know whether choice architects should opt for active choosing, it is necessary to explore whether the context is one in which it is valuable, all things considered, for choosers to acquire a stock of knowledge.
Biased Officials
Choice architects are emphatically human as well, and potentially subject to behavioral biases; to say the least, they are often unreliable. It is reasonable to object to some nudges and to some efforts to intervene in existing choice architecture on the ground that the choice architects might blunder. They might lack important information; followers of F. A. Hayek emphasize what they call the knowledge problem, which means that public officials often lack important information held by the public as a whole. Choice architects might be biased, perhaps because their own parochial interests are at stake; many skeptics emphasize the public choice problem, pointing to the role of self-interested private groups. Choice architects might themselves be subject to important biases—suffering, for example, from present bias, optimistic bias, or probability neglect. In a democratic society, public officials are responsive to public opinion, and if the public is mistaken, officials might be mistaken as well.
It is unclear whether and to what extent this objection is a distinctly ethical one, but it does identify an important cautionary note. One reason for nudges, as opposed to mandates and bans, is that choice architects may err. No one should deny that proposition, which argues in favor of choice-preserving approaches. If choice architects blunder, at least it can be said that people can go their own way.
The initial response to this objection should be familiar: choice architecture is inevitable. When choice architects act, they alter the architecture; they do not create an architecture where it did not exist before. A certain degree of nudging from the public sector cannot be avoided, and there is no use in wishing it away. Nonetheless, choice architects who work for government might decide that it is usually best to rely on free markets and to trust in invisible-hand mechanisms. If so, they would select (or accept) choice architecture that reflects those mechanisms.
This idea raises many conceptual and empirical questions, which I will not engage here. The question is whether it is so abstract, and so rooted in dogmas, that it ought not to command support. To be sure, free markets have many virtues. But in some cases, disclosure, warnings, and reminders can do far more good than harm. As we have seen, active choosing is sometimes inferior to default rules. Someone has to decide in favor of one or the other, and in some cases that someone is inevitably the government. It is true that distrust of public officials will argue against nudging, at least where it is avoidable, but if it is dogmatic and generalized, such distrust will likely produce serious losses in terms of both welfare and freedom.
Contexts
Nudges and choice architecture cannot be avoided, but intentional changes in choice architecture, deliberately made by choice architects, can indeed run into ethical concerns—most obviously where the underlying goals are illicit. Indeed, a concern about illicit goals underlies many of the most plausible objections to (some) nudges.
Where the goals are legitimate, an evaluation of ethical concerns needs to be made with close reference to the context. Disclosure of accurate information, reminders, and (factual) warnings are generally unobjectionable. If nothing is hidden or covert, nudges are less likely to run afoul of ethical constraints, not least when they promote informed choices.
Default rules frequently make life more manageable, and it does not make much sense to reject such rules as such. At the same time, it must be acknowledged that active choosing might turn out to be preferable to default rules, at least when learning is important and one size does not fit all.
It is also true that some imaginable nudges can be counted as forms of manipulation, raising objections from the standpoint of both autonomy and dignity. That is a point against them. But the idea of manipulation contains a core and a periphery, and some interventions fall outside even the periphery. Even when nudges target System 1, it might well strain the concept of manipulation to categorize them as such (consider a graphic warning or a use of loss aversion in an educational message). If they are fully transparent and effective, if their rationale is not hidden, and if they do not limit freedom of choice, they should not be ruled out of bounds on ethical grounds, whatever the foundations of our ethical commitments.