by Alfie Kohn
Assor, Roth, and their collaborators have gone on to publish a variety of replications and extensions of these original studies. In one of them, their subjects were ninth graders, and this time giving more attention and affection when children did what parents wanted was carefully distinguished from giving less when they didn’t. This study showed that both positive and negative versions of conditional parenting were harmful, but in slightly different ways. The positive kind—praise for success—sometimes succeeded in getting children to work harder on academic tasks, but, again, at the cost of unhealthy feelings of “internal compulsion.” Negative conditional parenting, meanwhile, didn’t even work in the short run; it just increased the teenagers’ resentment toward their parents.60
In their other studies, conducted with subjects ranging in age from five to twenty-three, conditional parenting had consistently disturbing effects on their emotional and social well-being:
• If young children got the idea that their parents valued them more when they were happy, that interfered with their ability to recognize and respond to sadness in other people.
• If young adults had perceived that their parents’ affection varied with the extent to which they were helpful and considerate, that affected the way these grown children thought about helping: It seemed less a matter of choice than something they felt they had to do to try to feel better about themselves.
• If teenagers got the idea that their parents’ approval of them depended on how well they did in school, they were apt to be self-aggrandizing when they succeeded and ashamed when they failed. “Conditional positive regard promote[d] the development of a fragile, contingent . . . and unstable sense of self.”61
Regardless of the child’s age, regardless of what behavior is required as a condition of the parent’s love, and regardless of whether love is offered for engaging in that behavior or withheld for not engaging in it, the outcomes are troubling. (In fact, they’re troubling even when a teacher rather than a parent seems to accept children only conditionally.62) One way of making sense of these findings, I’ve argued, is to consider the creation of conditional self-acceptance. The adult’s “My care for you depends on your doing x” becomes the child’s “I’m worthwhile only if I do x.” Research confirms that conditionality is a recipe for dysfunction.
But our objections may go beyond what the data say. In a scholarly article entitled “Contingencies of Self-Worth,” Jennifer Crocker and Connie Wolfe took two dozen pages to review empirical findings on the topic. Then, at the very end, they took the unusual step of adding a personal note:
We are alarmed at the suggestion that schools should be teaching or creating self-esteem that is justified by achievements or “warranted” (Baumeister, 1999; Seligman, 1998). Such recommendations are equally lacking in empirical support. Furthermore, and perhaps more seriously, the logical implication of this approach is that some children’s low self-esteem is warranted and that children who do not achieve in socially desirable ways, such as getting good grades, being attractive and popular, or being good at sports, rightly believe that they are not worthy human beings. It is a slippery slope from the view that self-esteem can be warranted or unwarranted to the view that some people are unworthy and justifiably devalued.63
The attack on “empty” or “excessive” praise, or on “unearned” grades or trophies, is not just about what adults offer to kids; it’s ultimately about how kids view themselves. There’s no evidence to support the idea that self-esteem is “inflated” or that children do better when their view of themselves varies with their performance (or with anything else). But once again, this critique is based less on evidence than on the value judgment that feeling good about oneself is something one ought to have to earn. Crocker and Wolfe confronted that value judgment head-on, saying, in effect, that this is an appalling way to raise and educate children. How well you do things should be incidental, not integral, to the way you regard yourself.
CHAPTER 7
Why Self-Discipline Is Overrated
A Closer Look at Grit, Marshmallows, and Control from Within
There’s not much mystery about the purpose of punishments and rewards: They’re generally intended to elicit compliance. Any adult who regards that as a priority will be tempted to make children suffer in some way if they fail to do what they’re told, if they slack off or talk back. Alternatively, he or she may offer praise or some other goodie when they follow directions. But the problem with both of these strategies, even for someone who finds them morally unobjectionable, is that they require continuous monitoring. An authority figure has to be available to hand out rewards or punishments as the child’s behavior merits, and that’s just not very practical. Thus, those who place a premium on obedience may dream of somehow “equipping the child with a built-in supervisor”1 so he’ll keep following the rules, even when no adult is around.
Think for a moment about the word disciplined. It can refer to making a concerted effort at a task (“She’s so disciplined that she spent more than an hour weeding the garden”) or to having been trained to obey authority (“They’ve been disciplined, so they shouldn’t give the babysitter any trouble”). If the goal is to induce children to work hard or to behave in a particular way on their own, then the most expedient arrangement for parents and teachers is to get the children to discipline themselves. Or, as we prefer to say, to be self-disciplined.
This basic concept actually includes a constellation of specific ideas. “Self-discipline” might be defined as marshaling one’s willpower to accomplish things that are regarded as desirable, whereas “self-control” means applying that same sort of willpower to prevent oneself from doing what is seen to be undesirable. In practice, these often function as two aspects of the same machinery of self-regulation—the point being to override one’s “natural” tendencies—so I’ll use the two terms more or less interchangeably. There are also two specific applications of self-discipline: perseverance (or “grit”) and the practice of deferring gratification, in which kids are transformed from lazy grasshoppers into hard-working ants by convincing them to put off doing what they enjoy.
Search for these terms in indexes of published books, scholarly articles, or Internet sites, and you’ll quickly discover how rare it is to find a discouraging word, or even a penetrating question, about their value. That may be because all of them fit naturally with the traditionalist sensibility I’ve been exploring throughout this book. Anyone who believes that children are spoiled, disobedient, and self-satisfied, that they don’t do enough to earn the praise they get or the esteem they have for themselves, would probably see these as promising strategies to make kids act in such a way as to become more deserving.
Which brings us to the marshmallow meme.
S’MORE MISREPRESENTATION OF RESEARCH
Back in the 1960s, at the Stanford University laboratory of a psychologist named Walter Mischel, preschool-age children were left alone in a room after having been told they could get a small treat (say, a marshmallow or pretzel) by ringing a bell at any time to summon the experimenter—or, if they held out until he returned on his own, they could have a bigger treat (two marshmallows or pretzels). More than four decades later, Mischel’s studies have resurfaced, perhaps reflecting a fresh wave of interest in the broader issue of self-discipline. The way his results are typically summarized, however, turns out to be rather different from what the research actually found.
Let’s back up a step. Mischel had made a name for himself among psychologists with a subversive argument that threatened to turn the field of personality theory upside down. His contention was that we’re too quick to assume that each of us has a stable personality that manifests itself across different situations and can be identified by psychological tests. We may think of ourselves as having generalized traits, but how we act turns out, for the most part, to be a function of the various environments in which we find ourselves. What’s attributed to your “personality” is really just a bunch of cognitive strategies that you devise to deal with what happens to you. Mischel was curious about how each of us comes up with those strategies, but he doubted they added up to a distinctive and invariant profile.2
That’s the context in which his marshmallow experiments should be understood, but it’s not the context in which they’re normally presented. Usually, we’re just told that the children who were able to wait for an extra treat scored better on measures of cognitive and social skills many years later and that they had higher SAT scores. Teach kids to put off the payoff as long as possible and they’ll end up more successful.
But the marshmallow studies actually don’t support that conclusion at all. Here’s why:
1. What mattered was the setting, not the individual’s self-control. What mostly interested Mischel, as a student of cognitive strategies, wasn’t whether children could wait for a bigger treat—which, by the way, most of them could.3 It wasn’t even whether waiters fared better in life than non-waiters. Rather, the central question was how children go about trying to wait and which strategies help. It turned out that kids waited longer when they were distracted by a toy. What worked best wasn’t (in his words) “self-denial and grim determination” but doing something enjoyable while waiting so that self-control wasn’t needed at all.4
Mischel and his colleagues systematically varied the details of the situation to see if children’s willingness to wait was different under each condition. These included telling them about the marshmallow as opposed to showing it to them, encouraging them to think about its shape rather than its taste, and suggesting a distraction strategy instead of having the kids come up with their own. Sure enough, such factors made a big difference. In fact, they were more important for predicting the outcome than any trait the child possessed.5
Of course that’s exactly the conclusion we’d expect from Walter Mischel in light of his theoretical views. But it’s precisely the opposite of the usual message that (a) self-control is a matter of individual character, which (b) we ought to be helping children to develop.6 In fact, when these children were tracked down ten years later, those who had been more likely to wait for a bigger snack didn’t have any more self-control or willpower than the others.7
This is hardly the only psychology study whose central finding was changed in the telling. (Another example is the famous Milgram experiments.8) But Mischel’s work provides a classic illustration of how research can be distorted in the service of an ideological agenda. Consider, for example, this passage from an article about the marshmallow studies that appeared in the New Yorker in 2009:
Mischel argues that intelligence is largely at the mercy of self-control: even the smartest kids still need to do their homework. “What we’re really measuring with the marshmallows isn’t will power or self-control,” Mischel says. “It’s much more important than that. This task forces kids to find a way to make the situation work for them.”9
The writer, Jonah Lehrer, sticks in a curious non sequitur about homework. (Even if self-control did turn out to be a valuable attribute, it’s neither necessary for, nor enhanced by, making students do more academic assignments when they get home from school.) More important, though, Mischel emphasizes that his experiments weren’t about self-control at all, yet Lehrer introduces that direct quote by asserting exactly the opposite—that self-control is even more important than intelligence. Usually you have to dig up the original study to determine whether (and how) the press coverage has misrepresented it. In this case the inaccurate conclusion is right there for any reasonably careful reader to spot—as if Lehrer weren’t even aware of what he’d done.
2. Deferral of gratification may be an effect, not a cause. Just because some children were more effective than others at distracting themselves from the snack doesn’t mean this capacity was responsible for the impressive results found ten years later. Instead, both of these things may have been due to something about their home environment.10 If that’s true, there’s no reason to believe that enhancing children’s ability to defer gratification would be beneficial: It was just a marker, not a cause. By way of analogy, teenagers who visit ski resorts over winter break probably have a superior record of being admitted to the Ivy League. Should we therefore hire consultants to teach low-income children how to ski in order to improve the odds that colleges will accept them?
3. What counts is just the capacity to distract oneself. Even to the extent that Mischel looked at characteristics of individual children in addition to the details of the situation, he was primarily concerned with “cognitive competencies”—strategies for how to think about (or stop thinking about) something attractive—and how those competencies may be related to other skills that will be assessed years later. In fact, those outcomes were not associated with the ability to defer gratification, per se, but only with the ability to distract oneself when distractions weren’t provided by the experimenters.11
What’s more, that facility for creating a distraction turned out to be significantly correlated with plain old intelligence12—a very interesting finding because other writers have argued that self-discipline and intelligence are two entirely different things and that we ought to train children to acquire the former.13 It isn’t really so surprising, then, that kids’ capacity to come up with a way to think about something other than the food was associated with their SAT scores. This doesn’t mean willpower makes kids successful; it means the same loose cluster of mental proficiencies that helped them with distraction when they were young also helped them score well on a test of reasoning when they were older. (In fact, when the researchers held those scores constant, most, though not all, of the other long-term benefits associated with their marshmallow-related behavior evaporated.)14
4. Holding out for more isn’t necessarily the smarter choice. Finally, most people who cite these experiments take for granted that it’s always better to wait for two marshmallows—that is, to defer gratification. But is that really true? Mischel, for one, didn’t think so. “The decision to delay or not to delay hinges, in part, on the individual’s values and expectations with regard to the specific contingencies,” he and his colleagues wrote. “In a given situation, therefore, postponing gratification may or may not be a wise or adaptive choice.”15 Sometimes a marshmallow in the hand is better than two in the bush.
It’s true, for example, that if you spend too much of your money when you’re young, you may regret it when you’re old. But how much should you deprive yourself—and perhaps your children—in order to accumulate savings for retirement? For one thing, a group of economists argued that as our health declines we derive less pleasure from what we’re able to buy.16 More generally, as John Maynard Keynes famously pointed out, “In the long run, we are all dead.”
To take what you can while you can may be a rational choice, depending on what you happen to be doing. Some tasks favor that strategy; others favor waiting. In one experiment, researchers fiddled with the algorithm that determined how points were earned in a simulation game and then watched how that change interacted with the personalities of the people who were playing. “Impulsivity,” they concluded, “is not a purely maladaptive trait but one whose consequences hinge on the structure of the decision-making environment.”17
What’s more, someone’s inclination to take now rather than wait can depend on what that person has experienced in the past. “For a child accustomed to stolen possessions and broken promises, the only guaranteed treats are the ones you have already swallowed,” remarked a group of social scientists at the University of Rochester. In 2013 they set up their own version of Mischel’s experiment for preschool-age children. But before any marshmallows made an appearance, they introduced a couple of art projects. During this period, the kids were encouraged to wait for “a brand-new set of exciting art supplies” rather than using the well-worn crayons and dinky little stickers that were already available. After a few minutes, the adult returned. Half the kids received the promised, far-superior materials. But the other half got only an apology: “I’m sorry, but I made a mistake. We don’t have any other art supplies after all.”
Then it was time for the marshmallow challenge. And how long did the children wait for two to appear before they gave up and ate the one sitting in front of them? Well, it depended on their earlier experience. Those for whom the adult had proved unreliable (by failing to deliver the promised art supplies) waited only about three minutes. But those who had learned that good things do come to those who wait were willing to hold off, on average, for a remarkable twelve minutes.
The researchers’ point, which they described in an article called “Rational Snacking,” was twofold. The decision about whether to defer gratification may tell us what the child has already learned about whether waiting is likely to be worth it. If her experience is that it isn’t, then taking whatever is available at the moment is a perfectly reasonable choice. But that possibility also blasts a marshmallow-sized hole in the conclusion that the capacity to defer gratification produces various later-life benefits. “It is premature to conclude that most of the observed variance—and the longitudinal correlation between wait-times and later life outcomes—is due to differences in individuals’ self-control capacities. Rather, an unreliable worldview, in addition to self-control, may be causally related to later life outcomes.” Success may reflect one’s earlier experiences, in which case self-restraint would be just another result of those experiences, not the explanation for how well one fares later.18
“Rational Snacking” helps to clarify what may have been going on in Mischel’s original experiments, where there was no effort to learn about the children’s lives before they walked into his lab. But even on their own, those experiments simply don’t support the case for willpower and self-denial that traditionalists have tried to make. Waiting for a bigger treat doesn’t always make sense. And when it does, the question is, “What changes in the environment can facilitate that choice?” In other words: How can distractions make self-discipline irrelevant?