Willful


by Richard Robb


  At the opposite extreme from the merchant’s choice is the trolley problem. There are many versions of the thought experiment originally posed by Philippa Foot; in one of the most famous, five people are standing on trolley tracks with a runaway trolley car hurtling toward them. You are standing on a bridge over the tracks next to a fat man. Pushing him onto the tracks would kill him but stop the trolley and save five people.6

  What might drive you to do it? Not selfish altruism, since you have nothing to gain (and in fact something to lose: you could be charged with manslaughter). Not care altruism, unless you’re an effective altruist, because you have no connection to any of the potential victims. It’s possible that personal moral principles would dictate a particular response—a utilitarian might favor pushing, while a Kantian might not. For effective altruists, utilitarians, and Kantians, the moral considerations arising from the trolley problem fit with purposeful choice. As long as they don’t abandon their principles at the crucial moment, their actions should be predictable. For the rest of us, though, it might not be so easy.

  The trolley problem is carefully constructed so that there is no Pareto-efficient solution. Variations of the problem that deal with injury, where everyone can be made better off, are easy to solve. Say the man you pushed would break his arm to save five people from breaking their arms. Well, then go ahead and push, since you can make it up to him later. At least in principle, the five people who avoided injury could pay him part of their gain from not breaking their arms, leaving everyone better off. But if the fat man is a shot putter about to compete in the Olympics, don’t push, since the cost of breaking his arm likely exceeds the sum cost of breaking the arms of five random people. In the actual trolley problem, though, he can’t be compensated for blocking the trolley since he’ll be dead.

  I don’t think we can resort to a “veil of ignorance” solution, either. If I didn’t know ex ante whether I’d be the fat man or one of the five people on the tracks and I had a one-sixth chance of each, of course I would choose “push.” But that doesn’t help with the trolley problem. It is already resolved who will be the fat man and that’s the individual you’d have to kill.

  In the end, I probably wouldn’t push, although I can’t say for sure. It would depend on the details. Like many people, I’d be more likely to push if the fat man were a villain, if children were on the tracks, or if the number of potential victims were significantly greater than five. But even though I probably wouldn’t push, I won’t argue that others should make the same choice. It’s a for-itself act: no moral principle that I hold would fully justify favoring one life over five, and no amount of calculation would simplify the problem.

  If you favored pushing, I wouldn’t quarrel. We’d just have to disagree, although “disagree” is the wrong word, since I couldn’t make a rational case for my decision. In a sense, I’d probably prefer that you push, since the lives of five abstract people matter more to me than one. But “prefer” isn’t quite right, either—let’s just say that I guess I’d be pleased if I learned after the fact that you had pushed. I’m not going to encourage you to do it, though—that’s too close to pushing myself.

  What would Antipater and Diogenes have to say about the trolley problem? They could talk all day, debating ethical rules and practical consequences. But while they might convince me to take one side or the other in the merchant’s problem, they almost certainly couldn’t convince me that there’s a correct solution to the trolley problem.

  My point is not to sermonize on which actions are right or wrong, but rather to consider how we do behave when actions have moral dimensions. In some cases, we weigh moral principles: it’s wrong to withhold information, worse to lie. Violating these principles would impose costs to varying degrees on the merchant. At the same time, he seeks to earn a profit. All these factors contribute to his decision. Since he is acting purposefully, we should expect a reasonably consistent response when similar problems arise in the future. He might disclose sometimes and not others, but that doesn’t necessarily make him inconsistent, since the particulars might differ. A for-itself component might also influence his decision if he’s moved by an impulse to aid the Rhodians.

  In other cases, there’s little weighting to be done and little self-interest to consider, and valid principles conflict with each other: it’s wrong to cause one person to be killed and also wrong to allow five people to be killed. Since most of us lack a unifying system that dictates which principle should trump the others, the problem is fundamentally for-itself.

  Computing the Monetary Value of a Life

  If the trolley problem were posed to you, you might refuse to answer. You could say, “I don’t know. It would depend on countless particulars, and I don’t know myself well enough to answer with confidence. Maybe I would push sometimes but not others. The act is unpredictable and for-itself.”

  That’s a perfectly fine answer for an individual confronted with hypotheticals, but in the arena of public policy, society cannot avoid addressing real-world analogs to the trolley problem. Whom should we save when resources are limited? How much should we spend to save them? Should we raise the speed limit to reduce the travel time of five million people by ten minutes each at the cost of one extra traffic death? Here, rational choice provides the only sensible guidance.

  Government policies involving everything from the military to health and safety have a direct bearing on life and death. Callous though it may seem, the cost-benefit analysis for these decisions necessarily attaches numbers to life. The standard approach estimates the expected present value of someone’s lifetime earnings plus the monetary value of the services she provides to her family, such as emotional comfort. Another approach infers the value that people place on their own lives from the risks that they take. According to this calculation, someone willing to pay up to but no more than $100 for safety equipment that would reduce his risk of death by 0.001 percent values his life at around $10 million.
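The revealed-preference arithmetic above is a one-line calculation. A minimal sketch, using the figures from the text (not any official government estimate):

```python
# Implied value of a statistical life from a revealed risk trade-off.
# The dollar figure and risk reduction match the example in the text.

willingness_to_pay = 100.0      # dollars: the most he'd pay for the equipment
risk_reduction = 0.001 / 100    # 0.001 percent, expressed as a probability

# Paying $X to avoid a probability p of death implies a life valued at X / p.
implied_value_of_life = willingness_to_pay / risk_reduction
print(f"${implied_value_of_life:,.0f}")  # → $10,000,000
```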

  This type of calculus shapes auto safety regulation. The government directs labor and capital to make safer cars, but only up to a point. It determines speed limits in a similar way. Lowering the highway speed limit to twenty miles per hour would sharply reduce traffic fatalities, but most everyone would object to frittering away more time sitting in cars. Transportation departments must balance time wasted against the value of the expected extra lives lost when raising the speed limit.

  We can cast this argument in terms of Pareto efficiency. People vary with respect to the cost they assign to driving very slowly and the value they implicitly assign to their lives. Suppose a policy raising the speed limit to 70 mph has been proposed, and a few people who are rarely in a hurry and care a great deal about safety object. These cautious people could, in principle, be compensated from a tax on those who benefit from the higher speed limit, leaving everyone better off. The fact that it’s hard to identify the winners and losers from this policy and impractical to implement the transfer doesn’t invalidate the calculation.

  It’s a losing argument to maintain that assigning a finite monetary value to human life crosses a moral line. Government officials must make calculated choices. No one can know who will die in traffic accidents, so policymakers are unlikely to feel a connection to future victims. This allows them to maintain the level of abstraction that the calculation requires. Even if the official in charge of speed limits refuses to explicitly assign a value to human life, she still does so implicitly.

  The value-of-a-life calculation becomes more difficult for a congress or head of state declaring war, and more difficult still for a military officer sending specific individuals into battle. The general will likely feel a mix of emotions over purposefully calculating the value of the lives of people he knows. Generals like Alexander the Great have mitigated this unease by personally leading troops into battle, communicating that they value the soldiers’ lives on the same scale as their own.

  Every day, in dozens of ways, modern, unadventurous civilians implicitly assign value to lives—their own, the lives of their families, and those of strangers. Someone asked how much money he would accept for his child’s life would probably object that such a calculation was impossible and that the question was unthinkable. Yet the Department of Transportation official who assigns a value to a life and the parent who refuses to put a number on his child’s life can coexist (in fact, they could be the same person). When it comes time to act, the parent makes calculations, too. When my son, Nathan, was an infant, I chose a larger, safer car than I would otherwise have bought. But I did not buy an armored car, which would have been more expensive and harder to park.

  If you want to live in the world, then there’s no way to completely avoid the abstraction necessary for purposeful calculations. But the necessity of making these calculations doesn’t mean you must answer questions like “At what odds would you bet your child’s life for a dollar?” in order to be rational. First, the question is spiritually degrading to entertain. Second, even if you answered 1:1 trillion, how could you trust the questioner not to cheat? After all, he’s prepared to kill your child if he wins. The odds that you’ve misunderstood the rules or misjudged the laws of the universe may exceed the likelihood of a very unlucky outcome from a conventional random number generator.

  The philosopher John Searle poses this question in terms of a paradox. On the one hand, he objects strenuously to the idea that he should be willing to bet his life for twenty-five cents if the odds are sufficiently high. He considers the wager absurd and can’t conceive of a probability close enough to one. Even if he could, he says, he would not wager, at any odds, his child’s life or the survival of humankind for a quarter.7

  On the other hand, Searle concedes that he willingly takes many small risks of death in exchange for a benefit, sometimes even a monetary benefit. For example, he would agree to drive someone to the San Francisco airport for $1,000, even though his risk of death would be lower if he stayed home. But if he thinks of the trip as four thousand equally sized increments, isn’t he effectively taking one four-thousandth of the risk over each increment in exchange for a quarter?8
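Searle's decomposition is simple division. A sketch with the figures from the text (the total trip risk is a made-up placeholder, since the text never quantifies it):

```python
# Slicing Searle's airport trip into equal increments.
# The fee and increment count come from the text; the total fatality
# risk for the drive is a hypothetical number for illustration only.

fee = 1000.0        # dollars for the whole trip
increments = 4000   # equal slices of the journey
trip_risk = 1e-7    # assumed total risk of death for the drive (hypothetical)

pay_per_increment = fee / increments         # a quarter per slice
risk_per_increment = trip_risk / increments  # one four-thousandth of the risk

print(pay_per_increment)  # → 0.25
```

So each slice of the drive trades one four-thousandth of the trip's risk for twenty-five cents, which is exactly the bet Searle says he would never take.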

  Driving someone to the airport arises naturally in life, so we are likely to have confidence in the rules of the game. We have less confidence in theoretical scenarios. Who is this homicidal genie who asks me to bet my life against a quarter? Does he have the intent and ability to carry out the murder if I lose? How do I know the rules are as he describes them? Maybe it’s a trick. I don’t have any experience with supernatural beings who pose these kinds of questions—who knows what they have up their sleeves? I might propose a-trillion-to-one odds for my life versus a quarter, but how do I account for the odds that the genie is cheating?

  Driving your child somewhere or, for that matter, allowing her to leave the house, indicates a willingness to tolerate some amount of risk for some benefit. But fortunately for you, unless you are a policymaker dealing with such questions, you don’t have to assign an abstract value to life. You can reject the question because it mixes the purposeful (money) with the for-itself (your child’s life)—two things that cannot be compared.

  PART IV

  Time

  In “economic” life … the motivation of competitive sport plays a role at least as great as the endeavor to secure gratifications mechanically dependent on quantitative consumption … The real problem centers, of course, in the fact that activity has both characters; it is a game, but one in which the most vital substantive goods, comfort and life itself, are stakes, inseparably combined with victory and defeat and their bauble-symbols.

  —FRANK KNIGHT, “The Role of Principles in Economics and Politics”

  8

  Changing Our Minds

  Suppose you buy an expensive motorcycle to get around town, but it’s not quite as much fun as you expected. You return it and buy a small car for the same price. But you find the car hard to park and miss the motorcycle. You return the car, buy the same motorcycle, and keep it. You may feel a little foolish, but this is hardly cause for alarm as long as you don’t do it too often. Your preferences for transportation, safety, and fun never budged—you just learned a few things through experience. This is rational choice at work.

  Now, suppose a middle-aged man buys a motorcycle rather than a car out of nostalgia for his happy, carefree, younger days riding on the Pacific Coast Highway. But the motorcycle doesn’t fit his suburban lifestyle. He quickly sells it back to the dealer and buys a car. His mistake was allowing emotionally charged memories, rather than rational deliberation, to guide his decision. He acted on a behavioral bias and, one hopes, will learn from his mistake.

  In both of these examples, ending up with the right vehicle required learning of some kind. But suppose that you know perfectly what your desires are and how to satisfy them. There are no surprises in your world, and you have all the information you need without learning from experience. You buy a new motorcycle, and it’s everything you dreamed it would be. You never tire of riding it. Yet you still have difficult decisions to make. You lead a busy life, and riding cuts into your work and hence your earnings. How often and when do you indulge in your new favorite pastime? Even as a well-informed and rational actor, is it still possible that you’d change your mind over time?

  Imagine that a demon shows up and demands that you pick a motorcycle riding plan, once and for all, for all future days. The demon makes you choose for yourself and enforces whatever you select. You consider every possible plan available to you—a constant amount of working at your job and riding every day; working like a madman for years and riding for days on end later; or living for the moment, riding today and working hard tomorrow. You rank all the options and choose the one you like the best.

  In this context, changing your mind would mean wanting to break from the plan you agreed to. For example, when the demon first arrived, it may have seemed like a good idea that today should be dedicated to recreation and tomorrow to austerity. But once tomorrow arrives, austerity no longer feels like the way to go. If the demon would let you off, you’d head back to the open road. (It’s assumed in this thought experiment that you trust the demon when you make your choice the first time around. That is, you’re not choosing with a view toward bargaining later on.)

  Can this be rational? Is it possible for rational actors to change their minds even in the absence of new information, to want to get off the path they agreed to with the demon? Not only is it possible but, as we’ll see shortly, it’s inevitable. Eventually, even rational actors always change their minds.
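One textbook way to see what such a reversal looks like numerically is quasi-hyperbolic ("beta-delta") discounting. This is a standard illustration from the economics literature, not the book's own argument, which is more general:

```python
# Quasi-hyperbolic (beta-delta) discounting: a standard illustration of
# how a Monday ranking of future rewards can flip once Tuesday arrives.

beta, delta = 0.5, 1.0  # present bias and per-day discount factor

def value(reward, days_away):
    """Discounted value of a reward received `days_away` days from now."""
    return reward if days_away == 0 else beta * (delta ** days_away) * reward

# From Monday: a small reward Tuesday (1 day off) vs. a larger one Wednesday.
monday_small, monday_large = value(10, 1), value(15, 2)    # 5.0 vs. 7.5
# From Tuesday: the same two rewards are now 0 and 1 days away.
tuesday_small, tuesday_large = value(10, 0), value(15, 1)  # 10.0 vs. 7.5

assert monday_large > monday_small    # Monday's plan: wait for the 15
assert tuesday_small > tuesday_large  # Tuesday's choice: take the 10 now
```

On Monday you sincerely rank the larger, later reward first; when Tuesday comes, the nearer reward wins, and you want off the path you chose.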

  In the real world, there is no demon to make you pick once and for all. If your idea on Monday about what to do on Tuesday no longer seems best when Tuesday rolls around, you can rethink it. Whether deciding how much to ride your motorcycle or how much money to spend, you can choose what you would like to do now, then choose again later. It wouldn’t matter if one of your previous incarnations emerged from thin air to boss you around, saying, “I would have picked something different.” That person is gone—you owe her nothing.

  There are times, no doubt, when an enforcement demon would be helpful. In the absence of one, you might seek a commitment device to force your future self to do the bidding of today’s self. The prototypical example of such a device comes from Homer’s Odyssey. Odysseus wanted to hear the Sirens’ song but knew it would compel him to leap from the ship to join them at his peril. Before the ship reached earshot, he had the crew tie him tightly to the mast and plug their own ears with wax.

  On a less heroic scale, I keep my alarm clock across the room from my bed to force myself to get up. Acting on similar logic, someone who drives to a party might immediately give his car keys to a teetotaling neighbor who can drive him home, anticipating that he’ll be too inebriated later to make a sound decision.

  But let’s not make too much of these examples; the occasional need for a commitment device does not imply that some deep instability pervades everyday life. Odysseus, the alarm-setter, and the partygoer all know they will experience a nonrational state in the near future. Odysseus will fall under the Sirens’ spell, the person setting the alarm clock will be groggy, and the partygoer will be drunk. Preparing for a temporary, anticipated loss of judgment is rational.

  As long as we take steps to keep ourselves on track during periods of impaired judgment, changing our minds (deviating from the path we’d agree to with the demon) does not present a serious problem for individuals. It does, however, for economists attempting to model the way we save, consume, and invest. They will be disappointed by real people who do not and cannot trade present well-being for future well-being in some grand optimization problem.

  When we depart from their models, frustrated economists might conclude that we suffer from a dysfunctional relationship with time. It may not occur to them that we simply don’t have preferences, in the rational choice sense, about how to trade off the present for the future. We can’t rank different paths in a coherent way, since we always change our plans—unless we somehow shackle our future selves to a path against their wishes, and why would we want to do that? The “choice” of a path that we know we won’t stick to isn’t really a choice.

  Thus we need to think about intertemporal choice in a new way. This type of choice can’t be described as purposeful because we lack true preferences. It must not be the case, then, that each moment is tied to an optimal plan that we select and then execute. Neoclassical economic theory cannot account for planning through time; moreover, our sense of choosing in a purposeful way is often illusory. Whether you’re a hippie or Zen master professing to live in “the now” or a wealthy miser who takes pride in delaying gratification, each moment necessarily stands for itself.

 
