Unfair

by Adam Benforado


  Many of us say that we want an umpire judge, but perhaps what we really want is a robot judge—and presumably not one programmed by humans. Flesh-and-blood adjudicators come with the same limited hardware we all carry in our brains—circuits designed for a Pleistocene past, processors too slow to keep up, and storage drives wanting in capacity.

  —

  All of this raises an interesting puzzle: as I’ve mentioned, judges (and referees, for that matter) rarely, if ever, feel like they are acting in a biased way. Most would vigorously deny that they are being influenced by impermissible and irrelevant elements in their environments or that they are being driven by intuition rather than pure reason. Indeed, most would feel quite confident that they are tuning out biasing factors and focusing on the pertinent details of the case. In light of the growing body of research that makes such rosy accounts of objectivity highly doubtful, how can judges be so blind?

  The answer is that introspection and personal observation don’t tell the whole story. An appellate judge sitting in her chambers with the text of the Constitution on her left, a pile of cases on her right, and the lower court record on her lap can feel quite sure that she is simply applying the letter of the law to the facts of the case. But feeling as if you are reading the text of the Fourth Amendment through unfiltered lenses and applying precedent without bias does not make it so.

  Legal training, experience, and the rules and expectations of the job have the potential to help judges overcome certain biases, but they also reinforce a myth of impartiality. For example, part of the socialization of law school involves learning to deal with serious and sensitive legal issues—like when a sexual encounter qualifies as a rape—without becoming “emotional.” When students learn to discuss nonconsensual sex without getting upset, they are understood to be looking at matters objectively. But, of course, cultivating a flat affect does little or nothing to eliminate the biases that a person might bring to the issue. And approaching something like sexual assault without emotion is neither objective nor fair and balanced. It only feels that way.

  Similarly, over the last hundred years, most law professors have focused their classroom teaching on bringing structure and meaning to the numerous opinions and statutes students are required to absorb. And we assign reading from casebooks that offer up a vision of the law as a set of ordered rules that can be deduced, learned, and applied with consistency and predictability. Certainly, precedent and statutory laws can act as powerful constraints on judges—and may even help eliminate or reduce certain biases—but we professors engage in a damaging charade when we pantomime a legal world in which clear instructions are implemented by dutiful technicians. That world does not exist, but it’s what all judges have been trained to expect.

  Once they are sitting on the bench, approaches to interpretation that seem to offer judges a way to ensure objectivity help keep the truth hidden. Justice Antonin Scalia’s textual originalism, for example, advises the judge to “look for meaning in the governing text, ascribe to that text the meaning that it has borne from its inception, and reject judicial speculation about both the drafters’ extra-textually derived purposes and the desirability of the fair reading’s anticipated consequences.” A judge’s decision, then, turns on the text, the text, and nothing but the text. With seemingly no room for personal agendas or political distortions, it’s the ideal method for the umpire judge.

  But the truth is that a text is rarely confined to just one interpretation, and figuring out the proper historical meaning of a legal source is inherently subjective and conjectural. The Fourth Amendment begins, “The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated…” So what is a “search”? Is using a thermal-imaging device from across the street to see if someone is using heat lamps to grow marijuana in his home a “search”? Is placing a GPS tracking device on a car a “search”? Textual originalism does not dictate a clear answer; it just provides a cover of legitimacy to an inherently biased task.

  In situations like this, a judge is free to attach the meaning that supports his preferred outcome and “find” the history that backs up that meaning, all the while feeling certain that it is the text that’s doing all of the work.

  While textual originalism makes it especially difficult for us to see and acknowledge the biases that judges bring to the table, all judges, whether in the mold of Justice Scalia or Justice Ginsburg, struggle to appreciate the blinders they wear as they go about their work. This is particularly evident in the widespread practice of Supreme Court justices and clerks conducting their own research regarding facts in a case.

  The common portrayal is that justices don’t find facts; they receive them, applying the law to what was established at trial. But members of the Court actually conduct a significant amount of “in-house” investigation into general questions about the world that are relevant to particular legal and policy issues. Rather than simply relying on the lower court record and the briefs before them, the justices (or their clerks) regularly search Google or Westlaw or the Supreme Court library catalogue to determine the amount of carbon dioxide emissions in the air, whether late-term abortions tend to be pursued primarily by women below the poverty line, or whether most members of the public believe that self-defense is a fundamental right. Indeed, in surveying the 120 most important opinions of the last ten years, one legal scholar found that a majority of cases included citations to one or more outside sources.

  At first glance that seems rather unproblematic. If a case comes down to whether fleeing from the police in a vehicle amounts to a “violent felony” under the Armed Career Criminal Act, what’s wrong with a justice or clerk doing a little background reading to get a better sense of the number of injuries and deaths from police chases? In Sykes v. United States, both Justice Kennedy and Justice Thomas uncovered crash statistics that helped them conclude that vehicular flight is indeed a violent felony. Isn’t this precisely what we want when a justice lacks sufficient knowledge about a particular subject or when relevant data does not appear in the record or in the briefs?

  The problem, as scholars have pointed out, is not that the justices are trying to become more informed. It’s the nature of the information they turn up. In many cases, the “facts” they discover are flawed or misleading.

  Judges, just like the rest of us, tend to make gut decisions and then look for supporting data, discarding and dismissing conflicting evidence along the way. It’s the same problem we encountered with the police, emergency responders, and medical personnel who focused on the facts that fit the initial conclusion that David Rosenbaum was just a drunk. When judges do research, they already have an idea of what they are looking for and—surprise!—they tend to find it. The underlying drive is to bolster an argument, not discover the truth.

  Suppose you are a justice looking, for the first time, at the details of the Sykes case. Your immediate instinct (no doubt informed by watching the video of the police chase and fiery crash in Scott v. Harris four years earlier) is that of course fleeing the police in a vehicle is a violent felony. But you need a reason to justify that position, and so your brain offers a rationalization: lots of people are injured and killed during pursuits. It seems like that has to be true; you just need to find the data that says so. And so you go online to look for sources and you search until you find just such a study. There: proof that people are injured and killed when police chases occur, which in turn establishes that vehicular flight crimes are indeed violent felonies. It felt like the conclusion was dictated by the facts, when really a gut reaction led you to engage in a narrow, targeted search.

  We tend to assume that the more data a person has at her fingertips, the more accurate she’ll be. But, in fact, having more information may make it easier to find the necessary support for an erroneous proposition. When a pair of political scientists asked a group of Republicans whether “the size of the yearly budget deficit increased, decreased, or stayed about the same during Clinton’s time as President”—it, in fact, decreased—the most politically informed members (those in the 95th percentile) gave a wrong answer more often than less informed members (those in the 50th percentile). A similar effect was found for well-informed Democrats asked about the state of inflation during President Reagan’s time in office. With more information on hand to support the intuition that the other side’s president had been a failure, it was easier to reach the wrong (but favored) conclusion.

  Our analytical skills can be distorted by a similar dynamic: sometimes being more adept at evaluating something can actually amplify our ideological biases. In one set of experiments, researchers looked at how people with different levels of math competency assessed the effectiveness of a skin-rash treatment or a firearm regulation when given basic data. On the skin-rash evaluation, things played out exactly as you might expect: those who were bad at math got the right answer about half as often as those who were good at math. But when participants were asked to determine the effectiveness of the gun ban, something funny happened with the highly numerate. When the data pointed to a conclusion that conflicted with their ideology, they appeared to disregard it. Given numbers suggesting that crime decreased, mathematically adept conservatives got the right answer only about 20 percent of the time, as compared with 85 percent of the time when the data suggested that crime increased. The reverse was true for liberals: about 70 percent reached the correct conclusion when the data pointed to a decrease in crime, but that dropped to below half when the data implied that the ban was ineffective. Despite knowing how to use the numbers to make an accurate determination, those with conflicting ideological positions simply went with their gut. The findings suggest that being a more skilled and experienced member of the bench might not bring the benefits we’d expect.
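  The arithmetic involved is worth seeing, because it shows that the right answer was genuinely within reach of the numerate participants. The sketch below uses illustrative counts patterned after the study’s design (they are stand-ins, not the researchers’ actual stimuli): the trap is that the biggest raw number sits in the “wrong” cell, so only someone who compares rates, rather than counts, gets it right.

```python
# Illustrative 2x2 counts patterned after the study's design; the specific
# numbers are stand-ins, not the researchers' actual stimuli.
ban = {"crime_down": 223, "crime_up": 75}      # cities that enacted the ban
no_ban = {"crime_down": 107, "crime_up": 21}   # cities that did not

# The intuitive shortcut: 223 is the biggest number on the table, so the
# ban "obviously" worked. That comparison of raw counts is the trap.

# The correct strategy: compare the rate of improvement within each group.
ban_rate = ban["crime_down"] / (ban["crime_down"] + ban["crime_up"])
no_ban_rate = no_ban["crime_down"] / (no_ban["crime_down"] + no_ban["crime_up"])

print(f"crime fell in {ban_rate:.0%} of cities with the ban")   # ~75%
print(f"crime fell in {no_ban_rate:.0%} of cities without it")  # ~84%
# Crime fell more often where there was no ban: the opposite of the
# conclusion the raw counts invite.
```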

  The situation may be exacerbated by the fact that when it comes to the controversial issues that come before the Supreme Court, there are almost always authorities to buttress any position one might want to take. Indeed, when Justice Elena Kagan sought support for her dissent in Sykes—that fleeing the cops in a car is not inherently violent and aggressive—she was easily able to find it, citing evidence that a driver might legitimately fear that a criminal rather than a police officer was pulling her over. The fact that justices’ research may be driven more by motivated reasoning than by an open-minded quest for information is reflected in the diversity of sources that justices cite. Look at recent opinions and you’ll see interest-group sites and blogs alongside esteemed peer-reviewed journals.

  It doesn’t help that judges are exposed to a surprisingly narrow set of ideas, experiences, and viewpoints in their daily interactions. Sure, judges are not sequestered in ivory towers. They have spouses, children, and friends; they attend cookouts and weddings and plays; they read books, watch movies, and go on vacations. But, like all of us, they fall into routines, sticking to what they already know, prefer, and trust.

  Justice Scalia reads two newspapers in the morning: the Wall Street Journal and the Washington Times. As he told a journalist for New York magazine, he “used to get the Washington Post, but it just…went too far for me. I couldn’t handle it anymore.” He was tipped over the edge by “the treatment of almost any conservative issue. It was slanted and often nasty. And, you know, why should I get upset every morning?” He “usually” listens to talk radio—that is where he gets most of his news. In the past, he went to dinner parties that had a real mix of liberals and conservatives, he said, but that hasn’t happened in a long time.

  We all wear blinders fashioned from our limited lives. And if you happen to live in northern Virginia, listen to NPR, and mingle primarily with liberals, you are going to conduct your judicial research accordingly, clicking on certain websites and not others, recalling particular research studies, reading beyond the abstract of this author’s paper but not his colleague’s. And you may surround yourself with clerks who do the same.

  A search engine like Google may seem to offer a way out of the bind. But search engines are themselves deeply biasing. Many of them create filter bubbles by organizing the results based on your particular interests and proclivities, as revealed by the websites you’ve visited, your Facebook profile, or other personal details. In essence, without your awareness, you are being steered toward the sources that you are likely to find the most persuasive and that are most likely to support your views—and away from those that might cause you to rethink your positions.
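  To make the mechanism concrete, here is a deliberately simplified sketch of how personalized re-ranking can tilt two users’ results for the identical query. Everything in it (the field names, the tags, the scoring rule) is hypothetical; real engines are vastly more sophisticated, but the biasing dynamic is the same in kind.

```python
# A simplified, hypothetical model of personalized search re-ranking.
from typing import Dict, List


def personalize(results: List[Dict], profile: Dict[str, float]) -> List[Dict]:
    """Re-rank results so sources matching the user's inferred leanings rise.

    `profile` maps topic tags (inferred from browsing history, social
    accounts, and so on) to affinity weights. Both structures are invented
    for illustration.
    """
    def score(result: Dict) -> float:
        base = result["relevance"]  # query-document match, leaning-neutral
        affinity = sum(profile.get(tag, 0.0) for tag in result["tags"])
        return base + affinity      # personal affinity tilts the ordering

    return sorted(results, key=score, reverse=True)


# Two users issue the identical query but see different orderings:
results = [
    {"url": "thinktank-a.example", "relevance": 0.70, "tags": ["conservative"]},
    {"url": "thinktank-b.example", "relevance": 0.72, "tags": ["liberal"]},
]
print([r["url"] for r in personalize(results, {"conservative": 0.3})])
print([r["url"] for r in personalize(results, {"liberal": 0.3})])
```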

  Amicus curiae briefs—“friend-of-the-court” filings, widely believed to aid the justices by helping to fill informational gaps—are a dead end as well. Although they often purport to offer impartial counsel, they are advocacy documents with facts chosen to persuade. And members of the Court draw from them—more than a hundred times between 2008 and 2013—with a startling lack of scrutiny, citing amicus facts backed up by e-mails, research funded by the amicus itself, unpublished studies “on file with the authors,” and, sometimes, nothing at all. With dozens of amici in certain cases and more each year, it seems as if justices are being given a deep reservoir of knowledge, but all that the system really does is supply an easier way to support preexisting conclusions.

  —

  As we have seen, much of the bias that infects a judge’s decisionmaking is subtle and automatic. And in many cases it is small enough or disguised enough to go unnoticed by others.

  It is like having a single step that’s ever so slightly higher than the others. Until recently, at the 36th Street subway stop in Brooklyn, there was just such a step. Every day it caused numerous people to trip as they ascended to street level. But no one did anything, even those who were most severely affected. The guy who nearly dropped his baby? The woman who fell to her knees? They caught their balance or brushed themselves off and walked on, thinking that it was just bad luck or that they had been clumsy or distracted. Few, if any, blamed the step, and so there it remained, a fraction of an inch off, until someone decided to film the entrance. Suddenly, with a pool of data, the problem was so clear it was comical. In under an hour, the videographer captured seventeen people stumbling on the step. And within a day of the evidence being posted online, New York’s Metropolitan Transportation Authority had begun replacing the staircase.

  We should embrace a similar approach with respect to our courts. Judges need to know if they are more likely to grant parole in the morning than at the end of the day or show more leniency toward a white petitioner than a black one. They need to be aware of how they conduct research and how often they side with the government and whether female attorneys coming before them fare as well as men. But if no one is keeping careful track of their decisions, how will they see the patterns?

  Fortunately, a lot of the monitoring and aggregating machinery already exists. Journalists and academics are in a better position than ever to uncover unequal treatment and distorted outcomes. It was the Boston Globe’s analysis of more than fifteen hundred Massachusetts drunk-driving cases, for example, that revealed a gross disparity in verdicts. In 2010, 82 percent of defendants who selected a bench trial before a judge were acquitted (well above the national average); for those who stuck with a jury trial, the figure was just 51 percent. In interviews, the judges themselves seemed surprised at the results—they simply did not realize how tilted they were against the prosecution. It took an outside monitor pulling together all of the data to reveal the slant. The journalists’ work prompted the Supreme Judicial Court to commission its own year-long study of the problem, which made specific recommendations to help reduce the acquittal rate and restrict the ability of defense attorneys to steer their clients to the most favorable judges, among other things. These ongoing reform efforts hold the potential not only to increase fairness but also to save lives.

  That said, with the decline of print media—the traditional bastion of serious investigative journalism—and constraints on university research funding, the judiciary itself ought to commit to better recordkeeping aimed specifically at uncovering hidden biases. If the Massachusetts Trial Court had collected data on conviction rates for bench and jury trials, it might have noticed the problem years earlier.
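  What that recordkeeping might look like is not exotic. A minimal sketch, assuming case records with just two fields (both field names are hypothetical), is enough to surface a bench-versus-jury gap like the one the Globe found:

```python
# A minimal sketch of outcome tracking by trial type; field names are
# hypothetical, and real data would come from the court's docket system.
from collections import defaultdict


def acquittal_rates(cases):
    """cases: iterable of dicts with 'trial_type' and 'outcome' keys."""
    totals = defaultdict(lambda: {"acquitted": 0, "total": 0})
    for case in cases:
        bucket = totals[case["trial_type"]]
        bucket["total"] += 1
        if case["outcome"] == "acquitted":
            bucket["acquitted"] += 1
    return {t: b["acquitted"] / b["total"] for t, b in totals.items()}


# Illustrative records only:
sample = [
    {"trial_type": "bench", "outcome": "acquitted"},
    {"trial_type": "bench", "outcome": "acquitted"},
    {"trial_type": "bench", "outcome": "convicted"},
    {"trial_type": "jury", "outcome": "acquitted"},
    {"trial_type": "jury", "outcome": "convicted"},
]
print(acquittal_rates(sample))  # {'bench': 0.666..., 'jury': 0.5}
```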

  On an individual level, psychological research suggests that judges could also benefit from self-monitoring by learning about the biases that influence their behavior, expecting (and accepting) that they are not immune, and then taking stock of how they actually behave. The judiciary could help judges with this process not only by providing training on relevant psychological dynamics (a few seminars have already been offered at the federal level on implicit bias) but also by providing individualized statistics. Judges receive surprisingly little feedback on their decisionmaking—lawyers rarely offer it, and the appellate review process rarely yields any meaningful information on cognitive biases or errors. How does a judge know, for example, whether race, gender, or age impact her treatment of defendants, or whether the harsh sentences she hands down are effective? Judges usually make calls and move on. But seeing the data could be a powerful antidote.

  A judge is always going to have hunches about a case—and when those hunches reflect years of experience, they can be valuable. But since they can also lead to errors, our intuitions need to be carefully examined.

  That’s equally true for police officers, lawyers, jurors, and witnesses: we need to get all of our key legal actors in the business of second-guessing themselves. That sounds strange, but doubt isn’t the enemy of justice—blind certainty is. And, in most cases, healthy skepticism isn’t going to develop on its own because there are so many forces pulling the other way.

 
