Mastermind: How to Think Like Sherlock Holmes

by Maria Konnikova


  Logically, neither idea makes sense: a conjunction cannot be more likely than either of its parts. If you didn’t think it likely that Bill played jazz or that Linda was a bank teller to begin with, you should not have altered that judgment just because you did think it probable that Bill was an accountant and Linda, a feminist. An unlikely element or event when combined with a likely one does not somehow magically become any more likely. And yet 87 percent and 85 percent of participants, for the Bill scenario and the Linda scenario, respectively, made that exact judgment, in the process committing the infamous conjunction fallacy.

  They even made it when their choices were limited: if only the two relevant options (Linda is a bank teller or Linda is a feminist bank teller) were included, 85 percent of participants still ranked the conjunction as more likely than the single instance. Even when people were given the logic behind the statements, they sided with the incorrect resemblance logic (Linda seems more like a feminist, so I will say it’s more likely that she’s a feminist bank teller) over the correct extensional logic (feminist bank tellers are only a specific subset of bank tellers, so Linda must be a bank teller with a higher likelihood than she would be a feminist one in particular) in 65 percent of cases. We can all be presented with the same set of facts and features, but the conclusions we draw from them need not match accordingly.
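
  To put the rule the participants violated in a single line: for any two events, the probability of both holding at once can never exceed the probability of either one alone. A minimal sketch in symbols, using the Linda labels as shorthand (nothing here comes from the original study beyond the categories themselves):

    \[
      P(\text{bank teller} \wedge \text{feminist}) \;\le\; \min\bigl(P(\text{bank teller}),\, P(\text{feminist})\bigr)
    \]

  Ranking the conjunction above the single category breaks this bound no matter how well the description seems to fit.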

  Our brains weren’t made to assess things in this light, and our failings here actually make a good amount of sense. When it comes to things like chance and probability, we tend to be naive reasoners (and as chance and probability play a large part in many of our deductions, it’s no wonder that we often go astray). It’s called probabilistic incoherence, and it all stems from that same pragmatic storytelling that we engage in so naturally and readily—a tendency that may go back to a deeper, neural explanation; to, in some sense, W.J. and the split brain.

  Simply put, while probabilistic reasoning seems to be localized in the left hemisphere, deduction appears to activate mostly the right hemisphere. In other words, the neural loci for evaluating logical implications and those for looking at their empirical plausibility may be in opposite hemispheres—a cognitive architecture that isn’t conducive to coordinating statement logic with the assessment of chance and probability. As a result, we aren’t always good at integrating various demands, and we often fail to do so properly, all the while remaining perfectly convinced that we have succeeded admirably.

  Linda’s description matches our picture of a feminist (and Bill’s, of an accountant) so well that we find it hard to dismiss the match as anything but hard fact. What is crucial here is our understanding of how frequently something occurs in real life—and the logical, elementary notion that a whole simply can’t be more likely than any one of its parts. And yet we let the incidental descriptors color our minds so much that we overlook the crucial probabilities.

  What we should be doing is something much more prosaic. We should be gauging how likely any separate occurrence actually is. In chapter three, I introduced the concept of base rates, or how frequently something appears in the population, and promised to revisit it when we discussed deduction. And that’s because base rates, or our ignorance of them, are at the heart of deductive errors like the conjunction fallacy. They hamper observation, but where they really throw you off is in deduction, in moving from all of your observations to the conclusions they imply. Because here, selectivity—and selective ignorance—will throw you off completely.

  To gauge accurately how likely Bill and Linda are to belong to any of these professions, we need to understand the prevalence of accountants, bank tellers, amateur jazz musicians, active feminists, and the whole lot in the population at large. We can’t take our protagonists out of context. We can’t allow one potential match to throw off other information we might have.
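
  To see how base rates constrain the deduction, consider a rough worked illustration; the numbers are invented purely for the sake of the arithmetic, not taken from the study or from any real survey. Suppose bank tellers make up about half a percent of the adult population and active feminists about ten percent. Then

    \[
      P(\text{teller} \wedge \text{feminist}) \;=\; P(\text{teller}) \times P(\text{feminist} \mid \text{teller}) \;\le\; P(\text{teller}) = 0.005,
    \]

  so however well Linda fits the feminist description, the "feminist bank teller" option can never rise above the half-percent ceiling set by the bank-teller base rate.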

  So, how does one go about resisting this trap, sorting the details properly instead of being swept up in irrelevance?

  Perhaps the pinnacle of Holmes’s deductive prowess comes in a case that is less traditional than many of his London pursuits. Silver Blaze, the prize-winning horse of the story’s title, goes missing days before the big Wessex Cup race, on which many a fortune rides. That same morning, his trainer is found dead some distance from the stable. His skull looks like it has been hit by some large, blunt object. The lackey who had been guarding the horse has been drugged and remembers precious little of the night’s events.

  The case is a sensational one: Silver Blaze is one of the most famous horses in England. And so, Scotland Yard sends Inspector Gregory to investigate. Gregory, however, is at a loss. He arrests the most likely suspect—a gentleman who had been seen around the stable the evening of the disappearance—but admits that all evidence is circumstantial and that the picture may change at any moment. And so, three days later, with no horse in sight, Sherlock Holmes and Dr. Watson make their way to Dartmoor.

  Will the horse run the race? Will the trainer’s murderer be brought to justice? Four more days pass. It is the morning of the race. Silver Blaze, Holmes assures the worried owner, Colonel Ross, will run. Not to fear. And run he does. He not only runs, but wins. And his trainer’s murderer is identified soon thereafter.

  We’ll be returning to “Silver Blaze” several times for its insights into the science of deduction, but first let’s consider how Holmes introduces the case to Watson.

  “It is one of those cases,” says Holmes, “where the art of the reasoner should be used rather for the sifting of details than for the acquiring of fresh evidence. The tragedy has been so uncommon, so complete, and of such personal importance to so many people that we are suffering from a plethora of surmise, conjecture, and hypothesis.” In other words, there is too much information to begin with, too many details to be able to start making them into any sort of coherent whole, separating the crucial from the incidental. When so many facts are piled together, the task becomes increasingly problematic. You have a vast quantity of your own observations and data but also an even vaster quantity of potentially incorrect information from individuals who may not have observed as mindfully as you have.

  Holmes puts the problem this way: “The difficulty is to detach the framework of fact—of absolute undeniable fact—from the embellishments of theorists and reporters. Then, having established ourselves upon this sound basis, it is our duty to see what inferences may be drawn and what are the special points upon which the whole mystery turns.” In other words, in sorting through the morass of Bill and Linda, we would have done well to set clearly in our minds what were the actual facts, and what were the embellishments or stories of our minds.

  When we pry the incidental and the crucial apart, we have to exercise the same care that we spent on observing, to make sure that we have recorded all of our impressions accurately. If we’re not careful, mindset, preconception, or subsequent turns of events can affect even what we think we observed in the first place.

  In one of Elizabeth Loftus’s classic studies of eyewitness testimony, participants viewed a film depicting an automobile accident. Loftus then asked each participant to estimate how fast the cars were going when the accident occurred—a classic deduction from available data. But here’s the twist: each time she asked the question, she subtly altered the phrasing. Her description of the accident varied by verb: the cars smashed, collided, bumped, contacted, or hit. What Loftus found was that her phrasing had a drastic impact on subjects’ memory. Not only did those who viewed the “smashed” condition estimate a higher speed than those who viewed the other conditions, but they were also far more likely to recall, one week later, having seen broken glass in the film, even though there was actually no broken glass at all.

  It’s called the misinformation effect. When we are exposed to misleading information, we are likely to recall it as true and to take it into consideration in our deductive process. (In the Loftus experiment, the subjects weren’t even exposed to anything patently false, just misleading.) All the specific word choice does is act as a simple frame that impacts our line of reasoning and even our memory. Hence the difficulty, and the absolute necessity, that Holmes describes of learning to sift what is irrelevant (and all that is media conjecture) from the real, objective, hard facts—and to do so thinkingly and systematically. If you don’t, you may find yourself remembering broken glass instead of the intact windshield you actually saw.

  In fact, it’s when we have more, not less, information that we should be most careful. Our confidence in our deductions tends to increase along with the number of details on which we base them—especially if one of those details makes sense. A longer list somehow seems more reasonable, even if we were to judge individual items on that list as less than probable given the information at hand. So when we see one element in a conjunction that seems to fit, we are likely to accept the full conjunction, even if it makes little sense to do so. Linda the feminist bank teller. Bill the jazz-playing accountant. It’s perverse, in a way. The better we’ve observed and the more data we’ve collected, the more likely we are to be led astray by a single governing detail.

  Similarly, the more incidental details we see, the less likely we are to home in on the crucial, and the more likely we are to give the incidental undue weight. If we are told a story, we are more likely to find it compelling and true if we are also given more details, even if those details are irrelevant to the story’s truth. Psychologist Ruma Falk has noted that when a narrator adds specific, superfluous details to a story of coincidence (for instance, that two people win the lottery in the same small town), listeners are more likely to find the coincidence surprising and compelling.

  Usually when we reason, our minds have a tendency to grab any information that seems to be related to the topic, in the process retrieving both relevant cues and those that seem somehow to be connected but may not actually matter. We may do this for several reasons: familiarity, or a sense that we’ve seen this before or should know something even when we can’t quite put our finger on it; spreading activation, or the idea that the activation of one little memory node triggers others, and over time the triggered memories spread further away from the original; or simple accident or coincidence—we just happen to think of something while thinking about something else.

  If, for example, Holmes were to magically emerge from the book and ask us, not Watson, to enumerate the particulars of the case at hand, we’d rummage through our memory (What did I just read? Or was that the other case?), take certain facts out of storage (Okay: horse gone, trainer dead, lackey drugged, possible suspect apprehended. Am I missing anything?), and in the process, likely bring up others that may not matter all that much (I think I forgot to eat lunch because I was so caught up in the drama; it’s like that time I was reading The Hound of the Baskervilles for the first time, and forgot to eat, and then my head hurt, and I was in bed, and . . .).

  If the tendency to over-activate and over-include isn’t checked, the activation can spread far wider than is useful for the purpose at hand—and can even interfere with the proper perspective needed to focus on that purpose. In the case of Silver Blaze, Colonel Ross constantly urges Holmes to do more, look at more, consider more, to leave, in his words, “no stone unturned.” Energy and activity, more is more; those are his governing principles. He is supremely frustrated when Holmes refuses, choosing instead to focus on the key elements that he has already identified. But Holmes realizes that to weed out the incidental, he should do anything but take in more and more theories and potentially relevant (or not) facts.

  We need, in essence, to do just what the CRT teaches us: reflect, inhibit, and edit. Plug System Holmes in, check the tendency to gather detail thoughtlessly, and instead focus—thoughtfully—on the details we already have. All of those observations? We need to learn to divide them in our minds in order to maximize productive reasoning. We have to learn when not to think of them as well as when to bring them in. We have to learn to concentrate—reflect, inhibit, edit—otherwise we may end up getting exactly nowhere on any of the myriad ideas floating through our heads. Mindfulness and motivation are essential to successful deduction.

  But essential never means simple, nor does it mean sufficient. Even with Silver Blaze, Holmes, as focused and motivated as he is, finds it difficult to sift through all of the possible lines of thought. As he tells Watson once Silver Blaze is recovered, “I confess that any theories which I had formed from the newspaper reports were entirely erroneous. And yet there were indications there, had they not been overlaid by other details which concealed their true import.” The separation of crucial and incidental, the backbone of any deduction, can be hard for even the best-trained minds. That’s why Holmes doesn’t run off based on his initial theories. He first does precisely what he urges us to do: lay the facts out in a neat row and proceed from there. Even in his mistakes, he is deliberative and Holmes-like, not letting System Watson act though it may well want to.

  How does he do this? He goes at his own pace, ignoring everyone who urges haste. He doesn’t let anyone affect him. He does what he needs to do. And beyond that he uses another simple trick. He tells Watson everything—something that occurs with great regularity throughout the Holmes canon (and you thought it was just a clever expository device!). As he tells the doctor before he delves into the pertinent observations, “nothing clears up a case so much as stating it to another person.” It’s the exact same principle we’ve seen in operation before: talking something through, out loud, forces pauses and reflection. It mandates mindfulness. It forces you to consider each premise on its logical merits and allows you to slow down your thinking so that you do not blunder into a feminist Linda. It ensures that you do not let something that is of real significance go by simply because it didn’t catch your attention enough or fit with the causal story that you have (subconsciously, no doubt) already created in your head. It allows your inner Holmes to listen and forces your Watson to pause. It allows you to confirm that you’ve actually understood, not just thought you understood because it seemed right.

  Indeed, it is precisely in stating the facts to Watson that Holmes realizes the thing that will allow him to solve the case. “It was while I was in the carriage, just as we reached the trainer’s house, that the immense significance of the curried mutton occurred to me.” The choice of a dinner is easy to mistake for triviality, until you state it along with everything else and realize that the dish was perfectly engineered to hide the smell and taste of powdered opium, the poison that was used on the stable boy. Someone who didn’t know the curried mutton was to be served would never risk using a poison that could be tasted. The culprit, then, is someone who knew what was for dinner. And that realization prompts Holmes to his famous conclusion: “Before deciding that question I had grasped the significance of the silence of the dog, for one true inference invariably suggests others.” Start on the right track, and you are far more likely to remain there.

  While you’re at it, make sure you are recalling all of your observations, all of the possible permutations that you’ve thought up in your imaginative space, and avoiding those instances that are not part of the picture. You can’t just focus on the details that come to mind most easily or the ones that seem to be representative or the ones that seem to be most salient or the ones that make the most intuitive sense. You have to dig deeper. You would probably never judge Linda a likely bank teller from her description, though you might very well judge her a likely feminist. Don’t let that latter judgment color what follows; instead, proceed with the same logic that you did before, evaluating each element separately and objectively as part of a consistent whole. A likely bank teller? Absolutely not. And so, a feminist one? Even less probable.

  You have to remember, like Holmes, all of the details about Silver Blaze’s disappearance, stripped of all of the papers’ conjectures and the theories your mind may have inadvertently formed as a result. Never would Holmes call Linda a feminist bank teller, unless he was first certain that she was a bank teller.

  The Improbable Is Not Impossible

  In The Sign of Four, a robbery and murder are committed in a small room, locked from the inside, on the top floor of a rather large estate. How in the world did the criminal get inside to do the deed? Holmes enumerates the possibilities: “The door has not been opened since last night,” he tells Watson. “Window is snibbed on the inner side. Framework is solid. No hinges at the side. Let us open it. No water-pipe near. Roof quite out of reach.”

  Then how to possibly get inside? Watson ventures a guess: “The door is locked, the window is inaccessible. Was it through the chimney?”

  No, Holmes tells him. “The grate is much too small. I had already considered that possibility.”

  “How then?” asks an exasperated Watson.

  “You will not apply my precept.” Holmes shakes his head. “How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth? We know that he did not come through the door, the window, or the chimney. We also know that he could not have been concealed in the room, as there is no concealment possible. Whence, then, did he come?”

  And then, at last, Watson sees the answer: “He came from the hole in the roof.” And Holmes’s reply, “Of course he did. He must have done so,” makes it seem the most logical entrance possible.

  It isn’t, of course. It is highly improbable, a proposition that most people would never consider, just as Watson, trained as he is in Holmes’s approach, failed to do without prompting. Just as we find it difficult to separate the incidental from the truly crucial, so, too, do we often fail to consider the improbable—because our minds dismiss it as impossible before we even give it its due. And it’s up to System Holmes to shock us out of that easy narrative and force us to consider that something as unlikely as a rooftop entrance may be the very thing we need to solve our case.

 
