Human Errors


by Nathan H. Lents


  As early humans were eking out a living, mental abilities such as drawing inferences from an incomplete picture, predicting future events based on past experiences, and sizing up a situation from only a partial glimpse of it would have been incredibly powerful and often lifesaving. Occasionally, though, this impressive feature of the brain can lead us astray, creating inaccurate pictures in our minds.

  Entertaining optical illusions exploit these mental faculties. Take, for example, images that appear to move even though they are static. These generally involve alternating or interlocking patterns of some shape with sharp or tapered corners or an otherwise pointy edge. The effect seems to work only when the same patterns are laid out in an oppositional or alternating fashion. Something about the heightened contrast in the pattern drives the effect. Our brains ascribe movement to these shapes as a side effect of a pretty ingenious neurological innovation shared with many other creatures: the “smoothing out” of perception for objects in motion.

  [Figure: Patterns of alternating shapes can invoke the sensation of motion in the human brain. This is due to the way our brain creates smooth “video” from the still images captured by our eyes.]

  The neurons in our retinas capture visual information and relay it to the brain as fast as they can, but the relay is not instantaneous. What we see is not the world as it is now but as it was about one-tenth of a second ago. The delay is set by the maximum frequency at which neurons can fire.

  This maximum firing rate, considered across all the neurons in the retina (since they all send information at the same time), leads to something called the flicker fusion threshold: the fastest rate of visual change that our eyes can register. When visual information changes faster than the eyes can detect, the brain “smooths” it into the perception of an object in steady motion. In a sense, we do not actually see motion; we infer it. The eye takes snapshots—about fifteen per second in dim light—and sends them to the brain. The visual cortex then creates a smooth experience out of what is really an old-time film reel of still pictures.
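  For readers who like to see an idea in code, here is a minimal sketch of that inference, written in Python. It is purely illustrative: the roughly fifteen snapshots per second figure for dim light comes from the text above, while the object’s speed and the interpolation scheme are toy assumptions, not a model of actual neural circuitry.

```python
import math

# Toy model: the retina delivers discrete "snapshots" of a moving object;
# the visual system reconstructs smooth motion by interpolating between them.
# Illustrative numbers only; ~15 samples/sec in dim light is from the text.

SAMPLE_RATE_HZ = 15                    # retinal "snapshots" per second
SAMPLE_INTERVAL = 1 / SAMPLE_RATE_HZ   # ~0.067 s between snapshots

def true_position(t):
    """Ground truth: an object moving at a steady 2.0 units per second."""
    return 2.0 * t

def perceived_position(t):
    """Infer position between the two nearest snapshots by linear
    interpolation, mimicking the brain's smoothing of discrete samples."""
    k = math.floor(t / SAMPLE_INTERVAL)
    t0, t1 = k * SAMPLE_INTERVAL, (k + 1) * SAMPLE_INTERVAL
    p0, p1 = true_position(t0), true_position(t1)
    return p0 + (t - t0) / (t1 - t0) * (p1 - p0)

# The reconstructed track is smooth even though the input is a strobe of
# still images: the motion is inferred, not directly seen.
for ms in (0, 33, 67, 100):
    t = ms / 1000
    print(f"t={t:.3f}s  perceived position={perceived_position(t):.3f}")
```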

  That is no idle analogy; in fact, much of the visual media that we consume is delivered to us as rapid flashes. Both television and movies have a frame rate, which is the number of screen flashes per second, usually between twenty-five and fifty. As long as this rate is faster than the eyes can work, the brain smooths out the input and creates the perception of fluid motion. If the frame rate were just a little slower, people would perceive television programs and movies as they really are: a strobe of flashing pictures. Part of the reason dogs and cats show little interest in television is that their retinal neurons work so much faster than ours that they actually see the flashes, which must be awfully annoying. Birds tend to have higher flicker fusion thresholds than mammals, helping to explain their impressive abilities to hunt quick prey such as fish and flying insects. Apes and other primates, including humans, have rather slow flicker fusion thresholds despite their superior color vision, indicating that hunting fast-moving prey is not usually a priority. (Humans engage in persistence hunting, which relies on endurance and ingenuity more than quick actions.) Still, human brains do create the illusion of motion out of still pictures, even if we do it more slowly than others.
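  That comparison boils down to a single rule: flashes fuse into apparent motion only when the frame rate exceeds the viewer’s flicker fusion threshold. Here is a small Python sketch of that rule. The frame rates are the ones given above; the per-species threshold values are rough approximations of my own choosing, not figures from this book.

```python
# Rule of thumb: a display looks like continuous motion only when its
# frame rate exceeds the viewer's flicker fusion threshold.
# The thresholds below are rough assumptions (they vary with brightness
# and with the individual), used here only to illustrate the logic.

FLICKER_FUSION_HZ = {
    "human (dim light)": 20,   # assumption, in the spirit of the text
    "dog": 75,                 # assumption: dogs resolve much faster flicker
    "pigeon": 100,             # assumption: birds are faster still
}

def appears_smooth(frame_rate_hz, species):
    """True if the flashes should fuse into perceived motion."""
    return frame_rate_hz > FLICKER_FUSION_HZ[species]

for species in FLICKER_FUSION_HZ:
    for fps in (25, 50):       # typical TV/film frame rates from the text
        verdict = "smooth" if appears_smooth(fps, species) else "strobe"
        print(f"{species:18s} at {fps} fps -> {verdict}")
```

  Under these toy numbers, a human viewer sees smooth motion at both frame rates, while the dog and the pigeon see a strobe, which is the pattern the paragraph above describes.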

  The same functional anatomy in our brains that creates the perception of smooth motion often misfires when we look at certain patterns. Our brains get tricked like this only with particular shapes. When a person looks at a checkerboard, it doesn’t usually evoke the illusion of movement. The patterns that tend to trip our “motion-creating” function are those with sharp corners that seem as though they are pressing forward. In the open plains of the savanna, the motif of a pointy-edged something plunging into an open visual field is reliably associated with movement; our brains are adapted for this.

  Artists have long known this and often exploit the brain’s ability to create the illusion of motion in their work. A 140-year-old oil painting is as static as can be, but many of Edgar Degas’s masterpieces, such as his famous ballet dancers, leave the viewer with the distinct sense that the subjects of the works are in motion.

  As error-prone as our visual faculties are, they’re by no means our only mental assets to be intrinsically flawed, let alone the most impressively flawed. Our species’ advanced computational brain, our single most distinctive feature, is full of bugs. These are called cognitive biases, and they can get us into big messes.

  Born to Be Biased

  The term cognitive bias refers to any systematic breakdown of rational or “normal” decision-making. Collectively, these defects in human decision-making receive a huge amount of attention from psychologists, economists, and other scholars seeking to understand how something as miraculously advanced as the human brain can go so incredibly wrong so incredibly often and with such incredible predictability.

  The human brain is, on the whole, a marvel of logic and reason. Even as children, humans are capable of deductive reasoning and learn the simple rules of if/then logic. Mathematics, the basic version of which is an innate skill, is essentially an exercise in logic. This is not to say that reason never escapes us, but in general, humans think and act logically. This is why cognitive biases are strange and call out for study—they are deviations from the rational way that we expect our brains to work.

  There is an entire subfield of economics, known as behavioral economics, that has arisen in recent decades to explore these biases. One of the founders of the field, Daniel Kahneman, won a Nobel Prize for this work and has explained many of our biases in his popular book Thinking, Fast and Slow. There are literally hundreds of cognitive biases with overlapping definitions and common root causes, and they’re grouped into three broad categories: those affecting beliefs, decisions, and behaviors; those affecting social interactions and prejudices; and those associated with distorted memories. Cognitive biases in general are the result of shortcuts that the brain takes in making sense of the world. In order to avoid having to thoroughly analyze each and every situation you find yourself in, your brain establishes rules based on past experience that help you make quicker judgments. Saving time has always been a priority, and the brain has evolved to save time whenever it can. Psychologists refer to these time-saving tricks as heuristics.

  Not surprisingly, a brain built to make quick judgments frequently makes errors. Fast work is sloppy work. In that light, it would not really be fair to consider many of the mistakes our brains make to be design flaws, given how well they perform in most instances. Limits, after all, are not the same thing as flaws.

  What does make cognitive biases qualify as defects is that they are not the result of an overtaxed system; they are patterns of mistakes that are made over and over again. Even worse, they are deeply ingrained and resistant to correction. Even when people know that their brains tend to get something wrong, and even when they’re given all the information needed to get things right, there are some mistakes that they’ll just keep on making.

  For example, all of us are frequently guilty of something called confirmation bias. This is the very human tendency to interpret information in a way that confirms what you already believe is true rather than making a fair and objective assessment. Confirmation bias can take many forms, from selective memory to errors in inductive reasoning to outright refusal to acknowledge contradictory evidence. All of these are information-processing glitches that people usually cannot see in themselves even when they are pointed out but that they find immensely frustrating in others.

  Most people’s views on political and social policies are highly resistant to change, regardless of the data they are presented with. In a classic example, social scientists assembled a random group of individuals and showed them two (made-up) research studies, one seemingly proving that the death penalty was an effective deterrent to violent crime and the other seemingly proving that it was not. The researchers then asked the subjects to rate the quality and relevance of each study. Overall, the participants tended to give high ratings to the study that supported their own views and low ratings to the study supporting the opposing view. They would even sometimes specifically cite limitations of the opposing study that were also present in the study they agreed with! In other experiments, scientists have gone even further by giving participants fabricated studies regarding affirmative action and gun control, two political hot topics. These studies were more comprehensive and more powerful and gave clearer results than any true study that had been done. It made no difference. People rated a study as well designed if, and only if, it supported their views on the subject. (This research reflects another fact about confirmation bias: it pervades our political climate, which is why no one has ever changed his or her mind on an issue because of an argument on Facebook.)

  Another manifestation of confirmation bias is something called the Forer effect, named for Bertram Forer, who performed a now-famous demonstration on a group of unsuspecting college students. Professor Forer asked his students to take a very long and involved personality test, a diagnostic interest inventory, and told them that he would use the results to create a complete description of their personalities. A week later, he provided each of them with a supposedly tailored vignette describing his or her personality in a series of statements. Here is what one of the students received:

  1) You have a great need for other people to like and admire you.
  2) You have a tendency to be critical of yourself.
  3) You have a great deal of unused capacity, which you have not turned to your advantage.
  4) While you have some personality weaknesses, you are generally able to compensate for them.
  5) Your sexual adjustment has presented problems for you.
  6) Disciplined and self-controlled outside, you tend to be worrisome and insecure inside.
  7) At times you have serious doubts as to whether you have made the right decision or done the right thing.
  8) You prefer a certain amount of change and variety and become dissatisfied when hemmed in by restrictions and limitations.
  9) You pride yourself as an independent thinker and do not accept others’ statements without satisfactory proof.
  10) You have found it unwise to be too frank in revealing yourself to others.
  11) At times you are extroverted, affable, sociable, while at other times you are introverted, wary, reserved.
  12) Some of your aspirations tend to be pretty unrealistic.
  13) Security is one of your major goals in life.

  Here’s the thing: All of the students received the same personality description, though they didn’t know it. Their ignorance was key to the experiment. Once they had all received their “personal” and “tailored” personality descriptions, they were asked to rate the accuracy of the description on a scale of 1 to 5. The average score was 4.26. If you’re like me, you probably thought the report above described you pretty accurately too. And it does. It’s pretty accurate for everyone because the statements are either so vague or so universal that each one can apply to almost anyone who isn’t a total psychopath. “Security is one of your major goals in life.” Who wouldn’t agree with that?

  When you read the statements thinking they are tailored for you, you don’t critically evaluate what they are really saying (or not saying). Instead, the statements seem to confirm what you already think about yourself. Of course, if the students had been told they were reading just a random list of personality traits, they would probably note that some didn’t really apply, but because they were told that the statements were written especially for them, they believed what they read.

  This error in our information-processing ability can get us into very real trouble. Astrologers, fortunetellers, mediums, psychics, and the like are well versed in the finer points of the Forer effect. With a little practice, a huckster can use only vague hints from his mark to weave an elaborate tale that seems eerily accurate and applicable. The key is that the hapless victim has to want to believe what he’s being told. For this reason, the Forer effect is often called by another name: the Barnum effect, after P. T. Barnum, who famously said, “There’s a sucker born every minute.” Considering how universal the confirmation bias is, Barnum’s quip is a gross underestimate. At the current global birthrate, there are two hundred and fifty suckers born every minute—roughly one every quarter second.

  Let’s Make a Memory

  Like the human brain’s ability to think logically, its incredible capacity for memory is a marvel. From the world capitals you memorized in seventh grade to the phone number of your elementary school best friend to your vivid recollections of trips, movies, and emotional experiences, there are literally billions of bits of information bouncing around in your head. Yet here, too, an amazing human feature is filled with bugs.

  There are all kinds of flaws in the way our brains form, store, and access memories. For instance, most people have had the experience of recalling and enjoying a vivid memory for years only to find out later, through a recording or by comparing notes with others, that the recollection has major inaccuracies. Sometimes, people remember an event as a first-person experience when they were actually bystanders. Other times, they translocate the memories to a different time or place, or they change the cast of players involved.

  While these small errors may seem innocuous, they can have big consequences. To find them, look no further than the world of criminal justice.

  If a prosecutor has an eyewitness to a crime, a conviction is usually a slam dunk. If a witness positively identifies someone as the assailant whom he saw commit the act, how could he be wrong about that? If the witness had never met either the accused or the victim before, why would he lie?

  But researchers in the field of forensic psychology have made startling discoveries regarding the reliability of eyewitness testimony. Although you’d never know it from how police and prosecutors pursue and present evidence, there are at least three decades of research showing that eyewitness identifications are extremely biased and often mistaken, especially when it comes to violent crimes.

  Psychologists have used simulations to show how easily memories can be distorted after the fact, and these shed light on what is going wrong in the brains of many eyewitnesses. For example, researchers recruited volunteers and randomly assigned them to two groups. Both groups watched a video of a simulated violent crime from a fixed and limited perspective, as though they were bystanders to that crime. Afterward, both groups were asked to give a physical description of the assailant. One group was then left alone for an hour while the other group’s members were shown a lineup of possible assailants and asked if they could identify the perpetrator. However, there was a slight trick with the lineup. None of the actors was the perpetrator, but one—and only one—matched the rough physical description from each eyewitness in terms of height, build, and race. More often than not, the witness identified that person as the perpetrator, and in a majority of those cases, the witness was “very sure” that the identification was correct.

  That’s a troubling outcome, of course—but it’s not the most disturbing part of this experiment. Sometime later, both groups were again asked to describe the perpetrator of the crime. The members of the group that had not seen a lineup gave pretty much the same descriptions that they had before. However, most of the people in the other group gave much more detailed descriptions. Seeing the lineup somehow “improved” their memory of the perpetrator. The increased detail that they provided always matched the actor in the lineup, not the actual perpetrator of the crime they had witnessed. When the researchers pressed the witnesses about their memories of the crime, they found that the witnesses were honestly reporting their recollections to the best of their ability. Their memories had been warped.

  This work has been expanded in lots of interesting ways and has affected how lineups are done in most states. Experts in eyewitness memory tell us that the only valid way to do a lineup is to have every single person in it—the suspect as well as the foils (as the paid lineup actors are known)—match every part of the physical description that is given by the witness. And if the eyewitness description doesn’t match a suspect perfectly (which happens a lot!), the foils must match the suspect, not the description. Moreover, conspicuous identifying marks and even clothing must be as similar as possible among the people in the lineup. Scars and tattoos should be covered, because if the witness remembers that the perpetrator had a neck tattoo and only one of the men in the lineup has a neck tattoo, there is a good chance she will identify that person, even if he is innocent. Her memory of the crime will then be retroactively edited with the new person’s face spliced in. Even clothing can trip this memory-editing property of the brain, and it all happens without a person’s conscious awareness. The false memory is as vivid as a real one. More, actually!
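  Those rules have a simple conditional structure, which a short Python sketch can make explicit. Everything here is a hypothetical simplification for illustration: the attribute names are invented, and real lineup-construction guidelines are far more detailed.

```python
# Sketch of the fair-lineup rule described above (hypothetical
# simplification; attribute names are invented for illustration).

def lineup_is_fair(description, suspect, foils):
    """Every foil must match the witness's description; wherever the
    description and the suspect disagree, foils must match the suspect
    instead, so that the suspect never stands out."""
    for foil in foils:
        for attribute, described_value in description.items():
            target = described_value
            if suspect.get(attribute) != described_value:
                # Description does not fit the suspect: foils must
                # match the suspect, not the description.
                target = suspect.get(attribute)
            if foil.get(attribute) != target:
                return False
    return True

# Example: witness recalls a neck tattoo the suspect doesn't have, so a
# fair foil matches the suspect (no tattoo), not the description.
description = {"height": "tall", "build": "slim", "neck_tattoo": True}
suspect     = {"height": "tall", "build": "slim", "neck_tattoo": False}
foils       = [{"height": "tall", "build": "slim", "neck_tattoo": False}]
print(lineup_is_fair(description, suspect, foils))  # True: no one stands out
```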

  Bystander memory is bad enough, but memories of one’s own experiences are even worse. For instance, it turns out that personal traumas are also vulnerable to memory distortion. This has been documented both in cases of a single traumatic event, such as a sexual assault, and in cases of sustained stress that can involve multiple types of trauma, such as being in a war. The memory distortion most often observed with regard to trauma is that people tend to remember suffering more trauma than they actually experienced. This usually translates into greater severity of posttraumatic stress disorder (PTSD) symptoms over time as the remembered trauma grows.

  Not surprisingly, this deepens and prolongs the suffering associated with the trauma. In one example, researchers asked Desert Storm veterans about certain traumatic experiences (running from sniper fire, sitting with a dying soldier, and so forth) at one month and then at two months following their return from service. Eighty-eight percent of veterans changed their response to at least one event, and 61 percent changed more than one. Importantly, the majority of those changes were from “No, that did not happen to me” to “Yes, that happened to me.” This overremembering was associated with an increase in PTSD symptoms.

 
