This Will Make You Smarter

by John Brockman


  Here are three simple conceptual tools that might help us see in front of our noses: nexus causality, moral warfare, and misattribution arbitrage. Causality itself is an evolved conceptual tool that simplifies, schematizes, and focuses our representation of situations. This cognitive machinery guides us to think in terms of the cause—of an outcome’s having a single cause. Yet for enlarged understanding, it is more accurate to represent outcomes as caused by an intersection, or nexus, of factors (including the absence of precluding conditions). In War and Peace, Tolstoy asks, “When an apple ripens and falls, why does it fall? Because of its attraction to the earth, because its stem withers, because it is dried by the sun, because it grows heavier, because the wind shakes it . . . ?” With little effort, any modern scientist could extend Tolstoy’s list endlessly. We evolved, however, as cognitively improvisational tool users, dependent on identifying actions we could take that would lead to immediate payoffs. So our minds evolved to represent situations in a way that highlighted the element in the nexus that we could manipulate to bring about a favored outcome. Elements in the situation that remained stable and that we could not change (like gravity or human nature) were left out of our representation of causes. Similarly, variable factors in the nexus (like the wind blowing), which we could not control but which predicted an outcome (the apple falling), were also useful to represent as causes, to prepare ourselves to exploit opportunities or avoid dangers. So the reality of the causal nexus is cognitively ignored in favor of the cartoon of single causes. While useful for a forager, this machinery impoverishes our scientific understanding, rendering discussions (whether elite, scientific, or public) of the “causes”—of cancer, war, violence, mental disorders, infidelity, unemployment, climate, poverty, and so on—ridiculous.

  Similarly, as players of evolved social games, we are designed to represent others’ behavior and associated outcomes as caused by free will (by intentions). That is, we evolved to view “man,” as Aristotle put it, as “the originator of his own actions.” Given an outcome we dislike, we ignore the nexus and trace “the” causal chain back to a person. We typically represent the backward chain as ending in—and the outcome as originating in—the person. Locating the “cause” (blame) in one or more persons allows us to punitively motivate others to avoid causing outcomes we don’t like (or to incentivize outcomes we do like). More despicably, if something happens that many regard as a bad outcome, this gives us the opportunity to sift through the causal nexus for the one thread that colorably leads back to our rivals (where the blame obviously lies). Lamentably, much of our species’ moral psychology evolved for moral warfare, a ruthless zero-sum game. Offensive play typically involves recruiting others to disadvantage or eliminate our rivals by publicly sourcing them as the cause of bad outcomes. Defensive play involves giving our rivals no ammunition to mobilize others against us.

  The moral game of blame attribution is only one subtype of misattribution arbitrage. For example, epidemiologists estimate that it was not until 1905 that you were better off going to a physician. (Ignaz Semmelweis noticed that doctors doubled the mortality rate of mothers at delivery.) The role of the physician predated its rational function for thousands of years, so why were there physicians? Economists, forecasters, and professional portfolio managers typically do no better than chance, yet command immense salaries for their services. Food prices are driven up to starvation levels in underdeveloped countries, based on climate models that cannot successfully retrodict known climate history. Liability lawyers win huge sums for plaintiffs who get diseases at no higher rates than others not exposed to “the” supposed cause. What is going on? The complexity and noise permeating any real causal nexus generates a fog of uncertainty. Slight biases in causal attribution or in blameworthiness (e.g., sins of commission are worse than sins of omission) allow a stable niche for extracting undeserved credit or targeting undeserved blame. If the patient recovers, it was due to my heroic efforts; if not, the underlying disease was too severe. If it weren’t for my macroeconomic policy, the economy would be even worse. The abandonment of moral warfare and a wider appreciation of nexus causality and misattribution arbitrage would help us all shed at least some of the destructive delusions that cost humanity so much.

  Self-Serving Bias

  David G. Myers

  Social psychologist, Hope College; author, A Friendly Letter to Skeptics and Atheists

  Most of us have a good reputation with ourselves. That’s the gist of a sometimes amusing and frequently perilous phenomenon that social psychologists call self-serving bias.

  Accepting more responsibility for success than for failure, for good deeds than for bad.

  In experiments, people readily accept credit when told they have succeeded, attributing it to their ability and effort. Yet they attribute failure to such external factors as bad luck or the problem’s “impossibility.” When we win at Scrabble, it’s because of our verbal dexterity. When we lose, it’s because “I was stuck with a Q but no U.” Self-serving attributions have been observed with athletes (after victory or defeat), students (after high or low exam grades), drivers (after accidents), and managers (after profits or losses). The question “What have I done to deserve this?” is one we ask of our troubles, not our successes.

  The better-than-average phenomenon: How do I love me? Let me count the ways.

  It’s not just in Lake Wobegon that all the children are above average. In one College Board survey of 829,000 high-school seniors, 0 percent rated themselves below average in “ability to get along with others,” 60 percent rated themselves in the top 10 percent, and 25 percent rated themselves in the top 1 percent. Compared with our average peer, most of us fancy ourselves as more intelligent, better-looking, less prejudiced, more ethical, healthier, and likely to live longer—a phenomenon recognized in Freud’s joke about the man who told his wife, “If one of us should die, I shall move to Paris.”

  In everyday life, more than nine in ten drivers are above-average drivers, or so they presume. In surveys of college faculty, 90 percent or more have rated themselves as superior to their average colleague (which naturally leads to some envy and disgruntlement when one’s talents are underappreciated). When husbands and wives estimate what percent of the housework they contribute, or when work-team members estimate their contributions, their self-estimates routinely sum to more than 100 percent.

  Studies of self-serving bias and its cousins—illusory optimism, self-justification, and in-group bias—remind us of what literature and religion have taught: Pride often goes before a fall. Perceiving ourselves and our group favorably protects us against depression, buffers stress, and sustains our hopes. But it does so at the cost of marital discord, bargaining impasses, condescending prejudice, national hubris, and war. Being mindful of self-serving bias beckons us not to false modesty but to a humility that affirms our genuine talents and virtues and likewise those of others.

  Cognitive Humility

  Gary Marcus

  Director, Center for Child Language, New York University; author, Kluge: The Haphazard Evolution of the Human Mind

  Hamlet may have said that human beings are noble in reason and infinite in faculties, but in reality—as four decades of experiments in cognitive psychology have shown—our minds are finite and far from noble. Knowing their limits can help us to become better reasoners.

  Almost all of those limits start with a peculiar fact about human memory: Although we are pretty good at storing information in our brains, we are pretty poor at retrieving it. We can recognize photos from our high school yearbooks decades later, yet find it impossible to remember what we had for breakfast yesterday. Faulty memories have been known to lead to erroneous eyewitness testimony (and false imprisonment), to marital friction (in the form of overlooked anniversaries), and even to death (skydivers, for example, have been known to forget to pull their rip cords, accounting by one estimate for approximately 6 percent of skydiving fatalities).
  Computer memory is much better than human memory because early computer scientists discovered a trick that evolution never did: organizing information by means of a master map, in which each bit of information to be stored is assigned a uniquely identifiable location in the computer’s memory vaults. Human beings, by contrast, appear to lack such master memory maps and retrieve information in far more haphazard fashion, by using clues (or cues) to what’s being looked for. In consequence, our memories cannot be searched as systematically or as reliably as those of a computer (or Internet database). Instead, human memories are deeply subject to context. Scuba divers, for example, are better at remembering the words they study underwater when they are tested underwater rather than on land, even if the words have nothing to do with the sea.
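
  To make Marcus’s contrast concrete, here is a minimal sketch in Python (not from the essay; the data and names are invented for illustration). The dictionary stands in for location-addressed computer memory, where every item sits at a unique address and lookup is exact; the recall function stands in for cue-based, context-sensitive retrieval, where a cue can match several stored items and the current context biases which memory surfaces first.

# Location-addressed storage: each item lives at a unique key, so lookup is exact.
computer_memory = {
    0x01: "breakfast: oatmeal",
    0x02: "yearbook photo: class of 1989",
}
print(computer_memory[0x01])  # always returns exactly what was stored

# Cue-based retrieval: items are fetched by matching cues, and the current
# context (here, where you happen to be) biases which memories come back first.
memories = [
    {"content": "word list studied underwater", "context": "underwater"},
    {"content": "word list studied on land", "context": "land"},
    {"content": "soup recipe", "context": "kitchen"},
]

def recall(cue, current_context):
    """Return stored items matching the cue, favoring the current context."""
    hits = [m for m in memories if cue in m["content"] or cue == m["context"]]
    # Context-congruent memories sort to the front, like the scuba-diver effect.
    return sorted(hits, key=lambda m: m["context"] != current_context)

print(recall("word list", current_context="underwater"))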

  Sometimes this sensitivity to context is useful. We are better able to remember what we know about cooking when we’re in the kitchen than when we’re, say, skiing. But it also comes at a cost: When we need to remember something in a situation other than the one in which it was stored, the memory is often hard to retrieve. One of the biggest challenges in education, for example, is to get children to apply what they learn in school to real-world situations. Perhaps the most dire consequence is that human beings tend to be better at remembering evidence consistent with their beliefs than evidence that contradicts those beliefs. When two people disagree, it is often because their prior beliefs lead them to remember (or focus on) different bits of evidence. To consider something well, of course, is to evaluate both sides of an argument, but unless we also go the extra mile of deliberately forcing ourselves to consider alternatives—which doesn’t come naturally—we’re more prone to recall evidence consistent with a belief than inconsistent with it.

  Overcoming this mental weakness (known as confirmation bias) is a lifelong struggle; recognizing that we all suffer from it is an important first step. We can try to work around it, compensating for our inborn tendencies toward self-serving and biased recollection by disciplining ourselves to consider not just the data that might fit with our own beliefs but also the data that might lead other people to have beliefs different from ours.

  Technologies Have Biases

  Douglas Rushkoff

  Media theorist; documentary writer; author, Program or Be Programmed: Ten Commands for a Digital Age

  People like to think of technologies and media as neutral and that only their use or content determines their effect. Guns don’t kill people, after all; people kill people. But guns are much more biased toward killing people than, say, pillows—even though many a pillow has been utilized to smother an aging relative or adulterous spouse.

  Our widespread inability to recognize or even acknowledge the biases of the technologies we use renders us incapable of gaining any real agency through them. We accept our iPads, Facebook accounts, and automobiles at face value—as preexisting conditions—rather than as tools with embedded biases.

  Marshall McLuhan exhorted us to recognize that our media affect us beyond whatever content is being transmitted through them. And while his message was itself garbled by the media through which he expressed it (the medium is the what?), it is true enough to be generalized to all technology. We are free to use any car we like to get to work—gasoline-, diesel-, electric-, or hydrogen-powered—and this sense of choice blinds us to the fundamental bias of the automobile toward distance, commuting, suburbs, and energy consumption.

  Likewise, soft technologies, from central currency to psychotherapy, are biased in their construction as much as in their implementation. No matter how we spend U.S. dollars, we are nonetheless fortifying banking and the centralization of capital. Put a psychotherapist on his own couch and a patient in the chair and the therapist will begin to exhibit treatable pathologies. It’s set up that way, just as Facebook is set up to make us think of ourselves in terms of our “likes” and an iPad is set up to make us start paying for media and stop producing them ourselves.

  If the concept that technologies have biases were to become common knowledge, we could implement them consciously and purposefully. If we don’t bring this concept into general awareness, our technologies and their effects will continue to threaten and confound us.

  Bias Is the Nose for the Story

  Gerald Smallberg

  Practicing neurologist, New York City; playwright, off-off-Broadway: Charter Members, The Gold Ring

  The exponential explosion of information and its accessibility make our ability to gauge its truthfulness not only more important but also more difficult. Information has importance in proportion to its relevance and meaning. Its ultimate value is how we use it to make decisions and put it in a framework of knowledge.

  Our perceptions are crucial in appreciating truth. However, we do not apprehend objective reality. Perception is based on recognition and interpretation of sensory stimuli derived from patterns of electrical impulses. From this data, the brain creates analogs and models that simulate tangible, concrete objects in the real world. Experience, though, colors and influences all of our perceptions by anticipating and predicting everything we encounter. It is the reason Goethe advised that “one must ask children and birds how cherries and strawberries taste.” This preferential set of intuitions, feelings, and ideas—less poetically characterized by the term “bias”—poses a challenge to our ability to weigh evidence accurately to arrive at truth. Bias is the thumb that experience puts on the scale.

  Our brains evolved having to make the right bet with limited information. Fortune, it has been said, favors the prepared mind. Bias, in the form of expectation, inclination, and anticipatory hunches, helped load the dice in our favor and for that reason is hardwired into our thinking. Bias is an intuition—a sensitivity, a receptiveness—that acts as a lens or filter on all our perceptions. “If the doors of perception were cleansed,” William Blake said, “every thing would appear to man as it is, infinite.” But without our biases to focus our attention, we would be lost in that endless and limitless expanse. We have at our disposal an immeasurable assortment of biases, and their combination in each of us is as unique as a fingerprint. These biases mediate between our intellect and emotions to help congeal perception into opinion, judgment, category, metaphor, analogy, theory, and ideology, which frame how we see the world.

  Bias is tentative. Bias adjusts as the facts change. Bias is a provisional hypothesis. Bias is normal.

  Although bias is normal in the sense that it is a product of how we select and perceive information, its influence on our thinking cannot be ignored. Medical science has long been aware of the inherent bias that occurs in collecting and analyzing clinical data. The double-blind, randomized, controlled study, the gold standard of clinical design, was developed in an attempt to nullify its influence.

  We live in the world, however, not in a laboratory, and bias cannot be eliminated. Bias, critically utilized, sharpens the collection of data by knowing when to look, where to look, and how to look. It is fundamental to both inductive and deductive reasoning. Darwin didn’t collect his data randomly or disinterestedly to formulate the theory of evolution by natural selection. Bias is the nose for the story.

  Truth needs continually to be validated against all evidence that challenges it fairly and honestly. Science, with its formal methodology of experimentation and the reproducibility of its findings, is available to anyone who plays by its rules. No ideology, religion, culture, or civilization is awarded special privileges or rights. The truth that survives this ordeal has another burden to bear. Like the words in a multidimensional crossword puzzle, it has to fit together with all the other pieces already in place. The better and more elaborate the fit, the more certain the truth. Science permits no exceptions. It is inexorably revisionary, learning from its mistakes, erasing and rewriting even its most sacred texts, until the puzzle is complete.

  Control Your Spotlight

  Jonah Lehrer

  Contributing editor, Wired magazine; author, How We Decide

  In the late 1960s, the psychologist Walter Mischel began a simple experiment with four-year-old children. He invited the kids into a tiny room containing a desk and a chair and asked them to pick a treat from a tray of marshmallows, cookies, and pretzel sticks. Mischel then made the four-year-olds an offer: They could either eat one treat right away or, if they were willing to wait while he stepped out for a few minutes, they could have two treats when he returned. Not surprisingly, nearly every kid chose to wait.

  At the time, psychologists assumed that the ability to delay gratification in order to get that second marshmallow or cookie depended on willpower. Some people simply had more willpower than others, which allowed them to resist tempting sweets and save money for retirement. However, after watching hundreds of kids participate in the marshmallow experiment, Mischel concluded that this standard model was wrong. He came to realize that willpower was inherently weak and that children who tried to postpone the treat—gritting their teeth in the face of temptation—soon lost the battle, often within thirty seconds.

  Instead, Mischel discovered something interesting when he studied the tiny percentage of kids who could successfully wait for the second treat. Without exception, these “high delayers” all relied on the same mental strategy: They found a way to keep themselves from thinking about the treat, directing their gaze away from the yummy marshmallow. Some covered their eyes or played hide-and-seek underneath the desks. Others sang songs from Sesame Street, or repeatedly tied their shoelaces, or pretended to take a nap. Their desire wasn’t defeated; it was merely forgotten.

 
