Thinking in Bets


by Annie Duke


  Haidt, along with Philip Tetlock and four others (social psychologists José Duarte, Jarret Crawford, and Lee Jussim, and sociologist Charlotta Stern) founded an organization called Heterodox Academy, to fight this drift toward homogeneity of thought in science and academics as a whole. In 2015, they published their findings in the journal Behavioral and Brain Sciences (BBS), along with thirty-three pieces of open peer commentary. The BBS paper explained and documented the political imbalance in social psychology, how it reduces the quality of science, and what can be done to improve the situation.

  Social psychology is particularly vulnerable to the effects of political imbalance. Social psychologists are researching many of the hot-button issues dividing the political Left and Right: racism, sexism, stereotypes, and responses to power and authority. When that research comes from a community composed almost entirely of liberal-leaning scientists, its quality and impact can suffer.

  The authors identified instances in which political values became “embedded into research questions in ways that make some constructs unobservable and unmeasurable, thereby invalidating attempts at hypothesis testing.” This occurred in several experiments involving attitudes on environmental issues and attempts to link ideology to unethical behavior. They also identified the risk of researchers concentrating on topics that validated their shared narrative and avoiding topics that contested that narrative, such as stereotype accuracy and the scope and direction of prejudice. Finally, they pointed to the obvious legitimacy problem when research characterizing conservatives as dogmatic and intolerant is done by a discipline that leans liberal by more than 10 to 1.

  First, the Heterodox Academy effort shows that there is a natural drift toward homogeneity and confirmatory thought. We all experience this gravitation toward people who think like we do. Scientists, overwhelmingly trained and chartered toward truthseeking, aren’t immune. As the authors of the BBS paper recognized, “Even research communities of highly intelligent and well-meaning individuals can fall prey to confirmation bias, as IQ is positively correlated with the number of reasons people find to support their own side in an argument.” That’s how robust these biases are. We see that even judges and scientists succumb to them. We shouldn’t feel bad, whatever our situation, about admitting that we also need help.

  Second, groups with diverse viewpoints are the best protection against confirmatory thought. Peer review, the gold standard that epitomizes the open-mindedness and hypothesis testing of the scientific method, “offers much less protection against error when the community of peers is politically homogeneous.” In other words, the opinions of group members aren’t much help if it is a group of clones. Experimental studies cited in the BBS paper found that confirmation bias led reviewers “to work extra hard to find flaws with papers whose conclusions they dislike, and to be more permissive about methodological issues when they endorse the conclusions.” The authors of the BBS paper concluded that “[n]obody has found a way to eradicate confirmation bias in individuals, but we can diversify the field to the point where individual viewpoint biases begin to cancel out each other.”

  The BBS paper, and the continuing work of Heterodox Academy, includes specific recommendations geared toward encouraging diversity and dissenting opinions. I encourage you to read the specific recommendations, which include things like a stated antidiscrimination policy (against opposing viewpoints), developing ways to encourage people with contrary viewpoints to join the group and engage in the process, and surveying to gauge the actual heterogeneity or homogeneity of opinion in the group. These are exactly the kinds of things we would do well to adopt (and, where necessary, adapt) for groups in our personal lives and in the workplace.

  Even among those who are committed to truthseeking, judges and academics, we can see how strong the tendency is to seek out confirmation of our beliefs. If you have any doubt this is true for all of us, put this book down for a moment and check your Twitter feed for whom you follow. It’s a pretty safe bet that the bulk of them are ideologically aligned with you. If that’s the case, start following some people from the other side of the aisle.

  Wanna bet (on science)?

  If thinking in bets helps us de-bias, couldn’t we apply it to help solve the Heterodox Academy problem? One might guess that scientists would assess results more accurately by betting on the likelihood those results would replicate than through traditional peer review, which can be vulnerable to viewpoint bias. Especially in an anonymous betting market, confirming the strength of your pre-existing ideology or betting solely on the basis that replication of a study confirms your own work or beliefs counts for nothing. The way a scientist would be “right” in such a betting market is by using their skill in a superior way to make the most objective bets on whether results would or would not replicate. Researchers who knew in advance their work would be subject to a market test would also face an additional form of accountability that would likely modulate their reporting of results.

  At least one study has found that, yes, a betting market where scientists wager on the likelihood of experimental results replicating was more accurate than expert opinion alone. In psychology, there has been a controversy over the last decade about a potentially large number of published studies whose results subsequent researchers could not replicate. The Reproducibility Project: Psychology has been working on replicating studies from top psychology journals. Anna Dreber, a behavioral economist at the Stockholm School of Economics, and several colleagues set up a betting market based on these replication attempts. They recruited a bunch of experts in the relevant fields and asked their opinions on the likelihood the Reproducibility Project would replicate the results of forty-four studies. They then gave those experts money to bet on each study’s replication in a prediction market.

  Experts engaging in traditional peer review, providing their opinion on whether an experimental result would replicate, were right 58% of the time. A betting market in which the traders were the exact same experts and those experts had money on the line predicted correctly 71% of the time.
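  To see how the two ways of scoring experts differ mechanically, here is a minimal sketch in Python. The numbers are invented for illustration (they are not the study’s data), and treating any probability above 0.5 as a “will replicate” call is an assumption of the sketch, not a description of the researchers’ actual method:

```python
# Hypothetical illustration, not the actual Dreber et al. data: each study gets
# a peer-survey probability of replication, a final prediction-market price
# (also read as a probability), and the actual replication outcome.
studies = [
    # (survey_probability, market_price, replicated)
    (0.70, 0.85, True),
    (0.60, 0.40, False),
    (0.55, 0.75, True),
    (0.65, 0.30, False),
]

def accuracy(probabilities, outcomes, threshold=0.5):
    """Turn each probability into a yes/no call and score it against the outcome."""
    calls = [p >= threshold for p in probabilities]
    correct = sum(call == outcome for call, outcome in zip(calls, outcomes))
    return correct / len(outcomes)

survey_probs = [s[0] for s in studies]
market_prices = [s[1] for s in studies]
outcomes = [s[2] for s in studies]

print(f"Survey accuracy: {accuracy(survey_probs, outcomes):.0%}")   # 50% on this toy data
print(f"Market accuracy: {accuracy(market_prices, outcomes):.0%}")  # 100% on this toy data
```

  On this toy data the market prices call every outcome correctly while the survey opinions go two for four; the real study’s figures are, of course, the 58 percent and 71 percent reported above.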

  A lot of people were surprised to learn that the expert opinion expressed as a bet was more accurate than expert opinion expressed through peer review, since peer review is considered a rock-solid foundation of the scientific method. Of course, this result shouldn’t be surprising to readers of this book. We know that scientists are dedicated to truthseeking and take peer review seriously. Arguably, there is already an implied betting element in the scientific process, in that researchers and peer reviewers have a reputational stake in the quality of their review. But we know that scientists, like judges—and like us—are human and subject to these patterns of confirmatory thought. Making the risk explicit rather than implicit refocuses us all to be more objective.

  A growing number of businesses are, in fact, implementing betting markets to address the difficulty of getting and encouraging contrary opinions. Companies that have implemented prediction markets to test decisions include Google, Microsoft, General Electric, Eli Lilly, Pfizer, and Siemens. People are more willing to offer their opinion when the goal is to win a bet rather than get along with people in a room.

  Accuracy, accountability, and diversity wrapped into a group’s charter all contribute to better decision-making, especially if the group promotes thinking in bets. Now that we understand the elements of a good charter, we move on to the rules of engagement for a productive decision group, how to most effectively communicate with one another. A pioneering sociologist actually designed a set of truthseeking norms for a group (scientists) that form a pretty good blueprint for engagement. I don’t know if he was a bettor, but he was influenced by something very relevant to thinking about bias, rationality, and the potential gulf between perception and reality: he was a magician.

  CHAPTER 5

  Dissent to Win

  CUDOS to a magician

  Meyer R. Schkolnick was born on the Fourth of July, 1910, in South Philadelphia. He performed magic at birthday parties as a teenager and considered a career as a performer. He adopted the performing name “Robert Merlin.” Then a friend convinced him that a teen magician naming himself after Merlin was too on the nose, so he performed as Robert Merton. When Robert K. Merton (to distinguish him from his son, economist and Nobel laureate Robert C. Merton) died in 2003, the New York Times called him “one of the most influential sociologists of the 20th century.”

  The founders of Heterodox Academy, in the BBS paper, specifically recognized Merton’s 1942 and 1973 papers, in which he established norms for the scientific community known by the acronym CUDOS: “An ideologically balanced science that routinely resorted to adversarial collaborations to resolve empirical disputes would bear a striking resemblance to Robert Merton’s ideal-type model of a self-correcting epistemic community, one organized around the norms of CUDOS.” Per the BBS paper, CUDOS stands for

  Communism (data belong to the group),

  Universalism (apply uniform standards to claims and evidence, regardless of where they came from),

  Disinterestedness (vigilance against potential conflicts that can influence the group’s evaluation), and

  Organized Skepticism (discussion among the group to encourage engagement and dissent).

  If you want to pick a role model for designing a group’s practical rules of engagement, you can’t do better than Merton. To start, he coined the phrase “role model,” along with “self-fulfilling prophecy,” “reference group,” “unintended consequences,” and “focus group.” He founded the sociology of science and was the first sociologist awarded the National Medal of Science.

  Merton began his academic career in the 1930s, studying the history of institutional influences on the scientific community. To him, it was a story of many periods of scientific advancement spurred on by geopolitical influences, but also periods of struggle to maintain independence from those influences. His life spanned both world wars and the Cold War, during which he studied and witnessed nationalist movements in which people “arrayed their political selves in the garb of scientists,” explicitly evaluating scientific knowledge based on political and national affiliations.

  In 1942, Merton wrote about the normative structure of science. He tinkered with the paper over the next thirty-one years, publishing the final version as part of a book in 1973. This twelve-page paper is an excellent manual for developing rules of engagement for any truthseeking group. I recognized its application to my poker group, as well as to the professional and workplace groups I’ve encountered in speaking and consulting. Each element of CUDOS—communism, universalism, disinterestedness, and organized skepticism—can be broadly applied and adapted to push a group toward objectivity. When there is a drift toward confirmation and away from exploring accuracy, it’s likely the result of the failure to nurture one of Merton’s norms. Not surprisingly, Merton’s paper would make an excellent career guide for anyone seeking to be a profitable bettor, or a profitable decision-maker period.

  Mertonian communism: more is more

  The Mertonian norm of communism (obviously, not the political system) refers to the communal ownership of data within groups. Merton argued that, in academics, an individual researcher’s data must eventually be shared with the scientific community at large for knowledge to advance. “Secrecy is the antithesis of this norm; full and open communication its enactment.” In science, this means that the community has an agreement that research results cannot properly be reviewed without access to the data and a detailed description of the experimental design and methods. Researchers are entitled to keep data private until published but once they accomplish that, they should throw the doors open to give the community every opportunity to make a proper assessment. Any attempt at accuracy is bound to fall short if the truthseeking group has only limited access to potentially pertinent information. Without all the facts, accuracy suffers.

  This ideal of scientific sharing was similarly described by physicist Richard Feynman in a 1974 lecture as “a kind of utter honesty—a kind of leaning over backwards. For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results . . .”

  It is unrealistic to think we can perfectly achieve Feynman’s ideal; even scientists struggle with it. Within our own decision pod, we should strive to abide by the rule that “more is more.” Get all the information out there. Indulge the broadest definition of what could conceivably be relevant. Reward the process of pulling the skeletons of our own reasoning out of the closet. As a rule of thumb, if we have an urge to leave out a detail because it makes us uncomfortable or requires even more clarification to explain away, those are exactly the details we must share. The mere fact of our hesitation and discomfort is a signal that such information may be critical to providing a complete and balanced account. Likewise, as members of a group evaluating a decision, we should take such hesitation as a signal to explore further.

  To the extent we regard self-governance in the United States as a truthseeking experiment, we have established that openness in the sharing of information is a cornerstone of making and accounting for decisions by the government. The free-press and free-speech guarantees of the Constitution recognize the importance of self-expression, but they also exist because we need mechanisms to assure that information makes it to the public. The government serves the people, so the people own the data and have a right to have the data shared with them. Statutes like the Freedom of Information Act have the same purpose. Without free access to information, it is impossible to make reasoned assessments of our government.

  Sharing data and information, like the other elements of a truthseeking charter, is done by agreement. Academics agree to share results. The government shares information by agreement with the people. Without an agreement, we can’t and shouldn’t compel others to share information they don’t want to share. We all have a right of privacy. Companies and other entities have rights to trade secrets and to protect their intellectual property. But within our group, an agreement to share details pertinent to assessing the quality of a decision is part of a productive truthseeking charter.

  If the group is discussing a decision and it doesn’t have all the details, it might be because the person providing them doesn’t realize the relevance of some of the data. Or it could mean the person telling the story has a bias toward encouraging a certain narrative that they likely aren’t even aware of. After all, as Jonathan Haidt points out, we are all our own best PR agents, spinning a narrative that shines the most flattering light on us.

  We have all experienced situations where we get two accounts of the same event, but the versions are dramatically different because they are informed by different facts and perspectives. This is known as the Rashomon Effect, named for the 1950 cinematic classic Rashomon, directed by Akira Kurosawa. The central element of the otherwise simple plot was how incompleteness is a tool for bias. In the film, four people give separate, drastically different accounts of a scene they all observed, the seduction (or rape) of a woman by a bandit, the bandit’s duel with her husband (if there was a duel), and the husband’s death (from losing the duel, murder, or suicide).

  Even without conflicting versions, the Rashomon Effect reminds us that we can’t assume one version of a story is accurate or complete. We can’t count on someone else to provide the other side of the story, or any individual’s version to provide a full and objective accounting of all the relevant information. That’s why, within a decision group, it is helpful to commit to this Mertonian norm on both sides of the discussion. When presenting a decision for discussion, we should be mindful of details we might be omitting and be extra-safe by adding anything that could possibly be relevant. On the evaluation side, we must query each other to extract those details when necessary.

  My consultation with the CEO who traced his company’s problems to firing the president demonstrated the value of a commitment to data sharing. After he described what happened, I requested a lot more information. As he got into the details of the hiring process for that executive and the approaches to dealing with the president’s deficiencies on the job, those details led to further questions about those decisions, which, in turn, led to more details being shared. He was identifying what he thought was a bad decision, justified by his initial description of the situation. After we got every detail out of all the dimensions of the decision, we reached a different conclusion: the decision to fire the president had been quite reasonable strategically. It just happened to turn out badly.

  Be a data sharer. That’s what experts do. In fact, that’s one of the reasons experts become experts. They understand that sharing data is the best way to move toward accuracy because it extracts the highest-fidelity insight from your listeners.

  You should hear the amount of detail a top poker player puts into the description of a hand when they are workshopping that hand with another player. A layperson would think, “That seems like a lot of irrelevant, nitpicky detail. Why are they saying all that stuff?” When two expert poker players get together to trade views and opinions about hands, the detail is extraordinary: the positions of everyone acting in the hand; the size of the bets and the size of the pot after each action; what they know about how their opponent(s) has played when they have encountered them in the past; how they were playing in the particular game they were in; how they were playing in the most recent hands in that game (particularly whether they were winning or losing recently); how many chips each person had throughout the hand; what their opponents know about them, etc., etc. What the experts recognize is that the more detail you provide, the better the assessment of decision quality you get. And because the same types of details are always expected, expert players essentially work from a template, so there is less opportunity to convey only the information that might lead the listener down a garden path to a desired conclusion.
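  To illustrate the “template” idea, here is a hypothetical sketch in Python of a hand-report structure whose fields mirror the categories of detail listed above. The field names and types are invented for illustration, not drawn from any actual poker tool; the point is only that when the same slots must be filled every time, omissions become visible instead of silently shaping the narrative:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical "hand workshop" template: the specific fields are invented for
# illustration, but they mirror the categories of detail described above.
@dataclass
class HandReport:
    positions: List[str]                 # position of everyone acting in the hand
    bet_sizes: List[float]               # size of each bet as the hand played out
    pot_after_each_action: List[float]   # size of the pot after each action
    stack_sizes: List[float]             # chips each player had throughout the hand
    opponent_history: str                # how opponents have played in past encounters
    table_dynamics: str                  # how they were playing in this particular game
    recent_results: str                  # whether opponents were winning or losing recently
    own_image: str                       # what the opponents know about the storyteller
    omitted_details: List[str] = field(default_factory=list)  # anything the teller couldn't recall

    def is_complete(self) -> bool:
        """A report with gaps flags them rather than quietly leaving them out."""
        return not self.omitted_details
```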

 
