Super Thinking


by Gabriel Weinberg


  If your rich grandfather leaves his fortune to all his kids equally, that would probably be perceived as fair from a distributive justice perspective. However, if one of the kids had been taking care of your grandfather for the last twenty years, then this distribution no longer seems fair from a procedural justice perspective. Many current political debates around topics such as income inequality and affirmative action revolve around these different formulations of justice.

  Sometimes this distinction is framed as fair share versus fair play. For example, in the U.S., K–12 public education is freely available to all. Because of this educational access, some conclude that everyone has an equal opportunity to become successful. Others believe that the quality of public educational opportunities differs widely depending on where you live, and that education itself doesn’t grant access to the best advancement opportunities, which often come through family and social connections. From this latter perspective, fair play doesn’t really exist, and so corrections are needed to achieve a fairer share, such as affirmative action or similar policies. As Martin Luther King Jr. put it in a May 8, 1967, interview with NBC News: “It’s all right to tell a man to lift himself by his own bootstraps, but it is a cruel jest to say to a bootless man that he ought to lift himself by his own bootstraps.”

  In any case, perceived unfairness triggers strong emotional reactions. Knowing that, people will try to influence you by framing situations from a fairness perspective. In fact, many arguments try to sway you from rational decision making by pulling at your emotions, including fear, hope, guilt, pride, anger, sadness, and disgust. Influence by manipulation of emotions, whether created by perceived injustice, violation of social norms, or otherwise, is called appeal to emotion.

  Fear is a particularly strong influencer, and it has its own named model associated with it, FUD, which stands for fear, uncertainty, and doubt. FUD is commonly used in marketing (“Our competitor’s product is dangerous”), political speeches (“We could suffer dire consequences if this law is passed”), religion (eternal damnation), etc.

  A related practice is the use of a straw man, where instead of addressing your argument directly, an opponent misrepresents (frames) your argument by associating it with something else (the straw man) and tries to make the argument about that instead. For example, suppose you ask your kid to stop playing video games and do his homework, and he replies that you’re too strict and never let him do anything. He has tried to move the topic of conversation from doing homework to your general approach to parenting.

  In complex subjects where there are a multitude of problems and potential solutions (e.g., climate change, public policy, etc.), it is easy to have two people talk past each other when they both set up straw men rather than address each other’s points. In these settings it helps to get on the same page and clarify exactly what is under debate. However, sometimes one side (or both) may be more interested in persuading bystanders than in resolving the debate. In these situations, they could be deliberately putting up a straw man, which can unfortunately be an effective way to frame the argument to their advantage in terms of bystander influence.

  Many negative political ads and statements use straw men to take a vote or action out of context. You may be familiar with the National Football League (NFL) controversy regarding the fact that some players kneeled during the national anthem in protest of police brutality against African Americans. Some politicians responded by criticizing the action as disrespectful to the military. Shifting the focus to how the players were protesting drew attention away from the underlying issue of why they were protesting.

  Another related mental model is ad hominem (Latin for “to the person”), where the person making the argument is attacked without addressing the central point they made. “Who are you to make this point? You’re not an expert on this topic. You’re just an amateur.” It’s essentially name-calling and often involves lobbing much more incendiary labels at the other side. Political discourse in recent years in the U.S. is unfortunately littered with this model, and the usual names leveled are so undignified that we don’t want to include them in our book.

  This model is the flip side of the authority model we examined in the last section. Instead of relying on authority to gain influence, here another’s authority is being attacked so that they will lack influence. Again, like straw man and appeal to emotion, these models attempt to frame a situation away from an important issue and toward another that is easier to criticize.

  When you are in a conflict, you should consider how its framing is shaping the perception of it by you and others. Take the prisoner’s dilemma. The prosecutors have chosen to frame the situation competitively because, for them, the Nash equilibrium with both criminals getting five years is actually the preferred outcome. However, if the criminals can instead frame the situation cooperatively—stick together at all costs—they can vastly improve their outcome.
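  A quick way to see why the prosecutors’ framing works is to lay out the payoff matrix and check it directly. The following is a minimal sketch in Python using the standard illustrative sentences (one year each for mutual silence, five years each for mutual confession, and zero versus ten years when only one confesses); these specific numbers are assumptions for illustration, not figures from this section.

from itertools import product

# years[(a, b)] = (sentence for A, sentence for B); lower is better
# "defect" means confessing to the prosecutors
years = {
    ("silent", "silent"): (1, 1),
    ("silent", "defect"): (10, 0),
    ("defect", "silent"): (0, 10),
    ("defect", "defect"): (5, 5),
}

def is_nash(a, b):
    # Nash equilibrium: neither player can shorten their own sentence
    # by unilaterally switching strategies.
    a_years, b_years = years[(a, b)]
    a_improves = any(years[(a2, b)][0] < a_years for a2 in ("silent", "defect"))
    b_improves = any(years[(a, b2)][1] < b_years for b2 in ("silent", "defect"))
    return not (a_improves or b_improves)

for a, b in product(("silent", "defect"), repeat=2):
    print(a, b, years[(a, b)], "<- Nash equilibrium" if is_nash(a, b) else "")

Running this confirms that mutual confession, five years each, is the only Nash equilibrium, which is exactly the outcome the prosecutors’ competitive framing steers the criminals toward.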

  WHERE’S THE LINE?

  In Chapter 3, we advised seeking out design patterns that help you more quickly address issues, and watching out for anti-patterns, intuitively attractive yet suboptimal solutions. Influence models like those we’ve been discussing in the past two sections can also be dark patterns when they are used to manipulate you for someone else’s benefit (like at the casino).

  The name comes from websites designed to keep you in the dark by using disguised ads, burying information about hidden costs, or making it really difficult to cancel a subscription or reach support. In short, they use these types of patterns to manipulate and confuse you.

  However, this concept applies to everyday life offline as well. And knowing a few specific dark patterns can be helpful in adversarial situations. You’re probably familiar with the mythical tale of the Trojan horse, a large wooden horse made by the Greeks to win a war against the Trojans. The Greeks couldn’t get into the city of Troy, and so they pretended to sail away, leaving behind this supposed parting gift. What the Trojans didn’t know is that the Greeks also left a small force of soldiers inside the horse. The Trojans brought the horse into the city, and under the cover of night, the Greek soldiers exited the horse and proceeded to destroy Troy and win the war.

  A Trojan horse can refer to anything that persuades you to lower your defenses by seeming harmless or even attractive, like a gift. It often takes the form of a bait and switch, such as a malicious computer program that poses as an innocuous and enticing download (the bait), but instead does something nefarious, like spying on you (the switch).

  A familiar example would be an advertised low price for an item (such as a hotel room) that doesn’t really exist at that price (after “resort fees” or otherwise). Builders similarly attract buyers to new-construction homes with low list prices that correspond to so-called “builder-grade” finishes that no one really wants. They then proceed to show buyers a model home with more expensive finishes—all upgrades—which in aggregate can easily push the bounds of a buyer’s budget. If it sounds too good to be true, it usually is.

  Spectacular examples of dark patterns can be found in business. Enron, a now bankrupt energy company, once built a fake trading floor at its Houston headquarters to trick Wall Street analysts into believing that Enron was trading much more than it actually was. When the analysts came to Houston for Enron’s annual meeting, the Enron executives pretended that there was all this action going on, when in fact it was all a ruse that they had been rehearsing, including having an elaborate array of TVs and computers assembled into a “war room.”

  Theranos, a now bankrupt healthcare company, committed a similar fraud when putting on demonstrations of its “product” for partners, including executives from Walgreens. Theranos machines were put on display, but according to the U.S. Securities and Exchange Commission, the blood samples collected were actually run on outside lab equipment that Theranos purchased through a shell company.

  The Enron and Theranos tactics both exemplify another dark pattern, called a Potemkin village, which is something specifically built to convince people that a situation is better than it actually is. The term is derived from a historically questionable tale of a portable village built to impress Empress Catherine II on her 1787 visit to Crimea. Nevertheless, there are certainly real instances of Potemkin villages, including a village built by North Korea in the 1950s near the DMZ to lure South Korean soldiers to defect, and, terribly, a Nazi-designed concentration camp in World War II fit to show the Red Cross, which actually disguised a way station to Auschwitz.

  In film, The Truman Show depicts a Potemkin village on a massive scale, where the character Truman Burbank (played by Jim Carrey) resides in an entirely fake town filled with actors as part of a reality TV show. A form of this dark pattern can occur online when a website makes it seem like it has more users or content than it actually does in order to get you to participate. For example, the infamous dating site Ashley Madison (which targets people already in relationships) was found to be sending messages from fake female accounts to lure males in.

  The military has employed this model widely, from dummy guns to dummy tanks and even dummy paratroopers. These were used by all sides in World War II and in many other armed conflicts to trick foreign intelligence services. They are also used internally in training exercises. As technology has improved, so have the dummies. Modern dummies can mimic the heat signature of a real tank, even fooling infrared detectors.

  People similarly make homes and businesses seem secure by putting up fake security cameras, having lights in their home on timers, or even putting up signs for a security service they don’t actually use. A related business practice is known as vaporware, where a company announces a product that it actually hasn’t made yet to test demand, gauge industry reaction, or give a competitor pause from participating in the same market.

  In any conflict situation, you should be on the lookout for dark patterns. While many influence models, such as the ones in this section, are commonly thought of as malicious and are therefore easier to look out for (e.g., bait and switch), others from the previous two sections are subtler. Many are considered more innocuous (e.g., scarcity), but they too can all be used to manipulate you. For example, are the common nonprofit uses of reciprocity techniques (free address labels) or social proof (celebrity endorsements) also dark patterns? In one sense, they might lead you to donate more than you would otherwise. However, it may be a good cause and they aren’t tricking you in the same way that a hidden bait-and-switch cost is.

  This sliding scale poses an interesting ethical question, one that any organization in business or politics is often faced with: Should you focus on truth and clarity in your promotional materials? Or should you look to influence models to find language that is more persuasive, perhaps due to its emotional appeal? Do the ends justify the means? Only you can decide where the line is for you.

  THE ONLY WINNING MOVE IS NOT TO PLAY

  Considering a conflict through a game-theory lens helps you identify what you have to gain and what you have to lose. We have just looked at models that increase your chances of a good outcome through influencing other players. Now we will consider the same problem from the inverse (see inverse thinking in Chapter 1) and explore models that decrease your chances of a bad outcome. Often this means finding a way to avoid the conflict altogether.

  At the climax of the classic 1983 movie WarGames, World War III seems imminent. An artificial intelligence (known as Joshua) has been put in charge of the U.S. nuclear launch control system. Thinking he has hacked into his favorite game manufacturer, a teenage hacker (played by Matthew Broderick) unwittingly asks Joshua to play a “game” against him called Global Thermonuclear War, setting off a chain of events that has Joshua attempting to launch a real full-scale nuclear attack against the Soviet Union.

  Through dialogue, the character Professor Falken explains why he created Joshua and this game:

  The whole point was to find a way to practice nuclear war without destroying ourselves. To get the computers to learn from mistakes we couldn’t afford to make. Except, I never could get Joshua to learn the most important lesson. . . . Futility. That there’s a time when you should just give up.

  Professor Falken then draws an analogy to tic-tac-toe, continuing,

  There’s no way to win. The game itself is pointless! But back at the war room, they believe you can win a nuclear war. That there can be “acceptable losses.”

  When all hope seems lost, the teenager recalls this conversation and asks if there is any way to make Joshua play against itself in tic-tac-toe, hoping the computer will learn that any strategy ends in a tie. After learning the futility of playing tic-tac-toe, Joshua proceeds to simulate all the possible strategies for the Global Thermonuclear War game and comes to the same conclusion. He says (in a computer voice):

  A strange game. The only winning move is not to play. How about a nice game of chess?
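  Joshua’s tic-tac-toe lesson is easy to reproduce with a short minimax search. The following sketch is an illustrative implementation (not from the film or this book): it scores every position under perfect play by both sides and reports that the full game, like Global Thermonuclear War in the movie, has no winner.

from functools import lru_cache

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def best_outcome(board, player):
    # +1 if X wins, -1 if O wins, 0 for a draw, assuming perfect play
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if "." not in board:
        return 0
    scores = [best_outcome(board[:i] + player + board[i + 1:],
                           "O" if player == "X" else "X")
              for i, cell in enumerate(board) if cell == "."]
    return max(scores) if player == "X" else min(scores)

print(best_outcome("." * 9, "X"))  # prints 0: perfect play always ends in a draw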

  The reason that there is no winner in Global Thermonuclear War is that both sides have amassed enough weapons to destroy the other side and so any nuclear conflict would quickly escalate to mutually assured destruction (MAD). As a result, neither side has any incentive to use its weapons offensively or to disarm completely, leading to a stable, albeit tense, peace.

  Mutually assured destruction isn’t just a military model. A parallel in business is when companies amass large patent portfolios, but generally don’t use them on one another for fear of escalating lawsuits that could potentially destabilize all the companies involved. Occasionally you see these suits and countersuits, such as the ones between Apple and Qualcomm (over chip patents), Oracle and Google (over Java patents), and Uber and Google (over autonomous vehicle patents), but these companies often have so many patents (sometimes tens of thousands each) that there could be literally hundreds of suits like these if not for MAD.

  There are countless possible destructive outcomes to a conflict besides this arguably most extreme outcome of MAD. Engaging in any direct conflict is dangerous, though, because conflicts are unpredictable and often cause collateral damage (see Chapter 2). For example, drawn-out divorce battles can be harmful to the children. That’s why it makes sense to consider conflict prevention measures like mediation, or, more generally, diplomacy (see win-win in Chapter 4 for some related mental models).

  If diplomacy by itself doesn’t work, though, there is another set of models to turn to, starting with deterrence, or using a threat to prevent (deter) an action by an adversary. Credible mutually assured destruction makes an excellent deterrent. But even one nuclear blast is so undesirable that the mere possession of a nuclear weapon has proven to be a powerful deterrent. For example, North Korea seemingly developed nuclear weapons to secure its survival as a state, despite being an authoritarian dictatorship with a well-documented history of human rights violations. So far, this deterrence strategy is working, alongside the other strategies it pursues, including threats of conventional bombing of South Korea and alignment with China.

  The deterrence model can be appropriate when you want to try to prevent another person or organization from taking an action that would be harmful to you or society at large. In the criminal justice system, punishments may be enacted to try to deter future crime (e.g., three-strikes laws). Government regulations are often designed in part to deter unpleasant future economic or societal outcomes (e.g., deposit insurance deterring bank runs). Businesses also take actions to deter new entrants, for example, by using their scale to price goods so low that new firms cannot profitably compete (e.g., Walmart) or lobbying for regulations that benefit them at the expense of competition (e.g., anti–net neutrality laws).

  The primary challenge of this model, though, is actually finding an effective deterrent. As we discussed in Chapter 2, things don’t always go according to plan. When you want to put a deterrent in place, you must evaluate whether it is truly effective and whether there are any unintended consequences.

  For example, what are effective crime deterrents? Research shows that people are more deterred by the certainty they will be caught and convicted than by the specific punishment they might receive. If there is little chance of getting caught, some simply do not care what the punishment is. Further, most people are not even aware of the specific punishments they might face. Financial fraudster Bernie Madoff thought he was never going to be caught and probably never considered the possibility of a 150-year prison sentence.

  Additionally, there is evidence suggesting that prison time not only fails to reduce repeat offenses, but may actually increase the probability of committing a crime again. The real solution to deterring crime likely lies in addressing the root causes of why people commit specific types of crimes rather than in any particular punishment.

  A tactical approach to deterrence is the carrot-and-stick model, which uses a promise of a reward (the carrot) and at the same time a threat of punishment (the stick) to deter behavior. In our household, we sometimes try to deter fighting between our kids using dessert as the carrot and loss of iPad time as the stick. It’s a form of good cop, bad cop.

  What you don’t want is for the carrot-stick combination to be too weak such that the rational decision is just to ignore the carrot and deal with the stick. Economic sanctions and corporate fines are hotly debated in terms of efficacy because of this, with the latter often being thought of as more of a cost of doing business than a deterrent.
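  A quick back-of-the-envelope check (with hypothetical numbers) shows how a weak stick turns into a mere cost of doing business: if the expected penalty, the chance of being caught multiplied by the fine, is smaller than the gain from misbehaving, then ignoring the deterrent is the rational choice.

def rational_to_violate(gain, fine, p_caught):
    # True if the expected penalty is smaller than the gain from violating
    return gain > p_caught * fine

# Hypothetical example: a $10M gain, a $5M fine, and a 10% chance of being caught
print(rational_to_violate(gain=10e6, fine=5e6, p_caught=0.10))  # True: the stick is too weak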

  One effective example of the carrot-and-stick approach is Operation Ceasefire, an initiative started in Boston that aims to curtail gang-related violence. The stick part of the program focuses on a message to specific repeat perpetrators of violent crime about the certainty of future enforcement, including a promise that any new violence, especially gun violence, will result in an immediate and intense police response. The carrot part of the program is the offer of help to these same individuals, including money, job training, community support, and one-to-one mentoring in a concerted effort to get them to live productive lives. In the U.S., cities that have implemented this strategy, such as Boston, Chicago, Cincinnati, and Indianapolis, have amazingly reduced their gun homicide rates by nearly 25 to 60 percent while assisting only a handful of people.

 
