The Culture Code: The Secrets of Highly Successful Groups

by Daniel Coyle


  When you watch highly cohesive groups in action, you will see many moments of fluid, trusting cooperation. These moments often happen when the group is confronted with a tough obstacle—for example, a SEAL team navigating a training course, or an improv comedy team navigating a sketch. Without communication or planning, the group starts to move and think as one, finding its way through the obstacle in the same way that a school of fish finds its way through a coral reef, as if they are all wired into the same brain. It’s beautiful.

  If you look closely, however, you will also notice something else. Sprinkled amid the smoothness and fluency are moments that don’t feel so beautiful. These moments are clunky, awkward, and full of hard questions. They contain pulses of profound tension, as people deal with hard feedback and struggle together to figure out what is going on. What’s more, these moments don’t happen by accident. They happen by design.

  At Pixar, those uncomfortable moments happen in what they call BrainTrust meetings. The BrainTrust is Pixar’s method of assessing and improving its movies during their development. (Each film is BrainTrusted about half a dozen times, at regular intervals.) The meeting brings the film’s director together with a handful of the studio’s veteran directors and producers, all of whom watch the latest version of the movie and offer their candid opinion. From a distance, the BrainTrust appears to be a routine huddle. Up close, it’s more like a painful medical procedure—specifically, a dissection that spotlights, names, and analyzes the film’s flaws in breathtaking detail.

  A BrainTrust meeting is not fun. It is where directors are told that their characters lack heart, their storylines are confusing, and their jokes fall flat. But it’s also where those movies get better. “The BrainTrust is the most important thing we do by far,” said Pixar president Ed Catmull. “It depends on completely candid feedback.”

  In rhythm and tone, BrainTrust meetings resemble the atmosphere inside the cockpit of Flight 232. They consist of a steady stream of here’s-the-bad-news notifications accompanied by a few big, scary questions—Does anybody know how to land this thing? Participants spend most of the time in a state of brow-furrowing struggle as they grapple with the fact that the movie, at the moment, isn’t working. “All our movies suck at first,” Catmull says. “The BrainTrust is where we figure out why they suck, and it’s also where they start to not suck.”

  For the Navy SEALs, such uncomfortable, candor-filled moments happen in the After-Action Review, or AAR. The AAR is a gathering that takes place immediately after each mission or training session: Team members put down their weapons, grab a snack and water, and start talking. As in BrainTrusts, the team members name and analyze problems and face uncomfortable questions head-on: Where did we fail? What did each of us do, and why did we do it? What will we do differently next time? AARs can be raw, painful, and filled with pulses of emotion and uncertainty.

  “They’re not real fun,” said Christopher Baldwin, a former operator with SEAL Team Six. “They can get tense at times. I’ve never seen people fistfight, but it can get close. Still, it’s probably the most crucial thing we do together, aside from the missions themselves, because that’s where we figure out what really happened and how to get better.”

  While the SEALs and Pixar generate these moments in a structured way, other groups use looser, more organic methods. At Gramercy Tavern, a New York restaurant whose staff ranks as the culinary world’s version of a SEAL team, I watched as Whitney Macdonald was minutes away from a moment she had long anticipated: her first-ever shift as a front waiter. The lunch crowd was lining up on the sidewalk, and she was excited and a bit nervous.

  Assistant general manager Scott Reinhardt approached her—for a pep talk, I presumed.

  I was wrong. “Okay,” Reinhardt said, fixing Whitney with a bright, penetrating gaze. “The one thing we know about today is that it’s not going to go perfectly. I mean, it could, but odds are really, really, really high that it won’t.”

  A flicker of surprise traveled across Whitney’s face. She had trained for six months for this day, learning every painstaking detail of the job, hoping to perform well. She had worked as a back server, taken notes, sat in on lineup meetings, and shadowed shift after shift. Now she was being told in no uncertain terms that she was destined to screw up.

  “So here’s how we’ll know if you had a good day,” Reinhardt continued. “If you ask for help ten times, then we’ll know it was good. If you try to do it all alone…” His voice trailed off, the implication clear—It will be a catastrophe.

  On the face of it, these awkward moments at Pixar, the SEALs, and Gramercy Tavern don’t make sense. These groups seem to intentionally create awkward, painful interactions that look like the opposite of smooth cooperation. The fascinating thing, however, is that these awkward, painful interactions generate the highly cohesive, trusting behavior necessary for smooth cooperation. Let’s look deeper into how this happens.

  Imagine that you and a stranger ask each other the following two sets of questions.

  SET A

  • What was the best gift you ever received and why?

  • Describe the last pet you owned.

  • Where did you go to high school? What was your high school like?

  • Who is your favorite actor or actress?

  SET B

  • If a crystal ball could tell you the truth about yourself, your life, the future, or anything else, what would you want to know?

  • Is there something that you’ve dreamed of doing for a long time? Why haven’t you done it?

  • What is the greatest accomplishment of your life?

  • When did you last sing to yourself? To someone else?

  At first glance, the two sets of questions have a lot in common. Both ask you to disclose personal information, to tell stories, to share. However, if you were to do this experiment (its full form contains thirty-six questions), you would notice two differences. The first is that as you went through Set B, you would feel a bit apprehensive. Your heart rate would increase. You would be more uncomfortable. You would blush, hesitate, and perhaps laugh out of nervousness. (It is not easy, after all, to tell a stranger something important you’ve dreamed of doing all your life.)

  The second difference is that Set B would make you and the stranger feel closer to each other—around 24 percent closer than Set A, according to experimenters.* While Set A allows you to stay in your comfort zone, Set B generates confession, discomfort, and authenticity that break down barriers between people and tip them into a deeper connection. While Set A generates information, Set B generates something more powerful: vulnerability.

  At some level, we intuitively know that vulnerability tends to spark cooperation and trust. But we may not realize how powerfully and reliably this process works, particularly when it comes to group interactions. So it’s useful to meet Dr. Jeff Polzer, a professor of organizational behavior at Harvard who has spent a large chunk of his career examining how small, seemingly insignificant social exchanges can create cascade effects in groups.

  “People tend to think of vulnerability in a touchy-feely way, but that’s not what’s happening,” Polzer says. “It’s about sending a really clear signal that you have weaknesses, that you could use help. And if that behavior becomes a model for others, then you can set the insecurities aside and get to work, start to trust each other and help each other. If you never have that vulnerable moment, on the other hand, then people will try to cover up their weaknesses, and every little microtask becomes a place where insecurities manifest themselves.”

  Polzer points out that vulnerability is less about the sender than the receiver. “The second person is the key,” he says. “Do they pick it up and reveal their own weaknesses, or do they cover up and pretend they don’t have any? It makes a huge difference in the outcome.” Polzer has become skilled at spotting the moment when the signal travels through the group. “You can actually see the people relax and connect and start to trust. The group picks up the idea and says, ‘Okay, this is the mode we’re going to be in,’ and it starts behaving along those lines, according to the norm that it’s okay to admit weakness and help each other.”

  The interaction he describes can be called a vulnerability loop. A shared exchange of openness, it’s the most basic building block of cooperation and trust. Vulnerability loops seem swift and spontaneous from a distance, but when you look closely, they all follow the same discrete steps:

  1. Person A sends a signal of vulnerability.

  2. Person B detects this signal.

  3. Person B responds by signaling their own vulnerability.

  4. Person A detects this signal.

  5. A norm is established; closeness and trust increase.

  Consider the situation of Al Haynes on Flight 232. He was the captain of the plane, the source of power and authority to whom everyone looked for reassurance and direction. When the explosion knocked out the controls, his first instinct was to play that role—to grab the yoke and say, “I got it.” (Later he would call those three words “the dumbest thing I’ve ever said in my life.”) Had he continued interacting with his crew in this way, Flight 232 would have likely crashed. But he did not continue on that path. He was able to do something even more difficult: to send a signal of vulnerability, to communicate to his crew that he needed them. It took just four words:

  Anybody have any ideas?

  Likewise, when pilot trainer Denny Fitch entered the cockpit, he could have attempted to issue commands and take charge—after all, he knew as much about emergency procedures as Haynes did, if not more. Instead, he did the opposite: He explicitly put himself beneath Haynes and the crew, signaling his role as helper:

  Tell me what you want, and I’ll help you.

  Each of these small signals took only a few seconds to deliver. But they were vital, because they shifted the dynamic, allowing two people who had been separate to function as one.

  It’s useful to zoom in on this shift. As it happens, scientists have designed an experiment to do exactly that, called the Give-Some Game. It works like this: You and another person, whom you’ve never met, each get four tokens. Each token is worth a dollar if you keep it but two dollars if you give it to the other person. The game consists of one decision: How many tokens do you give the other person?
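
  To make the stakes concrete, here is a minimal sketch of the game’s payoff arithmetic in Python, assuming only the rules just described (keep a token and it’s worth one dollar; give it away and it becomes two dollars for the other player). The function and player names are illustrative, not from the study itself:

  # Payoff arithmetic for the Give-Some Game as described above.
  # Kept tokens are worth $1 each; given tokens are worth $2 to the receiver.
  def give_some_payoffs(a_gives: int, b_gives: int, tokens: int = 4):
      """Return (player A's dollars, player B's dollars)."""
      a_payoff = (tokens - a_gives) * 1 + b_gives * 2
      b_payoff = (tokens - b_gives) * 1 + a_gives * 2
      return a_payoff, b_payoff

  print(give_some_payoffs(0, 0))  # (4, 4): both hold everything back
  print(give_some_payoffs(4, 4))  # (8, 8): mutual giving doubles the pie
  print(give_some_payoffs(4, 0))  # (0, 12): full vulnerability, exploited

  Giving everything is at once the most cooperative and the riskiest move; holding everything back guarantees four dollars but caps what both sides can earn.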

  This is not a simple decision. If you give all, you might end up with nothing. If you’re like most people, you end up giving an average of 2.5 tokens to a stranger—slightly biased toward cooperation. What gets interesting, however, is how people tend to behave when their vulnerability levels are increased a few notches.

  In one experiment, subjects were asked to deliver a short presentation to a roomful of people who had been instructed by experimenters to remain stone-faced and silent. They played the Give-Some Game afterward. You might imagine that the subjects who endured this difficult experience would respond by becoming less cooperative, but the opposite turned out to be true: the speakers’ cooperation levels increased by 50 percent. That moment of vulnerability did not reduce willingness to cooperate but boosted it. The inverse was also true: Increasing people’s sense of power—that is, tweaking a situation to make them feel more invulnerable—dramatically diminished their willingness to cooperate.

  The link between vulnerability and cooperation applies not only to individuals but also to groups. In an experiment by David DeSteno of Northeastern University, participants were asked to perform a long, tedious task on a computer that was rigged to crash just as they were completing it. Then one of their fellow participants (who was actually a confederate of the researchers) would walk over, notice the problem, and generously spend time “fixing” the computer, thereby rescuing the participant from having to reload the data. Afterward the participants played the Give-Some Game. As you might expect, the subjects were significantly more cooperative with the person who fixed their computer. But here’s the thing: They were equally cooperative with complete strangers. In other words, the feelings of trust and closeness sparked by the vulnerability loop were transferred in full strength to someone who simply happened to be in the room. The vulnerability loop, in other words, is contagious.

  “We feel like trust is stable, but every single moment your brain is tracking your environment, and running a calculation whether you can trust the people around you and bond with them,” says DeSteno. “Trust comes down to context. And what drives it is the sense that you’re vulnerable, that you need others and can’t do it on your own.”

  Normally, we think about trust and vulnerability the way we think about standing on solid ground and leaping into the unknown: first we build trust, then we leap. But science is showing us that we’ve got it backward. Vulnerability doesn’t come after trust—it precedes it. Leaping into the unknown, when done alongside others, causes the solid ground of trust to materialize beneath our feet.

  —

  Question: How would you go about finding ten large red balloons deployed at secret locations throughout the United States?

  This is not an easy question. It was dreamed up by scientists from the Defense Advanced Research Projects Agency (DARPA), a division of the U.S. Department of Defense tasked with helping America’s military prepare for future technological challenges. The Red Balloon Challenge, which DARPA announced on October 29, 2009, was designed to mimic real-life dilemmas like terrorism and disease control, and offered a $40,000 prize to the first group to accurately locate all ten balloons. The immensity of the task—ten balloons in 3.1 million square miles—led some to wonder if DARPA had gone too far. A senior analyst for the National Geospatial-Intelligence Agency declared it “impossible.”

  Within days of the announcement, hundreds of groups signed up, representing a diverse cross-section of America’s brightest minds: hackers, social media entrepreneurs, tech companies, and research universities. The vast majority took a logical approach to the problem: They built tools to attack it. They constructed search engines to analyze satellite photography, tapped into existing social and business networks, launched publicity campaigns, built open-source intelligence software, and nurtured communities of searchers on social media.

  The team from MIT Media Lab, on the other hand, didn’t do any of that stuff because they didn’t find out about the challenge until four days before launch. A group of students, led by postdoctoral fellow Riley Crane, realized they had no time to assemble a team or create technology or do anything that resembled an organized approach. So instead they took a different tack. They built a website that consisted of the following invitation:

  When you sign up to join the MIT Red Balloon Challenge Team, you’ll be provided with a personalized invitation link, like http://balloon.mit.edu/yournamehere

  Have all your friends sign up using your personalized invitation. If anyone you invite, or anyone they invite, or anyone they invite (…and so on) wins money, so will you!

  We’re giving $2000 per balloon to the first person to send us the correct coordinates, but that’s not all—we’re also giving $1000 to the person who invited them. Then we’re giving $500 [to] whoever invited the inviter, and $250 to whoever invited them, and so on…(see how it works).
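
  Taken at face value, the quoted rule halves the reward at each step up the chain of inviters, which means the payout for any single balloon can never exceed $4,000, no matter how long the chain grows. A quick sketch of that arithmetic (the function name here is illustrative):

  # The recursive payout rule from the MIT invitation: $2,000 to the
  # finder, then half as much at each step up the chain of inviters.
  def payouts(chain_length: int, finder_reward: float = 2000.0):
      """Rewards for the finder and each successive inviter."""
      return [finder_reward / 2**i for i in range(chain_length)]

  chain = payouts(5)
  print(chain)       # [2000.0, 1000.0, 500.0, 250.0, 125.0]
  print(sum(chain))  # 3875.0; the geometric series stays below $4,000

  Because the series converges to $4,000 per balloon, ten balloons could cost at most the $40,000 prize, while everyone along a winning chain still held a genuine stake in spreading the word.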

  Compared to the sophisticated tools and technology deployed by other groups, the MIT team’s approach was laughably primitive. They had no organizational structure or strategy or software, not even a map of the United States to help locate the balloons. This wasn’t a well-equipped team; it was closer to a hastily scrawled plea shoved into a bottle and lobbed into the ocean of the Internet: “If you find this, please help!”

  On the morning of December 3, two days before the balloon launch, MIT switched on the website. For a few hours, nothing happened. Then, at 3:42 P.M., people began to join. Connections first bloomed out of Boston, then exploded, radiating to Chicago, Los Angeles, San Francisco, Minneapolis, Denver, Texas, and far beyond, including Europe. Viewed in time lapse, the spread of connections resembled the spontaneous assembly of a gigantic nervous system, with hundreds of new people joining the effort with each passing hour.

  At precisely 10:00 A.M. Eastern on December 5, DARPA launched the balloons in secret locations ranging from Union Square in downtown San Francisco to a baseball field outside Houston, Texas, to a woodland park near Christiana, Delaware. Thousands of teams swung into action, and the organizers settled in for a long wait: They estimated it would take up to a week for a team to accurately locate all ten balloons.

  Eight hours, fifty-two minutes, and forty-one seconds later, it was over. The MIT team had found all ten balloons and had done so with the help of 4,665 people—or as DARPA organizer Peter Lee put it, “a huge amount of participation from shockingly little money.” Their primitive, last-minute, message-in-a-bottle method had defeated better-equipped attempts, creating a fast, deep wave of motivated teamwork and cooperation.

  The reason was simple. All the other teams used a logical, incentive-based message: Join us on this project, and you might win money. This signal sounds motivating, but it doesn’t really encourage cooperation—in fact, it does the opposite. If you tell others about the search, you are slightly reducing your chances of winning prize money. (After all, if others find the balloon and you don’t, they’ll receive the entire reward.) These teams were asking for participants’ vulnerability, while remaining invulnerable themselves.

  The MIT team, on the other hand, signaled its own vulnerability by promising that everyone connected to finding a red balloon would share in the reward. Then it provided people with the opportunity to create networks of vulnerability by reaching out to their friends, then asking them to reach out to their friends. The team did not dictate what participants should do or how they should do it, or give them specific tasks to complete or technology to use. It simply gave out the link and let people do with it what they pleased. And what they pleased, it turned out, was to connect with lots of other people. Each invitation created another vulnerability loop that drove cooperation—Hey, I’m doing this crazy balloon-hunting project and I need your help.

 
