Present Shock: When Everything Happens Now


by Douglas Rushkoff


  In 2004 Congress authorized the funding of an international cable news channel called Alhurra (“the free one”), headquartered in Virginia but broadcast in Arabic throughout the Arabic-speaking world. Launched in the wake of 9/11 at a cost of over $100 million per year, the US government’s channel was supposed to help forge better relations with viewers in these potentially hostile parts of the world. Despite this massive effort to establish Alhurra as the trusted, balanced news channel for Arabic-speaking countries, the target audiences immediately recognized the channel’s propagandistic purpose and the true culture from which it emanated. Meanwhile, the English-speaking version of Arab station Al Jazeera became an unintentional hit in the United States during the Arab Spring and Occupy Wall Street—simply because it broadcast its continuous coverage live online and showed journalistic competence in the field. (Hundreds of thousands of Americans still watch Al Jazeera, and talk about it, and interact with its broadcasters—and will likely continue to do so until a major American cable news channel figures out how to get around cable carrier contracts limiting net streaming.)

  In a nonfiction, social media space, only reality counts, because only reality is what is happening in the moment. A company or organization’s best available choice is to walk the walk. This means becoming truly competent. If a company has the best and most inspired employees, for example, then that is the place people will turn to when they are looking for advice, a new product, or a job. It is the locus of a culture. Everything begins to connect. And when it does, the org chart begins to matter less than the fractal of fluid associations.

  The employees and customers of the computer-game-engine company Valve are particularly suited to experiment with these principles. The privately owned company’s flagship product, the video game platform Steam, distributes and manages over 1,800 games to a worldwide community of more than 40 million players. They approach human resources with the same playfulness as they do their products, enticing website visitors to consider applying for a job at the company: “Imagine working with super smart, super talented colleagues in a free-wheeling, innovative environment—no bosses, no middle management, no bureaucracy. Just highly motivated peers coming together to make cool stuff. It’s amazing what creative people can come up with when there’s nobody there telling them what to do.”6

  This is both a hiring tactic and good publicity. Customers want to believe the games they are playing emerge from a creative, playful group of innovators. But for such a strategy to work in the era of peer-to-peer communications, the perception must also be true. Luckily for Valve, new-media theorist Cory Doctorow got ahold of the company’s employee manual and published excerpts on the tech-culture blog BoingBoing. In Doctorow’s words, “Valve’s employee manual may just be the single best workplace manifesto I’ve ever read. Seriously: it describes a utopian Shangri-La of a workplace that makes me wish—for the first time in my life—that I had a ‘real’ job.”7

  Excerpts from the manual include an org chart structured more like a feedback loop than a hierarchy, and seemingly unbelievable invitations for employees to choose their own adventures: “Since Valve is flat, people don’t join projects because they’re told to. Instead, you’ll decide what to work on after asking yourself the right questions. . . . Employees vote on projects with their feet (or desk wheels). Strong projects are ones in which people can see demonstrated value; they staff up easily. This means there are any number of internal recruiting efforts constantly under way. If you’re working here, that means you’re good at your job. People are going to want you to work with them on their projects, and they’ll try hard to get you to do so. But the decision is going to be up to you.”8 Can a company really work like that? A gaming company certainly can, especially when, like Valve, it is privately owned and doesn’t have shareholders to worry about. But what makes this utopian workplace approach easiest to accept is the knowledge that one’s employees embody and exude the culture to which they’re supposedly dedicated. They are not responding to game culture, but rather creating it.

  The fractal is less threatening when its shapes are coming from the inside out. Instead of futilely trying to recognize and keep up with the patterns within the screech—which usually only leads to paranoia—the best organizations create the patterns and then enjoy the ripples. Think of Apple or Google as innovators; of Patagonia or Herman Miller as representing cultures; of the Electronic Frontier Foundation or Amnesty International as advocating for constituencies; of Lady Gaga or Christopher Nolan as generating pop culture memes. They generate the shapes we begin to see everywhere.

  In a social world, having people who are capable of actually generating patterns is as important for a church or government agency as it is for a corporation or tech start-up. They do something neat, then friends tell friends, and so on. If an organization already has some great people, all it needs to do is open up and let them engage with the other great people around the world who care. Yes, it may mean being a little less secretive about one’s latest innovations—but correspondingly more confident that one’s greatest innovations still rest ahead.

  The examples go on and on, and surely you know many more yourself. Organizations that focus on controlling everyone’s response to them come off like neurotic, paranoid individuals; there’s just too much happening at once to second-guess everyone. Those who dedicate their time and energy to their stakeholders end up becoming indistinguishable from the very cultures they serve—and just as fertile, interconnected, complex, and alive.

  The fractal acts like a truth serum: the only one who never has to worry about being caught is the one who never lied to begin with.

  MANAGING CHAOS: BIRDS, BEES, AND ECONOMIES

  The hyperconnected fractal reality is just plain incompatible with the way most institutions operate—especially governments, whose exercises in statecraft are based on a cycle of feedback that seems more tuned to messages sent by carrier pigeon than the Internet. Keeping an eye on a message as it spins round through today’s infinitely fast feedback loops only makes them dizzy.

  The problem is we are focused on the object instead of the motion, or, as McLuhan would put it, the figure instead of the ground—Charlie Sheen instead of the standing wave he entered. We need to unfocus our eyes a bit to take in the shape of what’s happening. Stepping back and looking at the picture is fraught with peril, too, however—as we are so tempted to draw premature connections between things. We think of this sort of pattern recognition and lateral thinking as less logical and more intuitive, more “stoned” than “straight.” This is because most of us, at least in the West, are too inexperienced beholding things this way to do so with any rigor or discipline. Once we start seeing how things are connected, we don’t know how to stop. We may as well be on an acid trip, beholding the web of life or the interplay of matter and energy for the very first time.

  This is why, when confronted with the emerging complexity of the twentieth century, the initial response of government and business alike was to simplify. They put their hands over their ears so as not to hear the rising screech and looked to their models, maps, and plans instead. Like generals in the safety of a situation room using toy tanks on a miniature battlefield to re-create a noisy and violent war, government and corporate leaders strategized in isolation from the cacophony of feedback. They sought to engage with their challenges from far above and to make moves as if the action were occurring on a game board. Complexity was reduced to simple, strategic play.

  This is what “gaming” a system really means, and what kept Cold War powers busy building computers for the better part of the twentieth century. After all, the invention and use of the atomic bomb had taught us that we humans might be more connected than we had previously realized. In an early form of fractalnoia, we came to realize that a tiny misstep in Cuba could lead to a global thermonuclear war from which few humans would survive.

  Gaming gave leaders a way to choose which feedback to hear and which to ignore. Even if there were millions of possible actors, actions, and connections, there were only two real superpowers—the Soviet Union and the United States. Military leaders figured that game theory, based on the mathematics of poker, should be able to model this activity and give us simple enough rules for engagement. And so the RAND Corporation was hired to conduct experiments (like the Prisoner’s Dilemma, which we looked at earlier), determine probable outcomes, and then program computers to respond appropriately in any number of individual circumstances. Led by the as yet undiagnosed paranoid schizophrenic John Nash (the mathematician portrayed in the movie A Beautiful Mind), they adopted a principle called MAD, or mutually assured destruction, which held that if the use of any nuclear device could effectively guarantee the complete and utter annihilation of both sides in the conflict, then neither side would opt to use them. While this didn’t stop the superpowers from fighting smaller proxy wars around the world, it did serve as a deterrent to direct conflict.

  Encouraged by this success, Nash applied his game theory to all forms of human interaction. He won a Nobel Prize for showing that a system driven by suspicion and self-interest could reach a state of equilibrium in which everyone’s needs were met. “It is understood not to be a cooperative ideal,” he later admitted, but—at least at the time—neither he nor RAND thought human beings to be cooperative creatures. In fact, if the people in Nash’s equations attempted to cooperate, the results became much more dangerous, messy, and unpredictable. Altruism was simply too blurry. Good planning required predictable behaviors, and the assumption of short-term self-interest certainly makes things easy to see coming.
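  The equilibrium Nash described can be sketched in a few lines. The following is an illustrative one-shot Prisoner’s Dilemma—the payoff numbers are hypothetical, not from the text—showing why, under the assumption of pure self-interest, mutual defection is the only stable outcome even though mutual cooperation pays both players more:

```python
# Illustrative one-shot Prisoner's Dilemma (payoff values are hypothetical).
# Each entry maps (my_move, their_move) to my payoff.
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

MOVES = ("cooperate", "defect")

def best_response(their_move):
    """The move that maximizes my payoff, given the other player's move."""
    return max(MOVES, key=lambda my_move: PAYOFF[(my_move, their_move)])

def nash_equilibria():
    """Move pairs where each player is already best-responding to the other."""
    return [
        (a, b)
        for a in MOVES
        for b in MOVES
        if best_response(b) == a and best_response(a) == b
    ]

print(nash_equilibria())  # → [('defect', 'defect')]
```

Defecting earns more whatever the other player does (5 beats 3, 1 beats 0), so neither self-interested player can improve by unilaterally switching—the suspicion-driven equilibrium Nash was rewarded for formalizing.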

  A few decades of game theory and analysis since then have revealed the obvious flaws in Nash’s and RAND’s thinking. As Hungarian mathematician and logician László Méro explains it in his rethink of game theory, Moral Calculations,9 the competitive assumptions in game theory have not been borne out by consistent real-world results. In study after study, people, animals, and even bacteria are just as likely to cooperate as they are to compete. The reason real human behavior differs from that of the theoretically self-interested prisoners is that the latter are prisoners to begin with. An incarcerated person is the most literal example of one living within a closed environment. These are individuals without access to information and incapable of exercising basic freedoms. All feedback and iteration are removed, other than that between the prisoner and his keepers. With the benefit of several hundred Prisoner’s Dilemma studies to mine for data and differences, Méro found that the more the “prisoners” knew about their circumstances and those of their fellow prisoners, the less selfishly they behaved. Communication between prisoners invariably yielded more cooperation. Isolation bred paranoia, as did more opacity about how the rules worked. Communication, on the other hand, generates lateral feedback loops and encourages a more extended time horizon.

  Méro’s research into humans and other biological systems demonstrates that species do conduct game-theory-like calculations about life-or-death decisions, but that they are not so selfishly decided as in a poker game. Rather, in most cases—with enough lateral feedback at their disposal—creatures employ “mixed strategies” to make decisions such as “fight or flight.” Further, these decisions utilize unconsciously developed (or perhaps instinctual) probability matrices to maximize the survival of the species and the greater ecosystem. In just one such cooperative act, the competition over a piece of food, many species engage in threatening dances instead of actual combat. By cooperatively negotiating in this fashion—and using battle gestures, previous experiences, and instinct (species’ memory) to calculate the odds of winning or losing the battle—both individuals live on to see another day.
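  The “threatening dance” over a piece of food is the classic Hawk-Dove game, and the mixed strategy the passage describes can be worked out directly. In this sketch the numbers V (value of the food) and C (cost of an injury) are hypothetical; what matters is that when injury costs more than the food is worth, the stable strategy is probabilistic—mostly display, occasionally fight—rather than pure aggression:

```python
# A minimal Hawk-Dove sketch of the "threat display instead of combat" idea.
# V and C are illustrative numbers, not from the text; with C > V the stable
# strategy is mixed, not pure aggression.
V = 2.0   # benefit of winning the contested resource
C = 10.0  # cost of losing an escalated fight

def payoff(me, opponent):
    """Expected payoff to `me` against `opponent` (classic Hawk-Dove table)."""
    if me == "hawk" and opponent == "hawk":
        return (V - C) / 2       # both escalate: win or get injured, 50/50
    if me == "hawk" and opponent == "dove":
        return V                 # the dove retreats
    if me == "dove" and opponent == "hawk":
        return 0.0               # retreat unharmed
    return V / 2                 # two doves share via display, no fight

# The mixed equilibrium plays hawk with probability p = V / C, which makes
# both pure strategies earn the same expected payoff against the population,
# so no mutant strategy can do better.
p = V / C

def expected(me, p_hawk):
    return p_hawk * payoff(me, "hawk") + (1 - p_hawk) * payoff(me, "dove")

assert abs(expected("hawk", p) - expected("dove", p)) < 1e-9
```

With these numbers, fighting is worth it only one time in five; the rest of the contests are settled by gesture—which is exactly the species-preserving probability matrix the passage describes.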

  In contrast, game-theory tests like the Prisoner’s Dilemma set up competitions where decisions are forced with a lack of information. They are characterized by noncommunication. There is no space for negotiation or transparency of decision making, nor any participation in extending the range of possible outcomes. The prisoners are operating in the least likely circumstances to engender cooperative actions.

  But the zero-sum logic of game theory can still work, as long as it is actualized in a culture characterized by closedness. This is why corporations functioning in this fashion gain more power over laborers who are competing rather than unionizing; it’s why real estate agents can jack up prices more easily in a market where “comps” are not available; and it’s why a larger government can push around developing nations more easily when its cables aren’t being posted on the Internet by WikiLeaks. Less networking and transparency keeps everyone acting more selfishly, individualistically, and predictably.

  In the controlled information landscape, these strategies worked pretty well for a long time. A closed, top-down broadcast media gave marketers and public relations specialists a nation of individuals with whom to communicate. You, you’re the one—or so the commercials informed each of us. Suburban communities such as Levittown were designed in consultation with Roosevelt administration psychologists, to make sure they kept us focused inward.10 Separate plots and zoned neighborhoods reified the nuclear family at the expense of wider, lateral relationships between families. Decades of social control—from corporate advertising to manufacturing public consent for war—were exercised through simple one-to-many campaigns that discouraged feedback from them and between them. As long as people didn’t engage with one another and were instead kept happily competing with one another, their actions, votes, and emotions remained fairly predictable. Screech could be kept to a minimum.

  But the Cold War gave rise to something else: a space race, and the unintended consequence of the first photographs of planet Earth taken from the heavens. Former Merry Prankster Stewart Brand had been campaigning since 1966 for NASA to release a photo of Earth, aware that such an image could change human beings’ perception of not only their place in the universe but also their relationship to one another. Finally, in 1972, NASA released image AS17-148-22727, birthing the notion of our planet as a “big blue marble.” As writer Archibald MacLeish described it, “To see the Earth as it truly is, small and blue and beautiful in that eternal silence where it floats, is to see ourselves as riders on the Earth together, brothers on that bright loveliness in the eternal cold—brothers who know now that they are truly brothers.”11

  Soon after that, the development of the Internet—also an outgrowth of Cold War funding—concretized this sense of lateral, peer-to-peer relationships between people in a network. Hierarchies of command and control began losing ground to networks of feedback and iteration. A new way of modeling and gaming the activities of people would have to be found.

  The idea of bringing feedback into the mix came from the mathematician Norbert Wiener, back in the 1940s, shortly after his experiences working for the military on navigation and antiaircraft weapons. He had realized that it’s much harder to plan for every eventuality in advance than simply to change course as conditions change. As Wiener explained it to his peers, a boat may set a course for a destination due east, but then wind and tides push the boat toward the south. The navigator reads the feedback on the compass and corrects for the error by steering a bit north. Conditions change again; information returns in the form of feedback; the navigator measures the error and charts a new course; and so on. It’s the same reason we have to look out the windshield of our car and make adjustments based on bumps in the road. It’s the reason elevators don’t try to measure the distance between floors in a building, but instead “feel” for indicators at each level. It’s how a thermostat can turn on the heat when the temperature goes down, and then turn it off again when the desired temperature is reached. It feels.
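  Wiener’s navigator can be reduced to a toy loop: rather than planning the whole route in advance, measure the error each step and steer against it. The numbers here (drift per step, correction gain) are illustrative, not from the text:

```python
# A toy version of Wiener's navigator: sense the error, correct, repeat.
target_heading = 90.0   # due east, in degrees
heading = 90.0
gain = 0.5              # how strongly we correct each observed error

for step in range(20):
    heading += 3.0                      # wind and tide push the boat off course
    error = target_heading - heading    # feedback: read the compass
    heading += gain * error             # steer partway back toward the target

# The boat never holds a perfect course, but the feedback loop keeps the
# error bounded instead of letting it accumulate step after step.
print(round(heading, 2))  # → 93.0
```

Without the correction line, twenty steps of drift would leave the boat 60 degrees off course; with it, the error settles at a small constant offset. This is the whole cybernetic wager: continuous measurement and correction beat exhaustive advance planning.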

  Wiener understood that if machines were ever going to develop into anything like robots, they would need to be able to do more than follow their preprogrammed commands; they would have to be able to take in feedback from the world. This new understanding of command and control—cybernetics—meant allowing for real-time measurements and feedback instead of trying to plan for every emergent possibility. Wiener saw his new understandings of command and control being used to make better artificial limbs for wounded veterans—fingers that could feed back data about the things they were touching or holding. He wanted to use feedback to make better machines in the service of people.

  Social scientists, from psychologist Gregory Bateson to anthropologist Margaret Mead, saw in these theories of feedback a new model for human society. Instead of depending so absolutely on RAND’s predictive gamesmanship and the assumption of selfishness, cybernetics could model situations where the reality of events on the ground strayed from the plan. Qualitative research, polling, and focus groups would serve as the feedback through which more adaptive styles of communication and—when necessary—persuasion could be implemented. During World War II and afterward, the society of Japan became the living laboratory for America’s efforts in this regard—an exercise in “psychological warfare” that Bateson would later regret and blame, at least in part, for the breakup of his marriage to Mead, who, he was saddened to recall, remained more committed to manipulative government agendas such as social control and militarism.12

  Bateson wasn’t really satisfied with the mathematics of simple feedback, anyway, and longed for a more comprehensive way to make sense of the world and all the interactions within it. He saw the individual, the society, and the natural ecology as parts of some bigger system—a supreme cybernetic system he called “Mind,” which was beyond human control and to which human beings must in some sense surrender authority. We’ll look at the religious implications of all this complexity and systems theory in the next chapter. What’s important to this part of our story is that instead of applying cybernetics to mechanical systems in order to serve humans, Bateson was equating human society with a cybernetic system.

 
