Smart Mobs


by Howard Rheingold


  Kortuem and colleagues realized that p2p computing and wireless networking technologies made it possible to design ad hoc networks of mobile devices to support the ad hoc social networks of the people who wear them. The fundamental technical unit cited by Kortuem and other wearable computing researchers has come to be known as the “personal area network,” an interconnected network of devices worn or carried by the user. The concept was first described by Tom Zimmerman, now at IBM’s Almaden Research Center, who had invented the VR “dataglove” while he was an MIT student.47

  Kortuem and colleagues treat the personal area networks as building blocks of a dynamic community of networks with emergent capabilities of its own. The research is as much behavioral as it is computational, beginning with simple experiments matching properties of mobile computing networks with the needs of social networks. The community of personal area network users within geographic proximity, for example, could serve as a wireless mesh network, dynamically self-organizing a cloud of broadband connectivity as nodes came in and out of physical proximity, providing always-on Internet connections to members. Using Bluetooth and other short-range wireless technologies such as very-low-power wideband radio, individual members of the community could engage in more intimate and timely information exchanges when face to face, whereas WiFi technologies could provide the infrastructure for neighborhood-wide and Internet-wide communication:

  Mobile ad hoc systems provide opportunities for ad hoc meetings, mobile patient monitoring, distributed command and control systems and ubiquitous computing. In particular, personal area networks enable the creation of proximity-aware applications in support of face-to-face collaboration.

  Mobile devices like cell phones, PDAs and wearable computers have become our constant companions and are available wherever we go. . . . Personal area networks open the opportunity for these devices to take part in our everyday social interactions with people. Their ability to establish communication links among devices during face-to-face encounters can be used to facilitate, augment or even promote human social interactions.

  In some sense, an ad hoc mobile information system is the ultimate peer-to-peer system. It is self-organizing, fully decentralized, and highly dynamic.48
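  To make the idea of a self-organizing, fully decentralized network concrete, here is a minimal sketch in Python of proximity-based linking: nodes (standing in for personal area networks) form a link whenever they come within short-range radio reach and drop it when they move apart. The node names and the ten-meter range are illustrative assumptions, not details of Kortuem's systems.

```python
import math

RADIO_RANGE = 10.0  # assumed short-range radio reach, in meters (hypothetical)

class Node:
    """A personal area network, reduced to a name and a position."""
    def __init__(self, name, x, y):
        self.name = name
        self.x, self.y = x, y
        self.peers = set()  # names of currently reachable neighbors

    def distance_to(self, other):
        return math.hypot(self.x - other.x, self.y - other.y)

def update_links(nodes):
    """Re-derive every link from current positions alone: no central
    coordinator, and a link exists only while two nodes are in range."""
    for node in nodes:
        node.peers.clear()
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            if a.distance_to(b) <= RADIO_RANGE:
                a.peers.add(b.name)
                b.peers.add(a.name)

# Two people meet on the street; a third is across town.
alice, bob, carol = Node("alice", 0, 0), Node("bob", 6, 0), Node("carol", 500, 500)
update_links([alice, bob, carol])
print(alice.peers)  # {'bob'}: the ad hoc link exists only while they are close
```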

  Short-range radio frequency links such as those used by Bluetooth chips and wearable computers create a sphere of connectivity within the immediate vicinity of the wearer. Paul Rankin, at Philips Research laboratory in England, wrote about the need for intermediary agents to negotiate transactions between the “aura” of one person and radio beacons in the environment, or another person’s aura.49 “Auranet” is what Jay Schneider, Kortuem, and colleagues named their “framework for structuring encounters in social space based on reputations and trust.”50 The wireless instantiation of a 12-foot information bubble around wearable computer users is a physical model of what sociologist Erving Goffman calls the “Interaction Order,” the part of social life where face-to-face and spoken interactions occur.51 Goffman claimed that the mundane world of everyday interactions involves complex symbolic exchanges, visible but rarely consciously noticed, which enable groups to negotiate movement through public spaces. Although people use the ways they present themselves to “give” information they want others to believe about themselves, Goffman noted that people also “give off” information, leaking true but uncontrolled information along with their more deliberate performance.

  One form of information that people give off, called “stigma” by Goffman, is markings or behaviors that locate individuals in a particular social status. Although many stigmas carry negative connotations, a stigma can also mark positive social status. The information we give off by the way we behave and dress helps us coordinate social interaction and identify likely interaction partners. When the Interaction Order is formalized and modeled automatically in an Auranet, the social network and the technological network meet in a way that makes possible new capabilities such as automated webs of trust for ad hoc interactions—for example, assembling a carpool of trustworthy strangers when you drive downtown or seek a ride.
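  As a purely hypothetical sketch of that carpool example (not the Auranet design itself), the snippet below aggregates third-party vouches into a reputation score and admits only strangers who clear a personal trust threshold; the names, ratings, and threshold are invented.

```python
def trust_score(vouches):
    """Average the ratings (0.0 to 1.0) left by past interaction partners."""
    return sum(vouches) / len(vouches) if vouches else 0.0

def assemble_carpool(candidates, threshold=0.7, seats=3):
    """Admit up to `seats` strangers whose aggregated reputation clears the threshold."""
    trusted = [(name, trust_score(v)) for name, v in candidates.items()
               if trust_score(v) >= threshold]
    trusted.sort(key=lambda pair: pair[1], reverse=True)
    return [name for name, _ in trusted[:seats]]

candidates = {
    "driver_17": [0.9, 0.8, 1.0],  # consistently vouched for
    "rider_42": [0.6, 0.5],        # mixed history, falls below the threshold
    "rider_99": [0.8, 0.9],
}
print(assemble_carpool(candidates))  # ['driver_17', 'rider_99']
```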

  Kortuem et al., noting the lack of fully embodied “human moments” in purely virtual worlds, concentrated on ways to enhance the most basic sphere of human social behavior, the face-to-face encounters of everyday life. Indeed, the primary question asked by the Oregon researchers is the primary question regarding smart mobs: What can communities of wearable computer users do in their face-to-face encounters? At a technical level, the wearable devices can share bandwidth by acting as nodes in an ad hoc wireless network. The devices could exchange media and messages, similar to the way Napster and Usenet use links between individual nodes to pass data around. However, as soon as the members of the community allow their computers to exchange data automatically, without human intervention, complex issues of trust and privacy intervene—the unspoken norms of the interaction order. Kortuem et al. explored the social and technical implications of personal agent software, which filters, shields, and acts as a go-between for their users.

  A number of social and technical barriers must be overcome in order for mobile ad hoc communities to self-organize cooperatively. Nobody is going to contribute their personal area network to a community internetwork unless they feel secure about privacy and trust—who snoops whom, and who can be counted on to deal honestly? Privacy requires data security, and security is complicated by wireless communications. Encryption techniques make secure wearable community infrastructure possible, but someone has to figure out how to build them. Trust means a distributed reputation system, which the Oregon group has prototyped. When you break down the interesting idea of mobile, ad hoc social networks into the elements needed to make it work in practice, a rich and largely undeveloped field for research opens. Another experiment by the Eugene group mediates social encounters by comparing personal profiles automatically and alerting participants in a face-to-face encounter of mutual interests or common friends that they might not know about (a recommendation system for strangers).52 Each social encounter of wearable computer users involving automatic exchanges of personal data, sharing of bandwidth, or passing of messages from others would necessarily involve individual computations of where each participant’s self-interest lies in relation to a computation of the other party’s trustworthiness. Kortuem et al. recognized this complex weighing of trust versus self-interest as an example of our old friend, the Prisoner’s Dilemma, and designed an experimental system called WALID to test some of these issues, taking advantage of the fact that the Oregon wearable computing researchers lived and worked in the same general neighborhood in Eugene, Oregon:

  WALID implements a digitized version of the timeworn tradition of borrowing butter from your neighbor. You do a favor for others because you know that one day they will do it for you.

  With WALID two individuals use their mobile devices to negotiate about and to exchange real world tasks: dropping off someone’s dry cleaning, buying a book of stamps at the post office, or returning a book to the local library.

  WALID employs personal agent software to find close-by community members and to negotiate the exchange of tasks. The agents maintain a user’s task list and are aware of the locations and activities involved. When an encounter occurs, the agents engage in a negotiation. If both users approve, a deal is struck.

  The role of the agent in a negotiation is to evaluate the value of favors and to keep score. Having to run across town just to drop off someone’s mail compares unfavorably with buying milk for someone if the grocery store is just a block away. Agents employ ideas from game theory to ensure that the results of negotiations are mutually beneficial; they cooperate only if there is an opportunity to advance the user’s goals.53
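  The following is a rough sketch, in Python, of the kind of weighing the passage describes, not WALID's actual algorithm: each agent estimates the effort of taking on the other's errand, and a swap is proposed only when both parties come out ahead. The task names, routes, and effort figures are invented for illustration.

```python
def marginal_cost(task, my_route):
    """Extra effort for me to take on `task`: cheap if it lies on my route anyway."""
    return 1 if task["location"] in my_route else task["effort"]

def propose_swap(task_a, route_a, task_b, route_b):
    """Agent A holds task_a, agent B holds task_b; swap only if both save effort."""
    a_saves = task_a["effort"] - marginal_cost(task_b, route_a)  # A would do B's task
    b_saves = task_b["effort"] - marginal_cost(task_a, route_b)  # B would do A's task
    return (a_saves > 0 and b_saves > 0), a_saves, b_saves

stamps = {"name": "buy stamps", "location": "post office", "effort": 5}
book = {"name": "return book", "location": "library", "effort": 4}

# A walks past the library on the way to work; B walks past the post office.
deal, a_saves, b_saves = propose_swap(stamps, ["library", "office"],
                                      book, ["post office", "campus"])
print(deal, a_saves, b_saves)  # True 4 3: mutually beneficial, so a deal is struck
```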

  In our telephone conversation, Kortuem noted that at the beginning of wearable computing research, the main goals involved either creating tools for professionals, such as maintenance and repair specialists, or creating tools to augment individuals, in the manner promoted by Steve Mann. “I came to realize,” Kortuem told me, “that what is really interesting is not the technology of a specialized application at a job site, but what happens if ordinary people are empowered to use this technology and what effects might emerge when technology penetrates society.”54 These words will be worth remembering when millions of people carry devices that invisibly probe and cloak, reach out, evaluate, interconnect, negotiate, exchange, and coordinate invisible acts of ad hoc cooperation that create wealth, democracy, education, surveillance, and weaponry from pure mind-stuff, the way the alchemy of inscribing ever-tinier patterns on purified sand invokes the same forces from the same place.

  Swarm Intelligence and the Social Mind

  Massive outbreaks of cooperation precipitated the collapse of communism. In city after city, huge crowds assembled in nonviolent street demonstrations, despite decades of well-founded fear of political assembly. Although common sense leads to the conclusion that unanimity of opinion among the demonstrators explained the change of behavior, Natalie Glance and Bernardo Huberman, Xerox PARC researchers who have studied the dynamics of social systems, noted that a diversity of cooperation thresholds among the individuals can tip a crowd into a sudden epidemic of cooperation. Glance and Huberman pointed out that a minority of extremists can choose to act first, and if the conditions are right, their actions can trigger actions by others who needed to see somebody make the first move before acting themselves—at which point the bandwagon-jumpers follow the early adopters who followed the first actors:

  Those transitions can trigger a cascade of further cooperation until the whole group is cooperating.

  The events that led to the mass protests in Leipzig and Berlin and to the subsequent downfall of the East German government in November 1989 vividly illustrate the impact of such diversity on the resolution of social dilemmas. . . . The citizens of Leipzig who desired a change of government faced a dilemma. They could stay home in safety or demonstrate against the government and risk arrest—knowing that as the number of demonstrators rose, the risk declined and the potential for overthrowing the regime increased.

  A conservative person would demonstrate against the government only if thousands were already committed; a revolutionary might join at the slightest sign of unrest. That variation in threshold is one form of diversity. People also differed in their estimates of the duration of a demonstration as well as in the amount of risk they were willing to take. Bernhardt Prosch and Martin Abram, two sociologists from Erlangen University who studied the Leipzig demonstrations, claim that the diversity in thresholds was important in triggering the mass demonstrations.55

  Sudden epidemics of cooperation aren’t necessarily pleasant experiences. Lynch mobs and entire nations cooperate to perpetrate atrocities. Decades before the fall of communism, sociologist Mark Granovetter examined radical collective behavior of both positive and negative kinds and proposed a “threshold model of collective behavior.” I recognized Granovetter’s model as a crucial conceptual bridge that connects intelligent (smart mob) cooperation with “emergent” behaviors of unintelligent actors, such as hives, flocks, and swarms.

  Granovetter studied situations in which individuals were faced with either-or decisions regarding their relationship to a group—whether or not to join a riot or strike, adopt an innovation, spread a rumor, sell a stock, leave a social gathering, migrate to a different country. He identified the pivotal statistic as the proportion of other people who have to act before an individual decides to join them. Thresholds appear to be an individual reaction to the dynamics of a group.
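  A minimal simulation makes the threshold model concrete. In the Python sketch below, each person joins once the fraction of people already acting reaches his or her personal threshold; the threshold distributions are invented solely to show how diversity, rather than unanimity, determines whether a cascade takes off.

```python
def cascade(thresholds):
    """Iterate to a fixed point and return the final fraction of the crowd acting."""
    n = len(thresholds)
    acting = sum(1 for t in thresholds if t <= 0.0)  # unconditional first movers
    while True:
        fraction = acting / n
        now_acting = sum(1 for t in thresholds if t <= fraction)
        if now_acting == acting:
            return fraction
        acting = now_acting

diverse = [i / 10 for i in range(10)]  # thresholds 0.0, 0.1, ..., 0.9
cautious = [0.0] + [0.5] * 9           # one firebrand, nine people who need a big crowd
print(cascade(diverse))   # 1.0: each joiner tips the next, and everyone demonstrates
print(cascade(cautious))  # 0.1: the lone first mover fails to trigger a cascade
```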

  One of Granovetter’s statements yielded a clue to smart mob dynamics: “By explaining paradoxical outcomes as the result of aggregation processes, threshold models take the ‘strangeness’ often associated with collective behavior out of the heads of actors and put it into the dynamics of situations.”56 Smart mobs might also involve yet-unknown properties deriving from the dynamics of situations, not the heads of actors. Goffman’s Interaction Order, the social sphere in which complex verbal and nonverbal communications are exchanged among individuals in real time, is precisely where individual actions can influence the action thresholds of crowds. Mobile media that can augment the informal, mostly unconscious information exchanges that take place within the Interaction Order, or affect the size or location of the audience for these exchanges, have the potential to change the threshold for collective action.

  I started looking for ways to connect these congruent ideas operationally. How would they map onto an ad hoc social network of wearable computer users, for example? When my idea hunting brought me to “the coordination problem,” a social dilemma that is not a Prisoner’s Dilemma, separate ideas began to fit together into a larger pattern.

  A coordination problem does not involve the Prisoner’s Dilemma zero-sum game between self-interest and common resources but instead represents the quandary that confronts individuals who are ready to cooperate but whose cooperation is contingent on the prior cooperation of others. Monitoring and sanctioning are important not simply as a way of punishing rule breakers but also as a way of assuring members that others are using common resources wisely. That is, many people are contingent cooperators, willing to cooperate as long as most others do (what Ostrom referred to as a “commitment problem”). Thus, monitoring and sanctioning serve the important function of providing information about others’ actions and levels of commitment.

  In Rational Ritual: Culture, Coordination, and Common Knowledge, Michael Suk-Young Chwe claims that public rituals are “social practices that generate common knowledge,” which enables groups to solve coordination problems. Suk-Young Chwe writes: “A public ritual is not just about the transmission of meaning from a central source to each member of an audience; it is also about letting audience members know what other audience members know.”57 Everyone in a group has to know who else is contributing, free riding, and sanctioning in order to solve both free rider and coordination problems on the fly with maximum trust and minimum friction. This is the key to the group-cooperation leverage bestowed by reputation systems and many-to-many communications media.

  Threshold models of collective action and the role of the Interaction Order are both about media for exchange of coordinating knowledge. Understanding this made it possible to see something I had not noticed clearly enough before—a possible connection between computer-wearing social networks of thinking, communicating humans and the swarm intelligence of unthinking (but also communicating) ants, bees, fish, and birds. Individual ants leave chemical trail markers, and the entire nest calculates the most efficient route to a food source from a hundred aggregated trails without direction from any central brain. Individual fish and birds (and tight-formation fighter pilots) school and flock simply by paying attention to what their nearest neighbors do. The coordinated movements of schools and flocks are a dynamically shifting aggregation of individual decisions. Even if there were a central tuna or pigeon who could issue orders, no system of propagating orders from a central source can operate swiftly enough to avoid being eaten by sharks or slamming into trees. When it comes to hives and swarms, the emergent capabilities of decentralized self-organization can be surprisingly intelligent.
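  A tiny sketch of that local rule, loosely in the spirit of the classic "boids" flocking model rather than any particular study, shows how coordination can emerge without a leader: each bird turns partway toward the average heading of its nearest neighbors. The neighbor count and adjustment factor are arbitrary choices.

```python
import math

def align(headings, positions, k=3, nudge=0.5):
    """One update step: every bird turns partway toward the mean heading
    of its k nearest neighbors; there is no leader and no global plan."""
    new_headings = []
    for i, (heading, pos) in enumerate(zip(headings, positions)):
        neighbors = sorted((j for j in range(len(positions)) if j != i),
                           key=lambda j: math.dist(pos, positions[j]))[:k]
        mean = sum(headings[j] for j in neighbors) / len(neighbors)
        new_headings.append(heading + nudge * (mean - heading))  # local adjustment only
    return new_headings

positions = [(0, 0), (1, 0), (0, 1), (5, 5), (6, 5)]
headings = [0.0, 0.6, -0.4, 1.5, 1.2]  # directions in radians
print(align(headings, positions))      # each heading drifts toward its neighbors'
```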

  What happens when the individuals in a tightly coordinated group are more highly intelligent creatures rather than simpler organisms like insects or birds? How do humans exhibit emergent behavior? As soon as this question occurred to me, I immediately recalled the story Kevin Kelly told at the beginning of Out of Control, his 1994 book about emergent behaviors in biology, machinery, and human affairs.58 He described an event at an annual film show for computer graphics professionals. A small paddle was attached to each seat in the auditorium, with reflective material of contrasting colors on each side of the paddle. The screen in the auditorium displayed a high-contrast, real-time video view of the audience. The person leading the exercise, computer graphics wizard Loren Carpenter, asked those on one side of the auditorium aisle to hold the paddles with one color showing and asked the other half of the audience to hold up the opposite color. Then, following Carpenter’s suggestions, the audience self-organized a dot that moved around the screen, added a couple of paddles on the screen, and began to play a giant game of self-organized video Pong, finally creating a graphical representation of an airplane and flying it around the screen. As with a flock, there was no central control of the exercise after Carpenter made a suggestion; members of the audience simply paid attention to what their neighbors were doing and what was happening on the screen. Kelly used this as an example of a self-conscious version of flocking behavior.59 Musician and cognitive scientist William Benzon believes that the graphical coordination exercise led by Carpenter and described by Kelly is similar to what happens when musicians “jam” and that it involves an as yet unexplored synchronization of brain processes among the people involved:60

  The group in Carpenter’s story is controlling what appears on the screen. Everyone can see it all, but each person can directly affect only the part of the display controlled by his or her paddle. In jamming, everyone hears everything but can affect only the part of the collective sound that he or she creates (or withholds).

  Now consider a different example. One of the standard scenes in prison movies goes like this: We’re in a cell block or in the mess hall. One prisoner starts banging his cup on the table (or on one of the bars to his cell). Another joins in, then another, and another, until everyone’s banging away and shouting some slogan in unison. This is a simple example of emergent behavior. But it’s one that you won’t find in chimpanzees. Yes, you will find them involved in group displays where they’re all hooting and hollering and stomping. But the synchrony isn’t as precise as it is in the human case.

 
