I Live in the Future & Here's How It Works: Why Your World, Work, and Brain Are Being Creatively Disrupted


by Nick Bilton


  BJ Fogg, author and founding professor of the Persuasive Technology Lab at Stanford University, specializes in human-computer interaction and the way we trust machines. Fogg has been exploring trust and machines since the early days of the Web. He believes the issue is not just about trust but about credibility as well. Fogg and his research partner Hsiang Tseng found that in the early days of computing, “the public perceived computers as virtually infallible.”4 Then the assumption that computers were credible quickly started to erode. Fogg points out that credibility in any setting is made up of a variety of different components, including the quality of the interaction, trust, expertise, a lack of bias, knowledge, and experience. Credibility is essentially multidimensional. And since people interact with computers via a screen, that makes building credibility all the more challenging.

  When people started to build Web pages, Fogg and his team wanted to understand what made people assign credibility to those pages and trust their content. Since websites were a totally new concept when these studies took place and were a new way to present information, there wasn’t much of a starting point. Fogg performed a “large scale credibility study” by showing people different websites that were designed well or poorly and found that what “mattered most was, did the page look good? If it looked good, the assumption was the information was credible, and that was far and away the most important thing that determined whether people thought information was credible.”

  When I asked Jakob Nielsen, a world-renowned design and usability expert, why people feel more comfortable with well-designed sites, he explained that a lot of the thought process is about comfort and familiarity.5 “Think about the old banks,” he said. “When you walked into the institutions, they had these huge marble statues in the middle of the floor. This was meant to evoke power and strength and confidence so you could trust the institution to look after your money.” When it comes to the Web, good design offers the same feeling of trust. Nielsen explained that little things like a logo, a phone number, or clean, well-designed fonts offer a sense of familiarity with real-world objects.

  Fogg’s research candidly shows that it doesn’t matter who makes the information we consume; we assign influence and authenticity on the basis of aesthetics. Or, as your mother always warned, we do judge a book by its cover.

  I asked Fogg how trust is changing with the next generation of computing and with what have become our social networks. He explained that not only will the concept of trust be different tomorrow, but the word itself becomes difficult to apply in new settings.

  As an example, he said, “On one hand, trust can mean dependability, like, I’m going to jump off this bridge and here’s this bungee cord, and I trust this bungee cord. It’s going to be reliable and dependable, and it’s going to do what I think it’s going to do. Whereas other uses of trust are different. Trusting information or trusting the source of the information goes more towards credibility … they’re not the same thing,” although they have elements of overlap.

  That is, we trust our computers to work properly and not explode when the power button is pressed. But whether we trust them to protect our privacy, keep our money or our personal data secure, or even direct us to the right information when we need it is a whole different story. Instead of looking to computers to find the right information or insights for us, he noted that the information you get today is coming “more and more through your friends and through your social network. It’s being distributed through channels of trust and the trust isn’t necessarily the BBC or The New York Times. It’s people.”

  To Fogg, then, a Web page still needs to look nice and be easily navigated to be useful. Now, he says, it matters “who’s saying what and if it’s somebody I don’t know, how many followers do they have?” If it’s someone I know, he said, the credibility of that Web page increases dramatically, regardless of design, brand name, or even content.

  This isn’t to say that elegance in design isn’t important. But now there’s a human element involved. Indeed, what others think and do has always been influential, and that’s something that really isn’t changing in the new world. It’s just taking on a different form.

  So what about those computers? Don’t we have to worry about trusting them, too? For now, those distinctions remain relatively separate, and we have the opportunity to make the decision: Do we interact with and trust the algorithm, or do we opt for the human? That option will change. Take the website Wikipedia, the anyone-can-edit encyclopedia. The site employs hundreds of “software bots” that monitor actions on the site, including the creation of new pages or drastic changes to existing articles. If the bots see something out of the ordinary—something they are programmed to look for—they automatically go in and fix the problem. There are hundreds of thousands of these bot-related changes on Wikipedia, and there’s no clear distinction between the human edits and the algorithmic ones.

  As software and computers become smarter and we start to trust them, we will slowly add them to our trust markets and anchoring communities, both for their dependability and for their credibility. We will have more choices, as we do in deciding between the ATM and the bank teller. And more often than not, we’re going to opt for convenience, which, after all, trumps trepidation.

  There is, however, one caveat to all of this: privacy.

  It’s apparent that the Web and the communities we have joined enable us to share anything from a breaking news alert to the mundane tragedies of our everyday lives. And although we’re more comfortable than ever dribbling out short, or long, dispatches from the day, our privacy, or the ability to control it, is still as important as ever.

  We can take a look at the social network Facebook to understand just how important this is. It’s no secret (both in the media and among its millions of users) that Facebook changes the privacy policy and privacy settings on its website on a regular basis, specifically when it introduces new features to the site. So in early 2010, when the company changed its policy and settings yet again, this time automatically filtering out and linking hundreds of millions of users’ information onto the Internet without their complete understanding, there was understandably a tense backlash. Although Facebook was trying to create a better experience for its users, linking individuals’ information with their friends and family and in turn creating a social and personalized experience across the Web, the way it went about this completely backfired. I for one didn’t want anything to do with the new feature because I didn’t trust what was happening to my information, even if it did offer a more compelling Web-surfing experience.

  Our online sharing and the mentality of what is private change on the basis of who we let in and who we trust. As a generation comes of age, growing up surrounded by social bubbles online, its members are comfortable sharing publicly with their friends but not with the public they don’t know—the general public. If Facebook had decided to launch this new personalized feature with transparency and control, where I understood that my online network was only able to see my actions, I would have embraced it wholeheartedly, but I couldn’t consciously share and engage in the public eye without knowledge of which public I was sharing with.

  How the Changing Communities Are Changing Us

  Now that we’ve identified how our new anchoring communities work and how we build trust in them, we’ll turn to how they lead you, and you lead them, in new directions.

  In scientific terms, groups of people can help one another out through “swarm logic.” That is, a loose and unorganized group can work together to attack and solve a problem, whether it’s hunting for food, avoiding predators, or finding and sharing information.

  Another component of this concept is “swarm intelligence.”6 The term was coined by Gerardo Beni, a computer scientist, who theorized that a group can consciously, but more often unknowingly, band together to solve vast and unmanageable problems. Swarms have been used to explain computing, robots, animals, biology, and, increasingly, online social networks. But until recently we haven’t really understood how they work, especially with regard to leadership.

  In the days of packaged content, the information leaders were the storytellers, such as book authors and newspaper publishers, and those lucky enough to have access to the printing plant. Now the distribution channels matter less, and anyone with an appropriate device can be a storyteller.

  But who leads the group in an online social setting? If each person creates her or his own community, isn’t there just a chaos of content delivery? Or are there true leaders even in our online social networks? Are we unknowingly developing our own swarms to help manage content consumption?

  The way we act online is very organic, similar to the behavior patterns of a species. To see what I mean, let’s look at what is known about how fish travel in groups.

  In 2008, Ashley Ward from the University of Sydney and a team of researchers, including Jens Krause of the University of Leeds, illustrated how a school of fish will navigate a path on the basis of group leadership.

  Ward and a team of biologists took a group of small stickleback fish that usually travel in large swarms and created a lab scenario that included a robotic version of the fish. They placed the fish in a long narrow bath of water and set up two different pathways for them to swim to get from one end to the other. The path to the right had what the researchers called a “predator fish” that was meant to scare and hinder the smaller fish from taking that route, whereas the path to the left was open and free, labeled the “safe route.”

  When the researchers placed a single live fish in the water, it would immediately swim through the safer route, doing all it could to avoid the predator. But when they added a robotic fish, the live fish would always follow the robot’s path, even if it meant going down the predatory pathway. This led the researchers to believe that a live fish will simply follow, even in the face of danger, because another fish has taken a specific route.

  To test this, they placed two live fish in the water and set a single robotic fish down the predator path. This time, the two live fish banded together and took the safer route to the left. Leadership was resolved by numbers.

  Finally, when the researchers sent two or more robotic fish down the predator path, the live fish—no matter how many there were—would always follow the robots. This led Ward and Krause to believe that swarms make decisions on the basis of a theory they called quorum responses.

  Krause explained that in smaller settings, any one fish can become a leader of a group. But when you start to add more elements to the swarm, it takes additional leaders to decide on the direction. Specifically, if there are four or more fish, only two leaders can direct the entire group. Adding a third robotic fish, for example, had absolutely no influence on the direction the fish took. Two was enough to determine direction. Even with small numbers, Krause explained, a collective intelligence sets in.
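The quorum dynamic Ward and Krause describe can be sketched as a toy model. The rules below — a lone fish follows any leader, and two leaders are enough to steer a larger group — come from the passage above; the function name and everything else are illustrative, not part of the study:

```python
# Toy model of the quorum response described above: a live stickleback
# takes the safe route unless a quorum of leaders (here, robotic fish)
# has already committed to the predator route. All names are illustrative.

QUORUM = 2  # per the passage, two leaders suffice to steer any group


def route_chosen(live_fish: int, robots_on_predator_path: int) -> str:
    """Return the route the live fish collectively take."""
    if live_fish == 1 and robots_on_predator_path >= 1:
        return "predator"  # a lone fish simply follows any leader
    if robots_on_predator_path >= QUORUM:
        return "predator"  # a quorum overrides the group's caution
    return "safe"          # otherwise, numbers resolve leadership


assert route_chosen(1, 0) == "safe"      # a single fish avoids the predator
assert route_chosen(1, 1) == "predator"  # a single fish follows one robot
assert route_chosen(2, 1) == "safe"      # two live fish outvote one robot
assert route_chosen(5, 2) == "predator"  # two robots steer the whole group
assert route_chosen(5, 3) == route_chosen(5, 2)  # a third robot adds nothing
```

The last assertion mirrors Krause’s observation: once the quorum of two is met, extra leaders have no additional influence on direction.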

  “Social conformity and the desire to follow a leader, regardless of cost, exert an extremely powerful influence on the behavior of social animals, from fish to sheep to humans,” Ward wrote in a research paper on this type of swarm logic.

  After the paper was published in late 2008, Krause was approached by a German television station and asked if he was interested in a collaboration to help understand if these theories would apply to a crowd of humans seeking information. He agreed.

  As a biological scientist, Krause has spent twenty years trying to decipher the collective behavior, swarm intelligence, and social networks of a wide variety of animals and groups. His studies have looked primarily at leadership within these classifications and tried to explain how hundreds or thousands of individuals can stay organized and share information with such ease and elegance.

  With the camera crew in tow, the research team set off to Cologne, Germany, recruited two hundred volunteers, and set up a testing facility in a massive convention center. The basic goal was to “see if it is possible to lead people without them knowing that they’re being led.”

  The study began by placing the volunteers in an empty 90,000-square-foot hall. The participants were told not to talk to one another and asked to move in any direction within the hall but to follow two simple rules. First, they were to move at a normal pedestrian speed, not too quickly and not too slowly. Second, they were told always to stay within an arm’s length of any single individual within the group. This request allowed the pack to maintain some level of group cohesion.

  Film of the test showed two distinct patterns. First, when a large group is left to wander freely (while still following the basic rules), even without any leadership, it organizes into two concentric circles. This happened every time the researchers ran the test. The groups self-organized to move in a cohesive direction, not dispersing randomly throughout the space. Remember, no one was leading the volunteers and telling them to walk in a specific direction. Still, some sort of organization set in.

  Then the researchers secretly asked a percentage of people to try to walk in a particular direction toward a target marked with an X on the floor. The selected people were told to do this while following the two basic rules—move normally and stay within an arm’s length of another individual. The volunteers who were asked to walk toward the targets were completely unaware of the actions of everyone else in the group, including the fact that there were other target seekers.

  This led to the second finding, which has become known as the rule of 5 percent. When the small, select group of individuals was asked to move toward a specific target in the room, the group followed only when 5 percent or more were told to do so. If the researchers told only 2.5 percent of the group to aim for the target, that small group eventually would arrive there, but the other 97.5 percent of the participants would not arrive with them. The rest of the volunteers managed to stay within the concentric circles but not follow the folks seeking out the X on the floor. But as soon as the researchers upped the number to 5 percent or more, the entire crowd of two hundred ended up following and everyone made it to the target.

  In an interview, Krause explained that the smaller groups’ goal was not just to wander but to walk to the target while staying with a group. It became a “self-organized process because nobody has knowledge of what the group is collectively about, or what the individuals all know. Everybody is just following his or her local route. So as a result, we see collective locomotion towards the target.”

  This theory applies whether you have 5 percent, 10 percent, or even 50 percent headed in one direction. The whole group will always reach the target if 5 percent or more knowingly or unknowingly lead the way.
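The threshold behavior of the rule of 5 percent can be sketched the same way. The 5 percent cutoff and the 200-person crowd come from the Cologne experiment described above; the model itself, including the function name, is a hypothetical illustration rather than the researchers’ method:

```python
# Toy sketch of the "rule of 5 percent": if at least 5% of a crowd is
# quietly steered toward a target, the whole crowd arrives; below that
# threshold, only the informed few reach it. Names are illustrative.

THRESHOLD = 0.05


def who_reaches_target(crowd_size: int, informed: int) -> int:
    """Number of people who end up at the target."""
    if informed / crowd_size >= THRESHOLD:
        return crowd_size  # the informed minority pulls everyone along
    return informed        # below quorum, only the informed arrive


assert who_reaches_target(200, 10) == 200   # 5% -> the entire crowd follows
assert who_reaches_target(200, 5) == 5      # 2.5% -> only those five arrive
assert who_reaches_target(200, 100) == 200  # 50% behaves no differently
```

The all-or-nothing step at 5 percent is the point: past the threshold, adding more informed walkers changes nothing about the outcome.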

  The rule of 5 percent becomes increasingly important in settings in which a group shares information about a predator or food. Online, in the absence of both predators and food, we collectively avoid substandard, inaccurate, or ineffectual content and seek premium, quality information. Krause believes that these tests show that when a “few individuals, or a small proportion [of a group], receive information that the others don’t have, then they can become disproportionately influential” within a group. When you apply these findings to our online experiences, they illustrate how anyone, regardless of background or expertise, can become an influential individual within a group.

  Krause believes that when we all have the ability to share data, the information sharing becomes completely egalitarian. If you have distinct information at a specific moment, you will become the temporary leader of the group, with the ability to influence the flow and formation of the swarm.

  There’s another important component to the ebb and flow of sharing and leading. Online, just as in these real-life studies, positive feedback plays a key role. “An individual does something that is copied, and the more individuals copy it, the stronger the urge becomes for others to follow suit,” he said. If you imagine a swarm of insects floating through the air back and forth, or a school of fish, or even the people in the testing facility in Germany, they move in an elegant swooping circular pattern as the group’s leaders change and gather new information.

  Something similar can happen online with the information we share and consume. Any single individual can find something interesting and send it to the group, and if it’s stimulating and appealing, the members in turn share it with their own communities. The mass of content seekers gravitates toward the X on the Web, and then the pattern begins all over again.

  If the news is that important, it will find me.

  —A college student explaining his news habits in a focus group

  So are we really nothing more than a school of dull fish? Could anyone with an audience of 5 percent lead and change an entire online group? Couldn’t a smarmy individual go off into one online world or another, recruit a few hundred people within a network, and drive you to click on a link?

  Luckily and happily, no, because in real life in the online world, we aren’t all locked up in a big hall with our arms nearly touching. Each community we set up is individualized to each of us. For instance, you’re the queen bee of your own anchoring communities, the person who filters the information you most want. But by belonging to a social network, you’re also a worker in someone else’s hive. Since no two social groups are alike, the whole group becomes notoriously difficult, if not impossible, to control.

  Most often, you’re not actually leading the group. You’re simply a part of the information sharing, harboring a collective intelligence. You may decide who enters your own network, accepting friends’ requests or following someone’s actions online, but you don’t control what they share and consume. You just decide if you’re going to pay attention to them.

  You also don’t seek out the same information as others in your communities. Your queries and interests are based on information channels that differ from the ones I use. Yet if Sam H. and others in my Foursquare world start to rave about a new restaurant, I probably will check it out. If several Twitter friends tell me about a great story or share breaking news, I will pay attention. My anchoring communities will bring the news or their discoveries to me, helping me sort, filter, and distribute a living and ever-changing stream of information and experiences.

 
