by Nick Bilton
How would it work? Let’s say you live in Brooklyn, New York, and you want to find a good Italian restaurant close to the Brooklyn Bridge. You could go to the search engine and type in a query such as “good Italian restaurant” or “Italian restaurant Brooklyn.” You would get the names of many Italian restaurants, but that doesn’t mean you’re going to find a good meal. The results would begin with any Brooklyn restaurant whose name happens to match “good Italian restaurant,” followed by a mess of other listings, and you still wouldn’t really know what’s good and what’s not.
Now imagine that you went to Google and typed in that search query. Instead of an algorithmic answer, your Google results page displayed commentary from people you trust who had eaten Italian food in the area: your friends, family, neighbors, and coworkers, plus whoever else you have designated as trustworthy people in your friend and anchoring communities.
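As a rough sketch of how such trust-filtered results might work, imagine the search engine simply screening everyone’s reviews against the list of people you’ve marked as trustworthy. The Python snippet below is a toy illustration; the names, reviews, and ranking rule are all invented, and nothing here reflects Google’s actual system.

```python
# A toy sketch of trust-filtered search: given everyone's restaurant
# reviews, surface only the opinions of people you've marked as trusted.
# All names and data are invented for illustration.

reviews = [
    {"author": "stranger42", "restaurant": "Trattoria Roma", "stars": 5},
    {"author": "mom",        "restaurant": "Trattoria Roma", "stars": 4},
    {"author": "coworker_j", "restaurant": "Luigi's",        "stars": 2},
    {"author": "stranger17", "restaurant": "Luigi's",        "stars": 5},
]

trusted = {"mom", "coworker_j", "neighbor_sue"}  # your anchoring community

def trusted_results(reviews, trusted):
    """Keep only reviews written by trusted people, best-rated first."""
    picks = [r for r in reviews if r["author"] in trusted]
    return sorted(picks, key=lambda r: r["stars"], reverse=True)

for r in trusted_results(reviews, trusted):
    print(f'{r["restaurant"]}: {r["stars"]} stars ({r["author"]})')
# Trattoria Roma: 4 stars (mom)
# Luigi's: 2 stars (coworker_j)
```

The anonymous five-star raves simply never appear; what’s left is a short list you have a personal reason to believe.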
We won’t see these kinds of results from Google overnight; the algorithms and artificial intelligence needed to accurately predict the kind of Italian food you might like are still being developed, but they are getting closer. Making accurate personalized recommendations based on your likes and dislikes and the opinions of other people you trust is far from simple for a computer program to work out.
The difficulty of making these predictions was highlighted by Clive Thompson, a science, technology, and culture writer who summed up the challenges of making recommendations as “The Napoleon Dynamite Problem.”1 Movies like Napoleon Dynamite, Thompson points out, are anomalies for Netflix’s recommendation engine. As Thompson writes, “The movie has been rated more than two million times in the Netflix database, and the ratings are disproportionately one or five stars.” People either love it or hate it, and there is no rhyme or reason to which of us falls in which category.
Because the movie is so quirky and eccentric, Netflix can’t properly predict how people will rate Napoleon Dynamite and therefore can’t accurately recommend it to you.
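To see why that kind of love-it-or-hate-it split defeats a predictor, consider a toy Python sketch (the numbers here are invented, not Netflix’s actual data): when the ratings pile up at one star and five stars, the system’s best single guess lands near three stars, a score that describes almost no actual viewer.

```python
from statistics import mean, stdev

# Invented ratings for a love-it-or-hate-it movie: almost all 1s and 5s.
polarizing = [1, 5, 5, 1, 5, 1, 1, 5, 5, 1, 5, 1]
# Invented ratings for a merely average movie: everyone agrees it's a 3.
middling = [3, 3, 4, 3, 2, 3, 3, 4, 3, 3, 2, 3]

for name, ratings in [("polarizing", polarizing), ("middling", middling)]:
    print(f"{name}: predicted {mean(ratings):.1f} stars, "
          f"spread {stdev(ratings):.1f}")
# Both movies predict about 3.0 stars, but the huge spread on the
# polarizing one means that prediction is badly wrong for nearly everyone.
```

Both averages come out the same; only the spread reveals that one prediction is trustworthy and the other is nearly useless.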
But these services can’t afford to get this wrong. If they predict inaccurately, even just once, you may not trust them a second time. If Netflix recommends a movie to you and you really dislike it, the next time you make a rental choice, you’re not going to be as inclined to trust the little box that says, “You’ll probably like this movie tonight.”
Eric Schmidt sees this change too. He has said that Google plans to change how it filters and prioritizes your search results over the next five years to accommodate the fundamental changes happening online as sites such as Facebook and Flickr bring together millions of individual viewpoints and insights. “You will tend to listen to other people more,” he says. Young people in high school, college, and just beyond are sharing everything, and they are beginning to enter the workplace. Schmidt says they will bring their filtering and community mentality with them into every aspect of their lives over the next few years.
I’m already seeing this happen firsthand. Last year when a friend moved to New York City, instead of buying a guidebook or even searching the Web to find the best area of the city to live in, he created a simple online survey asking about the most important issues for him in finding a new neighborhood and apartment. He sent the survey to thirty or so friends who either lived in New York, or used to, and then used the information to pick his next home. Someday, he might be able to query Google for this insight based on the information his anchoring communities had contributed about their favorite neighborhoods over the years.
Google’s Schmidt thinks such a reality isn’t too far away, stating that in the next few years, no two Google search results will look the same. If you and I both live in Brooklyn and search for an Italian restaurant, we may get completely different search results, depending on the people in our online communities.
This brings up some interesting questions about how we perceive trust in a digital world. How do we decide what, and whom, to believe online? If I have a mutual friend online, a friend of a friend, someone I’ve never even met in real life, do I automatically trust her, too? What happens when I land on a website I’ve never seen before? How do I know what I’m reading is true and accurate?
So Who Do You Trust?
Traditional media sources build on brands, reputations, and previous experiences to help sell the idea of trust. For example, most people perceive the Wall Street Journal as trustworthy when it comes to in-depth stories on the world of finance, even though ownership and management of the paper have changed in the last two years. People magazine has the trust of those who want insider gossip about the world of celebrities, and Wired magazine has the trust of the technology community on technology trends. But if you took these brands and switched coverage on stories, you probably would see more skepticism. You’d be less likely to trust a People article on the latest technology advance in microchips or a Wired story speculating on the relationship of Brad Pitt and Angelina Jolie.
Online, however, this sort of media mash-up is already happening. Mainstream outlets, corporations, stores, friends, family, even government, are filtering all kinds of stories and information to you through any number of delivery channels—through social networks, their own websites, and mobile applications. In some cases, they’re simply forwarding information from one another. In other cases, the original information may have been forwarded several times. As it all flows in on the same device, with one piece looking just like another, we are challenged somehow to make good decisions about what to believe and what to discard.
So where do we start? Not surprisingly, we tend to trust our friends, family members, and peers deeply. A 2009 Nielsen Online survey of 25,000 consumers in more than fifty countries found that those who participated trusted their friends, family, and peers for advertising and product recommendations 90 percent of the time.
As a rule, we tend to be more distrusting of organizations, news outlets, and government. Over the years, the Pew Research Center for the People & the Press has regularly surveyed the public’s views on trust in society. Looking at the trends since the mid-1980s reminds you of the kids’ slide at the neighborhood park. The numbers just keep going down. One recent survey showed that between 1985 and 2009, the general public’s trust in the accuracy of the news media fell from 55 percent to 29 percent. (Those aren’t very reassuring numbers if you make your living reporting and writing news stories.) A separate 2007 study reported that 29 percent of those surveyed trusted large corporations most of the time, although 69 percent trusted them some of the time.
So between our trust in friends and family, our wobbly trust of television and newspapers, and our uncertainty about big companies, there’s a lot of available space for others to fill in. Interestingly, people tend to feel somewhat better and more trusting about strangers than about the institutions they can clearly identify and check out. Another Pew survey asked people in different countries about their feelings of trust toward strangers. In America, 58 percent of those surveyed believed that “most people in society are trustworthy.” And although these numbers ranged between 41 percent and 79 percent in other Western countries, on average people tend to trust strangers a little less than 60 percent of the time.
Rick Wilson, a professor of political science at Rice University in Houston, says that numerous research studies and papers show that more than half of society generally trusts complete strangers in an initial interaction.2 Although he sees people apply dramatically higher levels of trust to friends, family, and peers, he says that our conflicted responses toward politicians and companies have given our online communities more of an opening to win our trust and supply more of our information and insight. That might be the reason we’ve been so quick to embrace online social networks, he says.
I trust in these complete and often anonymous strangers when I read Amazon.com reviews before I buy a book or when I look at online restaurant reviews before choosing whether to try a new place. True, I don’t know who these reviewers are or whether they know what kinds of food or books I like. The restaurant might have deceitfully written some of these reviews, or a competitor might have penned them. But overall, I have developed enough trust in these online reviewers to use their postings in making some general decisions.
Am I nuts to do this? Wilson reassures me that I’m not because I’m not rigid in my assessments. Trust levels continually change, he says, making trust actually something of a game. If I believe what certain reviewers say (and my experience at the restaurant confirms that the salmon is an excellent dish), my trust level rises. If the “outstanding service” turns out to be wretched, my trust falls.
In addition, he reminds me, once someone breaks our trust, it can take a very long time to gain it back—if ever.
Take the website Yelp.com, which allows anyone to write a review of a restaurant or business. The site opened for business in 2004 and grew steadily, winning fans who could find great barbecue on a road trip or the best place to fix a vacuum cleaner. But there were questions from the beginning: How could anyone trust a random person to review a business? What if business owners asked their friends to write reviews or, worse, competitors dumped on one another anonymously? Still, the site generated millions of reviews and became well known for its vast database of locations and opinions on businesses.
Then, in 2009, cracks began to appear in the veneer. Several blogs, business magazines, and newspapers, including the Wall Street Journal and the New York Times, reported accusations that the company was running something of an “extortion scheme” in which Yelp employees would call the owners or managers of a business and say that they would remove negative reviews for a $300 advertising fee. If the business declined to pay, Yelp might highlight the negative reviews.
In February 2010, a group of businesses filed a class-action lawsuit against Yelp over its aggressive sales tactics. Although Yelp denied the claims, the site’s credibility was tainted and many users lost their trust in it. After I wrote about it, one commenter noted, “I believe this about Yelp. I’ve posted a few reviews on Yelp and for some reason the negative ones never appear (only the positive ones). Since that experience, I’ve never trusted Yelp reviews again!”
As we add and remove people and computers from our anchoring communities, another way to look at our trust in news and other information is as something like a stock market. Each individual or entity within my wide range of networks and connections doesn’t receive the same level of trust. Instead, I weigh each one separately, assigning a different level of authenticity and trust to each person, almost like individual stocks in a market. In fact, you can think of it as a “trust market.”
Imagine a portfolio of stocks that constantly fluctuate in value. Some ebb and flow, others remain stagnant for long periods, still others rise slowly, and some fall off a cliff. We are constantly applying this mentality to how we trust individuals and the content they deliver within our online communities.
I trust my friends who are news-obsessed to share interesting current events and political stories. I trust my neighbors to share relevant local information, even restaurant reviews. I trust my technology-obsessed friends and colleagues to pass along tech news they find or create. But I wouldn’t trust any of them to diagnose an illness or to water my plants, for that matter. They command different levels of trust in my trust market, and they all help me sort through the vast and overwhelming mass of online content. But I also understand that these individual markets can grow and change shape at any given moment.
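One loose way to picture this, sketched below in Python, is as a table of scores keyed by person and topic, nudged up or down as recommendations pan out or flop. The people, topics, and update amounts are all invented; this is an illustration of the metaphor, not how any real service computes trust.

```python
# Toy "trust market": a per-person, per-topic score that rises when a
# recommendation pans out and falls when it doesn't. Names, topics, and
# the +/- step size are invented for illustration.

trust = {
    ("dave", "politics"):    0.9,
    ("sue",  "restaurants"): 0.7,
    ("kara", "tech"):        0.8,
    ("dave", "medicine"):    0.1,   # great on politics, but not a doctor
}

def update(trust, person, topic, recommendation_was_good, step=0.1):
    """Nudge one 'stock' in the portfolio up or down, clamped to [0, 1]."""
    score = trust.get((person, topic), 0.5)  # strangers start near 50/50
    score += step if recommendation_was_good else -step
    trust[(person, topic)] = min(1.0, max(0.0, score))

update(trust, "sue", "restaurants", True)    # the salmon really was excellent
update(trust, "kara", "tech", False)         # a gadget tip that flopped

print(round(trust[("sue", "restaurants")], 2))  # 0.8  (trust rose)
print(round(trust[("kara", "tech")], 2))        # 0.7  (trust fell)
```

Note that the same person holds different scores on different topics, which is exactly why I’ll take Dave’s political analysis but not his medical advice.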
The shifting nature of trust is one reason I think we’re moving toward investing more of our attention and confidence in individuals online and away from traditional companies and their brands. Online, building individual name recognition and trust may be more important than simply affiliating with a trusted institution. For instance, I admire the content in the New York Times, but when I go online, I look specifically for media coverage from the columnist David Carr or for simple recipes from the Times recipe writer Mark Bittman. I seek out their blog posts rather than their individual newspaper articles; there I can see their television appearances as well as their columns and read additional tips and suggestions from their readers. After following them for a while, I know that I trust and value their advice.
And it’s not just “big-name, big-brand” storytellers whom we choose to trust. People like Carr and Bittman have a clear platform for their views, but we’re also seeing “no names” build big brands around their big personalities: people who anoint themselves and then build their trust level by delivering content that is valued. If you’re an Apple computer enthusiast, you surely will have heard of John Gruber, a Mac expert and writer. He isn’t associated with any big-name news outlets or magazines, but he has built a loyal subscriber base with his website, daringfireball.net. He is the site’s sole employee and makes a very healthy six-figure income by selling ads and giving talks to companies. Gary Vaynerchuk, a bigger-than-blogger personality, developed Wine Library TV, his own online network of wine reviews and ratings, which claims 80,000 viewers a day. If Gruber and Vaynerchuk can be their own personalities today without the backing of a big-name brand like Wired magazine, it’s entirely possible that Nick Kristof and Maureen Dowd could still be their own trusted personalities without the New York Times. Down the road, I think we are likely to see more reporters and reviewers become known and trusted largely because they have built their own brands, not because of the organization they may (or may not) work for.
Hello, Computer, Would You Like to Be My Friend?
You may not trust a computer algorithm today to tell you where to eat on Saturday night or find a new doctor for you. But eventually you will—and advertisers will try to take advantage of that.
Not all new “friends” in our online communities will be human. More and more, computers belonging to social-networking services, search engines, and maybe media sites will help us sort through the clutter by tailoring information just for us.
Right now, most of the promotions that come to your e-mail inbox or Twitter feed are generic, intended for broad swaths of customers. But as Facebook users already know, advertising is frequently targeted at you based on your age, your gender, and other information in your profile. A Gmail conversation about dogs may well generate a list of dog-related ads adjacent to your inbox. Search for an address and you’ll see local advertising appear right inside Google Maps. These kinds of smart ads are just the beginning. Even more detailed recommendations are coming, built on mathematical formulas and psychological data derived from the clicks you make online.
The sites that will provide all this just-for-you data are assuming that you’ll be comfortable with a computer knowing a lot about you, just as we’ve grown comfortable with ATMs and online banking. In the early days of computerized banking, many people were extremely nervous about trusting a machine with their deposits and withdrawals. A friend recently remembered that her grandmother sat her down when she was a child and explained that “boys and ATM machines just couldn’t be trusted.” Yet today we use ATMs at delis, on street corners, even inside bank lobbies. There are now nearly 400,000 of the machines dispensing cash, and often they can do more, such as sell stamps or money orders. More often than not, convenience trumps trepidation.
That said, we don’t trust these machines or computers any more quickly or blindly than we trust the real strangers we meet, and we still have a way to go before these machines are smart enough to hold a normal conversation and earn our trust. Just because I’m willing to buy music or a book from iTunes or Amazon doesn’t mean I’m willing to buy music from any old no-name purveyor with a PayPal account.
Then there’s what computer programmers call the cold-start problem.3 That’s what happens when a user doesn’t have any information or data in a system, and so the system is unable to make recommendations and we’re unable to trust that it really knows something about us. If the computer guesses something about us and gets it wrong, we’ll be unlikely to go back to it.
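A common workaround in recommendation systems, sketched below in Python with invented data, is to fall back on whatever is broadly popular until the system has enough history about you to personalize. This is a generic illustration of the tactic, not any particular company’s code.

```python
# Toy cold-start fallback: with no history, recommend what's popular
# overall; once a user has rated things, lean on their own record.
# The data and the "enough history" threshold are invented.

from collections import Counter

all_likes = ["The Office", "Planet Earth", "The Office", "Mad Men",
             "Planet Earth", "The Office"]            # everyone's likes
user_history = {"alice": ["Mad Men", "Breaking Bad"], "bob": []}

def recommend(user, n=2, min_history=1):
    history = user_history.get(user, [])
    if len(history) < min_history:
        # Cold start: no signal about this user yet, so fall back to
        # the globally most-liked titles.
        return [title for title, _ in Counter(all_likes).most_common(n)]
    # Otherwise "personalize" crudely: popular titles the user hasn't
    # already seen (a real system would do far more than this).
    ranked = Counter(all_likes).most_common(n + len(history))
    return [t for t, _ in ranked if t not in history][:n]

print(recommend("bob"))    # ['The Office', 'Planet Earth']  (cold start)
print(recommend("alice"))  # ['The Office', 'Planet Earth']  (skips Mad Men)
```

The fallback keeps the first recommendation from being a wild guess, which matters precisely because, as noted above, one bad guess may be all it takes to lose a user for good.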
One way programmers hope to tackle the cold-start problem is to filter and monitor everything about our online actions and our anchoring communities, as Google hopes to do. But these computer systems and online networks are often siloed and separated. To get around that, some services ask people to fill out questionnaires. Some folks simply won’t take the time; for others, the surveys make no sense, asking strange questions in the hope of catching a glimmer of insight into your personality so the service can offer better recommendations.
An early study by Timothy Bickmore and Justine Cassell, now at Northwestern University, tried to promote trust in the real estate world by having a computer engage in “small talk.” They used a virtual Realtor named Rae, who started conversations with comical banter like “Sorry about my voice; this is some engineer’s idea of a natural-sounding voice.” After a series of chitchatty questions, Rae began asking more pertinent questions like “What kind of down payment can you make?” and “How many bedrooms are you looking for?”
You’d think that Rae’s conversational repartee would make any user feel comfortable trusting a machine, but Cassell and Bickmore found that the results were more complicated. Small talk had a much more engaging effect on people who described themselves as extroverts; they felt the machine was more credible and even enjoyed the experience. In contrast, the self-described introverts wanted to get straight to the actual real estate questions and found the small talk annoying. It also limited their trust in Rae. A human being would have been able to tell the introverts from the extroverts, but today, conversations with computers are one-size-fits-all.