The Shallows

by Nicholas Carr


  Google’s Silicon Valley headquarters—the Googleplex—is the Internet’s high church, and the religion practiced inside its walls is Taylorism. The company, says CEO Eric Schmidt, is “founded around the science of measurement.” It is striving to “systematize everything” it does.4 “We try to be very data-driven, and quantify everything,” adds another Google executive, Marissa Mayer. “We live in a world of numbers.”5 Drawing on the terabytes of behavioral data it collects through its search engine and other sites, the company carries out thousands of experiments a day and uses the results to refine the algorithms that increasingly guide how all of us find information and extract meaning from it.6 What Taylor did for the work of the hand, Google is doing for the work of the mind.

  The company’s reliance on testing is legendary. Although the design of its Web pages may appear simple, even austere, each element has been subjected to exhaustive statistical and psychological research. Using a technique called “split A/B testing,” Google continually introduces tiny permutations in the way its sites look and operate, shows different permutations to different sets of users, and then compares how the variations influence the users’ behavior—how long they stay on a page, the way they move their cursor about the screen, what they click on, what they don’t click on, where they go next. In addition to the automated online tests, Google recruits volunteers for eye-tracking and other psychological studies at its in-house “usability lab.” Because Web surfers evaluate the contents of pages “so quickly that they make most of their decisions unconsciously,” remarked two Google researchers in a 2009 blog post about the lab, monitoring their eye movements “is the next best thing to actually being able to read their minds.”7 Irene Au, the company’s director of user experience, says that Google relies on “cognitive psychology research” to further its goal of “making people use their computers more efficiently.”8
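
  In outline, the technique can be sketched in a few lines of code. The Python fragment below is illustrative only; the variant names, user IDs, and click tallies are invented. Each visitor is deterministically bucketed into one version of a page, and the click-through rates of the two buckets are then compared to judge whether the difference is larger than chance would explain.

    import hashlib
    import math

    def assign_variant(user_id: str, variants=("A", "B")) -> str:
        """Deterministically bucket a user by hashing their ID, so the
        same visitor always sees the same version of the page."""
        digest = hashlib.md5(user_id.encode()).hexdigest()
        return variants[int(digest, 16) % len(variants)]

    def z_score(clicks_a, views_a, clicks_b, views_b):
        """Two-proportion z-test: is the gap between the variants'
        click-through rates bigger than random noise would produce?"""
        p_a, p_b = clicks_a / views_a, clicks_b / views_b
        pooled = (clicks_a + clicks_b) / (views_a + views_b)
        se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
        return (p_a - p_b) / se

    print(assign_variant("user-42"))                    # stable bucket for this visitor
    print(round(z_score(310, 10_000, 350, 10_000), 2))  # beyond roughly +/-1.96, the
                                                        # difference is significant at 5%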

  Subjective judgments, including aesthetic ones, don’t enter into Google’s calculations. “On the web,” says Mayer, “design has become much more of a science than an art. Because you can iterate so quickly, because you can measure so precisely, you can actually find small differences and mathematically learn which one is right.”9 In one famous trial, the company tested forty-one different shades of blue on its toolbar to see which shade drew the most clicks from visitors. It carries out similarly rigorous experiments on the text it puts on its pages. “You have to try and make words less human and more a piece of the machinery,” explains Mayer.10

  In his 1993 book Technopoly, Neil Postman distilled the main tenets of Taylor’s system of scientific management. Taylorism, he wrote, is founded on six assumptions: “that the primary, if not the only, goal of human labor and thought is efficiency; that technical calculation is in all respects superior to human judgment; that in fact human judgment cannot be trusted, because it is plagued by laxity, ambiguity, and unnecessary complexity; that subjectivity is an obstacle to clear thinking; that what cannot be measured either does not exist or is of no value; and that the affairs of citizens are best guided and conducted by experts.”11 What’s remarkable is how well Postman’s summary encapsulates Google’s own intellectual ethic. Only one tweak is required to bring it up to date. Google doesn’t believe that the affairs of citizens are best guided by experts. It believes that those affairs are best guided by software algorithms—which is exactly what Taylor would have believed had powerful digital computers been around in his day.

  Google also resembles Taylor in the sense of righteousness it brings to its work. It has a deep, even messianic faith in its cause. Google, says its CEO, is more than a mere business; it is a “moral force.”12 The company’s much-publicized “mission” is “to organize the world’s information and make it universally accessible and useful.”13 Fulfilling that mission, Schmidt told the Wall Street Journal in 2005, “will take, current estimate, 300 years.”14 The company’s more immediate goal is to create “the perfect search engine,” which it defines as “something that understands exactly what you mean and gives you back exactly what you want.”15 In Google’s view, information is a kind of commodity, a utilitarian resource that can, and should, be mined and processed with industrial efficiency. The more pieces of information we can “access” and the faster we can distill their gist, the more productive we become as thinkers. Anything that stands in the way of the speedy collection, dissection, and transmission of data is a threat not only to Google’s business but to the new utopia of cognitive efficiency it aims to construct on the Internet.

  GOOGLE WAS BORN of an analogy—Larry Page’s analogy. The son of one of the pioneers of artificial intelligence, Page was surrounded by computers from an early age—he recalls being “the first kid in my elementary school to turn in a word-processed document”16—and went on to study engineering as an undergraduate at the University of Michigan. His friends remember him as being ambitious, smart, and “nearly obsessed with efficiency.”17 While serving as president of Michigan’s engineering honor society, he spearheaded a brash, if ultimately futile, campaign to convince the school’s administrators to build a monorail through the campus. In the fall of 1995, Page headed to California to take a prized spot in Stanford University’s doctoral program in computer science. Even as a young boy, he had dreamed of creating a momentous invention, something that “would change the world.”18 He knew there was no better place than Stanford, Silicon Valley’s frontal cortex, to make the dream come true.

  It took only a few months for Page to land on a topic for his dissertation: the vast new computer network called the World Wide Web. Launched on the Internet just four years earlier, the Web was growing explosively—it had half a million sites and was adding more than a hundred thousand new ones every month—and the network’s incredibly complex and ever-shifting arrangement of nodes and links had come to fascinate mathematicians and computer scientists. Page had an idea that he thought might unlock some of its secrets. He had realized that the links on Web pages are analogous to the citations in academic papers. Both are signifiers of value. When a scholar, in writing an article, makes a reference to a paper published by another scholar, she is vouching for the importance of that other paper. The more citations a paper garners, the more prestige it gains in its field. In the same way, when a person with a Web page links to someone else’s page, she is saying that she thinks the other page is important. The value of any Web page, Page saw, could be gauged by the links coming into it.

  Page had another insight, again drawing on the citations analogy: not all links are created equal. The authority of any Web page can be gauged by how many incoming links it attracts. A page with a lot of incoming links has more authority than a page with only one or two. The greater the authority of a Web page, the greater the worth of its own outgoing links. The same is true in academia: earning a citation from a paper that has itself been much cited is more valuable than receiving one from a less cited paper. Page’s analogy led him to realize that the relative value of any Web page could be estimated through a mathematical analysis of two factors: the number of incoming links the page attracted and the authority of the sites that were the sources of those links. If you could create a database of all the links on the Web, you would have the raw material to feed into a software algorithm that could evaluate and rank the value of all the pages on the Web. You would also have the makings of the world’s most powerful search engine.
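
  Taken together, the two insights describe, in outline, the iterative calculation later published as PageRank. A minimal Python sketch follows; the toy four-page web and the damping factor here are illustrative assumptions, not Google’s actual data or parameters.

    def pagerank(links, damping=0.85, iterations=50):
        """Repeatedly pass each page's value along its outgoing links:
        a link from an authoritative page counts for more than a link
        from an obscure one."""
        pages = list(links)
        rank = {p: 1.0 / len(pages) for p in pages}
        for _ in range(iterations):
            new_rank = {}
            for p in pages:
                # Rank flowing into p: each in-linking page q splits its
                # own rank evenly among its outgoing links.
                incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
                new_rank[p] = (1 - damping) / len(pages) + damping * incoming
            rank = new_rank
        return rank

    # A toy web: each page maps to the pages it links out to.
    web = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
    for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
        print(page, round(score, 3))  # "c", linked to by three pages, ranks highest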

  The dissertation never got written. Page recruited another Stanford graduate student, a math prodigy named Sergey Brin who had a deep interest in data mining, to help him build his search engine. In the summer of 1996, an early version of Google—then called BackRub—debuted on Stanford’s Web site. Within a year, BackRub’s traffic had overwhelmed the university’s network. If they were going to turn their search service into a real business, Page and Brin saw, they were going to need a lot of money to buy computing gear and network bandwidth. In the summer of 1998, a wealthy Silicon Valley investor came to the rescue, cutting them a check for a hundred grand. They moved their budding company out of their dorms and into a couple of spare rooms in a friend-of-a-friend’s house in nearby Menlo Park. In September they incorporated as Google Inc. They chose the name—a play on googol, the word for the number ten raised to the hundredth power—to highlight their goal of organizing “a seemingly infinite amount of information on the web.” In December, an article in PC Magazine praised the new search engine with the quirky name, saying it “has an uncanny knack for returning extremely relevant results.”19

  Thanks to that knack, Google was soon processing most of the millions—and then billions—of Internet searches being conducted every day. The company became fabulously successful, at least as measured by the traffic running through its site. But it faced the same problem that had doomed many dot-coms: it hadn’t been able to figure out how to turn a profit from all that traffic. No one would pay to search the Web, and Page and Brin were averse to injecting advertisements into their search results, fearing it would corrupt Google’s pristine mathematical objectivity. “We expect,” they had written in a scholarly paper early in 1998, “that advertising-funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.”20

  But the young entrepreneurs knew that they would not be able to live off the largesse of venture capitalists forever. Late in 2000, they came up with a clever plan for running small, textual advertisements alongside their search results—a plan that would require only a modest compromise of their ideals. Rather than selling advertising space for a set price, they decided to auction the space off. It wasn’t an original idea—another search engine, GoTo, was already auctioning ads—but Google gave it a new spin. Whereas GoTo ranked its search ads according to the size of advertisers’ bids—the higher the bid, the more prominent the ad—Google in 2002 added a second criterion. An ad’s placement would be determined not only by the amount of the bid but by the frequency with which people actually clicked on the ad. That innovation ensured that Google’s ads would remain, as the company put it, “relevant” to the topics of searches. Junk ads would automatically be screened from the system. If searchers didn’t find an ad relevant, they wouldn’t click on it, and it would eventually disappear from Google’s site.
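
  The rule is simple enough to state in a few lines. In the sketch below (the bids and click rates are invented), an ad’s score is its bid multiplied by its observed click-through rate, so a cheap ad that people actually click can outrank an expensive one they ignore.

    def rank_ads(ads):
        """Order ads by bid x click-through rate, the two criteria the
        auction combines."""
        return sorted(ads, key=lambda ad: ad["bid"] * ad["ctr"], reverse=True)

    # Hypothetical auction: X bids more, but Y's higher click rate wins.
    ads = [
        {"name": "X", "bid": 2.00, "ctr": 0.010},  # score 0.020
        {"name": "Y", "bid": 1.20, "ctr": 0.025},  # score 0.030
    ]
    for ad in rank_ads(ads):
        print(ad["name"], round(ad["bid"] * ad["ctr"], 3))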

  The auction system, named AdWords, had another, very important result: by tying ad placement to clicks, it increased click-through rates substantially. The more often people clicked on an ad, the more frequently and prominently the ad would appear on search result pages, bringing even more clicks. Since advertisers paid Google by the click, the company’s revenues soared. The AdWords system proved so lucrative that many other Web publishers contracted with Google to place its “contextual ads” on their sites as well, tailoring the ads to the content of each page. By the end of the decade, Google was not just the largest Internet company in the world; it was one of the largest media companies, taking in more than $22 billion in sales a year, almost all of it from advertising, and turning a profit of about $8 billion. Page and Brin were each worth, on paper, more than $10 billion.

  Google’s innovations have paid off for its founders and investors. But the biggest beneficiaries have been Web users. Google has succeeded in making the Internet a far more efficient informational medium. Earlier search engines tended to get clogged with data as the Web expanded—they couldn’t index the new content, much less separate the wheat from the chaff. Google’s engine, by contrast, has been engineered to produce better results as the Web grows. The more sites and links Google evaluates, the more precisely it can classify pages and rank their quality. And as traffic increases, Google is able to collect more behavioral data, allowing it to tailor its search results and advertisements ever more precisely to users’ needs and desires. The company has also invested many billions of dollars in building computer-packed data centers around the world, ensuring that it can deliver search results to its users in milliseconds. Google’s popularity and profitability are well deserved. The company plays an invaluable role in helping people navigate the hundreds of billions of pages that now populate the Web. Without its search engine, and the other engines that have been built on its model, the Internet would have long ago become a Tower of Digital Babel.

  But Google, as the supplier of the Web’s principal navigational tools, also shapes our relationship with the content that it serves up so efficiently and in such profusion. The intellectual technologies it has pioneered promote the speedy, superficial skimming of information and discourage any deep, prolonged engagement with a single argument, idea, or narrative. “Our goal,” says Irene Au, “is to get users in and out really quickly. All our design decisions are based on that strategy.”21 Google’s profits are tied directly to the velocity of people’s information intake. The faster we surf across the surface of the Web—the more links we click and pages we view—the more opportunities Google gains to collect information about us and to feed us advertisements. Its advertising system, moreover, is explicitly designed to figure out which messages are most likely to grab our attention and then to place those messages in our field of view. Every click we make on the Web marks a break in our concentration, a bottom-up disruption of our attention—and it’s in Google’s economic interest to make sure we click as often as possible. The last thing the company wants is to encourage leisurely reading or slow, concentrated thought. Google is, quite literally, in the business of distraction.

  GOOGLE MAY YET turn out to be a flash in the pan. The lives of Internet companies are rarely nasty or brutish, but they do tend to be short. Because their businesses are ethereal, constructed of invisible strands of software code, their defenses are fragile. All it takes to render a thriving online business obsolete is a sharp programmer with a fresh idea. The invention of a more precise search engine or a better way to circulate ads through the Net could spell ruin for Google. But no matter how long the company is able to maintain its dominance over the flow of digital information, its intellectual ethic will remain the general ethic of the Internet as a medium. Web publishers and toolmakers will continue to attract traffic and make money by encouraging and feeding our hunger for small, rapidly dispensed pieces of information.

  The history of the Web suggests that the velocity of data will only increase. During the 1990s, most online information was found on so-called static pages. They didn’t look all that different from the pages in magazines, and their content remained relatively fixed. The trend since then has been to make pages ever more “dynamic,” updating them regularly and often automatically with new content. Specialized blogging software, introduced in 1999, made rapid-fire publishing simple for everyone, and the most successful bloggers soon found that they needed to post many items a day to keep fickle readers engaged. News sites followed suit, serving up fresh stories around the clock. RSS readers, which became popular around 2005, allowed sites to “push” headlines and other bits of information to Web users, putting an even greater premium on the frequency of information delivery.

  The greatest acceleration has come recently, with the rise of social networks like MySpace, Facebook, and Twitter. These companies are dedicated to providing their millions of members with a never-ending “stream” of “real-time updates,” brief messages about, as a Twitter slogan puts it, “what’s happening right now.” By turning intimate messages—once the realm of the letter, the phone call, the whisper—into fodder for a new form of mass media, the social networks have given people a compelling new way to socialize and stay in touch. They’ve also placed a whole new emphasis on immediacy. A “status update” from a friend, co-worker, or favorite celebrity loses its currency within moments of being issued. To be up to date requires the continual monitoring of message alerts. The competition among the social networks to deliver ever-fresher and more plentiful messages is fierce. When, in early 2009, Facebook responded to Twitter’s rapid growth by announcing that it was revamping its site to, as it put it, “increase the pace of the stream,” its founder and chief executive, Mark Zuckerberg, assured its quarter of a billion members that the company would “continue making the flow of information even faster.”22 Unlike early book printers, who had strong economic incentives to promote the reading of older works as well as recent ones, online publishers battle to distribute the newest of the new.

  Google hasn’t been sitting still. To combat the upstarts, it has been revamping its search engine to ratchet up its speed. The quality of a page, as determined by the links coming into it, is no longer Google’s chief criterion in ranking search results. In fact, it’s now only one of two hundred different “signals” that the company monitors and measures, according to Amit Singhal, a top Google engineer.23 One of its major recent thrusts has been to place a greater priority on what it calls the “freshness” of the pages it recommends. Google not only identifies new or revised Web pages much more quickly than it used to—it now checks the most popular sites for updates every few seconds rather than every few days—but for many searches it skews its results to favor newer pages over older ones. In May 2009, the company introduced a new twist to its search service, allowing users to bypass considerations of quality entirely and have results ranked according to how recently the information was posted to the Web. A few months later, it announced a “next-generation architecture” for its search engine that bore the telling code name Caffeine.24 Citing Twitter’s achievements in speeding the flow of data, Larry Page said that Google wouldn’t be satisfied until it is able “to index the Web every second to allow real-time search.”25

  The company is also striving to further expand its hold on Web users and their data. With the billions in profits churned out by AdWords, it has been able to diversify well beyond its original focus on searching Web pages. It now has specialized search services for, among other things, images, videos, news stories, maps, blogs, and academic journals, all of which feed into the results supplied by its main search engine. It also offers computer operating systems, such as Android for smartphones and Chrome for PCs, as well as a slew of online software programs, or “apps,” including e-mail, word processing, blogging, photo storage, feed reading, spreadsheets, calendars, and Web hosting. Google Wave, an ambitious social-networking service launched at the end of 2009, allows people to monitor and update various multimedia message threads on a single densely packed page, which refreshes its contents automatically and almost instantaneously. Wave, says one reporter, “turns conversations into fast-moving group streams-of-consciousness.”26

 
