The Age of Surveillance Capitalism


by Shoshana Zuboff


  Google’s declared state of exception was the backdrop for 2002, the watershed year during which surveillance capitalism took root. The firm’s appreciation of behavioral surplus crossed another threshold that April, when the data logs team arrived at their offices one morning to find that a peculiar phrase had surged to the top of the search queries: “Carol Brady’s maiden name.” Why the sudden interest in a 1970s television character? It was data scientist and logs team member Amit Patel who recounted the event to the New York Times, noting, “You can’t interpret it unless you know what else is going on in the world.”37

  The team went to work to solve the puzzle. First, they discerned that the pattern of queries had produced five separate spikes, each beginning at forty-eight minutes after the hour. Then they learned that the query pattern occurred during the airing of the popular TV show Who Wants to Be a Millionaire? The spikes reflected the successive time zones during which the show aired, ending in Hawaii. In each time zone, the show’s host posed the question of Carol Brady’s maiden name, and in each zone the queries immediately flooded into Google’s servers.

  As the New York Times reported, “The precision of the Carol Brady data was eye-opening for some.” Even Brin was stunned by the clarity of Search’s predictive power, revealing events and trends before they “hit the radar” of traditional media. As he told the Times, “It was like trying an electron microscope for the first time. It was like a moment-by-moment barometer.”38 Google executives were described by the Times as reluctant to share their thoughts about how their massive stores of query data might be commercialized. “There is tremendous opportunity with this data,” one executive confided.39

  Just a month before the Carol Brady moment, while the AdWords team was already working on new approaches, Brin and Page hired Eric Schmidt, an experienced executive, engineer, and computer science Ph.D., as chairman. By August, they appointed him to the CEO’s role. Doerr and Moritz had been pushing the founders to hire a professional manager who would know how to pivot the firm toward profit.40 Schmidt immediately implemented a “belt-tightening” program, grabbing the budgetary reins and heightening the general sense of financial alarm as fund-raising prospects came under threat. A squeeze on workspace found him unexpectedly sharing his office with none other than Amit Patel.

  Schmidt later boasted that as a result of their close quarters over the course of several months, he had instant access to better revenue figures than did his own financial planners.41 We do not know (and may never know) what other insights Schmidt might have gleaned from Patel about the predictive power of Google’s behavioral data stores, but there is no doubt that a deeper grasp of the predictive power of data quickly shaped Google’s specific response to financial emergency, triggering the crucial mutation that ultimately turned AdWords, Google, the internet, and the very nature of information capitalism toward an astonishingly lucrative surveillance project.

  Google’s earliest ads had been considered more effective than most online advertising at the time because they were linked to search queries and Google could track when users actually clicked on an ad, known as the “click-through” rate. Despite this, advertisers were billed in the conventional manner according to how many people viewed an ad. As Search expanded, Google created the self-service system called AdWords, in which a search that used the advertiser’s keyword would include that advertiser’s text box and a link to its landing page. Ad pricing depended upon the ad’s position on the search results page.

  Rival search startup Overture had developed an online auction system for web page placement that allowed it to scale online advertising targeted to keywords. Google would produce a transformational enhancement to that model, one that was destined to alter the course of information capitalism. As a Bloomberg journalist explained in 2006, “Google maximizes the revenue it gets from that precious real estate by giving its best position to the advertiser who is likely to pay Google the most in total, based on the price per click multiplied by Google’s estimate of the likelihood that someone will actually click on the ad.”42 That pivotal multiplier was the result of Google’s advanced computational capabilities trained on its most significant and secret discovery: behavioral surplus. From this point forward, the combination of ever-increasing machine intelligence and ever-more-vast supplies of behavioral surplus would become the foundation of an unprecedented logic of accumulation. Google’s reinvestment priorities would shift from merely improving its user offerings to inventing and institutionalizing the most far-reaching and technologically advanced raw-material supply operations that the world had ever seen. Henceforth, revenues and growth would depend upon more behavioral surplus.
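
  To make that multiplier concrete, the following sketch (with entirely hypothetical advertisers, bids, and click probabilities; this is not Google’s code) ranks candidate ads by the expected revenue just described: the price per click multiplied by the estimated likelihood of a click.

```python
# Illustrative sketch only: ranking candidate ads by expected revenue,
# i.e. the price-per-click bid multiplied by a predicted probability of a
# click. The advertisers, bids, and probabilities are hypothetical.

from dataclasses import dataclass


@dataclass
class Ad:
    advertiser: str
    bid_per_click: float   # what the advertiser offers to pay for one click
    predicted_ctr: float   # estimated probability that this user clicks this ad

    @property
    def expected_revenue(self) -> float:
        return self.bid_per_click * self.predicted_ctr


candidates = [
    Ad("advertiser_a", bid_per_click=2.00, predicted_ctr=0.010),
    Ad("advertiser_b", bid_per_click=0.80, predicted_ctr=0.040),
    Ad("advertiser_c", bid_per_click=1.50, predicted_ctr=0.015),
]

# The best position goes to the ad expected to pay the most in total,
# not simply to the highest bidder.
for ad in sorted(candidates, key=lambda a: a.expected_revenue, reverse=True):
    print(ad.advertiser, f"{ad.expected_revenue:.4f}")
# advertiser_b ranks first despite the lowest bid, because its predicted
# click-through rate makes it the most lucrative placement.
```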

  Google’s many patents filed during those early years illustrate the explosion of discovery, inventiveness, and complexity detonated by the state of exception that led to these crucial innovations and the firm’s determination to advance the capture of behavioral surplus.43 Among these efforts, I focus here on one patent submitted in 2003 by three of the firm’s top computer scientists and titled “Generating User Information for Use in Targeted Advertising.”44 The patent is emblematic of the new mutation and the emerging logic of accumulation that would define Google’s success. Of even greater interest, it also provides an unusual glimpse into the “economic orientation” baked deep into the technology cake by reflecting the mindset of Google’s distinguished scientists as they harnessed their knowledge to the firm’s new aims.45 In this way, the patent stands as a treatise on a new political economics of clicks and its moral universe, before the company learned to disguise this project in a fog of euphemism.

  The patent reveals a pivoting of the backstage operation toward Google’s new audience of genuine customers. “The present invention concerns advertising,” the inventors announce. Despite the enormous quantity of demographic data available to advertisers, the scientists note that much of an ad budget “is simply wasted… it is very difficult to identify and eliminate such waste.”46

  Advertising had always been a guessing game: art, relationships, conventional wisdom, standard practice, but never “science.” The idea of being able to deliver a particular message to a particular person at just the moment when it might have a high probability of actually influencing his or her behavior was, and had always been, the holy grail of advertising. The inventors point out that online ad systems had also failed to achieve this elusive goal. The then-predominant approaches used by Google’s competitors, in which ads were targeted to keywords or content, were unable to identify relevant ads “for a particular user.” Now the inventors offered a scientific solution that exceeded the most-ambitious dreams of any advertising executive:

  There is a need to increase the relevancy of ads served for some user request, such as a search query or a document request… to the user that submitted the request.… The present invention may involve novel methods, apparatus, message formats and/or data structures for determining user profile information and using such determined user profile information for ad serving.47

  In other words, Google would no longer mine behavioral data strictly to improve service for users but rather to read users’ minds for the purposes of matching ads to their interests, as those interests are deduced from the collateral traces of online behavior. With Google’s unique access to behavioral data, it would now be possible to know what a particular individual in a particular time and place was thinking, feeling, and doing. That this no longer seems astonishing to us, or perhaps even worthy of note, is evidence of the profound psychic numbing that has inured us to a bold and unprecedented shift in capitalist methods.

  The techniques described in the patent meant that each time a user queries Google’s search engine, the system simultaneously presents a specific configuration of a particular ad, all in the fraction of a moment that it takes to fulfill the search query. The data used to perform this instant translation from query to ad, a predictive analysis that was dubbed “matching,” went far beyond the mere denotation of search terms. New data sets were compiled that would dramatically enhance the accuracy of these predictions. These data sets were referred to as “user profile information” or “UPI.” These new data meant that there would be no more guesswork and far less waste in the advertising budget. Mathematical certainty would replace all of that.

  Where would UPI come from? The scientists announce a breakthrough. They first explain that some of the new data can be culled from the firm’s existing systems with its continuously accruing caches of behavioral data from Search. Then they stress that even more behavioral data can be hunted and herded from anywhere in the online world. UPI, they write, “may be inferred,” “presumed,” and “deduced.” Their new methods and computational tools could create UPI from integrating and analyzing a user’s search patterns, document inquiries, and myriad other signals of online behaviors, even when users do not directly provide that personal information: “User profile information may include any information about an individual user or a group of users. Such information may be provided by the user, provided by a third-party authorized to release user information, and/or derived from user actions. Certain user information can be deduced or presumed using other user information of the same user and/or user information of other users. UPI may be associated with various entities.”48

  The inventors explain that UPI can be deduced directly from a user’s or group’s actions, from any kind of document a user views, or from an ad landing page: “For example, an ad for prostate cancer screening might be limited to user profiles having the attribute ‘male’ and ‘age 45 and over.’”49 They describe different ways to obtain UPI. One relies on “machine learning classifiers” that predict values on a range of attributes. “Association graphs” are developed to reveal the relationships among users, documents, search queries, and web pages: “user-to-user associations may also be generated.”50 The inventors also note that their methods can be understood only among the priesthood of computer scientists drawn to the analytic challenges of this new online universe: “The following description is presented to enable one skilled in the art to make and use the invention.… Various modifications to the disclosed embodiments will be apparent to those skilled in the art.…”51
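
  The following minimal sketch illustrates, under invented signals and weights, the two general mechanisms the inventors name: deriving a profile attribute from behavioral traces the user never volunteered, and accumulating an association graph that links users, queries, and documents. It is an illustration of the idea only, not a reconstruction of the patented methods.

```python
# Illustrative sketch of the patent's two general ideas: (1) inferring
# user-profile attributes ("UPI") from behavioral traces rather than from
# anything the user volunteered, and (2) an association graph linking
# users, queries, and documents. All signals, labels, and weights are
# hypothetical; this is not the patented method.

from collections import defaultdict

# (1) Attribute inference: a trivial stand-in for a "machine learning
# classifier" that scores the likelihood of an attribute from page visits.
GOLF_PAGES = {"golf-reviews.example", "tee-times.example"}


def infer_attributes(pages_visited: list[str]) -> dict[str, float]:
    golf_hits = sum(1 for page in pages_visited if page in GOLF_PAGES)
    # Presumed, not provided: the user never declared an interest in golf.
    return {"interest:golf": min(1.0, golf_hits / 3)}


# (2) Association graph: edges accumulate weight each time a user's action
# links two entities (user-to-query, query-to-document, user-to-user, etc.).
graph: dict[tuple[str, str], float] = defaultdict(float)


def associate(entity_a: str, entity_b: str, weight: float = 1.0) -> None:
    graph[(entity_a, entity_b)] += weight


associate("user:123", "query:golf clubs")
associate("query:golf clubs", "doc:golf-reviews.example")
associate("user:123", "user:456", weight=0.5)   # user-to-user association

print(infer_attributes(["golf-reviews.example", "news.example"]))
print(dict(graph))
```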

  Of critical importance to our story is the scientists’ observation that the most challenging sources of friction here are social, not technical. Friction arises when users intentionally fail to provide information for no other reason than that they choose not to. “Unfortunately, user profile information is not always available,” the scientists warn. Users do not always “voluntarily” provide information, or “the user profile may be incomplete… and hence not comprehensive, because of privacy considerations, etc.”52

  A clear aim of the patent is to assure its audience that Google scientists will not be deterred by users’ exercise of decision rights over their personal information, despite the fact that such rights were an inherent feature of the original social contract between the company and its users.53 Even when users do provide UPI, the inventors caution, “it may be intentionally or unintentionally inaccurate, it may become stale.… UPI for a user… can be determined (or updated or extended) even when no explicit information is given to the system.… An initial UPI may include some expressly entered UPI information, though it doesn’t need to.”54

  The scientists thus make clear that they are willing—and that their inventions are able—to overcome the friction entailed in users’ decision rights. Google’s proprietary methods enable it to surveil, capture, expand, construct, and claim behavioral surplus, including data that users intentionally choose not to share. Recalcitrant users will not be obstacles to data expropriation. No moral, legal, or social constraints will stand in the way of finding, claiming, and analyzing others’ behavior for commercial purposes.

  The inventors provide examples of the kinds of attributes that Google could assess as it compiles its UPI data sets while circumnavigating users’ knowledge, intentions, and consent. These include websites visited, psychographics, browsing activity, and information about previous advertisements that the user has been shown, selected, and/or made purchases after viewing.55 It is a long list that is certainly much longer today.

  Finally, the inventors observe another obstacle to effective targeting. Even when user information exists, they say, “Advertisers may not be able to use this information to target ads effectively.”56 On the strength of the invention presented in this patent, and others related to it, the inventors publicly declare Google’s unique prowess in hunting, capturing, and transforming surplus into predictions for accurate targeting. No other firm could equal its range of access to behavioral surplus, its bench strength of scientific knowledge and technique, its computational power, or its storage infrastructure. In 2003 only Google could pull surplus from multiple sites of activity and integrate each increment of data into comprehensive “data structures.” Google was uniquely positioned with the state-of-the-art knowledge in computer science to convert those data into predictions of who will click on which configuration of what ad as the basis for a final “matching” result, all computed in micro-fractions of a second.

  To state all this in plain language, Google’s invention revealed new capabilities to infer and deduce the thoughts, feelings, intentions, and interests of individuals and groups with an automated architecture that operates as a one-way mirror irrespective of a person’s awareness, knowledge, and consent, thus enabling privileged secret access to behavioral data.

  A one-way mirror embodies the specific social relations of surveillance based on asymmetries of knowledge and power. The new mode of accumulation invented at Google would derive, above all, from the firm’s willingness and ability to impose these social relations on its users. Its willingness was mobilized by what the founders came to regard as a state of exception; its ability came from its actual success in leveraging privileged access to behavioral surplus in order to predict the behavior of individuals now, soon, and later. The predictive insights thus acquired would constitute a world-historic competitive advantage in a new marketplace where low-risk bets about the behavior of individuals are valued, bought, and sold.

  Google would no longer be a passive recipient of accidental data that it could recycle for the benefit of its users. The targeted advertising patent sheds light on the path of discovery that Google traveled from its advocacy-oriented founding toward the elaboration of behavioral surveillance as a full-blown logic of accumulation. The invention itself exposes the reasoning through which the behavioral value reinvestment cycle was subjugated to the service of a new commercial calculation. Behavioral data, whose value had previously been “used up” on improving the quality of Search for users, now became the pivotal—and exclusive to Google—raw material for the construction of a dynamic online advertising marketplace. Google would now secure more behavioral data than it needed to serve its users. That surplus, a behavioral surplus, was the game-changing, zero-cost asset that was diverted from service improvement toward a genuine and highly lucrative market exchange.

  These capabilities were and remain inscrutable to all but an exclusive data priesthood among whom Google is the übermensch. They operate in obscurity, indifferent to social norms or individual claims to self-determining decision rights. These moves established the foundational mechanisms of surveillance capitalism.

  The state of exception declared by Google’s founders transformed the youthful Dr. Jekyll into a ruthless, muscular Mr. Hyde determined to hunt his prey anywhere, anytime, irrespective of others’ self-determining aims. The new Google ignored claims to self-determination and acknowledged no a priori limits on what it could find and take. It dismissed the moral and legal content of individual decision rights and recast the situation as one of technological opportunism and unilateral power. This new Google assures its actual customers that it will do whatever it takes to transform the natural obscurity of human desire into scientific fact. This Google is the superpower that establishes its own values and pursues its own purposes above and beyond the social contracts to which others are bound.

  V. Surplus at Scale

  There were other new elements that helped to establish the centrality of behavioral surplus in Google’s commercial operations, beginning with its pricing innovations. The first new pricing metric was based on “click-through rates,” the proportion of ad views that result in a user clicking through to the advertiser’s web page, rather than on the number of views that an ad receives. The click-through was interpreted as a signal of relevance and therefore a measure of successful targeting, operational results that derive from and reflect the value of behavioral surplus.
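
  A rough arithmetic illustration, with made-up figures, shows why the shift from per-view to per-click billing ties revenue directly to the quality of the click prediction:

```python
# Rough numerical illustration of per-view vs. per-click pricing.
# All counts, rates, and prices below are hypothetical.

impressions = 100_000          # times the ad was shown
clicks = 1_200                 # times a user clicked through
click_through_rate = clicks / impressions   # 1.2%

cpm = 5.00                     # hypothetical price per 1,000 views
cost_per_click = 0.50          # hypothetical price per click

revenue_per_view_pricing = (impressions / 1_000) * cpm   # fixed by exposure
revenue_per_click_pricing = clicks * cost_per_click      # scales with clicks

# Under click-through pricing, better targeting (a higher click-through
# rate) translates directly into more revenue, which is the incentive to
# keep enlarging the supply of behavioral surplus behind the predictions.
print(f"CTR: {click_through_rate:.2%}")
print(f"Per-view billing:  ${revenue_per_view_pricing:.2f}")
print(f"Per-click billing: ${revenue_per_click_pricing:.2f}")
```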

  This new pricing discipline established an ever-escalating incentive to increase behavioral surplus in order to continuously upgrade the effectiveness of predictions. Better predictions lead directly to more click-throughs and thus to revenue. Google learned new ways to conduct automated auctions for ad targeting that allowed the new invention to scale quickly, accommodating hundreds of thousands of advertisers and billions (later it would be trillions) of auctions simultaneously. Google’s unique auction methods and capabilities earned a great deal of attention, which distracted observers from reflecting on exactly what was being auctioned: derivatives of behavioral surplus. Click-through metrics institutionalized “customer” demand for these prediction products and thus established the central importance of economies of scale in surplus supply operations. Surplus capture would have to become automatic and ubiquitous if the new logic was to succeed, as measured by the successful trading of behavioral futures.

 
