The Age of Surveillance Capitalism


by Shoshana Zuboff


  Telematics announce a new day of behavioral control. Now the insurance company can set specific parameters for driving behavior. These can include anything from fastening the seat belt to rate of speed, idling times, braking and cornering, aggressive acceleration, harsh braking, excessive hours on the road, driving out of state, and entering a restricted area.39 These parameters are translated into algorithms that continuously monitor, evaluate, and rank the driver, calculations that translate into real-time rate adjustments.

  According to a patent held by Spireon’s top strategist, insurers can eliminate uncertainty by shaping behavior.40 The idea is to continuously optimize the insurance rate based on monitoring the driver’s adherence to behavioral parameters defined by the insurer. The system translates its behavioral knowledge into power, assigning credits or imposing punishments on drivers. Surplus is also translated into prediction products for sale to advertisers. The system calculates “behavioral traits” for advertisers to target, sending ads directly to the driver’s phone. A second patent is even more explicit about triggers for punitive measures.41 It identifies a range of algorithms that activate consequences when the system’s parameters are breached: “a violation algorithm,” “a curfew algorithm,” “a monitoring algorithm,” “an adherence algorithm,” “a credit algorithm.”

  The consultancy firms are aligned in advising all their insurance clients to get into the surveillance game. AT Kearney acknowledges that the “connected car” is a proving ground for what is to come: “Ultimately, IoT’s true value depends on customers adjusting their behaviors and risk profiles based on feedback from their ‘things.’”42 Health insurers are another target: “Wearable accelerometers” could “improve traceability of their compliance” with prescribed exercise regimes, and “digestible sensors” could track compliance with dietary and medication schedules, “providing higher truth and better granularity than a monthly refill.”43

  Deloitte acknowledges that according to its own survey data, most consumers reject telematics on the basis of privacy concerns and mistrust companies that want to monitor their behavior. This reluctance can be overcome, the consultants advise, by offering cost savings “significant enough” that people are willing “to make the [privacy] trade-off,” in spite of “lingering concerns.…” If price inducements don’t work, insurers are counseled to present behavioral monitoring as “fun,” “interactive,” “competitive,” and “gratifying,” rewarding drivers for improvements on their past record and “relative to the broader policy holder pool.”44 In this approach, known as “gamification,” drivers can be engaged to participate in “performance based contests” and “incentive based challenges.”45

  If all else fails, insurers are advised to induce a sense of inevitability and helplessness in their customers. Deloitte counsels companies to emphasize “the multitude of other technologies already in play to monitor driving” and that “enhanced surveillance and/or geo-location capabilities are part of the world we live in now, for better or worse.”46

  Behavioral underwriting offers auto insurers cost savings and efficiencies, but it is not the endgame for a revitalized insurance industry. The analytics that produce targeted advertising in the online world are repurposed for the real world, laying the foundation for new behavioral futures markets that trade in predictions of customer behavior. This is where the real drive for surveillance revenues is focused. For example, an executive of cloud services provider Covisint advises clients aiming to “cash in” on automotive telematics to move beyond targeted ads to “targeted applications.” These are not ads on a screen but real-life experiences shaped by the same capabilities as targeted ads and designed to lure you into real places for the sake of others’ profit. That means selling driver data to third parties that will figure out where you are, where you’re going, and what you want: “They know what restaurants you like because you drive your car there, so they can recommend restaurants as you’re driving and the restaurants will pay.…”47

  Behavioral surplus is understood as the raw material for products that establish “co-marketing” with other services such as “towing, auto repair, car washes, restaurants, retail outlets.…”48 The consultants at McKinsey make a similar recommendation, advising insurers that the “internet of things” enables their expansion into “completely new areas” such as “data market-places.” Health surplus can be “monetized,” says Deloitte, by providing “relevant referrals.” The firm advises its clients, especially those unlikely to reach scale in telematics, to establish partnerships with “digital players.”49 The model is a 2016 deal between IBM and General Motors that announced the creation of “OnStar Go,” the car industry’s “first cognitive mobility platform.” Dell and Microsoft have launched “internet of things” insurance “accelerators.” Dell provides insurers with hardware, software, analytics, and services to “more accurately predict risk and take preventative measures,” and Microsoft has linked up with American Family Insurance to develop startups focused on home automation.50

  The data companies were once regarded as mere “suppliers,” but it is more likely that the auto companies will become suppliers to the data behemoths. “Google tries to accompany people throughout their day, to generate data and then use that data for economic gain,” acknowledges Daimler’s CEO. “It’s at that point where a conflict with Google seems preprogrammed.”51 Google and Amazon are already locked in competition for the dashboard of your car, where their systems will control all communication and applications. From there it is a short step to telemetry and related data. Google already offers applications developers a cloud-based “scaleable geolocation telemetry system” using Google Maps. In 2016 Google France announced its interest in partnerships with insurance companies “to develop bundles of products which blend technology and hardware with insurance.” That same year a report from Cap Gemini consultants found that 40 percent of insurers see Google “as a potential rival and threat because of its strong brand and ability to manage customer data.”52

  VI. Executing the Uncontract

  These examples drawn from the ordinary world of automobile insurance teach some extraordinary lessons. Drivers are persuaded, induced, incentivized, or coerced into a quid pro quo that links pricing to the expansion of a real-world extraction/execution architecture aimed at new behavioral surplus flows (economies of scope). Behavioral data drawn from their experience are processed, and the results flow in two directions. First, they return to the drivers, executing procedures to interrupt and shape behavior in order to enhance the certainty, and therefore profitability, of predictions (economies of action). Second, prediction products that rank and sort driver behavior flow into newly convened behavioral futures markets in which third parties lay bets on what drivers will do now, soon, and later: Will he maintain a high safety rating? Will she act in compliance with our rules? Will he drive like a girl? These bets translate into pricing, incentive structures, and monitoring and compliance regimes. In both operations, surplus drawn from the driver’s experience is repurposed as the means to shape and compel the driver’s experience for the sake of guaranteed outcomes. Most of this occurs, as MacKay advised, outside the driver’s awareness while she still thinks that she is free.

  The Google declarations underwrite all the action here. As Varian writes, “Because transactions are now computer-mediated we can observe behavior that was previously unobservable and write contracts on it. This enables transactions that were simply not feasible before.”53 Varian’s “we” refers to those with privileged access to the shadow text into which behavioral data flow. Our behavior, once unobservable, is declared as free for the taking, theirs to own, and theirs to decide how to use and how to profit from. This includes the production of “new contractual forms” that compel us in ways that would not have been possible but for surveillance capitalism’s original declarations of dispossession.

  Varian recognized that the subregions of automotive telematics exemplify this new economic frontier when he wrote, “Nowadays it’s a lot easier just to instruct the vehicular monitoring system not to allow the car to be started and to signal the location where it can be picked up.”54 Yawn. But wait. “A lot easier” for whom? He means, of course, a lot easier for the “we” that now observes what was, until surveillance capitalism, unobservable and executes actions that were, until surveillance capitalism, not feasible. Varian’s laid-back, simple prose is a kind of lullaby that makes his observations seem banal, so ordinary as to barely warrant comment. But in Varian’s scenario, what happens to the driver? What if there is a child in the car? Or a blizzard? Or a train to catch? Or a day-care center drop-off on the way to work? A mother on life support in the hospital still miles away? A son waiting to be picked up at school?

  It was not long ago that Varian’s prosaic proclamations were regarded as the stuff of nightmares. In his 1967 book The Year 2000 the hyper-rational wunderkind futurist Herman Kahn anticipated many of the capabilities that Varian now assigns to the new extraction/execution architecture.55 Kahn was no shrinking violet. He was rumored to be director Stanley Kubrick’s model for the title character in Dr. Strangelove, and he was well-known for arguing that nuclear war is both “winnable” and “survivable.” Yet it was Kahn who foresaw innovations such as Varian’s vehicular monitoring system and characterized them as “a twenty-first century nightmare.” Among his many technology-oriented insights, Kahn foresaw automated computer systems that track all vehicular movements and also listen to and record conversations with all the capability available for high-speed scan and search. He imagined computer systems able to detect and respond to individual behavior—a raised voice, a threatening tone: “Such computers may also be able to apply a great deal of inferential logic on their own—they may become a sort of transistorized Sherlock Holmes making hypotheses and investigating leads in a more or less autonomous or self-motivated manner.…”56 Anyone who wields this kind of knowledge, he concluded, is, like Faust, “less immoral than amoral… indifferent to the fate of those who stand in his way rather than brutal.”57

  Contemporary reviewers of Kahn’s book invariably seized upon the dark “nightmare scenarios” of the computerized surveillance theme, the science-fiction–like forms of control that, they assumed, “will be actively feared and resented by many.”58 Despite the wide range of scenarios that Kahn presented in his book on the distant year 2000, Kahn’s voyage into the “unthinkable” was viewed by the public as a way to prepare for “the worst possible outcome” in a terrifying “nightmare of social controls.”59 Yet now that same nightmare is rendered as an enthusiastic progress report on surveillance capitalism’s latest triumphs. Varian’s update is delivered without self-consciousness or a hint of controversy, rather than with the astonishment and revulsion that were predicted just decades ago. How has the nightmare become banal? Where is our sense of astonishment and outrage?

  Political scientist Langdon Winner grappled with this question in his seminal book Autonomous Technology, published in 1977. His answer? “What we lack is our bearings,” he wrote. Winner painstakingly described the ways in which our experience of “things technological” confounds “our vision, our expectations, and our capacity to make intelligent judgments. Categories, arguments, conclusions, and choices that would have been entirely obvious in earlier times are obvious no longer.”60

  So let us establish our bearings. What Varian celebrates here is not a new form of contract but rather a final solution to the enduring uncertainty that is the raison d’être of “contract” as a means of “private ordering.” In fact, the use of the word contract in Varian’s formulation is a perfect example of the horseless-carriage syndrome. Varian’s invention is unprecedented and cannot be understood as simply another kind of contract. It is, in fact, the annihilation of contract; this invention is better understood as the uncontract.

  The uncontract is a feature of the larger complex that is the means of behavioral modification, and it is therefore an essential modality of surveillance capitalism. It contributes to economies of action by leveraging proprietary behavioral surplus to preempt and foreclose action alternatives, thus replacing the indeterminacy of social processes with the determinism of programmed machine processes. This is not the automation of society, as some might think, but rather the replacement of society with machine action dictated by economic imperatives.

  The uncontract is not a space of contractual relations but rather a unilateral execution that makes those relations unnecessary. The uncontract desocializes the contract, manufacturing certainty through the substitution of automated procedures for promises, dialogue, shared meaning, problem solving, dispute resolution, and trust: the expressions of solidarity and human agency that have been gradually institutionalized in the notion of “contract” over the course of millennia. The uncontract bypasses all that social work in favor of compulsion, and it does so for the sake of more-lucrative prediction products that approximate observation and therefore guarantee outcomes.

  This substitution of machine work for social work is possible thanks to the success of Google’s declarations and the road that Google paved for surveillance capitalists’ dominance of the division of learning. Sitting in the catbird seat, Google can observe what was previously unobservable and know what was previously unknowable. As a result, the company can do what was previously undoable: bypass social relations in favor of automated machine processes that compel the behaviors that advance commercial objectives. When we celebrate the uncontract, as Varian and others do, we celebrate the asymmetries of knowledge and power that produce these new possibilities. The uncontract is a signpost that reminds us of our bearings as we follow the next sections of this chapter toward a clearer picture of surveillance capitalism’s growing ambitions in the annexation of “reality” to its kingdom of conquered human experience.

  VII. Inevitabilism

  It is difficult to keep your bearings when everyone around you is losing theirs. The transition to ubiquitous computing, “when sensors are everywhere,” Paradiso writes, won’t be “incremental” but rather “a revolutionary phase shift much like the arrival of the world wide web.”61 The same “phase shift” that is understood by its architects as the universal antidote to uncertainty is anticipated with absolute certainty. Paradiso is not alone here. On the contrary, the rhetoric of inevitability is so “ubiquitous” that within the tech community it can be considered a full-blown ideology of inevitabilism.

  The sense of incontestable certainty that infuses Paradiso’s vision has long been recognized as a key feature of utopianism. In their definitive history of utopian thought, Frank and Fritzie Manuel wrote that “since the end of the eighteenth century the predictive utopia has become a major form of imaginative thought and has preempted certain scientific techniques of forecasting… the contemporary utopia… binds past, present, and future together as though fated. The state they depict appears virtually ordained either by god or by history; there is a carry-over of millenarian certainty.…”62

  The Manuels, along with many other historians, consider Marxism to be the last great modern utopia.63 There are hundreds of passages in Karl Marx’s writing that convey his inevitabilism. In the very first section of The Communist Manifesto, published in 1848, Marx wrote the following: “What the bourgeoisie, therefore, produces, above all, is its own grave-diggers. Its fall and the victory of the proletariat are equally inevitable.”64

  Before the rise of the modern utopia, the genre was largely composed of fantastical narratives in which isolated pockets of human perfection were discovered in exotic mountain aeries, hidden valleys, or faraway islands. Modern utopias such as Marxism diverge from those fairy tales, addressing “the reformation of the entire species” with a rational systemic vision “whose province was the whole world.” No longer content as mere weavers of dreams, modern utopianists shifted toward totalistic and universal visions, prophecies of “the ineluctable end toward which mankind was moving.”65

  Now the proselytizers of ubiquitous computing join Marx and other modern utopianists in postulating a new phase of history, like Paradiso’s “revolutionary phase shift,” in which all of society is reassembled in a novel and superior pattern. Despite the fact that inevitability is the opposite of politics and history, apostles of the apparatus routinely hijack historical metaphors that lend a veneer of gravitas to their claims. The rise of the apparatus is alternatively cast as the inauguration of a new “age,” “era,” “wave,” “phase,” or “stage.” This kind of historical framing conveys the futility of opposition to the categorical inevitability of the march toward ubiquity.

  Silicon Valley is the axis mundi of inevitabilism. Among high-tech leaders, within the specialist literature, and among expert professionals there appears to be universal agreement on the idea that everything will be connected, knowable, and actionable in the near future: ubiquity and its consequences in total information are an article of faith.

  Not surprisingly, Google’s leaders are passionate inevitabilists. The very first sentences of Schmidt and Cohen’s 2013 book, The New Digital Age, exemplify this thrust: “Soon everyone on Earth will be connected,” they write. So-called predictive “laws” such as “Moore’s Law” and “photonics” are called upon to signal this new iron law of necessity that will produce exponential growth in connectivity and computational power.66 And later, “The collective benefit of sharing human knowledge and creativity grows at an exponential rate. In the future, information technology will be everywhere, like electricity. It will be a given.”67 When the book’s assertions garnered some criticism, the authors confronted their critics in an afterword to the paperback edition: “But bemoaning the inevitable increase in the size and reach of the technology sector distracts us from the real question.… Many of the changes we discuss are inevitable. They’re coming.”

 
