The imaginative range of this new thinking is demonstrated in a 2013 Microsoft patent application updated and republished in 2016 and titled “User Behavior Monitoring on a Computerized Device.”30 With conspicuously thin theory complemented by thick practice, the patented device is designed to monitor user behavior in order to preemptively detect “any deviation from normal or acceptable behavior that is likely to affect the user’s mental state. A prediction model corresponding to features of one or more mental states may be compared with features based upon current user behavior.”
The scientists propose an application that can sit in an operating system, server, browser, phone, or wearable device, continuously monitoring a person’s behavioral data: interactions with other people or computers, social media posts, search queries, and online activities. The app may activate sensors to record voice and speech, videos and images, and movement, such as detecting “when the user engages in excessive shouting by examining the user’s phone calls and comparing related features with the prediction model.”
All these behavioral data are stored for future historical analyses in order to improve the prediction model. If the user normally restrains the volume of his or her voice, then sudden excessive shouting may indicate a “psychosocial event.” Alternatively, the behavior could be assessed in relation to a “feature distribution representing normal and/or acceptable behavior for an average member of a population… a statistically significant deviation from that behavior baseline indicates a number of possible psychological events.” The initial proposition is that in the event of an anomaly, the device would alert “trusted individuals” such as family members, doctors, and caregivers. But the circle widens as the patent specifications unfold. The scientists note the utility of alerts for health care providers, insurance companies, and law-enforcement personnel. Here is a new surveillance-as-a-service opportunity geared to preempt whatever behavior clients choose.
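To make the patent’s logic concrete, here is a minimal sketch of the kind of baseline-deviation test it describes: a current behavioral feature compared against a stored distribution of the user’s past behavior. The feature (per-call voice amplitude), the z-score test, the threshold, and every name below are illustrative assumptions, not the patent’s actual method.

```python
import statistics

def is_anomalous(current_level: float, baseline: list[float],
                 z_threshold: float = 3.0) -> bool:
    """Flag a 'statistically significant deviation' from a behavioral baseline.

    A toy stand-in for the patent's comparison of current behavior against a
    'feature distribution representing normal and/or acceptable behavior'.
    """
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return current_level != mean
    return abs(current_level - mean) / stdev > z_threshold

# Hypothetical data: average voice amplitude per phone call, arbitrary units.
past_calls = [0.42, 0.38, 0.45, 0.40, 0.44, 0.39, 0.41, 0.43]
todays_call = 0.95  # sudden excessive shouting

if is_anomalous(todays_call, past_calls):
    # In the patent's scheme, this is where "trusted individuals", insurers,
    # or law enforcement would be alerted.
    print("possible psychosocial event: notify configured recipients")
```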
Microsoft’s patent returns us to Planck, Meyer, and Skinner and the viewpoint of the Other-One. In their physics-based representation of human behavior, anomalies are the “accidents” that are called freedom but actually denote ignorance; they simply cannot yet be explained by the facts. Planck/Meyer/Skinner believed that the forfeit of this freedom was the necessary price to be paid for the “safety” and “harmony” of an anomaly-free society in which all processes are optimized for the greater good. Skinner imagined that with the correct technology of behavior, knowledge could preemptively eliminate anomalies, driving all behavior toward preestablished parameters that align with social norms and objectives. “If we could show that our members preferred life in Walden Two,” says Frazier-Skinner, “it would be the best possible evidence that we had reached a safe and productive social structure.”31
In this template of social relations, behavioral modification operates just beyond the threshold of human awareness to induce, reward, goad, punish, and reinforce behavior consistent with “correct policies.” Thus, Facebook learns that it can predictably move the societal dial on voting patterns, emotional states, or anything else that it chooses. Niantic Labs and Google learn that they can predictably enrich McDonald’s bottom line or that of any other customer. In each case, corporate objectives define the “policies” toward which confluent behavior harmoniously streams.
The machine hive—the confluent mind created by machine learning—is the material means to the final elimination of the chaotic elements that interfere with guaranteed outcomes. Eric Schmidt and Sebastian Thrun, the machine intelligence guru who once directed Google’s X Lab and helped lead the development of Street View and Google’s self-driving car, make this point in championing Alphabet’s autonomous vehicles. “Let’s stop freaking out about artificial intelligence,” they write.
Schmidt and Thrun emphasize the “crucial insight that differentiates AI from the way people learn.”32 Instead of the typical assurances that machines can be designed to be more like human beings and therefore less threatening, Schmidt and Thrun argue just the opposite: it is necessary for people to become more machine-like. Machine intelligence is enthroned as the apotheosis of collective action in which all the machines in a networked system move seamlessly toward confluence, all sharing the same understanding and thus operating in unison with maximum efficiency to achieve the same outcomes. The jackhammers do not independently appraise their situation; they each learn what they all learn. They each respond the same way to uncredentialed hands, their brains operating as one in service to the “policy.” The machines stand or fall together, right or wrong together. As Schmidt and Thrun lament,
When driving, people mostly learn from their own mistakes, but they rarely learn from the mistakes of others. People collectively make the same mistakes over and over again. As a result, hundreds of thousands of people die worldwide every year in traffic collisions. AI evolves differently. When one of the self-driving cars makes an error, all of the self-driving cars learn from it. In fact, new self-driving cars are “born” with the complete skill set of their ancestors and peers. So collectively, these cars can learn faster than people. With this insight, in a short time self-driving cars will safely blend onto our roads alongside human drivers, as they keep learning from each other’s mistakes.… Sophisticated AI-powered tools will empower us to better learn from the experiences of others.… The lesson with self-driving cars is that we can learn and do more collectively.33
This is a succinct but extraordinary statement of the machine template for the social relations of an instrumentarian society. The essence of these facts is that first, machines are not individuals, and second, we should be more like machines. The machines mimic each other, and so must we. The machines move in confluence, not many rivers but one, and so must we. The machines are each structured by the same reasoning and flowing toward the same objective, and so must we be structured.
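The mechanism behind this claim, a single shared policy that every member’s errors update and with which every new member is “born,” can be reduced to a deliberately simple sketch. The class names and update rule below are illustrative assumptions about the logic of fleet learning, not the engineering of any actual autonomous-vehicle system.

```python
class FleetPolicy:
    """One shared behavioral policy: what any member learns, all members use."""

    def __init__(self) -> None:
        self.known_hazards: set[str] = set()

    def record_error(self, hazard: str) -> None:
        # One car's mistake becomes the entire fleet's knowledge.
        self.known_hazards.add(hazard)


class Car:
    def __init__(self, policy: FleetPolicy) -> None:
        # Every car, including each newly "born" one, starts with the
        # complete accumulated skill set of its ancestors and peers.
        self.policy = policy

    def encounter(self, hazard: str) -> str:
        if hazard in self.policy.known_hazards:
            return f"avoids {hazard}"
        self.policy.record_error(hazard)  # the error updates the shared policy
        return f"errs on {hazard}"


shared = FleetPolicy()
car_a, car_b = Car(shared), Car(shared)
print(car_a.encounter("black ice"))  # errs on black ice
print(car_b.encounter("black ice"))  # avoids black ice: it learned from car_a
print(Car(shared).encounter("black ice"))  # a new car is "born" knowing it
```

The point of the sketch is the one Schmidt and Thrun celebrate and this chapter questions: there is no individual learner here, only a single policy expressed through many bodies.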
The instrumentarian future integrates this symbiotic vision in which the machine world and social world operate in harmony within and across “species” as humans emulate the superior learning processes of the smart machines. This emulation is not intended as a throwback to mass production’s Taylorism or Chaplin’s hapless worker swallowed by the mechanical order. Instead, this prescription for symbiosis takes a different road, one on which human interaction mirrors the relations of the smart machines as individuals learn to think and act by emulating one another, just like the self-driving cars and the policy-worshipping jackhammers.
In this way, the machine hive becomes the role model for a new human hive in which we march in peaceful unison in the same direction based on the same “correct” understanding in order to construct a world free of mistakes, accidents, and random messes. In this world the “correct” outcomes are known in advance and guaranteed in action. The same ubiquitous instrumentation and transparency that define the machine system must also define the social system, which in the end is simply another way of describing the ground truth of instrumentarian society.
In this human hive, individual freedom is forfeit to collective knowledge and action. Nonharmonious elements are preemptively targeted with high doses of tuning, herding, and conditioning, including the full seductive force of social persuasion and influence. We march in certainty, like the smart machines. We learn to sacrifice our freedom to collective knowledge imposed by others and for the sake of their guaranteed outcomes. This is the signature of the third modernity offered up by surveillance capital as its answer to our quest for effective life together.
CHAPTER FIFTEEN
THE INSTRUMENTARIAN COLLECTIVE
So an age ended, and its last deliverer died
In bed, grown idle and unhappy; they were safe:
The sudden shadow of a giant’s enormous calf
Would fall no more at dusk across their lawns outside.
—W. H. AUDEN
SONNETS FROM CHINA, X
I. The Priests of Instrumentarian Power
Applied utopianist executives such as Page, Nadella, and Zuckerberg do not say much about their theories. At best the information we have is episodic and shallow. But a cadre of data scientists and “computational social scientists” has leapt into this void with detailed experimental and theoretical accounts of the gathering momentum of instrumentarian power, providing invaluable insight into the social principles of an instrumentarian society.
One outstanding example is the work of Alex Pentland, the director of the Human Dynamics Lab within MIT’s Media Lab. Pentland is the rare applied utopianist who, together with his students and collaborators, has vigorously articulated, researched, and disseminated a theory of instrumentarian society in parallel with his prolific technical innovations and practical applications. The studies that this group has produced are a contemporary signal of an increasingly taken-for-granted worldview among data scientists whose computational theories and innovations exist in dynamic interaction with the progress of surveillance capitalism, as in the case of Picard’s affective computing and Paradiso’s digital omniscience. However, few consider the social ramifications of their work with Pentland’s insight and conviction, which gives us a rare opportunity to critically explore the governance assumptions, societal principles, and social processes that define an instrumentarian society. My aim is to infer the theory behind the practice, as surveillance capitalists integrate “society” as a “first class object” for rendition, computation, modification, monetization, and control.
Pentland is a prolific author or coauthor of hundreds of articles and research studies in the field of data science and is a prominent institutional actor who advises a roster of organizations, including the World Economic Forum, the Data-Pop Alliance, Google, Nissan, Telefonica, and the Office of the United Nations Secretary General. Pentland’s research lab is funded by a who’s who of global corporations, consultancies, and governments: Google, Cisco, IBM, Deloitte, Twitter, Verizon, the EU Commission, the US government, the Chinese government, “and various entities who are all concerned with why we don’t know what’s going on in the world.…”1
Although Pentland is not alone in this field, he is something of a high priest among an exclusive group of priests. Unlike Hal Varian, Pentland does not speak of Google in the first-person plural, but his work is showcased in surveillance capitalist enclaves, where it provides the kind of material and intellectual support that helps to legitimate instrumentarian practices. When Pentland appeared for a presentation at Google, where he sits on the Advisory Board for the Advanced Technology and Projects Group, former doctoral student and top Google executive Brad Horowitz introduced his mentor as an “inspirational educator” whose credentials span many disciplines and whose former students lead the computational sciences in theory and practice.2
Pentland is often referred to as the “godfather of wearables,” especially Google Glass. In 1998 he predicted that wearables “can extend one’s senses, improve memory, aid the wearer’s social life and even help him or her stay calm and collected.”3 Thad Starner, one of Pentland’s doctoral students, developed a primitive “wearable” device while at MIT and was hired by Sergey Brin in 2010 to continue that work at Google: a project that produced Google Glass. More than fifty of Pentland’s doctoral students have gone on to spread the instrumentarian vision in top universities, in industry research groups, and in thirty companies in which Pentland participates as cofounder, sponsor, or advisor. Each one applies some facet of Pentland’s theory, analytics, and inventions to real people in organizations and cities.4
Pentland’s academic credentials and voluble intelligence help legitimate a social vision that repelled and alarmed intellectuals, public officials, and the general public just decades ago. Most noteworthy is that Pentland “completes” Skinner, fulfilling his social vision with big data, ubiquitous digital instrumentation, advanced mathematics, sweeping theory, numerous esteemed coauthors, institutional legitimacy, lavish funding, and corporate friends in high places without having attracted the worldwide backlash, moral revulsion, and naked vitriol once heaped on Harvard’s outspoken behaviorist. This fact alone suggests the depth of psychic numbing to which we have succumbed and the loss of our collective bearings.
Like Skinner, Pentland is a designer of utopias and a lofty thinker quick to generalize from animals to the entire arc of humanity. He is also a hands-on builder who grapples with instrumentarianism’s practical architecture and computational challenges. Pentland refers to his theory of society as “social physics,” a conception that confirms him as this century’s B. F. Skinner, by way of Planck, Meyer, and MacKay.5 And although Pentland never mentions the old behaviorist, his book Social Physics summons Skinner’s social vision into the twenty-first century, now fulfilled by the instruments that eluded Skinner in his lifetime. Pentland validates the instrumentarian impulse with research and theory that are boldly grounded in Skinner’s moral reasoning and epistemology as captured by the viewpoint of the Other-One.
Professor Pentland began his intellectual journey as Skinner did, in the study of animal behavior. Where Skinner trained his reasoning on the detailed behaviors of blameless individual creatures, Pentland concerned himself with the gross behavior of animal populations. As a part-time researcher at NASA’s Environmental Research Institute while still an undergraduate, he developed a method for assessing the Canadian beaver population from space by counting the number of beaver ponds: “You’re watching the lifestyle, and you get an indirect measure.”6
The experience appears to have hooked Pentland on the distant, detached gaze, which he would later embrace as the “God’s eye view.” You may have experienced the sensation of the God view from the window seat of an airplane as it lifts you above the city, transforming all the joys and woes below into the mute bustle of an anthill. Up there, any sense of “we” quickly dissolves into the viewpoint of the Other-One, and it is this angle of observation that founded Pentland’s science as he learned to apply MacKay’s principles of remote observation and telestimulation to humans: “If you think about people across the room talking, you can tell a lot.… It’s like watching beavers from outer space, like Jane Goodall watching gorillas. You observe from a distance.”7 (This is a slur on Goodall, of course, whose seminal genius was her ability to understand the chimpanzees she actually studied not as “other ones” but rather as “one of us.”)
The God view would come to be essential to the conception of instrumentarian society, but a comprehensive picture emerged gradually over years of piecemeal experimentation. In the following section we track that journey as Pentland and his students learned to render, measure, and compute social behavior. With that foundation, we turn to Pentland’s Social Physics, which aims to recast society as an instrumentarian hive mind—like Nadella’s machines—but now extensively theorized and deeply evocative of Skinner’s formulations, values, worldview, and vision of the human future.
II. When Big Other Eats Society: The Rendition of Social Relations
Skinner bitterly lamented the absence of “instruments and methods” for the study of human behavior comparable to those available to physicists. As if in response, Pentland and his students have spent the last two decades determined to invent the instruments and methods that can transform all of human behavior, especially social behavior, into highly predictive math. An early milestone was a 2002 collaboration with then-doctoral student Tanzeem Choudhury, in which the coauthors wrote, “As far as we know, there are currently no available methods to automatically model face-to-face interactions. This absence is probably due to the difficulty of obtaining reliable measurements from real-world interactions within a community.… We believe sensing and modeling physical interactions among people is an untapped resource.”8 In other words, the “social” remained an elusive domain even as data and computers had become more commonplace.
The researchers’ response was to introduce the “sociometer,” a wearable sensor that combines a microphone, accelerometer, Bluetooth connection, analytic software, and machine learning techniques designed to infer “the structure and dynamic relationships” in human groups.9 (Choudhury would eventually run the People Aware Computing group at Cornell University.) From that point onward, Pentland and his teams have worked to crack the code on the instrumentation and instrumentalization of social processes in the name of a totalistic social vision founded on a comprehensive means of behavior modification.
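What such inference involves can be suggested with a rough sketch that derives an interaction network from Bluetooth co-presence logs. The data format, threshold, and function below are invented for illustration and are not Pentland and Choudhury’s actual pipeline.

```python
from collections import Counter

# Hypothetical sociometer-style proximity log: (minute, badge_a, badge_b)
# means the two wearable badges detected each other over Bluetooth then.
proximity_log = [
    (1, "alice", "bob"), (2, "alice", "bob"), (3, "alice", "bob"),
    (3, "bob", "carol"), (4, "alice", "bob"), (9, "carol", "dave"),
]

MIN_COPRESENT_MINUTES = 3  # illustrative threshold for counting a 'tie'

def infer_ties(log):
    """Infer a crude interaction network from co-presence counts: a stand-in
    for inferring 'the structure and dynamic relationships' in a group."""
    copresence = Counter(tuple(sorted(pair)) for _, *pair in log)
    return {pair for pair, minutes in copresence.items()
            if minutes >= MIN_COPRESENT_MINUTES}

print(infer_ties(proximity_log))  # {('alice', 'bob')}
```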
A 2005 collaboration with doctoral student Nathan Eagle reiterated the problem of insufficient data on human society, noting the “bias, sparsity of data, and lack of continuity” in social science’s understanding of human behavior and the resulting “absence of dense continuous data that also hinders the machine learning and agent-based modeling communities from constructing more comprehensive predictive models of human dynamics.”10 Pentland had insisted that even the relatively new field of “data mining” could not capture the “real action” of conversations and face-to-face interactions necessary for a trenchant and comprehensive grasp of social behavior.11 But he also recognized that a rapidly growing swath of human activity—from transactions to communication—was falling to computer mediation, largely as a result of the cell phone.
The team saw that it would be possible to exploit the increasingly “ubiquitous infrastructure” of mobile phones and combine those data with new streams of information from their wearable behavioral monitors. The result was a radical new solution that Pentland and Eagle called “reality mining.” Mentor and student demonstrated how the data from cell phones “can be used to uncover regular rules and structure in the behavior of both individuals and organizations,” thus furthering the progress of behavioral surplus capture and analysis and pointing the way toward the larger shift in the nature of behavioral dispossession from virtual, to actual, to social experience.12 As a technological and cultural landmark, the researchers’ announcement that “reality” was now fair and feasible game for surplus capture, search, extraction, rendition, datafication, analysis, prediction, and intervention helped to forge a path toward the new practices that would eventually become the “reality business.”
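A toy sketch suggests what “uncovering regular rules and structure” in phone data can mean in practice: recovering a person’s routine as the most frequent location for each hour of the day. The data format and logic are illustrative assumptions, not Eagle and Pentland’s actual reality-mining models.

```python
from collections import Counter, defaultdict

# Hypothetical reality-mining input: (hour_of_day, location) observations
# accumulated over many days for a single phone.
observations = [
    (9, "office"), (9, "office"), (9, "cafe"), (13, "cafe"),
    (13, "cafe"), (13, "office"), (22, "home"), (22, "home"),
]

def daily_routine(obs):
    """Recover the 'regular rules and structure' of one person's day:
    the most frequent location for each observed hour."""
    by_hour = defaultdict(Counter)
    for hour, place in obs:
        by_hour[hour][place] += 1
    return {hour: places.most_common(1)[0][0]
            for hour, places in sorted(by_hour.items())}

print(daily_routine(observations))  # {9: 'office', 13: 'cafe', 22: 'home'}
```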