A month after the emotional contagion study’s publication, the editor-in-chief of the Proceedings of the National Academy of Sciences, Inder M. Verma, published an “editorial expression of concern” regarding the Facebook research. After acknowledging the standard defense that Facebook is technically exempt from the “Common Rule,” Verma added, “It is nevertheless a matter of concern that the collection of the data by Facebook may have involved practices that were not fully consistent with the principles of obtaining informed consent and allowing participants to opt out.”17
Among US scholars, University of Maryland law professor James Grimmelmann published the most comprehensive argument in favor of holding Facebook and other social media companies accountable to the standards represented by the Common Rule. Corporate research is more likely than academic research to be compromised by serious conflicts of interest, he reasoned, making common experimental standards critical and not something to be left to individual ethical judgment. Grimmelmann imagined “Institutional Review Board laundering,” in which academics could “circumvent research ethics regulations whenever they work just closely enough with industry partners. The exception would swallow the Common Rule.”18
Despite his conviction on this point, Grimmelmann acknowledged in the final pages of his analysis that even the most rigorous imposition of the Common Rule would do little to curb the immense power of a company such as Facebook that routinely manipulates user behavior at scale, using means that are indecipherable and therefore uncontestable. Like Fiske, Grimmelmann sensed the larger project of economies of action just beyond the reach of established law and social norms.
The journal Nature drew attention with a strongly worded letter defending the Facebook experiment, authored by bioethicist Michelle Meyer together with five coauthors and on behalf of twenty-seven other ethicists. The letter argued that the need to codify new knowledge about the online environment justifies experimentation even when it does not or cannot abide by accepted ethical guidelines for human subjects research. But Meyer’s defense turned on a prescient note of warning that “the extreme response to this study… could result in such research being done in secret.… If critics think that the manipulation of emotional content in this research is sufficiently concerning to merit regulation… then the same concern must apply to Facebook’s standard practice.…”19
The experiment’s critics and supporters agreed on little but this: Facebook could easily turn rogue, threatening to retreat to secrecy if regulators attempted to intervene in its practices. The academic community shared a sense of threat in the face of known facts. Facebook owns an unprecedented means of behavior modification that operates covertly, at scale, and in the absence of social or legal mechanisms of agreement, contest, and control. Even the most stringent application of the “Common Rule” would be unlikely to change these facts.
As scholars promised to convene panels to consider the ethical issues raised by Facebook research, the corporation announced its own plans for better self-regulation. The corporation’s chief technology officer, Mike Schroepfer, confessed to being “unprepared” for the public reaction to the emotional contagion study, and he admitted that “there are things we should have done differently.” The company’s “new framework” for research included clear guidelines, an internal review panel, a capsule on research practices incorporated into the company’s famous “boot camp” orientation and training program for new hires, and a website to feature published academic research. These self-imposed “regulations” did not challenge the fundamental facts of Facebook’s online community as the necessary developmental environment and target for the firm’s economies of action.
A document acquired by the Australian press in May 2017 would eventually reveal this fact. Three years after the publication of the contagion study, the Australian broke the story on a confidential twenty-three-page Facebook document written by two Facebook executives in 2017 and aimed at the company’s Australian and New Zealand advertisers. The report depicted the corporation’s systems for gathering “psychological insights” on 6.4 million high school and tertiary students as well as young Australians and New Zealanders already in the workforce. The Facebook document detailed the many ways in which the corporation uses its stores of behavioral surplus to pinpoint the exact moment at which a young person needs a “confidence boost” and is therefore most vulnerable to a specific configuration of advertising cues and nudges: “By monitoring posts, pictures, interactions, and Internet activity, Facebook can work out when young people feel ‘stressed,’ ‘defeated,’ ‘overwhelmed,’ ‘anxious,’ ‘nervous,’ ‘stupid,’ ‘silly,’ ‘useless,’ and a ‘failure.’”20
The report reveals Facebook’s interest in leveraging this affective surplus for the purpose of economies of action. It boasts detailed information on “mood shifts” among young people based on “internal Facebook data,” and it claims that Facebook’s prediction products can not only “detect sentiment” but also predict how emotions are communicated at different points during the week, matching each emotional phase with appropriate ad messaging for the maximum probability of guaranteed outcomes. “Anticipatory emotions are more likely to be expressed early in the week,” the analysis counsels, “while reflective emotions increase on the weekend. Monday-Thursday is about building confidence; the weekend is for broadcasting achievements.”
Facebook publicly denied these practices, but Antonio Garcia-Martinez, a former Facebook product manager and author of a useful account of Silicon Valley titled Chaos Monkeys, described in the Guardian the routine application of such practices and accused the corporation of “lying through their teeth.” He concluded, “The hard reality is that Facebook will never try to limit such use of their data unless the public uproar reaches such a crescendo as to be un-mutable.”21 Certainly the public challenge to Facebook’s insertion of itself into the emotional lives of its users, as expressed in the contagion study, and its pledge to self-regulate did not quell its commercial interest in users’ emotions or the corporation’s compulsion to systematically exploit that knowledge on behalf of and in collaboration with its customers. It did not, because it cannot, not as long as the company’s revenues are bound to economies of action under the authority of the prediction imperative.
Facebook’s persistence warns us again of the dispossession cycle’s stubborn march. Facebook had publicly acknowledged and apologized for its overt experimental incursions into behavior modification and emotional manipulation, and it promised adaptations to curb or mitigate these practices. Meanwhile, a new threshold of intimate life had been breached. Facebook’s potential mastery of emotional manipulation became discussable and even taken for granted as habituation set in. From Princeton’s Fiske to critic Grimmelmann and supporter Meyer, the experts believed that if Facebook’s activities were to be forced into a new regulatory regime, the corporation would merely continue in secret. The Australian documents opened one door on these covert practices, suggesting the completion of the cycle with the redirection of action into clandestine zones protected by opacity and indecipherability, just as these scholars had anticipated.
Facebook’s political mobilization experimenters discovered that they could manipulate users’ vulnerabilities to social influence in order to create a motivational condition (“I want to be like my friends”) that increases the probability that a relevant priming message—the “I Voted” button—will produce action. The emotional contagion study exploited the same underlying social influence orientation. In this case, Facebook planted subliminal cues in the form of positive or negative affective language, which combined with the motivational state triggered by social comparison—“I want to be like my friends”—to produce a measurable, if weak, contagion effect. Finally, the Australian ad-targeting document points to the seriousness and complexity of the backstage effort to strengthen this effect by specifying motivational conditions at a granular level. It reveals not only the scale and scope of Facebook’s behavioral surplus but also the corporation’s interest in leveraging its surplus to precisely determine the ebb and flow of a user’s predisposition for real-time targeting by the branded cues that are most likely to achieve guaranteed outcomes.
Facebook’s experimental success demonstrates that tuning through suggestion can be an effective form of telestimulation at scale. The evasion of individual and group awareness was critical to Facebook’s behavior-modification success, just as MacKay had stipulated. The first paragraph of the research article on emotional contagion celebrates this evasion: “Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness.” Nor do the young adults of Australia’s great cities suspect that the precise measure of their fears and fantasies is exploited for commercial result at the hour and moment of their greatest vulnerability.
This evasion is neither accidental nor incidental, but actually essential to the structure of the whole surveillance capitalist project. Individual awareness is the enemy of telestimulation because it is the necessary condition for the mobilization of cognitive and existential resources. There is no autonomous judgment without awareness. Agreement and disagreement, participation and withdrawal, resistance or collaboration: none of these self-regulating choices can exist without awareness.
A rich and flourishing research literature illuminates the antecedents, conditions, consequences, and challenges of human self-regulation as a universal need. The capacity for self-determination is understood as an essential foundation for many of the behaviors that we associate with critical capabilities such as empathy, volition, reflection, personal development, authenticity, integrity, learning, goal accomplishment, impulse control, creativity, and the sustenance of intimate enduring relationships. “Implicit in this process is a self that sets goals and standards, is aware of its own thoughts and behaviors, and has the capacity to change them,” write Ohio State University professor Dylan Wagner and Dartmouth professor Todd Heatherton in an essay about the centrality of self-awareness to self-determination: “Indeed, some theorists have suggested that the primary purpose of self awareness is to enable self-regulation.” Every threat to human autonomy begins with an assault on awareness, “tearing down our capacity to regulate our thoughts, emotions, and desires.”22
The salience of self-awareness as a bulwark against self-regulatory failure is also underscored in the work of two Cambridge University researchers who developed a scale to measure a person’s “susceptibility to persuasion.” They found that the single most important determinant of one’s ability to resist persuasion is what they call “the ability to premeditate.”23 This means that people who harness self-awareness to think through the consequences of their actions are more disposed to chart their own course and are significantly less vulnerable to persuasion techniques. Self-awareness also figures in the second-highest-ranking factor on their scale: commitment. People who are consciously committed to a course of action or set of principles are less likely to be persuaded to do something that violates that commitment.
We have seen already that democracy threatens surveillance revenues. Facebook’s practices suggest an equally disturbing conclusion: human consciousness itself is a threat to surveillance revenues, as awareness endangers the larger project of behavior modification. Philosophers recognize “self-regulation,” “self-determination,” and “autonomy” as “freedom of will.” The word autonomy derives from the Greek and literally means “regulation by the self.” It stands in contrast to heteronomy, which means “regulation by others.” The competitive necessity of economies of action means that surveillance capitalists must use all means available to supplant autonomous action with heteronomous action.
In one sense there is nothing remarkable in observing that capitalists would prefer individuals who agree to work and consume in ways that most advantage capital. We need only to consider the ravages of the subprime mortgage industry that helped trigger the great financial crisis of 2008 or the daily insults to human autonomy at the hands of countless industries from airlines to insurance for plentiful examples of this plain fact.
However, it would be dangerous to nurse the notion that today’s surveillance capitalists simply represent more of the same. This structural requirement of economies of action turns the means of behavioral modification into an engine of growth. At no other time in history have private corporations of unprecedented wealth and power enjoyed the free exercise of economies of action supported by a pervasive global architecture of ubiquitous computational knowledge and control constructed and maintained by all the advanced scientific know-how that money can buy.
Most pointedly, Facebook’s declaration of experimental authority claims surveillance capitalists’ prerogatives over the future course of others’ behavior. In declaring the right to modify human action secretly and for profit, surveillance capitalism effectively exiles us from our own behavior, shifting the locus of control over the future tense from “I will” to “You will.” Each one of us may follow a distinct path, but economies of action ensure that the path is already shaped by surveillance capitalism’s economic imperatives. The struggle for power and control in society is no longer associated with the hidden facts of class and its relationship to production but rather with the hidden facts of automated engineered behavior modification.
III. Pokémon Go! Do!
It had been a particularly grueling July afternoon in 2016. David had directed hours of contentious insurance testimony in a dusty New Jersey courtroom, where a power surge the night before had knocked out the building’s fragile air-conditioning system. Then the fitful Friday commute home was cursed by a single car disabled by the heat that turned the once-hopeful flow of traffic into sludge. Finally home, he slid the car into his garage and made a beeline for the side door that opened to the laundry room and kitchen beyond. The cool air hit him like a dive into the ocean, and for the first time all day he took a deep breath. A note on the table said his wife would be back in a few minutes. He gulped down some water, made himself a drink, and climbed the stairs, heading for a long shower.
The doorbell rang just as the warm water hit his aching back muscles. Had she forgotten her key? Shower interrupted, he threw on a tee and shorts and ran downstairs, opening the front door to a couple of teenagers waving their cell phones in his face. “Hey, you’ve got a Pokémon in your backyard. It’s ours! Okay if we go back there and catch it?”
“A what?” He had no idea what they were talking about, but he was about to get educated.
David’s doorbell rang four more times that evening: perfect strangers eager for access to his yard and disgruntled when he asked them to leave. Throughout the days and evenings that followed, knots of Pokémon seekers formed on his front lawn, some of them young and others long past that excuse. They held up their phones, pointing and shouting as they scanned his house and garden for the “augmented-reality” creatures. Looking at this small slice of world through their phones, they could see their Pokémon prey but only at the expense of everything else. They could not see a family’s home or the boundaries of civility that made it a sanctuary for the man and woman who lived there. Instead, the game seized the house and the world around it, reinterpreting all of it in a vast equivalency of GPS coordinates. Here was a new kind of commercial assertion: a for-profit declaration of eminent domain in which reality is recast as an unbounded expanse of blank spaces to be sweated for others’ enrichment. David wondered, When will this end? What gives them the right? Whom do I call to make this stop?
Without knowing it, he had been yanked from his shower to join the villagers in Broughton, England, who had taken to their streets in 2009 protesting the invasion of Google’s Street View camera cars. Like them, he had been abruptly thrust into contest with surveillance capitalism’s economic imperatives, and like them he would soon understand that there was no number to call, no 911 to urgently inform the appropriate authorities that a dreadful mistake had blossomed on his lawn.
Back in 2009, as we saw in Chapter 5, Google Maps product vice president and Street View boss John Hanke ignored the Broughton protestors, insisting that only he and Google knew what was best, not just for Broughton but for all people. Now here was Hanke again at surveillance capitalism’s next frontier, this time as the founder of the company behind Pokémon Go, Niantic Labs. Hanke, you may recall, nursed an abiding determination to own the world by mapping it. He had founded Keyhole, the satellite mapping startup funded by the CIA and later acquired by Google and rechristened as Google Earth. At Google, he was a vice president for Google Maps and a principal in its controversial push to commandeer public and private space through its Street View project.
Hanke recounts that Pokémon Go was born out of Google Maps, which also supplied most of the game’s original development team.24 Indeed, Street View’s mystery engineer, Marius Milner, had joined Hanke in this new phase of incursion. By 2010, Hanke had set up his own launch pad, Niantic Labs, inside the Google mother ship. His aim was the development of “parallel reality” games that would track and herd people through the very territories that Street View had so audaciously claimed for its maps. In 2015, following the establishment of the Alphabet corporate structure and well into the development of Pokémon Go, Niantic Labs was formally established as an independent company with $30 million in funding from Google, Nintendo (the Japanese company that originally hosted Pokémon on its “Game Boy” devices in the late 1990s), and the Pokémon Company.25
Hanke had long recognized the power of the game format as a means to achieve economies of action. While still at Google he told an interviewer, “More than 80% of people who own a mobile device claim that they play games on their device… games are often the number 1 or number 2 activity… so for Android as an operating system, but also for Google, we think it’s important for us to innovate and to be a leader in… the future of mobile gaming.”26