We can engineer the context around a particular behavior and force change that way. Context-aware data allow us to tie together your emotions, your cognitive functions, your vital signs, etcetera. We can know if you shouldn’t be driving, and we can just shut your car down. We can tell the fridge, “Hey, lock up because he shouldn’t be eating,” or we tell the TV to shut off and make you get some sleep, or the chair to start shaking because you shouldn’t be sitting so long, or the faucet to turn on because you need to drink more water.
“Conditioning” is a well-known approach to inducing behavior change, primarily associated with the famous Harvard behaviorist B. F. Skinner. He argued that behavior modification should mimic the evolutionary process, in which naturally occurring behaviors are “selected” for success by environmental conditions. Instead of the earlier, more simplistic model of stimulus/response, associated with behaviorists such as Watson and Pavlov, Skinner interpolated a third variable: “reinforcement.” In his laboratory work with mice and pigeons, Skinner learned how to observe a range of naturally occurring behaviors in the experimental animal and then reinforce the specific action, or “operant,” that he wanted the animal to reproduce. Ultimately, he mastered intricate designs or “schedules” of reinforcement that could reliably shape precise behavioral routines.
Skinner called the application of reinforcements to shape specific behaviors “operant conditioning.” His larger project was known as “behavior modification” or “behavioral engineering,” in which behavior is continuously shaped to amplify some actions at the expense of others. In the end the pigeon learns, for example, to peck a button twice in order to receive a pellet of grain. The mouse learns his way through a complicated maze and back again. Skinner imagined a pervasive “technology of behavior” that would enable the application of such methods across entire human populations.
As the chief data scientist for a much-admired Silicon Valley education company told me, “Conditioning at scale is essential to the new science of massively engineered human behavior.” He believes that smartphones, wearable devices, and the larger assembly of always-on networked nodes allow his company to modify and manage a substantial swath of its users’ behavior. As digital signals monitor and track a person’s daily activities, the company gradually masters the schedule of reinforcements—rewards, recognition, or praise that can reliably produce the specific user behaviors that the company selects for dominance:
The goal of everything we do is to change people’s actual behavior at scale. We want to figure out the construction of changing a person’s behavior, and then we want to change how lots of people are making their day-to-day decisions. When people use our app, we can capture their behaviors and identify good and bad [ones]. Then we develop “treatments” or “data pellets” that select good behaviors. We can test how actionable our cues are for them and how profitable certain behaviors are for us.
Although it is still possible to imagine automated behavioral modification without surveillance capitalism, it is not possible to imagine surveillance capitalism without the marriage of behavior modification and the technological means to automate its application. This marriage is essential to economies of action. For example, one can imagine a fitness tracker, a car, or a refrigerator whose data and operational controls are accessible exclusively to their owners for the purposes of helping them to exercise more often, drive safely, and eat healthily. But as we have already seen in so many domains, the rise of surveillance capitalism has obliterated the idea of the simple feedback loop characteristic of the behavioral value reinvestment cycle. In the end, it’s not the devices; it’s Max Weber’s “economic orientation,” now determined by surveillance capitalism.
The allure of surveillance revenues drives the continuous accumulation of more and more predictive forms of behavioral surplus. The most predictive source of all is behavior that has already been modified to orient toward guaranteed outcomes. The fusion of new digital means of modification and new economic aims produces whole new ranges of techniques for creating and cornering these new forms of surplus. A study called “Behavior Change Techniques Implemented in Electronic Lifestyle Activity Monitors” is illustrative. Researchers from the University of Texas and the University of Central Florida studied thirteen such applications, concluding that the monitoring devices “contain a wide range of behavior change techniques typically used in clinical behavior interventions.” The researchers concluded that behavior-change operations are proliferating as a result of their migration to digital devices and internet connectivity. They noted that the very possibility of a simple loop designed by and for the consumer seems hopelessly elusive, observing that behavior-change apps “lend themselves… to various types of surveillance” and that “official methods” of securely and simply transmitting data “do not appear to currently exist in these apps.”2
Remember that Google economist Hal Varian extolled the “new uses” of big data that proceed from ubiquitous computer-mediated transactions. Among these he included the opportunity for “continuous experimentation.” Varian noted that Google has its engineering and data science teams consistently running thousands of “A/B” experiments that rely on randomization and controls to test user reactions to hundreds of variations in page characteristics from layout to buttons to fonts. Varian endorsed and celebrated this self-authorizing experimental role, warning that all the data in the world “can only measure correlation, not causality.”3 Data tell what happened but not why it happened. In the absence of causal knowledge, even the best predictions are only extrapolations from the past.
The result of this conundrum is that the last crucial element in the construction of high-quality prediction products—i.e., those that approximate guaranteed outcomes—depends upon causal knowledge. As Varian says, “If you really want to understand causality, you have to run experiments. And if you run experiments continuously, you can continuously improve your system.”4
Because the “system” is intended to produce predictions, “continuously improving the system” means closing the gap between prediction and observation in order to approximate certainty. In an analog world, such ambitions would be far too expensive to be practical, but Varian observes that in the realm of the internet, “experimentation can be entirely automated.”
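To see how little machinery “entirely automated” experimentation requires, consider a minimal sketch, written here in Python purely for illustration. Nothing in it is Google’s actual system: the variant names, the invented click rates, and the promote-the-winner rule are all assumptions. What it does show is the logic Varian describes, in which random assignment, rather than observation alone, supports a causal comparison, and the loop can run continuously without human intervention.

```python
import random

def simulate_visit(variant: str) -> bool:
    """Pretend outcome for one visitor: did they click? The rates are invented."""
    click_rate = {"A": 0.030, "B": 0.034}[variant]
    return random.random() < click_rate

def run_experiment(n_users: int = 100_000) -> dict:
    """Randomly assign each visitor to a page variant and log click-through rates."""
    clicks = {"A": 0, "B": 0}
    visits = {"A": 0, "B": 0}
    for _ in range(n_users):
        variant = random.choice(["A", "B"])  # randomization, not mere correlation
        visits[variant] += 1
        clicks[variant] += simulate_visit(variant)
    return {v: clicks[v] / visits[v] for v in ("A", "B")}

if __name__ == "__main__":
    rates = run_experiment()
    winner = max(rates, key=rates.get)
    print("observed click-through rates:", rates)
    print("variant promoted to baseline:", winner)
```

Run without pause, the winner of each round simply becomes the baseline against which the next variation is tested, which is what “continuously improving the system” amounts to in practice.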
Varian awards surveillance capitalists the privilege of the experimenter’s role, and this is presented as another casual fait accompli. In fact, it reflects the final critical step in surveillance capitalists’ radical self-dealing of new rights. In this phase of the prediction imperative, surveillance capitalists declare their right to modify others’ behavior for profit according to methods that bypass human awareness, individual decision rights, and the entire complex of self-regulatory processes that we summarize with terms such as autonomy and self-determination.
What follows now are two distinct narratives of surveillance capitalists as “experimenters” who leverage their asymmetries of knowledge to impose their will on the unsuspecting human subjects who are their users. The experimental insights accumulated through their one-way mirrors are critical to constructing, fine-tuning, and exploring the capabilities of each firm’s for-profit means of behavioral modification. In Facebook’s user experiments and in the augmented-reality game Pokémon Go (imagined and incubated at Google), we see the commercial means of behavioral modification evolving before our eyes. Both combine the components of economies of action and the techniques of tuning, herding, and conditioning in startling new ways that expose the Greeks secreted deep in the belly of the Trojan horse: the economic orientation obscured behind the veil of the digital.
II. Facebook Writes the Music
In 2012 Facebook researchers startled the public with an article provocatively titled “A 61-Million-Person Experiment in Social Influence and Political Mobilization,” published in the scientific journal Nature.5 In this controlled, randomized study conducted during the run-up to the 2010 US Congressional midterm elections, the researchers experimentally manipulated the social and informational content of voting-related messages in the news feeds of nearly 61 million Facebook users while also establishing a control group.
One group was shown a statement at the top of their news feed encouraging the user to vote. It included a link to polling place information, an actionable button reading “I Voted,” a counter indicating how many other Facebook users reported voting, and up to six profile pictures of the user’s Facebook friends who had already clicked the “I Voted” button. A second group received the same information but without the pictures of friends. A third control group did not receive any special message.
The results showed that users who received the social message were about 2 percent more likely to click the “I Voted” button than those who received the informational message alone and 0.26 percent more likely to click the link to polling place information. The Facebook experimenters determined that social messaging was an effective means of tuning behavior at scale because it “directly influenced political self-expression, information seeking and real-world voting behavior of millions of people,” and they concluded that “showing familiar faces to users can dramatically improve the effectiveness of a mobilization message.”
The team calculated that the manipulated social messages sent 60,000 additional voters to the polls in the 2010 midterm elections, as well as another 280,000 who cast votes as a result of a “social contagion” effect, for a total of 340,000 additional votes. In their concluding remarks, the researchers asserted that “we show the importance of social influence for effecting behavior change… the results suggest that online messages might influence a variety of offline behaviors, and this has implications for our understanding of the role of online social media in society.…”6
The experiment succeeded by producing social cues that “suggested” or “primed” users in ways that tuned their real-world behavior toward a specific set of actions determined by the “experimenters.” In this process of experimentation, economies of action are discovered, honed, and ultimately institutionalized in software programs and algorithms that function automatically, continuously, ubiquitously, and pervasively. Facebook’s surplus is aimed at solving one problem: how and when to intervene in the state of play that is your daily life in order to modify your behavior and thus sharply increase the predictability of your actions now, soon, and later. The challenge for surveillance capitalists is to learn how to do this effectively, automatically, and, therefore, economically, as a former Facebook product manager writes:
Experiments are run on every user at some point in their tenure on the site. Whether that is seeing different size ad copy, or different marketing messages, or different call-to-action buttons, or having their feeds generated by different ranking algorithms.… The fundamental purpose of most people at Facebook working on data is to influence and alter people’s moods and behavior. They are doing it all the time to make you like stories more, to click on more ads, to spend more time on the site. This is just how a website works, everyone does this and everyone knows that everyone does this.7
The Facebook study’s publication evoked fierce debate as experts and the wider public finally began to reckon with Facebook’s—and the other internet companies’—unprecedented power to persuade, influence, and ultimately manufacture behavior. Harvard’s Jonathan Zittrain, a specialist in internet law, acknowledged that it was now possible to imagine Facebook quietly engineering an election, using means that its users could neither detect nor control. He described the Facebook experiment as a challenge to “collective rights” that could undermine “the right of people as a whole… to enjoy the benefits of a democratic process.…”8
Public concern failed to destabilize Facebook’s self-authorizing practice of behavior modification at scale. Even as the social influence experiment was being debated in 2012, a Facebook data scientist was already collaborating with academic researchers on a new study, “Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks,” submitted to the prestigious Proceedings of the National Academy of Sciences in 2013, where it was edited by a well-known Princeton social psychologist, Susan Fiske, and published in June 2014.
This time the experimenters “manipulated the extent to which people (N = 689,003) were exposed to emotional expressions in their News Feed.”9 The experiment was structured like one of those allegedly benign A/B tests. In this case one group was exposed to mostly positive messages in their news feed and the other to predominantly negative messages. The idea was to test whether even subliminal exposure to specific emotional content would cause people to change their own posting behavior to reflect that content. It did. Whether or not users felt happier or sadder, the tone of their expression changed to reflect their news feed.
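The structure of such a manipulation can be pictured with a hypothetical sketch, again in Python and again an assumption rather than Facebook’s code or methodology: the arm names, word lists, and filtering rate below are invented. The skeleton is simply random assignment to a condition, silent withholding of a share of emotionally charged posts from the feed, and measurement of the sentiment of what the user subsequently writes.

```python
import random

ARMS = ("reduced_negative", "reduced_positive", "control")
POSITIVE = {"great", "happy", "love", "wonderful"}
NEGATIVE = {"awful", "sad", "hate", "terrible"}

def sentiment(post: str) -> int:
    """Crude sentiment score: positive word count minus negative word count."""
    words = post.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def assign_arm() -> str:
    """Randomly assign a user to one experimental condition."""
    return random.choice(ARMS)

def filter_feed(feed, arm, drop_rate=0.5):
    """Withhold a share of emotionally charged posts, depending on the arm."""
    kept = []
    for post in feed:
        score = sentiment(post)
        targeted = (arm == "reduced_negative" and score < 0) or (
            arm == "reduced_positive" and score > 0
        )
        if targeted and random.random() < drop_rate:
            continue  # the post is silently withheld from this user's feed
        kept.append(post)
    return kept

def mean_post_sentiment(user_posts) -> float:
    """Outcome measure: average sentiment of what the user later posts."""
    return sum(sentiment(p) for p in user_posts) / max(len(user_posts), 1)

if __name__ == "__main__":
    feed = ["what a wonderful day", "this traffic is awful", "lunch was fine"]
    arm = assign_arm()
    print(arm, filter_feed(feed, arm))
```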
The experimental results left no doubt that once again Facebook’s carefully designed, undetectable, and uncontestable subliminal cues reached beyond the screen into the daily lives of hundreds of thousands of naive users, predictably actuating specific qualities of emotional expression through processes that operate outside the awareness of their human targets, just as Stuart MacKay had originally prescribed for Galapagos turtles and Canadian elk (see Chapter 7). “Emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness,” the researchers proclaimed. “Online messages influence our experience of emotions, which may affect a variety of offline behaviors.” The team celebrated its work as “some of the first experimental evidence to support the controversial claims that emotions can spread throughout a network,” and they reflected on the fact that even their relatively minimal manipulation had a measurable effect, albeit a small one.10
What Facebook researchers failed to acknowledge in either experiment is that a person’s susceptibility to subliminal cues and his or her vulnerability to a “contagion” effect is largely dependent upon empathy: the ability to understand and share in the mental and emotional state of another person, including feeling another’s feelings and being able to take another’s point of view—sometimes characterized as “affective” or “cognitive” empathy. Psychologists have found that the more a person can project himself or herself into the feelings of another and take the other’s perspective, the more likely he or she is to be influenced by subliminal cues, including hypnosis. Empathy orients people toward other people. It allows one to get absorbed in emotional experience and to resonate with others’ experiences, including unconsciously mimicking another’s facial expressions or body language. Contagious laughing and even contagious yawning are examples of such resonance.11
Empathy is considered essential to social bonding and emotional attachment, but it can also trigger “vicarious anxiety” for victims or others who are genuinely distressed. Some psychologists have called empathy a “risky strength” because it predisposes us to experience others’ happiness but also their pain.12 The successful tuning evident in both Facebook experiments is the result of the effective exploitation of the natural empathy present in its population of users.
The Facebook researchers claimed that the results suggested two inferences. First, in a massive and engaged population such as Facebook users, even small effects “can have large aggregated consequences.” Second, the authors invited readers to imagine what might be accomplished with more-significant manipulations and larger experimental populations, noting the importance of their findings for “public health.”
Once again, public outcry was substantial. “If Facebook can tweak emotions and make us vote, what else can it do?” the Guardian asked. The Atlantic quoted the study’s editor, who had processed the article for publication despite her apparent misgivings.13 She told the magazine that as a private company, Facebook did not have to adhere to the legal standards for experimentation required of academic and government researchers.
These legal standards are known as the “Common Rule.” Designed to protect against the abuse of the experimenter’s power, these standards must be adhered to by all federally funded research. The Common Rule enforces procedures for informed consent, avoidance of harm, debriefing, and transparency, and it is administered by panels of scientists, known as “institutional review boards,” appointed within every research institution. Fiske acknowledged that she had been persuaded by Facebook’s argument that the experimental manipulation was an unremarkable extension of the corporation’s standard practice of manipulating people’s news feeds. As Fiske recounted, “They said… that Facebook apparently manipulates people’s News Feeds all the time.… Who knows what other research they’re doing.”14 In other words, Fiske recognized that the experiment was merely an extension of Facebook’s standard practices of behavioral modification, which already flourish without sanction.
Facebook data scientist and principal researcher Adam Kramer was deluged with hundreds of media queries, leading him to write on his Facebook page that the corporation really does “care” about its emotional impact. One of his coauthors, Cornell’s Jeffrey Hancock, told the New York Times that he didn’t realize that manipulating the news feeds, even modestly, would make some people feel violated.15 The Wall Street Journal reported that the Facebook data science group had run more than 1,000 experiments since its inception in 2007 and operated with “few limits” and no institutional review board. Writing in the Guardian, psychology professor Chris Chambers summarized that “the Facebook study paints a dystopian future in which academic researchers escape ethical restriction by teaming up with private companies to test increasingly dangerous or harmful interventions.”16