Films from the Future
The problem is, just as with research that tries to tie facial features, head shape, or genetic heritage to a propensity to engage in criminal behavior, fMRI research is equally susceptible to human biases. It’s not so much that we can collect data on brain activity that’s problematic; it’s how we decide what data to collect, and how we end up interpreting and using it, that’s the issue.
A large part of the challenge here is understanding what the motivation is behind the research questions being asked, and what subtle underlying assumptions are nudging a complex series of scientific decisions toward results that seem to support these assumptions.
Here, there’s a danger of being caught up in the misapprehension that the scientific method is pure and unbiased, and that it’s solely about the pursuit of truth. To be sure, science is indeed one of the best tools we have to understand the reality of how the world around us and within us works. And it is self-correcting—ultimately, errors in scientific thinking cannot stand up to the scrutiny the scientific method exposes them to. Yet this self-correcting nature of science takes time, sometimes decades or centuries. And until it self-corrects, science is deeply susceptible to human foibles, as phrenology, eugenics, and other misguided ideas have all too disturbingly shown.
This susceptibility to human bias is greatly amplified in areas where the scientific evidence we have at our disposal is far from certain, and where complex statistics are needed to tease out what we think is useful information from the surrounding noise. And this is very much the case with behavioral studies and fMRI research. Here, limited studies on small numbers of people that are carried out under constrained conditions can lead to data that seem to support new ideas. But we’re increasingly finding that many such studies aren’t reproducible, or that they are not as generalizable as we at first thought. As a result, even if a study does one day suggest that a brain scan can tell if you’re likely to steal the office paper clips, or murder your boss, the prediction itself is likely to be extremely suspect, and certainly not one that has any place in informing legal action—or any form of discriminatory action—before any crime has been committed.
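To get a feel for the statistical point, consider a quick simulation (a sketch with made-up numbers, not a model of any actual fMRI study): even when there is no real effect at all, a fraction of small studies will appear to find one purely by chance.

```python
# A small simulation of why under-powered studies mislead: even when there is
# no real difference between two groups, some small studies will "find" one.
# The numbers here are arbitrary and illustrative only.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_studies, n_per_group = 1_000, 15  # many small studies, no true difference

false_positives = 0
for _ in range(n_studies):
    group_a = rng.normal(size=n_per_group)
    group_b = rng.normal(size=n_per_group)  # drawn from the same distribution
    if ttest_ind(group_a, group_b).pvalue < 0.05:
        false_positives += 1

# Roughly one in twenty studies reports a "significant" effect that isn't there.
print(f"{false_positives / n_studies:.0%} of studies 'detect' a nonexistent effect")
```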
Machine Learning-Based Precognition
Just as in Minority Report, the science and speculation around behavior prediction challenges our ideas of free will and justice. Is it just to restrict and restrain people based on what someone’s science predicts they might do? Probably not, because embedded in the “science” are value judgments about what sort of behavior is unwanted, and what sort of person might engage in such behavior. More than this, though, the notion of pre-justice challenges the very idea that we have some degree of control over our destiny. And this in turn raises deep questions about determinism versus free will. Can we, in principle, know enough to fully determine someone’s actions and behavior ahead of time, or is there sufficient uncertainty and unpredictability in the world to make free will and choice valid ideas?
In Chapter Two and Jurassic Park, we were introduced to the ideas of chaos and complexity, and these, it turns out, are just as relevant here. Even before we have the science pinned down, it’s likely that the complexities of the human mind, together with the incredibly broad and often unusual panoply of things we all experience, will make predicting what we do all but impossible. As with Mandelbrot’s fractal, we will undoubtedly be able to draw boundaries around more or less likely behaviors. But within these boundaries, even with the most exhaustive measurements and the most powerful computers, I doubt we will ever be able to predict with absolute certainty what someone will do in the future. There will always be an element of chance and choice that determines our actions.
Despite this, the idea that we can predict whether someone is going to behave in a way that we consider “good” or “bad” remains a seductive one, and one that is increasingly being fed by technologies that go beyond fMRI.
In 2016, two scientists released the results of a study in which they used machine learning to train an algorithm to identify criminals based on headshots alone.40 The study was highly contentious and resulted in a significant public and academic backlash, leading the paper’s authors to state in an addendum to the paper, “Our work is only intended for pure academic discussions; how it has become a media consumption is a total surprise to us.”41
Their work hit a nerve for many people because it seemed to reinforce the idea that criminal behavior is something that can be predicted from measurable physiological traits. But more than this, it suggested that a computer could be trained to read these traits and classify people as criminal or non-criminal, even before they’ve committed a crime.
The authors vehemently resisted suggestions that their work was biased or inappropriate, and took pains to point out that others were misinterpreting it. In fact, in their addendum, they point out, “Nowhere in our paper advocated the use of our method as a tool of law enforcement, nor did our discussions advance from correlation to causality.”
Nevertheless, in the original paper, they conclude: “After controlled for race, gender and age, the general law-biding [sic] public have facial appearances that vary in a significantly lesser degree than criminals.” It’s hard to interpret this as anything other than a conclusion that machines and artificial intelligence could be developed that distinguish between people who have criminal tendencies and those who do not.
Part of why this is deeply disturbing is that it taps into the issue of “algorithmic bias”—our ability to create artificial-intelligence-based apps and machines that reflect the unconscious (and sometimes conscious) biases of those who develop them. Because of this, there’s a very real possibility that an artificial judge and jury that relies only on what you look like will reflect the prejudices of its human instructors.
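A toy example may help show how this happens. In the sketch below (entirely synthetic data and invented feature names, nothing taken from the study above), a model is trained on labels produced by a biased process; it ends up looking impressively “accurate” while having learned nothing except the bias itself.

```python
# A minimal, hypothetical sketch of how algorithmic bias arises: the model
# never sees actual behavior, only labels produced by a biased labeling
# process, so it faithfully learns and reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# A "feature" that has nothing to do with behavior...
irrelevant_feature = rng.normal(size=n)

# ...but the historical labels were assigned by a biased process that happens
# to correlate with that feature (for example, who happened to be arrested).
biased_labels = (irrelevant_feature + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(irrelevant_feature.reshape(-1, 1), biased_labels)

# The classifier scores well against the biased labels, yet it has learned
# nothing about behavior, only the bias baked into its training data.
print("accuracy against biased labels:",
      model.score(irrelevant_feature.reshape(-1, 1), biased_labels))
```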
This research is also disturbing because it takes us out of the realm of people interpreting data that may or may not be linked to behavioral tendencies, and into the world of big data and autonomous machines. Here, we begin to enter a space where we have not only trained computers to do our thinking for us, but no longer know how they’re thinking. In a worrying twist of irony, we are using our increasing understanding of how the human brain works to develop and train artificial brains whose inner workings we understand less and less.
In other words, if we’re not careful, in our rush to predict and preempt undesirable human behavior, we may end up creating machines that exhibit equally undesirable behavior, precisely because they are unpredictable.
Big Brother, Meet Big Data
Despite being set in a technologically advanced future, one of the more intriguing aspects of Minority Report is that it falls back on human intuition when interpreting the precog data feed. In the opening sequences, Chief Anderton performs an impromptu “ballet” of preemptive deduction, as he turns up the music and weaves the disjointed images being fed through from the three precogs into a coherent narrative. This is a world where, perhaps ironically, given the assumption that human behavior is predictable, intuition and creativity still have an edge over machines.
Anderton’s professional skills tap into a deep belief that there’s more to the human mind than its simply being the biological equivalent of a digital computer—even a super-powerful one. As the movie opens, Anderton is responsible for fitting together a puzzle of fragmented information. And, as he aligns the pieces and fills the gaps, he draws connections between snippets of information that seem irrelevant or disjointed to the untrained eye; the skill he demonstrates draws on the sum total of his experiences as a living human being. This is adeptly illustrated as Anderton pins down the location of an impending murder by recognizing inconsistencies in two images that, he deduces, could only be due to a child riding an old-fashioned merry-go-round.
This small intuitive leap is deeply comforting to us as viewers. It confirms to us that there’s something uniquely special about people, and it suggests that we are more than the sum of the chemicals, cells, and organs we’re made of. It also affirms a belief that we cannot simply be defined by what we look like, or by the electrical and chemical processes going on inside our head.
But are we right in this belief that we are more than the sum of our parts? What if we could be reduced to massive amounts of data that not only determine who we are, but how we will act and react in any given situation?
Questions like this would have been hypothetical, bordering on the fantastical, not so long ago. Certainly, as a species, we’ve toyed for centuries with the idea that people are simply complex yet ultimately predictable biological machines (chaos theory notwithstanding). But it’s only recently that we’ve had the computing power to start capturing the minutiae of ourselves and the world around us and using them in what’s increasingly called “big data.”
“Big data”—which when all’s said and done is just a fancy way of saying massive amounts of information that we can do stuff with—has its roots in human genome sequencing. Our genetic code has three billion discrete pieces of information, or base pairs, that help define us biologically. Compared to the storage capacity of early computers, this is a stupendously large amount of information, far more than could easily be handled by the computing systems of the 1970s and 1980s, or even the 1990s, when the initiative to decode the complete human genome really took off. But, as we began to understand the power of digital computing, scientists started to speculate that, if we could decode the human genome and store it in computer databases, we would have the key to the code of life.
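Some rough arithmetic shows why this counted as an enormous dataset at the time (the two-bits-per-letter figure is my simplifying assumption; real sequence files, with their quality scores and metadata, are considerably larger):

```python
# Back-of-the-envelope sketch of the raw size of the human genome.
# Assumes 3 billion base pairs and 2 bits per base (A, C, G, or T).
base_pairs = 3_000_000_000
bits_per_base = 2  # four possible letters -> 2 bits each

total_bytes = base_pairs * bits_per_base / 8
print(f"~{total_bytes / 1e9:.2f} GB of raw sequence data")  # ~0.75 GB
```

Three-quarters of a gigabyte is trivial for today’s machines, but it dwarfed the storage that most research computers could muster in the 1970s and 1980s.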
With hindsight, they were wrong. As it turns out, decoding the human genome is just one small step toward understanding how we work. But this vision of identifying and cataloguing every piece of our genome caught hold, and in the late 1990s it led to one of the biggest sets of data ever created. It also spawned a whole new area of technology involving how we collect, store, analyze, and use massive amounts of data, and this is what is now known colloquially as Big Data.
As we’ve since discovered, the ability to store three billion base pairs of genetic code in computer databases barely puts us in the foothills of understanding human biology. The more we find out, the more complex we discover life is. But the idea that the natural world can be broken down into its constituent parts, uploaded into cyberspace, and played around with there remains a powerful one. And there’s still a belief held by some that, if we have a big enough computer memory and a powerful enough processor, we could in principle encode every aspect of the physical and biological world and reproduce it virtually.
This is the idea behind movies like The Matrix (which sadly didn’t make the cut for this book) where most people are unwittingly playing out their lives inside a computer simulation. It also underpins speculations that arise every now and again that we are all, in fact, living inside a computer simulation, but just don’t know it. There are even researchers working to estimate the probability that this is indeed the case.42
This is an extreme scenario that comes out of our growing ability to collect, process, and manipulate unimaginable amounts of data. It’s also one that has some serious flaws, as our technology is rarely as powerful as our imaginations would like it to be. Yet the data revolution we’re currently living through is still poised to impact our lives in quite profound ways, including our privacy.
Despite the Precrime program’s reliance on human skills and intuition, Minority Report is set in a future where big data has made privacy a thing of the past—almost. As John Anderton passes through public spaces, he’s bombarded by personal ads as devices identify him from his retinal scan. And, like a slick salesperson who knows his every weakness, they tempt him to indulge in some serious retail therapy.
These ads are a logical extension of what most of us already experience with online advertisements. Websites are constantly sucking up our browsing habits and trying to second-guess what we might be tempted to purchase, or which sites we might be persuaded to visit. These online ads are based on a sophisticated combination of browsing history, personal data, and machine learning. Powerful algorithms are being trained to collect our information, watch our online habits, predict what we might be interested in, and place ads in front of us that, they hope, will nudge our behavior. And it’s not only purchases. Increasingly, online behavior is being used to find ways of influencing what people think and how they act—even down to how they vote. As I write this, we’re still experiencing the fallout from Cambridge Analytica’s manipulations of Facebook feeds that were designed to influence users, and there’s growing concern over the use of fake news and social media to influence people’s ideas and behaviors.
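Stripped to its bones, the logic behind these ads is surprisingly simple. Here is a deliberately oversimplified sketch (the categories, weights, and ads are invented for illustration, not drawn from any real system): build a profile from browsing history, score each candidate ad against it, and show the best match.

```python
# A toy sketch of ad targeting: score candidate ads against a profile built
# from browsing history, then show the highest-scoring one. Real systems use
# far richer data and learned models; these values are made up.
browsing_profile = {"running shoes": 0.8, "travel": 0.3, "politics": 0.1}

candidate_ads = {
    "marathon gear sale": {"running shoes": 1.0},
    "cheap flights":      {"travel": 1.0},
    "campaign donation":  {"politics": 1.0},
}

def score(ad_topics, profile):
    # Higher score = closer match between the ad's topics and the user's history.
    return sum(profile.get(topic, 0.0) * weight for topic, weight in ad_topics.items())

best_ad = max(candidate_ads, key=lambda name: score(candidate_ads[name], browsing_profile))
print("ad selected:", best_ad)  # "marathon gear sale"
```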
Admittedly, targeted online messaging is still clumsy, but it’s getting smarter and subtler. Currently it’s largely driven by the massive amounts of data that organizations are collecting on our browsing habits. But imagine if these data extended to everything we did—where we are, who we’re with, what we’re doing, even what we’re saying. We’re frighteningly close to a world where some system somewhere holds data on nearly every aspect of our lives, and the only things preventing the widespread use of these “engines of persuasion” are our collective scruples and privacy laws.
Minority Report is surprisingly prescient when it comes to some aspects of big data. It paints a future where what people do in the real world as well as online is collected, analyzed, and ultimately used in ways that directly affect them. In the movie, these massive repositories of personal data are not used to determine if you’re going to commit a crime—this remains the sacred domain of humans in John Anderton’s world—but they are used to nudge people’s behavior toward what benefits others more than themselves.
This is, of course, what marketing is all about. Marketers use information to understand how they can persuade people to act in a certain way, whether this is to purchase organic food, or to buy a new car, or to vote for a particular political candidate. Big data massively expands the possibilities for manipulation and persuasion. And this is especially the case when it’s coupled to machine learning, and the increasing ability of artificial-intelligence-based systems to join the data dots, and even interpolate what’s missing from the data they do have. Here, we’re no longer just talking about how big data combined with smart algorithms can help identify future criminals and curtail their antisocial tendencies, but about how corporations, governments, and others can subtly influence people’s behavior to do what they want. It’s a subtler and more Machiavellian approach to achieving what is essentially the same thing—controlling people.
Frighteningly, the world portrayed in Minority Report is not that far away. We still lack the ability to identify people through simple and ubiquitous scans, but we’re almost there. Real-time facial recognition, for instance, is almost at the point where, if you’re captured on camera, the chances are that someone has the capability of identifying and tracking you. And our digital fingerprint—the sum total of the digital breadcrumbs we scatter around us in our daily lives—is becoming easier to follow, and harder to cover up. As ubiquitous identity monitoring is increasingly matched to massive data files on every single one of us, we’re going to have to make some tough decisions over how much of our personal freedom we are willing to concede for the benefits these new technologies bring.43
Even more worrying, perhaps, is the number of people who are already conceding their personal freedom without even thinking about it. How many of us use digital personal assistants like Siri, Google Home, or Alexa, or rely on cloud-connected home automation devices, or even internet-connected cars? And how many of us read the small print in the user agreement before signing up for the benefits these technologies provide? We are surrounded by an increasing number of devices that are collecting personal data on us and combining it in ever-growing databases. And while we’re being wowed by the lifestyle advantages these bring, they’re potentially setting us up to be manipulated in ways that are so subtle, we won’t even know they’re happening. But the use of big data doesn’t stop there.
In 2003, a group of entrepreneurs set up the company Palantir, named after J. R. R. Tolkien’s seeing-stones in The Lord of the Rings. The company excels at using big data to detect, monitor, and predict behavior, based on myriads of connections between what is known about people and organizations, and what can be inferred from the information that’s available. The company largely flew under the radar for many years, working with other companies and intelligence agencies to extract as much information as possible out of massive data sets. But in recent years, Palantir’s use in “predictive policing” has been attracting increasing attention. And in May 2018, the grassroots organization Stop LAPD Spying Coalition released a report raising concerns over the use of Palantir and other technologies by the Los Angeles Police Department for predicting where crimes are likely to occur, and who might commit them.44
Palantir is just one of an increasing number of data collection and analytics technologies being used by law enforcement to manage and reduce crime. In the US, much of this comes under the banner of the “Smart Policing Initiative,” which is sponsored by the US Bureau of Justice Assistance. Smart Policing aims to develop and deploy “evidence-based, data-driven law enforcement tactics and strategies that are effective, efficient, and economical.” It’s an initiative that makes a lot of sense, as evidence-based and data-driven crime prevention is surely better than the alternatives. Yet there’s growing concern that, without sufficient due diligence, seemingly beneficial data and AI-based approaches to policing could easily slip into profiling and “managing people” before they commit a criminal act. Here, we’re replacing Minority Report’s precogs with massive data sets and AI algorithms, but the intent is remarkably similar: Use every ounce of technology we have to predict who might commit a crime, and where and when, and intervene to prevent the “bad” people causing harm.
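At its crudest, place-based prediction of this kind can boil down to something like the sketch below (a toy illustration with made-up grid cells, not how any particular vendor’s system works): assume future incidents will cluster where past recorded incidents did, and flag those cells for extra attention.

```python
# A minimal sketch of one "predictive policing" idea: forecast hotspots from
# historical incident counts alone. The grid cells and counts are invented.
# It also hints at the feedback risk: patrolling flagged cells generates more
# recorded incidents there, which reinforces the original prediction.
from collections import Counter

past_incidents = [(2, 3), (2, 3), (5, 1), (2, 3), (7, 7), (5, 1)]  # (x, y) grid cells

counts = Counter(past_incidents)
hotspots = [cell for cell, _ in counts.most_common(2)]
print("predicted hotspots:", hotspots)  # [(2, 3), (5, 1)]
```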
Naturally, despite the benefits of data-driven crime prevention (and they are many), irresponsible use of big data in policing opens the door to unethical actions and manipulation, just as is seen in Minority Report. Yet here, real life is perhaps taking us down an even more worrying path.