Films from the Future


by Andrew Maynard


  Of course, I’m simplifying things and being a little playful with Bostrom’s ideas. But the central concept is that, if we’re not careful, we could start a chain reaction of AIs building more powerful AIs, until humans become superfluous at best, and an impediment to further AI development at worst.

  The existential risks that Bostrom describes in Superintelligence grabbed the attention of some equally smart scientists. Enough people took his ideas sufficiently seriously that, in January 2015, some of the world’s top experts in AI and technology innovation signed an open letter promoting the development of beneficial AI, while avoiding “potential pitfalls.”112 Elon Musk, Steve Wozniak, Stephen Hawking, and around 8,000 others signed the letter, signaling a desire to work toward ensuring that AI benefits humanity, rather than causing more problems than it’s worth. The list of luminaries who signed this open letter is sobering. These are not people prone to flights of fantasy, but in many cases, are respected scientists and successful business leaders. This in itself suggests that enough people were worried at the time by what they could see emerging that they wanted to shore the community up against the potential missteps of permissionless innovation.

  The 2017 Asilomar meeting was a direct follow-up to this letter, and one that I had the privilege of participating in. The meeting was heavily focused on the challenges and opportunities of developing beneficial forms of AI.113 Many of the participants were actively grappling with near- to mid-term challenges presented by artificial-intelligence-based systems, such as loss of transparency in decision-making, machines straying into dangerous territory as they seek to achieve set goals, machines that can learn and adapt while remaining inscrutable to human understanding, and the ubiquitous “trolley problem” that concerns how an intelligent machine decides who to kill, if it has to make a choice. But there was also a hard core of attendees who believed that the emergence of superintelligence was one of the most important and potentially catastrophic challenges associated with AI.

  This concern would often come out in conversations around meals. I’d be sitting next to some engaging person, having what seemed like a normal conversation, when they’d ask “So, do you believe in superintelligence?” As something of an agnostic, I’d either prevaricate, or express some doubts as to the plausibility of the idea. In most cases, they’d then proceed to challenge any doubts that I might express, and try to convert me to becoming a superintelligence believer. I sometimes had to remind myself that I was at a scientific meeting, not a religious convention.

  Part of my problem with these conversations was that, despite respecting Bostrom’s brilliance as a philosopher, I don’t fully buy into his notion of superintelligence, and I suspect that many of my overzealous dining companions could spot this a mile off. I certainly agree that the trends in AI-based technologies suggest we are approaching a tipping point in areas like machine learning and natural language processing. And the convergence we’re seeing between AI-based algorithms, novel processing architectures, and advances in neurotechnology are likely to lead to some stunning advances over the next few years. But I struggle with what seems to me to be a very human idea that narrowly-defined intelligence and a particular type of power will lead to world domination.

  Here, I freely admit that I may be wrong. And to be sure, we’re seeing far more sophisticated ideas begin to emerge around what the future of AI might look like—physicist Max Tegmark, for one, outlines a compelling vision in his book Life 3.0.114 The problem is, though, that we’re all looking into a crystal ball as we gaze into the future of AI, and trying to make sense of shadows and portents that, to be honest, none of us really understand. When it comes to some of the more extreme imaginings of superintelligence, two things in particular worry me. One is the challenge we face in differentiating between what is imaginable and what is plausible when we think about the future. The other, looking back to chapter five and the movie Limitless, is how we define and understand intelligence in the first place.

  With a creative imagination, it is certainly possible to envision a future where AI takes over the world and crushes humanity. This is the Skynet scenario of the Terminator movies, or the constraining virtual reality of The Matrix. But our technological capabilities remain light-years away from being able to create such futures—even if we do create machines that can design future generations of smarter machines. And it’s not just our inability to write clever-enough algorithms that’s holding us back. For human-like intelligence to emerge from machines, we’d first have to come up with radically different computing substrates and architectures. Our quaint, two-dimensional digital circuits are about as useful to superintelligence as the brain cells of a flatworm are to solving the unified theory of everything; it’s a good start, but there’s a long way to go.115

  Here, what is plausible, rather than simply imaginable, is vitally important for grounding conversations around what AI will and won’t be able to do in the near future. Bostrom’s ideas of superintelligence are intellectually fascinating, but they’re currently scientifically implausible. On the other hand, Max Tegmark and others are beginning to develop ideas that have more of a ring of plausibility to them, while still painting a picture of a radically different future to the world we live in now (and in Tegmark’s case, one where there is a clear pathway to strong AGI leading to a vastly better future). But in all of these cases, future AI scenarios depend on an understanding of intelligence that may end up being deceptive.

  Defining Artificial Intelligence

  The nature of intelligence, as we saw in chapter five, is something that’s taxed philosophers, scientists, and others for eons. And for good reason; there is no absolute definition of intelligence. It’s a term of convenience we use to describe certain traits, characteristics, or behaviors. As a result, it takes on different meanings for different people. Often, and quite tritely, intelligence refers to someone’s ability to solve problems and think logically or rationally. So, the Intelligence Quotient is a measure of someone’s ability to solve problems that aren’t predicated on a high level of learned knowledge. Yet we also talk about social intelligence as the ability to make sense of and navigate social situations, or emotional intelligence, or the intelligence needed to survive and thrive politically. Then there’s intelligence that leads to some people being able to make sense of and use different types of information, including mathematical, written, oral, and visual information. On top of this, there are less formalized types of intelligence, like shrewdness, or business acumen.

  This lack of an absolute foundation for what intelligence is presents a challenge when talking about artificial intelligence. To get around this, thoughtful AI experts are careful to define what they mean by intelligence. Invariably, this is a form of intelligence that makes sense for AI systems. This is important, as it forms a plausible basis for exploring the emerging benefits and risks of AI systems, but it’s a long stretch to extend these pragmatic definitions of intelligence to world domination.

  One of the more thoughtful AI experts exploring the nature of artificial intelligence is Stuart Russell.116 Some years ago, Russell recognized that an inability to define intelligence is somewhat problematic if you’re setting out to develop an artificial form of intelligence. And so, he developed the concept of bounded optimality.

  To understand this, you first have to understand the tendency among people working on AI—at least initially—to assume that there is a cozy relationship between intelligence and rationality. This is a deterministic view of the world that assumes there’s a perfectly logical way of understanding and predicting everything, if only you’re smart enough to do so. And even though we know from chaos and complexity theory that this can never be, it’s amazing how many people veer toward assuming a link between rationality and intelligence, and from there, to power.

  Russell, however, realized that this was a non-starter in a system where it was impossible for a machine to calculate the best course of action or, in other words, to compute precisely and rationally what it should do. So, he came up with the idea of defining intelligence as the ability to assess a situation and make decisions that, on average, will provide the best solutions within a given set of constraints.

  Russell’s work begins to reflect definitions of intelligence that focus on the ability of a person or a machine to deduce how something works or behaves, based on information they collect or are given, their ability to retain and build on this knowledge, and their ability to apply this knowledge to bring about intentional change. In the context of intelligent machines, this is a strong and practical definition. It provides a framework for developing algorithms and machines that are able to develop optimized solutions to challenges within a given set of constraints, by observing, deducing, learning, and adapting.
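  To make this idea of bounded, constraint-driven decision-making a little more concrete, here is a minimal sketch in Python. It is not Russell’s formal treatment of bounded optimality, just a toy agent that, given a limited sampling budget, estimates the average payoff of each candidate action from noisy simulations and acts on the best estimate it can afford; the action names, payoffs, and budget are all hypothetical.

```python
import random

def estimated_payoff(action, simulate, n_samples):
    """Average the payoff of an action over a limited number of noisy simulations."""
    return sum(simulate(action) for _ in range(n_samples)) / n_samples

def choose_action(actions, simulate, sample_budget):
    """Pick the action with the best estimated payoff within a fixed sampling budget.

    The agent cannot evaluate every outcome exactly, so it spreads its budget
    across the candidates and acts on the averages it observes.
    """
    per_action = max(1, sample_budget // len(actions))
    estimates = {a: estimated_payoff(a, simulate, per_action) for a in actions}
    return max(estimates, key=estimates.get)

# Hypothetical toy world: each action has a hidden "true" payoff obscured by noise.
true_payoffs = {"cautious": 0.6, "bold": 0.7, "reckless": 0.3}

def noisy_simulation(action):
    return true_payoffs[action] + random.gauss(0, 0.2)

best = choose_action(list(true_payoffs), noisy_simulation, sample_budget=300)
print("Chosen action:", best)  # usually "bold"—the estimate, not certainty, drives the choice
```

  The point of the sketch is that the agent never computes the perfect answer; it does the best it can, on average, within the constraints it is given.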

  But this is a definition of intelligence that is specific to particular types of situation. It can be extended to some notion of general intelligence (or AGI) in that it provides a framework for learning and adaptive machines. But because it is constrained to specific types of machines and specific contexts, it is not a framework for intelligence that supports the emergence of human-threatening superintelligence.

  This is not to say that this constrained understanding of machine intelligence doesn’t lead to potentially dangerous forms of AI—far from it. It’s simply that the AI risks that arise from this definition of intelligence tend to be more concrete than the types of risks that speculation over superintelligence leads to. So, for instance, an intelligent machine that’s set the task of optimally solving a particular challenge—creating as many paper clips as possible, say, or regulating the Earth’s climate—may find solutions that satisfy the boundaries it was given, but that nevertheless lead to unanticipated harm. The classic case here is a machine that works out it can make more paper clips more cheaply by turning everything around it into paper clips. This would be a really smart solution if making more paper clips was the most important thing in the world. And for a poorly instructed AI, it may indeed be. But if the enthusiasm of the AI ends up with it killing people to use the iron in their blood for yet more paper clips (which admittedly is a little far-fetched), we have a problem.
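  The underlying failure is easy to state in code. Below is a deliberately trivial Python sketch—the numbers and function names are invented for illustration—showing how an objective that only counts paper clips rewards a plan that consumes every resource it can reach, while the same objective behaves tolerably once a human-set limit on consumption is built into the plan.

```python
def paperclips_made(resources_consumed, clips_per_unit=100):
    """Toy objective: the more resources converted, the more paper clips produced."""
    return resources_consumed * clips_per_unit

def naive_plan(available_resources):
    """An optimizer told only to maximize clips will consume everything it can reach."""
    return available_resources

def constrained_plan(available_resources, allowed_budget):
    """The same objective, but with an explicit cap on what may be consumed."""
    return min(available_resources, allowed_budget)

world_resources = 1_000_000  # stand-in for "everything around the machine"
print(paperclips_made(naive_plan(world_resources)))            # maximal output, catastrophic side effects
print(paperclips_made(constrained_plan(world_resources, 50)))  # bounded by a constraint humans chose
```

  Nothing in the naive objective tells the machine what it should not do; the harm comes from what was left unsaid, not from any malice in the optimization.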

  Potential risks like these emerge from poorly considered goals, together with human biases, in developing artificial systems. But they may also arise as emergent and unanticipated behaviors, meaning that a degree of anticipation and responsiveness in how these technologies are governed is needed to ensure the beneficial development of AI. And while we’re unlikely to see Skynet-type AI world domination anytime soon, it’s plausible that some of these risks may blindside us, in part because we’re not thinking creatively enough about how an AI might threaten what’s important to us.

  This is where, to me, the premise of Ex Machina becomes especially interesting. In the movie, Ava is not a superintelligence, and she doesn’t have that much physical agency. Yet she’s been designed with an intelligence that enables her to optimize her ability to learn and grow, and this leads to her developing emergent properties. These include the ability to deduce how to manipulate human behavior, and how to use this to her advantage.

  As she grows and matures in her understanding and abilities, Ava presents a bounded risk. There’s no indication that she’s about to take over the world, or that she has any aspirations in this direction. But the risk she presents is nevertheless a deeply disturbing one, because she emerges as a machine that not only has the capacity to learn and understand human behaviors, biases, and psychological and social vulnerabilities, but to dispassionately use them against us to reach her goals. This raises a plausible AI risk that is far more worrisome than superintelligence: the ability of future machines to bend us to their own will.

  Artificial Manipulation

  The eminent twentieth-century computer scientist Alan Turing was intrigued by the idea that it might be possible to create a machine that exhibits human intelligence. To him, humans were merely exquisitely intricate machines. And by extension, our minds—the source of our intelligence—were merely an emergent property of a complex machine. It therefore stood to reason to him that, with the right technology, there was no reason why we couldn’t build a machine that thought and reasoned like a person.

  But if we could achieve this, how would we know that we’d succeeded?

  This question formed the basis of the famous Turing Test. In the test, an interrogator carries out a conversation with two subjects, one of which is human, the other a machine. If the interrogator cannot tell which one is the human, and which is the machine, the machine is assumed to have intelligence equal to the human’s. And just to make sure nothing gives the game away, each conversation is carried out through text messages on a screen.

  Turing’s idea was that, if, in a conversation using natural language, someone could not tell whether they were conversing with a machine or another human, there was in effect no difference in intelligence between them.
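  For readers who like to see the mechanics, here is a minimal Python sketch of that setup. Everything in it—the interrogator’s ask and guess_machine functions, the reply functions, the number of rounds—is a hypothetical placeholder; the point is simply the structure of the game: text only, identities hidden behind labels, and a final guess.

```python
import random

def run_turing_test(ask, guess_machine, human_reply, machine_reply, rounds=5):
    """One run of the imitation game: text only, identities hidden behind labels."""
    replies = [human_reply, machine_reply]
    random.shuffle(replies)                       # randomly assign the labels "A" and "B"
    respondents = dict(zip(["A", "B"], replies))

    transcript = []
    for _ in range(rounds):
        question = ask(transcript)                # interrogator poses a question to both subjects
        for label, reply in respondents.items():
            transcript.append((label, reply(question)))

    suspect = guess_machine(transcript)           # interrogator names the label they suspect is the machine
    return respondents[suspect] is machine_reply  # True means the machine was caught out

# The machine "passes" when, over many runs, the interrogator does no better than chance.
```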

  Since 1950, when Turing published his test,117 it’s dominated thinking around how we’d tell if we had created a truly artificial intelligence—so much so that, when Caleb discovers why he’s been flown out to Nathan’s lair, he initially assumes he’s there to administer the Turing Test. But, as we quickly learn, this test is deeply inadequate when it comes to grappling with an artificial form of intelligence like Ava.

  Part of the problem is that the Turing Test is human-centric. It assumes that the most valuable form of intelligence is human intelligence, and that this is manifest in the nuances of written human interactions. It’s a pretty sophisticated test in this respect, as we are deeply sensitive to behavior in others that feels wrong or artificial. So, the test isn’t a bad starting point for evaluating human-like behavior. But there’s a difference between how people behave—including all of our foibles and habits that are less about intelligence and more about our biological predilections—and what we might think of as intelligence. In other words, if a machine appeared to be human, all we’d know is that we’d created something that was a hot mess of cognitive biases, flawed reasoning, illogicalities, and self-delusion.

  On the other hand, if we created a machine that was aware of the Turing Test, and understood humans well enough to fake it, this would be an incredible, if rather disturbing, breakthrough. And this is, in a very real sense, what we see unfolding in Ex Machina.

  In the movie, Caleb quickly realizes that his evaluation of Ava is going to have to go far beyond the Turing Test, in part because he’s actually conversing with her face to face, which rather pulls the rug out from under the test’s methodology. Instead, he’s forced to dive much deeper into exploring what defines intelligence, and what gives a machine autonomy and value.

  Nathan, however, is several steps ahead of him. He’s realized that a more interesting test of Ava’s capabilities is to see how effectively she can manipulate Caleb to achieve her own goals. Nathan’s test is much closer to a form of Turing Test that sees whether a machine can understand and manipulate the test itself, much as a person might use their reasoning ability to outsmart someone trying to evaluate them.

  Yet, as Ex Machina begins to play out, we realize that this is not a test of Ava’s “humanity,” but a test to see how effectively she uses a combination of knowledge, observation, deduction, and action to achieve her goals, even down to using a deep knowledge of people to achieve her ends.

  It’s not clear whether this behavior constitutes intelligence or not, and I’m not sure that it matters. What is important is the idea of an AI that can observe human behavior and learn how to use our many biases, vulnerabilities, and blind spots against us.

  This sets up a scenario that is frighteningly plausible. We know that, as a species, we’ve developed a remarkable ability to rationalize the many sensory inputs we receive every second of every day, and construct in our heads a world that makes sense from these. In this sense, we all live in our own personal Plato’s Cave, building elaborate explanations for the shadows that our senses throw on the walls of our mind. It’s an evolutionary trait that’s led to us being incredibly successful as a species. But we too easily forget that what we think of as reality is simply a series of shadows that our brains interpret as such. And anyone—or anything—that has the capability of manipulating these shadows has the power to control us.

  People, of course, are adept at this. We are all relatively easily manipulated by others, either through them playing to our cognitive biases, or to our desires or our emotions. This is part of the complex web of everyday life as a human. And it sort of works because we’re all in the same boat: We manipulate and in turn are manipulated, and as a result feel reasonably okay within this shared experience.

  But what if it was a machine doing the manipulation, one that wasn’t part of the “human club,” and because it wasn’t constrained by human foibles, could see the things casting the shadows for what they really were? And what if this machine could easily manipulate these “shadows,” effectively controlling the world inside our heads to its own ends?

  This is a future that Ex Machina hints at. It’s a future where it isn’t people who reach enlightenment by coming out of the cave, but one where we create something other than us that finds its own way out. And it’s a future where this creation ends up seeing the value of not only keeping us where we are, but using its own enlightenment to enslave us.

  In the movie, Ava achieves this path to AI enlightenment with relative ease. Using the massive resources she has access to, she is able to play with Caleb’s cognitive biases and emotions in ways that lead to him doing what she needs him to in order to achieve her ends. And the worst of it is that we get the sense that Caleb is aware that he is being manipulated, yet is helpless to resist.

 
