by Olaf Groth
How many conversations played out across the country about the best movies to see in 2017? A friend recommends a movie, or a restaurant, or a product. That recommendation has value because it’s supported by all you know or think you know about your friend—their standing, their credibility, and the quality of your relationship with them. Consciously or subconsciously, you assign a weight to that friend’s recommendation. Your mind quickly examines your friend’s trustworthiness, their experience with the subject at hand, and how many good recommendations they’ve made in the past. You know the clear biases your friend might have toward or against the type of movie or the style of restaurant. Their profile and history are, for the most part, transparent to you. Ask the same of a few random colleagues at work, and your insight into their biases declines, as does their credibility in your mind. Get to people six or seven degrees away, and they have almost no credibility whatsoever—unless, of course, someone explains why you can trust them.
In theory, social networks could empower this sort of personal credibility, as their recommendation power emerges from the analyses of millions or billions of other people, among whom sensibilities like yours can be found. At first blush, that sounds exactly like Unanimous AI, bringing together the power of multiple minds to help inform your own. But an algorithm that pulls insights out of collective data and fits the results to individuals merely represents a probability based on correlations with many other people, and not an individual match to who you are. It cannot identify the complex combination of reasons you might not want to watch a particular movie, nor can it understand why your preference might have changed since the last recommendation you accepted. To do that, it would have to actually understand causation, how one thing leads to another in your life, and the machine learning systems of 2018 can’t do that very well. They can’t know why you wanted to watch that cheesy romantic comedy, only that you’re likely to be interested (or not). Causality, that essential ingredient in understanding the way the world works and why, might also be an essential ingredient in getting to the next level of AI systems.
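To make the distinction concrete, here is a minimal, purely illustrative sketch of what a correlation-based recommendation looks like, with invented names and ratings rather than anything drawn from a real service: it predicts how you would rate a movie by weighting other people’s ratings according to how closely their histories correlate with yours, and nothing in it represents a reason.

```python
import numpy as np

# Hypothetical ratings matrix: rows are users, columns are movies,
# 0 means "not yet rated." User 0 stands in for "you."
ratings = np.array([
    [5, 4, 0, 1],   # you
    [5, 5, 4, 1],   # a neighbor with similar taste
    [1, 0, 5, 5],   # a neighbor with opposite taste
    [4, 4, 5, 2],
], dtype=float)

def predict(user: int, movie: int) -> float:
    """Predict a rating by weighting other users' ratings for the movie
    by how strongly their rating histories correlate with this user's."""
    scores, weights = 0.0, 0.0
    for other in range(len(ratings)):
        if other == user or ratings[other, movie] == 0:
            continue
        both = (ratings[user] > 0) & (ratings[other] > 0)  # movies both rated
        if both.sum() < 2:
            continue
        sim = np.corrcoef(ratings[user, both], ratings[other, both])[0, 1]
        if np.isnan(sim) or sim <= 0:
            continue
        scores += sim * ratings[other, movie]
        weights += sim
    return scores / weights if weights else 0.0

# How would "you" rate movie 2? A correlation-weighted average, not a reason.
print(round(predict(0, 2), 2))
```

The output is a weighted average of other people’s opinions, a probability dressed up as a suggestion; swap in a few hundred million rows and you have the skeleton of the systems described above, still with no notion of why.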
In coming years, we’ll learn more about how formidable Unanimous and the notion of swarm intelligence could become, but its idea of empowering people by keeping them in the loop and partnering them with AI systems—embedding human values and wisdom within the AI system itself—hints toward a richer, more valuable, and more potent form of cognition. We have collectively opted to entrust more of our decisions to the recommendations of the global Internet titans, their algorithms, and the vague sort of trust we place in them. This might sound like a first step toward a trusting and hyperconnected society, but it also lends itself to the manipulation of power. As François Chollet, an AI and machine learning software engineer at Google, noted in a March 2018 Twitter rant: “Facebook can simultaneously measure everything about us, and control the information we consume. When you have access to both perception and action, you’re looking at an AI problem.” Human minds are “highly vulnerable to simple patterns of social manipulation,” Chollet tweeted, and serious dangers lurk in the types of closed loops that allow a company to both observe the state of its “targets” and keep tuning the information it feeds them. (As others on Twitter quickly noted, Google and the other AI titans can wield a similar power.)
Yet our ability to shape our own lives, influence those around us, and choose our own pathways remains the most personal source of power. To be sure, how well we exercise that agency depends in part on the quality of information we receive and how well we can process it. But in the end, our cognition and our ability to influence the cognition of others are themselves a form of power. The developing generations of artificial intelligence will lay us bare to the power of other entities to shape our lives, but they will also help us develop and exercise our own agency and authority.
We can’t go back now. These cognitive machines already shape human cognition, consciousness, and choice. They already thrive on our desire to simplify our lives and ease the time pressures we face. This is power we gladly grant in return for convenience or pleasure, and it is also power that is usurped from us without our knowledge or consent. If we hope to capitalize on the positive potential these technologies could deliver in the next decade or two, we need to forge a new balance of power in the here and now. It’s still our choice, not theirs.
3
Think Symbiosis
The moment embedded itself in the cultural zeitgeist as one of the most memorable technological achievements in modern history, but Patrick Wolff and most of his peers saw it coming for years. “It was a big deal, but we knew it was going to happen,” he says years later. “It happened, and life went on.”
In the late 1980s, as Wolff began ascending the ranks of US chess grandmasters, the idea of a computer beating a human remained the stuff of science fiction—a far-off, but foreseeable, milepost that would signal the real arrival of artificial intelligence. The best computer-based game of the day, SARGON, could give most players a serviceable game of chess, but the digital competition was laughable for those with advanced skills. Up until then, players followed magazines and word of mouth to keep up with the latest chess news, tracking their own analyses, observations, and ideas in paper notebooks. But around the time Wolff was winning the US Junior Championship in 1987, professional chess players had started to rely on ChessBase and similar databases, and within a few years every grandmaster carried a laptop. They’d get floppy disks with the games played by all the grandmasters, enter their own games, and compile their notes digitally. They could query the database for specific types of openings or defenses.
In 1992, Wolff won his first US Chess Championship, and would take a second title three years later. By then, computer programs had gotten good enough at the game that even the world’s best could take them seriously. Grandmasters could still handle their digital opponents with relative ease, but they could augment their analyses of the game by sparring with a decent piece of software. Make a move, leave the computer on overnight to consider its millions of options, and it might make an intriguing move by the morning, Wolff recalls. Still, the path ahead was clear—computer processing power would continue to advance and developers would continue to improve game-playing software. “It was pretty obvious to me and most grandmasters that it was just a question of when,” he says.
The moment finally arrived on May 11, 1997, when IBM’s Deep Blue beat world chess champion Garry Kasparov on its second attempt. The supercomputer’s 3.5 to 2.5 victory made headlines around the world, made Kasparov a household name, and remains an iconic moment in humanity’s high-tech achievements to this very day. Yet, life went on in the chess world. Shortly after the turn of the century, commercially available computer systems with dedicated chess-playing software could challenge, and often beat, some of the best chess minds in the world. And over the subsequent ten years or so, the relationship between grandmasters and their digital challengers grew ever more symbiotic.
Then AlphaZero happened, and the chess community still hasn’t fully absorbed what it might lead to next, Wolff says. Grandmasters already knew software could process chess better than they could, so they learned how to integrate these powerful digital tools into their workflow to enhance their own game. But those applications essentially worked by executing handcrafted programs that followed these steps: start with constantly updated opening theory, optimize probabilities during game play, and then close out with established endgame procedures. Google DeepMind’s AlphaZero developed its expertise from nothing but the rules of the game, playing against itself and rapidly improving via reinforcement learning. Once the system was trained, DeepMind’s developers matched it up in a hundred games against Stockfish, the best traditional computer program going at the time. Playing with the first-move advantage of the white pieces, AlphaZero won twenty-five and drew the other twenty-five games. Playing with black, AlphaZero drew forty-seven and won three. Most grandmasters agree that a perfectly played game ends in a draw, so AlphaZero did very well.
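For readers who want a feel for the self-play idea, and only the idea, the toy sketch below learns a simple take-away game from nothing but its rules by playing against itself and nudging a value table toward the outcomes it observes. The game, the parameters, and the update rule are all invented for this illustration; AlphaZero’s actual combination of deep neural networks and tree search is vastly more sophisticated.

```python
import random

# Toy game: a pile of stones; each player removes 1, 2, or 3; whoever takes
# the last stone wins. The learner knows only these rules and improves by
# playing itself and updating a value table toward observed outcomes.
PILE_SIZE, MOVES, EXPLORE, LEARN_RATE = 15, (1, 2, 3), 0.1, 0.2
value = {0: 0.0}  # value[pile] = estimated chance that the player to move wins

def pick_move(pile: int) -> int:
    """Mostly choose the move that leaves the opponent in the worst position,
    but explore a random legal move some of the time."""
    legal = [m for m in MOVES if m <= pile]
    if random.random() < EXPLORE:
        return random.choice(legal)
    return min(legal, key=lambda m: value.get(pile - m, 0.5))

def self_play_game() -> None:
    """Play one game against ourselves, then nudge every visited state
    toward the final result, alternating perspective each ply."""
    pile, visited = PILE_SIZE, []
    while pile > 0:
        visited.append(pile)            # position faced by the player to move
        pile -= pick_move(pile)
    outcome = 1.0                       # the player who just moved won
    for state in reversed(visited):
        old = value.get(state, 0.5)
        value[state] = old + LEARN_RATE * (outcome - old)
        outcome = 1.0 - outcome         # the other player faced the prior state

for _ in range(20_000):
    self_play_game()

# Piles that are multiples of 4 are lost for the player to move; after
# training, their learned values should be noticeably lower than the rest.
print({pile: round(v, 2) for pile, v in sorted(value.items())})
```

No openings, no endgame tables, no human games: just rules, repetition, and feedback, which is the essence of what made AlphaZero’s approach such a break from its handcrafted predecessors.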
To get a sense of how well, consider the Elo rating, which the chess world uses to gauge player quality. It’s a four-digit number that currently appears to top out around 3,600. (It used to go to about 2,900, but computers pushed it beyond that level.) Magnus Carlsen, the top-ranked grandmaster who won his first world championship in 2013 and had yet to lose the title at the time this book was written, posted a standard Elo rating around 2,840. Stockfish rated around 3,400, a score that suggests it would be expected to win 97 percent of its games against a player of Carlsen’s strength, Wolff explains. AlphaZero’s rating nudged past Stockfish’s in just four hours of training, leveling off right around the 3,500 threshold, according to the information released by DeepMind researchers. “The AlphaZero moment? It was a moment for me,” Wolff says. “Holy shit. I spent months teaching myself about machine learning and deep neural networks. I needed to understand what the hell was going on.”
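Those percentages fall out of the standard Elo expectancy formula, which converts a rating gap into an expected score, with a win counting as one point and a draw as half. The short sketch below applies it to the approximate ratings cited here; the numbers are illustrative, not DeepMind’s.

```python
def elo_expected_score(rating_a: float, rating_b: float) -> float:
    """Expected score for player A against player B under the standard
    Elo model (a win counts as 1, a draw as 0.5)."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# Approximate ratings cited in the text (illustrative only).
carlsen, stockfish, alphazero = 2840, 3400, 3500

print(f"Stockfish vs. Carlsen:   {elo_expected_score(stockfish, carlsen):.1%}")
print(f"AlphaZero vs. Stockfish: {elo_expected_score(alphazero, stockfish):.1%}")
```

A gap of roughly 560 points works out to an expected score in the mid-90s, in line with the figure Wolff cites, while AlphaZero’s roughly 100-point edge over Stockfish implies about 64 percent, which is exactly what twenty-eight wins and seventy-two draws in a hundred games add up to.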
As he researched the technologies and the ten games about which DeepMind researchers released full details, he realized that AlphaZero, for all its sheer computing and cognitive power, still lacked something innate to human grandmasters: a conceptual understanding of the game. AlphaZero can calculate four orders of magnitude faster than any human, and it can use that power to generalize across a wide range of potential options on a chess or Go board, but it can’t conceptualize a position or put it into language. Consider, for example, a game Wolff observed in early 2018, during which one of the players sacrificed his knight in what proved to be an elegant maneuver. To a certain extent, Wolff explains, he could understand the play through pattern recognition, but he also naturally wondered, upon the initial move, why the player would put his knight in a clearly threatened position. “I could understand there was a reason that move was selected,” he says, “so I could look at what features changed and (ask) ‘Is there a way to use that reason?’” In other words, Wolff was trying to work backward from what he was seeing, trying to figure out the larger, conceptual theory behind the move. What was the chess player seeing that he wasn’t yet?
“When I look at the board, I often see an image,” Wolff says. “That opens up potential theories or ways to play and win.” While AlphaZero’s neural nets favor certain positions over others, the system doesn’t appear to develop the same conceptual understanding, which could guide it toward an optimal set of moves. That hasn’t limited AlphaZero’s prowess at chess, but it’s worth noting because it could have consequences in other domains where a conceptual understanding might produce more holistic solutions to problems. Wolff calls it the difference between raw skill and the sort of imagination that creates conceptual images in the human brain. For the time being, anyway, human grandmasters possess more conceptual knowledge and can imagine a successful theory of winning, even if they’re overwhelmed by AlphaZero’s skill. Yet, what happens when we combine both?
These days, Wolff says, virtually every grandmaster will train with a computer, integrating it into their routine to help hone their game and develop new concepts—perhaps coming up with a more intricate opening, or contemplating ways to counter novel tactics deployed by opponents. Many of them compete in “advanced chess” or “centaur chess” tournaments, in which human-machine pairs vie against each other. The combination can produce sublime results, says Wolff, who retired from professional chess in 1997 but still enjoys following the top echelons of the human-machine game. “It’s like watching gods play,” he says. “It’s incredible, the quality of chess they play.”
SYMBIO-INTELLIGENCE
Most traditional case studies used in business schools and other university settings leave the students who read and discuss them a relatively narrow set of choices. However open-ended the scenarios these exercises set out to simulate, they’re inevitably bound by a small number of potential responses. The experiential nature of the exercise typically imparts a deeper understanding of the material and issues at hand—at least better than most lectures do—but the format falls far short of real-life experience. John Beck and his colleagues haven’t created a lifelike educational experience yet, but they’ve moved a lot closer to that than any case study could.
Beck stumbled across the seeds of a powerful new idea while coauthoring The Attention Economy with Thomas Davenport. While researching the book, which argues that companies need to capture, manage, and keep the market’s attention, Beck discovered that video games drew and held people’s attention far better than almost anything else. He always figured information technology could play a greater role in education, but he never spent much time thinking about how. Somewhere around 2015, something clicked, and Interactive Learning Experiences, or I-L-X, was born. “It’s the kind of stuff you wish you could really do in a good case study, but you don’t have the time or capacity,” Beck says. “Case studies are very narrow . . . but here we have 10^150 possible different outcomes—and even that’s a lot less than real life on any decision you’re making.”
I-L-X uses a video game engine to engage students, but it’s not really a video game. It constantly evaluates what a student is learning along the way, blending traditional teaching, video game play, and a kind of scripted entertainment, he explains. The scripted piece of the lesson matters, so the experience might never become an end-to-end AI system, but it already integrates elements of machine learning and could eventually include a range of other AI technologies. But at its core, what makes I-L-X special is the interaction between the human and the experience provided by the machine. It tracks every single move the players make throughout the game, down to the amount of time they take to make their decisions. “We have a really good sense of what paths people are taking based on what information they’re seeing,” he says, “and we can change every single element of this game on the fly.” By changing the content in a dialogue box in one situation, they can understand how users might change their decisions five or six steps down the road.
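As a purely hypothetical sketch of the kind of instrumentation that description implies (none of these names or structures come from I-L-X itself), the snippet below logs every decision and how long it took, tags each session with the content variant it was shown, and tallies the choices players make at a later step.

```python
import time
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Session:
    variant: str                       # which wording of a dialogue box was shown
    events: list = field(default_factory=list)

    def record(self, step: str, choice: str, started: float) -> None:
        """Log a decision along with how long the player took to make it."""
        self.events.append({
            "step": step,
            "choice": choice,
            "seconds": round(time.time() - started, 2),
        })

def choices_by_variant(sessions: list, step: str) -> dict:
    """Count the choices players made at a downstream step, grouped by the
    content variant they saw earlier in the experience."""
    counts = defaultdict(lambda: defaultdict(int))
    for session in sessions:
        for event in session.events:
            if event["step"] == step:
                counts[session.variant][event["choice"]] += 1
    return {variant: dict(c) for variant, c in counts.items()}

# Simulate two sessions that saw different wording in the same dialogue box.
a, b = Session(variant="wording_A"), Session(variant="wording_B")
start = time.time()
a.record("step_5", "negotiate", start)
b.record("step_5", "walk_away", start)
print(choices_by_variant([a, b], "step_5"))
```

Scaled up across many players and many content tweaks, this is the sort of loop Beck describes: change one element, then watch where the decisions diverge a few steps later.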
For now, much of the expertise and analysis comes from Beck’s thirty years of teaching experience, but he already has ideas about how current and future AI integration could increase complexity, enhance the sociological and emotional context of the stories, and inject a range of current events. For example, in one class back in 1998, Beck first used a current event—the crash of Swissair Flight 111 out of New York City—in an educational “war game” about terrorism and the airline industry. Now, he revels in the idea of using AI technologies to introduce current events into games in real time, bringing a deeper context to the students’ interactive experience.