AI Superpowers

by Kai-Fu Lee


  Silicon Valley juggernauts do have some insight into the search and social habits in these countries. But building business, perception, and autonomous AI products will require companies to put real boots on the ground in each market. They will need to install hardware devices and localize AI services for the quirks of North African shopping malls and Indonesian hospitals. Projecting global power outward from Silicon Valley via computer code may not be the long-term answer.

  Of course, no one knows the endgame for this global AI chess match. American companies could suddenly boost their localization efforts, leverage their existing products, and end up dominating all countries except China. Or a new generation of tenacious entrepreneurs in the developing world could use Chinese backing to create local empires impenetrable to Silicon Valley. If the latter scenario unfolds, China’s tech giants wouldn’t dominate the world, but they would play a role everywhere, improve their own algorithms using training data from many markets, and take home a substantial chunk of the profits generated.

  LOOKING AHEAD

  Scanning the AI horizon, we see waves of technology that will soon wash over the global economy and tilt the geopolitical landscape toward China. Traditional American companies are doing a good job of using deep learning to squeeze greater profits from their businesses, and AI-driven companies like Google remain bastions of elite expertise. But when it comes to building new internet empires, changing the way we diagnose illnesses, or reimagining how we shop, move, and eat, China seems poised to seize global leadership. Chinese and American internet companies have taken different approaches to winning local markets, and as these AI services filter out to every corner of the world, they may engage in proxy competition in countries like India, Indonesia, and parts of the Middle East and Africa.

  This analysis sheds light on the emerging AI world order, but it also showcases one of the blind spots in our AI discourse: the tendency to discuss it solely as a horse race. Who’s ahead? What are the odds for each player? Who’s going to win?

  This kind of competition matters, but if we dig deeper into the coming changes, we find that far weightier questions lurk just below the surface. When the true power of artificial intelligence is brought to bear, the real divide won’t be between countries like the United States and China. Instead, the most dangerous fault lines will emerge within each country, and they will possess the power to tear them apart from the inside.

  6

  ★

  Utopia, Dystopia, and the Real AI Crisis

  All of the AI products and services outlined in the previous chapter are within reach based on current technologies. Bringing them to market requires no major new breakthroughs in AI research, just the nuts-and-bolts work of everyday implementation: gathering data, tweaking formulas, iterating algorithms in experiments and different combinations, prototyping products, and experimenting with business models.

  But the age of implementation has done more than make these practical products possible. It has also set ablaze the popular imagination when it comes to AI. It has fed a belief that we’re on the verge of achieving what some consider the Holy Grail of AI research, artificial general intelligence (AGI)—thinking machines with the ability to perform any intellectual task that a human can—and much more.

  Some predict that with the dawn of AGI, machines that can improve themselves will trigger runaway growth in computer intelligence. Often called “the singularity,” or artificial superintelligence, this future involves computers whose ability to understand and manipulate the world dwarfs our own, comparable to the intelligence gap between human beings and, say, insects. Such dizzying predictions have divided much of the intellectual community into two camps: utopians and dystopians.

  The utopians see the dawn of AGI and subsequent singularity as the final frontier in human flourishing, an opportunity to expand our own consciousness and conquer mortality. Ray Kurzweil—the eccentric inventor, futurist, and guru-in-residence at Google—envisions a radical future in which humans and machines have fully merged. We will upload our minds to the cloud, he predicts, and constantly renew our bodies through intelligent nanobots released into our bloodstream. Kurzweil predicts that by 2029 we will have computers with intelligence comparable to that of humans (i.e., AGI), and that we will reach the singularity by 2045.

  Other utopian thinkers see AGI as something that will enable us to rapidly decode the mysteries of the physical universe. DeepMind founder Demis Hassabis predicts that the creation of superintelligence will allow human civilization to solve intractable problems, producing inconceivably brilliant solutions to global warming and previously incurable diseases. Understanding the universe on levels that humans cannot even conceive of, such machines would become not just tools for lightening the burdens of humanity; they would approach the omniscience and omnipotence of a god.

  Not everyone, however, is so optimistic. Elon Musk has called superintelligence “the biggest risk we face as a civilization,” comparing the creation of it to “summoning the demon.” Intellectual celebrities such as the late cosmologist Stephen Hawking have joined Musk in the dystopian camp, many of them inspired by the work of Oxford philosopher Nick Bostrom, whose 2014 book Superintelligence captured the imagination of many futurists.

  For the most part, members of the dystopian camp aren’t worried about the AI takeover as imagined in films like the Terminator series, with human-like robots “turning evil” and hunting down people in a power-hungry conquest of humanity. Superintelligence would be the product of human creation, not natural evolution, and thus wouldn’t have the same instincts for survival, reproduction, or domination that motivate humans or animals. Instead, it would likely just seek to achieve the goals given to it in the most efficient way possible.

  The fear is that if human beings presented an obstacle to achieving one of those goals—reverse global warming, for example—a superintelligent agent could easily, even accidentally, wipe us off the face of the earth. For a computer program whose intellectual imagination so dwarfed our own, this wouldn’t require anything as crude as gun-toting robots. Superintelligence’s profound understanding of chemistry, physics, and nanotechnology would allow for far more ingenious ways to instantly accomplish its goals. Researchers refer to this as the “control problem” or “value alignment problem,” and it’s something that worries even AGI optimists.

  Although timelines for these capabilities vary widely, Bostrom’s book presents surveys of AI researchers, giving a median prediction of 2040 for the creation of AGI, with superintelligence likely to follow within three decades of that. But read on.

  REALITY CHECK

  When utopian and dystopian visions of the superintelligent future are discussed publicly, they inspire both awe and a sense of dread in audiences. Those all-consuming emotions then blur the lines in our mind separating these fantastical futures from our current age of AI implementation. The result is widespread popular confusion over where we truly stand today and where things are headed.

  To be clear, none of the scenarios described above—the immortal digital minds or omnipotent superintelligences—are possible based on today’s technologies; there remain no known algorithms for AGI or a clear engineering route to get there. The singularity is not something that can occur spontaneously, with autonomous vehicles running on deep learning suddenly “waking up” and realizing that they can band together to form a superintelligent network.

  Getting to AGI would require a series of foundational scientific breakthroughs in artificial intelligence, a string of advances on the scale of, or greater than, deep learning. These breakthroughs would need to remove key constraints on the “narrow AI” programs that we run today and empower them with a wide array of new abilities: multidomain learning; domain-independent learning; natural-language understanding; commonsense reasoning, planning, and learning from a small number of examples. Taking the next step to emotionally intelligent robots may require self-awareness, humor, love, empathy, and appreciation for beauty. These are the key hurdles that separate what AI does today—spotting correlations in data and making predictions—and artificial general intelligence. Any one of these new abilities may require multiple huge breakthroughs; AGI implies solving all of them.

  The mistake of many AGI forecasts is to simply take the rapid rate of advance from the past decade and extrapolate it outward or launch it exponentially upward in an unstoppable snowballing of computer intelligence. Deep learning represents a major leveling up in machine learning, a movement onto a new plateau with a variety of real-world uses: the age of implementation. But there is no proof that this upward change represents the beginning of exponential growth that will inevitably race toward AGI, and then superintelligence, at an ever-increasing pace.

  Science is difficult, and fundamental scientific breakthroughs are even harder. Discoveries like deep learning that truly raise the bar for machine intelligence are rare and often separated by decades, if not longer. Implementations and improvements on these breakthroughs abound, and researchers at places like DeepMind have demonstrated powerful new approaches to things like reinforcement learning. But in the twelve years since Geoffrey Hinton and his colleagues’ landmark paper on deep learning, I haven’t seen anything that represents a similar sea change in machine intelligence. Yes, the AI scientists surveyed by Bostrom predicted a median date of 2040 for AGI, but I believe scientists tend to overestimate when an academic demonstration will become a real-world product. To wit, in the late 1980s, I was the world’s leading researcher on AI speech recognition, and I joined Apple because I believed the technology would go mainstream within five years. It turned out that I was off by twenty years.

  I cannot guarantee that scientists definitely will not make the breakthroughs that would bring about AGI and then superintelligence. In fact, I believe we should expect continual improvements to the existing state of the art. But I believe we are still many decades, if not centuries, away from the real thing. There is also a real possibility that AGI is something humans will never achieve. Artificial general intelligence would be a major turning point in the relationship between humans and machines—what many predict would be the most significant single event in the history of the human race. It’s a milestone that I believe we should not cross unless we have first definitively solved all problems of control and safety. But given the relatively slow rate of progress on fundamental scientific breakthroughs, I and other AI experts, among them Andrew Ng and Rodney Brooks, believe AGI remains farther away than often imagined.

  Does that mean I see nothing but steady material progress and glorious human flourishing in our AI future? Not at all. Instead, I believe that civilization will soon face a different kind of AI-induced crisis. This crisis will lack the apocalyptic drama of a Hollywood blockbuster, but it will disrupt our economic and political systems all the same, and even cut to the core of what it means to be human in the twenty-first century.

  In short, this is the coming crisis of jobs and inequality. Our present AI capabilities can’t create a superintelligence that destroys our civilization. But my fear is that we humans may prove more than up to that task ourselves.

  FOLDING BEIJING: SCIENCE-FICTION VISIONS AND AI ECONOMICS

  When the clock strikes 6 a.m., the city devours itself. Densely packed buildings of concrete and steel bend at the hip and twist at their spines. External balconies and awnings are turned inward, creating smooth and tightly sealed exteriors. Skyscrapers break down into component parts, shuffling and consolidating into Rubik’s Cubes of industrial proportions. Inside those blocks are the residents of Beijing’s Third Space, the economic underclass that toils during the night hours and sleeps during the day. As the cityscape folds in on itself, a patchwork of squares on the earth’s surface begins its 180-degree rotation, flipping over to tuck these consolidated structures underground.

  When the other sides of these squares turn skyward, they reveal a separate city. The first rays of dawn creep over the horizon as this new city emerges from its crouch. Tree-lined streets, vast public parks, and beautiful single-family homes begin to unfold, spreading outward until they have covered the surface entirely. The residents of First Space stir from their slumber, stretching their limbs and looking out on a world all their own.

  These are visions of Hao Jingfang, a Chinese science-fiction writer and economics researcher. Hao’s novelette “Folding Beijing” won the prestigious Hugo Award in 2016 for its arresting depiction of a city in which economic classes are separated into different worlds.

  In a futuristic Beijing, the city is divided into three economic castes that split time on the city’s surface. Five million residents of the elite First Space enjoy a twenty-four-hour cycle beginning at 6 a.m., a full day and night in a clean, hypermodern, uncluttered city. When First Space folds up and flips over, the 20 million residents of Second Space get sixteen hours to work across a somewhat less glamorous cityscape. Finally, the denizens of Third Space—50 million sanitation workers, food vendors, and menial laborers—emerge for an eight-hour shift from 10 p.m. to 6 a.m., toiling in the dark among the skyscrapers and trash pits.

  The trash-sorting jobs that are a pillar of the Third Space could be entirely automated but are instead done manually to provide employment for the unfortunate denizens condemned to life there. Travel between the different spaces is forbidden, creating a society in which the privileged residents of First Space can live free of worry that the unwashed masses will contaminate their techno-utopia.

  THE REAL AI CRISIS

  This dystopian story is a work of science fiction but one rooted in real fears about economic stratification and unemployment in our automated future. Hao holds a Ph.D. in economics and management from prestigious Tsinghua University. For her day job, she conducts economics research at a think tank reporting to the Chinese central government, including investigating the impact of AI on jobs in China.

  It’s a subject that deeply worries many economists, technologists, and futurists, myself included. I believe that as the four waves of AI spread across the global economy, they have the potential to wrench open ever greater economic divides between the haves and have-nots, leading to widespread technological unemployment. As Hao’s story so vividly illustrates, these chasms in wealth and class can morph into something much deeper: economic divisions that tear at the fabric of our society and challenge our sense of human dignity and purpose.

  Massive productivity gains will come from the automation of profit-generating tasks, but they will also eliminate jobs for huge numbers of workers. These layoffs won’t discriminate by the color of one’s collar, hitting highly educated white-collar workers just as hard as many manual laborers. A college degree—even a highly specialized professional degree—is no guarantee of job security when competing against machines that can spot patterns and make decisions on levels the human brain simply can’t fathom.

  Beyond direct job losses, artificial intelligence will exacerbate global economic inequality. By giving robots the power of sight and the ability to move autonomously, AI will revolutionize manufacturing, putting third-world sweatshops stocked with armies of low-wage workers out of business. In doing so, it will cut away the bottom rungs on the ladder of economic development. It will deprive poor countries of the opportunity to kick-start economic growth through low-cost exports, the one proven route that has lifted countries like South Korea, China, and Singapore out of poverty. The large populations of young workers that once comprised the greatest advantage of poor countries will turn into a net liability, and a potentially destabilizing one. With no way to begin the development process, poor countries will stagnate while the AI superpowers take off.

  But even within those rich and technologically advanced countries, AI will further cleave open the divide between the haves and the have-nots. The positive-feedback loop generated by increasing amounts of data means that AI-driven industries naturally tend toward monopoly, simultaneously driving down prices and eliminating competition among firms. While small businesses will ultimately be forced to close their doors, the industry juggernauts of the AI age will see profits soar to previously unimaginable levels. This concentration of economic power in the hands of a few will rub salt in the open wounds of social inequality.

  In most developed countries, economic inequality and class-based resentment rank among the most dangerous and potentially explosive problems. The past few years have shown us how a cauldron of long-simmering inequality can boil over into radical political upheaval. I believe that, if left unchecked, AI will throw gasoline on the socioeconomic fires.

  Lurking beneath this social and economic turmoil will be a psychological struggle, one that won’t make the headlines but that could make all the difference. As more and more people see themselves displaced by machines, they will be forced to answer a far deeper question: in an age of intelligent machines, what does it mean to be human?

  THE TECHNO-OPTIMISTS AND THE “LUDDITE FALLACY”

  Like the utopian and dystopian forecasts for AGI, this prediction of a jobs and inequality crisis is not without controversy. A large contingent of economists and techno-optimists believe that fears about technology-induced job losses are fundamentally unfounded.

  Members of this camp dismiss dire predictions of unemployment as the product of a “Luddite fallacy.” The term is derived from the Luddites, a group of nineteenth-century British weavers who smashed the new industrial textile looms that they blamed for destroying their livelihoods. Despite the best efforts and protests of the Luddites, industrialization plowed full steam ahead, and both the number of jobs and quality of life in England rose steadily for much of the next two centuries. The Luddites may have failed in their bid to protect their craft from automation—and many of those directly impacted by automation did in fact suffer stagnant wages for some time—but their children and grandchildren were ultimately far better off for the change.
