AI Superpowers


by Kai-Fu Lee


  Let’s begin that process.

  First, I want to examine three of the most popular policy suggestions for adapting to the AI economy, many of them emanating from Silicon Valley. These three are largely “technical fixes,” tweaks to policy and business models that seek to smooth the transition but do not actually shift the culture. After examining the uses and weaknesses of these technical fixes, I propose three analogous changes that I believe will both alleviate the jobs problem and push us toward a deeper social evolution.

  More than mere technical fixes, these constitute new approaches to job creation spanning the private sector, investing, and government policy. These approaches take as their goal not just keeping humans one step ahead of AI automation but actually opening new avenues to increased prosperity and human flourishing. Together, I believe they lay the groundwork for a new social contract that uses AI to build a more humanistic world.

  THE CHINESE PERSPECTIVE ON AI AND JOBS

  Before diving into the technical fixes proposed by Silicon Valley, let’s first look at how this conversation is unfolding in China. To date, China’s tech elite have said very little about the possible negative impact of AI on jobs. Personally, I don’t believe this silence is due to any desire to hide the dark truth from the masses—I think they genuinely believe there is nothing to fear in the jobs impact of AI advances. In this sense, China’s tech elites are aligned with the techno-optimistic American economists who believe that in the long run, technology always leads to more jobs and greater prosperity for all.

  Why does a Chinese entrepreneur believe in that with such conviction? For the past forty years, Chinese people have watched as their country’s technological progress acted as the rising tide that lifted all boats. The Chinese government has long emphasized technological advances as key to China’s economic development, and that model has proved highly successful in recent decades, moving China from a predominantly agricultural society to an industrial juggernaut and now an innovation powerhouse. Inequality has certainly increased over this same period of time, but those downsides have paled in comparison to the broad-based improvement in livelihoods. It makes a stark contrast to the stagnation and decline felt in many segments of American society, part of the “great decoupling” between productivity and wages we explored in previous chapters. It also helps explain why Chinese technologists appear unconcerned with the potential jobs impact of their innovations.

  Even among the Chinese entrepreneurs who do foresee a negative AI impact, there is a pervasive sense that the Chinese government will take care of all the displaced workers. This idea isn’t without basis. During the 1990s, China undertook a series of wrenching reforms to its bloated state-owned companies, shedding millions of workers from government payrolls. But despite the massive labor-market disruptions, the strength of the national economy and a far-reaching government effort to help workers manage the transition combined to successfully transform the economy without widespread unemployment. Looking into the AI future, many technologists and policymakers share an unspoken belief that these same mechanisms will help China avoid an AI-induced job crisis.

  Personally, I believe these predictions are too optimistic, so I am working to raise consciousness in China, as I am in the United States, regarding the momentous employment challenges that await us in the age of AI. It is important that Chinese entrepreneurs, technologists, and policymakers take these challenges seriously and begin laying the groundwork for creative solutions. But the cultural mentality described above—one that is reinforced by four decades of growing prosperity—means that we see little discussion of the crisis in China and even less in the way of proposed solutions. To engage with that conversation, we must turn again to Silicon Valley.

  THE THREE R’S: REDUCE, RETRAIN, AND REDISTRIBUTE

  Many of the proposed technical solutions for AI-induced job losses coming out of Silicon Valley fall into three buckets: retraining workers, reducing work hours, or redistributing income. Each of these approaches aims to adjust a different variable within the labor market (skills, time, compensation) and embodies different assumptions about the speed and severity of job losses.

  Those advocating the retraining of workers tend to believe that AI will slowly shift which skills are in demand, but that if workers can adapt their abilities and training, there will be no decrease in the need for labor. Advocates of reducing work hours believe that AI will reduce the demand for human labor and feel that this impact could be absorbed by moving to a three- or four-day work week, spreading the jobs that do remain over more workers. The redistribution camp tends to be the most dire in its predictions of AI-induced job losses. Many in this camp predict that as AI advances, it will so thoroughly displace workers that no amount of training or tweaking of hours will be sufficient. Instead, we will have to adopt more radical redistribution schemes to support unemployed workers and spread the wealth created by AI. Below, I take a closer look at the value and pitfalls of each of these approaches.

  Advocates of job retraining often point to two related trends as crucial for creating an AI-ready workforce: online education and “lifelong learning.” They believe that with the proliferation of online education platforms—both free and paid—displaced workers will have unprecedented access to training materials and instruction for new jobs. These platforms—video streaming sites, online coding academies, and so on—will give workers the tools they need to become lifelong learners, constantly updating their skills and moving into new professions that are not yet subject to automation. In this envisioned world of fluid retraining, unemployed insurance brokers can use online education platforms like Coursera to become software programmers. And when that job becomes automated, they can use those same tools to retrain for a new position that remains out of reach for AI, perhaps as an algorithm engineer or as a psychologist.

  Lifelong learning via online platforms is a nice idea, and I believe retraining workers will be an important piece of the puzzle. It can particularly help those individuals within the bottom-right quadrant of our risk-of-replacement charts from chapter 6 (the “Slow Creep” zone) stay ahead of AI’s ability to think creatively or work in unstructured environments. I also like that this method can give these workers a sense of personal accomplishment and agency in their own lives.

  But given the depth and breadth of AI’s impact on jobs, I fear this approach will be far from enough to solve the problem. As AI steadily conquers new professions, workers will be forced to change occupations every few years, rapidly trying to acquire skills that it took others an entire lifetime to build up. Uncertainty over the pace and path of automation makes things even more difficult. Even AI experts have difficulty predicting exactly which jobs will be subject to automation in the coming years. Can we really expect a typical worker choosing a retraining program to accurately predict which jobs will be safe a few years from now?

  I fear workers will find themselves in a state of constant retreat, like animals fleeing relentlessly rising flood waters, anxiously hopping from one rock to another in search of higher ground. Retraining will help many people find their place in the AI economy, and we must experiment with ways to scale this up and make it widely available. But I believe we cannot count on this haphazard approach to address the macro-level disruptions that will sweep over labor markets.

  To be clear, I do believe that education is the best long-term solution to the AI-related employment problems we will face. The previous millennia of progress have demonstrated human beings’ incredible ability both to innovate technically and to adapt to those innovations by training ourselves for new kinds of work. But the scale and speed of the coming changes from AI will not give us the luxury of simply relying on educational improvements to help us keep pace with the changing demands of our own inventions.

  Recognition of the scale of these disruptions has led people like Google cofounder Larry Page to advocate a more radical proposition: let’s move to a four-day work week or have multiple people “share” the same job. In one version of this proposal, a single full-time job could be split into several part-time jobs, spreading the increasingly scarce resource of jobs across a larger pool of workers. These approaches would likely mean reduced take-home pay for most workers, but they could at least help people avoid outright unemployment.

  Some creative approaches to work-sharing have already been implemented. Following the 2008 financial crisis, several U.S. states implemented work-sharing arrangements to avoid mass layoffs at companies whose business suddenly dried up. Instead of laying off a portion of workers, companies reduced hours for several workers by 20 to 40 percent. The local government then compensated those workers for a certain percentage of their lost wages, often 50 percent. This approach worked well in some places, saving employees and companies the disruptions of firing and rehiring at the whim of the business cycle. It also potentially saved local governments money that would have gone to paying full unemployment benefits.
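The arithmetic of such an arrangement is straightforward. As a rough sketch (the 20 to 40 percent hour reduction and the 50 percent wage compensation come from the programs described above; the salary figure is a hypothetical I've chosen for illustration), a worker's net income under work-sharing can be computed as:

```python
def work_share_income(full_salary, hours_cut, compensation_rate=0.5):
    """Estimate a worker's annual income under a work-share arrangement.

    full_salary: normal annual pay before the hours reduction
    hours_cut: fraction of hours reduced (e.g., 0.20 for a 20% cut)
    compensation_rate: share of lost wages the government replaces
    """
    lost_wages = full_salary * hours_cut
    # Worker keeps the reduced salary plus partial government compensation.
    return full_salary - lost_wages + lost_wages * compensation_rate

# A hypothetical $50,000 worker whose hours are cut by 20%, with the
# government replacing 50% of the lost wages:
print(work_share_income(50_000, 0.20))  # 45000.0
# The same worker with a 40% cut in hours:
print(work_share_income(50_000, 0.40))  # 40000.0
```

Note that even with the subsidy, the worker's income falls, which foreshadows the long-term weakness of this approach discussed below.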

  Work-share arrangements could blunt job losses, particularly for professions in the “Human Veneer” quadrant of our risk-of-replacement graphs, where AI performs the main job task but only a smaller number of workers are needed to interface with customers. If executed well, these arrangements could act as government subsidies or incentives to keep more workers on the company payroll.

  But while this approach works well for short-term disruptions, it may lose traction in the face of AI’s persistent and nonstop decimation of jobs. Existing work-share programs supplement only a portion of lost wages, meaning workers still see a net decline in income. Workers may accept this knock to their income during a temporary economic crisis, but no one desires stagnation or downward mobility over the long term. Telling a worker making $20,000 a year that they can now work four days a week and earn $16,000 is a nonstarter. More creative versions of these programs could correct for this, and I encourage companies and governments to continue experimenting with them. But I fear this kind of approach will be far from sufficient to address the long-term pressures that AI will bring to the labor market. For that, we may have to adopt more radical redistributive measures.

  THE BASICS OF UNIVERSAL BASIC INCOME

  Currently, the most popular of these methods of redistribution is, as mentioned earlier, the universal basic income (UBI). At its core, the idea is simple: every citizen (or every adult) in a country receives a regular income stipend from the government—no strings attached. A UBI would differ from traditional welfare or unemployment benefits in that it would be given to everyone and would not be subject to time limits, job-search requirements, or any constraints on how it could be spent. An alternate proposal, often called a guaranteed minimum income (GMI), calls for giving the stipend only to the poor, turning it into an “income floor” below which no one could fall but without the universality of a UBI.
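The distinction between the two schemes can be made concrete with a small sketch (all dollar figures here are hypothetical, not drawn from any actual proposal): a UBI pays every person the same stipend regardless of earnings, while a GMI only tops up incomes that fall below the floor.

```python
def ubi_payment(income, stipend=12_000):
    """Universal basic income: everyone receives the stipend,
    regardless of what they earn."""
    return stipend

def gmi_payment(income, floor=12_000):
    """Guaranteed minimum income: only those below the floor
    receive a top-up that brings them to it."""
    return max(0, floor - income)

# A person earning $8,000 and a person earning $50,000 (hypothetical):
print(ubi_payment(8_000))    # 12000 - everyone gets the full stipend
print(ubi_payment(50_000))   # 12000
print(gmi_payment(8_000))    # 4000 - topped up to the $12,000 floor
print(gmi_payment(50_000))   # 0 - already above the floor
```

The GMI is cheaper to fund but requires means-testing; the UBI's universality is precisely what makes it simple, and expensive.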

  Funding for these programs would come from steep taxes on the winners of the AI revolution: major technology companies; legacy corporations that adapted to leverage AI; and the millionaires, billionaires, and perhaps even trillionaires who cashed in on these companies’ success. The size of the stipend given is a matter of debate among proponents. Some people argue for keeping it very small—perhaps just $10,000 per year—so that workers still have a strong incentive to find a real job. Others view the stipend as a full replacement for the lost income of a regular job. In this view, a UBI could become a crucial step toward creating a “leisure society,” one in which people are fully liberated from the need to work, and free to pursue their own passions in life.

  Discussion of a UBI or GMI in the United States dates back to the 1960s, when it won support from people as varied as Martin Luther King Jr. and Richard Nixon. At the time, advocates saw a GMI as a simple way to end poverty, and in 1970 President Nixon actually came close to passing a bill that would have granted each family enough money to raise itself above the poverty line. But following Nixon’s unsuccessful push, discussion of a UBI or GMI largely dropped out of public discourse.

  That is, until Silicon Valley got excited about it. Recently, the idea has captured the imagination of the Silicon Valley elite, with industry giants like Sam Altman, president of the prestigious startup accelerator Y Combinator, and Facebook cofounder Chris Hughes sponsoring research and funding basic income pilot programs. Whereas the GMI was initially crafted as a cure for poverty in normal economic times, Silicon Valley’s surging interest treats these programs as solutions to widespread technological unemployment caused by AI.

  The bleak predictions of broad unemployment and unrest have put many of the Silicon Valley elite on edge. People who have spent their careers preaching the gospel of disruption appear to have suddenly woken up to the fact that when you disrupt an industry, you also disrupt and displace real human beings within it. Having founded and funded transformative internet companies that also contributed to gaping inequality, this cadre of millionaires and billionaires appear determined to soften the blow in the age of AI.

  To these proponents, massive redistribution schemes are potentially all that stand between an AI-driven economy and widespread joblessness and destitution. Job retraining and clever scheduling are hopeless in the face of widespread automation, they argue. Only a guaranteed income will let us avert disaster during the jobs crisis that looms ahead.

  How exactly a UBI would be implemented remains to be seen. A research organization associated with Y Combinator is currently running one pilot program in Oakland, California, that gives a thousand families a stipend of a thousand dollars each month for three to five years. The research group will track the well-being and activities of those families through regular questionnaires, comparing them with a control group that receives just fifty dollars per month.

  Many in Silicon Valley see the program through the lens of their own experience as entrepreneurs. They envision the money not only as a kind of broad safety net but as an “investment in the startup of you,” or as one tech writer put it, “VC for the people.” In this worldview, a UBI would give unemployed people a little “personal angel investment” with which they could start a new business or learn a new skill. In his 2017 Harvard commencement speech, Mark Zuckerberg aligned himself with this vision of UBI, arguing that we should explore a UBI so that “everyone has a cushion to try new ideas.”

  From my perspective, I can understand why the Silicon Valley elite have become so enamored with the idea of a UBI: it is a simple, technical solution to an enormous and complex social problem of their own making. But adopting a UBI would constitute a major change in our social contract, one that we should think through carefully and critically. While I support certain guarantees that basic needs will be met, I also believe embracing a UBI as a cure-all for the crisis we face is a mistake and a massive missed opportunity. To understand why, we must look honestly at the motivations behind the frenzy of interest in UBI and also think hard about what kind of society it may create.

  SILICON VALLEY’S “MAGIC WAND” MENTALITY

  In observing Silicon Valley’s surge in interest around UBI, I believe some of that advocacy has emerged from genuine concern for those who will be displaced by new technologies. But I worry that there is also a more self-interested component: Silicon Valley entrepreneurs know that their billions in riches and their role in instigating these disruptions make them an obvious target of mob anger if things ever spin out of control. With that fear fresh in their minds, I wonder if this group has begun casting about for a quick fix to the problems ahead.

  The mixed motivations of these people shouldn’t lead us to outright dismiss the solutions they put forth. This group, after all, includes some of the most creative business and engineering minds in the world today. Silicon Valley’s tendency to dream big, experiment, and iterate will all be helpful as we navigate these uncharted waters.

  But an awareness of these motivations should sharpen our critical engagement with proposals like UBI. We should be aware of the cultural biases that engineers and investors bring with them when tackling a new problem, particularly one with profound social and human dimensions. Most of all, when evaluating these proposed solutions, we must ask what exactly they’re trying to achieve. Are they seeking to ensure that this technology genuinely benefits all people across society? Or are they looking only to avert a worst-case scenario of social upheaval? Are they willing to put in the legwork needed to build new institutions, or merely looking for a quick fix that will assuage their own consciences and absolve them of responsibility for the deeper psychological impacts of automation?

  I fear that many of those in Silicon Valley are firmly in the latter camp. They see UBI as a “magic wand” that can make the myriad economic, social, and psychological downsides of their exploits in the AI age disappear. UBI is the epitome of the “light” approach to problem-solving so popular in the valley: stick to the purely digital sphere and avoid the messy details of taking action in the real world. This approach tends to assume that all problems can be solved by tweaking incentives or shuffling money between digital bank accounts.

  Best of all, it doesn’t place any further burden on researchers to think critically about the societal impacts of the technologies they build; as long as everyone gets that monthly dose of UBI, all is well. The tech elite can go on doing exactly what they planned to do in the first place: building innovative companies and reaping massive financial rewards. Sure, higher taxes required to fund a UBI will cut into those profits to a certain degree, but the vast majority of the financial benefits from AI will still accrue to this elite group.

  Seen in this manner, UBI isn’t a constructive solution that leverages AI to build a better world. It’s a painkiller, something to numb and sedate the people who have been hurt by the adoption of AI. And that numbing effect goes both ways: not only does it ease the pain for those displaced by technology; it also assuages the conscience of those who do the displacing.
