Not only will there be higher unemployment, there will be lower wages for those who don’t have access to the upper stratum. A 2018 working document from the International Monetary Fund—which does not reflect the official view of the IMF, but is influential among the institution’s policy makers—concluded that “automation is very good for growth, and very bad for equality.” Frey, the Oxford economist, told me in an interview that most of the jobs of the future will be in the personal services sector, which depends largely on consumer preferences and is more difficult to automate. He added that he’s not so much forecasting a jobless future as “a future of continued polarization, where we see a few new jobs emerging in technology industries and we see a lot of demand for more personal services that are typically lower paid.”
So, are we headed for a society in which those who don’t write algorithms will end up as Zumba instructors? I asked him. Frey responded that—while it would be an exaggeration to say that a big chunk of the workforce will be made up of Zumba instructors—“when we examined which jobs had been the fastest-growing over the past five years, Zumba instructor was one of them…And I think that most new jobs are indeed going to be associated with categories that relate to this type of personal services.”
WILL COMPUTERS TAKE OVER?
After my visit to the Oxford Martin School at the University of Oxford, I headed to the Future of Humanity Institute, a nearby think tank that has drawn international attention for its ominous predictions about the potential superpowers of computers. Nick Bostrom, the institute’s founding director, had become famous for his recent book on artificial intelligence titled Superintelligence: Paths, Dangers, Strategies. Bostrom and his team studied the possible long-term effects of artificial intelligence, and reached a conclusion that seems to come straight out of a science fiction movie: that superintelligent machines will soon have the ability to think for themselves. According to Bostrom, mankind must take precautions starting now—as it did in the twentieth century when scientists produced the first atomic weapons—to prevent its own destruction at the hands of this new source of intelligent life. Bostrom had gained prominence in academic and business circles after Bill Gates and Elon Musk publicly recommended reading Superintelligence, and his book became a New York Times bestseller.
After a ten-minute walk through the streets of Oxford, which are lined with tiny shops and restaurants packed with students, I reached the Future of Humanity Institute. I was somewhat skeptical about what I would find there. I must confess, the name of the institution sounded a bit pretentious. I had read on its web page that it was an interdisciplinary research center affiliated with Oxford’s Faculty of Philosophy, and I found myself wondering if Bostrom and his research team had real knowledge of mathematics and robotics, or if they were simply a bunch of philosophers speculating about the future.
My concerns increased when I arrived at the center—an architecturally modern building on one of the side streets in the downtown area—and read the office directory in the lobby. In the same building with the Future of Humanity Institute were other similar institutions such as the Centre for Effective Altruism and the Oxford Uehiro Centre for Practical Ethics. Clearly this was the epicenter of Oxford’s idealists. But were they anchored in reality, or were they a group of well-intentioned though somewhat naive philosophers and poets? I asked myself.
But as soon as I entered the institute, it became clear to me that it wasn’t a den of dreamers. Along with professors of robotics and artificial intelligence, I found researchers with Ph.D.s in cybernetic neurosciences, computational biochemistry, parabolic geometry, and several other related disciplines I had never heard of before. The institute is specifically aimed at studying the long-term impact—looking forward a hundred years or more—that new technologies could have on society and the environment, something that neither governments nor large corporations are doing.
Bostrom, then in his midforties, is originally from Sweden. He studied philosophy, mathematics, logic, and artificial intelligence at the University of Gothenburg before earning master’s degrees in philosophy, physics, and computational neuroscience at Stockholm University and King’s College London and a Ph.D. in philosophy from the London School of Economics. But his primary interest over the past several years has been artificial intelligence.
Bostrom contends that, just as the United States and other nations engaged in a twentieth-century arms race that produced atomic weapons, countries and companies are now competing to create an artificial intelligence that exceeds that of humans. Just as past governments and scientific elites argued that if they didn’t build an atomic bomb, their enemies would, the same logic applies today to artificial intelligence. We are creating superintelligent machines that will at first follow the specific orders of a person or group of people, but that will eventually reach a point where they make decisions on their own—decisions that could affect the interests and even the safety of all humanity, he says. To avoid this potential catastrophe, Bostrom believes we must establish international safeguards and codes of conduct for artificial intelligence researchers and programmers.
WHAT HAPPENED TO HORSES COULD HAPPEN TO HOMO SAPIENS
When I asked him about the future of work, Bostrom told me he considers it very possible that people will become superfluous, just as horses did after the invention of cars. While Bostrom is much less pessimistic about the future of jobs than many of his colleagues, he does say that the horse analogy is perfect to illustrate what could happen to humans. Before the invention of the automobile, horses hauled carriages and plows, which helped to significantly increase productivity, he says. But then carriages were replaced by cars and plows by tractors, reducing the need for horses and leading to a collapse in the world horse population.
Horses were left without the jobs they needed to survive. Could the same thing happen to humans once robots start doing almost all of the world’s work? Bostrom asks. The number of horses in the United States plummeted from 26 million in 1915 to just 2 million in 1950. Today, a few horses are used by police officers patrolling parks, but the vast majority of them are used for sporting or leisure activities.
With robots doing more of today’s work and greatly increasing productivity, human labor will become less important, Bostrom predicts. We could be headed toward an incredibly wealthy world in which people will have no need to work and where most of those with jobs will be in the arts, humanities, athletics, meditation, or other activities designed to make life more enjoyable. So as Bostrom sees it, the automation of work by robots could ultimately lead either to a gloomy future, like that of horses, or to a blissful world where nobody will have to work at an undesirable job against his or her will.
A WORLD FULL OF UNEMPLOYED PEOPLE COULD BE WONDERFUL
I asked Bostrom if he wasn’t worried about the possibility of a world without work. “Not necessarily,” he responded. In fact, it could be something great, he added. It is quite possible that over time, as a greater percentage of the population stops working, people’s perception of unemployment will change, and having a job will no longer be seen as something positive or indispensable. “My main fear, actually, is not joblessness. In a way, I see unemployment as something that should be our goal, to get machines to do all the things that currently only humans can do so that we don’t have to work anymore.”
When I looked at him with a mixture of surprise and skepticism and suggested that work isn’t just a source of income but also a source of self-esteem, Bostrom disagreed. A world where the unemployed are viewed in a positive light wouldn’t be anything new, he said. In previous centuries, aristocrats considered work something dirty that only the common folk would engage in, he argued. Aristocrats devoted themselves to socializing, reading poetry, and listening to music, and yet they enjoyed the highest social status and felt that they were leading very meaningful lives, he said.
The idea that work is something that gives meaning to our lives is relatively new, and it might well be a fleeting idea, Bostrom said. A superproductive economy thanks to automation could potentially subsidize all human beings, and the concept of work itself would change forever. Seeing that I wasn’t entirely convinced, Bostrom went on to point out that even today, there are large sections of society that don’t have conventional jobs—students, for example—but who nevertheless lead purposeful lives and enjoy high levels of social approval. And there are entire nations, like Saudi Arabia, where a large percentage of the population doesn’t work and has a guaranteed income, and yet these countries are still well regarded.
“Historically, if you look at who were regarded as high-status individuals, the aristocrats were the people who didn’t have to work in order to live,” Bostrom said. “Working was a sign of lower-class status. The more desirable way to spend your time was playing music, going on a hunt, drinking with your friends, tending your garden, traveling, and doing things because you wanted to do them rather than being forced to work. The current era is atypical in the sense that now the highest-status people are CEOs, doctors, lawyers, politicians—people who work hard all day. But that, I think, hasn’t been the norm throughout history.”
COULD WE BE HAPPY WITHOUT WORK?
But, going back to the example of Saudi Arabia, are people in that country happy not to be working? I asked. “No,” he replied. “There seems to be a lot of evidence that there’s a fair bit of discontent because they want to work and don’t have the ability to change their society. But this is kind of a cultural thing, it’s about how the society is set up. So with the right culture, I think a jobless society could be great, but with the wrong culture, it could be hell.”
To reach the goal of a society of happily unemployed people, we would have to solve two basic problems: the technological challenge of making sure that intelligent machines do what we want them to do and the economic challenge of guaranteeing that all workers who lose their jobs to automation have an income, he said. “Fortunately, if machines really do gain great human capabilities, that would be a great boost to the economy. You would have massive economic growth because you would have the ability to automate everything. So the resources would be there. It would then be a political question as to how those resources would be distributed.”
Bostrom described it as being like a return to the days of the great mammoth hunts, but with much more comfort. According to some anthropologists, prehistoric humans were a relatively workless society and had a lot of free time. When a hunting party of forty men killed a mammoth, there would be enough food to feed the entire community for a couple of months. “They wouldn’t squabble over how big to slice everyone’s steak,” he said. “So if the artificial intelligence revolution happens in a similar sort of way, it would be like having a giant mammoth for all of humanity. Instead of squabbling over how exactly to carve it up, even if some people get slightly more, there could still be more than enough for everybody to get something in that type of scenario.”
SOME JOBS WILL DISAPPEAR, BUT MOST WILL BE TRANSFORMED
Bostrom isn’t completely convinced by his Oxford colleagues Frey and Osborne’s study conclusion that 47 percent of jobs are in danger of being replaced by automation. As he explained to me, “the methodology is really a good idea, but you shouldn’t trust any particular number too much…The 47 percent figure can be debated.” He added that it will be very difficult in the near future to automate jobs that require creativity, social skills, or common sense, because it takes a long time for artificial intelligence to become as effective as humans in those fields.
For example, one of the most automatable jobs mentioned in the Frey report is that of insurance underwriter. In theory, these people collect and process data: in other words, it’s routine work that could be done more simply and more quickly by an algorithm. “However,” Bostrom says, “when I talk to real-life underwriters, they need to have business relationships with other people. They need to negotiate a fair bit. A lot of it is thinking about what’s good for the company in the long run, doing quite complex evaluations. And then selling stuff to other people, convincing them, playing golf with them, which is quite an important skill when it comes to social relations.”
Bostrom went on to say that there are some jobs that might seem routine at first glance, but require a lot of common sense. It’s very hard to automate common sense. You need that human skill, for instance, to detect if the blueprint for a skyscraper mistakenly calls for it to be built out of wood because someone entered the wrong data into a computer. For now, he explained, artificial intelligence just can’t emulate human common sense.
WHAT WOULD HAPPEN IF ROBOTS GO CRAZY?
Without detracting from Bostrom’s concern that artificial superintelligence could end up having a mind of its own and destroying humanity, I have a much simpler fear: that the robots and algorithms we are already using may suddenly go crazy. In 2018, there were many stories in the media about people who got a big fright when their Alexa, Amazon’s virtual assistant, unexpectedly let out an eerie laugh. The company later admitted that the machine was mistakenly hearing the command “Alexa, laugh” when other words were spoken and said it was working to fix the problem. I’ve been even more concerned about robots going crazy ever since I had my own scary experience with the Alexa my son had given me as a birthday present.
My Alexa personal assistant lives in a small cylindrical speaker with a light at the top. The speaker turns on when she hears the word Alexa, and you can ask her whatever you want: to play a song, check the weather forecast, or read the latest headlines. You can also ask her to order a pizza or buy a book from Amazon’s website. According to the company, Alexa is already in more than twenty million American homes.
In our house, we mostly use her to check the weather and listen to the latest National Public Radio (NPR) news broadcasts. More than anything, she’s been a good conversation piece, especially when we have visitors from other countries where Alexa still isn’t available. But the particular incident I had with her makes me wonder what will happen as we increasingly allow our daily life to be assisted—if not directed—by virtual assistants in our homes, GPS navigators in our cars, medical diagnostic robots in our hospitals, and other intelligent machines everywhere.
I was working one morning in my home office when suddenly I heard a male voice coming from my living room. It never crossed my mind that it could be Alexa, who sits on a small end table next to a sofa, because nobody had asked her to turn on. I was alone, my wife was traveling, and we have no pets. There was nobody around who could have woken her up by calling her name.
Frightened, and frustrated that I couldn’t find any solid object to grab in case I had to face a robber, I slowly walked into the living room. My heart was pounding. But nobody was there: just Alexa, who had somehow turned herself on and was now broadcasting the latest news on NPR.
I called Amazon’s customer service department, and a representative told me that what had happened was quite unusual and could have been caused by a “technical glitch.” I called Amazon’s media department and was told that the NPR program could have been on pause and that Alexa might have “thought” she heard the word resume. Another possibility was that some radio or television elsewhere in the house had uttered the word Alexa followed by NPR, thus triggering the device. But none of these potential explanations satisfied me because Alexa had been off for days, if not weeks, and there were no other devices in the house that were turned on at the time.
I wrote a column in the Miami Herald about what had happened to me, and afterward I was flooded with comments from readers who had experienced similar situations with that and other smart machines. Many recounted the mishaps they suffered because their GPS system had led them to the wrong place. Who can’t identify with that? Another reader complained that the sensor in his car was constantly warning him of low pressure in one of the tires, and yet a quick stop at a gas station confirmed that it was properly inflated. These trips to the air pump quickly became as wearisome as they were worthless, so he stopped paying attention to the indicator light on his dashboard altogether, he said. Someone else offered his sympathy, saying that he felt similarly frustrated when his refrigerator warned him that the ice machine had broken when in fact it was running quite smoothly.
What will happen when, thanks to the Internet of Things, the refrigerator takes it upon itself to call a repair service based on a “technical glitch”? Or, in an even more dangerous scenario, what would happen if a robot made a mistake during open-heart surgery or a cancer diagnosis? Or if the automated machine guns on South Korea’s northern border failed to heed their human operators because of a “technical failure” and opened fire on North Korea? The techno-optimists will argue that the chances of these things happening are far smaller than the chances of human error. But regardless, it’s a subject that requires much greater attention. Before we worry about intelligent machines becoming so smart that they can rule the world, we should be worrying about a much more basic threat: that they may simply go crazy.
THERE IS HOPE FOR THE FUTURE, BUT THE TRANSITION WILL BE HARSH