Architects of Intelligence

by Martin Ford


  That’s a success for this approach: comprehensive ethical standards combined with technical strategies for keeping the technology safe, much of which is now baked into law. That doesn’t mean we can cross danger from biotechnology off our list of concerns; we keep coming up with more powerful technologies like CRISPR, and we have to keep reinventing the standards.

  We had our first Asilomar conference on AI ethics about 18 months ago, where we came up with a set of ethical standards. I think they need further development, but it’s an overall approach that can work. We have to give it a high priority.

  MARTIN FORD: The concern that’s really getting a lot of attention right now is what’s called the control problem or the alignment problem, where a superintelligence might not have goals that are aligned with what’s best for humanity. Do you take that seriously, and should work be done on that?

  RAY KURZWEIL: Humans don’t all have aligned goals with each other, and that’s really the key issue. It’s a misconception to talk about AI as a civilization apart, as if it’s an alien invasion from Mars. We create tools to extend our own reach. We couldn’t reach food at that higher branch 10,000 years ago, so we made a tool that extended our reach. We can’t build a skyscraper with our bare hands, so we have machines that leverage the range of our muscles. A kid in Africa with a smartphone is connected to all of the human knowledge with a few keystrokes.

  That is the role of technology; it enables us to go beyond our limitations, and that’s what we are doing and will continue to do with AI. It’s not us versus the AIs, which has been the theme of many dystopian AI movies. We are going to merge with it. We already have. The fact that your phone is not physically inside your body and brain is a distinction without a difference, because it may as well be. We don’t leave home without it; we’re incomplete without it. Nobody could do their work, get their education, or keep their relationships without their devices today, and we’re getting more intimate with them.

  I went to MIT because it was so advanced in 1965 that it had a computer. I had to take my bicycle across the campus to get to it and show my ID to get into the building. Now, half a century later, we carry computers in our pockets and use them constantly. They are integrated into our lives and will ultimately become integrated into our bodies and brains.

  If you look at the conflict and warfare we’ve had over the millennia, it has come from humans having disagreements. I do think technology tends to create greater harmony, peace, and democratization. You can trace the rise of democratization to improvements in communication. Two centuries ago, there was only one democracy in the world. There were half a dozen democracies one century ago. Now there are 123 democracies out of 192 recognized countries; that’s 64% of the world. The world’s not a perfect democracy, but democracy has been accepted as the standard today. This is the most peaceful time in human history, and every aspect of life is getting better, due to the effect of technology, which is becoming increasingly intelligent and deeply integrated into who we are.

  We have conflict today between different groups of humans, each of which is amplified by its technology. That will continue to be the case, although I think there’s this other theme: better communication technology harnesses our short-range empathy. We have a biological empathy for small groups of people, but that’s now amplified by our ability to actually experience what happens to people half a world away. I think that’s the key issue; we still have to manage our human relations as we increase our personal powers through technology.

  MARTIN FORD: Let’s talk about the potential for economic and job market disruption. I personally do think there’s a lot of potential for jobs to be lost or deskilled and for greatly increasing inequality. I actually think it could be something that will be disruptive on the scale of a new Industrial Revolution.

  RAY KURZWEIL: Let me ask you this: how did that last Industrial Revolution work out? Two hundred years ago, the weavers had enjoyed a guild that was passed down from generation to generation for hundreds of years. Their business model was turned on its head when all these thread-spinning and cloth-weaving machines came out and completely upended their livelihoods. They predicted that more machines would come, that most people would lose their jobs, and that employment would be enjoyed only by an elite. Part of that prediction came true: more textile machines were introduced, and many types of skills and jobs were eliminated. However, employment went up, not down, as society became more prosperous.

  If I were a prescient futurist in 1900, I would point out that 38% of you work on farms and 25% of you work in factories, but I predict that 115 years from now, in 2015, that’ll be 2% on farms and 9% in factories. Everybody’s reaction would be, “Oh my god, I’m going to be out of work!” I would then say, “Don’t worry, the jobs that are eliminated are at the bottom of the skill ladder, and we are going to create an even larger number of jobs at the top of the skill ladder.”

  People would say, “Oh really, what new jobs?”, and I’d say, “Well, I don’t know, we haven’t invented them yet.” People say we’ve destroyed many more jobs than we’ve created, but that’s not true: we’ve gone from 24 million jobs in 1900 to 142 million jobs today, and as a percentage of the population that’s a rise from 31% to 44%. How do these new jobs compare? Well, for one thing, the average job today pays 11 times as much per hour in constant dollars as in 1900. As a result, we’ve shortened the work year from about 3,000 hours to 1,800 hours, people still make 6 times as much per year in constant dollars, and the jobs have become much more interesting. I think that’s going to continue to be the case even in the next Industrial Revolution.

  MARTIN FORD: The real question is whether this time it’s different. What you say about what happened previously is certainly true, but it is also true, according to most estimates, that maybe half or more of the people in the workforce are doing things that are fundamentally predictable and relatively routine, and all those jobs are going to be potentially threatened by machine learning. Automating most of those predictable jobs does not require human-level AI.

  There may be new kinds of work created for robotics engineers and deep learning researchers and all of that, but you cannot take all the people that are now flipping hamburgers or driving taxis and realistically expect to transition them into those kinds of jobs, even assuming that there are going to be a sufficient number of these new jobs. We’re talking about a technology that can displace people cognitively, displace their brainpower, and it’s going to be extraordinarily broad-based.

  RAY KURZWEIL: The model that’s implicit in your prediction is us versus them: what are the humans going to do versus the machines? But we’ve already made ourselves smarter in order to do these higher-level types of jobs. We’ve made ourselves smarter not with things connected directly into our brains yet, but with intelligent devices. Nobody can do their jobs without these brain extenders, and the brain extenders are going to extend our brains even further and become more closely integrated into our lives.

  One thing that we did to improve our skills is education. We had 68,000 college students in 1870, and today we have 15 million. If you take them and all the people who serve them, such as faculty and staff, about 20 percent of the workforce is involved just in higher education, and we are constantly creating new things to do. The whole app economy did not exist six years ago, and it forms a major part of the economy today. We’re going to make ourselves smarter.

  A whole other thesis that needs to be looked at in considering this question is the radical abundance thesis that I mentioned earlier. I had an on-stage dialogue with Christine Lagarde, the managing director of the IMF, at the annual International Monetary Fund meeting and she said, “Where’s the economic growth associated with this? The digital world has these fantastic things, but fundamentally you can’t eat information technology, you can’t wear it, you can’t live in it,” and my response was, “All that’s going to change.”

  “All those types of nominally physical products are going to become an information technology. We’re going to grow food with vertical agriculture in AI-controlled buildings, with hydroponic fruits and vegetables and in vitro cloning of muscle tissue for meat, providing very high-quality food without chemicals at very low cost, and without animal suffering. Information technology has a 50% deflation rate; you get the same computation, communication, or genetic sequencing that you could purchase a year ago for half the price, and this massive deflation is going to extend to these traditionally physical products.”

  MARTIN FORD: So, you think that technologies like 3D printing or robotic factories and agriculture could drive costs down for nearly everything?

  RAY KURZWEIL: Exactly. 3D printing will print out clothing in the 2020s. We’re not quite there yet for various reasons, but it’s all moving in the right direction. The other physical things that we need will be printed out on 3D printers, including modules that will snap together into a building in a matter of days. All the physical things we need will ultimately be facilitated by these AI-controlled information technologies.

  Solar energy is being facilitated by applying deep learning to come up with better materials, and as a result, the cost of both energy storage and energy collection is coming down rapidly. The total amount of solar energy is doubling every two years, and the same trend exists with wind energy. Renewable energy is now only about five doublings, at two years per doubling, away from meeting 100% of our energy needs, and at that point it will still be using only about one part in thousands of the energy available from the sun and the wind.

  Christine Lagarde said, “OK, there is one resource that will never be an information technology, and that’s land. We are already crowded together.” I responded, “That’s only because we decided to crowd ourselves together and create cities so we could work and play together.” People are already spreading out as our virtual communication becomes more robust. Take a train trip anywhere in the world and you will see that 95% of the land is unused.

  We’re going to be able to provide a very high quality of life, beyond what we consider a high standard of living today, for all of the human population as we get to the 2030s. I made a prediction at TED that we will have universal basic income, which won’t actually need to be that much to provide a very high standard of living, as we get into the 2030s.

  MARTIN FORD: So, you’re a proponent of a basic income, eventually? You agree that there won’t be a job for everyone, or maybe everyone won’t need a job, and that there’ll be some other source of income for people, like a universal basic income?

  RAY KURZWEIL: We assume that a job is a road to happiness. I think the key issue will be purpose and meaning. People will still compete to be able to contribute and get gratification.

  MARTIN FORD: But you don’t necessarily have to get paid for the thing that you get meaning from?

  RAY KURZWEIL: I think we will change the economic model and we are already in the process of doing that. I mean, being a student in college is considered a worthwhile thing to do. It’s not a job, but it’s considered a worthwhile activity. You won’t need income from a job in order to have a very good standard of living for the physical requirements of life, and we will continue to move up Maslow’s hierarchy. We have been doing that, just compare today to 1900.

  MARTIN FORD: What do you think about the perceived competition with China to get to advanced AI? China does have advantages in terms of having less regulation on things like privacy. Plus, their population is so much larger, which generates more data and also means they potentially have a lot more young Turings or von Neumanns in the pipeline.

  RAY KURZWEIL: I don’t think it’s a zero-sum game. An engineer in China who comes up with a breakthrough in solar energy or in deep learning benefits all of us. China is publishing a lot, just as the United States is, and the information is actually shared pretty widely. Look at Google, which released its TensorFlow deep learning framework as open source, and we did the same in our group, making the technology underlying Talk to Books and Smart Reply open source so people can use it.

  I personally welcome the fact that China is emphasizing economic development and entrepreneurship. When I was in China recently, the tremendous explosion of entrepreneurship was apparent. I would encourage China to move in the direction of free exchange of information; I think that’s fundamental for this type of progress. All around the world we see Silicon Valley as a motivating model. Silicon Valley really is just a metaphor for entrepreneurship, the celebration of experimentation, and calling failure experience. I think that’s a good thing, and I really don’t see it as an international competition.

  MARTIN FORD: But do you worry about the fact that China is an authoritarian state, and that these technologies do have, for example, military applications? Companies like Google and certainly DeepMind in London have been very clear that they don’t want their technology used in anything that is even remotely military. Companies like Tencent and Baidu in China don’t really have the option to make that choice. Is that something we should worry about, that there’s a kind of asymmetry going forward?

  RAY KURZWEIL: Military use is a different issue from authoritarian government structure. I am concerned about the authoritarian orientation of the Chinese government, and I would encourage them to move toward greater freedom of information and democratic ways of governing. I think that will help them and everyone economically.

  I think these political and social and philosophical issues remain very important. My concern is not that AI is going to go off and do something on its own, because I think it’s deeply integrated with us. I’m concerned about the future of the human population, which is already a human technological civilization. We’re going to continue to enhance ourselves through technology, and so the best way to assure the safety of AI is to attend to how we govern ourselves as humans.

  RAY KURZWEIL is widely recognized as one of the world’s foremost inventors and futurists. Ray received his engineering degree from MIT, where he was mentored by Marvin Minsky, one of the founding fathers of the field of artificial intelligence. He went on to make major contributions in a variety of areas. He was the principal inventor of the first CCD flat-bed scanner, the first omni-font optical character recognition, the first print-to-speech reading machine for the blind, the first text-to-speech synthesizer, the first music synthesizer capable of recreating the grand piano and other orchestral instruments, and the first commercially marketed large-vocabulary speech recognition.

  Among Ray’s many honors, he received a Grammy Award for outstanding achievements in music technology; he is the recipient of the National Medal of Technology (the nation’s highest honor in technology), was inducted into the National Inventors Hall of Fame, holds twenty-one honorary doctorates, and has received honors from three US presidents.

  Ray has written five national best-selling books, including New York Times bestsellers The Singularity Is Near (2005) and How to Create a Mind (2012). He is Co-Founder and Chancellor of Singularity University and a Director of Engineering at Google, heading up a team developing machine intelligence and natural language understanding.

  Ray is known for his work on exponential progress in technology, which he has formalized as “The Law of Accelerating Returns.” Over the course of decades, he has made a number of important predictions that have proven to be accurate.

  Ray’s first novel, Danielle, Chronicles of a Superheroine, is being published in early 2019. Another book by Ray, The Singularity is Nearer, is expected to be published in late 2019.

  Chapter 12. DANIELA RUS

  I like to think of a world where more mundane routine tasks are taken off your plate. Maybe garbage cans that take themselves out and smart infrastructure to ensure that they disappear, or robots that will fold your laundry.

  DIRECTOR OF MIT CSAIL

  Daniela Rus is the Director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT, one of the world’s largest research organizations focused on AI and robotics. Daniela is a fellow of ACM, AAAI, and IEEE, and a member of the National Academy of Engineering and the American Academy of Arts and Sciences. Daniela leads research in robotics, mobile computing, and data science.

  MARTIN FORD: Let’s start by talking about your background and looking at how you became interested in AI and robotics.

  DANIELA RUS: I’ve always been interested in science and science fiction, and when I was a kid I read all the popular science fiction books at the time. I grew up in Romania where we didn’t have the range of media that you had in the US, but there was one show that I really enjoyed, and that’s the original Lost in Space.

  MARTIN FORD: I remember that. You’re not the first person I’ve spoken to who has drawn their career inspiration from science fiction.

  DANIELA RUS: I never missed an episode of Lost in Space, and I loved the cool geeky kid Will and the robot. I didn’t imagine at the time that I would do anything remotely associated with that. I was lucky enough to be quite good at math and science, and by the time I got to college age I knew that I wanted to do something with math, but not pure math because it seemed too abstract. I ended up majoring in computer science and mathematics, with a minor in astronomy, the astronomy continuing the connection to my fantasies of what could be in other worlds.

  Toward the end of my undergraduate degree I went to a talk given by John Hopcroft, the Turing Award-winning theoretical computer scientist, and in that talk John said that classical computer science was finished. What he meant was that many of the graph-theoretic problems posed by the founders of the field of computing had solutions, and it was time for the grand applications, which in his opinion were robots.

  I found that an exciting idea, so I worked on my PhD with John Hopcroft because I wanted to make contributions to the field of robotics. However, at that time the field of robotics was not at all developed. For example, the only robot that was available to us was a big PUMA arm (Programmable Universal Manipulation Arm), an industrial manipulator that had little in common with my childhood fantasies of what robots should be. It got me thinking a lot about what I could contribute, and I ended up studying dexterous manipulation, but very much from a theoretical, computational point of view. I remember finishing my thesis and trying to implement my algorithms to go beyond simulation and create real systems. Unfortunately, the systems that were available at the time were the Utah/MIT hand and the Salisbury hand, and neither one of those hands was able to exert the kind of forces and torques that my algorithms required.

 
