Architects of Intelligence


by Martin Ford


  MARTIN FORD: What about the risks and the downsides associated with AGI? Elon Musk has talked about “summoning the demon” and an existential threat. There’s also Nick Bostrom, who I know is on DeepMind’s advisory board and has written a lot on this idea. What do you think about these fears? Should we be worried?

  DEMIS HASSABIS: I’ve talked to them a lot about these things. As always, the soundbites seem extreme but it’s a lot more nuanced when you talk to any of these people in person.

  My view on it is that I’m in the middle. The reason I work on AI is that I think it’s going to be the most beneficial thing to humanity ever. I think it’s going to unlock our potential within science and medicine in all sorts of ways. As with any powerful technology, the technology itself is neutral, and AI could be especially powerful because it’s so general. It depends on how we as humans decide to design and deploy it, what we decide to use it for, and how we decide to distribute the gains.

  There are a lot of complications there, but those are more like geopolitical issues that we need to solve as a society. A lot of what Nick Bostrom worries about concerns the technical questions we have to get right, such as the control problem and the value alignment problem. My view is that on those issues we do need a lot more research, because we’ve only just got to the point now where there are systems that can even do anything interesting at all.

  We’re still at a very nascent stage. Five years ago, you might as well have been talking about philosophy because no one had anything that was interesting. We’ve now got AlphaGo and a few other interesting technologies that are still very nascent, but we’re now at the point where we should start reverse-engineering those things and experimenting on them by building visualization and analysis tools. We’ve got teams doing this to better understand what these black-box systems are doing and how we interpret their behavior.

  MARTIN FORD: Are you confident that we’ll be able to manage the risks that come along with advanced AI?

  DEMIS HASSABIS: Yes, I’m very confident, and the reason is that we’re at the inflection point where we’ve just got these things working, and not that much effort has yet gone into reverse engineering them and understanding them, and that’s happening now. Over the next decade, most of these systems won’t be black-box in the sense that we mean now. We’ll have a good handle on what’s going on with these systems, and that will lead to a better understanding of how to control the systems and what their limits are mathematically, and then that could lead into best practices and protocols.

  I’m pretty confident that path will address a lot of the technical issues that people like Nick Bostrom are worried about, like the collateral consequences of goals not being set correctly. To make advances in that, my view has always been that the best science occurs when theory and practice—empirical work—go hand in hand, and in this subject and field, the empirical experiments are engineering.

  A lot of the fears held by some of the people not working at the coalface of this technology won’t hold up once we actually have a much better understanding of these systems. That’s not to say that I think that there’s nothing to worry about, because I think we should worry about these things. There are plenty of near-term questions to resolve as well—like how do we test these systems as we deploy them in products? Some of the long-term problems are so hard that we want to be thinking about them in the time we have right now, well ahead of when we’re going to need the answers.

  We also need to inform the research that has to be done to come up with solutions to some of the questions posed by people like Nick Bostrom. We are actively thinking about these problems and we’re taking them seriously, but I’m a big believer in human ingenuity’s ability to overcome those problems if we put enough brainpower on them collectively around the world.

  MARTIN FORD: What about the risks that will arise long before AGI is achieved? For example, autonomous weapons. I know you’ve been very outspoken about AI being used in military applications.

  DEMIS HASSABIS: These are very important questions. At DeepMind, we start from the premise that AI applications should remain under meaningful human control, and be used for socially beneficial purposes. This means banning the development and deployment of fully autonomous weapons, since it requires a meaningful level of human judgment and control to ensure that weapons are used in ways that are necessary and proportionate. We’ve expressed this view in a number of ways, including signing an open letter and supporting the Future of Life Institute’s pledge on the subject.

  MARTIN FORD: It’s worth pointing out that even though chemical weapons are in fact banned, they have still been used. All of this requires global coordination, and it seems that rivalries between countries could push things in the other direction. For example, there is a perceived AI race with China. They do have a much more authoritarian system of government. Should we worry that they will gain an advantage in AI?

  DEMIS HASSABIS: I don’t think it’s a race in that sense, because we know all the researchers and there’s a lot of collaboration. We publish papers openly, and I know, for example, that Tencent has created an AlphaGo clone, and I know many of the researchers there. I do think that if there’s going to be coordination and perhaps even regulation and best practices down the road, it’s important that it’s international and the whole world adopts it. It doesn’t work if some countries don’t adopt those principles. However, that’s not an issue that’s unique to AI. There are many other problems that we’re already grappling with that are a question of global coordination and organization—the obvious one being climate change.

  MARTIN FORD: What about the economic impact of all of this? Is there going to be a big disruption of the job market and perhaps rising unemployment and inequality?

  DEMIS HASSABIS: I think there’s been very minimal disruption so far from AI; it’s just been part of the technology disruption in general. AI is going to be hugely transformative, though. Some people believe that it’s going to be on the scale of the Industrial Revolution or electricity, while other people believe it’s going to be in a class of its own above that, and I think that remains to be seen. Maybe it will mean we’re in a world of abundance, where there are huge productivity gains everywhere? Nobody knows for sure. The key thing is to make sure those benefits are shared with everyone.

  I think that’s the key thing, whether it’s through universal basic income or done in some other form. There are lots of economists debating these things, and we need to think very carefully about how everyone in society will benefit from those presumably huge productivity gains, which must be coming; otherwise it wouldn’t be so disruptive.

  MARTIN FORD: Yes, that’s basically the argument that I’ve been making, that it’s fundamentally a distributional problem and that a large part of our population is in danger of being left behind. But it is a staggering political challenge to come up with a new paradigm that will create an economy that works for everyone.

  DEMIS HASSABIS: Right.

  Whenever I meet an economist, I think they should be working quite hard on this problem, but it’s difficult for them because they can’t really envisage how it could be so productive, given that people have been talking about massive productivity gains for 100 years.

  My dad studied economics at university, and he was saying that in the late 1960s a lot of people were seriously talking about that: “What is everyone going to do in the 1980s when we have so much abundance, and we don’t have to work?” That, of course, never happened in the 1980s or since then, and we’re working harder than ever. I think a lot of people are not sure if it’s ever going to be like that, but if it does end up that we have a lot of extra resources and productivity, then we’ve got to distribute it widely and equitably, and I think if we do that, then I don’t see a problem with it.

  MARTIN FORD: Is it safe to say that you’re an optimist? I’d guess that you see AI as transformative and that it’s arguably going to be one of the best things that’s ever happened to humanity. Assuming, of course, that we manage it wisely?

  DEMIS HASSABIS: Definitely, and that’s why I’ve worked towards it my whole life. All of the things I’ve been doing that we covered in the first part of our discussion have been building towards achieving that. I would be quite pessimistic about the way the world’s going if AI were not going to come along. I actually think there are a lot of problems in the world that require better solutions, like climate change, Alzheimer’s research, or water purification. I can give you a list of things that are going to get worse over time. What is a worry is that I don’t see how we’re going to get the global coordination and the excess resources or activity to solve them. But ultimately, I’m actually optimistic about the world because a transformative technology like AI is coming.

  DEMIS HASSABIS is a former child chess prodigy who finished his high school exams two years early before coding the multi-million-selling simulation game Theme Park at age 17. Following graduation from Cambridge University with a Double First in Computer Science, he founded the pioneering videogames company Elixir Studios, producing award-winning games for global publishers such as Vivendi Universal. After a decade of experience leading successful technology startups, Demis returned to academia to complete a PhD in cognitive neuroscience at University College London, followed by postdoctoral research at MIT and Harvard. His research into the neural mechanisms underlying imagination and planning was listed in the top ten scientific breakthroughs of 2007 by the journal Science.

  Demis is a five-time World Games Champion, and a Fellow of the Royal Society of Arts and the Royal Academy of Engineering, winning the Academy’s Silver Medal. In 2017 he was named in the Time 100 list of the world’s most influential people, and in 2018 was awarded a CBE for services to science and technology. He was elected as a Fellow of the Royal Society, has been a recipient of the Society’s Mullard Award, and was also awarded an Honorary Doctorate by Imperial College London.

  Demis co-founded DeepMind along with Shane Legg and Mustafa Suleyman in 2010. DeepMind was acquired by Google in 2014 and is now part of Alphabet. In 2016 DeepMind’s AlphaGo system defeated Lee Sedol, arguably the world’s best player of the ancient game of Go. That match is chronicled in the documentary film AlphaGo (https://www.alphagomovie.com/).

  Chapter 9. ANDREW NG

  The rise of supervised learning has created a lot of opportunities in probably every major industry. Supervised learning is incredibly valuable and will transform multiple industries, but I think there is a lot of room for something even better to be invented.

  CEO, LANDING AI & GENERAL PARTNER, AI FUND
  ADJUNCT PROFESSOR, COMPUTER SCIENCE, STANFORD

  Andrew Ng is widely recognized for his contributions to artificial intelligence and deep learning, as both an academic researcher and an entrepreneur. He co-founded both the Google Brain project and the online education company, Coursera. He then became the chief scientist at Baidu, where he built an industry-leading AI research group. Andrew played a major role in the transformation of both Google and Baidu into AI-driven organizations. In 2018 he established AI Fund, a venture capital firm focused on building startup companies in the AI space from scratch.

  MARTIN FORD: Let’s start by talking about the future of AI. There’s been remarkable success, but also enormous hype, associated with deep learning. Do you feel that deep learning is the way forward—the primary idea that will continue to underlie progress in AI? Or is it possible that an entirely new approach will replace it in the long run?

  ANDREW NG: I really hope there’s something else out there better than deep learning. All of the economic value driven by this recent rise of AI is down to supervised learning—basically learning input and output mappings. For example, with self-driving cars the input is a video picture of what’s in front of your car, and the output is the actual position of the other cars. There are other examples: speech recognition has an input of an audio clip and an output of a text transcript; machine translation has an input of English text and an output of Chinese text, say.

  Deep learning is incredibly effective for learning these input/output mappings and this is called supervised learning, but I think that artificial intelligence is much bigger than supervised learning.
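  For readers who want to see what “learning an input/output mapping” looks like concretely, here is a minimal, purely illustrative sketch (not from the interview). It trains a simple classifier on synthetic labeled data with scikit-learn; the data, the logistic-regression model, and the toy task are all assumptions made only for demonstration.

```python
# Minimal sketch of supervised learning as an input -> output mapping.
# The synthetic data and logistic-regression model are illustrative
# stand-ins; real tasks (speech, translation) use far richer inputs.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Labeled examples: inputs X (2-D feature vectors) paired with outputs y.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

model = LogisticRegression()
model.fit(X, y)                      # learn the mapping from labeled pairs

x_new = np.array([[0.5, 1.0]])
print(model.predict(x_new))          # apply the learned mapping to a new input
```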

  The rise of supervised learning has created a lot of opportunities in probably every major industry. Supervised learning is incredibly valuable and will transform multiple industries, but I think that there is a lot of room for something even better to be invented. It’s hard to say right now exactly what that would be, though.

  MARTIN FORD: What about the path to artificial general intelligence? What would you say are the primary breakthroughs that have to occur for us to get to AGI?

  ANDREW NG: I think the path is very unclear. One of the things we will probably need is unsupervised learning. For example, today in order to teach a computer what a coffee mug is we show it thousands of coffee mugs, but no child’s parents, no matter how patient and loving, ever pointed out thousands of coffee mugs to that child. The way that children learn is by wandering around the world and soaking in images and audio. The experience of being a child allows them to learn what a coffee mug is. The ability to learn from unlabeled data, without parents or labelers pointing out thousands of coffee mugs, will be crucial to making our systems more intelligent.
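  As a contrast to the supervised sketch above, the toy example below (again illustrative, not from the interview) hints at what learning from unlabeled data means: k-means clustering is handed only inputs, with no labels, and still discovers structure. It is, of course, a far simpler technique than the kind of unsupervised learning Ng has in mind.

```python
# Minimal sketch of learning structure from unlabeled data.
# K-means is a deliberately simple stand-in: it never sees a label,
# yet it recovers the two groups hidden in the synthetic data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Unlabeled inputs only: two blobs of 2-D points, no labels attached.
X = np.vstack([
    rng.normal(loc=-2.0, scale=0.5, size=(100, 2)),
    rng.normal(loc=2.0, scale=0.5, size=(100, 2)),
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_)       # two centers, found without any labels
```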

  I think one of the problems in AI is that we’ve made a lot of progress in building specialized intelligence or narrow intelligence, and very little progress towards AGI. The problem is, both of these things are called AI. AI turns out to be incredibly valuable for online advertising, speech recognition and self-driving cars, but it’s specialized intelligence, not general. Much of what the public sees is progress in building specialized intelligence and they think that we are therefore making rapid progress toward artificial general intelligence. It’s just not true.

  I would love to get to AGI, but the path is very unclear. I think that individuals that are less knowledgeable about AI have used very simplistic extrapolations, and that has led to unnecessary amounts of hype about AI.

  MARTIN FORD: Do you expect AGI to be achieved in your lifetime?

  ANDREW NG: The honest answer is that I really don’t know. I would love to see AGI in my lifetime, but I think there’s a good chance it’ll be further out than that.

  MARTIN FORD: How did you become interested in AI? And how did that lead to such a varied career trajectory?

  ANDREW NG: My first encounter with neural networks was when I was in high school, where I did an office assistant internship. There may not seem like an obvious link between an internship and neural networks, but during the course of my internship I thought about how we could automate some of the work that I was doing, and that was the earliest time I was thinking about neural networks. I wound up doing my bachelor’s at Carnegie Mellon, my master’s at MIT, and a PhD at the University of California, Berkeley, with a thesis titled Shaping and Policy Search in Reinforcement Learning.

  For about the next twelve years I taught at the Stanford University Department of Computer Science and the Department of Electrical Engineering as a professor. Then between 2011 and 2012, I was a founding member of the Google Brain team, which helped transform Google into the AI company that we now perceive it to be.

  MARTIN FORD: And Google Brain was the first attempt to really use deep learning at Google, correct?

  ANDREW NG: To an extent. There had been some small-scale projects based around neural networks, but the Google Brain team really was the force that took deep learning into many parts of Google. The first thing I did when I was leading the Brain team was to teach a class within Google for around 100 engineers. This helped teach a lot of Google engineers about deep learning, and it created a lot of allies and partners for the Google Brain team and opened up deep learning to a lot more people.

  The first two projects we did were partnering with the speech team, which I think helped transform speech recognition at Google, and working on unsupervised learning, which led to the somewhat infamous Google cat. This is where we set an unsupervised neural network free on YouTube data and it learned to recognize cats. Unsupervised learning isn’t what actually creates the most value today, but that was a nice technology demonstration of the type of scale we could achieve using Google’s compute cluster at the time. We were able to do very large-scale deep learning algorithms.

  MARTIN FORD: You stayed at Google until 2012. What came next for you?

  ANDREW NG: Towards the end of my time at Google, I felt that deep learning should move toward GPUs. As a result, I wound up doing that work at Stanford University rather than at Google. In fact, I remember a conversation that I had with Geoff Hinton at NIPS, the annual conference on Neural Information Processing Systems, where I was trying to use GPUs, and I think that later influenced his work with Alex Krizhevsky and influenced quite a lot of people to then adopt GPUs for deep learning.

  I was lucky to be teaching at Stanford at the time because being here in Silicon Valley, we saw the signals that GPGPU (general-purpose GPU) computing was coming. We were in the right place at the right time and we had friends at Stanford working on GPGPUs, so we saw the ability of GPUs to help scale up deep learning algorithms earlier than almost everyone else.

  My former student at Stanford, Adam Coates, was actually the reason I decided to pitch the Google Brain team to Larry Page in a bid to get Larry to approve me using a lot of their computers to build a very large neural network. It was really one figure, where the x-axis was the amount of data and the y-axis was the performance of the algorithm. Adam generated this figure showing that the more data we could train these deep learning algorithms on, the better they’d perform.

 
