Architects of Intelligence

by Martin Ford


  A year later, we have six universities targeting different areas where AI has really struggled to get people involved. In addition to Stanford and Simon Fraser University, we’ve also got Berkeley targeting AI for low-income students, Princeton focusing on AI for racial minorities, Christopher Newport University doing AI for off-the-rails students, and Boston University doing AI for girls. These have only been running for a short time, but we’re hoping to grow the program rapidly and to continue inviting future leaders of AI from much more diverse backgrounds.

  MARTIN FORD: I wanted to ask if you think there’s a place for the regulation of artificial intelligence. Is that something you’d like to see? Would you advocate for the government taking more of an interest, in terms of making rules, or do you think that the AI community can solve these problems internally?

  FEI-FEI LI: I actually don’t think AI, if you mean the AI technologists, can solve all the AI problems by themselves: our world is interconnected, human lives are intertwined, and we all depend on each other.

  No matter how much AI I make happen, I still drive on the same highway, breathe the same air, and send my kids to community schools. I think we need to take a very humanistic view of this and recognize that for any technology to have this profound an impact, we need to invite all sectors of life and society to participate.

  I also think the government has a huge role to play, which is to invest in the basic science, research, and education of AI. If we want transparent technology, if we want fair technology, and if we want more people who can understand and shape this technology in positive ways, then the government needs to invest in our universities, research institutes, and schools to educate people about AI and support basic science research. I’m not trained as a policymaker, but I talk to policymakers and I talk to my friends, and whether it’s about privacy, fairness, dissemination, or collaboration, I see a role the government can play.

  MARTIN FORD: The final thing I want to ask you about is this perceived AI arms race, especially with China. How seriously do you take that, and is it something we should worry about?

  China does have a different, more authoritarian system, and a much bigger population, which means more data to train algorithms on and fewer restrictions regarding privacy and so forth. Are we at risk of falling behind in AI leadership?

  FEI-FEI LI: Right now, we’re living in a major hype cycle around AI, much like the one around modern physics a century ago, when it was transforming technology, whether nuclear technology or electrical technology.

  One hundred years later, will we ask ourselves which person owned modern physics? Will we try to name the company or country that owned modern physics and everything after the industrial revolution? I think it would be difficult for any of us to answer those questions. My point, as a scientist and as an educator, is that the human quest for knowledge and truth has no borders. If there is a fundamental principle of science, it is that there are universal truths, and that the quest for those truths is something we undertake together as a species. And AI, in my opinion, is a science.

  From that point of view, as a basic scientist and as an educator, I work with people from all backgrounds. My Stanford lab literally consists of students from every continent. With the technology we create, whether it’s automation or healthcare, we hope to benefit everyone.

  Of course, there is going to be competition between companies and between regions, and I hope that’s healthy. Healthy competition means that we respect each other, we respect the market, we respect the users and consumers, and we respect the laws, including cross-border and international laws. As a scientist, that’s what I advocate for, and I continue to publish in the open domain, to educate students of all colors and nations, and to collaborate with people of all backgrounds.

  More information about AI4ALL can be found at http://ai-4-all.org/.

  FEI-FEI LI is Chief Scientist, AI and Machine Learning at Google Cloud, Professor of Computer Science at Stanford University, and Director of both the Stanford Artificial Intelligence Lab and the Stanford Vision Lab. Fei-Fei received her undergraduate degree in physics from Princeton University and her PhD in electrical engineering from the California Institute of Technology. Her work has focused on computer vision and cognitive neuroscience, and she is widely published in top academic journals. She is the co-founder of AI4ALL, an organization focused on attracting women and people from underrepresented groups into the field of AI, which began at Stanford and has now scaled up to universities across the United States.

  Chapter 8. DEMIS HASSABIS

  Games are just our training domain. We’re not doing all this work just to solve games; we want to build these general algorithms that we can apply to real-world problems.

  CO-FOUNDER & CEO OF DEEPMIND, AI RESEARCHER AND NEUROSCIENTIST

  Demis Hassabis is a former child chess prodigy who started coding and designing video games professionally at age 16. After graduating from Cambridge University, Demis spent a decade founding and leading successful startups focused on video games and simulation. He returned to academia to complete a PhD in cognitive neuroscience at University College London, followed by postdoctoral research at MIT and Harvard. He co-founded DeepMind in 2010. DeepMind was acquired by Google in 2014 and is now part of Alphabet’s portfolio of companies.

  MARTIN FORD: I know you had a very strong interest in chess and video games when you were younger. How has that influenced your career in AI research and your decision to found DeepMind?

  DEMIS HASSABIS: I was a professional chess player in my childhood with aspirations of becoming the world chess champion. I was an introspective kid and I wanted to improve my game, so I used to think a lot about how my brain was coming up with these ideas for moves. What are the processes that are going on there when you make a great move or a blunder? So, very early on I started to think a lot about thinking, and that led me to my interest in things like neuroscience later on in my life.

  Chess, of course, has a deeper role in AI. The game itself has been one of the main problem areas for AI research since the dawn of AI. Some of the early pioneers in AI like Alan Turing and Claude Shannon were very interested in computer chess. When I was 8 years old, I purchased my first computer using the winnings from the chess tournaments that I entered. One of the first programs that I remember writing was for a game called Othello—also known as Reversi—and while it’s a simpler game than chess, I used the same ideas that those early AI pioneers had been using in their chess programs, like alpha-beta search, and so on. That was my first exposure to writing an AI program.
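
  For readers unfamiliar with the technique, alpha-beta search speeds up minimax game-tree search by skipping branches that provably cannot change the final decision. Below is a minimal, generic Python sketch of the idea, not the Othello program described above; the moves, apply_move, and evaluate helpers are hypothetical stand-ins for game-specific logic.

```python
# Generic alpha-beta minimax sketch (illustrative only).
# moves(), apply_move(), and evaluate() are hypothetical
# game-specific helpers supplied by the caller.
def alphabeta(state, depth, alpha, beta, maximizing,
              moves, apply_move, evaluate):
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)  # heuristic value of a leaf position
    if maximizing:
        value = float("-inf")
        for m in legal:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False,
                                         moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent would avoid this line: prune
                break
        return value
    value = float("inf")
    for m in legal:
        value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                     alpha, beta, True,
                                     moves, apply_move, evaluate))
        beta = min(beta, value)
        if alpha >= beta:  # symmetric prune for the minimizing player
            break
    return value
```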

  My love of chess and games got me into programming, and specifically into writing AI for games. The next stage for me was to combine my love of games and programming into writing commercial video games. One key theme that you’ll see in a lot of my games, from Theme Park (1994) to Republic: The Revolution (2003), was that they had simulation at the heart of their gameplay. The games presented players with sandboxes full of characters that reacted to the way you played. It was AI underpinning those characters, and that was always the part that I worked on specifically.

  The other thing that I was doing with games was training my mind on certain capabilities. For example, with chess, I think it’s a great thing for kids to learn at school because it teaches problem-solving, planning, and all sorts of other meta-skills that I think are then useful and translatable to other domains. Looking back, perhaps all of that information was in my subconscious when I started DeepMind and started using games as a training environment for our AI systems.

  The final step for me, before starting DeepMind, was taking my undergraduate computer science course at Cambridge University. At the time, which was the early 2000s, I felt that as a field we didn’t have quite enough ideas to attempt to climb the Everest of AGI. This led me to my PhD in neuroscience, because I felt we needed a better understanding of how the brain solved some of these complex capabilities, so that we could be inspired by that to come up with new algorithmic ideas. I learned a lot about memory and imagination, topics that we didn’t at the time, and in some cases still don’t, know how to get machines to handle. All those different strands then came together into DeepMind.

  MARTIN FORD: Your focus then, right from the beginning, has been on machine intelligence and especially AGI?

  DEMIS HASSABIS: Exactly. I’ve known I wanted to do this as a career since my early teens. That journey started with my first computer. I realized straight away that a computer was a magical tool because most machines extend your physical capability, but here was a machine that could extend your mental capabilities.

  I still get excited by the fact that you can write a program to crunch a scientific problem, set it running, go off to sleep, and then when you wake up in the morning it’s solved it. It’s almost like outsourcing your problems to the machine. This led me to think of AI as the natural next step, or even the end step, where we get machines to be smarter in themselves so they’re not just executing what you’re giving them, but they’re actually able to come up with their own solutions.

  I’ve always wanted to work on learning systems that learn for themselves, and I’ve always been interested in the philosophical questions of what intelligence is and how we can recreate that phenomenon artificially, which is what led me to create DeepMind.

  MARTIN FORD: There aren’t many examples of pure AGI companies around. One reason is that there’s not really a business model for doing that; it’s hard to generate revenue in the short term. How did DeepMind overcome that?

  DEMIS HASSABIS: From the beginning, we were an AGI company, and we were very clear about that. Our mission statement of solving intelligence was there from the beginning. As you can imagine, trying to pitch that to standard venture capitalists was quite hard.

  Our thesis was that because what we were building was a general-purpose technology, if you could build it to be powerful enough, general enough, and capable enough, then there should be hundreds of amazing applications for it. You’d be inundated with incoming possibilities and opportunities, but you would first require a large amount of upfront research from a group of very talented people that we’d need to bring together. We thought that was defensible because of the small number of people in the world who could actually work on this, especially if you think back to 2009 and 2010 when we first started out. You could probably count fewer than 100 people who could contribute to that type of work. Then there was the question of whether we could demonstrate clear and measurable progress.

  The problem with having a large, long-term research goal is: how do your funders gain confidence that you actually know what you’re talking about? With a typical company, your metric is your product and the number of users, something that’s easily measurable. The reason a company like DeepMind is so rare is that it’s very hard for an external non-specialist, like a venture capitalist, to judge whether you’re making sense and your plan really is sensible, or whether you’re just crazy.

  The line is very thin, especially when you’re going very far out, and in 2009 and 2010 no one was talking about AI. AI was not the hot topic that it is today. It was really difficult for me to get my initial seed funding because of the previous 30 years of failed promises in AI. We had some very strong hypotheses as to why that was, and those were the pillars we based DeepMind on: taking inspiration from neuroscience, which had massively improved our understanding of the brain in the previous 10 years; building learning systems rather than traditional expert systems; and using benchmarking and simulations for the rapid development and testing of AI. There was a set of things we committed to that turned out to be correct and that explained why AI hadn’t progressed in the previous years. Another very powerful factor was that these new techniques required a lot of computing power, which was just becoming available in the form of GPUs.

  Our thesis made sense to us, and in the end we managed to convince enough people, but it was hard because we were operating at that point in a very skeptical, unfashionable domain. Even in academia, AI was frowned upon. It had been rebranded “machine learning,” and people who worked on AI were considered fringe elements. It’s amazing to see how quickly all of that has changed.

  MARTIN FORD: Eventually you were able to secure the funding to be viable as an independent company. But then you decided to let Google acquire DeepMind. Can you tell me about the rationale behind the acquisition and how that happened?

  DEMIS HASSABIS: It’s worth noting that we had no plans to sell, partly because we figured no big corporation would understand our value until DeepMind started producing products. It’s also not fair to say that we didn’t have a business model. We did; we just hadn’t gone very far down the line of executing it. We already had some cool technology: DQN, our deep Q-network and first general-purpose learning algorithm, and our Atari work had already been done by 2013. But then Larry Page, the co-founder of Google, heard about us through some of our investors, and out of the blue in 2013 I received an email from Alan Eustace, who was running search and research at Google, saying that Larry had heard of DeepMind and would like to have a chat.
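
  At the heart of DQN is the classic Q-learning update, scaled up with a deep convolutional network. Below is a minimal tabular sketch of that update; DQN replaces the table with a neural network and stabilizes training with experience replay and a target network, none of which is shown here. The env interface (reset, actions, step) is a hypothetical stand-in, not any specific library’s API.

```python
# Minimal tabular Q-learning sketch (illustrative only).
# env is assumed to expose reset() -> state,
# actions(state) -> list of legal actions, and
# step(action) -> (next_state, reward, done).
import random

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    q = {}  # (state, action) -> estimated return

    def best_action(state, actions):
        return max(actions, key=lambda a: q.get((state, a), 0.0))

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            actions = env.actions(state)
            # epsilon-greedy: mostly exploit, occasionally explore
            action = (random.choice(actions) if random.random() < epsilon
                      else best_action(state, actions))
            next_state, reward, done = env.step(action)
            # Bellman backup: move the estimate toward
            # reward + discounted best value of the next state
            if done:
                target = reward
            else:
                target = reward + gamma * max(
                    q.get((next_state, a), 0.0)
                    for a in env.actions(next_state))
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (target - old)
            state = next_state
    return q
```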

  That was the start, but the process took a long time because there were a lot of things I wanted to be sure of before we joined forces with Google. At the end of the day, I became convinced that by combining with Google’s strengths and resources, such as their computing power and their ability to construct a much bigger team, we would be able to execute on our mission much more quickly. It wasn’t to do with money; our investors were willing to increase funding to keep us going independently, but DeepMind has always been about delivering AGI and using it for the benefit of the world, and there was an opportunity with Google to accelerate that.

  Larry and the people at Google were just as passionate about AI as I was, and they understood how important the work we would do would be. They agreed to give us autonomy as to our research roadmap and our culture, and also to staying in London, which was very important to me. Finally, they also agreed to have an ethics board concerning our technology, which was very unusual but very prescient of them.

  MARTIN FORD: Why did you choose to be in London, and not Silicon Valley? Is that a Demis Hassabis or a DeepMind thing?

  DEMIS HASSABIS: Both, really. I’m a born-and-bred Londoner, and I love London, but at the same time I thought it was a competitive advantage, because the UK and Europe have amazing universities working in AI, like Cambridge and Oxford. At the time there was also no really ambitious research company in the UK, or indeed in Europe, so our hiring prospects were strong, especially with all those universities producing great graduate and postgraduate students.

  Now, in 2018, there are a number of companies in Europe, but we were the first in AI doing deep research. More importantly, from a cultural perspective, I think it matters that we have more stakeholders and cultures involved in making AI, not just Silicon Valley in the United States, but also European sensibilities, Canadian ones, and so on. Ultimately, this is going to be of global significance, and having different voices about how to use it, what to use it for, and how to distribute the proceeds is important.

  MARTIN FORD: I believe you’re also opening up labs in other European cities?

  DEMIS HASSABIS: We’ve opened a small research lab in Paris, which is our first continental European office. We’ve also opened two labs in Canada, in Alberta and Montreal. More recently, since joining Google, we have an applied team in Mountain View, California, right next to the Google teams that we work with.

  MARTIN FORD: How closely do you work with the other AI teams at Google?

  DEMIS HASSABIS: Google’s a huge place, and there are thousands of people working on every aspect of machine learning and AI, from a very applied perspective to a pure research point of view. As a result, there are a number of team leads who all know each other, and there’s a lot of cross-collaboration, both with product teams and with research teams. It tends to be ad hoc, so it depends on individual researchers or individual topics, but we keep each other informed at a high level about our overall research directions.

  At DeepMind, we’re quite different from other teams in that we’re focused on one moonshot goal: AGI. We’re organized around a long-term roadmap based on our neuroscience-informed thesis about what intelligence is and what’s required to get there.

  MARTIN FORD: DeepMind’s accomplishments with AlphaGo are well documented. There’s even a documentary film about it (https://www.alphagomovie.com/), so I wanted to focus more on your latest innovation, AlphaZero, and on your plans for the future. It seems to me that you’ve demonstrated something very close to a general solution for perfect-information two-player games; in other words, games where everything that can be known is available on the board or in the pixels on the screen. Going forward, are you finished with that type of game? Are you planning to move on to more complex games with hidden information, and so forth?

  DEMIS HASSABIS: There’s a new version of AlphaZero that we’re going to publish soon that’s even further improved, and as you’ve said, you can think of it as a solution to two-player perfect-information games like chess, Go, shogi, and so on. Of course, the real world is not made up of perfect information, so the next step is to create systems that can deal with that. We’re already working on that, and one example is our work with the PC strategy game StarCraft, which has a very complicated action space. It’s very complex because you build units, so it’s not static in terms of what pieces you have, as in chess. It’s also played in real time, and the game has hidden information, for example the “fog of war” that obscures onscreen information until you explore an area.
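
  For context, DeepMind’s published AlphaZero method trains a single policy-and-value network purely from self-play, using Monte Carlo tree search (MCTS) guided by that network to select moves. The sketch below is a highly simplified outline of that loop, not DeepMind’s code; new_game, mcts_search, and sample_move are placeholder callables, and real implementations add exploration noise, temperature schedules, and large-scale distributed infrastructure omitted here.

```python
# Highly simplified AlphaZero-style self-play training loop
# (a sketch of the published idea, not DeepMind's code).
# new_game, mcts_search, and sample_move are placeholders
# supplied by the caller.
def self_play_train(network, new_game, mcts_search, sample_move,
                    iterations=10, games_per_iter=100):
    for _ in range(iterations):
        replay = []
        for _ in range(games_per_iter):
            game, trajectory = new_game(), []
            while not game.over():
                # MCTS, guided by the network's policy prior and
                # value estimate, returns visit counts over moves
                visit_counts = mcts_search(game, network)
                trajectory.append((game.state(), visit_counts))
                game.play(sample_move(visit_counts))
            outcome = game.result()  # +1, 0, or -1 for first player
            replay.extend((s, counts, outcome)
                          for s, counts in trajectory)
        # fit the policy head to MCTS visit counts and the
        # value head to the final game outcomes
        network.update(replay)
    return network
```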

 
