Architects of Intelligence

by Martin Ford


  Beyond that, games are just our training domain. We’re not doing all this work just to solve games; we want to build these general algorithms that we can apply to real-world problems.

  MARTIN FORD: So far, your focus has primarily been on combining deep learning with reinforcement learning. That’s basically learning by practice, where the system repeatedly attempts something, and there’s a reward function that drives it toward success. I’ve heard you say that you believe that reinforcement learning offers a viable path to general intelligence, that it might be sufficient to get there. Is that your primary focus going forward?

  DEMIS HASSABIS: Going forward, yes, it is. I think that technique is extremely powerful, but you need to combine it with other things to scale it. Reinforcement learning has been around for a long time, but it was only used on very small toy problems because it was very difficult for anyone to scale that learning up in any way. In our Atari work, we combined it with deep learning, which handled the processing of the screen and the modeling of the environment you’re in. Deep learning is amazing at scaling, so combining it with reinforcement learning allowed us to scale to the large problems that we’ve now tackled in AlphaGo and DQN, all things that people would have told you were impossible 10 years ago.
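
  As an editorial illustration of that combination, here is a minimal sketch of the core DQN update in PyTorch: a convolutional network reads raw screen pixels and outputs one Q-value per action, and Q-learning regresses those values toward bootstrapped targets. The layer sizes loosely follow the original Atari paper, but everything else here is a simplified assumption for illustration, not DeepMind’s actual code.

```python
# Minimal sketch of the core DQN update: a deep network processes the
# screen, and Q-learning supplies the training signal. Illustrative only.
import torch
import torch.nn as nn

GAMMA = 0.99  # discount factor

# Convolutional network: a batch of 4 stacked, preprocessed 84x84 frames in,
# one Q-value per action out (6 actions assumed here; it varies by game).
q_net = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
    nn.Linear(256, 6),
)
optimizer = torch.optim.RMSprop(q_net.parameters(), lr=2.5e-4)

def dqn_step(obs, action, reward, next_obs, done):
    """One update: regress Q(s, a) toward r + GAMMA * max_a' Q(s', a')."""
    q = q_net(obs)[torch.arange(len(action)), action]   # Q for taken actions
    with torch.no_grad():                               # bootstrapped target
        target = reward + GAMMA * q_net(next_obs).max(1).values * (1 - done)
    loss = nn.functional.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

  The published agent adds two stabilizers this sketch omits: transitions are sampled from a replay buffer rather than used as they arrive, and the bootstrap target is computed with a periodically frozen copy of the network.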

  I think we proved that first part. The reason we were so confident about it, and why we backed it when we did, is that in my opinion reinforcement learning will become as big as deep learning in the next few years. DeepMind is one of the few companies that take that seriously because, from the neuroscience perspective, we know that the brain uses a form of reinforcement learning, called temporal difference learning, as one of its learning mechanisms, and we know the dopamine system implements it. Your dopamine neurons track the prediction errors your brain is making, and you strengthen your synapses according to those reward signals. The brain works along these principles, and the brain is our only example of general intelligence, which is why we take neuroscience very seriously here. To us, that must be a viable solution to the problem of general intelligence. It may not be the only one, but from a biologically inspired standpoint, it seems reinforcement learning is sufficient once you scale it up enough. Of course, there are many technical challenges in doing that, and many of them are unsolved.
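
  The temporal difference mechanism he describes is simple enough to state exactly. Below is a minimal tabular TD(0) sketch; the names and constants are illustrative, and the td_error term plays the role of the dopamine prediction-error signal.

```python
# Tabular TD(0) value learning. The td_error quantity mirrors the
# dopamine prediction-error signal described above. Toy illustration.
ALPHA = 0.1   # learning rate
GAMMA = 0.9   # discount factor for future reward

values = {}   # state -> estimated value V(s), defaulting to 0.0

def td_update(state, reward, next_state):
    """Move V(state) toward the TD target r + GAMMA * V(next_state)."""
    v, v_next = values.get(state, 0.0), values.get(next_state, 0.0)
    td_error = reward + GAMMA * v_next - v   # the prediction error
    values[state] = v + ALPHA * td_error     # "strengthen the synapse"
    return td_error
```

  A positive td_error means the outcome was better than predicted, so the value estimate is raised; with repeated experience the errors shrink toward zero as the predictions improve.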

  MARTIN FORD: Still, when a child learns things like language or an understanding of the world, it doesn’t really seem like reinforcement learning for the most part. It’s unsupervised learning, as no one’s giving the child labeled data the way we would with ImageNet. Yet somehow, a young child can learn organically, directly from the environment. It seems to be driven more by observation or random interaction with the environment than by learning through practice with a specific goal in mind.

  DEMIS HASSABIS: A child learns with many mechanisms; it’s not like the brain only uses one. The child gets supervised learning from their parents, teachers, or peers, and they do unsupervised learning when they’re just experimenting with stuff with no goal in mind. They also do reward learning and reinforcement learning when they do something and get a reward for it.

  We work on all three of those, and they’re all going to be needed for intelligence. Unsupervised learning is hugely important, and we’re working on that. The question here is: are there intrinsic motivations that evolution has designed into us that end up being proxies for reward, which then guide the unsupervised learning? Just look at information gain. There is strong evidence that gaining information is intrinsically rewarding to your brain.

  Another thing would be novelty seeking. We know that seeing novel things releases dopamine in the brain, so novelty is intrinsically rewarding. In a sense, it could be that these intrinsic motivations we have chemically in our brains are guiding what seems to us to be unstructured play or unsupervised learning. If the brain finds information and structure rewarding in themselves, then that’s a hugely useful motivation for unsupervised learning; you’re just going to try to find structure no matter what, and it seems like the brain is doing that.

  Depending on what you count as the reward, some of these things could be intrinsic rewards guiding the unsupervised learning. I find it useful to think about intelligence in the framework of reinforcement learning.
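
  One simple way to make the novelty idea concrete, purely as an illustration, is a count-based exploration bonus: the agent optimizes the environment’s reward plus a term that shrinks as a state becomes familiar. The names and the square-root schedule below are hypothetical conventions, not a specific DeepMind method.

```python
# Toy sketch of novelty as an intrinsic reward: rarely visited states
# earn a larger bonus, so a reward-maximizing agent seeks them out.
import math
from collections import defaultdict

visit_counts = defaultdict(int)   # state -> times seen so far
BETA = 0.5                        # weight of the intrinsic (novelty) term

def shaped_reward(state, extrinsic_reward):
    """Environment reward plus a bonus that decays with familiarity."""
    visit_counts[state] += 1
    novelty_bonus = BETA / math.sqrt(visit_counts[state])
    return extrinsic_reward + novelty_bonus
```

  An agent maximizing shaped_reward will, all else being equal, prefer states it has rarely seen, which gives apparently goalless exploration a reward-learning interpretation of the kind described above.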

  MARTIN FORD: One thing that’s obvious from listening to you is that you combine a deep interest in both neuroscience and computer science. Is that combined approach true for DeepMind as a whole? How does the company integrate knowledge and talent from those two areas?

  DEMIS HASSABIS: I’m definitely right in the middle of those two fields, as I’m equally trained in both. I would say DeepMind is clearly more skewed towards machine learning; however, our biggest single group here at DeepMind is made up of neuroscientists, led by Matt Botvinick, an amazing neuroscientist and professor from Princeton. We take it very seriously.

  The problem with neuroscience is that it’s a massive field in itself, way bigger than machine learning. If you as a machine-learning person wanted to quickly find out which parts of neuroscience would be useful to you, then you’d be stuck. There’s no book that’s going to tell you that; there’s just a mass of research work, and you’ll have to figure out for yourself how to parse that information and find the nuggets that could be useful from an AI perspective. Most of that neuroscience research is undertaken for medical research, psychology, or neuroscience itself. Neuroscientists aren’t designing those experiments thinking they would be useful for AI. 99% of that literature is not useful to you as an AI researcher, so you have to train yourself to navigate it and pick out what the right influences are, and what the right level of influence is, for each of those areas.

  Quite a lot of people talk about neuroscience inspiring AI work, but I don’t think many of them really have concrete ideas on how to do that. Let’s explore two extremes. One is that you could try to reverse-engineer the brain, which is what quite a lot of people are attempting in their approach to AI, and I mean literally reverse-engineer the brain at the cortical level, a prime example being the Blue Brain Project.

  MARTIN FORD: That’s being directed by Henry Markram, right?

  DEMIS HASSABIS: Right, and he’s literally trying to reverse-engineer cortical columns. It may be interesting neuroscience but, in my view, that is not the most efficient path towards building AI because it’s too low-level. What we’re interested in at DeepMind is a systems-level understanding of the brain and the algorithms the brain implements, the capabilities it has, the functions it has, and the representations it uses.

  DeepMind is not looking at the exact specifics of the wetware or how the biology actually instantiates it; we can abstract all of that away. That makes sense, because why would you imagine an in-silico system would have to mimic an in-carbo system, when the two have completely different strengths and weaknesses? In silicon, there’s no reason why you would want to copy the exact implementation details of, say, a hippocampus. On the other hand, I am very interested in the computations and functions the hippocampus has, like episodic memory, navigating in space, and the grid cells it uses. These are all systems-level influences from neuroscience, and they showcase our interest in the functions, representations, and algorithms the brain uses, not the exact details of implementation.

  MARTIN FORD: You often hear the analogy that airplanes don’t flap their wings. Airplanes achieve flight, but don’t precisely mimic what birds do.

  DEMIS HASSABIS: That’s a great example. At DeepMind, we’re doing the equivalent of trying to understand aerodynamics by looking at birds, then abstracting the principles of aerodynamics and building a fixed-wing plane.

  Of course, people who built planes were inspired by birds. The Wright Brothers knew that heavier-than-air flight was possible because they’d seen birds. Before the airfoil was invented, they tried, without success, to use deformable wings, which were more like birds gliding. What you’ve got to do is look at nature, and then try to abstract away the things that are not important for the phenomenon you’re after: in that case, flying, and in our case, intelligence. But that doesn’t mean that looking at nature didn’t help your search process.

  My point is that you don’t know yet what the outcome looks like. If you’re trying to build something artificial like intelligence and it doesn’t work straight away, how do you know that you’re looking in the right place? Is your 20-person team wasting their time, or should you push a bit harder and maybe crack it next year? Having neuroscience as a guide allows me to make much bigger, much stronger bets in situations like that.

  A great example of this is reinforcement learning. I know reinforcement learning has to be scalable because the brain scales it. If you didn’t know that the brain implemented reinforcement learning and your system wasn’t scaling, how would you know, on a practical level, whether you should spend another two years on it? It’s very important to narrow down the search space that you’re exploring as a team or a company, and I think that’s a meta-point that is often missed by people who ignore neuroscience.

  MARTIN FORD: I think you’ve made the point that the work in AI could also inform research being done in neuroscience. DeepMind just came out with a result on grid cells used in navigation, and it sounds like you’ve got them to emerge organically in a neural network. In other words, the same basic structure naturally arises in both the biological brain and in artificial neural networks, which seems pretty remarkable.

  DEMIS HASSABIS: I’m very excited about that because it’s one of our biggest breakthroughs in the last year. Edvard Moser and May-Britt Moser, who discovered grid cells and won the Nobel Prize for their work, both wrote to us, very excited about this finding, because it means that these grid cells may be not just a function of the wiring of the brain but actually the most optimal way of representing space in a computational sense. That’s a huge and important finding for the neuroscientists, because what they’re speculating now is that maybe the brain isn’t necessarily hardwired to create grid cells. Perhaps if you have that structure of neurons and you just expose them to space, that is the most efficient coding any system would come up with.

  We’ve also recently created a whole new theory around how the prefrontal cortex might work, based on looking at our AI algorithms and what they were doing, and then having our neuroscientists translate that into how the brain might work.

  I think this is the beginning of many more examples of AI ideas and algorithms inspiring us to look at the brain in a different way, to look for new things in the brain, or to serve as analysis tools for experimenting with our ideas about how we think the brain might work.

  As a neuroscientist, I think that the journey we’re on of building neuroscience-inspired AI is one of the best ways to address some of the complex questions we have about the brain. If we build an AI system that’s based on neuroscience, we can then compare it to the human brain and maybe start gleaning some information about its unique characteristics. We could start shedding light on some of the profound mysteries of the mind like the nature of consciousness, creativity, and dreaming. I think that comparing the brain to an algorithmic construct could be a way to understand that.

  MARTIN FORD: It sounds like you think there could be some discoverable general principles of intelligence that are substrate-independent. To return to the flight analogy, you might call it “the aerodynamics of intelligence.”

  DEMIS HASSABIS: That’s right, and if you extract that general principle, then it must be useful for understanding the particular instance of the human brain.

  MARTIN FORD: Can you talk about some of the practical applications that you imagine happening within the next 10 years? How are your breakthroughs going to be applied in the real world in the relatively near future?

  DEMIS HASSABIS: We’re already seeing lots of things in practice. All over the world people are interacting with AI today through machine translation, image analysis, and computer vision.

  DeepMind has started working on quite a few things, like optimizing the energy used in Google’s data centers. We’ve worked on WaveNet, the very human-like text-to-speech system that’s now in the Google Assistant on all Android-powered phones. We use AI in recommendation systems in Google Play, and even in behind-the-scenes elements like saving battery life on your Android phone: things that everyone uses every single day. We’re finding that because they’re general algorithms, they’re coming up all over the place, so I think that’s just the beginning.

  What I’m hoping will come through next are the collaborations we have in healthcare. An example of this is our work with the famous UK eye hospital Moorfields, where we’re looking at diagnosing macular degeneration from retinal scans. We published the results from the first phase of our joint research partnership in Nature Medicine, and they show that our AI system can quickly interpret eye scans from routine clinical practice with unprecedented accuracy. It can also correctly recommend how patients should be referred for treatment, for over 50 sight-threatening eye diseases, as accurately as world-leading expert doctors.

  There are other teams doing similar work for diseases like skin cancer. Over the next five years, I think healthcare will be one of the biggest areas to see a benefit from the work we’re all doing in the field.

  What I’m really personally excited about, and this is something I think we’re on the cusp of, is using AI to actually help with scientific problems. We’re working on things like protein folding, but you can imagine its use in material design, drug discovery, and chemistry. People are using AI for everything from analyzing data from the Large Hadron Collider to searching for exoplanets. There are a lot of really cool areas with masses of data in which we as human experts find it hard to identify the structure, and I think this kind of AI is going to be used for them more and more. I’m hoping that over the next 10 years this will accelerate the pace of scientific breakthroughs in some really fundamental areas.

  MARTIN FORD: What does the path to AGI look like? What would you say are the main hurdles that will have to be surmounted before we have human-level AI?

  DEMIS HASSABIS: From the beginning of DeepMind we identified some big milestones, such as the learning of abstract, conceptual knowledge, and then using that for transfer learning. Transfer learning is where you usefully transfer your knowledge from one domain to a new domain that you’ve never seen before; it’s something humans are amazing at. If you give me a new task, I won’t be terrible at it out of the box, because I’ll bring some knowledge from similar or structurally related things and can start dealing with it straight away. That’s something computer systems are pretty terrible at, because they require lots of data and they’re very inefficient. We need to improve that.
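
  Today’s systems do manage a narrow, engineering version of this. As an illustrative sketch (assuming torchvision; the 10-class target task is hypothetical), the standard recipe reuses features learned on ImageNet and trains only a small new head for the unseen domain, which needs far less data than starting from scratch, though it is still a long way from the human-level conceptual transfer described here.

```python
# Minimal fine-tuning sketch: transfer ImageNet features to a new task.
# Hypothetical setup; the target domain is assumed to have 10 classes.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")  # knowledge from the source domain
for param in model.parameters():
    param.requires_grad = False                   # freeze the transferred features
model.fc = nn.Linear(model.fc.in_features, 10)    # new head for the target domain
# From here, only model.fc is trained on the new domain's data.
```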

  Another milestone is that we need to get better at language understanding, and another is replicating things that old AI systems were able to do, like symbolic manipulation, but using our new techniques. We’re a long way from all of those, but they would be really big milestones if they were to happen. If you look at where we were in 2010, just eight years ago, we’ve already achieved some big things that were milestones to us, like AlphaGo, but there are more to come. So those would be the big ones for me, concepts and transfer learning.

  MARTIN FORD: When we do achieve AGI, do you imagine intelligence being coupled with consciousness? Is it something that would automatically emerge, or is consciousness a completely separate thing?

  DEMIS HASSABIS: That’s one of the interesting questions that this journey will address. I don’t know the answer to it at the moment, but that’s one of the very exciting things about the work that both we and others are doing in this field.

  My hunch currently would be that consciousness and intelligence are double-dissociable: you can have intelligence without consciousness, and you can have consciousness without human-level intelligence. I’m pretty sure smart animals have some level of consciousness and self-awareness, but they’re obviously not that intelligent, at least compared to humans, and I can imagine building machines that are phenomenally intelligent by some measures but would not feel conscious to us in any way at all.

  MARTIN FORD: Like an intelligent zombie, something that has no inner experience.

  DEMIS HASSABIS: Something that wouldn’t feel sentient in the way we feel other humans are. Now that’s a philosophical question, because the problem, as we see with the Turing test, is how we would know whether it was behaving in the same way we were. The Occam’s razor explanation is to say that if you’re exhibiting the same behavior as I exhibit, and you’re made from the same stuff as I’m made from, and I know what I feel, then I can assume you’re feeling the same thing as me. Why would you not?

  What’s interesting with a machine is that it could exhibit the same behavior as a human, if we designed it like that, but it’s on a different substrate. If you’re not on the same substrate, then that Occam’s razor idea doesn’t hold as strongly. It may be that machines are conscious in some sense, but we wouldn’t feel it in the same way, because we don’t have that additional assumption to rely on. If you break down why we think each of us is conscious, that shared substrate is a very important assumption: if you’re operating on the same substrate as me, why would it feel any different to your substrate?

  MARTIN FORD: Do you believe machine consciousness is possible? There are some people that argue consciousness is fundamentally a biological phenomenon.

  DEMIS HASSABIS: I am actually open-minded about that, in the sense that I don’t think we know. It could well turn out that there’s something very special about biological systems. There are people like Sir Roger Penrose who think it has to do with quantum consciousness, in which case a classical computer wouldn’t have it, but it’s an open question. That’s why I think the path we’re on will shed some light on it, because we don’t actually know whether that’s a limit or not. Either way, it will be fascinating, because it would be pretty amazing if it turned out that you couldn’t build consciousness on a machine at all. That would tell us a lot about what consciousness is and where it resides.

 
