Architects of Intelligence


by Martin Ford


  MARTIN FORD: Is solving that technical control problem, in terms of how to build a machine that remains aligned with the objective, what you’re working on at the Future of Humanity Institute, and what other think tanks like OpenAI and the Machine Intelligence Research Institute are focusing on?

  NICK BOSTROM: Yes, that’s right. We do have a group working on that, but we’re also working on other things. We also have a governance of AI group that focuses on the governance problems related to advances in machine intelligence.

  MARTIN FORD: Do you think that think tanks like yours are an appropriate level of resource allocation for AI governance, or do you think that governments should jump into this at a larger scale?

  NICK BOSTROM: I think there could be more resources devoted to AI safety. It’s not actually just us: DeepMind also has an AI safety group that we work with, but I do think more resources would be beneficial. There is already a lot more talent and money now than there was even four years ago. In percentage terms, there has been a rapid growth trajectory, even though in absolute terms it’s still a very small field.

  MARTIN FORD: Do you think that superintelligence concerns should be more in the public sphere? Do you want to see presidential candidates in the United States talking about superintelligence?

  NICK BOSTROM: Not really. It’s still a bit too early to seek involvement from states and governments because right now it’s not exactly clear what one would want them to do that would be helpful at this point in time. The nature of the problem first needs to be clarified and understood better, and there’s a lot of work that can be done without having governments come in. I don’t see any need right now for any particular regulations with respect to machine superintelligence. There are all kinds of things related to near-term AI applications where there might be various roles for governments to play.

  If you’re going to have flying drones everywhere in the cities, or self-driving cars on the streets, then there presumably needs to be a framework that regulates them. The extent to which AI will have an impact on the economy and the labor market is also something that should be of interest to people running education systems or setting economic policy. I still think superintelligence is a little bit outside the purview of politicians, who mainly think about what might happen during their tenure.

  MARTIN FORD: So, when Elon Musk says superintelligence is a bigger threat than North Korea, could that rhetoric potentially make things worse?

  NICK BOSTROM: If you get into this prematurely, framing it as a big arms race, that could lead to a more competitive situation where voices for caution and global cooperation get sidelined, and then yes, it could actually make things worse rather than better. I think one can wait until there is a clear, concrete thing that one actually would need and want governments to do in relation to superintelligence, and then one can try to get them activated. Until that time, there’s still a huge amount of work that we can do, for example, in collaboration with the AI development community and with companies and academic institutions that are working with AI, so let’s get on with that groundwork for the time being.

  MARTIN FORD: How did you come to your role in the AI community? How did you first become interested in AI, and how did your career develop to the point it’s at right now?

  NICK BOSTROM: I’ve been interested in artificial intelligence for as long as I can remember. I studied artificial intelligence, and later computational neuroscience, at university, as well as other topics, like theoretical physics. I did this because I thought that, firstly, AI technology could eventually be transformative in the world, and, secondly, it’s very interesting intellectually to try to figure out how thinking is produced by the brain or in a computer.

  I published some work about superintelligence in the mid-1990s, and I had the opportunity in 2006 to create the Future of Humanity Institute (FHI) at Oxford University. Together with my colleagues, I work full-time on the implications of future technologies for the future of humanity, with a particular focus—some might say an obsession—on the future of machine intelligence. That then resulted in 2014 in my book Superintelligence: Paths, Dangers, Strategies. Currently, we have two groups within the FHI. One group focuses on technical computer science work on the alignment problem, so trying to craft algorithms for scalable control methods. The other group focuses on governance, policy, ethics and the social implications of advances in machine intelligence.

  MARTIN FORD: In your work at the Future of Humanity Institute you’ve focused on a variety of existential risks, not just AI-related dangers, right?

  NICK BOSTROM: That’s right, but we’re also looking at the existential opportunities; we are not blind to the upside of technology.

  MARTIN FORD: Tell me about some of the other risks you’ve looked at, and why you’ve chosen to focus so much on machine intelligence above all.

  NICK BOSTROM: At the FHI, we’re interested in really big-picture questions, the things that could fundamentally change the human condition in some way. We’re not trying to study what next year’s iPhone might be like, but instead things that could change some fundamental parameter of what it means to be human—questions that shape the future destiny of Earth-originating intelligent life. From that perspective, we are interested in existential risk—things that could permanently destroy human civilization—and also things that could permanently shape our trajectory into the future. I think technology is maybe the most plausible source for such fundamental reshapers of humanity, and within technology there are just a few that plausibly present either existential risks or existential opportunities; AI might be the foremost amongst those. FHI also has a group working on the biosecurity risks coming out of biotechnology, and we’re interested more generally in how you put these different considerations together—a macro strategy, as we call it.

  Why AI in particular? I think that if AI were to be successful at its original goal, which all along has been not just to automate specific tasks but to replicate in machine substrates the general-purpose learning ability and planning ability that makes us humans smart, then that would quite literally be the last invention that humans ever needed to make. If achieved, it would have enormous implications not just in AI, but across all technological fields, and indeed all areas where human intelligence currently is useful.

  MARTIN FORD: What about climate change, for example? Is that on your list of existential threats?

  NICK BOSTROM: Not so much, partly because we prefer to focus where we think our efforts might make a big difference, which tends to be areas where the questions have been relatively neglected. There are tons of people currently working on climate change across the world. Also, it’s hard to see how the planet getting a few degrees warmer would cause the extinction of the human species, or permanently destroy the future. So, for those and some other reasons, that’s not been at the center of our own efforts, although we might cast a sideways glance at it on occasion by trying to sum up the overall picture of the challenges that humanity confronts.

  MARTIN FORD: So, you would argue that the risk from advanced AI is actually more significant than from climate change, and that we’re allocating our resources and investment in these questions incorrectly? That sounds like a very controversial view.

  NICK BOSTROM: I do think that there is some misallocation, and it’s not just between those two fields in particular. In general, I don’t think that we as a human civilization allocate our attention that wisely. If we imagine humans as having an amount of concern capital, chips of concern or fear that we can spread around on different things that threaten human civilization, I don’t think we are that sophisticated in how we choose to allocate those concern chips.

  If you look back over the last century, there has been at any given point in time maybe one big global concern that all intellectually educated people are supposed to be fixated on, and it’s changed over time. So maybe 100 years ago, it was dysgenics, where intellectuals were worrying about the deterioration of the human stock. Then during the Cold War, obviously nuclear Armageddon was a big concern, and then for a while, it was overpopulation. Currently, I would say it’s global warming, although AI has, over the last couple of years, been creeping up there.

  MARTIN FORD: That’s perhaps largely due to the influence of people like Elon Musk talking about it. Do you think that’s a positive thing that he’s been so vocal, or is there a danger that it becomes overhyped or it draws uninformed people into the discussion?

  NICK BOSTROM: I think so far the net effect has been positive. When I was writing my book, it was striking how neglected the whole topic of AI was. There were a lot of people working on AI, but very few people thinking about what would happen if AI were to succeed. It also wasn’t the kind of topic you could have a serious conversation with people about because they would dismiss it as just science fiction, but that’s now changed.

  I think that’s valuable, and maybe as a consequence of this having become a more mainstream topic, it’s now possible to do research and publish technical papers on things like the alignment problem. There are a number of research groups doing just that, including here at the FHI, where we have joint technical research seminars with DeepMind; OpenAI also has a number of AI safety researchers; and there are other groups like the Machine Intelligence Research Institute at Berkeley. I’m not sure whether there would have been as much talent flowing into this field if the profile of the whole challenge had not first been raised. What is most needed today is not further alarm or hand-wringing with people screaming for attention; the challenge now is more to channel the existing concern and interest in constructive directions and to get on with the work.

  MARTIN FORD: Is it true to say that the risks you worry about in terms of machine intelligence are really all dependent on achieving AGI and beyond that, superintelligence? The risks associated with narrow AI are probably significant, but not what you would characterize as existential.

  NICK BOSTROM: That’s correct. We do also have some interest in these more near-term applications of machine intelligence, which are interesting in their own right and also worth having a conversation about. I think the trouble arises when these two different contexts, the near term and the long term, get thrown into the same pot and confused.

  MARTIN FORD: What are some of the near-term risks that we need to worry about over the next five years or so?

  NICK BOSTROM: In the near term, I think there are primarily things that I would be very excited about and look forward to seeing rolled out. In the near-term context, the upside far outweighs the downside. Just look across the economy at all the areas where having smarter algorithms could make a positive difference. Even a low-key, boring algorithm running in the background in a big logistics center, predicting demand curves more accurately, would enable you to reduce the amount of stock, and therefore cut prices for consumers.
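
  A rough worked example, not from the interview itself, may make the inventory point concrete. The sketch below uses the standard safety-stock formula to show how a more accurate demand forecast, meaning a smaller forecast error, directly shrinks the buffer stock a logistics center has to hold; the service level, lead time, and error figures are hypothetical.

```python
# Illustrative only: standard safety-stock calculation with hypothetical numbers.
import math

SERVICE_LEVEL_Z = 1.65      # z-score for roughly a 95% service level
LEAD_TIME_DAYS = 4          # days between ordering and receiving stock

def safety_stock(forecast_error_std: float) -> float:
    """Units held purely to buffer against demand-forecast error."""
    return SERVICE_LEVEL_Z * forecast_error_std * math.sqrt(LEAD_TIME_DAYS)

# A cruder forecast that misses daily demand by ~200 units, versus a better
# algorithm that misses by ~120 units (both figures hypothetical):
print(round(safety_stock(200)))   # 660 units of buffer stock
print(round(safety_stock(120)))   # 396 units -- about 40% less inventory to hold
```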

  In healthcare, the same neural networks that can recognize cats, dogs, and faces could recognize tumors in x-ray images and assist radiologists in making more accurate diagnoses. Those neural networks might run in the background and help optimize patient flows and track outcomes. You could name almost any area, and there would probably be creative ways to use these new techniques that are emerging from machine learning to good effect.
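
  To make the “same networks, different task” point a bit more tangible, here is a hedged sketch, not anything from the interview, of how an off-the-shelf image classifier trained on everyday photographs can be repurposed for a two-class x-ray task by swapping its final layer. It assumes PyTorch and torchvision are available; the medical data and fine-tuning loop are deliberately left out.

```python
# Illustrative sketch: repurposing a pretrained image classifier for a 2-class task.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ordinary photographs (cats, dogs, faces, ...).
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Swap the 1000-class ImageNet head for a 2-class head: "tumor" vs. "no tumor".
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Fine-tuning on labeled x-ray images would go here (not shown).
# A dummy forward pass just confirms the shapes line up:
dummy_batch = torch.randn(4, 3, 224, 224)   # 4 RGB images, 224x224
logits = backbone(dummy_batch)               # shape: (4, 2)
print(logits.shape)
```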

  I think that’s a very exciting field, with a lot of opportunity for entrepreneurs. From a scientific point of view as well, it’s really exciting to begin to understand a little bit about how intelligence works and how perception is performed by the brain and in these neural systems.

  MARTIN FORD: A lot of people worry about the near-term risks of things like autonomous weapons that can make their own decisions about who to kill. Do you support a ban on weapons of those types?

  NICK BOSTROM: It would be positive if the world could avoid immediately jumping into another arms race, where huge amounts of money are spent perfecting killer robots. Broadly speaking, I’d prefer that machine intelligence is used for peaceful purposes, and not to develop new ways of destroying us. I think if one zooms in, it becomes a little bit less clear exactly what it is that one would want to see banned by a treaty.

  There’s a move to say that humans must be in the loop and that we should not have autonomous drones make targeting decisions on their own, and maybe that is possible. However, the alternative is that you have exactly the same system in place, but instead of the drone deciding to fire a missile, a 19-year-old sits in Arlington, Virginia in front of a computer screen, and their job is to press a red button whenever a window pops up on the screen saying “Fire.” If that’s what human oversight amounts to, then it’s not clear that it really makes that much of a difference from having the whole system be completely autonomous. I think maybe more important is that there is some accountability, and there’s somebody whose butt you can kick if things go wrong.

  MARTIN FORD: There are certain situations you can imagine where an autonomous machine might be preferable. Thinking of policing rather than military applications, we’ve had incidents in the United States of what appears to be police racism, for example. A properly designed AI-driven robotic system in a situation like that would not be biased. It would also be prepared to take a bullet first, and shoot second, which is really not an option for a human being.

  NICK BOSTROM: Preferably we shouldn’t be fighting any wars between ourselves at all, but if there are going to be wars, maybe it’s better if it’s machines killing machines rather than young men shooting holes in other young men. If there are going to be strikes against specific combatants, maybe you can make precision strikes that only kill the people you’re trying to kill, and don’t create collateral damage with civilians. That’s why I’m saying that the overall calculation becomes a little bit more complex when one considers the specifics, and what exactly the rule or agreement is that one would want to be implemented with regard to lethal autonomous weapons.

  There are other areas of application that also raise interesting ethical questions, such as surveillance, the management of data flows, marketing, and advertising, which might matter as much for the long-term outcome of human civilization as these more direct applications of drones to kill or injure people.

  MARTIN FORD: Do you feel there is a role for regulation of these technologies?

  NICK BOSTROM: Some regulation, for sure. If you’re going to have killer drones, you don’t want any old criminal to be able to easily assassinate public officials from five kilometers away using a drone with facial recognition software. Likewise, you don’t want to have amateurs flying drones across airports and causing big delays. I’m sure some form of regulatory framework will be required as we get more of these drones traversing spaces where humans are traveling for other purposes.

  MARTIN FORD: It’s been about four years since your book Superintelligence: Paths, Dangers, Strategies was published. Are things progressing at the rate that you expected?

  NICK BOSTROM: Progress has been faster than expected over the last few years, with big advances in deep learning in particular.

  MARTIN FORD: You had a table in your book where you said that having a computer beat the best Go player in the world was a decade out, so that would have been roughly 2024. As things turned out, it actually occurred just two years after you published the book.

  NICK BOSTROM: I think the statement I made was that if progress continued at the same rate as it had been going over the last several years, then one would expect a Go Grand Champion machine to occur about a decade after the book was written. However, the progress was faster than that, partly because there was a specific effort toward solving Go. DeepMind took on the challenge and assigned some good people to the task, and put a lot of computing power onto it. It was certainly a milestone, though, and a demonstration of the impressive capabilities of these deep learning systems.

  MARTIN FORD: What are the major milestones or hurdles that you would point to that stand between us and AGI?

  NICK BOSTROM: There are several big challenges remaining in machine learning, such as the need for better techniques for unsupervised learning. If you think about how adult humans come to know all the things we do, only a small fraction of that is done through explicit instruction. Most of it is by us just observing what’s going on and using that sensory feed to improve our world models. We also do a lot of trial and error as toddlers, banging different things into one another and seeing what happens.

  In order to get highly effective machine intelligence systems, we also need algorithms that can make more use of unsupervised and unlabeled data. As humans, we tend to organize a lot of our world knowledge in causal terms, and that’s something that is not really done much by current neural networks. They are more about finding statistical regularities in complex patterns, rather than organizing knowledge as objects that can have various kinds of causal impacts on other objects. So, that would be one aspect.
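
  The supervised/unsupervised contrast Bostrom draws can be shown in a few lines. The sketch below is my illustration, not something referenced in the interview; scikit-learn and the synthetic data are assumptions. It gives a learner nothing but raw, unlabeled observations and lets it recover the hidden grouping on its own, which is the kind of label-free learning from a sensory feed he is pointing at, in a far simpler form.

```python
# Illustrative only: an unsupervised learner that never sees any labels.
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Stand-in "sensory feed": 1,000 unlabeled observations drawn from 3 hidden groups.
X, hidden_groups = make_blobs(n_samples=1000, centers=3, n_features=5, random_state=0)

# The model is fit on X alone; hidden_groups is never shown to it.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# It still recovers three prototypes that line up with the hidden structure.
print(model.cluster_centers_.shape)   # (3, 5)
```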

  I also think that there are advances needed in planning and a number of other areas as well, and it is not as if there are no ideas out there on how to achieve these things. There are limited techniques available that can do various aspects of these things relatively poorly, and I think that there just needs to be a great deal of improvement in those areas in order for us to get all the way to full human general intelligence.

  MARTIN FORD: DeepMind seems to be one of the very few companies that’s focused specifically on AGI. Are there other players that you would point to that are doing important work, that you think may be competitive with what DeepMind is doing?

  NICK BOSTROM: DeepMind is certainly among the leaders, but there are many places where there is exciting work being done on machine learning or work that might eventually contribute to achieving artificial general intelligence. Google itself has another world-class AI research group in the form of Google Brain. Other big tech companies now have their own AI labs: Facebook, Baidu, and Microsoft have quite a lot of research in AI going on.

 
