
Architects of Intelligence


by Martin Ford


  GEOFFREY HINTON: No, I view that as Demis and me having different predictions about the future.

  MARTIN FORD: Let’s talk about the potential risks of AI. One particular challenge that I’ve written about is the potential impact on the job market and the economy. Do you think that all of this could cause a new Industrial Revolution and completely transform the job market? If so, is that something we need to worry about, or is that another thing that’s perhaps overhyped?

  GEOFFREY HINTON: If you can dramatically increase productivity and make more goodies to go around, that should be a good thing. Whether or not it turns out to be a good thing depends entirely on the social system, and doesn’t depend at all on the technology. People are looking at the technology as if the technological advances are a problem. The problem is in the social systems, and whether we’re going to have a social system that shares fairly, or one that focuses all the improvement on the 1% and treats the rest of the people like dirt. That’s nothing to do with technology.

  MARTIN FORD: That problem comes about, though, because a lot of jobs could be eliminated—in particular, jobs that are predictable and easily automated. One social response to that is a basic income. Is that something that you agree with?

  GEOFFREY HINTON: Yes, I think a basic income is a very sensible idea.

  MARTIN FORD: Do you think, then, that policy responses are required to address this? Some people take a view that we should just let it play out, but that’s perhaps irresponsible.

  GEOFFREY HINTON: I moved to Canada because it has a higher taxation rate and because I think taxes done right are good things. What governments ought to do is put mechanisms in place so that when people act in their own self-interest, it helps everybody. High taxation is one such mechanism: when people get rich, everybody else gets helped by the taxes. I certainly agree that there’s a lot of work to be done in making sure that AI benefits everybody.

  MARTIN FORD: What about some of the other risks that you would associate with AI, such as weaponization?

  GEOFFREY HINTON: Yes, I am concerned by some of the things that President Putin has said recently. I think people should be very active now in trying to get the international community to treat weapons that can kill people without a person in the loop the same way as they treat chemical warfare and weapons of mass destruction.

  MARTIN FORD: Would you favor some kind of a moratorium on that type of research and development?

  GEOFFREY HINTON: You’re not going to get a moratorium on that type of research, just as you haven’t had a moratorium on the development of nerve agents, but you do have international mechanisms in place that have stopped them being widely used.

  MARTIN FORD: What about other risks, beyond the military weapon use? Are there other issues, like privacy and transparency?

  GEOFFREY HINTON: I think using it to manipulate elections and to manipulate voters is worrying. Cambridge Analytica was set up by Bob Mercer, who was a machine learning person, and you’ve seen that Cambridge Analytica did a lot of damage. We have to take that seriously.

  MARTIN FORD: Do you think that there’s a place for regulation?

  GEOFFREY HINTON: Yes, lots of regulation. It’s a very interesting issue, but I’m not an expert on it, so I don’t have much to offer.

  MARTIN FORD: What about the global arms race in general AI? Do you think it’s important that one country doesn’t get too far ahead of the others?

  GEOFFREY HINTON: What you’re talking about is global politics. For a long time, Britain was a dominant nation, and they didn’t behave very well, and then it was America, and they didn’t behave very well, and if it becomes the Chinese, I don’t expect them to behave very well.

  MARTIN FORD: Should we have some form of industrial policy? Should the United States and other Western governments focus on AI and make it a national priority?

  GEOFFREY HINTON: There are going to be huge technological developments, and countries would be crazy not to try and keep up with that, so obviously, I think there should be a lot of investment in it. That seems common sense to me.

  MARTIN FORD: Overall, are you optimistic about all of this? Do you think that the rewards from AI are going to outweigh the downsides?

  GEOFFREY HINTON: I hope the rewards will outweigh the downsides, but I don’t know whether they will, and that’s an issue with social systems, not with the technology.

  MARTIN FORD: There’s an enormous talent shortage in AI and everyone’s hiring. Is there any advice you can offer to a young person who wants to get into this field, anything that might help attract more people and enable them to become experts in AI and deep learning?

  GEOFFREY HINTON: I’m worried that there may not be enough people who are critical of the basics. The idea of Capsules is to say, maybe some of the basic ways we’re doing things aren’t the best way of doing things, and we should cast a wider net. We should think about alternatives to some of the very basic assumptions we’re making. The one piece of advice I give people is that if you have intuitions that what people are doing is wrong and that there could be something better, you should follow your intuitions.

  You’re quite likely to be wrong, but unless people follow their intuitions when they have them about how to change things radically, we’re going to get stuck. One worry is that the most fertile source of genuinely new ideas is graduate students being well advised in a university. They have the freedom to come up with genuinely new ideas, and they learn enough so that they’re not just repeating history, and we need to preserve that. People doing a master’s degree and then going straight into industry aren’t going to come up with radically new ideas. I think you need to sit and think for a few years.

  MARTIN FORD: There seems to be a hub of deep learning coalescing in Canada. Is that just random, or is there something special about Canada that helped with that?

  GEOFFREY HINTON: The Canadian Institute for Advanced Research (CIFAR) provided funding for basic research in high-risk areas, and that was very important. There’s also a lot of good luck in that both Yann LeCun, who was briefly my postdoc, and Yoshua Bengio were also in Canada. The three of us could form a collaboration that was very fruitful, and the Canadian Institute for Advanced Research funded that collaboration. This was at a time when all of us would have been a bit isolated in a fairly hostile environment—the environment for deep learning was fairly hostile until quite recently—it was very helpful to have this funding that allowed us to spend quite a lot of time with each other in small meetings, where we could really share unpublished ideas.

  MARTIN FORD: So, it was a strategic investment on the part of the Canadian government to keep deep learning alive?

  GEOFFREY HINTON: Yes. Basically, the Canadian government is investing in advanced deep learning at the level of half a million dollars a year, which is pretty efficient for something that’s going to turn into a multi-billion-dollar industry.

  MARTIN FORD: Speaking of Canadians, do you have any interaction with your fellow faculty member, Jordan Peterson? It seems like there’s all kinds of disruption coming out of the University of Toronto...

  GEOFFREY HINTON: Ha! Well, all I’ll say about that is that he’s someone who doesn’t know when to keep his mouth shut.

  GEOFFREY HINTON received his undergraduate degree from King’s College, Cambridge and his PhD in Artificial Intelligence from the University of Edinburgh in 1978. After five years as a faculty member at Carnegie-Mellon University, he became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto, where he is now an Emeritus Distinguished Professor. He is also a Vice President & Engineering Fellow at Google and Chief Scientific Adviser of the Vector Institute for Artificial Intelligence.

  Geoff was one of the researchers who introduced the backpropagation algorithm and the first to use backpropagation for learning word embeddings. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning and deep learning. His research group in Toronto made seminal breakthroughs in deep learning that revolutionized speech recognition and object classification.

  Geoff is a fellow of the UK Royal Society, a foreign member of the US National Academy of Engineering and a foreign member of the American Academy of Arts and Sciences. His awards include the David E. Rumelhart prize, the IJCAI award for research excellence, the Killam prize for Engineering, the IEEE Frank Rosenblatt medal, the IEEE James Clerk Maxwell Gold medal, the NEC C&C award, the BBVA award, and the NSERC Herzberg Gold Medal, which is Canada’s top award in science and engineering.

  Chapter 5. NICK BOSTROM

  The concern is not that [an AGI] would hate or resent us for enslaving it, or that suddenly a spark of consciousness would arise and it would rebel, but rather that it would be very competently pursuing an objective that differs from what we really want. Then you get a future shaped in accordance with alien criteria.

  PROFESSOR, UNIVERSITY OF OXFORD AND DIRECTOR OF THE FUTURE OF HUMANITY INSTITUTE

  Nick Bostrom is widely recognized as one of the world’s top experts on superintelligence and the existential risks that AI and machine learning could potentially pose for humanity. He is the Founding Director of the Future of Humanity Institute at the University of Oxford, a multidisciplinary research institute studying big-picture questions about humanity and its prospects. He is a prolific author of over 200 publications, including the 2014 New York Times bestseller Superintelligence: Paths, Dangers, Strategies.

  MARTIN FORD: You’ve written about the risks of creating a superintelligence—an entity that could emerge when an AGI system turns its energies toward improving itself, creating a recursive improvement loop that results in an intelligence that is vastly superior to humans.

  NICK BOSTROM: Yes, that’s one scenario and one problem, but there are other scenarios and other ways this transition to a machine intelligence era could unfold, and there are certainly other problems one could be worried about.

  MARTIN FORD: One idea you’ve focused on especially is the control or alignment problem where a machine intelligence’s goals or values might result in outcomes that are harmful to humanity. Can you go into more detail on what that alignment problem, or control problem, is in layman’s terms?

  NICK BOSTROM: Well, one distinctive problem with very advanced AI systems that’s different from other technologies is that it presents not only the possibility of humans misusing the technology—that’s something we see with other technologies, of course—but also the possibility that the technology could misuse itself, as it were. In other words, you create an artificial agent or a process that has its own goals and objectives, and it is very capable of achieving those objectives because, in this scenario, it is superintelligent. The concern is that the objectives that this powerful system is trying to optimize for are different from our human values, and maybe even at cross-purposes with what we want to achieve in the world. Then if you have humans trying to achieve one thing and a superintelligent system trying to achieve something different, it might well be that the superintelligence wins and gets its way.

  The concern is not that it would hate or resent us for enslaving it, or that suddenly a spark of consciousness would arise and it would rebel, but rather that it would be very competently pursuing an objective that differs from what we really want. Then you get a future shaped in accordance with alien criteria. The control problem, or the alignment problem, then, is how do you engineer AI systems so that they are an extension of human will, in the sense that our intentions shape their behavior, as opposed to some random, unforeseen, and unwanted objective cropping up?

  MARTIN FORD: You have a famous example of a system that manufactures paperclips. The idea is that when a system is conceived and given an objective, it pursues that goal with a superintelligent competence, but it does it in a way that doesn’t consider common sense, so it ends up harming us. The example you give is a system that turns the whole universe into paperclips because it’s a paperclip optimizer. Is that a good articulation of the alignment problem?

  NICK BOSTROM: The paperclip example is a stand-in for a wider category of possible failures where you ask a system to do one thing and, perhaps, initially things turn out pretty well but then it races to a conclusion that is beyond our control. It’s a cartoon example, where you design an AI to operate a paperclip factory. It’s dumb initially, but the smarter it gets, the better it operates the paperclip factory, and the owner of this factory is very pleased and wants to make more progress. However, when the AI becomes sufficiently smart, it realizes that there are other ways of achieving an even greater number of paperclips in the world, which might then involve taking control away from humans and indeed turning the whole planet into paperclips or into space probes that can go out and transform the universe into more paperclips.

  The point here is that you could substitute almost any other goal you want for paperclips, and if you think through what it would mean for that goal to be truly maximized in this world, you will find that, unless you’re really, really careful about how you specify your goals, human beings and the things we care about would be stamped out as a side effect of maximizing for that goal.
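  To make that misspecification point concrete, here is a minimal toy sketch in Python (my own illustration, not from the book; the function names and numbers are invented). An optimizer whose objective counts only paperclips will, at its optimum, devote every available unit of resources to paperclips, because nothing else appears in the objective.

```python
# Toy sketch of goal misspecification (illustrative only, not Bostrom's model).
# The objective mentions nothing but paperclips, so the optimum leaves
# nothing for anything else.

def misspecified_objective(paperclips: float) -> float:
    # The stated goal: more paperclips is always better.
    return paperclips


def best_allocation(total_resources: float, step: float = 1.0):
    """Brute-force search over how many units of the world's resources
    to devote to paperclip production versus everything else."""
    best_resources, best_score = 0.0, float("-inf")
    r = 0.0
    while r <= total_resources:
        score = misspecified_objective(paperclips=r)
        if score > best_score:
            best_resources, best_score = r, score
        r += step
    return best_resources, best_score


if __name__ == "__main__":
    devoted, _ = best_allocation(total_resources=100.0)
    # Prints 100.0: the optimum devotes everything to paperclips,
    # since human values never entered the objective function.
    print(f"Resources devoted to paperclips: {devoted} of 100.0")
```

  Nothing in the search penalizes taking everything; the failure sits in the objective, not in the optimizer.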

  MARTIN FORD: When I hear this problem described, it’s always given as a situation where we give the system a goal, and then it pursues that goal in a way that we’re not happy with. However, I never hear of a system that simply changes its goal, and I don’t quite understand why that is not a concern. Why couldn’t a superintelligent system at some point just decide to have different goals or objectives? Humans do it all of the time!

  NICK BOSTROM: The reason why this seems less of a concern is that although a superintelligence would have the ability to change its goals, you have to consider the criteria it uses to choose its goals. It would make that choice based on the goals it has at that moment. In most situations, it would be a very poor strategic move for an agent to change its goals because it can predict that in the future, there will then not be an agent pursuing its current goal but instead an agent pursuing some different goal. This would tend to produce outcomes that would rank lower by its current goals, which by definition here are what it is using as the criteria by which to select actions. So, once you have a sufficiently sophisticated reasoning system, you expect it to figure this out and therefore be able to achieve internal goal stability.
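  A toy way to see this argument (again my own sketch, with invented numbers, not Bostrom’s formalism): an agent that scores every option, including the option of rewriting its own goal, using the goal it currently holds will predict that a future self with a different goal makes less progress on the current one, and so it declines to switch.

```python
# Minimal sketch of goal stability: the agent scores each option using the
# goal it holds *right now* (goal A), including the option of rewriting
# itself to pursue a different goal (goal B). Numbers are hypothetical.

outcomes = {
    "keep current goal":  {"goal_A_progress": 10, "goal_B_progress": 0},
    "switch to new goal": {"goal_A_progress": 1,  "goal_B_progress": 10},
}

def current_utility(outcome: dict) -> int:
    # The agent's current goal is A, so only progress on A counts.
    return outcome["goal_A_progress"]

# The agent picks whichever option ranks highest under its current goal.
choice = max(outcomes, key=lambda name: current_utility(outcomes[name]))
print(choice)  # -> "keep current goal": switching predictably loses value
```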

  Humans are a mess. We don’t have a particular goal from which all the other objectives we pursue are sub-goals. We have different parts of our minds that are pulling in different directions, and if you increase our hormone levels, we suddenly change those values. Humans are not stable in the same way as machines, and maybe don’t have a very clean, compact description as goal-maximizing agents. That’s why it can seem that we humans sometimes decide to change our goals. It’s not so much us deciding to change our goals; it’s our goals just changing. Alternatively, by “goals,” we don’t mean our fundamental criteria for judging things, but just some particular objective, which of course can change as circumstances change or we discover new plans.

  MARTIN FORD: A lot of the research going into this is informed by neuroscience, though, so there are ideas coming from the human brain being injected into machine intelligence. Imagine a superintelligence that has at its disposal all of human knowledge. It would be able to read all of human history. It would read about powerful individuals, and how they had different objectives and goals. The machine could also conceivably be subject to pathologies. The human brain has all kinds of problems, and there are drugs that can change the way the brain works. How do we know there’s not something comparable in the machine space?

  NICK BOSTROM: I think there well could be, particularly in the earlier stages of development, before the machine achieves sufficient understanding of how AI works to be able to modify itself without messing itself up. Ultimately, there are convergent instrumental reasons for developing technology to prevent your goals from being corrupted. I would expect a sufficiently capable system to develop those technologies for goal stability, and indeed it might place some priority on developing them. However, if it’s in a rush or if it’s not yet very capable—if it’s roughly at the human level—the possibility certainly exists that things could get scrambled. A change might be implemented with the hope that it would make it a more effective thinker, but it turns out to have some side effect of changing its objective function.

  MARTIN FORD: The other thing that I worry about is that it’s always framed as a concern about how the machine is not going to do what we want, where “we” applies to collective humanity, as though there’s some sort of universal set of human desires or values. Yet, if you look at the world today, that’s really not the case. The world has different cultures with different value sets. It seems to me that it might matter quite a lot where the first machine intelligence is developed. Is it naive to talk about the machine and all of humanity as being one entity? To me, it just seems like things are a lot messier than that.

  NICK BOSTROM: You try to break up the big problem into smaller problems so that you can then make progress on them. You try to break out one component of the overall challenge, in this case the technical problem of how to align AI with any human values at all, so that the machine does what its developers want it to do. Unless you have a solution to that, you don’t have the privilege even to try for a solution to the wider, political problems of ensuring that we humans will then use this powerful technology for some beneficial purpose.

  You need to solve the technical problem to get the opportunity to squabble over whose values, or in what degrees different values should guide the use of this technology. It is true, of course, that even if you have a solution to the technical control problem, you’ve really only solved part of the overall challenge. You also then need to figure out a way that we can use this peacefully and in a way that benefits all of humanity.

 
