Architects of Intelligence


by Martin Ford


  The conversations included here were conducted from February to August 2018 and virtually all of them occupied at least an hour, some substantially more. They were recorded, professionally transcribed, and then edited for clarity by the team at Packt. Finally, the edited text was provided to the person I spoke to, who then had the opportunity to revise it and expand it. Therefore, I have every confidence that the words recorded here accurately reflect the thoughts of the person I interviewed.

  The AI experts I spoke to are highly varied in terms of their origins, locations, and affiliations. One thing that even a brief perusal of this book will make apparent is the outsized influence of Google in the AI community. Of the 23 people I interviewed, seven have current or former affiliations with Google or its parent, Alphabet. Other major concentrations of talent are found at MIT and Stanford. Geoff Hinton and Yoshua Bengio are based at the Universities of Toronto and Montreal respectively, and the Canadian government has leveraged the reputations of their research organizations into a strategic focus on deep learning. Nineteen of the 23 people I spoke to work in the United States. Of those 19, however, more than half were born outside the US. Countries of origin include Australia, China, Egypt, France, Israel, Rhodesia (now Zimbabwe), Romania, and the UK. I would say this is pretty dramatic evidence of the critical role that skilled immigration plays in the technological leadership of the US.

  As I carried out the conversations in this book, I had in mind a variety of potential readers, ranging from professional computer scientists, to managers and investors, to virtually anyone with an interest in AI and its impact on society. One especially important audience, however, consists of young people who might consider a future career in artificial intelligence. There is currently a massive shortage of talent in the field, especially among those with skills in deep learning, and a career in AI or machine learning promises to be exciting, lucrative and consequential.

  As the industry works to attract more talent into the field, there is widespread recognition that much more must be done to ensure that those new people are more diverse. If artificial intelligence is indeed poised to reshape our world, then it is crucial that the individuals who best understand the technology—and are therefore best positioned to influence its direction—be representative of society as a whole.

  About a quarter of those interviewed in this book are women, and that number is likely significantly higher than what would be found across the entire field of AI or machine learning. A recent study found that women represent about 12 percent of leading researchers in machine learning. (https://www.wired.com/story/artificial-intelligence-researchers-gender-imbalance) A number of the people I spoke to emphasized the need for greater representation for both women and members of minority groups.

  As you will learn from her interview in this book, one of the foremost women working in artificial intelligence is especially passionate about the need to increase diversity in the field. Stanford University’s Fei-Fei Li co-founded an organization now called AI4ALL (http://ai-4-all.org/) to provide AI-focused summer camps geared especially to underrepresented high school students. AI4ALL has received significant industry support, including a recent grant from Google, and has now scaled up to include summer programs at six universities across the United States. While much work remains to be done, there are good reasons to be optimistic that diversity among AI researchers will increase significantly in the coming years and decades.

  While this book does not assume a technical background, you will encounter some of the concepts and terminology associated with the field. For those without previous exposure to AI, I believe this will afford an opportunity to learn about the technology directly from some of the foremost minds in the field. To help less experienced readers get started, a brief overview of the vocabulary of AI follows this introduction, and I recommend you take a few moments to read this material before beginning the interviews. Additionally, the interview with Stuart Russell, who is the co-author of the leading AI textbook, includes an explanation of many of the field’s most important ideas.

  It has been an extraordinary privilege for me to participate in the conversations in this book. I believe you will find everyone I spoke with to be thoughtful, articulate, and deeply committed to ensuring that the technology he or she is working to create will be leveraged for the benefit of humanity. What you will not so often find is broad-based consensus. This book is full of varied, and often sharply conflicting, insights, opinions, and predictions. The message should be clear: Artificial intelligence is a wide open field. The nature of the innovations that lie ahead, the rate at which they will occur, and the specific applications to which they will be applied are all shrouded in deep uncertainty. It is this combination of massive potential disruption together with fundamental uncertainty that makes it imperative that we begin to engage in a meaningful and inclusive conversation about the future of artificial intelligence and what it may mean for our way of life. I hope this book will make a contribution to that discussion.

  A Brief Introduction to the Vocabulary of AI

  The conversations in this book are wide-ranging and in some cases delve into the specific techniques used in AI. You don’t need a technical background to understand this material, but in some cases you may encounter the terminology used in the field. What follows is a very brief guide to the most important terms you will encounter in the interviews. If you take a few moments to read through this material, you will have all you need to fully enjoy this book. If you do find that a particular section is more detailed or technical than you would prefer, I would advise you to simply skip ahead to the next section.

  MACHINE LEARNING is the branch of AI that involves creating algorithms that can learn from data. Another way to put this is that machine learning algorithms are computer programs that essentially program themselves by looking at information. You still hear people say “computers only do what they are programmed to do…” but the rise of machine learning is making this less and less true. There are many types of machine learning algorithms, but the one that has recently proved most disruptive (and gets all the press) is deep learning.

  DEEP LEARNING is a type of machine learning that uses deep (or many layered) ARTIFICIAL NEURAL NETWORKS—software that roughly emulates the way neurons operate in the brain. Deep learning has been the primary driver of the revolution in AI that we have seen in the last decade or so.

  There are a few other terms that less technically inclined readers can translate as simply “stuff under the deep learning hood.” Opening the hood and delving into the details of these terms is entirely optional: BACKPROPAGATION (or BACKPROP) is the learning algorithm used in deep learning systems. As a neural network is trained (see supervised learning below), information propagates back through the layers of neurons that make up the network and causes a recalibration of the settings (or weights) for the individual neurons. The result is that the entire network gradually homes in on the correct answer. Geoff Hinton co-authored the seminal academic paper on backpropagation in 1986. He explains backprop further in his interview. An even more obscure term is GRADIENT DESCENT. This refers to the specific mathematical technique that the backpropagation algorithm uses to reduce the error as the network is trained. You may also run into terms that refer to various types, or configurations, of neural networks, such as RECURRENT and CONVOLUTIONAL neural nets and BOLTZMANN MACHINES. The differences generally pertain to the ways the neurons are connected. The details are technical and beyond the scope of this book. Nonetheless, I did ask Yann LeCun, who invented the convolutional architecture that is widely used in computer vision applications, to take a shot at explaining this concept.
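
  For readers curious what gradient descent looks like in practice, the short Python sketch below trains a single artificial neuron to convert Celsius to Fahrenheit from examples. It is only an illustration (the data and learning rate are invented for this sketch), but the loop captures the essential idea: compute the error, then nudge the weights a small step in the direction that reduces it.

```python
# Illustrative sketch only: gradient descent on a single "neuron."
# The neuron learns y = w * x + b from example inputs and correct answers.
import numpy as np

celsius = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # inputs
fahrenheit = celsius * 1.8 + 32.0                    # correct answers (labels)

w, b = 0.0, 0.0         # the neuron's adjustable settings (weights)
learning_rate = 0.001   # how large each corrective step is

for step in range(20000):
    prediction = w * celsius + b
    error = prediction - fahrenheit
    # How the average squared error changes as w and b change (the gradient).
    grad_w = 2 * np.mean(error * celsius)
    grad_b = 2 * np.mean(error)
    # Step "downhill" against the gradient: this is gradient descent.
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(f"learned w = {w:.2f}, b = {b:.2f}")  # approaches 1.80 and 32.00
```

  In a deep network, backpropagation plays the role of computing these gradients efficiently for every weight in every layer; the downhill step itself works in the same way.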

  BAYESIAN is a term that can generally be translated as “probabilistic” or “using the rules of probability.” You may encounter terms like Bayesian machine learning or Bayesian networks; these refer to algorithms that use the rules of probability. The term derives from the name of the Reverend Thomas Bayes (1701 to 1761), who formulated a way to update the likelihood of an event based on new evidence. Bayesian methods are very popular both with computer scientists and with scientists who attempt to model human cognition. Judea Pearl, who is interviewed in this book, received the highest honor in computer science, the Turing Award, in part for his work on Bayesian techniques.
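
  To make the idea of a Bayesian update concrete, here is a tiny worked example in Python. The numbers are invented purely for illustration: they show how an initial probability (the prior) is revised after a new piece of evidence arrives.

```python
# Illustrative numbers only: updating a probability with Bayes' rule.
# Suppose 1% of patients have a disease, a test detects 90% of true cases,
# and it falsely flags 5% of healthy patients.
prior = 0.01                 # P(disease) before seeing any evidence
p_pos_given_disease = 0.90   # P(positive test | disease)
p_pos_given_healthy = 0.05   # P(positive test | no disease)

# Overall chance of a positive test, counting both possibilities.
p_pos = p_pos_given_disease * prior + p_pos_given_healthy * (1 - prior)

# Bayes' rule: P(disease | positive) = P(positive | disease) * P(disease) / P(positive)
posterior = p_pos_given_disease * prior / p_pos
print(f"P(disease | positive test) = {posterior:.3f}")  # about 0.154
```

  Even with a fairly accurate test, the updated probability is only about 15 percent, because the disease is rare to begin with; this is exactly the kind of reasoning Bayesian methods automate.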

  How AI Systems Learn

  There are several ways that machine learning systems can be trained. Innovation in this area—finding better ways to teach AI systems—will be critical to future progress in the field.

  SUPERVISED LEARNING involves providing a learning algorithm with carefully structured training data that has been categorized or labeled. For example, you could teach a deep learning system to recognize a dog in photographs by feeding it many thousands (or even millions) of images containing a dog. Each of these would be labeled “Dog.” You would also need to provide a huge number of images without a dog, labeled “No Dog.” Once the system has been trained, you can then input entirely new photographs, and the system will tell you either “Dog” or “No Dog”—and it might well be able to do this with a proficiency that exceeds that of a typical human being.
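
  As a hypothetical illustration of that workflow, the sketch below trains a classifier on a handful of labeled examples and then asks it about a new one. It uses the scikit-learn library and made-up two-number “features” standing in for real photographs, so it is only a toy, but the pattern of label, train, and predict is the same one used at much larger scale.

```python
# Toy supervised-learning sketch (invented features stand in for photos).
from sklearn.linear_model import LogisticRegression

training_examples = [
    [0.9, 0.1], [0.8, 0.2], [0.7, 0.3],   # images containing a dog
    [0.1, 0.9], [0.2, 0.8], [0.3, 0.7],   # images without a dog
]
labels = ["Dog", "Dog", "Dog", "No Dog", "No Dog", "No Dog"]

model = LogisticRegression()
model.fit(training_examples, labels)      # learn from the labeled data

# An entirely new example the model has never seen before:
print(model.predict([[0.85, 0.15]]))      # expected output: ['Dog']
```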

  Supervised learning is by far the most common technique used in current AI systems, accounting for perhaps 95 percent of practical applications. Supervised learning powers language translation (trained with millions of documents pre-translated into two different languages) and AI radiology systems (trained with millions of medical images labeled either “Cancer” or “No Cancer”). One problem with supervised learning is that it requires massive amounts of labeled data. This explains why companies that control huge amounts of data, like Google, Amazon, and Facebook, have such a dominant position in deep learning technology.

  REINFORCEMENT LEARNING essentially means learning through practice or trial and error. Rather than training an algorithm by providing the correct, labeled outcome, the learning system is set loose to find a solution for itself, and if it succeeds it is given a “reward.” Imagine training your dog to sit, and if he succeeds, giving him a treat. Reinforcement learning has been an especially powerful way to build AI systems that play games. As you will learn from the interview with Demis Hassabis in this book, DeepMind is a strong proponent of reinforcement learning and relied on it to create the AlphaGo system.

  The problem with reinforcement learning is that it requires a huge number of practice runs before the algorithm can succeed. For this reason, it is primarily used for games or for tasks that can be simulated on a computer at high speed. Reinforcement learning can be used in the development of self-driving cars—but not by having actual cars practice on real roads. Instead, virtual cars are trained in simulated environments. Once the software has been trained, it can be moved to real-world cars.
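
  The toy Python sketch below shows the trial-and-error loop in miniature. It uses a simple form of reinforcement learning called tabular Q-learning, chosen here only for illustration and unrelated to how AlphaGo was built: an agent wanders a five-cell corridor, receives a reward only when it reaches the last cell, and gradually learns that moving right is the winning strategy.

```python
# Toy reinforcement-learning sketch: tabular Q-learning on a tiny corridor.
import random

n_states = 5            # five cells; the goal is the last one
actions = [-1, +1]      # move left or move right
q = {(s, a): 0.0 for s in range(n_states) for a in actions}

alpha, gamma, epsilon = 0.5, 0.9, 0.1   # learning rate, discount, exploration

for episode in range(500):              # many practice runs
    state = 0
    while state != n_states - 1:
        # Usually pick the action that currently looks best; explore
        # occasionally, or whenever both options look equally good.
        tied = q[(state, -1)] == q[(state, +1)]
        if tied or random.random() < epsilon:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), n_states - 1)
        reward = 1.0 if next_state == n_states - 1 else 0.0    # the "treat"
        # Update the estimate of how good this action was in this state.
        best_next = max(q[(next_state, a)] for a in actions)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy: the best action from each non-goal cell (all +1, i.e. right).
print([max(actions, key=lambda a: q[(s, a)]) for s in range(n_states - 1)])
```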

  UNSUPERVISED LEARNING means teaching machines to learn directly from unstructured data coming from their environments. This is how human beings learn. Young children, for example, learn languages primarily by listening to their parents. Supervised learning and reinforcement learning also play a role, but the human brain has an astonishing ability to learn simply by observation and unsupervised interaction with the environment.

  Unsupervised learning represents one of the most promising avenues for progress in AI. We can imagine systems that can learn by themselves without the need for huge volumes of labeled training data. However, it is also one of the most difficult challenges facing the field. A breakthrough that allowed machines to efficiently learn in a truly unsupervised way would likely be considered one of the biggest events in AI so far, and an important waypoint on the road to human-level AI.
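
  One simple, well-established form of unsupervised learning is clustering: grouping unlabeled data points that look similar. It falls far short of the rich observational learning described above, but the hypothetical sketch below (using scikit-learn’s k-means algorithm on invented data) shows the key difference from supervised learning: no labels are provided, and the algorithm discovers the groups on its own.

```python
# Unsupervised-learning sketch: clustering unlabeled points with k-means.
from sklearn.cluster import KMeans

# Made-up, unlabeled data points that happen to form two loose groups.
points = [[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
          [8.0, 8.2], [7.9, 8.1], [8.3, 7.8]]

model = KMeans(n_clusters=2, n_init=10, random_state=0)
cluster_ids = model.fit_predict(points)   # no labels were ever supplied
print(cluster_ids)                        # e.g. [0 0 0 1 1 1]
```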

  ARTIFICIAL GENERAL INTELLIGENCE (AGI) refers to a true thinking machine. AGI is typically considered to be more or less synonymous with the terms HUMAN-LEVEL AI or STRONG AI. You’ve likely seen several examples of AGI—but they have all been in the realm of science fiction. HAL from 2001: A Space Odyssey, the Enterprise’s main computer (or Mr. Data) from Star Trek, C-3PO from Star Wars, and Agent Smith from The Matrix are all examples of AGI. Each of these fictional systems would be capable of passing the TURING TEST—in other words, these AI systems could carry on a conversation well enough to be indistinguishable from a human being. Alan Turing proposed this test in his 1950 paper, Computing Machinery and Intelligence, which arguably established artificial intelligence as a modern field of study. In other words, AGI has been the goal from the very beginning.

  It seems likely that if we someday succeed in achieving AGI, that smart system will soon become even smarter. In other words, we will see the advent of SUPERINTELLIGENCE, or a machine that exceeds the general intellectual capability of any human being. This might happen simply as a result of more powerful hardware, but it could be greatly accelerated if an intelligent machine turns its energies toward designing even smarter versions of itself. This might lead to what has been called a “recursive improvement cycle” or a “fast intelligence takeoff.” This is the scenario that has led to concern about the “control” or “alignment” problem—where a superintelligent system might act in ways that are not in the best interest of the human race.

  I have judged the path to AGI and the prospect for superintelligence to be topics of such high interest that I have discussed these issues with everyone interviewed in this book.

  MARTIN FORD is a futurist and the author of two books: the New York Times bestseller Rise of the Robots: Technology and the Threat of a Jobless Future (winner of the 2015 Financial Times/McKinsey Business Book of the Year Award and translated into more than 20 languages) and The Lights in the Tunnel: Automation, Accelerating Technology and the Economy of the Future. He is also the founder of a Silicon Valley-based software development firm. His TED Talk on the impact of AI and robotics on the economy and society, given on the main stage at the 2017 TED Conference, has been viewed more than 2 million times.

  Martin is also the consulting artificial intelligence expert for the new “Rise of the Robots Index” from Societe Generale, underlying the Lyxor Robotics & AI ETF, which is focused specifically on investing in companies that will be significant participants in the AI and robotics revolution. He holds a computer engineering degree from the University of Michigan, Ann Arbor and a graduate business degree from the University of California, Los Angeles.

  He has written about future technology and its implications for publications including The New York Times, Fortune, Forbes, The Atlantic, The Washington Post, Harvard Business Review, The Guardian, and The Financial Times. He has also appeared on numerous radio and television shows, including NPR, CNBC, CNN, MSNBC and PBS. Martin is a frequent keynote speaker on the subject of accelerating progress in robotics and artificial intelligence—and what these advances mean for the economy, job market and society of the future.

  Martin continues to focus on entrepreneurship and is actively engaged as a board member and investor at Genesis Systems, a startup company that has developed a revolutionary atmospheric water generation (AWG) technology. Genesis will soon deploy automated, self-powered systems that will generate water directly from the air at industrial scale in the world’s most arid regions.

  Chapter 2. YOSHUA BENGIO

  Current AI—and the AI that we can foresee in the reasonable future—does not, and will not, have a moral sense or moral understanding of what is right and what is wrong.

  SCIENTIFIC DIRECTOR, MONTREAL INSTITUTE FOR LEARNING ALGORITHMS AND PROFESSOR OF COMPUTER SCIENCE AND OPERATIONS RESEARCH, UNIVERSITY OF MONTREAL

  Yoshua Bengio is a professor of computer science and operations research at the University of Montreal and is widely recognized as one of the pioneers of deep learning. Yoshua was instrumental in advancing neural network research, in particular in unsupervised learning, where neural networks can learn without relying on vast amounts of labeled training data.

  MARTIN FORD: You are at the forefront of AI research, so I want to begin by asking what current research problems you think we’ll see breakthroughs in over the next few years, and how those will help us on the road to AGI (artificial general intelligence)?

  YOSHUA BENGIO: I don’t know exactly what we’re going to see, but I can tell you that there are some really hard problems in front of us and that we are far from human-level AI. Researchers are trying to understand what the issues are, such as, why is it that we can’t build machines that really understand the world as well as we do? Is it just that we don’t have enough training data, or is it that we don’t have enough computing power? Many of us think that we are also missing the basic ingredients needed, such as the ability to understand causal relationships in data—an ability that actually enables us to generalize and to come up with the right answers in settings that are very different from those we’ve been trained in.

  A human can imagine themselves going through an experience that is completely new to them. You might have never had a car accident, for example, but you can imagine one and because of all the things you already know you’re actually able to roleplay and make the right decisions, at least in your head. Current machine learning is based on supervised learning, where a computer essentially learns about the statistics of the data that it sees, and it needs to be taken through that process by hand. In other words, humans have to provide all of those labels, possibly hundreds of millions of correct answers, that the computer can then learn from.

 
