Architects of Intelligence

by Martin Ford


  This is one of the subtlest things in the whole field. People see these amazing achievements, like a program that beats people at Go, and they say, “Wow! Intelligence must be around the corner.” But when you get to these more nuanced things like natural language, or reasoning over knowledge, it turns out that we don’t even know, in some sense, the right questions to ask.

  Pablo Picasso is famous for saying that computers are useless because they answer questions rather than asking them. So, when we define a question rigorously, when we can define it mathematically or as a computational problem, we’re really good at hammering away at it and figuring out the answer. But there are a lot of questions that we don’t yet know how to formulate appropriately, such as: how can we represent natural language inside a computer? Or, what is common sense?

  MARTIN FORD: What are the primary hurdles we need to overcome to achieve AGI?

  OREN ETZIONI: When I talk to people working in AI about these questions, such as when we might achieve AGI, one of the things that I really like to do is identify what I call canaries in the coal mine. In the same way that coal miners put canaries in the mines to warn them of dangerous gases, I feel that there are certain stepping stones—and that if we achieved those, then AI would be in a very different world.

  So, one of those stepping stones would be an AI program that can really handle multiple, very different tasks. An AI program that’s able to both do language and vision, it’s able to play board games and cross the street, it’s able to walk and chew gum. Yes, that is a joke, but I think it is important for AI to have the ability to do much more complex things.

  Another stepping stone is that it’s very important that these systems be a lot more data-efficient. So, how many examples do you need to learn from? If you have an AI program that can really learn from a single example, that feels meaningful. For example, I can show you a new object, and you look at it, you’re going to hold it in your hand, and you’re thinking, “I’ve got it.” Now, I can show you lots of different pictures of that object, or different versions of that object in different lighting conditions, partially obscured by something, and you’d still be able to say, “Yep, that’s the same object.” But machines can’t do that off of a single example yet. That would be a real stepping stone to AGI for me.

  Self-replication is another dramatic stepping stone towards AGI. Can we have an AI system that is physically embodied and that can make a copy of itself? That would be a huge canary in the coal mine, because then that AI system could make lots of copies of itself. People have quite a laborious and involved process for making copies of themselves, and AI systems cannot do this at all: you can copy the software easily, but not the hardware. Those are some of the major stepping stones to AGI that come to mind.

  MARTIN FORD: And maybe the ability to use knowledge in a different domain would be a core capability. You gave the example of studying a chapter in a textbook. To be able to acquire that knowledge, and then not just answer questions about it, but actually be able to employ it in a real-world situation. That would seem to be at the heart of true intelligence.

  OREN ETZIONI: I completely agree with you, and answering questions is only a step along the way. What matters is the employment of that knowledge in the real world, and also in unanticipated situations.

  MARTIN FORD: I want to talk about the risks that are associated with AI, but before we do that, do you want to say more about what you view as some of the greatest benefits, some of the most promising areas where AI could be deployed?

  OREN ETZIONI: There are two examples that stand out to me, the first being self-driving cars. We have upwards of 35,000 deaths each year on US highways alone, and on the order of a million accidents where people are injured, and studies have shown that we could cut out a substantial fraction of that by using self-driving cars. I get very excited when I see how AI can directly translate into technologies that save lives.

  The second example, which we’re working on, is science—which has been such an engine of prosperity, of economic growth, of the improvement of medicine, and of benefit for humanity generally. Yet despite these advancements, there are still so many challenges, whether it’s Ebola, or cancer, or superbugs that are resistant to antibiotics. Scientists need help to solve these problems, and just to move faster. A project like Semantic Scholar has the potential to save people’s lives by enabling better medical research and better medical outcomes.

  My colleague, Eric Horvitz, is one of the most thoughtful people on these topics. He has a great quote when he responds to people who are worried about AI taking lives. He says that actually, it’s the absence of AI technology that is already killing people. The third-leading cause of death in American hospitals is physician error, and a lot of that could be prevented using AI. So, our failure to use AI is really what’s costing lives.

  MARTIN FORD: Since you mentioned self-driving cars, let me try to pin you down on a timeframe. Imagine you’re in Manhattan, in some random location, and you call for a car. A self-driving car arrives with no one inside, and it’s going to take you to some other random location. When do you think we will see that as a widely available consumer service?

  OREN ETZIONI: I would say that is probably somewhere between 10 and 20 years away from today.

  MARTIN FORD: Let’s talk about the risks. I want to start with the one that I’ve written a lot about, which is the potential economic disruption, and the impact on the job market. I think it’s quite possible that we’re on the leading edge of a new industrial revolution, which might really have a transformative impact, and maybe will destroy or deskill a lot of jobs. What do you think about that?

  OREN ETZIONI: I very much agree with you, in the sense that I have tried, as you have, not to get overly focused on the threats of superintelligence because we should have fewer imaginary problems and more real ones. But we have some very real problems and one of the most prominent of them, if not the most prominent, is jobs. There’s a long-term trend towards the reduction of manufacturing jobs, and due to automation, computer automation, and AI-based automation, we now have the potential to substantially accelerate that timeline. So, I do think that there’s a very real issue here.

  One point that I would make is that the demographics are also working in our favor. The number of children we have as a species is getting smaller on average, more of us are living longer, and society is aging—especially after the baby boom. So, for the next 20 years, I think we’re going to be seeing increasing automation, but we’re also going to be seeing the number of workers not growing as quickly as it did before. Another way that demographic factors work in our favor is that, while for the last two decades more women were entering the workforce and the percentage of female participation in the workforce was going up, this effect has now plateaued. In other words, women who want to be in the workforce are now already there. So again, I think that for the next 20 years we’re not going to see the number of workers increasing. Still, I think the risk of automation taking jobs away from people is a serious one.

  MARTIN FORD: In the long run, what do you think of the idea of a universal basic income, as a way to adapt society to the economic consequences of automation?

  OREN ETZIONI: I think that what we’ve already seen with agriculture, and with manufacturing, is clearly going to recur. Let’s say we don’t argue about the exact timing. It’s very clear that, in the next 10 to 50 years, many jobs are either going to go away completely or those jobs are going to be radically transformed—they’ll be done a lot more efficiently, with fewer people.

  As you know, the number of people working in agriculture is much smaller than it was in the past, and the jobs involved in agriculture are now much more sophisticated. So, when that happens, we have this question: “What are the people going to do?” I don’t necessarily know, but I do have one contribution to this conversation, which I wrote up as an article for Wired in February 2017 titled Workers displaced by automation should try a new job: Caregiver. (https://www.wired.com/story/workers-displaced-by-automation-should-try-a-new-job-caregiver/)

  In that Wired article, I said that some of the most vulnerable workers in the economic situation we’re discussing here are people who don’t have a high-school degree or a college degree. I don’t think it’s likely that the “coal miners to data miners” approach is going to succeed—the idea that we’ll give these people technical retraining and that they’ll somehow become part of the new economy very easily. I think that’s a major challenge.

  I also don’t think that universal basic income, at least given the current climate, where we can’t even achieve universal health care, or universal housing, is going to be easy either.

  MARTIN FORD: It seems pretty clear that any viable solution to this problem will be a huge political challenge.

  OREN ETZIONI: I don’t know that there is a general solution or a silver bullet, but my contribution to the conversation is to think about jobs that are very strongly human focused. Think of the jobs providing emotional support: having coffee with somebody or being a companion who keeps somebody company. I think that those are the jobs that when we think about our elderly, when we think about our special-needs kids, when we think about various populations like that, those are the ones that we really want a person to engage with—rather than a robot.

  If we want society to allocate resources toward those kinds of jobs, to give the people engaged in those jobs better compensation and greater dignity, then I think that there’s room for people to take on those jobs. That said, there are many issues with my proposal, I don’t think it’s a panacea, but I do think it’s a direction that’s worth investing in.

  MARTIN FORD: Beyond the job market impact, what other things do you think we genuinely should be concerned about in terms of artificial intelligence in the next decade or two?

  OREN ETZIONI: Cybersecurity is already a huge concern, and it becomes much more so if we have AI. The other big concern for me is autonomous weapons, which is a scary proposition, particularly the ones that can make life-or-death decisions on their own. But what we just talked about, the risks to jobs—that is still the thing that we should be most concerned about, even more so than security and weapons.

  MARTIN FORD: How about existential risk from AGI, and the alignment or control problem with regard to a superintelligence? Is that something that we should be worried about?

  OREN ETZIONI: I think that it’s great for a small number of philosophers and mathematicians to contemplate the existential threat, so I’m not dismissing it out of hand. At the same time, I don’t think those are the primary things that we should be concerned about, nor do I think that there’s that much that we can do at this point about that threat.

  I think that one of the interesting things to consider is if a superintelligence emerges, it would be really nice to be able to communicate with it, to talk to it. The work that we’re doing at AI2—and that other people are also doing—on natural language understanding, seems like a very valuable contribution to AI safety, at least as valuable as worrying about the alignment problem, which ultimately is just a technical problem having to do with reinforcement learning and objective functions.

  So, I wouldn’t say that we’re underinvesting in being prepared for AI safety, and certainly some of the work that we’re doing at AI2 is actually implicitly a key investment in AI safety.

  MARTIN FORD: Any concluding thoughts?

  OREN ETZIONI: Well, there’s one other point I wanted to make that I think people often miss in the AI discussion, and that’s the distinction between intelligence and autonomy (https://www.wired.com/2014/12/ai-wont-exterminate-us-it-will-empower-us/).

  We naturally think that intelligence and autonomy go hand in hand. But you can have a highly intelligent system with essentially no autonomy, and the example of that is a calculator. A calculator is a trivial example, but something like AlphaGo that plays brilliant Go but won’t play another game until somebody pushes a button: that’s high intelligence and low autonomy.

  You can also have high autonomy and low intelligence. My favorite kind of tongue-in-cheek example is a bunch of teenagers drinking on a Saturday night: that’s high autonomy but low intelligence. But a real-world example that we’ve all experienced would be a computer virus, which has low intelligence but quite a strong ability to bounce around computer networks. My point is that we should understand that the systems we’re building have these two dimensions to them, intelligence and autonomy, and that it’s often the autonomy that is the scary part.

  MARTIN FORD: Drones or robots that could decide to kill without a human in the loop to authorize that action is something that is really generating a lot of concern in the AI community.

  OREN ETZIONI: Exactly—when they’re autonomous and they can make life-and-death decisions on their own. Intelligence, on the other hand, could actually help save lives, by making weapons more targeted, or by having them abort when the human cost is unacceptable or when the wrong person or building is targeted.

  I want to emphasize the fact that a lot of our worries about AI are really worries about autonomy, and I want to emphasize that autonomy is something that we as a society can choose to mete out.

  I like to think of “AI” as standing for “augmented intelligence,” just as it is with systems like Semantic Scholar and like with self-driving cars. One of the reasons that I am an AI optimist, and feel so passionate about it, and the reason that I’ve dedicated my entire career to AI since high school, is that I see this tremendous potential to do good with AI.

  MARTIN FORD: Is there a place for regulation, to address that issue of autonomy? Is that something that you would advocate?

  OREN ETZIONI: Yes, I think that regulation is both inevitable and appropriate when it comes to powerful technologies. I would focus on regulating the applications of AI—so AI cars, AI clothes, AI toys, and AI in nuclear power plants, rather than the field itself. Note that the boundary between AI and software is quite murky!

  We’re in a global competition for AI, so I wouldn’t rush to regulate AI per se. Of course, existing regulatory bodies like the National Transportation Safety Board are already looking at AI cars, and at the recent Uber accident. I think that regulation is very appropriate, and that it will happen and should happen.

  Oren Etzioni is the CEO of the Allen Institute for Artificial Intelligence (abbreviated as AI2), an independent, non-profit research organization established by Microsoft co-founder Paul Allen in 2014. AI2, located in Seattle, employs over 80 researchers and engineers with the mission of “conducting high-impact research and engineering in the field of artificial intelligence, all for the common good.”

  Oren received a bachelor’s degree in computer science from Harvard in 1986. He then went on to obtain a PhD from Carnegie Mellon University in 1991. Prior to joining AI2, Oren was a professor at the University of Washington, where he co-authored over 100 technical papers. Oren is a fellow of the Association for the Advancement of Artificial Intelligence, and is also a successful serial entrepreneur, having founded or co-founded a number of technology startups that were acquired by larger firms such as eBay and Microsoft. Oren helped to pioneer meta-search (1994), online comparison shopping (1996), machine reading (2006), open information extraction (2007), and semantic search of the academic literature (2015).

  Chapter 24. BRYAN JOHNSON

  AI is the best thing since sliced bread. We should embrace it wholeheartedly and understand the secrets of unlocking the human brain by embracing AI. We can’t do it by ourselves.

  ENTREPRENEUR FOUNDER, KERNEL & OS FUND

  Bryan Johnson is the founder of Kernel, OS Fund, and Braintree. After the sale of Braintree to PayPal in 2013 for $800m, Johnson founded OS Fund in 2014 with $100m of those funds. His objective was to invest in entrepreneurs and companies that develop breakthrough discoveries in hard science to address our most pressing global problems. In 2016, Johnson founded Kernel with another $100m of his
funds. Kernel is building brain-machine interfaces with the intention of providing humans with the option to radically enhance their cognition.

  MARTIN FORD: Could you explain what Kernel is? How did it get started, and what is the long-term vision?

  BRYAN JOHNSON: Most people start companies with a product in mind, and they build that given product. I started Kernel with a problem identified—we need to build better tools to read and write our neural code, to address disease and malfunction, to illuminate the mechanisms of intelligence, and to extend our cognition. Look at the tools we have to interface with our brain right now—we can get an image of our brain via an MRI scan, we can do poor recordings via EEG outside the scalp that don’t really give us much, and we can implant an electrode to address a disease. Outside of that, our brain is largely inaccessible to the world beyond our five senses. I started Kernel with $100 million with the objective of figuring out what tools we can build. We’ve been on this quest for two years, and we remain in stealth mode on purpose. We have a team of 30 people, and we feel very good about where we’re at. We’re working very hard to build the next breakthroughs. I wish I could give you more details about where we stand. We will share that in time, but right now we’re not ready.

  MARTIN FORD: The articles I’ve read suggest that you’re beginning with medical applications to help with conditions like epilepsy. My understanding is that you initially want to try an invasive approach that involves brain surgery, and you then want to leverage what you learn to eventually move to something that will enhance cognition, while hopefully being less invasive. Is that the case, or are you imagining that we’re all going to have chips inserted into our brain at some point?

  BRYAN JOHNSON: Having chips in our brain is one avenue that we’ve contemplated, but we’ve also started looking at every possible entry point in neuroscience because the key in this game is figuring out how to create a profitable business. Figuring out how to create an implantable chip is one option, but there are many other options, and we’re looking at all of them.

 
