Architects of Intelligence

by Martin Ford


  Babies don’t have a huge amount of means to act on the world, but they observe a lot, and they learn a huge amount by observing. Baby animals also do this. They probably have more hardwired stuff, but it’s very similar.

  Until we figure out how to do this unsupervised/self-supervised/predictive learning, we’re not going to make significant progress because I think that’s the key to learning enough background knowledge about the world so that common sense will emerge. That’s the main hurdle. There are more technical subproblems of this that I can’t get into, like prediction under uncertainty, but that’s the main thing.
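
  (To make the idea concrete: in self-supervised or predictive learning, the training signal comes from the data itself rather than from human labels. The toy sketch below is purely illustrative and not a method described here; it simply predicts the next value of a signal from its recent past.)

    # Minimal sketch of self-supervised predictive learning (illustrative only):
    # the "labels" are just future observations taken from the data itself.
    import numpy as np

    rng = np.random.default_rng(0)
    signal = np.sin(np.linspace(0, 20, 500)) + 0.1 * rng.standard_normal(500)

    k = 5  # number of past observations used to predict the next one
    X = np.stack([signal[i:i + k] for i in range(len(signal) - k)])
    y = signal[k:]  # targets come "for free" from the same sequence

    # Fit a linear predictor by least squares (a stand-in for a learned model).
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    pred = X @ w
    print("mean squared prediction error:", np.mean((pred - y) ** 2))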

  How long is it going to take before we figure out a way to train machines so that they learn how the world works by watching YouTube videos? That’s not entirely clear. We could have a breakthrough in two years that might take another 10 years to actually make it work, or it might take 10 or 20 years. I have no idea when it will happen, but I do know it has to happen.

  That’s just the first mountain we have to climb, and we don’t know how many mountains are behind it. There might be other huge issues and major questions that we do not see yet because we haven’t been there yet and it’s unexplored territory.

  It will probably take 10 years before we find this kind of breakthrough and before it has some consequence in the real world, and that has to happen way before we reach human-level artificial general intelligence. The question is, once we clear this hurdle, what other problems are going to pop up?

  How much prior structure do we need to build into those systems for them to actually work appropriately and be stable, and for them to have intrinsic motivations so that they behave properly around humans? There’s a whole lot of problems that will absolutely pop up, so AGI might take 50 years, it might take 100 years, I’m not too sure.

  MARTIN FORD: But you think it’s achievable?

  YANN LECUN: Oh, definitely.

  MARTIN FORD: Do you think it’s inevitable?

  YANN LECUN: Yes, there’s no question about that.

  MARTIN FORD: When you think of an AGI, would it be conscious, or could it be a zombie with no conscious experience at all?

  YANN LECUN: We don’t know what that means. We have no idea what consciousness is. I think it’s a non-problem. It’s one of those questions that in the end, when you realize how things actually work, you realize that question was immaterial.

  Back in the 17th century when people figured out that the image in the back of the eye on the retina forms upside down, they were puzzled by the fact that we see right-side up. When you understand what kind of processing is required after this, and that it doesn’t really matter in which order the pixels come, you realize it’s kind of a funny question because it doesn’t make any sense. It’s the same thing here. I think consciousness is a subjective experience and it could be a very simple epiphenomenon of being smart.

  There are several hypotheses for what causes this illusion of consciousness—because I think it is an illusion. One possibility is that we have essentially a single engine in our prefrontal cortex that allows us to model the world, and a conscious decision to pay attention to a particular situation configures that model of the world for the situation at hand.

  The conscious state is sort of an important form of attention, if you will. We may not have the same conscious experience if our brain were ten times the size and we didn’t have a single engine to model the world, but a whole bunch of them.

  MARTIN FORD: Let’s talk about some of the risks associated with AI. Do you believe that we’re on the cusp of a big economic disruption with the potential for widespread job losses?

  YANN LECUN: I’m not an economist, but I’m obviously interested in those questions, too. I’ve talked to a bunch of economists, and I’ve attended a number of conferences with a whole bunch of very famous economists who were discussing those very questions. First of all, what they say is that AI is what they call a general-purpose technology, or GPT for short. What that means is that it’s a piece of technology that will diffuse into all corners of the economy and transform pretty much how we do everything. I’m not saying this; they are saying this. If I were saying this, I would sound self-serving or arrogant, and I would not repeat it unless I had heard it from other people who know what they’re talking about. So, they’re saying this, and I didn’t really realize that this was the case before I heard them say it. They say this is something on the scale of electricity, the steam engine, or the electric motor.

  One thing I’m worried about, and this was before talking to the economists, is the problem of technological unemployment: the idea that technology progresses so rapidly that the skills required by the new economy are not matched by the skills of the population. A whole segment of the population suddenly doesn’t have the right skills, and it’s left behind.

  You would think that as technological progress accelerates, there’d be more and more people left behind, but what the economists say is that the speed at which a piece of technology disseminates in the economy is actually limited by the proportion of people who are not trained to use it. In other words, the more people are left behind, the less quickly the technology can diffuse in the economy. It’s interesting because it means that the evil has kind of a self-regulating mechanism in it. We’re not going to have widely disseminated AI technology unless a significant proportion of the population is trained to actually take advantage of it, and the example they use to demonstrate this is computer technology.

  Computer technology popped up in the 1960s and 1970s but did not have an impact on productivity in the economy until the 1990s, because it took that long for people to get familiar with keyboards, mice, etc., and for software and computers to become cheap enough to have mass appeal.

  MARTIN FORD: I think there is a question of whether this time is different relative to those historical cases, because machines are taking on cognitive capability now.

  You now have machines that can learn to do a lot of routine, predictable things, and a significant percentage of our workforce is engaged in things that are predictable. So, I think the disruption could turn out to be bigger this time than what we’ve seen in the past.

  YANN LECUN: I don’t actually think that’s the case. I don’t think that we’re going to face mass unemployment because of the appearance of this technology. I think certainly the economic landscape is going to be vastly different in the same way that 100 years ago most of the population were working in the fields, and now it’s 2% of the population.

  Certainly, over the next several decades, you’re going to see this kind of shift and people are going to have to retrain for it. We’ll need some form of continuous learning, and it’s not going to be easy for everyone. I don’t believe, though, that we’re going to run out of jobs. I heard an economist say, “We’re not going to run out of jobs because we’re not going to run out of problems.”

  The upcoming AI systems are going to be an amplification of human intelligence in the way that mechanical machines have been an amplification of physical strength. They’re not going to be a replacement. It’s not as if radiologists will be out of a job just because AI systems that analyze MRI images are better at detecting tumors. It’s going to be a very different job, and a much more interesting one. They’re going to spend their time doing more interesting things, like talking to patients instead of staring at screens for 8 hours a day.

  MARTIN FORD: Not everyone’s a doctor, though. A lot of people are taxi drivers or truck drivers or fast food workers and they may have a harder time transitioning.

  YANN LECUN: What’s going to happen is that the value of things and services is going to change. Everything that’s done by a machine is going to get a lot cheaper, and anything that’s done by humans is going to get more expensive. We’re going to pay more for authentic human experience, and the stuff that can be done by machine is going to get cheap.

  As an example, you can buy a Blu-ray player for $46. If you think about how much incredibly sophisticated technology goes into a Blu-ray player, it’s insane that it costs $46. It’s got technology in the form of blue lasers that didn’t exist 20 years ago. It’s got an incredibly precise servo mechanism to drive the laser to microns of precision. It’s also got H.264 video compression and superfast processors. There is a ridiculous amount of technology that goes in there, and it’s $46 because it’s essentially mass-produced by machines. Now, go on the web and search for a handmade ceramic salad bowl, and the first couple of hits you’re going to get are going to offer a handmade ceramic bowl, a 10,000-year-old technology, for something in the region of $500. Why $500? Because it’s handmade and you’re paying for the human experience and the human connection. You can download a piece of music for a buck, but if you want to go to a show where that music is played live, it’s going to be $200. That’s for the human experience.

  The value of things is going to change, with more value placed on human experience and less on things that are automated. A taxi ride is going to be cheap because it can be driven by an AI system, but a restaurant where an actual person serves you, or where an actual human cook creates something, is going to be more expensive.

  MARTIN FORD: That does presume that everyone’s got a skill or talent that’s marketable, which I’m not sure is true. What do you think of the idea of a universal basic income as a way to adapt to these changes?

  YANN LECUN: I’m not an economist, so I don’t have an informed opinion on this, but every economist I talked to seemed to be against the idea of a universal basic income. They all agree that as a result of increased inequality brought about by technological progress, some measures have to be taken by governments to compensate. All of them believe this has to do with fiscal policy, in the form of taxation and wealth and income redistribution.

  This income inequality is something that is particularly apparent in the US, but also, to a lesser degree, in Western Europe. The Gini index—a measure of income inequality—of France or Scandinavia is around 25 or 30. In the US, it’s 45, and that’s the same level as third-world countries. Erik Brynjolfsson, an economist at MIT, wrote a couple of books with his MIT colleague Andrew McAfee studying the impact of technology on the economy. They say that the median income of a household in America has been flat since the 1980s, when we had Reaganomics and the lowering of taxes on higher incomes, whereas productivity has gone up more or less continuously. None of that occurred in Western Europe. So, it’s purely down to fiscal policy. It’s maybe fueled by technological progress, but there are easy things that governments can do to compensate for the disruption, and they’re just not doing it in the US.
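
  (For reference, the Gini index quoted above can be computed directly from an income distribution. The sketch below is purely illustrative; the income figures are invented.)

    # Minimal sketch: computing a Gini index (0 = perfect equality, 100 = maximal
    # inequality) from a list of household incomes. The incomes below are invented
    # purely for illustration.
    import numpy as np

    def gini(incomes):
        x = np.sort(np.asarray(incomes, dtype=float))
        n = len(x)
        cum = np.cumsum(x)
        # Standard formula based on the ordered cumulative shares.
        g = (n + 1 - 2 * np.sum(cum) / cum[-1]) / n
        return 100 * g  # expressed on the 0-100 scale used in the text

    print(gini([20, 30, 40, 50, 200]))   # skewed distribution -> higher index (~45)
    print(gini([60, 62, 65, 68, 70]))    # flat distribution   -> lower index (~3)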

  MARTIN FORD: What other risks are there, beyond the impact on the job market and economy, that come coupled with AI?

  YANN LECUN: Let me start with one thing we should not worry about, the Terminator scenario. This idea that somehow we’ll come up with the secret to artificial general intelligence, and that we’ll create a human-level intelligence that will escape our control and all of a sudden robots will want to take over the world. The desire to take over the world is not correlated with intelligence, it’s correlated with testosterone.

  We have a lot of examples today in American politics, clearly illustrating that the desire for power is not correlated with intelligence.

  MARTIN FORD: There is a pretty reasoned argument, though, that Nick Bostrom, in particular, has raised. The problem is not an innate need to take over the world, but rather that an AI could be given a goal and then it might decide to pursue that goal in a way that turns out to be harmful to us.

  YANN LECUN: So, somehow we’re smart enough to build artificial general intelligence machines, then the first thing we do is tell them to build as many paper clips as they can and they turn the entire universe into paper clips? That sounds unrealistic to me.

  MARTIN FORD: I think Nick intends that as kind of a cartoonish example. Those kinds of scenarios all seem far-fetched, but if you are truly talking about superintelligence, then you would have a machine that might act in ways that would be incomprehensible to us.

  YANN LECUN: Well, there is the issue of objective function design. All of those scenarios assume that somehow, you’re going to design the objective function—the intrinsic motivations—of those machines in advance, and that if you get it wrong, they’re going to do crazy things. That’s not the way humans are built. Our intrinsic objective functions are not hardwired. A piece of it is hardwired in a sense that we have the instinct to eat, breathe, and reproduce, but a lot of our behavior and value system is learned.

  We can very much do the same with machines: their value system is going to be trained, and we’re going to train them to essentially behave in society and be beneficial to humanity. It’s not just a problem of designing those functions but also of training them, and it’s much easier to train an entity to behave. We do it with our kids to educate them in what’s right and wrong, and if we know how to do it with kids, why wouldn’t we be able to do this with robots or AI systems?
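
  (A loose illustration of training a value system rather than hardwiring it, with entirely invented features and labels: fit a scoring function to examples of behavior marked acceptable or not, and use the learned score as an objective. This is only a sketch of the general idea, not anything described in the conversation.)

    # Loose illustration (invented example): learn a "value" score from labeled
    # behaviors instead of hand-coding rules. Features and labels are made up.
    import numpy as np

    rng = np.random.default_rng(2)
    # Each row describes a candidate action; the two columns are arbitrary
    # features, and labels mark the action as acceptable (1) or not (0).
    X = rng.standard_normal((200, 2))
    y = (X[:, 0] - X[:, 1] > 0).astype(float)   # toy ground-truth preference

    w = np.zeros(2)
    for _ in range(500):                        # logistic regression by gradient descent
        p = 1 / (1 + np.exp(-X @ w))
        w -= 0.1 * X.T @ (p - y) / len(y)

    def value(action_features):
        """Learned objective: higher means the action looks more acceptable."""
        return 1 / (1 + np.exp(-np.asarray(action_features) @ w))

    print(value([1.0, -1.0]), value([-1.0, 1.0]))  # the preferred action scores higher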

  Clearly, there are issues there, but it’s a bit like we haven’t invented the internal combustion engine yet and we are already worrying that we’re not going to be able to invent the brake and the safety belt. The problem of inventing the internal combustion engine is considerably more complicated than inventing brakes and safety belts.

  MARTIN FORD: What do you think of the fast takeoff scenario, where you have recursive improvement that happens at an extraordinary rate, and before you know it, we’ve got something that makes us look like a mouse or an insect in comparison?

  YANN LECUN: I absolutely do not believe in that. Clearly there’s going to be continuous improvement, and certainly, the more intelligent machines become, the more they’re going to help us design the next generation. It’s already the case, and it’s going to accelerate.

  There is some sort of differential equation that governs the progress of technology, the economy, consumption of resources, communication, the sophistication of technology, and all that stuff. There’s a whole bunch of friction terms in this equation that are completely ignored by the proponents of the singularity or fast takeoff. Every physical process at some point has to saturate, by exhausting resources if nothing else. So, I don’t believe in a fast takeoff. It’s a fallacy that someone will figure out the secret to AGI, and then all of a sudden we’re going to go from machines that are as intelligent as a rat to some that are as intelligent as an orangutan, and then a week later they are more intelligent than us, and a month later, way more intelligent.
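
  (A toy version of the saturation argument, not an equation from the conversation: growth that is proportional to current capability but damped by a resource limit follows a logistic curve, which rises quickly and then levels off instead of diverging. The growth rate and capacity values below are arbitrary.)

    # Toy illustration: capability that grows in proportion to itself but is
    # damped by a resource limit follows a logistic curve -- fast early growth
    # that saturates rather than running away.
    r, K = 0.5, 100.0   # growth rate and carrying capacity (arbitrary values)
    x, dt = 1.0, 0.1    # initial capability and time step

    trajectory = []
    for step in range(200):
        x += dt * r * x * (1 - x / K)   # logistic ODE: dx/dt = r*x*(1 - x/K)
        trajectory.append(x)

    # Approaches K but never exceeds it.
    print(round(trajectory[50], 1), round(trajectory[199], 1))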

  There’s also no reason necessarily to believe that being way more intelligent than a single human will allow a machine to be completely superior to a single human. Humans can get killed by viruses that are extremely stupid, but they are specialized to kill us.

  If we can build an artificial intelligence system that has general intelligence in that sense, then we can probably also build a more specialized intelligence designed to destroy the first one. It would be much more efficient at killing the AGI because more specialized machines are more efficient than general ones. I just think that every issue has its own solution built in.

  MARTIN FORD: So, what should we legitimately be worried about in the next decade or two?

  YANN LECUN: Economic disruption is clearly an issue. It’s not an issue without a solution, but it’s an issue with considerable political obstacles, particularly in cultures like the US where income and wealth redistribution are not something that’s culturally accepted. There is an issue of disseminating the technology so that it doesn’t only profit the developed world, but it’s shared across the world.

  There is a concentration of power. Currently, AI research is very public and open, but it’s widely deployed by a relatively small number of companies at the moment. It’s going to take a while before it’s used by a wider swath of the economy, and that’s a redistribution of the cards of power. That will affect the world in some ways; it may be positive, but it may also be negative, and we need to ensure that it’s positive.

  I think the acceleration of technological progress and the emergence of AI is going to prompt governments to invest more massively into education, particularly continuous education because people are going to have to learn new jobs. That’s a real aspect of the disruption that needs to be dealt with. It’s not something that doesn’t have a solution, it’s just a problem that people have to realize exists in order for them to solve it.

  If you have a government that doesn’t even believe in established scientific facts like global warming, how can they believe in this kind of stuff? There are a lot of issues of this type, including in the area of bias and equity. If we use supervised learning to train our systems, they’re going to reflect the biases that are in the data, so how can you make sure they don’t perpetuate the status quo in terms of biases?

  MARTIN FORD: The problem there is that the biases are encapsulated in the data so that a machine learning algorithm would naturally acquire them. One would hope that it might be much easier to fix bias in an algorithm than in a human.

  YANN LECUN: Absolutely. I’m actually quite optimistic in that dimension because I think it would indeed be a lot easier to reduce bias in a machine than it currently is with people. People are biased in ways that are extremely difficult to fix.
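
  (One simple illustration of measuring such bias, with made-up data and an invented decision threshold: compare a model’s positive-decision rate across two groups, a basic “demographic parity” check. This is only one of many possible fairness measures.)

    # Minimal sketch of a fairness check (illustrative; data and threshold invented):
    # compare a model's positive-decision rate across two groups in the data.
    import numpy as np

    rng = np.random.default_rng(1)
    group = rng.integers(0, 2, size=1000)       # 0 or 1, e.g. two demographic groups
    score = rng.random(1000) + 0.1 * group      # model scores skewed toward group 1
    decision = score > 0.55                     # the model's "approve" decision

    rate_0 = decision[group == 0].mean()
    rate_1 = decision[group == 1].mean()
    print(f"positive rate, group 0: {rate_0:.2f}")
    print(f"positive rate, group 1: {rate_1:.2f}")
    print(f"demographic parity gap: {abs(rate_0 - rate_1):.2f}")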

  MARTIN FORD: Do you worry about military applications, like autonomous weapons?

  YANN LECUN: Yes and no. Yes, because of course AI technology can be used for building weapons, but some people, like Stuart Russell, have characterized a potential new generation of AI-powered weapons as weapons of mass destruction, and I completely disagree with that.

  I think the way that militaries are going to use AI technology is exactly the opposite. It’s for what the military calls surgical actions. You don’t drop a bomb that destroys an entire building; you send in a drone that just puts the person you are interested in capturing to sleep. It could be non-lethal.

 
