
Architects of Intelligence


by Martin Ford


  At this point I was quite far into my PhD, but I walked into Rod’s office on that day, and I said, “I have to change everything I’m doing about my PhD. My PhD has got to be about robots and the lives of everyday people; it’s got to be about robots being socially and emotionally intelligent.” To his credit, Rod understood that this was a really important way to think about these problems and that it was going to be key to having robots become part of our everyday lives, so he let me go for it.

  From that point, I built a whole new robot, Kismet, which is recognized as the world’s first social robot.

  MARTIN FORD: I know Kismet is now in the MIT museum.

  CYNTHIA BREAZEAL: Kismet was really the beginning of it. It’s the robot that started this field of interpersonal human-robot social interaction, collaboration, and partnership, much more akin to the droids in Star Wars. I knew I could not build an autonomous robot that could rival adult social and emotional intelligence, because we are the most socially and emotionally sophisticated species on the planet. The question was what kind of entity I could model. I was coming from a lab that was very biologically inspired, and the only entities that exhibit this behavior are living things, mainly people. So, I thought the place to start was the infant-caregiver relationship: where does our sociability originate, and how does it develop over time? Kismet modeled that nonverbal, emotive communication at the infant stage, because if a baby cannot form an emotional bond with its caregiver, the baby can’t survive. The caregiver has to sacrifice and do many things in order to care for an infant.

  Part of our survival mechanism is to be able to form this emotional connection and to have enough sociability there that the caregiver—the mother, the father, or whoever—is compelled to treat the newborn or young infant as a fully-fledged social and emotional being. Those interactions are critical to us actually developing true social and emotional intelligence; it’s a whole bootstrapping process. That’s another moment of just acknowledging that even human beings, with all of our evolutionary endowments, don’t develop these capabilities if we don’t grow up in the right kind of social environment.

  It became a really important intersection: it’s not only about what you program in and endow an AI robot with; you also have to think deeply about social learning and about how you create behaviors in the entity so that people will treat it as a social, emotionally responsive entity that they can empathize with and form that connection with. It’s from those interactions that the robot can develop and grow, going through another developmental trajectory toward full adult social and emotional intelligence.

  That was always the philosophy, which is why Kismet was modeled not literally as a baby, but as an altricial creature. I remember reading a lot of animation literature too, which raised questions like: how do you design something that pulls on those social, emotive, nurturing instincts within people, so that people would interact with Kismet in a subconscious way and nurture it naturally? Every aspect of the robot’s design, from its quality of movement to its appearance and its vocal quality, was about trying to create the right social environment that would allow the robot to engage, interact, and eventually learn and develop.

  In the early 2000s, a lot of the work was in understanding the mechanics of interpersonal interaction and how people really communicate, not just verbally but importantly nonverbally. A huge part of human communication is nonverbal, and a lot of our social judgments of trustworthiness and affiliation, etc., are heavily influenced by our nonverbal interaction.

  When you look at voice assistants today, the interaction is very transactional; it feels a lot like playing chess. I say something, the machine says something, I say something, the machine says something, and so on. When you look at human interpersonal interaction, the developmental psychology literature talks about the “dance of communication.” The way we communicate is constantly mutually adapted and regulated between the participants; it’s a subtle, nuanced dance. First, I’m influencing the listener, and while I’m talking and gesturing the listener is providing me nonverbal cues in dynamic relation to my own. All the while, their cues are influencing me and shaping my inferences about how the interaction is going, and vice versa. We’re a dynamically coupled, collaborative duo. That’s what human interaction and human communication really is, and a lot of the early work was trying to capture that dynamic and appreciating how critical the nonverbal aspects were, as well as the linguistic side of it.

  The next phase was to actually create an autonomous robot that could collaborate with people in this interpersonal way, still pushing on the social and emotional intelligence and the theory of other minds, now to do cooperative activities. In AI we have this habit of thinking that because a competence is easy for us as humans, since we’ve evolved to do it, it must not be that hard for a machine; but actually, we are the most socially and emotionally sophisticated species on the planet. Building social and emotional intelligence into machines is very, very hard.

  MARTIN FORD: And also, very computationally challenging?

  CYNTHIA BREAZEAL: Right. Arguably more so than a lot of other capabilities, like vision or manipulation, when we think about how sophisticated we are. The machine has to dovetail its intelligence and behavior with our own. It has to be able to infer and predict our thoughts, intents, beliefs, desires, and so on from context: what we do, what we say, our pattern of behavior over time. What if you can build a machine that can engage people in this partnership where it doesn’t have to be about physical work or physical assistance, but instead is about assistance and support in the social and emotional domains? We started looking at new applications where these intelligent machines could have a profound impact, like education, behavior change, wellness, coaching, and aging, areas people hadn’t even thought about yet because they’re so hung up on the physical aspect of physical work.

  When you start to look at areas where social and emotional support is known to be really important, these are often areas of growth and transformation for the people involved. If the task of the robot isn’t just to get a thing built, what if the thing you’re actually trying to help improve or build is the person themselves? Education is a great example. If you can learn something new, you are transformed. You are able to do things that you could not do otherwise, and you have opportunities now available to you that you didn’t have otherwise. Aging in place or managing chronic disease are other examples. If you can stay healthier, your life is transformed because you’re going to be able to do things and access opportunities you would not have been able to otherwise.

  Social robots broaden the relevance and application of robotics to huge areas of social significance beyond manufacturing and autonomous cars. Part of my life’s work is trying to show people that physical competence is one dimension, but orthogonal to that, and critically important, is the ability of these machines to interact, engage, and support people in a way that unlocks our human potential. In order to do that, you need to be able to engage all of our ways of thinking and understanding the world around us. We are a profoundly social and emotional species, and it’s really critical to engage and support those other aspects of human intelligence in order to unlock human potential. The work within the social robotics community has been focused on those huge impact areas.

  We’re only now starting to see an appreciation that robots and AI that work collaboratively with people are actually really important. For a long, long time, human-AI or human-robot collaboration was not widely seen as a problem we had to figure out, but now I think that’s changed.

  Now that we’re seeing the proliferation of AI impacting so many aspects of our society, people are appreciating that this field of AI and robotics is no longer just a computer science or engineering endeavor. The technology has come into society in a way that we have to think much more holistically around the societal integration and impact of these technologies.

  Look at a robot like Baxter, built by Rethink Robotics. It’s a manufacturing robot that’s designed to collaborate with humans on the assembly line, not to be roped off far from people but to work shoulder-to-shoulder with them. In order to do that, Baxter has a face so that coworkers can anticipate, predict, and understand what the robot’s likely to do next. Its design supports our theory of mind so that we can collaborate with it. We can read those nonverbal cues in order to make those assessments and predictions, and so the robot has to support that human way of understanding so that we can dovetail our actions and our mental states with those of the machine, and vice versa. I would say Baxter is a social robot; it just happens to be a manufacturing social robot. I think we’ll have broad genres of robots that will be social, which means they’re able to collaborate with people, but they may do a wide variety of tasks, from education and healthcare to manufacturing and driving, and many others. I see it as a critical kind of intelligence for any machine that is meant to coexist with human beings in a human-centered way that dovetails with the way we think and the way we behave. It doesn’t matter what the physical task or capabilities of the machine are; if it’s collaborative, it is also a social robot.

  We’re seeing a wide variety of robots being designed today. They’re still going into the oceans and onto manufacturing lines, but now we’re also seeing these other kinds of robots coming into human spaces, in education and in therapeutic applications for autism, for instance. It’s worth remembering, though, that the social aspect is also really hard. There’s still a long way to go in improving and enhancing the social and emotional collaborative intelligence of this kind of technology. Over time, we’ll see combinations of the social and emotional intelligence with the physical intelligence; I think that’s just logical.

  MARTIN FORD: I want to ask you about progress toward human-level AI or AGI. First of all, do you think it’s a realistic objective?

  CYNTHIA BREAZEAL: I think the question actually is, what is the real-world impact we want to achieve? There is the scientific question and challenge of wanting to understand human intelligence, and one way of trying to understand human intelligence is to model it, put it into technologies that can be manifested in the world, and try to understand how well the behavior and capabilities of these systems mirror what people do.

  Then, there’s the real-world application question of what value these systems are supposed to be bringing to people. For me, the question has always been about how you design these intelligent machines that dovetail with people—with the way we behave, the way we make decisions, and the way we experience the world—so that by working together with these machines we can build a better life and a better world. Do these robots have to be exactly human to do that? I don’t think so. We already have a lot of people. The question is: what’s the synergy, what’s the complementarity, what’s the augmentation that allows us to extend our human capabilities and really have greater impact in the world?

  That’s my own personal interest and passion: understanding how you design for the complementary partnership. It doesn’t mean I have to build robots that are exactly human. In fact, I feel I have already got the human part of the team, and now I’m trying to figure out how to build the robot part of the team that can actually enhance the human part. As we do these things, we have to think about what people need in order to live fulfilling lives and to feel that there’s upward mobility and that they and their families can flourish and live with dignity. So, however we design and apply these machines, it needs to be done in a way that supports both our ethical and human values. People need to feel that they can contribute to their community. You don’t want machines that do everything, because that’s not going to allow for human flourishing. If the goal is human flourishing, that gives some pretty important constraints on the nature of the relationship and the collaboration needed to make that happen.

  MARTIN FORD: What are some of the breakthroughs that need to take place in order to reach AGI?

  CYNTHIA BREAZEAL: What we know how to do today is build special-purpose AI that, with sufficient human expertise, we can craft, hone, and polish so that it can exceed human intelligence within narrow domains. Those AIs, however, can’t do multiple things that require fundamentally different kinds of intelligence. We don’t know how to build a machine that can develop in the same way as a child and grow and expand its intelligence in an ongoing way.

  We have had some recent breakthroughs with deep learning, which is a supervised learning method. People learn in all kinds of ways, though. We haven’t seen the same breakthrough in machines that can learn from real-time experience. People can learn from very few examples and generalize. We don’t know how to build machines that can do that. We don’t know how to build machines that have human-level common sense. We can build machines that can have knowledge and information within domains, but we don’t know how to do the kind of common sense we all take for granted. We don’t know how to build a machine with deep emotional intelligence. We don’t know how to build a machine that has a deep theory of mind. The list goes on. There’s a lot of science to be done, and in the process of trying to figure these things out we’re going to come to a deeper appreciation and understanding of how we are intelligent.

  MARTIN FORD: Let’s talk about some of the potential downsides, the risks and the things we should legitimately worry about.

  CYNTHIA BREAZEAL: The real risks right now that I see have to do with people with nefarious intents using these technologies to hurt people. I am not nearly as concerned about superintelligence enslaving humanity as I am about people using the technology to do harm. AI is a tool, and you can apply it to benefit and help people, but also to hurt people or to privilege one group of people over others. There’s a lot of legitimate concern around privacy and security because that’s tied to our freedom. There is a lot of concern around democracy: what do you do when fake news and bots proliferate falsehoods, and people are struggling to understand what’s true and to have common ground? Those are very real risks. There are real risks around autonomous weapons. There’s also a question of a growing AI gap, where AI exacerbates the divide instead of closing it. We need to start working on making AI far more democratized and inclusive so we have a future where AI can truly benefit everyone, not just a few.

  MARTIN FORD: But are superintelligence and the alignment or control problem ultimately real concerns, even if they lie far in the future?

  CYNTHIA BREAZEAL: Well, you have to then really get down to the brass tacks of what you mean by superintelligence, because it could mean a lot of different things. If it is a superintelligence, why are we assuming that the same evolutionary forces that drove the creation of our motivations and drives would produce anything like those of the superintelligence? A lot of the fear I hear is basically mapping onto AI the human baggage that we evolved with to survive in a hostile, complex world with competitive others. Why assume that a superintelligence is going to be saddled with the same machinery? It’s not human, so why would it be?

  What are the practical driving forces to create that? Who’s going to build it, and why? Who’s going to invest the time, effort, and money? Will it be universities or will it be corporations? You’ve got to think about the practicalities of what societal and economic drivers would lead to the creation of something like that. It’s going to require enormous amounts of talent, funding, and people, all of which could be working on something else important.

  MARTIN FORD: There is definitely a lot of interest. People like Demis Hassabis at DeepMind are definitely interested in building AGI, or at least getting much closer to it. It’s their stated goal.

  CYNTHIA BREAZEAL: People may be interested in building it, but where are the resources, time, and talent coming from at massive scale? My question is, what are the actual societal driving conditions and forces that would lead to the investment necessary to create that versus what we see now? I’m just asking a very practical question. Think about what the path is given the amount of investment it’s going to take to get there. What is the driver that’s going to lead to that? I don’t see the motivation of agencies or entities to fund what it’s going to take to achieve real superhuman AGI right now.

  MARTIN FORD: One potential driver of interest and investment might be the perceived AI arms race with China, and perhaps other countries as well. AI does have applications in the military and security space, so is that a concern?

  CYNTHIA BREAZEAL: I think we’re always going to be in a race with other countries around technology and resources; that’s just the way it is. That doesn’t necessarily lead to general-purpose intelligence. Everything you’ve just said wouldn’t necessarily require general intelligence; those systems could be broader and more flexible, but still bounded AI.

  All I’m pushing on is the distinction between the general superintelligence thing and the driving forces at work right now among the entities that can fund the work, the people, and the talent to work on those problems. I see much more reason and rationale for the more domain-bounded aspects of AI than for true general superintelligence. Certainly, within academia and research, people are absolutely very interested in creating that, and people will continue to work on it. But when you get down to the brass tacks of resources, time, talent, and patience for a very long-term commitment, it’s not obvious to me who’s going to push that forward in a very practical sense, just by the nature of who’s going to provide those resources. I don’t see that yet.

 
