
Emotional Design

by Donald A. Norman


  CHAPTER SEVEN

  The Future of Robots

  SCIENCE FICTION CAN BE a useful source of ideas and information, for it is, in essence, detailed scenario development. Writers who have used robots in their stories have had to imagine in considerable detail just how they would function within everyday work and activities. Isaac Asimov was one of the earliest thinkers to explore the implications of robots as autonomous, intelligent creatures, equal (or superior) in intelligence and abilities to their human masters. Asimov wrote a sequence of novels analyzing the difficulties that would arise if autonomous robots populated the earth. He realized that a robot might inadvertently harm itself or others, either through its actions or, at times, through its lack of action. He therefore developed a set of postulates that might prevent these problems; but, as he did so, he also realized that they were often in conflict with one another. Some conflicts were simple: given a choice between preventing harm to itself or to a human, the robot should protect the human. But other conflicts were much more subtle, much more difficult. Eventually, he postulated three laws of robotics (laws one, two, and three) and wrote a sequence of stories to illustrate the dilemmas that robots would find themselves in, and how the three laws would allow them to handle these situations. These three laws dealt with the interaction of robots and people, but as his story line progressed into more complex situations, Asimov felt compelled to add an even more fundamental law dealing with the robots’ relationship to humanity itself. This one was so fundamental that it had to come first; but, because he already had a law labeled First, this fourth law had to be labeled Zeroth.

  Asimov’s vision of people and of the workings of industry was strangely crude. It was only his robots that behaved well. When I reread his books in preparation for this chapter, I was surprised at the discrepancy between my fond memories of the stories and my response to them now. His people are rude, sexist, and naïve. They seem unable to converse unless they are insulting each other, fighting, or jeering. His fictional company, the U.S. Robots and Mechanical Men Corporation, doesn’t fare well either. It is secretive, manipulative, and has no tolerance for error: make one mistake and the company will fire you. Asimov spent his entire life in a university. Maybe that is why he had such a weird view of the real world.

  Nonetheless, his analysis of the reaction of society to robots—and of robots to humans—was interesting. He thought society would turn against robots; and, indeed, he wrote that “most of the world governments banned robot use on earth for any purpose other than scientific research between 2003 and 2007.” (Robots, however, were allowed for space exploration and mining; and in Asimov’s stories, these activities are widely deployed in the early 2000s, allowing the robot industry to survive and grow.) The Laws of Robotics are intended to reassure humanity that robots will not be a threat and will, moreover, always be subservient to humans.

  Today, even our most powerful and functional robots are far from the stage of Asimov’s robots. They do not operate for long periods without human control and assistance. Even so, the laws are an excellent tool for examining just how robots and humans should interact.

  Asimov’s Four Laws of Robotics

  Zeroth Law: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

  First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate the Zeroth Law of Robotics.

  Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the Zeroth or First Law.

  Third Law: A robot must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law.

  Many machines already have key aspects of the laws hard-wired into them. Let’s examine how these laws are implemented.

  The Zeroth Law—that “a robot may not injure humanity, or, through inaction, allow humanity to come to harm”—is beyond current capability, for much the same reasons that Asimov did not need this law in his early stories: to determine just when an action—or lack of action—will harm all humanity requires truly sophisticated judgment, probably beyond the abilities of most people.

  The first law—that “a robot may not injure a human being, or, through inaction, allow a human being to come to harm, unless this would violate the Zeroth Law of Robotics”—could be labeled “safety.” It isn’t legal, let alone proper, to produce things that can hurt people. As a result, all machines today are designed with multiple safeguards to minimize the likelihood that their actions can cause harm, and liability laws guarantee that robots—and machines in general—are outfitted with these safeguards. Industrial and home robots have proximity and collision sensors. Even simple machines such as elevators and garage doors have sensors that stop them from closing on people. Today’s robots try to avoid bumping into people or objects. Lawn mower and vacuum cleaner robots have sensing mechanisms that cause them to stop or back away whenever they bump into anything or come too close to an edge, such as a stairway. Industrial robots are often fenced off, so that people can’t get near them when they are operating. Some have people detectors, so they stop when they detect someone nearby. Home robots have many mechanisms to minimize the chance of damage; but at the moment, most of them are so underpowered that they couldn’t hurt anyone even if they tried. Moreover, the lawyers are very careful to guard against potential damage. One company sells a home robot that can be used to teach children by reading books to them and that can also serve as a home sentinel, wandering about the house, taking photographs of unexpected encounters and notifying its owners, by email if necessary (through its wireless internet connection, attaching the photographs along with the message, of course). Despite these intended applications, the robot comes with stern instructions that it is not to be used near children, nor is it to be left unattended in the house.

  A lot of effort has gone into implementation of the safety provision of the first law. Most of this work can be thought of as applying to the visceral level, where fairly simple mechanisms are used to shut down the system if safety regulations are violated.
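  To make the visceral-level point concrete, here is a minimal sketch in Python of such a safety interlock. It is my own illustration, not drawn from any real product; the sensor names and the clearance threshold are hypothetical. The point is only that the mechanism is a simple, immediate shutdown, with no reasoning about consequences.

    # A minimal sketch of a visceral-level safety interlock: hypothetical
    # sensor readings gate the machine's motion, and any violation simply
    # stops the actuator rather than reasoning about consequences.

    def is_safe_to_move(proximity_cm: float, bump: bool, edge: bool) -> bool:
        MIN_CLEARANCE_CM = 30.0            # assumed clearance threshold
        if bump or edge:
            return False                   # collision or stairway edge: stop at once
        if proximity_cm < MIN_CLEARANCE_CM:
            return False                   # person or object too close: stop
        return True

    def control_step(sensors: dict) -> str:
        if not is_safe_to_move(sensors["proximity_cm"], sensors["bump"], sensors["edge"]):
            return "STOP"                  # simple shutdown, no planning involved
        return "CONTINUE"

    # A vacuum robot approaching a stairway edge halts immediately.
    print(control_step({"proximity_cm": 120.0, "bump": False, "edge": True}))  # STOP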

  The second part of the law—do not allow harm through inaction—is quite difficult to implement. If determining how a machine’s actions might affect people is difficult, trying to determine how the lack of an action might have an impact is even more so. This would be a reflective level implementation, for the robot would have to do considerable analysis and planning to determine when lack of action would lead to harm. This is beyond the capabilities of most machines today.

  Despite the difficulties, some simple solutions to the problem do exist. Many computers are plugged into “uninterruptible power supplies” to avoid loss of data in cases of power failure. If the power failed and no action were taken, harm would occur; instead, when the power fails, the power supply springs into action, switching to batteries and converting the battery voltage to the form the computer requires. It can also be set to notify people and to turn off the computer gracefully. Other safety systems are designed to act when normal processes have failed. Some automobiles have internal sensors that watch over the path of the car, adjusting engine power and braking to ensure that the auto keeps going as intended. Automatic speed control mechanisms attempt to keep a safe distance from the car in front, and lane-change detectors are under investigation. All of these devices safeguard car and passengers when inaction would lead to an accident.
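  As a rough illustration of acting to prevent “harm through inaction,” the monitoring loop of an uninterruptible power supply can be sketched as follows. This is my own simplification, and the callback names (switch_to_battery, notify_owner, and so on) are placeholders rather than any real product’s interface.

    import time

    def ups_monitor(mains_ok, battery_level, switch_to_battery,
                    notify_owner, shutdown_gracefully, low_battery=0.15):
        """Poll mains power; on failure, act rather than allow harm by inaction."""
        on_battery = False
        while True:
            if not mains_ok():
                if not on_battery:
                    switch_to_battery()            # keep the computer running
                    notify_owner("Power failed; running on battery.")
                    on_battery = True
                if battery_level() < low_battery:
                    shutdown_gracefully()          # save work before the battery dies
                    return
            else:
                on_battery = False
            time.sleep(1.0)                        # check roughly once per second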

  Today, these devices are simple and the mechanisms are built in. Still, one can see the beginnings of solutions to the inaction clause of the first law, even in these simple devices.

  The second law—that “a robot must obey orders given it by human beings, except where such orders would conflict with the Zeroth or First Law”—is about obeying people, in contrast to the first, which is about protecting them. In many ways, this law is trivial to implement, but for elementary reasons. Machines today do not have an independent mind, so they must obey orders: they have no choice but to follow the commands given them. If they fail, they face the ultimate punishment: they are shut off and sent to the repair shop.

  Can a machine disobey the second law in order to protect the first law? Yes, but not with much subtlety. Command an elevator to take you to your floor, and it will refuse if it senses that a person or object is blocking the door. This, however, is the most trivial of ways to implement the law, and it fails when the situation has any sophistication. Actually, in cases where safety systems prevent a machine from following orders, a person can usually override the safety system to permit the operation to take place anyway. This has been the cause of many an accident in trains, airplanes, and factories. Maybe Asimov was correct: we should leave some decisions up to the machines.

  Some automatically deployed safety systems are an example of the “through inaction” clause of the law. Thus, if the driver of an automobile steps on the brakes rapidly, but only depresses the brake pedal halfway, most autos would brake with only half force. The Mercedes-Benz, however, considers this “harm through inaction,” so when it detects a rapid brake deployment, it puts the brakes on full, assuming that the owner really wants to stop as soon as possible. This is a combination of the first and second laws: the first law, because it prevents harm to the driver; and the second law, because it violates the “instructions” to apply the brakes at half strength. Of course, this may not really be a violation of the instructions: the robot assumes that full power was intended, even if not commanded. Perhaps the robot is invoking a new rule: “Do what I mean, not what I say,” an old concept from some early artificial intelligence computer systems.
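  The brake-assist logic can be caricatured in a few lines. This is only my sketch of the idea in the paragraph above (the actual algorithm is certainly far more sophisticated), and the threshold value is invented for illustration.

    def brake_command(pedal_position: float, pedal_velocity: float) -> float:
        """Return braking force as a fraction of maximum (0.0 to 1.0)."""
        PANIC_VELOCITY = 0.8       # assumed threshold for a "rapid" pedal application
        if pedal_velocity >= PANIC_VELOCITY:
            return 1.0             # "do what I mean": full braking despite a half-pressed pedal
        return pedal_position      # otherwise obey the pedal literally

    print(brake_command(pedal_position=0.5, pedal_velocity=1.2))  # 1.0: emergency stop
    print(brake_command(pedal_position=0.5, pedal_velocity=0.2))  # 0.5: ordinary braking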

  Although the automatic application of brakes in an automobile is a partial implementation of the second law, the correct implementation would have the auto examine the roadway ahead and decide for itself just how much speed, braking, or steering ought to be applied. Once that happens, we will indeed have a full first and second law implementation. Once again, this is starting to happen. Some cars automatically slow down if they’re too close to the car in front, even if the driver has not acted to slow the vehicle.

  We don’t yet have the case of conflicting orders, but soon we will have interacting robots, where the requests of one robot might conflict with the requests of the human supervisors. Then, determining precedence and priority will become important.

  Once again, these are easy cases. Asimov had in mind situations where a car would refuse to drive: “I’m sorry, but the road conditions are too dangerous tonight.” We haven’t yet reached that point—but we will. Asimov’s second law will be useful.

  Least important of all the laws, so Asimov thought, was self-preservation—“a robot must protect its own existence as long as such protection does not conflict with the Zeroth, First, or Second Law”—so it is numbered three, last in the series. Of course, given the limited capability of today’s machines, where laws one and two seldom apply, this law is the most important one today, for we would be most annoyed if our expensive robot damaged or destroyed itself. As a result, this law is easy to find in action within many existing machines. Remember the sensors built into robot vacuum cleaners to prevent them from falling down stairs, and how they—and robot lawn mowers—have bump and obstacle detectors to avoid damage from collisions? In addition, many robots monitor their energy state and either go into “sleep” mode or return to a charging station when their energy level drops. Resolution of conflicts with the other laws is not well handled, except by the presence of human operators who are able to override safety parameters when circumstances warrant.
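  The way the third law sits at the bottom of the priority ordering can be shown in a short sketch. Again, this is my own construction, with invented names and thresholds: self-preservation acts only when nothing more important is pending.

    def choose_action(battery: float, human_in_danger: bool, pending_order=None):
        if human_in_danger:
            return "assist the human"     # First Law outranks everything below
        if pending_order is not None:
            return pending_order          # Second Law: obey orders
        if battery < 0.2:
            return "return to charger"    # Third Law: protect its own existence
        return "continue current task"

    # Low on energy and otherwise idle, the robot heads for its charging station.
    print(choose_action(battery=0.1, human_in_danger=False))  # return to charger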

  Asimov’s Laws cannot be fully implemented until machines have a powerful and effective capability for reflection, including meta-knowledge (knowledge of their own knowledge) and self-awareness of their state, activities, and intentions. These raise deep issues of philosophy and science as well as complex implementation problems for engineers and programmers. Progress in this area is happening, but slowly.

  Even with today’s rather primitive devices, some of these capabilities would be useful. In cases of conflict, commands could then be overridden sensibly. Automatic controls in airplanes would look ahead to determine the implications of the path being followed and change course if it would lead to danger. Some planes have indeed flown into mountains while on automatic control, so this capability would have saved lives. In actuality, many automated systems are already beginning to do this kind of checking.

  Even today’s toy pet robots have some self-awareness. Consider a robot whose operation is controlled both by its “desire” to play with its human owner and by its need to avoid exhausting its battery power. When low on energy, it will therefore return to its charging station, even if the human wishes to continue playing with it.

  The greatest hurdles to our ability to implement something akin to Asimov’s Laws are his underlying assumptions of autonomous operation and central control, assumptions that may not apply to today’s systems.

  Asimov’s robots worked as individuals. Give a robot a task to do, and off it would go. In the few cases where he had robots work as a group, one robot was always in charge. Moreover, he never had people and robots working together as a team. We are more likely to want cooperative robots, systems in which people and robots or teams of robots work together, much as a group of human workers can work together at a task. Cooperative behavior requires a different set of assumptions than Asimov had. Thus, cooperative robots need rules that provide for full communication of intentions, current state, and progress.

  Asimov’s main failure, however, was his assumption that someone had to be in control. When he wrote his novels, it was common to assume that intelligence required a centralized coordinating and control mechanism with a hierarchical organizational structure beneath it. This is how human institutions have been organized for thousands of years: armies, governments, corporations, and other organizations. It was natural to assume that the same principle applied to all intelligent systems. But this is not the way of nature. Many natural systems, from the actions of ants and bees, to the flocking of birds, and even the growth of cities and the structure of the stock market, occur as a natural result of the interaction of multiple bodies, not through some central, coordinated control structure. Modern control theory has moved away from this assumption of a central command post. Distributed control is the hallmark of today’s systems. Asimov assumed a central decision structure for each robot that decided how to act, guided by his laws. In fact, that is probably not how it will work: the laws will be part of the robot’s architecture, distributed throughout the many modules of its mechanisms; lawful behavior will emerge from the interactions of the multiple modules. This is a modern concept, not understood while Asimov was writing, so it is no wonder he missed this development in our understanding of complex systems.
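  To suggest what “lawful behavior emerging from the interactions of multiple modules” might look like, here is a toy sketch with no central decision-maker: each module independently proposes an action with a priority, and a thin arbiter simply takes the highest-priority proposal. The modules, priorities, and state variables are all invented for illustration.

    def safety_module(state):
        if state["person_nearby"]:
            return (3, "stop")                    # do not injure a human: highest priority

    def obedience_module(state):
        if state["order"]:
            return (2, state["order"])            # obey orders unless safety objects

    def self_preservation_module(state):
        if state["battery"] < 0.2:
            return (1, "recharge")                # protect itself: lowest priority

    def arbitrate(state, modules):
        proposals = [p for p in (m(state) for m in modules) if p is not None]
        return max(proposals)[1] if proposals else "idle"

    state = {"person_nearby": False, "order": "fetch the newspaper", "battery": 0.1}
    print(arbitrate(state, [safety_module, obedience_module, self_preservation_module]))
    # "fetch the newspaper": obedience wins because the safety module raised no objection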

  Still, Asimov was ahead of his time, thinking far ahead to the future. His stories were written in the 1940s and ’50s, but in his book I, Robot, he quotes the three laws of robotics from the 2058 edition of the Handbook of Robotics; thus, he looked ahead more than 100 years. By 2058, we may indeed need his laws. Moreover, as the analyses indicate, the laws are indeed relevant, and many systems today follow them, even if inadvertently. The difficult aspects have to do with damage due to lack of action, as well as with properly assessing the relative importance of following orders versus damage or harm to oneself, others, or humanity.

  As machines become more capable, as they take over more and more human activities, working autonomously, without direct supervision, they will get entangled in the legal system, which will try to determine fault when accidents arise. Before this happens, it would be useful to have some sort of ethical procedure in place. There already are some safety regulations that apply to robots, but they are very primitive. We will need more.

  It is not too early to think about the future difficulties that intelligent and emotional machines may give rise to. There are numerous practical, moral, legal, and ethical issues to think about. Most are still far in the future, but that is a good reason to start now—so that when problems arrive, we will be ready.

  The Future of Emotional Machines and Robots: Implications and Ethical Issues

  The development of smart machines that will take over some tasks now done by people has important ethical and moral implications. This point becomes especially critical when we talk about humanoid robots that have emotions and to which people might form strong emotional attachments.

  What is the role of emotional robots? How will they interact with us? Do we really want machines that are autonomous, self-directed, with a wide range of behavior, a powerful intelligence, and affect and emotion? I think we do, for they can provide many benefits. Obviously, as with all technologies, there are dangers as well. We need to ensure that people always maintain oversight and control, and that these machines serve human needs appropriately.

  Will robot teachers replace human teachers? No, but they can complement them. Moreover, they could be sufficient in situations where there is no alternative—to enable learning while traveling, or while in remote locations, or when one wishes to study a topic for which there is not easy access to teachers. Robot teachers will help make lifelong learning a practicality. They can make it possible to learn no matter where one is in the world, no matter the time of day. Learning should take place when it is needed, when the learner is interested, not according to some arbitrary, fixed school schedule.

 
