Robot City 1 & 2


by Isaac Asimov


  "I am afraid that that is not possible," Euler said, shaking his head gravely.

  "What?" exclaimed Katherine. "Why not?"

  "Friend Euler's statement was imprecise," Rydberg said. "It is possible to leave. But there is a problem. A human being has been killed--"

  "Why does that involve us?" Derec asked.

  "It would be an unthinkable violation of the Laws of Robotics for a robot to harm a human being," Rydberg said. "I am unable even to form the thought without experiencing distress."

  "Of course it wasn't a robot," Derec said impatiently. "Another human being did it, obviously."

  Euler said, "Disregarding yourselves, there are no other humans here."

  "Our guide said something about that," Derec said. "But just because they have no business here doesn't mean that they didn't come over from some other sector anyway. Someone who'd murder wouldn't worry too long about proper travel passes or whatever it is you use here."

  "I will clarify," Rydberg interjected. "Friend Euler meant to say that there are no other human beings in this city."

  "Then from one of the other cities--," Katherine began.

  "There are no other cities on this planet."

  "What are you saying? Where are we?" she demanded.

  "I regret that I may not identify this planet or its star," Rydberg said. "But we who live here call this place Robot City."

  "There's nothing but robots here?" Derec said slowly, an uncomfortable idea pricking at him."Discounting yourselves, that is correct," Euler said.

  Katherine gaped. "No one in this whole city--it must be fifty hectares--"

  "Two hundred five," Euler corrected.

  Derec interrupted. "Where are the inhabitants? The builders? Where did they go?"

  Rydberg cocked his head slightly. "We are the inhabitants, and the builders, Friend Derec," he said matter-of-factly.

  It was the answer he had been expecting, but he still resisted its implications. "Where are your owners?" Derec persisted. "Where are the people you report to?"

  "Your question is based on an erroneous assumption," Euler replied. "Robot City is a free and autonomous community."

  "That can't be," he protested. "Maybe there are no humans here now. Maybe you're not presently in contact with any. But they must have brought you here, or sent you here. Youmust still be following their directives."

  "No, Friend Derec. We are self-directed," Euler said. "But we are not unaware of human beings. We have a vast library of book-films by and about human beings. And we have accepted our responsibility to see that humans do not come to harm."

  "I hope you understand, Friend Derec, why we are obliged to delay your departure," Rydberg said. "This is our first experience with death. We need your help in understanding how it happened, and in understanding how the experience of death should be integrated into our study of the Laws of Humanics."

  "The Laws of Humanics? What are they?" Katherine asked, puzzled.

  "The human counterparts of the Laws of Robotics--those guiding principles which govern human behavior."

  Euler continued, "At present the Laws of Humanics are a theoretical construct. We are attempting to determine if Laws of Humanics exist, and if they do, what they are. This incident has placed the research project in crisis. You must help us. I assure you that you will be afforded every possible comfort."

  As Euler was speaking, Katherine had slowly edged closer and closer to Derec, and now was standing at his elbow. "This is crazy," she said under her breath. "A city of robots, with no one to guide them? Doing research on human beings, like we were some curiosity?"

  And in that moment, Derec stopped fighting the truth and embraced it: The community on the asteroid and the great city surrounding him were products of the same mind, the same plan. He hadn't escaped at all.

  But at least he at last understood why--why he was given the key, and why it had brought them there. For the last to touch it had been Monitor 5, an advanced robot desperate to fulfill its First Law obligation to save him. Knowing what it was and what it was capable of, the robot could do nothing other than give it to him--programmed for what it knew would be a safe destination, a sister colony of robots light-years away.

  "Sssh," he said to Katherine, then looked to the robots. "Could you excuse us for a moment? We need to talk."

  "Certainly, Friend Derec," Euler replied, "We will--"

  "You stay. We'll leave," Derec said, taking Katherine's hand and leading her out the door.

  "Where are we going?" she asked breathlessly as he guided her a dozen meters down the corridor. "They're going to follow us."

  He stopped short and released her hand. "We're not going anywhere. At least I'm not. I really did want to talk privately."

  "What do you mean, you're not going anywhere?"

  "I'm going to stay," he said. "I won't tell themthat, though. I'll offer to stay and cooperate on the condition they arrange transportation for you. They don't need both of us."

  "No!" she said emphatically. "You don't have to do that. They've got no right to hold us. They have to let us both go. They're robots, aren't they? They have to help us."

  "They're robots, yes. But not like any you're used to. I don't think they'd agree with your definition of their obligations," Derec said, shaking his head. "But that's not the point. I'm not going to stay just to appease them, or to get them to let you go. I'm staying because I want to."

  "Want to! Why?"

  Derec flashed a tight-lipped smile. "I started thinking about how I'd feel if they did what we asked and put us on a ship to Aurora, or wherever. How I'd feel if I never found out any more about the key--"

  "We could take it with us.""--never found out where this planet is or why the robots are here--never went back for Wolruf or found out what happened to her. I thought about it and realized I couldn't just walk away. It's true that I don't know who I am. Even so, I know that's not the kind of person I want to be."

  There was a studied silence, which Derec finally broke. "Part friends?"

  Her eyes flicked upward and her gaze met his. "No," she said, shaking her head. "Because if you're staying, I'm staying, too."

  It was his turn to protest. "You don't have to do that. They're my causes, not yours. This is a safe world. I'll be fine alone."

  "You don't like my company?"

  He shrugged. "We get on all right."

  "Then are you trying to tell me that this is something a girl can't handle or shouldn't worry her head about?"

  "Of course not.""Then it's okay if I stay just because I want to?"

  Derec surrendered. "Sure."

  "Then let's go tell Euler and Rydberg."

  "After you," he said, bowing with a flourish of his hand.

  Wearing a contented smile, Katherine led the way back to the office. As the door opened, she turned and whispered back over her shoulder. "Just tell me this--when do our lives turn normal again?"

  Derec laughed aloud, startling the robots. "Maybe never, Katherine," he said. "Why are you complaining? You said your life was dull, didn't you?"

  "Dull isn't so bad," she said wistfully. "Dull has its good points."

  Chuckling to himself, Derec picked out a chair and settled in it as though planning to stay for a while. "We'll do what we can to help," he said to Rydberg. "Tell us the story. Who're the suspects?"

  But the robot's dispassionate answer erased the smiles from both their faces so thoroughly it was as though they had never been there. Like a bitter aftertaste to a sweet drink, it stole all the pleasure that had come before.

  "Yes, David Derec," Rydberg said. "There are two suspects. Yourself--and Katherine Burgess. We are most curious to learn which of you committed the act, and why."

  THE LAWS OF HUMANICS

  by ISAAC ASIMOV

  I am pleased by the way in which the Robot City books pick up the various themes and references in my robot stories and carry on with them.

  For instance, my first three robot novels were, essentially, murder mysteries, with Elijah Baley as the detective. Of these first three, the second novel, The Naked Sun, was a locked-room mystery, in the sense that the murdered person was found with no weapon on the site and yet no weapon could have been removed either.

  I managed to produce a satisfactory solution but I did not do that sort of thing again, and I am delighted that Mike McQuay has tried his hand at it here.

  The fourth robot novel, Robots and Empire, was not primarily a murder mystery. Elijah Baley had died a natural death at a good, old age, and the book veered toward the Foundation universe, so that it was clear that both my notable series, the Robot series and the Foundation series, were going to be fused into a broader whole. (No, I didn't do this for some arbitrary reason. The necessities arising out of writing sequels in the 1980s to tales originally written in the 1940s and 1950s forced my hand.)

  In Robots and Empire, my robot character, Giskard, of whom I was very fond, began to concern himself with "the Laws of Humanics," which, I indicated, might eventually serve as the basis for the science of psychohistory, which plays such a large role in the Foundation series.

  Strictly speaking, the Laws of Humanics should be a description, in concise form, of how human beings actually behave. No such description exists, of course. Even psychologists, who study the matter scientifically (at least, I hope they do) cannot present any "laws" but can only make lengthy and diffuse descriptions of what people seem to do. And none of them are prescriptive. When a psychologist says that people respond in this way to a stimulus of that sort, he merely means that some do at some times. Others may do it at other times, or may not do it at all.

  If we have to wait for actual laws prescribing human behavior in order to establish psychohistory (and surely we must) then I suppose we will have to wait a long time.

  Well, then, what are we going to do about the Laws of Humanics? I suppose what we can do is to start in a very small way, and then later slowly build it up, if we can.

  Thus, in Robots and Empire, it is a robot, Giskard, who raises the question of the Laws of Humanics. Being a robot, he must view everything from the standpoint of the Three Laws of Robotics -- these robotic laws being truly prescriptive, since robots are forced to obey them and cannot disobey them.

  The Three Laws of Robotics are:

  1 -- A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

  2 -- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3 -- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

  Well, then, it seems to me that a robot could not help but think that human beings ought to behave in such a way as to make it easier for robots to obey those laws.

  In fact, it seems to me that ethical human beings should be as anxious to make life easier for robots as the robots themselves would. I took up this matter in my story "The Bicentennial Man," which was published in 1976. In it, I had a human character say in part:

  "If a man has the right to give a robot any order that does not involve harm to a human being, he should have the decency never to give a robot any order that involves harm to a robot, unless human safety absolutely requires it. With great power goes great responsibility, and if the robots have Three Laws to protect men, is it too much to ask that men have a law or two to protect robots?"

  For instance, the First Law is in two parts. The first part, "A robot may not injure a human being," is absolute and nothing need be done about that. The second part, "or, through inaction, allow a human being to come to harm," leaves things open a bit. A human being might be about to come to harm because of some event involving an inanimate object. A heavy weight might be likely to fall upon him, or he may slip and be about to fall into a lake, or any one of uncountable other misadventures of the sort may be involved. Here the robot simply must try to rescue the human being; pull him from under, steady him on his feet and so on. Or a human being might be threatened by some form of life other than human -- a lion, for instance -- and the robot must come to his defense.

  But what if harm to a human being is threatened by the action of another human being? There a robot must decide what to do. Can he save one human being without harming the other? Or if there must be harm, what course of action must he pursue to make it minimal?

  It would be a lot easier for the robot, if human beings were as concerned about the welfare of human beings, as robots are expected to be. And, indeed, any reasonable human code of ethics would instruct human beings to care for each other and to do no harm to each other. Which is, after all, the mandate that humans gave robots. Therefore the First Law of Humanics from the robots' standpoint is:

  1 -- A human being may not injure another human being, or, through inaction, allow a human being to come to harm.

  If this law is carried through, the robot will be left guarding the human being from misadventures with inanimate objects and with non-human life, something which poses no ethical dilemmas for it. Of course, the robot must still guard against harm done a human being unwittingly by another human being. It must also stand ready to come to the aid of a threatened human being, if another human being on the scene simply cannot get to the scene of action quickly enough. But then, even a robot may unwittingly harm a human being, and even a robot may not be fast enough to get to the scene of action in time or skilled enough to take the necessary action. Nothing is perfect.

  That brings us to the Second Law of Robotics, which compels a robot to obey all orders given it by human beings except where such orders would conflict with the First Law. This means that human beings can give robots any order without limitation as long as it does not involve harm to a human being.

  But then a human being might order a robot to do something impossible, or give it an order that might involve a robot in a dilemma that would do damage to its brain. Thus, in my short story "Liar!," published in 1940, I had a human being deliberately put a robot into a dilemma where its brain burnt out and ceased to function.

  We might even imagine that as a robot becomes more intelligent and self-aware, its brain might become sensitive enough to undergo harm if it were forced to do something needlessly embarrassing or undignified. Consequently, the Second Law of Humanics would be:

  2 -- A human being must give orders to a robot that preserve robotic existence, unless such orders cause harm or discomfort to human beings.

  The Third Law of Robotics is designed to protect the robot, but from the robotic view it can be seen that it does not go far enough. The robot must sacrifice its existence if the First or Second Law makes that necessary. Where the First Law is concerned, there can be no argument. A robot must give up its existence if that is the only way it can avoid doing harm to a human being or can prevent harm from coming to a human being. If we admit the innate superiority of any human being to any robot (which is something I am a little reluctant to admit, actually), then this is inevitable.

  On the other hand, must a robot give up its existence merely in obedience to an order that might be trivial, or even malicious? In "The Bicentennial Man," I have some hoodlums deliberately order a robot to take itself apart for the fun of watching that happen. The Third Law of Humanics must therefore be:

  3 -- A human being must not harm a robot, or, through inaction, allow a robot to come to harm, unless such harm is needed to keep a human being from harm or to allow a vital order to be carried out.

  Of course, we cannot enforce these laws as we can the Robotic Laws. We cannot design human brains as we design robot brains. It is, however, a beginning, and I honestly think that if we are to have power over intelligent robots, we must feel a corresponding responsibility for them, as the human character in my story "The Bicentennial Man" said.

  Certainly in Robot City, these are the sorts of rules that robots might suggest for the only human beings on the planet, as you may soon learn.

  BOOK TWO

  SUSPICION
  MIKE McQUAY

  CHAPTER 1

  PARADES

  It was sunset in the city of robots, and it was snowing paper.

  The sun was a yellow one and the atmosphere, mostly nitrogen/oxygen blue, was flush with the veins of iron oxides that traced through it, making the whole twilight sky glow bright orange like a forest fire.

  The one who called himself Derec marveled at the sunset from the back of the huge earthmover as it slowly made its way through the city streets, crowds of robots lining the avenue to watch him and his companions make this tour of the city. The tiny shards of paper floated down from the upper stories of the crystal-like buildings, thrown (for reasons that escaped Derec) by the robots that crowded the windows to watch him.

  Derec took it all in, sure that it must have significance or the robots wouldn't do it. And that was the only thing he was sure of--for Derec was a person without memory, without notion of who he was. Worse still, he had come to this impossible world, unpopulated by humans, by means that still astounded him; and he had no idea, no idea, of where in the universe he was.

  He was young, the cape of manhood still new on his shoulders, and he only knew that by observing himself in a mirror. Even his name--Derec--wasn't really his. It was a borrowed name, a convenient thing to call himself because not having a name was like not existing. And he desperately wanted to exist, to know who, to know what he was. And why.

  Beside him sat a young woman called Katherine Burgess, who had said she'd known him, briefly, when he'd had a name. But he wasn't sure of her, of her truth or her motivations. She had told him his real name was David and that he'd crewed on a Settler ship, but neither the name nor the classification seemed to fit as well as the identity he'd already been building for himself; so he continued to call himself by his chosen name, Derec, until he had solid proof of his other existence.

  Flanking the humans on either side were two robots of advanced sophistication (Derec knew that, but didn't know how he knew it). One was named Euler, the other Rydberg, and they couldn't, or wouldn't, tell him any more than he already knew--nothing. The robots wanted information from him, however. They wanted to know why he was a murderer.
