The Positronic Man


by Isaac Asimov


  "Of course I realize that, George. You may find that you're mistaken-I don't know-but in any case, the main thing is that I want Andrew to be protected against a repetition of this brutal incident. First and foremost that is what I want."

  "Well, then," said George. "That's what I'll see that you get, Mother. You can count on it "

  He began his campaign right away. And what had begun simply as a way of placating the fearsome old lady swiftly turned into the fight of his life.

  George Charney had never really yearned for a seat in the Legislature, anyway. So he was able to tell himself that he was off that hook, now that his mother had decided that he should be a civil-rights crusader instead. And the lawyer in him was fascinated by the challenge. There were deep and profound legal implications to the campaign that called for the most careful analysis and calculation.

  As senior partner of Feingold and Charney, George plotted much of the strategy, but left the actual work of research and filing papers to his junior partners. He placed his own son Paul, who had become a member of the firm three years before, in charge of piloting the day-by-day maneuvers. Paul had the additional responsibility of making dutiful progress reports virtually every day to his grandmother. She, in turn, discussed the campaign every day with Andrew.

  Andrew was deeply involved. He had begun work on his book on robots-he was going back to the very beginning, to Lawrence Robertson and the founding of United States Robots and Mechanical Men-but he put the project aside, now, and spent his time poring over the mounting stacks of legal documents. He even, at times, offered a few very diffident suggestions of his own.

  To Little Miss he said, "George told me the day those two men were harassing me that human beings have always been afraid of robots. 'A disease of mankind,' is what he called it. As long as that is the case, it seems to me that the courts and the legislatures aren't likely to do very much on behalf of robots. Robots have no political power, after all, and people do. Shouldn't something be done about changing the human attitude toward robots, then?"

  "If only we could."

  "We have to try," Andrew said. "George has to try."

  "Yes," said Little Miss. "He does, doesn't he?"

  So while Paul stayed in court, it was George who took to the public platform. He gave himself up entirely to the task of campaigning for the civil rights of robots, putting all of his time and energy into it.

  George had always been a good speaker, easy and informal, and now he became a familiar figure at conventions of lawyers and teachers and holo-news editors, and on every opinion show on the public airwaves, setting forth the case for robot rights with an eloquence that grew steadily with experience.

  The more time George spent on public platforms and in the communications studios, the more relaxed and yet commanding a figure he became. He allowed his side-whiskers to grow again, and swept his hair-white, now-backward in a grandiose plume. He even indulged in the new style of clothing that some of the best-known video commentators were going in for, the loose, flowing style known as "drapery." Wearing it made him feel like a Greek philosopher, he said, or like a member of the ancient Roman Senate.

  Paul Charney, who was generally a good deal more conservative in his ways than his father, warned him the first time he saw him rigged out like that: "Just take care not to trip over it on stage, Dad."

  "I'll try not to," said George.

  The essence of his pro-robot argument was this:

  "If, by virtue of the Second Law, we can demand of any robot unlimited obedience in all respects not involving harm to a human being, then any human being, any human being at all, has a fearsome power over any robot, any robot. In particular, since Second Law overrides Third Law, any human being can use the law of obedience to defeat the law of self-protection. He can order the robot to damage itself or even destroy itself for any reason, or for no reason whatsoever-purely on whim alone.

  "Let us leave the question of property rights out of the discussion here -though it is not a trivial one-and approach the issue simply on the level of sheer human decency. Imagine someone approaching a robot he happens to encounter on the road and ordering it, for no reason other than his own amusement, to remove its own limbs, or to do some other grave injury to itself. Or let us say that the robot's owner himself, in a moment of pique or boredom or frustration, gives such an order.

  "Is this just? Would we treat an animal like that? And an animal, mind you, might at least have the capacity to defend itself. But we have made our robots inherently unable to lift a hand against a human being.

  "Even an inanimate object which has given us good service has a claim on our consideration. And a robot is far from insensible; it is not a simple machine and it is not an animal. It can think well enough to enable it to speak with us, reason with us, joke with us. Many of us who have lived and worked with robots all our lives have come to regard them as friends -virtually as members of our families, I dare say. We have deep respect for them, even affection. Is it asking too much to want to give our robot friends the formal protection of law?

  "If a man has the right to give a robot any order that does not involve doing harm to a human being, he should have the decency never to give a robot any order that involves doing harm to a robot-unless human safety absolutely requires such action. Certainly a robot should not lightly be asked to do purposeless harm to itself. With great power goes great responsibility. If the robots have the Three Laws to protect humans, is it too much to ask that humans subject themselves to a law or two for the sake of protecting robots?"

  There was, of course, another side to the issue-and the spokesman for that side was none other than James Van Buren, the lawyer who had opposed Andrew's original petition for free-robot status in the Regional Court. He was old, now, but still vigorous, a powerful advocate of traditional social beliefs. In his calm, balanced, reasonable way, Van Buren was once again a forceful speaker on behalf of those who denied that robots could in any way be considered worthy of having "rights."

  He said, "Of course I hold no brief for vandals who would wantonly destroy a robot that does not belong to them, or order it to destroy itself. That is a civil offense, pure and simple, which can readily be punished through the usual legal channels. We no more need a special law to cover such cases than we need a specific law that says it is wrong for people to smash the windows of other people's houses. The general law of the sanctity of property provides sufficient protection.

  "But a law preventing one from destroying one's own robot? Ah, now we venture into very different areas of thinking. I have robots in my own law office, and it would no more occur to me to destroy one than it would for me to take an axe to a desk. Still, is there anyone who would argue that I should be stripped of the right to do as I please with my own robots, or my own desks, or any other article of office furniture that I may own? Can the State, in its infinite wisdom, come into my office and say, 'No, James Van Buren, you must be kind to your desks, and spare them from injury. Likewise your filing cabinets: they must be treated with respect, they must be treated as friends. And the same applies, naturally, to your robots. In no way, James Van Buren, may you place the robots you own in jeopardy.' "

  Van Buren would pause, then, and smile in his calm and reasonable way, letting everyone know that this was strictly a hypothetical example, that in fact he was not the sort of man who would do injury to anyone or anything.

  And then he would say, "I can hear George Charney replying that a robot is fundamentally different from a desk or a filing cabinet, that a robot is intelligent and responsive, that robots should be regarded virtually as human. And I would reply to him that he is mistaken, that he is so bemused by affection for the robot that his own family has kept for many decades that he has lost sight of what robots really are.

  "They are machines, my friends. They are tools. They are appliances. What they are is mere mechanical contrivances, neither more nor less deserving of legal protection than any other inanimate objec
t. Yes, I said inanimate. They can speak, yes. They can think, in their own rigid preprogrammed way. But when you prick a robot, does it bleed? If you tickle one, will it laugh? Robots have hands and senses, yes, because we have constructed them that way, but do they have true human affections and passions? Hardly. Hardly! And therefore let us not confuse machines made in the image of mankind with living things.

  "And I must point out, too, that humanity in this century has become dependent on robot labor. There are more robots in the world than there are people, now, and in the main they do the jobs that none of us would be willing to touch. They have freed humanity from dreary drudgery and degradation. To confuse the robot issue with the ancient debates over slavery and the later debates over freedom for those slaves and the still later debates over full civil rights for the descendants of the freed slaves will ultimately lead to economic chaos, when our robots begin to demand not simply the protection of the law but independence from their masters. Those slaves of centuries gone by were human beings who were cruelly taken advantage of and mistreated. No one had any right to force them into servitude. But robots were brought into the world to serve. By definition they are here to be used: not to be our friends but to be our servants. And to take any other position is a wrongheaded, sentimental, dangerous way of thinking."

  George Charney was a persuasive orator, but so was James Van Buren. And in the end the battle-fought mainly in the court of public opinion, rather than in the Legislature or the Regional Court-ended in something of a stalemate.

  There were a great many people now who had been able to transcend the fear or dislike of robots that had been so widespread a couple of generations earlier, and George's arguments struck home with them. They too had begun to look upon their robots with a certain degree of affection, and wanted them afforded some kind of legal security.

  But then there were the others, who may not have feared robots themselves so much as they did the financial risks that they might somehow experience as a result of extending civil rights to robots. They urged caution in this new legal arena.

  So when the battle at last was over and pro-robot legislation came forth, setting up conditions under which it was illegal to issue an order that might harm a robot, the law that was passed by the Regional Legislature, sent back for revisions by the Regional Court, passed again in a modified way, this time upheld in the Regional Court, and eventually ratified by the World Legislature and upheld after a final appeal to the World Court, was a very tepid one indeed. It was endlessly qualified and the punishments for violating its provisions were totally inadequate.

  But at least the principle of robot rights-established originally by the decree awarding Andrew his "freedom"-had been extended a little further.

  The final approval by the World Court came through on the day of Little Miss's death.

  That was no coincidence. Little Miss, very old and very weak now, had nevertheless held on to life with desperate force during the closing weeks of the debate. Only when word of victory arrived did she at last relax the tenacity of her grip.

  Andrew was at her bedside when she went. He stood beside her, looking down at the small, faded woman propped up among the pillows and thinking back to those days of nearly a hundred years before when he was newly arrived at the grand coastside mansion of Gerald Martin and two small girls had stood looking up at him, and the smaller one had frowned and said, "En-dee-arr. That isn't any good. We can't call him something like that. What about calling him Andrew?"

  So long ago, so very long ago. A whole lifetime ago, so far as things went for Little Miss. And yet to Andrew it sometimes seemed only a moment-hardly any time at all since those days when he and Miss and Little Miss had romped on the beach below the house, and he had gone for a swim in the surf because it had pleased them to ask him to do so.

  Nearly a century.

  For a human being, Andrew knew, that was an enormous span of time.

  And now Little Miss's life had run its course and was speeding away. The hair that once had been a radiant gold had long since turned to shining silver; but now the last of its gleam was gone from it and for the first time it looked dull and drab. She was coming to her termination, and there was no help for that. She was not ill; she was simply worn out, beyond any hope of repair. In another few moments she would cease to function. Andrew could hardly imagine a world that did not contain Little Miss. But he knew that he was entering such a world now.

  Her last smile was for him. Her last words were, "You have been good to us, Andrew."

  She died with her hand holding his, while her son and his wife and their children remained at a respectful distance from the robot and the old woman in the bed.

  Thirteen

  ANDREW EXPERIENCED a sensation of discomfort after Little Miss's death that would not leave him for weeks. To call it grief might be a little too strong, he thought, for he suspected that there was no place in his positronic pathways for any feeling that corresponded exactly to the human emotion known as grief.

  And yet there was no question but that he was disturbed in some way that could only be traced to the loss of Little Miss. He could not have quantified it. A certain heaviness about his thoughts, a certain odd sluggishness about his movements, a perception of general imbalance in his rhythms-he felt these things, but he suspected that no instruments would be able to detect any measurable change in his capacities.

  To ease this sensation of what he would not let himself call grief he plunged deep into his research on robot history, and his manuscript began to grow from day to day.

  A brief prologue sufficed to deal with the concept of the robot in history and literature-the metal men of the ancient Greek myths, the automata imagined by clever storytellers like E. T. A. Hoffmann and Karel Čapek, and other such fantasies. He summarized the old fables quickly and dispensed with them. It was the positronic robot-the real robot, the authentic item-that Andrew was primarily concerned with.

  And so Andrew moved swiftly to the year 1982 and the incorporation of United States Robots and Mechanical Men by its visionary founder, Lawrence Robertson. He felt almost as though he were reliving the story himself, as he told of the early years of struggle in drafty converted-warehouse rooms and the first dramatic breakthrough in the construction of the platinum-iridium positronic brain, after endless trial-and-error. The conception and development of the indispensable Three Laws; research director Alfred Lanning's early triumphs at designing mobile robot units, clumsy and ponderous and incapable of speech, but versatile enough to be able to interpret human orders and select the best of a number of possible alternative responses. Followed by the first mobile speaking units at the turn of the Twenty-First Century.

  And then Andrew turned to something much more troublesome for him to describe: the period of negative human reaction which followed, the hysteria and downright terror that the new robots engendered, the worldwide outburst of legislation prohibiting the use of robot labor on Earth. Because miniaturization of the positronic brain was still in the development stage then and the need for elaborate cooling systems was great, the early mobile speaking units had been gigantic-nearly twelve feet high, frightful lumbering monsters that had summoned up all of humanity's fears of artificial beings-of Frankenstein's monster and the Golem and all the rest of that assortment of nightmares.

  Andrew's book devoted three entire chapters to that time of extreme robot-fear. They were enormously difficult chapters to write, for they dealt entirely with human irrationality, and that was a subject almost impossible for Andrew to comprehend.

  He grappled with it as well as he could, striving to put himself in the place of human beings who-though they knew that the Three Laws provided foolproof safeguards against the possibility that robots could do harm to humans-persisted in looking upon robots with dread and loathing. And after a time Andrew actually succeeded in understanding, as far as he was able, how it had been possible for humans to have felt insecure in the face of such a powerful guarantee of security.

  For what he discovered, as he made his way through the archives of robotics, was that the Three Laws were not as foolproof a safeguard as they seemed. They were, in fact, full of ambiguities and hidden sources of conflict. And they could unexpectedly confront robots-straightforward literal-minded creatures that they were-with the need to make decisions that were not necessarily ideal from the human point of view.

  The robot who was sent on a dangerous errand on an alien planet, for example-to find and bring back some substance vital to the safety and well-being of a human explorer-might feel such a conflict between the Second Law of obedience and the Third Law of self-preservation that he would fall into a hopeless equilibrium, unable either to go forward or to retreat. And by such a stalemate the robot-through inaction-thus could create dire jeopardy for the human who had sent him on his mission, despite the imperatives of the First Law that supposedly took precedence over the other two. For how could a robot invariably know that the conflict he was experiencing between the Second and Third Laws was placing a human in danger? Unless the nature of his mission had been spelled out precisely in advance, he might remain unaware of the consequences of his inaction and never realize that his dithering was creating a First Law violation.

  Or the robot who might, through faulty design or poor programming, decide that a certain human being was not human at all, and therefore not in a position to demand the protection that the First and Second Laws were supposed to afford.

  Or the robot who was given a poorly phrased order, and interpreted it so literally that he inadvertently caused danger to humans nearby.

  There were dozens of such case histories in the archives. The early roboticists-most notably the extraordinary robopsychologist, Susan Calvin, that formidable and austere woman-had labored long and mightily to cope with the difficulties that kept cropping up.

 
