by Isaac Asimov
Husband and wife were alone again. They had not dared say a word until Demerest was safely put to sleep under the watchful eyes of two men detailed for the purpose. The unexpected waterclap had roused everybody and a sharply bowdlerized account of the incident had been given out.
The manual controls were now locked off and Bergen said, “From this point on, the manuals will have to be adjusted to fail-safe. And visitors will have to be searched.”
“Oh, John,” said Annette. “I think people are insane. There we were, facing death for us and for Ocean-Deep; just the end of everything. And I kept thinking--I must keep calm; I mustn’t have a miscarriage.”
“You kept calm all right. You were magnificent. I mean, Project Big World! I never conceived of such a thing, but by--by Jove, it’s an attractive thought. It’s wonderful.”
“I’m sorry I had to say all that, John. It was all a fake, of course. I made it up; Demerest wanted me to make something up, really. He wasn’t a killer or destroyer; he was, according to his own overheated lights, a patriot, and I suppose he was telling himself he must destroy in order to save--a common enough view among the small-minded. But he said he would give us time to talk him out of it and I think he was praying we would manage to do so. He wanted us to think of something that would give him the excuse to save in order to save, and I gave it to him. I’m sorry I had to fool you, John.”
“You didn’t fool me.”
“I didn’t?”
“How could you? I knew you weren’t a member of PPC.”
“What made you so sure of that? Because I’m a woman?”
“Not at all. Because I’m a member, Annette, and that’s confidential. And, if you don’t mind, I will begin a move to initiate exactly what you suggested--Project Big World.”
“Well!” Annette considered that and, slowly, smiled. “Well! That’s not bad. Women do have their uses.”
“Something,” said Bergen, smiling also, “I have never denied.”
Ed Ferman of F & SF and Barry Malzberg, one of the brightest of the new generation of science fiction writers, had it in mind in early 1973 to prepare an anthology in which a number of different science fiction themes were carried to their ultimate conclusion. For each story they tapped some writer who was associated with a particular theme, and for a story on the subject of robotics, they wanted me, naturally.
I tried to beg off with my usual excuses concerning the state of my schedule, but they said if I didn’t do it there would be no story on robotics at all, because they wouldn’t ask anyone else. That shamed me into agreeing to do it.
I then had to think up a way of reaching an ultimate conclusion. There had always been one aspect of the robot theme I had never had the courage to write, although the late John Campbell and I had sometimes discussed it.
In the first two Laws of Robotics, you see, the expression “human being” is used, and the assumption is that a robot can recognize a human being when he sees one. But what is a human being? Or, as the Psalmist asks of God, “What is man that thou art mindful of him?”
Surely, if there’s any doubt as to the definition of man, the Laws of Robotics don’t necessarily hold. So I wrote THAT THOU ART MINDFUL OF HIM, and Ed and Barry were very happy with it--and so was I. It not only appeared in the anthology, which was entitled Final Stage, but was also published in the May 1974 issue of F & SF.
That Thou Art Mindful of Him
The Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
1.
Keith Harriman, who had for twelve years now been Director of Research at United States Robots and Mechanical Men Corporation, found that he was not at all certain whether he was doing right. The tip of his tongue passed over his plump but rather pale lips and it seemed to him that the holographic image of the great Susan Calvin, which stared unsmilingly down upon him, had never looked so grim before.
Usually he blanked out that image of the greatest roboticist in history because she unnerved him. (He tried thinking of the image as “it” but never quite succeeded.) This time he didn’t quite dare to and her long-dead gaze bored into the side of his face.
It was a dreadful and demeaning step he would have to take. Opposite him was George Ten, calm and unaffected either by Harriman’s patent uneasiness or by the image of the patron saint of robotics glowing in its niche above.
Harriman said, “We haven’t had a chance to talk this out, really, George. You haven’t been with us that long and I haven’t had a good chance to be alone with you. But now I would like to discuss the matter in some detail.”
“I am perfectly willing to do that,” said George. “In my stay at U. S. Robots, I have gathered the crisis has something to do with the Three Laws.”
“Yes. You know the Three Laws, of course.”
“I do.”
“Yes, I’m sure you do. But let us dig even deeper and consider the truly basic problem. In two centuries of, if I may say so, considerable success, U. S. Robots has never managed to persuade human beings to accept robots. We have placed robots only where work is required that human beings cannot do, or in environments that human beings find unacceptably dangerous. Robots have worked mainly in space and that has limited what we have been able to do.”
“Surely,” said George Ten, “that represents a broad limit, and one within which U. S. Robots can prosper.”
“No, for two reasons. In the first place, the boundaries set for us inevitably contract. As the Moon colony, for instance, grows more sophisticated, its demand for robots decreases and we expect that, within the next few years, robots will be banned on the Moon. This will be repeated on every world colonized by mankind. Secondly, true prosperity is impossible without robots on Earth. We at U. S. Robots firmly believe that human beings need robots and must learn to live with their mechanical analogues if progress is to be maintained.”
“Do they not? Mr. Harriman, you have on your desk a computer input which, I understand, is connected with the organization’s Multivac. A computer is a kind of sessile robot; a robot brain not attached to a body--”
“True, but that also is limited. The computers used by mankind have been steadily specialized in order to avoid too humanlike an intelligence. A century ago we were well on the way to artificial intelligence of the most unlimited type through the use of great computers we called Machines. Those Machines limited their action of their own accord. Once they had solved the ecological problems that had threatened human society, they phased themselves out. Their own continued existence would, they reasoned, have placed them in the role of a crutch to mankind and, since they felt this would harm human beings, they condemned themselves by the First Law.”
“And were they not correct to do so?”
“In my opinion, no. By their action, they reinforced mankind’s Frankenstein complex; its gut fears that any artificial man they created would turn upon its creator. Men fear that robots may replace human beings.”
“Do you not fear that yourself?”
“I know better. As long as the Three Laws of Robotics exist, they cannot. They can serve as partners of mankind; they can share in the great struggle to understand and wisely direct the laws of nature so that together they can do more than mankind can possibly do alone; but always in such a way that robots serve human beings.”
“But if the Three Laws have shown themselves, over the course of two centuries, to keep robots within bounds, what is the source of the distrust of human beings for robots?”
“Well”--and Harriman’s graying hair tufted as he scratched his head vigorously--“mostly superstition, of course. Unfortunately, there are also some complexities involved that anti-robot agitators seize upon.”
“Involving the Three Laws?”
“Yes. The Second Law in particular. There’s no problem in the Third Law, you see. It is universal. Robots must always sacrifice themselves for human beings, any human beings.”
“Of course,” said George Ten.
“The First Law is perhaps less satisfactory, since it is always possible to imagine a condition in which a robot must perform either Action A or Action B, the two being mutually exclusive, and where either action results in harm to human beings. The robot must therefore quickly select which action results in the least harm. To work out the positronic paths of the robot brain in such a way as to make that selection possible is not easy. If Action A results in harm to a talented young artist and B results in equivalent harm to five elderly people of no particular worth, which action should be chosen?”
“Action A,” said George Ten. “Harm to one is less than harm to five.”
“Yes, so robots have always been designed to decide. To expect robots to make judgments of fine points such as talent, intelligence, the general usefulness to society, has always seemed impractical. That would delay decision to the point where the robot is effectively immobilized. So we go by numbers. Fortunately, we might expect crises in which robots must make such decisions to be few....But then that brings us to the Second Law.”
“The Law of Obedience.”
“Yes. The necessity of obedience is constant. A robot may exist for twenty years without ever having to act quickly to prevent harm to a human being, or find itself faced with the necessity of risking its own destruction. In all that time, however, it will be constantly obeying orders....Whose orders?”
“Those of a human being.”
“Any human being? How do you judge a human being so as to know whether to obey or not? What is man, that thou art mindful of him, George?”
George hesitated at that.
Harriman said hurriedly, “A Biblical quotation. That doesn’t matter. I mean, must a robot follow the orders of a child; or of an idiot; or of a criminal; or of a perfectly decent intelligent man who happens to be inexpert and therefore ignorant of the undesirable consequences of his order? And if two human beings give a robot conflicting orders, which does the robot follow?”
“In two hundred years,” said George Ten, “have not these problems arisen and been solved?”
“No,” said Harriman, shaking his head violently. “We have been hampered by the very fact that our robots have been used only in specialized environments out in space, where the men who dealt with them were experts in their field. There were no children, no idiots, no criminals, no well-meaning ignoramuses present. Even so, there were occasions when damage was done by foolish or merely unthinking orders. Such damage in specialized and limited environments could be contained. On Earth, however, robots must have judgment. So those against robots maintain, and, damn it, they are right.”
“Then you must insert the capacity for judgment into the positronic brain.”
“Exactly. We have begun to produce JG models in which the robot can weigh every human being with regard to sex, age, social and professional position, intelligence, maturity, social responsibility and so on.”
“How would that affect the Three Laws?”
“The Third Law not at all. Even the most valuable robot must destroy himself for the sake of the most useless human being. That cannot be tampered with. The First Law is affected only where alternative actions will all do harm. The quality of the human beings involved as well as the quantity must be considered, provided there is time for such judgment and the basis for it, which will not be often. The Second Law will be most deeply modified, since every potential obedience must involve judgment. The robot will be slower to obey, except where the First Law is also involved, but it will obey more rationally.”
“But the judgments which are required are very complicated.”
“Very. The necessity of making such judgments slowed the reactions of our first couple of models to the point of paralysis. We improved matters in the later models at the cost of introducing so many pathways that the robot’s brain became far too unwieldy. In our last couple of models, however, I think we have what we want. The robot doesn’t have to make an instant judgment of the worth of a human being and the value of its orders. It begins by obeying all human beings as any ordinary robot would and then it learns. A robot grows, learns and matures. It is the equivalent of a child at first and must be under constant supervision. As it grows, however, it can, more and more, be allowed, unsupervised, into Earth’s society. Finally, it is a full member of that society.”
“Surely this answers the objections of those who oppose robots.”
“No,” said Harriman angrily. “Now they raise others. They will not accept judgments. A robot, they say, has no right to brand this person or that as inferior. By accepting the orders of A in preference to those of B, B is branded as of less consequence than A and his human rights are violated.”
“What is the answer to that?”
“There is none. I am giving up.”
“I see.”
“As far as I myself am concerned....Instead, I turn to you, George.”
“To me?” George Ten’s voice remained level. There was a mild surprise in it but it did not affect him outwardly. “Why to me?”
“Because you are not a man,” said Harriman tensely. “I told you I want robots to be the partners of human beings. I want you to be mine.”
George Ten raised his hands and spread them, palms outward, in an oddly human gesture. “What can I do?”
“It seems to you, perhaps, that you can do nothing, George. You were created not long ago, and you are still a child. You were designed to be not overfull of original information--it was why I have had to explain the situation to you in such detail--in order to leave room for growth. But you will grow in mind and you will come to be able to approach the problem from a non-human standpoint. Where I see no solution, you, from your own other standpoint, may see one.”
George Ten said, “My brain is man-designed. In what way can it be non-human?”
“You are the latest of the JG models, George. Your brain is the most complicated we have yet designed, in some ways more subtly complicated than that of the old giant Machines. It is open-ended and, starting on a human basis, may--no, will--grow in any direction. Remaining always within the insurmountable boundaries of the Three Laws, you may yet become thoroughly non-human in your thinking.”
“Do I know enough about human beings to approach this problem rationally? About their history? Their psychology?”
“Of course not. But you will learn as rapidly as you can.”
“Will I have help, Mr. Harriman?”
“No. This is entirely between ourselves. No one else knows of this and you must not mention this project to any human being, either at U. S. Robots or elsewhere.”
George Ten said, “Are we doing wrong, Mr. Harriman, that you seek to keep the matter secret?”
“No. But a robot solution will not be accepted, precisely because it is robot in origin. Any suggested solution you have you will turn over to me; and if it seems valuable to me, I will present it. No one will ever know it came from you.”
“In the light of what you have said earlier,” said George Ten calmly, “this is the correct procedure....When do I start?”
“Right now. I will see to it that you have all the necessary films for scanning.”
1a.
Harriman sat alone. In the artificially lit interior of his office, there was no indication that it had grown dark outside. He had no real sense that three hours had passed since he had taken George Ten back to his cubicle and left him there with the first film references.
He was now merely alone with the ghost of Susan Calvin, the brilliant roboticist who had, virtually single-handed, built up the positronic robot from a massive toy to man’s most delicate and versatile instrument; so delicate and versatile that man dared not use it, out of envy and fear.
It was over a century now since she had died. The problem of the Frankenstein complex had existed in her time, and she had never solved it. She had never tried to solve it, for there had been no need. Robotics had expanded in her day with the needs of space exploration.
It was the very success of the robots that had lessened man’s need for them and had left Harriman, in these latter times--
But would Susan Calvin have turned to robots for help? Surely, she would have--
And he sat there long into the night.
2.
Maxwell Robertson was the majority stockholder of U. S. Robots and in that sense its controller. He was by no means an impressive person in appearance. He was well into middle age, rather pudgy, and had a habit of chewing on the right corner of his lower lip when disturbed.
Yet in his two decades of association with government figures he had developed a way of handling them. He tended to use softness, giving in, smiling, and always managing to gain time.
It was growing harder. Gunnar Eisenmuth was a large reason for its having grown harder. In the series of Global Conservers, whose power had been second only to that of the Global Executive during the past century, Eisenmuth hewed most closely to the harder edge of the gray area of compromise. He was the first Conserver who had not been American by birth and though it could not be demonstrated in any way that the archaic name of U. S. Robots evoked his hostility, everyone at U. S. Robots believed that.
There had been a suggestion, by no means the first that year--or that generation--that the corporate name be changed to World Robots, but Robertson would never allow that. The company had been originally built with American capital, American brains, and American labor, and though the company had long been worldwide in scope and nature, the name would bear witness to its origin as long as he was in control.