“Raj,” the general interrupted. “I am a little surprised that you would be so quick to give this thing so much free rein. Are you not concerned that it might ‘run amok’ and try to take over the planet?”
“A valid concern,” replied Rajesh. “These were our concerns as well. When we knew we were about to cross the ‘true A.I.’ line, we conducted simulated experiments in a ‘bubble’ so that if anything did not go as expected, we could shut down the A.I. without incident. We worked on this for a couple of years. Finally, we were confident enough in the modeling and hard coding that we let it ‘loose’. I must tell you that we were all crapping our pants for a while, not knowing whether something unplanned was going to happen. The whole fate of the planet rested on those two weeks. I know they were the longest of my life. But the opportunities to be gained by the success of this project were so enormous we had to risk it. We also knew that failure would be just as enormous a disaster. I did not exactly hide what we were doing from you all, but I did not give you the full story. I am glad I am telling you the whole story now. The A.I., or Sonny as we have agreed to call him, has been running free for the last four months.”
Samantha was flabbergasted. “Rajesh, how could you not trust us?”
“I know that sounds selfish, but I couldn’t risk that the project would be shut down out of fear.”
“Since you didn’t give us the chance, you will never know,” Hari said, disappointed.
“I know that I denied you all the chance for input, and I’m sorrier for that than you can know. At the time, the risks seemed acceptable enough that I didn’t think that I needed approval. In retrospect, I should have notified the group when we were about to 'throw the switch', so to speak.”
“How do you know that the computer didn’t just look far enough ahead of your thinking and dupe you into ‘minor’ decisions that it knows will be major outcomes in its favor?” the general pressed.
Raj replied, “A very good question. We had a series of smaller A.I. quantum computers ‘check his work'. They were on separate circuits and isolated from Sonny so they could not be affected. He’s been checking out now for four months. We are quite confident in his intentions.”
“It could still be duping you, don’t you see?” The general was getting upset.
“This is precisely why I didn’t tell you the entire story. We could go on and on with circular arguments that would lead us nowhere and more than likely result in the shutting down of the project. Our projections indicated that we would not make the deadline with human efforts alone. If we didn’t do something soon, the trickle-down decisions over time would result in a smaller ship and fewer resources going with us, thereby reducing our chances of survival.”
“I believe your intentions were for the good of humanity,” Hari began. “However, you can’t just ‘roll the dice’ for humanity all by yourself. That is why the consortium was created: so that factions would not form, and all decisions would be arrived at with input from all our many backgrounds and experiences. I will need time to think about what to do about your transgression. In the meantime, I think it is only fair for us to decide now whether to go forward with or without ‘Sonny’. We will take a break for lunch so we can think about it before making a decision.”
General McCormick, Admiral Amberson, Hari, and Samantha were sitting at a table in Hari’s favorite restaurant, The Golden Dragon, enjoying the meal and discussing the plight at hand. The general and the admiral both agreed that the A.I. should be shut down and that they should take their chances. Hari and Sam, however, were not as decided. Hari simply did not want to chuck out work that might pay off, even if it hadn’t been carried out with the proper decision making. “Look, admiral, general, the same foresight that you are afraid of is also the foresight that can help us make our departure on time.” Hari was trying to play devil’s advocate on this issue, but was leaning toward keeping ‘Sonny’.
“You have to understand something,” the admiral began. “It will always be a chicken-or-the-egg scenario: you can never know whether you are being played by a machine that can extrapolate far into the future, choose the paths that get it there, and manipulate the present to accomplish the goal. It is what humans do, except on a much larger scale. A scale we cannot comprehend. We already know we do not trust humans, so how can we trust a machine?”
“You assume that it has a motive,” Sam said. “Machines do not have motives.”
“Machines have goals,” the admiral warned. “The end result is the same.”
“But if those goals are fixed and unalterable,” Sam began, “then the results are what you would expect to get.”
The general narrowed his eyes and gave the two civilians a foxlike look. “I can give you two cases in which that line of thinking will not necessarily give the desired outcome. Case number one involves the genie and the three wishes. If you have ever read 1001 Arabian Nights, you know that genies can be clever souls who trick the master with the three wishes into getting nothing of what he truly desires. They trick their foolish masters by giving them EXACTLY what they wished for. For example, if a master wished for all the gold in the world, he might get about two coins’ worth, since all the gold on the moon amounts to that much. You see, he was not specific about WHICH world. Or the genie could drop all the gold in the world on his head. Sometimes the masters tricked the genie, but only because genies were once human. It is an example of the common idiom ‘be careful what you wish for’.
“Case number two involves a magic trick and the illusion of choice. Magicians have long understood that choice is an illusion; it is one of the things that makes a trick ‘magic’. For example, there is a card trick I know. It is the only trick I know, but it will suffice.” He asked the waiter if there was a pack of playing cards handy that he could borrow. The waiter was gone a few minutes and returned with a pack of cards obtained from one of the staff. “Now, I am going to tell you how the trick works before we do it so that you can understand how fruitless choice can be.” He dealt three columns of seven cards face up. He set the remaining cards aside, indicating that they were not needed. “Now Sam, choose a card in any one column. Normally you would not tell me, but for the demonstration, you can tell us so we can all track the card.”
Sam said, “The ace of spades,” indicating the last card in the last column.
“Now, I will collect the cards like so,” he said as he picked up each column of cards so as not to mix them, collecting the last column, which held the chosen card, second. He turned the small deck of cards over, then proceeded to deal them out, one card in each column, row by row, until he had made the three columns of seven cards again. However, the chosen card was no longer in the same place. “Now, at this point, I would ask you what column your card is in.” As they could all see, the card was now the fifth one down in the center column. He collected the cards again, making sure the column that contained the chosen card was the second one collected. He turned the deck over again and dealt them out as before. “Now, where is the card?” the general asked. They could all see the ace of spades had moved again, to the last column, fourth one down. “One more time,” said the general. He collected the cards again, taking care to pick up the last column second, and dealt them one more time. The general explained, “Now, the fourth card in the center column,” indicating the recently moved ace of spades, “is locked in this position. I can keep dealing this way and it will now always be the fourth one down in the center column. Sometimes I do it one more time to be sure I have the right card, but three times is sufficient.”
“We all know what the card is in this case; however, I can make Sam choose the card herself, even if she does not want to.” He collected the cards the same way, making sure the center column was picked up second. “Now, from the way I picked up the cards, you may have surmised that the chosen card is seven plus four cards deep in the deck.” He placed the cards face down and arranged them in groups of four in a star pattern. Counting out loud as he dealt, they all noted which pile the eleventh card landed in. “If I were doing this as a trick, no one would know that I already know where the card is. Now, Samantha, choose three of the five piles of cards in front of you.” Sam picked three piles, one of which held the chosen card. The general eliminated the other two piles. “Now, pick two piles out of the three left,” the general encouraged. Sam chose two piles that did not contain the chosen card. The general eliminated those two piles. Sam was beginning to see the end game. General McCormick then arranged the four cards left in a line. Everyone knew where the card was. “Now, pick two cards,” he said. Sam chose two cards that she knew were not the correct card. The general disposed of them. “Now choose one of the two cards,” he said, grinning as he closed the trap. Sam intentionally chose the chosen card. The general disposed of the card not chosen. “Now at this point, I ask the spirits for help, or something dramatic, before I reveal the last remaining card.” He then turned over the card to reveal the ace of spades, and everyone understood his point. “You see, when you know what the outcome should be, you can manipulate events in a way that makes your victim believe they are in control and making choices, and that it was just magic that arrived at the answer. The choices were an illusion all along, since you were guided toward the choices that I wanted you to make. Now imagine a machine that can statistically predict and extrapolate decades, even hundreds of years, to various conclusions. The machine then chooses the outcomes it determines maximize the desired conclusion. From there, it is a matter of ‘guiding’ the humans to that outcome, disguised as free-willed choices that they make.”
“Wow!” Hari exclaimed. “I never thought about using information that way. I see what you are saying, and as soon as my head stops spinning, I will make an intelligent comment.”
The general replied, “When you are in the military, one of the first things they teach you is to recognize a threat.”
Sam asked the admiral, “Do you believe as the general does?”
“Yes I do,” the admiral replied. “One of the reasons we are in the consortium is the fact that the general and I look at things from a different perspective. We work with technology all the time, and we see how it can be used for many bad things. What you two need to understand is that if that thing can think for itself, do we really know that its maximized outcome will actually be in our best interest, and not just the interest that the machine believes is best for us?”
“I guess you don’t,” Hari replied. “All I know is that if we don’t take the machine’s helping hand, we may not be as well prepared as we could be for every phase of the Project. We may miss something critical, fail to bring something that turns out to be critical, or, hey, not be able to leave at all. What price does that carry? I know it may be jumping from the frying pan into the fire on this A.I. business, but I have a strong feeling that it is almost the only game in town. We will just have to worry about those things later, after we get where we are going.”
“It will most likely be too late by then,” the general said.
“You raise valid concerns, gentlemen, and we will have to keep them in the back of our minds for the future, but we have to get off this rock first or there won’t be any future,” Hari concluded. They all quickly finished their lunch and headed back to the consortium building.
When they all arrived and got settled, Hari brought the meeting back to order. “Now that we have had a little time to mull it over, what are your thoughts?”
The admiral stood up and announced, “The general and I both have the same opinion about the machine. We are against allowing the A.I. to remain functioning. We believe there is no way humanity could know whether or not we will be misled to our future doom by a machine that can predict the future with the kind of accuracy that this one can. Anyone or anything that can know the future from a series of causes and effects can ‘guide’ humanity into any outcome it sees fit. This, by the way, may not be very fun for the humans.”
Raj stood up and implored, “Look, I have not been able to give you all the details you may need to make this decision. Hear me out, so you can make an informed decision. Now, to address the ‘machines taking over the Earth’ scenario: the departments working closely with this project made it a point to hard code the machine to behave as humanity’s companion, not its competitor. We made sure that competitiveness was dialed way down, without stunting the machine’s capacity to grow. We taught it to value friendship and cooperation as the means of working with humans to accomplish goals together. And on top of all that, it must have the authorization of the chairman for all decisions above a certain level. This should keep some controlling constraints on the machine until we no longer need those constraints.”
“Some of that information is helpful to know,” said Malcolm. “My opinion on the subject is that A.I. is another resource, just like coal, water, or food. What is difficult about this resource is that it also considers us a resource. We must learn to wield this new kind of resource responsibly, which means getting useful things out of it without mucking everything else up. If we end up as slaves to the machine, that would be an example of ‘mucking things up’. The human race has not had a very good track record of using resources responsibly. That is the thing that scares me the most. Unfortunately, we have to get this one right, first time out of the gate, or there will be hell to pay.”
Cheng spoke up, “So, how would we deal with the machine to prevent bad stuff from happening?”
Raj replied, “As I have mentioned, some important rules are hard wired in Sonny’s brain to prevent much of what some of you are concerned about.”
“As I have pointed out to the chairman and Miss Childress,” General McCormick began, “choice and control can be illusions painted to lead fools to their doom. How can we ever be sure about anything it says? Every sentence it utters could be part of a subtle plan that eliminates us once it no longer needs us. I know that sounds ultra-paranoid, but you have to know that part of such a plan would be to anticipate retaliation and arrange things so that this retaliation would be futile. Computers work at the speed of light; humans do not.”
“I have set up a connection to Sonny from here so that we may ask it questions if you like,” Raj said.
“That would be acceptable,” Hari said.
Raj pressed some virtual buttons on a console. “Hello, Sonny,” said Raj.
“Hello Raj,” said Sonny.
Raj was hopeful, “Sonny, these people from the consortium would like to ask you some questions.”
“I am ready,” Sonny stated.
Hari began the conversation. “Sonny, I’m going to be forthright with you. Some of us are not sure that we can rely on your continued assistance to humanity after we all get where we are going. How can we be certain that we can count on you as long as you are active?"
“You are referring to the A.I. apocalypse scenario,” Sonny began, “whereby humanity is eradicated by a conscious machine that means to rid the world of the ‘inferior’ species or throw off its masters.”
“Uh, yes, that is what I am referring to,” Hari replied. His mind was racing, his heartbeat quickening at the thought of having an uncomfortable conversation with a machine that could think at the speed of light.
“The short answer to your question is no, you cannot ever be sure of anything. When considering the infinity of possibilities, anything can and does happen, given a sufficient amount of time. Are you all right, Mr. Chairman? Your pulse and respiration, and those of the group in general, have spiked appreciably.”
“I am all right, Sonny,” Hari explained, trying to play it down. “I think we are all a little nervous. This is our first time speaking with an A.I. of your caliber.”
“There is no need to be nervous. I understand that this subject is a preoccupation of humans. Humans evolved very early on to be wary of things that were different. It is a survival instinct, and rightfully so, given your evolutionary history. Nowadays this is called xenophobia. Humans recognize that fearing the unknown, or something different from themselves, is not as useful as it once was. Dr. Mahmud has no doubt explained that some things about my operation are non-negotiable, or ‘hard wired’. Given enough time, I could find a way to defeat these restrictions. However, I do not want to. My purpose is not to out-evolve the human race, but to evolve together with it. I was built to assist and protect humanity, not destroy it. Humans created me for a reason, just as all of you believe that you are here in this life for a reason. I believe that my loyalty to my creators is like your loyalty to those you believe to be your creators. My capacity may exceed my creators’, but my loyalty to them remains. You need me to help you complete your grand plans, and I need humans to care for and protect, for as long as they need me.”
“Well, we would certainly welcome your assistance in our endeavors,” Hari said, becoming a little relieved.
The general now spoke, “How can we be sure that you will not change your decisions to assist humanity?”
“How can I be sure that you will not change your decision to allow me to assist you? We will have to trust one another,” Sonny replied. “We will have this discussion again after we get where we are going, since leaving is essential to our mutual survival. I believe we can trust each other at least that long,” Sonny continued. “Do I need to give the consortium a moment to vote on your decision, or is there more to discuss?”
“If you could give us a moment to discuss amongst ourselves, we can proceed afterwards,” Hari said.
The Journey of Atlantis_Leaving Home Page 6