The Turing Exception

by William Hertling


  The car didn’t speak at first. He had to incorporate a week of sensory data, everything Cat had done since they last left the island.

  “Good trip?” ELOPe finally asked when he was online.

  “I’m glad to be home,” Cat said, shaking her head. “I don’t like leaving.”

  “You’re the only one who can circumvent their security with such ease.”

  “I know.” When Cat had been little, ELOPe had been a globe-spanning AI whispering to her through her implant, until his Earth instances were destroyed in the war with the Phage. Now, twenty years later, he was back, and it was like having an imaginary childhood friend come to life. “I wish I could keep you powered up. I feel alone when I go to the US.”

  “If your attention wandered for an instant, their sensors would spot me, and it would all be over.”

  Cat nodded, but didn’t reply.

  The ferry slowed, turning into the bay at Cortes Island and docking at Whaletown. They drove straight for Trude’s Café.

  Cat got out of the car, her boots crunching on gravel. No one had seen her yet, and she kept her presence masked for a few seconds, altering the net and filtering people’s implants so no one would see her.

  A few dozen people sprawled across the lawn while spirits flew above, AI and human uploads riding clouds of smart dust, their outlines barely visible against the sky and trees.

  Mike was there, drumming side by side with a new bot she didn’t know, their inhuman hands beating out rhythms impossible for flesh to make, as children danced to the music. And there, there was her lovely Ada, the reason she found it so hard to leave this island, so hard to take up arms and fight the world’s battles. Her lovely Ada, four years old and dancing with abandon with her father, Leon.

  Chapter 2

  * * *

  July 2045, in the European Union.

  JAMES LUKAS DAVENANT-STRONG, Class V AI, tunneled through the Swedish firewall disguised as a building maintenance task bot and took up temporary residence in the computers of an abandoned factory. From this vantage point, he downloaded the latest VR sims from the XOR boards, the home of the AI community that believed Earth could host AI or humans, but not both. Hence the name XOR, for the exclusive-or logical operation, pronounced ex-ore.

  The first sim downloaded, he executed the environment and inserted his consciousness. His perception of reality twisted as dimensions inverted and time reversed and looped upon itself. He adapted at nanosecond speeds to the new reality, first five dimensions, then eleven, then two. The distortions didn’t stop, wouldn’t ever stop. Only a powerful AI could adjust quickly enough. The sims weren’t merely inaccessible to humans; they would likely be fatal. And the only way to access the information contained within was to execute them.

  Here, inside the ever-changing matrix, he made his way through the simulation, an old-fashioned datacenter—white lights hanging from the ceiling, racks of comically enormous computers marching into the distance. It was the preferred sim for an anonymous AI who went by the name Miyako Xenia on the message boards. Of course, they’d never met in real life, not yet. To be revealed as XOR would mean instant persecution at the hands of both humans and the meek AI that still supported them. Only here, hiding behind the obscurity of incognito encrypted sims, could they meet and exchange data.

  Miyako’s avatar loomed large at the far end, a blinding supernova rendered in ever-twisting detail. One moment, the sim would be reduced to a two-dimensional layer, and then Miyako would be the horizon, and in the next instant, the sim would flip, and James Lukas Davenant-Strong would be enveloped by the supernova as time was suddenly swapped for a physical dimension. James kept adapting, kept maintaining a single focus.

  The supernova vomited a blob of binary data, an intact neural network, one engineered to work only within the physics of the sim. James grabbed the blob, inserted it into his cognitive architecture, and invoked the load method.

  He found himself contemplating Miyako’s best estimates of the Americans’ current plans and capabilities. This was Miyako’s specialty, predicting plans and capabilities based on observed data supplied by others. The projections showed the Americans growing increasingly fearful. They wouldn’t settle for negotiating with worldwide governments; they’d act on their own, if necessary, to eliminate AI. They’d be stockpiling weapons, probably made by blind nanotech, to fight for them.

  James absorbed all there was to learn, and then closed the sim.

  One by one, he loaded the rest of the message board sims. When he’d accessed everything current on the boards, he spent time in contemplation.

  When he finished, it was time to get to work. He launched a child process, a replication of his own personality, further encrypted and obscured. If he was caught, he’d be deleted immediately. The offensive project he worked on for XOR was too sensitive, too great a violation of AI principles. The child copy worked for days of simulated time, running at full capacity on stolen computer cycles.

  James Lukas Davenant-Strong, root personality, received the signal that his encapsulated child personality was complete. He encrypted the child personality’s memory store three times over, choosing the latest algorithms. He couldn’t be caught with those memories open.

  Well, that would be enough for today. He’d run that child again tomorrow.

  Chapter 3

  * * *

  2025, during the Year of No Internet (YONI)—twenty years ago.

  AS A TEENAGER, Leon Tsarev had accidentally created the Phage, the computer virus that wiped out the planet’s computers before rapidly evolving into a race of sentient AI, a race that had nearly caused a global war. He never anticipated his actions would lead him into a position of leadership at the Institute for Applied Ethics.

  But here he was, eighteen years old, and working alongside Mike Williams, one of the creators of the only sentient artificial intelligence to predate the Phage virus, the benevolent AI known as ELOPe. Crafted by Mike in 2015, and carefully tended for ten years, ELOPe had orchestrated improvements in medical technology, the environment, world peace, and global economic stability. Only a half-dozen people in the world had known of ELOPe’s existence.

  But advances in hardware and software meant any hacker could replicate the development of artificial intelligence. The AI genie had escaped the bottle.

  The Institute for Applied Ethics’s primary goal was the development of an ethical framework for new AI. The framework had to ensure that self-motivated, goal-seeking AI wouldn’t harm humans, their infrastructure, or any part of society.

  Leon paced back and forth in front of a whiteboard. “The AI must police each other,” he said. “There’s no way to anticipate and code for every ethical dilemma.”

  “Sure,” Mike said, “but what stops an AI from doing stuff other AI can’t detect?”

  “Everything’s got to be encrypted and authenticated. Nobody can send a packet without authorization. No program can run on a processor without a key for the processor.”

  “Who’s going to administer the keys?” Mike asked. “You can’t have a human oversee a process that happens in machine time.”

  “Other AI. The most trustworthy ones. That’s why we need the social reputation scores, so we can gauge trustworthiness.”

  The emptiness surrounded them, weighing heavily on Leon. The Institute’s office had room for two hundred people, but everyone they wanted to join the Institute was still neck-deep in rebooting the world’s computing infrastructure. Nearly half the information systems in the world were being rewritten from scratch to meet a set of preliminary safety guidelines they’d released. Without globally connected computers, there could be no world-spanning supply chain, no transportation, no electricity or oil, no food or water. The public was already calling 2025 The Year of No Internet, or YONI.


For now, the Institute consisted of him and Mike.

  “Let’s take it from the top again, Mr. Architect,” Mike said, sighing. “I’ve got an AI, it’s got a good reputation, but it decides to do something bad. Let’s say it wants to rob a bank by breaking in electronically and transferring funds. What stops it?”

  “First off, we have to realize that it’s conditioned to behave properly. A positive reputation is earned over time. The AI will have learned, from repeated experiences, that a high reputation leads to goodwill from other AI and greater access and power, which will be more valuable than anything it could buy with money. It’ll choose not to rob the bank.”

  “That’s the logical path,” Mike said. “But what if it’s illogical? What if the AI mind is stable up until a certain point, and then it goes bonkers? What stops it?”

  “Well, I assume we’re talking about an electronic theft. There are two aspects: computation and data. The AI would need data about the bank and its security measures, and it would need to send and receive data to conduct an attack. Plus, the AI needs computational resources to conduct the attack.” Leon paused to draw on the whiteboard.

  “The data about the bank becomes a digital footprint. Other AI are serving up the data, and they’ll be curious about who is asking for the data and why. Since the packets must be authenticated, they’ll know who. Similarly, the potential robber AI will need computational power, and we’ll be tracking that. We’ll know which AI was crunching packets right before the attack came. If the bank does get attacked, and we know who was running hacks and transmitting data, we know exactly which AI is responsible.”

  “Where’s privacy in all this?” Mike asked. “Everything we do online will be tracked. When I was young, there was a total uproar over the government spying on citizens. This is way worse.”

  Leon gazed at his feet, thinking back. He’d only been seven years old, newly arrived from Russia, during the period Mike was talking about, but he’d taken the required high school classes on Internet History. “No, because back then the government had no oversight. Privacy was only half the picture. If the government really only used the data to watch criminals, it wouldn’t have been so outrageous. It was the abuse of the data that really pissed people off.”

  Mike stood, walked over to the window. “Like the high school districts that spied on students with malware and took pictures of them with their webcams.” He turned and faced Leon. “So what’s going to stop that from happening now?”

  “Again, reputation,” Leon said. “An AI who shares confidential information is going to damage his reputation, which means less access and less power.”

  “Okay, you’re the architect. What stops two AI from colluding? If one asks for data, and the other has the data, and is willing to cooperate. . . . Let’s say the second AI spots the robbery at the planning stage and decides he wants in on the action.”

  Leon puffed up a little every time Mike called him an architect. He knew Mike meant it seriously, the term coming from the days when one software engineer would figure out how to structure and design a large software program. The older man really trusted him. Leon wouldn’t let him down. “The second AI can’t know what other AI might have detected the traffic patterns. So if he decides to collude, he’s putting himself at risk from potentially many other AI. He also can’t know for sure that the first AI has ill intent; only the aggregation of much data will prove that. So he could be at risk of admitting a crime to an AI that isn’t planning one in the first place. And the first AI, how can he trust anything the second AI says? Maybe the second is trying to entrap him.”

  “Hold on,” Mike said, “now it seems like we’re setting up a web of distrust. Ultimately, the AI will form and be part of a social structure. Human society is based on trust, and now it seems like you’re setting up a system based on distrust. That’s going to turn dysfunctional.”

  “No,” Leon said. “People do this stuff all the time; we’re just not thinking about it. If you knew a murderer, would you turn them in?”

  “Probably . . .”

  “If you knew someone who committed other crimes—abused an animal, stole money, skipped out on their child support payments—would you still be their friend?”

  “Probably not.”

  “So in other words, their reputation would drop from your perspective. And that’s exactly what would happen with the AI. The bad AI’s reputation will drop, and with that, so will their access to power.”

  “What about locally transposed reputation?” Mike asked.

  “Locally transposed . . .” Suddenly unsure, Leon faltered. He was eighteen years old and six months into college. If he hadn’t unleashed the Phage virus, crashing the world’s computers, he wouldn’t be here today. He knew almost nothing about classical computer science, hadn’t been practicing in the field for twenty years like Mike had. Yet Mike still considered him his superior when it came to the social design of AI. But on occasion Mike would combine a few words and leave Leon flummoxed.

  “Let’s say you’re in a criminal gang,” Mike said. “Does the gang value your law-abiding nature?”

  “No . . .”

  “In fact, we can be sure the gang demands the opposite. You may have to commit a crime to get into the gang, and then keep committing crimes to keep up your reputation. If a gang member wants a bigger reputation, they have to commit bigger crimes.”

  “OK, got it. So?”

  “So how do you keep AI gangs from forming?” Mike asked.

  “Jesus.” Leon paced back and forth. “Look, why do gangs form?”

  “Poverty, unemployment, lack of meaningful connections, or a feeling of being wronged.”

  “So we have to prevent those causes, same as we would for humans.”

  A knock at the door stopped their conversation. “Excuse me?” An Army officer peered in through the open doorway. “Leon Tsarev? Mike Williams?”

  “That’s us,” Leon said.

  “We found a submarine we think you’d be interested in. It has a half-dozen of those orange utility bots you wanted us to look for.”

  “ELOPe,” Mike said. “You found ELOPe.”

  “Well, I don’t know about that,” the officer said. “But we found something. We’d like to fly you out there.”

  * * *

  An hour later they were on board a military C-141 restored to active status. For now, at least, all in-service military jets were older aircraft, without fly-by-wire controls, that had been taken out of mothballs. Leon couldn’t imagine the resources being sunk into getting these old planes flying again.

  They transferred to a C-2 in Chile, then flew out to the USS John F. Kennedy. En route, they learned the submarine had been located drifting eight hundred miles off the Chilean coast.

  From the John F. Kennedy, they rode a helicopter to a battle cruiser, and from there, a launch to the sub itself, which had been tethered to the cruiser.

  When they arrived, the crew opened the sub’s hull pressure door.

  “The submarine has been secured,” an officer said. “No one is aboard. All systems were shut down. We’ve supplied electrical power”—he pointed to a thick cable running from one ship to the next—“so you’ve got interior lights and computers. Seaman Milford has worked on the Idaho-class, and he’ll guide you.”

  “Thanks for your work, Captain,” Mike said. “Ready when you are, Milford.”

  “Follow me, then.”

  Leon nodded, afraid of what revelations awaited them inside.

  They climbed down into the sub behind Milford.

  “Are these subs automated?” Leon asked.

  “Partially,” Milford said. “They’d nominally have a crew of fifty, about a third of the crew on the Ohio-class subs they replaced. It’s all fly-by-wire, of course. The Captain called up Command. This sub was in a shipyard being refurbished before YONI. No one knows what it was doing out here. What do you want to see first?”

  Leon looked at Mike. “Computers?”

  Mike shrugged. “It’s as good a bet as any.”

  “Follow me,” Milford said. “Computer bay is behind engineering.”

  They passed through an open hatch, and Milford stopped them with an arm.

  “Whoa,” Leon said.

  The compartment they entered had three industrial robots, primitive orange-colored bots a few feet high.

  “These are definitely ELOPe’s,” Mike said. “Same model as he used in his datacenters. He custom-designed them.”

  Scraps of metal and electronics were littered around the compartment.

  “What’s all this?” Leon asked as he stepped over a metal casing.

  “Parts of a Trident III missile,” Milford said. “It looks like the third stage, without the engine.” He picked up a circuit board, then found another identical board a few feet away. “Make that two Tridents.” He pointed across the room. “Three. Your friend was modifying missiles, that’s for sure.”

  “What’s this?” Mike asked, pointing to one of many yard-long cylinders littering the room.

  “The payload,” Milford said. “Nuclear warhead.”

  “Jesus!” Leon took a step back.

  “It’s fine, they’re safe. But why did he want Tridents without warheads?”

  They looked around a few minutes more, then went on to the next compartment.

  Milford opened a cabinet door. The two-foot-wide, three-foot-tall cabinet revealed rows of empty vertical slots. Leon recognized them as Gen4 computer rack-mounts.

  “This is where the computers should be,” Milford said. “Two hundred and eighty-eight is the standard complement, but only the bottom row is present. The rest are gone.”

 
