The Turing Test


by Andrew Updegrove


  “So,” Shannon said, “you might say that the program is more intelligent at that point than it was when it started?”

  “Well, ‘intelligent’ might be too strong a word at this point. But the program is certainly more capable, and you could definitely say that it’s ‘learned’ something.”

  “What’s another method?”

  “Computers weren’t very powerful when researchers first went to work on artificial intelligence, so a lot of the techniques computer scientists came up with were designed to figure out as much as they could in the shortest amount of time with the least computing power. One way they did that was, in effect, to program computers to make guesses based on the information they had. If a guess paid off, the program had saved time. And if it didn’t, it was no worse off than it was before. They called that kind of shortcut technique a ‘heuristic,’ and programs still use that approach today.

  “Another thing they did was to push what a program learned back into the way the program was making decisions, rather than just putting information into a database. That way, if a program ran into a problem, it could backtrack to the point where it went wrong and go forward with that new knowledge in mind. To use the maze example again, if the program realized that right turns almost never worked out, it could incorporate that learning into its decision-making process. Then it would go back to the last right turn it made and move forward from there, always trying left turns first. That was another advance.”
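
  A minimal sketch, in Python, of the maze technique described above: a depth-first solver that backtracks out of dead ends and keeps per-direction statistics, so guesses that paid off before get tried first. The grid representation and scoring scheme are illustrative assumptions, not anything specified in the passage.

```python
# Toy maze solver: depth-first search with backtracking, plus a learned
# preference ("heuristic") for directions that have worked out before.

def solve(maze, start, goal):
    # maze: set of open (row, col) cells; start, goal: (row, col) tuples
    scores = {d: 0 for d in [(0, 1), (1, 0), (0, -1), (-1, 0)]}
    path, visited = [start], {start}

    def step(cell):
        if cell == goal:
            return True
        # Try directions in order of how often they have paid off so far
        for d in sorted(scores, key=scores.get, reverse=True):
            nxt = (cell[0] + d[0], cell[1] + d[1])
            if nxt in maze and nxt not in visited:
                visited.add(nxt)
                path.append(nxt)
                if step(nxt):
                    scores[d] += 1  # the guess paid off: prefer it next time
                    return True
                path.pop()          # dead end: backtrack to the last choice
                scores[d] -= 1      # and remember that this guess failed
        return False

    return path if step(start) else None

# A small maze whose top row is a dead end, forcing a backtrack
maze = {(0, 0), (0, 1), (0, 2), (1, 0), (2, 0), (2, 1), (2, 2)}
print(solve(maze, (0, 0), (2, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)] after retreating from the top row
```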

  “How long did it take to get to that point?” Shannon asked.

  “Oh, we’re talking about back in the 1960s. They’d been at it for about ten years by then.”

  “Really? If that’s where they were after ten years, how come we’re not a lot farther along by now?”

  “Well, for starters, let’s talk about the difference between what they call ‘narrow’ and ‘general’ intelligence. If you’re talking about just a maze, there are only so many variables to work with, and only a few skills involved. After you’ve written a program to solve one maze, you should be able to use the same program to solve every other maze ever created in the same way. That’s an example of ‘narrow’ intelligence, and programs have been around for decades that do useful work in all kinds of very specific narrow areas. You encounter lots of these narrow AIs every day, like mapping programs on your phone and advertising agents that recommend products to you based on what you, or people like you, have bought before. But ‘general’ intelligence is another thing.”

  “What’s general intelligence? Everything else?”

  “Well, for purposes of this discussion, let’s say yes. A general intelligence AI would be able to do anything a human could, as well and as quickly. And that’s enormously challenging. Let’s use an autonomous car program as an example. That’s still a narrow AI, because all it can do is drive a car and nothing else. But just look at all the things that program must be capable of: making decisions instantaneously in all kinds of situations – how long it will take to stop, depending on road conditions; how fast it can legally drive on any given stretch of road; how to tell the difference between a road sign and a pedestrian; and much, much more.

  “It also needs to take in and correctly make use of massive amounts of sensor data in real time, like how close the car is to the shoulder and the center line of the road, how fast and where every other car within hundreds of yards is going, whether a light is changing up ahead, and so on. If you split that up into different categories, you get into lots of very tough problems computer scientists have been struggling with for decades.”

  “Such as?”

  “How about image recognition? It’s easy to imagine teaching a computer to recognize two-dimensional outlines, like squares and circles, in a digital document, because it’s easy to turn those figures into mathematical relationships. Once those relationships are established, a computer can identify any set of four equal-length lines joined at right angles as a ‘square.’ But how about if instead of a digitized square, we want the computer to identify a visual image of a rotating wooden block? The first problem is that now we have to teach a computer how to work with data it receives from an external source, like a video camera.”

  “Interesting. So how did they do that?”

  “In the first experiments, they put a light to one side and then set a wooden block on a table, taking advantage of the contrast between the light and shadowed areas of the block. Where the brightness changed abruptly, they programmed the computer to recognize a ‘line.’ That was a good start, but what the computer was ‘seeing’ now was a whole lot more complicated than a two-dimensional square. For one thing, unless the camera is looking at a block head-on, there aren’t any right angles anymore, and the angles that are there keep changing as the block’s orientation changes. So now you need to write an algorithm that describes the changes that the perceived angles in a cube go through as it rotates if you want your program to still be able to recognize something called a ‘cube.’”
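
  A crude sketch, in Python, of that first step: mark a “line” wherever neighboring pixel brightnesses differ by more than a threshold. The array format and threshold value are assumptions for illustration.

```python
# Edge detection in the spirit of the early block experiments: an edge is
# wherever brightness shifts abruptly between adjacent pixels.

def edge_map(image, threshold=50):
    # image: 2D list of grayscale values, 0 (dark) to 255 (bright)
    rows, cols = len(image), len(image[0])
    edges = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            right = abs(image[r][c] - image[r][c + 1]) if c + 1 < cols else 0
            down = abs(image[r][c] - image[r + 1][c]) if r + 1 < rows else 0
            if max(right, down) > threshold:  # abrupt light/dark transition
                edges[r][c] = 1
    return edges

# A brightly lit block face (200) against a shadowed background (20):
image = [
    [20, 20, 20, 20],
    [20, 200, 200, 20],
    [20, 200, 200, 20],
    [20, 20, 20, 20],
]
for row in edge_map(image):
    print(row)  # the 1s trace the light/dark boundary of the block face
```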

  “How long ago was that?”

  “Still the 1960s.”

  “And yet we still don’t have all-purpose machine vision, do we?”

  “Well, there’s a whole lot more to image recognition than that suggests. For one thing, you don’t want to write a different algorithm to deal with every geometric shape. That means coming up with one that’s a whole lot more complicated and powerful. And still, we’re only talking about one visual recognition challenge. Now imagine we’re talking about a face. Any straight lines?”

  “Just glasses, maybe.”

  “Any abrupt shifts between light and dark?”

  “No.”

  “Is there a big difference between a front view and a profile?”

  “Okay, I get the picture. And then, I guess, there’s also the fact that, until recently, computers weren’t powerful enough to analyze images like that in real time.”

  “Absolutely. And don’t forget the stakes can be very high. Let’s go back to our self-driving car again, and assume we’re traveling sixty miles an hour towards a curve in the road. How is it supposed to tell the difference between a billboard at that curve carrying an ad for Frank’s Produce and a truck crossing the road with the same ad on it?”

  “You’re not making me feel good about self-driving cars.”

  “Well, they’re making a lot of progress really quickly now. Anyway, image recognition and self-driving cars aren’t the only tough challenges. Voice recognition is a whole lot more difficult than anyone expected. Computer scientists have been working hard on that one for more than half a century, and the results still aren’t perfect.”

  “What’s the big problem?”

  “Well, in my view, there are really two crucial challenges. The first one is giving a program all the tools it needs to solve problems – that means enough memory organized in the right way, enough processing power, the right kind of sensors and data to tell it what it needs to know, and most of all, the right algorithms to allow it to efficiently and effectively make use of those resources.”

  “I guess that’s kind of obvious. And I can see how you’d need to make even more progress in every one of those areas to support what you called ‘general intelligence.’ What’s the second challenge?”

  “That would be giving an AI the ability to teach itself, taking context into account. So far, programs have been created that can learn specific things in specific situations in order to perform specific jobs. There are other projects that have tried to work more broadly. There’s one called ‘Cyc,’ from the middle letters of the word encyclopedia, that has been adding hundreds of thousands of pieces of knowledge to a computer database to help a program develop the equivalent of what we think of as common sense. Other projects are trying to teach computers to acquire knowledge by reading.

  “But so far, we haven’t gotten to the point of creating a program with anything like the all-purpose ability of a person to absorb everything she senses in the world around her, unconsciously integrate that into all she knew before, and then make use of that new knowledge to do all sorts of useful things.”

  “You stopped kind of abruptly there. Why?”

  “It occurred to me that to finish up our lightning history of AI, I might need to add the words ‘until now.’”

  * * *

  The next day, Frank received the first cut of the new data Shannon had requested. He went back and forth with the NSA data analysts for another day, driving them crazy with additional search filters and requests to correlate results with other data, until he was satisfied.

  “Take a look,” he said, handing Shannon a sheaf of spreadsheets.

  “Do you want to be a bit more specific?” she asked, squinting at the endless columns of tiny figures.

  “Sorry – sure. If you go through the numbers, every announcement likely to reduce global greenhouse gas emissions by more than 0.005 percent resulted in an attack, but only if the announcement related to one of the top twenty countries, ranked by CO2 emissions. That’s really incredible.”

  “Because there’s a bright-line cutoff?”

  “That part’s interesting but not unexpected. What I find significant is that every single announcement within the same parameters resulted in a responsive attack. Think how many exploits must have been planned to be able to do that, and how much unique malware must be out there, just waiting to be triggered. It’s incredible to me that anyone, anywhere, could have infiltrated so many different systems and designed so many attacks. We’ve analyzed thirteen separate attack waves now. That’s a fantastic accomplishment by whoever is behind it.”

  “Does it give you any clue who that might be?”

  “That’s the weirdest part of all. I can’t imagine anyone other than the best government teams in the U.S., Russia, and China staging attacks of this range and sophistication. But those countries have been hit much too hard for the attacks to be camouflage to throw investigators off. And anyway, I’m not sure even one of those teams could pull off something like this. Just think how many vulnerabilities you’d have to buy to invade this many different systems.”

  “Buy?” Shannon asked.

  “Sure. There’s an active market for buying and selling zero-day exploits. You remember what they are?”

  “Software vulnerabilities that no one knows about yet.”

  “Right. There are lots of hackers out there who make a good living finding vulnerabilities and then selling them to the highest bidder. Zero-day vulnerabilities for popular or critical software programs go for a lot – sometimes hundreds of thousands of dollars, and even more. Do you remember when the FBI wanted to break into the iPhone of the terrorists who killed fourteen people in San Bernardino?”

  “Of course. Apple wouldn’t help them, because they wanted to protect the privacy of their other customers.”

  “That’s right. The FBI paid over $1.3 million to someone who figured out how to crack the access code.”

  “I assume that was a one-off case, though,” Shannon said. “Who would want to buy a vulnerability besides the developer of the software with the flaw?”

  “Before I answer that, let me challenge the assumption you just made. The developer of the program might never get the chance to buy that vulnerability at all.”

  “Why? Isn’t the vendor the person the hacker would go to first?”

  “Some would. But unfortunately, others wouldn’t. People willing to tell a vendor about a flaw often do so for free, out of a sense of community service. But people who want to make as much money as possible are often happy to sell a vulnerability to anyone, including a criminal, if he’s the highest bidder.”

  “I guess I shouldn’t be surprised. Does that mean the FBI or Homeland Security outbid everyone else and then tell the software vendors where the flaws are, so they can fix them?”

  “Government agencies do buy a lot of zero-day exploits. But they don’t pass them along.”

  “That sounds crazy. Why not?”

  “So they can use them. And not just to hack into the systems of suspected terrorists and other bad guys abroad, but right here at home, without having to let anyone know – even the software developers or Internet service providers.”

  “So, you’re saying our government buys up vulnerabilities and lets everybody around the world keep using flawed software? Wouldn’t it be just a matter of time before someone else found the same vulnerability and exploited it? Maybe against us – or, heck, maybe against the same agency?”

  “That’s right. Don’t forget the government not so long ago tried to get software vendors to build ‘backdoors’ into their own software so the agencies could use them. But the software vendors told the government to take a hike, since black hats would inevitably discover the same backdoors and exploit them. So, the agencies buy as many zero-day vulnerabilities as they can on the dark Internet instead.”

  Shannon shook her head. “At least they’d still need a warrant to exploit those vulnerabilities, right?”

  “That’s what the government says. If you believe it always plays by the book, then there’s not a lot to worry about. But not everyone believes the government always will, assuming it even does now. And then there’s the fact that the government itself has been hacked. Back in 2017, somebody – probably the Russians – hacked the NSA and stole a huge library of zero-day vulnerabilities. Then they posted them all to a public Web site. Not long after, somebody used one of them to stage a global ransomware attack against hundreds of thousands of computers.”

  “So, who is it on our side that buys all those zero-day vulnerabilities?” Shannon asked. “The CIA?”

  Frank frowned. “Now that you mention it, far and away the biggest buyer of zero-day exploits is the NSA.”

  7

  Sorry. Gotta Split

  The ancient fault yielded in a titanic lurch, splitting a hundred miles of sea floor and heaving one side upward a full fifteen feet. That action in turn thrust trillions of tons of seawater skyward.

  Three and a half minutes later, the needle of a seismic monitor launched into wild gyrations as the first vibrations reached McMurdo Station, on the edge of the Ross Ice Shelf in Antarctica. For more than two and a half minutes, the shocks ebbed and flowed, shaking awake hundreds of scientists and support personnel. In a mob, they streamed into streets illuminated by the near-perpetual light of the polar spring.

  One of those who stumbled out into the icy street was John Milne. Ignoring the still-moving ground and the shattered glass on the snow, he ran to the door of the geophysical lab. Scanning the paper drum of the monitor, he saw that the tremors had mostly ranged between 6.4 and 6.9 on the Richter scale. But the first shock registered 7.2. With the nearest tectonic plate border hundreds of miles away, the quake must have been a monster.

  He logged into the global earthquake network to see where else the quake had been detected, but he saw nothing that could be related. That meant the epicenter must be closer to McMurdo than to any other monitoring station. That was surprising, as the Antarctic plate borders weren’t particularly active. He could recall reading about only one big quake in the region, and he looked it up. Hmm. That one was an 8.1 event in 1998 near the Balleny Islands. 8.1 was a big quake.

  He pulled up a map of the locations of every seismograph within two thousand miles and realized the epicenter must have been very close indeed; the monitors closest to McMurdo were in Christchurch, New Zealand, and Hobart, Tasmania, neither of which had yet detected the event. He’d need data from at least two locations in addition to his own to determine the epicenter, using the time it took the shockwaves to reach each one.
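
  A toy Python illustration of that triangulation, assuming a flat map and a single wave speed (real practice uses spherical geometry and multiple wave phases); every number here is invented. With readings from three stations, the differences in arrival times single out one location.

```python
# Toy epicenter search: grid-search the point whose predicted differences
# in arrival time between stations best match the observed ones.
import math

SPEED = 8.0  # assumed seismic wave speed, km/s

def locate(stations, arrivals, grid=range(0, 1001, 10)):
    # stations: (x, y) positions in km; arrivals: observed times in seconds
    def mismatch(px, py):
        travel = [math.hypot(px - x, py - y) / SPEED for x, y in stations]
        # Compare differences so the unknown origin time cancels out
        return sum(((t - travel[0]) - (a - arrivals[0])) ** 2
                   for t, a in zip(travel, arrivals))
    return min(((x, y) for x in grid for y in grid),
               key=lambda p: mismatch(*p))

# Three stations, with arrival times synthesized from a source at (600, 400)
stations = [(0, 0), (1000, 0), (0, 1000)]
arrivals = [math.hypot(600 - x, 400 - y) / SPEED for x, y in stations]
print(locate(stations, arrivals))  # (600, 400)
```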

  He watched the screen intently, waiting for more data to arrive, but none did. He picked up the phone and called the station manager.

  “Henry, this is John Milne at the geoscience lab. I think we need to assume a tsunami may be on the way. We should start evacuating the station pronto.”

  “How soon could it arrive?”

  “I don’t know. All I can tell so far is that the epicenter can’t be too far off – less than a thousand miles, certainly. A tsunami moves about five hundred miles an hour through deep water, slowing and piling up higher when it starts to shelve, so if we’re going to be hit, it could get here in less than an hour.”

  “How big?”

  “There aren’t any detection buoys in the Southern Ocean, so there won’t be any way to tell till it gets here, assuming there is one. If the fault shifted miles below the sea floor, there won’t be anything to worry about. But if it shifted near the surface, it could be a big one.”

  “Got it. We’ll get started right away.”

  Milne began pulling together a portable seismic monitor, backup batteries, and anything else he could think of that might be useful. The siren that was now blaring nearby added urgency to his task.

  Lugging a bin crammed with gear, he tottered down the stairs. Out on the street, he dragged it behind him across the snow and hoisted it into the back of one of the trucks in a convoy forming up nearby. He climbed in after it and reopened his laptop. Finally! There was the data he was looking for. Christchurch had registered the quake. He needed one more report to know for certain where the quake had occurred.

  Ah! Hobart had the quake now, too. The truck lurched into gear, and they were underway. And here was the first estimate of the epicenter. Wow – it really had been close – six hundred miles north-northeast of McMurdo.

  Soon there was more data coming in from the global earthquake monitoring system. It was a big event indeed – the first estimate was 8.4 on the Richter scale – a major quake. He called the station manager on his satellite phone and told him everyone not already on their way should start walking while they waited for a vehicle to return for them.

 
