Army of None

by Paul Scharre


  The students’ projects have been getting better over the years, Dela Cuesta explained, as they are able to harness more advanced open-source components and software. A few years ago, a class project to create a robot tour guide for the school took two years to complete. Now, the timeline has been shortened to nine weeks. “The stuff that was impressive to me five, six years ago we could accomplish in a quarter of the time now. It just blows my mind,” he said. Still, Dela Cuesta pushes students to build custom components themselves rather than use existing ones. “I like to have the students, as much as possible, build from scratch.” Partly, this is because it’s often easier to fit custom-built hardware into a robot, an approach made possible by the impressive array of tools Dela Cuesta has in his shop. Along a back wall were five 3-D printers, two laser cutters to make custom parts, and a mill to etch custom circuit boards. An even more important reason to have students do things themselves is that they learn more that way. “Custom is where I want to go,” Dela Cuesta said. “They learn a lot more from it. It’s not just kind of this black box magic thing they plug in and it works. They have to really understand what they’re doing in order to make these things work.”

  Across the hall in the computer systems lab, I saw the same ethos on display. The teachers emphasized having students do things themselves so they were learning the fundamental concepts, even if that meant re-solving problems that have already been solved. Repackaging open-source software isn’t what the teachers are after. That isn’t to say that students aren’t learning from the explosion in open-source neural network software. On one teacher’s desk sat a copy of Jeff Heaton’s Artificial Intelligence for Humans, Volume 3: Deep Learning and Neural Networks. (This title raises the uncomfortable question of whether there is a parallel course of study, Artificial Intelligence for Machines, where machines learn to program other machines. The answer, I suppose, is “Not yet.”) Students are learning how to work with neural networks, but they’re doing so from the bottom up. A junior explained to me how he trained a neural network to play tic-tac-toe—a problem that was solved over fifteen years ago but remains a classic introductory coding problem. Next year, TJ will offer a course in computer vision that will cover convolutional neural networks.

  Maybe it’s a cliché to say that the projects students were working on are mind-blowing, but I was floored by the things I saw TJ students doing. One student was disassembling a Keurig machine and turning it into a net-enabled coffeemaker so it could join the Internet of Things. Wires snaked through it as though the internet were physically infiltrating the coffeemaker, like Star Trek’s Borg. Another student was tinkering with something that looked like a cross between a 1980s Nintendo Power Glove and an Apple smartwatch. He explained it was a “gauntlet,” like that used by Iron Man. When I stared at him blankly, he explained (in that patient explaining-to-an-old-person voice that young people use) that a gauntlet is the name for the wrist-mounted control that Iron Man uses to fly his suit. “Oh, yeah. That’s cool,” I said, clearly not getting it. I don’t feel like I need the full functionality of my smartphone mounted on my wrist, but then again I wouldn’t have thought ten years ago that I needed a touchscreen smartphone on my person at all times in the first place. Technology has a way of surprising us. Today’s technology landscape is a democratized one, where game-changing innovations don’t just come out of tech giants like Google and Apple but can come from anyone, even high-school students. The AI revolution isn’t something that is happening out there, only in top-tier research labs. It’s happening everywhere.

  THE EVERYONE REVOLUTION

  I asked Brandon Tseng from Shield AI where this path to ever-greater autonomy was taking us. He said, “I don’t think we’re ever going to give [robots] complete autonomy. Nor do I think we should give them complete autonomy.” On one level, it’s reassuring to know that Tseng, like nearly everyone I met working on military robotics, saw a limit to how much autonomy we should give machines. Reasonable people might disagree on where that limit is, and for some people autonomous weapons that search for and engage targets within narrow human-defined parameters might be acceptable, but everyone I spoke with agreed there should be some limits. But the scary thing is that reasonableness on the part of Tseng and other engineers may not be enough. What’s to stop a technologically inclined terrorist from building a swarm of people-hunting autonomous weapons and letting them loose in a crowded area? It might take some engineering and some time, but the underlying technological know-how is readily available. We are entering a world where the technology to build lethal autonomous weapons is available not only to nation-states but to individuals as well. That world is not in the distant future. It’s already here.

  What we do with the technology is an open question. What would be the consequence of a world of autonomous weapons? Would they lead to a robutopia or robopocalypse? Writers have pondered this question in science fiction for decades, and their answers vary wildly. The robots of Isaac Asimov’s books are mostly benevolent partners to humans, helping to protect and guide humanity. Governed by the Three Laws of Robotics, they are incapable of harming humans. In Star Wars, droids are willing servants of humans. In the Matrix trilogy, robots enslave humans, growing them in pods and drawing on their body heat for power. In the Terminator series, Skynet strikes in one swift blow to exterminate humanity after it determines humans are a threat to its existence.

  We can’t know with any certainty what a future of autonomous weapons would look like, but we do have better tools than science fiction to guess at what promise and perils they might bring. Humanity’s past and present experiences with autonomy in the military and other settings point to the potential benefits and dangers of autonomous weapons. These lessons allow us to peer into a murky future and, piece by piece, begin to discern the shape of things to come.

  PART III

  Runaway Gun

  9

  ROBOTS RUN AMOK

  FAILURE IN AUTONOMOUS SYSTEMS

  March 22, 2003—The system said to fire. The radars had detected an incoming tactical ballistic missile, or TBM, probably a Scud missile of the type Saddam had used to harass coalition forces during the first Gulf War. This was their job, shooting down the missile. They needed to protect the other soldiers on the ground, who were counting on them. It was an unfamiliar set of equipment; they were supporting an unfamiliar unit; they didn’t have the intel they needed. But this was their job. The weight of the decision rested on a twenty-two-year-old second lieutenant fresh out of training. She weighed the available evidence. She made the best call she could: fire.

  With a BOOM-ROAR-WOOSH, the Patriot PAC-2 missile left the launch tube, lit its engine, and soared into the sky to take down its target. The missile exploded. Impact. The ballistic missile disappeared from their screens: their first kill of the war. Success.

  From the moment the Patriot unit left the States, circumstances had been against them. First, they’d fallen in on a different, older set of equipment than what they’d trained on. Then, once in theater, they were detached from their parent battalion and attached to a new battalion they hadn’t worked with before. The new battalion was using the newer-model equipment, which meant their old equipment (which they weren’t fully trained on in the first place) couldn’t communicate with the rest of the battalion. They were in the dark. Their systems couldn’t connect to the larger network, depriving them of vital information. All they had was a radio.

  But they were soldiers, and they soldiered on. Their job was to protect coalition troops against Iraqi missile attacks, and so they did. They sat in their command trailer, with outdated gear and imperfect information, and they made the call. When they saw the missiles, they took the shots. They protected people.

  The next night, at 1:30 a.m., there was an attack on a nearby base. A U.S. Army sergeant threw a grenade into a command tent, killing one soldier and wounding fifteen. He was promptly detained, but his motives were unclear. Was this the work of one disgruntled soldier, or was he an infiltrator? Was this the first strike in a larger plot? Word of the attack spread over the radio. Soldiers were sent to guard the Patriot battery’s outer perimeter in case follow-on attacks came, leaving only three people in the command trailer: the lieutenant and two enlisted soldiers.

  Elsewhere that same night, farther north over Iraq, British Flight Lieutenant Kevin Main turned his Tornado GR4A fighter jet around and headed back toward Kuwait, his mission for the day complete. In the back seat as navigator was Flight Lieutenant Dave Williams. What Main and Williams didn’t know as they rocketed back toward friendly lines was that a crucial piece of equipment, the identification friend or foe (IFF) system, wasn’t on. The IFF was supposed to broadcast a signal to other friendly aircraft and ground radars to let them know their Tornado was friendly and not to fire. But the IFF wasn’t working. The reason why is still a mystery. It could be that Main and Williams turned it off while over Iraqi territory so as not to give away their position and forgot to turn it back on when returning to Kuwait. It could be that the system simply broke, possibly from a power supply failure. The IFF had been tested by maintenance personnel before the aircraft took off, so it should have been functional, but for whatever reason it wasn’t broadcasting.

  As Main and Williams began their descent toward Ali Al Salem air base, the Patriot battery tasked with defending coalition bases in Kuwait sent out a radar signal into the sky, probing for Iraqi missiles. The radar signal bounced off the front of Main and Williams’ aircraft and reflected back, where it was received by the Patriot’s radar dish. Unfortunately, the Patriot’s computer didn’t register the radar reflection from the Tornado as an aircraft. Because of the aircraft’s descending profile, the Patriot’s computer tagged the radar signal as coming from an anti-radiation missile. In the Patriot’s command trailer, the humans didn’t know that a friendly aircraft was coming in for a landing. Their screen showed a radar-hunting enemy missile homing in on the Patriot battery.

  The Patriot operators’ mission was to shoot down ballistic missiles, which are different from anti-radiation missiles. It would be hard for a radar to confuse an aircraft flying level with a ballistic missile, which follows a parabolic trajectory through the sky like a baseball. Anti-radiation missiles are different. They have a descending flight profile, like an aircraft coming in for a landing. Anti-radiation missiles home in on radars and could be deadly to the Patriot. Shooting them down wasn’t the Patriot operators’ primary job, but they were authorized to engage if a missile appeared to be homing in on their radar.
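
  To see why the confusion was plausible, consider a toy classifier keyed only to a track’s vertical profile. This is a minimal sketch in Python; everything in it is invented for illustration, and the Patriot’s real classification logic is far more sophisticated and not public:

```python
def classify_track(altitudes: list[float]) -> str:
    """Toy classifier over a track's altitude history (illustration only)."""
    assert len(altitudes) >= 2, "need at least two radar returns"
    rising_at_start = altitudes[1] > altitudes[0]
    falling_at_end = altitudes[-1] < altitudes[-2]
    if rising_at_start and falling_at_end:
        return "ballistic missile"   # parabolic arc, like a thrown baseball
    if falling_at_end:
        # A steady descent fits an anti-radiation missile homing in on a
        # radar -- and fits a friendly aircraft on final approach equally well.
        return "anti-radiation missile?"
    return "aircraft"                # flying level

print(classify_track([1.0, 6.0, 9.0, 6.0, 1.0]))  # ballistic missile
print(classify_track([9.0, 7.0, 4.0, 2.0]))       # anti-radiation missile?
print(classify_track([5.0, 5.0, 5.0]))            # aircraft
```

  The ambiguous middle case is exactly the trap the Tornado fell into: to a filter keyed to flight profile, a descending friendly aircraft and an inbound anti-radiation missile can look alike.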

  The Patriot operators saw the missile headed toward their radar and weighed their decision. The Patriot battery was operating alone, without the ability to connect to other radars on the network because of their outdated equipment. Deprived of the ability to see other radar inputs directly, the lieutenant called over the radio to the other Patriot units. Did they see an anti-radiation missile? No one else saw it, but this meant little, since other radars may not have been in a position to see it. The Tornado’s IFF signal, which would have identified the blip on their radar as a friendly aircraft, wasn’t broadcasting. Even if it had been working, as it turns out, the Patriot wouldn’t have been able to see the signal—the codes for the IFF hadn’t been loaded into the Patriot’s computers. The IFF, which was supposed to be a backup safety measure against friendly fire, was doubly broken.

  There were no reports of coalition aircraft in the area. There was nothing at all to indicate that the blip that appeared on their scopes as an anti-radiation missile might, in fact, be a friendly aircraft. They had seconds to decide.

  They took the shot. The missile disappeared from their scope. It was a hit. Their shift ended. Another successful day.

  Elsewhere, Main and Williams’ wingman landed in Kuwait, but Main and Williams never returned. The call went out: there is a missing Tornado aircraft. As the sun came up over the desert, people began to put two and two together. The Patriot had shot down one of their own.

  U.S. Army Patriot Operations: The Patriot air and missile defense system is used to counter a range of threats from enemy aircraft and missiles.

  The Army opened an investigation, but there was still a war to fight. The lieutenant stayed at her post; she had a job to do. The Army needed her to do that job, to protect other soldiers from Saddam’s missiles. Confusion and chaos are unfortunate realities of war. Unless the investigation determined that she was negligent, the Army needed her in the fight. More of Saddam’s missiles were coming.

  The very next night, another enemy ballistic missile popped up on their scope. They took the shot. Success. It was a clean hit—another enemy ballistic missile down. The same Patriot battery had two more successful ballistic missile shootdowns before the end of the war. In all, they were responsible for 45 percent of all successful ballistic missile engagements in the war. Later, the investigation cleared the lieutenant of wrongdoing. She made the best call with the information she had.

  Other Patriot units were fighting their own struggle against the fog of war. The day after the Tornado shootdown, a different Patriot unit got into a friendly fire engagement with a U.S. F-16 aircraft flying south of Najaf in Iraq. This time, the aircraft shot first. The F-16 fired a radar-hunting AGM-88 high-speed anti-radiation missile. The missile zeroed in on the Patriot’s radar and knocked it out of commission. The Patriot crew was unharmed—a near miss.

  After these incidents, a number of safety measures were immediately put in place to prevent further fratricides. The Patriot has both a manual (semiautonomous) and auto-fire (supervised autonomous) mode, which can be kept at different settings for different threats. In manual mode, a human is required to approve an engagement before the system will launch. In auto-fire mode, if there is an incoming threat that meets its target parameters, the system will automatically engage the threat on its own.

  Because ballistic missiles often afford very little reaction time before impact, Patriots sometimes operated in auto-fire mode for tactical ballistic missiles. Now that the Army knew the Patriot might misidentify a friendly aircraft as an anti-radiation missile, however, it ordered Patriot units to operate in manual mode for anti-radiation missiles. As an additional safeguard, systems were now kept in “standby” status so they could track targets but could not fire without a human bringing the system back to “operate” status. Thus, in order to fire on an anti-radiation missile, two steps were needed: bringing the launchers to operate status and authorizing the system to fire on the target. Ideally, this would prevent another fratricide like the Tornado shootdown.
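
  That two-step interlock is simple enough to sketch in a few lines of code. The sketch below is in Python, and everything in it (the names FireMode, SystemStatus, and may_fire, and the logic itself) is an illustrative assumption, not the Patriot’s actual software:

```python
from enum import Enum

class FireMode(Enum):
    MANUAL = "manual"    # semiautonomous: a human must approve each engagement
    AUTO = "auto-fire"   # supervised autonomous: fires unless a human intervenes

class SystemStatus(Enum):
    STANDBY = "standby"  # can track targets but cannot fire
    OPERATE = "operate"  # launch permitted, subject to the fire mode

def may_fire(status: SystemStatus, mode: FireMode,
             human_authorized: bool, human_vetoed: bool) -> bool:
    """Hypothetical model of the two-step safeguard described above."""
    if status is not SystemStatus.OPERATE:
        return False                 # step 1: still in standby, no launch
    if mode is FireMode.MANUAL:
        return human_authorized      # step 2, manual: human must positively approve
    return not human_vetoed          # step 2, auto-fire: fires unless human halts

# Under the post-incident rules, an anti-radiation missile track starts in
# standby and manual mode, so human inaction means no launch:
print(may_fire(SystemStatus.STANDBY, FireMode.MANUAL, True, False))   # False
print(may_fire(SystemStatus.OPERATE, FireMode.MANUAL, False, False))  # False
print(may_fire(SystemStatus.OPERATE, FireMode.AUTO, False, False))    # True
```

  The sketch makes the key asymmetry visible: in manual mode, human inaction means no launch; in auto-fire mode, human inaction means launch.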

  Despite these precautions, a little over a week later on April 2, disaster struck again. A Patriot unit operating north of Kuwait on the road to Baghdad picked up an inbound ballistic missile. Shooting down ballistic missiles was their job. Unlike the anti-radiation missile that the earlier Patriot unit had fired on—which turned out to be a Tornado—there was no evidence to suggest ballistic missiles might be misidentified as aircraft.

  Patriot Decision-Making Process: The OODA decision-making process for a Patriot system.

  OBSERVE: Radar detects and classifies the object. (What is it?)

  ORIENT: Humans apply outside information and context to establish situational awareness and apply the rules of engagement. (Whose is it? Is it hostile? Is it a valid target?)

  DECIDE: Engage? The decision whether or not to fire, in either manual mode (semiautonomous), where the human operator must authorize the engagement or the system will not fire, or auto-fire mode (supervised autonomous), where the system will fire unless the human operator halts the engagement.

  ACT: The system fires and the missile maneuvers to the target. The human operator can choose to abort the missile while in flight.

  In manual mode, the human operator must take a positive action in order for the system to fire. In auto-fire mode, the human supervises the system and can intervene if necessary, but the system will fire on its own if the human does not intervene. Auto-fire mode is vital for defending against short-warning attacks where there may be little time to make a decision before impact. In both modes, the human can still abort the missile while in flight.

  What the operators didn’t know—what they could not have known—was that there was no missile. There wasn’t even an aircraft misidentified as a missile. There was nothing. The radar track was false, a “ghost track” likely caused by electromagnetic interference between their radar and another nearby Patriot radar. The Patriot units supporting the U.S. advance north to Baghdad were operating in a nonstandard configuration. Units were spread in a line south-to-north along the main highway to Baghdad instead of the usual widely distributed pattern they would adopt to cover an area. This may have caused radars to overlap and interfere.

 
