  TRACE intends to harness these advances and others in machine learning to build better ATR algorithms. ATR algorithms that performed on par with or better than humans in identifying non-cooperative targets such as tanks, mobile missile launchers, or artillery would be a game changer for finding and destroying enemy targets. If the resulting target recognition system were of sufficiently low power to run on board the missile or drone itself, human authorization would not be required, at least from a purely technical point of view. The technology would enable weapons to hunt and destroy targets all on their own.

  Regardless of whether DARPA was intending to build autonomous weapons, it was clear that programs like CODE and TRACE were putting in place the building blocks that would enable them in the future. Tousley’s view was that it wasn’t DARPA’s call whether to authorize that next fateful step across the line to weapons that would choose their own targets. But if it wasn’t DARPA’s call whether to build autonomous weapons, then whose call was it?

  6

  CROSSING THE THRESHOLD

  APPROVING AUTONOMOUS WEAPONS

  The Department of Defense has an official policy on the role of autonomy in weapons, DoD Directive 3000.09, “Autonomy in Weapon Systems.” (Disclosure: While at DoD, I led the working group that drafted the policy.) Signed in November 2012, the directive is published online so anyone can read it.

  The directive includes some general language on principles for the design of semiautonomous and autonomous systems, such as realistic test and evaluation and understandable human-machine interfaces. The meat of the policy, however, is its delineation of three classes of systems that get a “green light” for approval. These are: (1) semiautonomous weapons, such as homing munitions; (2) defensive supervised autonomous weapons, such as the ship-based Aegis weapon system; and (3) nonlethal, nonkinetic autonomous weapons, such as electronic warfare systems that jam enemy radars. These three types of autonomous systems are in wide use today. The policy essentially says to developers, “If you want to build a weapon that uses autonomy in ways consistent with existing practices, you’re free to do so.” Normal acquisition rules apply, but those types of systems do not require any additional approval.

  Any future weapon system that would use autonomy in a novel way outside of those three categories gets a “yellow light.” Those systems need to be reviewed before beginning formal development (essentially the point at which large sums of money would be spent) and again before fielding. The policy outlines who participates in the review process—the senior defense civilian officials for policy and acquisitions and the chairman of the Joint Chiefs of Staff—as well as the criteria for review. The criteria are lengthy, but predominantly focus on test and evaluation for autonomous systems to ensure they behave as intended—the same concern Tousley expressed. The stated purpose of the policy is to “minimize the probability and consequences of failures in autonomous and semiautonomous weapon systems that could lead to unintended engagements.” In other words, to minimize the chances of armed robots running amok.
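
  To make the directive’s logic concrete, here is a minimal sketch, in Python, of the green-light and yellow-light paths described above. It is purely illustrative: the category names, the function, and the wording of the outcomes are shorthand of my own, not terminology drawn from the directive itself.

    # Illustrative sketch of the review logic described in DoD Directive 3000.09.
    # Category names and return strings are shorthand, not official terms.
    GREEN_LIGHT_CATEGORIES = {
        "semiautonomous",        # e.g., homing munitions
        "defensive_supervised",  # e.g., the ship-based Aegis weapon system
        "nonlethal_nonkinetic",  # e.g., electronic warfare that jams radars
    }

    def review_path(category: str) -> str:
        """Return the approval path for a proposed use of autonomy in a weapon."""
        if category in GREEN_LIGHT_CATEGORIES:
            # Consistent with existing practice: normal acquisition rules apply,
            # with no additional approval required.
            return "green light"
        # Novel use of autonomy: review by the senior officials for policy and
        # acquisitions and the chairman of the Joint Chiefs, once before formal
        # development and again before fielding.
        return "yellow light: senior review before development and before fielding"

    print(review_path("semiautonomous"))     # green light
    print(review_path("lethal_autonomous"))  # yellow light: senior review ...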

  Lethal autonomous weapons are not prohibited by the policy directive. Instead, the policy provides a process by which new uses of autonomy could be reviewed by the relevant officials before deployment. The policy helps ensure that if DoD were to build autonomous weapons, they would not be developed and deployed without sufficient oversight, but it doesn’t help answer the question of whether DoD might actually approve such systems. On that question, the policy is silent. All the policy says is that if an autonomous weapon met all of the criteria, such as reliability under realistic conditions, then in principle it could be authorized.

  GIVING THE GREEN LIGHT TO AUTONOMOUS WEAPONS

  But would it be authorized? DARPA programs are intended to explore the art of the possible, but that doesn’t mean that DoD would necessarily turn those experimental projects into operational weapon systems. To better understand whether the Pentagon might actually approve autonomous weapons, I sat down with then-Pentagon acquisition chief, Under Secretary of Defense Frank Kendall. As the under secretary of defense for acquisition, technology and logistics, Kendall was the Pentagon’s chief technologist and weapons buyer under the Obama Administration. When it came to major weapons systems like the X-47B or LRASM, the decision whether or not to move forward was in Kendall’s hands. In the process laid out under the DoD Directive, Kendall was one of three senior officials, along with the under secretary for policy and the chairman of the Joint Chiefs, who all had to agree in order to authorize developing an autonomous weapon.

  Kendall has a unique background among defense technologists. In addition to a distinguished career across the defense technology enterprise, serving in a variety of roles from vice president of a major defense firm to several mid-level bureaucratic jobs within DoD, Kendall also has worked pro bono as a human rights lawyer. He has worked with Amnesty International, Human Rights First, and other human rights groups, including as an observer at the U.S. prison at Guantánamo Bay. Given his background, I was hopeful that Kendall might be able to bridge the gap between technology and policy.

  Kendall made clear, for starters, that there had never been a weapon autonomous enough even to trigger the policy review. “We haven’t had anything that was even remotely close to autonomously lethal.” If he were put in that position, Kendall said, his chief concerns would be ensuring that it complied with the laws of war and that the weapon allowed for “appropriate human judgment,” a phrase that appears in the policy directive. Kendall admitted those terms weren’t defined, but our conversation began to elucidate his thinking.

  Kendall started his career as an Army air defender during the Cold War, where he learned the value of automation firsthand. “We had an automatic mode for the Hawk system that we never used, but I could see in an extreme situation where you’d turn it on, because you just couldn’t do things fast enough otherwise,” he said. When you have “fractions of a second” to decide—that’s a role for machines.

  Kendall said that automatic target recognition and machine learning were improving rapidly. As they improve, it should become possible for the machine to select its own targets for engagement. In some settings, such as taking out an enemy radar, he thought it could be done “relatively soon.”

  This raises tricky questions. “Where do you want the human intervention to be?” he asked. “Do you want it to be the actual act of employing the lethality? Do you want it to be the acceptance of the rules that you set for identifying something as hostile?” Kendall didn’t have the answers. “I think we’re going to have to sort through all that.”

  One important factor was the context. “Are you just driving down the street or are you actually in a war, or you’re in an insurgency? The context matters.” In some settings, using autonomy to select and engage targets might be appropriate. In others, it might not.

  Kendall saw using an autonomous weapon to target enemy radars as fairly straightforward, something he didn’t see many people objecting to. There were other examples that pushed the boundaries. Kendall said that on a trip to Israel, his hosts from the Israel Defense Forces had him sit in a Merkava tank outfitted with the Trophy active protection system. The Israelis fired a rocket-propelled grenade near the tank (“offset a few meters,” he said) and the Trophy system intercepted it automatically. “But suppose I also wanted to shoot back at . . . wherever the bullet had come from?” he asked. “You can automate that, right? That’s protecting me, but it’s the use of that weapon in a way which could be lethal to whoever, you know, was in the line of fire when I fire.” He pointed out that automating a return-fire response might prevent a second shot, saving lives. Kendall acknowledged that had risks, but there were risks in not doing it as well. “How much do we want to put our own people at risk by not allowing them to use this technology? That’s the other side of the equation.”

  Things become especially difficult if the machine is better than the person, which, at some point, will happen. “I think at that point, we’ll have a tough decision to make as to how we want to go with that.” Kendall saw value in keeping a human in the loop as a backup, but, “What if it’s a situation where there isn’t that time? Then aren’t you better off to let the machine do it? You know, I think that’s a reasonable question to ask.”

  I asked him for his answer to the question—after all, he was the person who would decide in DoD. But he didn’t know.

  “I don’t think we’ve decided that yet,” he said. “I think that’s a question we’ll have to confront when we get to where technology supports it.”

  Kendall wasn’t worried, though. “I think we’re a long way away from the Terminator idea, the killer robots let loose on the battlefield idea. I don’t think we’re anywhere near that and I don’t worry too much about that.” Kendall expressed confidence in how the United States would address this technology. “I’m in my job because I find my job compatible with being a human rights lawyer. I think the United States is a country which has high values and it operates consistent with those values. . . . I’m confident that whatever we do, we’re going to start from the premise that we’re going to follow the laws of war and obey them and we’re going to follow humanitarian principles and obey them.”

  Kendall was worried about other countries, but he was most concerned about what terrorists might do with commercially available technology. “Automation and artificial intelligence are one of the areas where the commercial developments I think dwarf the military investments in R&D. They’re creating capabilities that can easily be picked up and applied for military purposes.” As one example, he asked, “When [ISIS] doesn’t have to put a person in that car and can just send it out on its own, that’s a problem for us, right?”

  THE REVOLUTIONARY

  Kendall’s boss was Deputy Secretary of Defense Bob Work, the Pentagon’s number-two bureaucrat—and DoD’s number-one robot evangelist. As deputy secretary from 2014–17, Work was the driving force behind the Pentagon’s Third Offset Strategy and its focus on human-machine teaming. In his vision of future conflicts, AI will work in concert with humans in human-machine teams. This blended human-plus-machine approach could take many forms. Humans could be enhanced through exoskeleton suits and augmented reality, enabled by machine intelligence. AI systems could help humans make decisions, much like in “centaur chess,” where humans are assisted by chess programs that analyze possible moves. In some cases, AI systems may perform tasks on their own with human oversight, particularly when speed is an advantage, similar to automated stock trading. Future weapons will be more intelligent and cooperative, swarming adversaries.

  Collectively, Work argues these advances may lead to a “revolution” in warfare. Revolutions in warfare, Work explained in a 2014 monograph, are “periods of sharp, discontinuous change [in which] . . . existing military regimes are often upended by new more dominant ones, leaving old ways of warfare behind.”

  In defense circles, this is a bold claim. The U.S. defense community of the late 1990s and early 2000s became enamored with the potential of information technology to lead to a revolution in warfare. Visions of “information dominance” and “network-centric warfare” foundered in the mountains of Afghanistan and the dusty streets of Iraq as the United States became mired in messy counterinsurgency wars. High-tech investments in next-generation weapon systems such as F-22 fighter jets were overpriced or simply irrelevant for finding and tracking insurgents or winning the hearts and minds of civilian populations. And yet . . .

  The information revolution continued, leading to more advanced computer processors and ever more sophisticated machine intelligence. And even if warfare in the information age did not unfold the way Pentagon futurists envisioned, the reality is that information technology dramatically shaped how the United States fought its counterinsurgency wars. Information became the dominant driver of counternetwork operations as the United States sought to find insurgents hiding among civilians, like finding a needle in a stack of needles.

  Sweeping technological changes like the industrial revolution or the information revolution unfold in stages over time, over the course of decades or generations. As they do, they inevitably have profound effects on warfare. Technologies like the internal-combustion engine that powered civilian automobiles and airplanes in the industrial revolution led to tanks and military aircraft. Tanks and airplanes, along with other industrial-age weaponry such as machine guns, profoundly changed World War I and World War II.

  Work is steeped in military history and a student of Pentagon futurist Andy Marshall, who for decades ran DoD’s Office of Net Assessment and championed the idea that another revolution in warfare is unfolding today. Work understands the consequences of falling behind during periods of revolutionary change. Militaries can lose battles and even wars. Empires can fall, never to recover. In 1588, the mighty Spanish Armada was defeated by the English, who had more expertly exploited the revolutionary technology of the day: cannons. In the interwar period between World War I and World War II, Germany was more successful in capitalizing on innovations in aircraft, tanks, and radio technology, and the result was the blitzkrieg—and the fall of France. The battlefield is an unforgiving environment. When new technologies upend old ways of fighting, militaries and nations don’t often get second chances to get it right.

  If Work is right, and a revolution in warfare is under way driven in part by machine intelligence, then there is an imperative to invest heavily in AI, robotics, and automation. The consequences of falling behind could be disastrous for the United States. The industrial revolution led to machines that were stronger than humans, and the victors were those who best capitalized on that technology. Today’s information revolution is leading to machines that are smarter and faster than humans. Tomorrow’s victors will be those who best exploit AI.

  Right now, AI systems can outperform humans in narrow tasks but still fall short of humans in general intelligence, which is why Work advocates human-machine teaming. Such teaming combines the best of both human and machine intelligence: AI systems handle specific, tailored tasks where their advantages in speed matter, while humans understand the broader context and adapt to novel situations. There are limitations to this approach, though. In situations where the advantage in speed is overwhelming, delegating authority entirely to the machine is preferable.

  When it comes to lethal force, in a March 2016 interview, Work stated, “We will not delegate lethal authority for a machine to make a decision.” He quickly caveated that statement a moment later, however, adding, “The only time we will . . . delegate a machine authority is in things that go faster than human reaction time, like cyber or electronic warfare.”

  In other words, we won’t delegate lethal authority to a machine . . . unless we have to. In the same interview, Work said, “We might be going up against a competitor that is more willing to delegate authority to machines than we are and as that competition unfolds, we’ll have to make decisions about how to compete.” How long before the tightening spiral of an ever-faster OODA loop forces that decision? Perhaps not long. A few weeks later in another interview, Work stated it was his belief that “within the next decade or decade and a half it’s going to become clear when and where we delegate authority to machines.” A principal concern of his was the fact that while in the United States we debate the “moral, political, legal, ethical” issues surrounding lethal autonomous weapons, “our potential competitors may not.”
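
  One way to make the speed argument concrete is as a simple timing check: if an engagement unfolds faster than a human can react, the machine acts within rules humans have set; otherwise a human stays in the loop. The sketch below is a hypothetical illustration only; the reaction-time figure and the names are assumptions made for the example, not anything drawn from DoD doctrine or Work’s remarks.

    # Hypothetical illustration of the speed argument for delegating authority.
    # The reaction-time constant and the names are assumptions, not doctrine.
    HUMAN_REACTION_TIME_S = 0.25  # rough figure for a trained human response

    def who_decides(time_available_s: float) -> str:
        """Suggest where the engagement decision sits, given the time available."""
        if time_available_s < HUMAN_REACTION_TIME_S:
            # Faster-than-human timelines (e.g., cyber or electronic warfare):
            # the machine acts, within rules humans have set, under supervision.
            return "machine acts under human-set rules and supervision"
        # Otherwise keep a human in the loop for the engagement decision.
        return "human in the loop"

    print(who_decides(0.05))  # e.g., an incoming rocket or a cyber intrusion
    print(who_decides(60.0))  # e.g., a deliberate strike decision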

  There was no question that if I was going to understand where the robotics revolution was heading, I needed to speak to Work. No single individual had more sway over the course of the U.S. military’s investments in autonomy than he did, by virtue of both his official position in the bureaucracy and his unofficial position as the chief thought leader on autonomy. Work may not be an engineer writing the code for the next generation of robotic systems, but his influence was even broader and deeper. Through his public statements and internal policies, Work was shaping the course of DoD’s investments, big and small. He had championed the concept of human-machine teaming. How he framed the technology would influence what engineers across the defense enterprise chose to build. Work immediately agreed to an interview.

  THE FUTURE OF LETHAL AUTONOMY

  The Pentagon is an imposing structure. At 6.5 million square feet, it is one of the largest buildings in the world. Over 20,000 people enter the Pentagon every day to go to work. As I moved through the sea of visitors clearing security, I was reminded of the ubiquity of the robotics revolution. I heard the man in line behind me explain to Pentagon security that the mysterious item in his briefcase raising alarms in their x-ray scanners was a drone. “It’s a UAV,” he said. “A drone. I have clearance to bring it in,” he added hastily.

  The drones are literally everywhere, it would seem.

  Work’s office was in the famed E-ring where the Pentagon’s top executives reside, and he was kind enough to take time out of his busy schedule to talk with me. I started with a simple question, one I had been searching to answer in vain in my research: Is the Department of Defense building autonomous weapons?

  Underscoring the definitional problem, Work wanted to clarify what I meant by “autonomous weapon” before answering. I explained I was defining an autonomous weapon as one that could search for, select, and engage targets on its own. Work replied, “We, the United States, have had a lethal autonomous weapon, using your definition, since 1945: the Bat [radar-guided anti-ship bomb].” He said, “I would define it as a narrow lethal autonomous weapon in that the original targeting of the Japanese destroyer that we fired at was done by a Navy PBY maritime patrol aircraft . . . they knew [the Japanese destroyer] was hostile—and then they launched the weapon. But the weapon itself made all of the decisions on the final engagement using an S-band radar seeker.” Despite his use of the term “autonomous weapon” to describe a radar-guided homing munition, Work clarified he was comfortable with that use of autonomy. “I see absolutely no problem in those types of weapons. It was targeted on a specific capability by a man in the loop and all the autonomy was designed to do was do the terminal endgame engagement.” He was also comfortable with how autonomy was used in a variety of modern weapons, from torpedoes to the Aegis ship combat system.

 
