Army of None

by Paul Scharre


  Painting a picture of the future, Work said, “We are moving to a world in which the autonomous weapons will have smart decision trees that will be completely preprogrammed by humans and completely targeted by humans. So let’s say we fire a weapon at 150 nautical miles because our off-board sensors say a Russian battalion tactical group is operating in this area. We don’t know exactly what of the battalion tactical group this weapon will kill, but we know that we’re engaging an area where there are hostiles.” Work explained that the missile itself, following its programming logic, might prioritize which targets to strike—tanks, artillery, or infantry fighting vehicles. “We’re going to get to that level. And I see no problem in that,” he said. “There’s a whole variety of autonomous weapons that do end-game engagement decisions after they have been targeted and launched at a specific target or target area.” (Here Work is using “autonomous weapon” to refer to fire-and-forget homing munitions.)

  Loitering weapons, Work acknowledged, were qualitatively different. “The thing that people worry about is a weapon we fire at range and it loiters in the area and it decides when, where, how, and what to kill without anything other than the human launching it in the general direction.” Regardless of the label used, these loitering munitions differed from homing munitions that had to be launched at a specific target. But Work didn’t see any problem with loitering munitions either. “People start to get nervous about that, but again, I don’t worry about that at all.” He said he didn’t believe the United States would ever fire such a weapon into an area unless it had done the appropriate estimates for potential collateral damage. If, on the other hand, “we are relatively certain that there are no friendlies in the area: weapons free. Let the weapon decide.”

  These search-and-destroy weapons didn’t bother Work, even if they were choosing their own targets, because they were still “narrow AI systems.” These weapons would be “programmed for a certain effect against a certain type of target. We can tell them the priorities. We can even delegate authority to the weapon to determine how it executes end game attack.” With these weapons, there may be “a lot of prescribed decision trees, but the human is always firing it into a general area and we will do [collateral damage estimation] and we will say, ‘Can we accept the risk that in this general area the weapon might go after a friendly?’ And we will do the exact same determination that we have right now.”

  Work said the key question is, “What is your comfort level on target location error?” He explained, “If you are comfortable firing a weapon into an area in which the target location error is pretty big, you are starting to take more risks that it might go against an asset that might be a friendly asset or an allied asset or something like that. . . . So, really what’s happening is because you can put so much more processing power onto the weapon itself, the [acceptable degree of] target location error is growing. And we will allow the weapon to search that area and figure out the endgame.” An important factor is what else is in the environment and the acceptable level of collateral damage. “If you have real low collateral damage [requirements],” he said, “you’re not going to fire a weapon into an area where the target location is so large that the chances of collateral damage go up.”

  In situations where that risk was acceptable, Work saw no problems with such weapons. “I hear people say, ‘This is some terrible thing. We’ve got killer robots.’ No we don’t. Robots . . . will only hit the targets that you program in. . . . The human is still launching the weapon and specifying the type of targets to be engaged, even if the weapon is choosing the specific targets to attack within that wide area. There’s always going to be a man or woman in the loop who’s going to make the targeting decision,” he said, even if that targeting decision was now at a higher level.

  Work contrasted these narrow AI systems with artificial general intelligence (AGI), “where the AI is actually making these decisions on its own.” This is where Work would draw the line. “The danger is if you get a general AI system and it can rewrite its own code. That’s the danger. We don’t see ever putting that much AI power into any given weapon. But that would be the danger I think that people are worried about. What happens if Skynet rewrites its own code and says, ‘humans are the enemy now’? But that I think is very, very, very far in the future because general AI hasn’t advanced to that.” Even if technology did get there, Work was not so keen on using it. “We will be extremely careful in trying to put general AI into an autonomous weapon,” he said. “As of this point I can’t get to a place where we would ever launch a general AI weapon . . . [that] makes all the decisions on its own. That’s just not the way that I would ever foresee the United States pursuing this technology. [Our approach] is all about empowering the human and making sure that the humans inside the battle network have tactical and operational overmatch against their enemies.”

  Work recognized that other countries may use AI technology differently. “People are going to use AI and autonomy in ways that surprise us,” he said. Other countries might deploy weapons that “decide who to attack, when to attack, how to attack” all on their own. If they did, then that could change the U.S. calculus. “The only way that we would go down that path, I think, is if it turns out our adversaries do and it turns out that we are at an operational disadvantage because they’re operating at machine speed and we’re operating at human speeds. And then we might have to rethink our theory of the case.” Work said that challenge is something he worries about. “The nature of the competition about how people use AI and autonomy is really going to be something that we cannot control and we cannot totally foresee at this point.”

  THE PAST AS A GUIDE TO THE FUTURE

  Work forthrightly answered every question I put to him, but I still found myself leaving the interview unsatisfied. He had made clear that he was comfortable using narrow AI systems to perform the kinds of tasks we’re doing today: endgame autonomy to confirm a target chosen by a human or defensive human-supervised autonomy like the Aegis. He was comfortable with loitering weapons that might operate over a wider area or smarter munitions that could prioritize targets, but he continued to see humans playing a role in launching and directing those weapons. There were some technologies Work wasn’t comfortable with—artificial general intelligence or “boot-strapping” systems that could modify their own code. But there was a wide swath of systems in between. What about an uninhabited combat aircraft that made its own targeting decisions? How much target error was acceptable? He simply didn’t know. Those were questions future defense leaders would have to address.

  To help shed light on how future leaders might answer those questions, I turned to Dr. Larry Schuette, director of research at the Office of Naval Research. Schuette is a career scientist with the Navy and has a doctorate in electrical engineering, so he understands the technology intimately. ONR has repeatedly been at the forefront of advancements in autonomy and robotics, and Schuette directs much of this research. He is also an avid student of history, so I hoped he could help me understand what the past might tell us about the shape of things to come.

  As a researcher, Schuette made it clear to me that autonomous weapons are not an area of focus for ONR. There are a lot of areas where uninhabited and autonomous systems could have value, but his perspective was to focus on the mundane tasks. “I’m always looking for: what’s the easiest thing with the highest return on investment that we could actually go do where people would thank us for doing it. . . . Don’t go after the hard missions. . . . Let’s do the easy stuff first.” Schuette pointed to thankless jobs like tanking aircraft or cleaning up oil spills. “Be the trash barge. . . . The people would love you.” His view was that even tackling these simple, unobjectionable missions was a big enough challenge. “I know that what is simple to imagine in science and technology isn’t as simple to do.”

  Schuette also emphasized that he didn’t see a compelling operational need for autonomous weapons. Today’s model of “The man pushes a button and the weapon goes autonomous from there but the man makes the decision” was a “workable framework for some large fraction of what you would want to do with unmanned air, unmanned surface, unmanned underwater, unmanned ground vehicles. . . . I don’t see much need in future warfare to get around that model,” he said.

  As a student of history, however, Schuette had a somewhat different perspective. His office looked like a naval museum, with old ship’s logs scattered on the bookshelves and black-and-white photos of naval aviators on the walls. While speaking, Schuette would frequently leap out of his chair to grab a book about unrestricted submarine warfare or the Battle of Guadalcanal to punctuate his point. The historical examples weren’t about autonomy; rather, they were about a broader pattern in warfare. “History is full of innovations and asymmetric responses,” he said. In World War II, the Japanese were “amazed” at U.S. skill at naval surface gunfire. In response, they decided to fight at night, resulting in devastating nighttime naval surface action at the Battle of Guadalcanal. The lesson is that “the threat gets a vote.” Citing Japanese innovations in long-range torpedoes, Schuette said, “We had not planned on fighting a torpedo war. . . . The Japanese had a different idea.”

  This dynamic of innovation and counter-innovation inevitably leads to surprises in warfare and can often change what militaries see as ethical or appropriate. “We’ve had these debates before about ethical use of X or Y,” Schuette pointed out. He compared today’s debates about autonomous weapons to debates in the U.S. Navy in the interwar period between World War I and World War II about unrestricted submarine warfare. “We went all of the twenties, all the thirties, talking about how unrestricted submarine warfare was a bad idea, we would never do it. And when the shit hit the fan the first thing we did was begin executing unrestricted submarine warfare.” Schuette grabbed a book off his shelf and quoted the order issued to all U.S. Navy ship and submarine commanders on December 7, 1941, just four and a half hours after the attack on Pearl Harbor:

  EXECUTE AGAINST JAPAN UNRESTRICTED AIR AND SUBMARINE WARFARE

  The lesson from history, Schuette said, was that “we are going to be violently opposed to autonomous robotic hunter-killer systems until we decide we can’t live without them.” When I asked him what he thought would be the decisive factor, he had a simple response: “Is it December eighth or December sixth?”

  7

  WORLD WAR R

  ROBOTIC WEAPONS AROUND THE WORLD

  The robotics revolution isn’t American-made. It isn’t even American-led. Countries around the world are pushing the envelope in autonomy, many further and faster than the United States. Conversations in U.S. research labs and the Pentagon’s E-ring are only one factor influencing the future of autonomous weapons. Other nations get a vote too. What they do will influence how the technology develops, proliferates, and how other nations—including the United States—react.

  The rapid proliferation of drones portends what is to come for increasingly autonomous systems. Drones have spread to nearly a hundred countries around the globe, as well as non-state groups such as Hamas, Hezbollah, ISIS, and Yemeni Houthi rebels. Armed drones are next. A growing number of countries have armed drones, including nations that are not major military powers such as South Africa, Nigeria, and Iraq.

  Armed robots are also proliferating on the ground and at sea. South Korea has deployed a robot sentry gun to its border with North Korea. Israel has sent an armed robotic ground vehicle, the Guardium, on patrol near the Gaza border. Russia is building an array of ground combat robots and has plans for a robot tank. Even Shiite militias in Iraq have gotten in on the game, fielding an armed ground robot in 2015.

  Armed Drone Proliferation (map): As of June 2017, sixteen countries possessed armed drones: China, Egypt, Iran, Iraq, Israel, Jordan, Kazakhstan, Myanmar, Nigeria, Pakistan, Saudi Arabia, Turkey, Turkmenistan, the United Arab Emirates, the United Kingdom, and the United States. Some nations developed armed drones indigenously, while others acquired the technology from abroad. Over 90 percent of international armed drone transfers have been from China.

  Armed robots are heading to sea as well. Israel has also developed an armed uninhabited boat, the Protector, to patrol its coast. Singapore has purchased the Protector and deployed it for counterpiracy missions in the Straits of Malacca. Even Ecuador has an armed robot boat, the ESGRUM, produced entirely indigenously. Armed with a rifle and rocket launcher, the ESGRUM will patrol Ecuadorian waterways to counter pirates.

  As in the United States, the key question will be whether these nations plan to cross the line to full autonomy. No nation has stated they plan to build autonomous weapons. Few have ruled them out either. Only twenty-two nations have said they support a ban on lethal autonomous weapons: Pakistan, Ecuador, Egypt, the Holy See, Cuba, Ghana, Bolivia, Palestine, Zimbabwe, Algeria, Costa Rica, Mexico, Chile, Nicaragua, Panama, Peru, Argentina, Venezuela, Guatemala, Brazil, Iraq, and Uganda (as of November 2017). None of these states are major military powers and some, such as Costa Rica or the Holy See, lack a military entirely.

  One of the first areas where countries will be forced to grapple with the choice of whether to delegate lethal authority to the machine will be for uninhabited combat aircraft designed to operate in contested areas. Several nations are reportedly developing experimental combat drones similar to the X-47B, although for operation from land bases rather than aircraft carriers. These include the United Kingdom’s Taranis, China’s Sharp Sword, Russia’s Skat, France’s nEUROn, India’s Aura, and a rumored unnamed Israeli stealth drone. Although these drones are likely designed to operate with protected communications links to human controllers, militaries will have to decide what actions they want the drone to carry out if (and when) communications are jammed. Restricting the drone’s rules of engagement could mean giving up valuable military advantage, and few nations are being transparent about their plans.

  Given that a handful of countries already possess the fully autonomous Harpy, it isn’t a stretch to imagine them and others authorizing a similar level of autonomy with a recoverable drone. Whether countries are actually building those weapons today is more difficult to discern. If understanding what’s happening inside the U.S. defense industry is difficult, peering behind the curtain of secret military projects around the globe is even harder. Are countries like Russia, China, the United Kingdom, and Israel building autonomous weapons? Or are they still keeping humans in the loop, walking right up to the line of autonomous weapons but not crossing it? Four high-profile international programs (a South Korean robot gun, a British missile, a British drone, and a Russian fleet of armed ground robots) show the difficulty in uncovering what nations around the globe are doing.

  THE CURIOUS CASE OF THE AUTONOMOUS SENTRY BOT

  South Korea’s Samsung SGR-A1 robot is a powerful example of the challenge in discerning how much autonomy weapon systems have. The SGR-A1 is a stationary armed sentry robot designed to defend South Korea’s border against North Korea. In 2007, when the robot was revealed, the electrical engineering magazine IEEE Spectrum reported it had a fully autonomous mode for engaging targets on its own. In an interview with the magazine, Samsung principal research engineer Myung Ho Yoo said, “the ultimate decision about shooting should be made by a human, not the robot.” But the article made clear that Yoo’s “should” was not a requirement, and that the robot did have a fully automatic option.

  The story was picked up widely, with the SGR-A1 cited as an example of a real-world autonomous weapon by The Atlantic, the BBC, NBC, Popular Science, and The Verge. The SGR-A1 made Popular Science’s list of “Scariest Ideas in Science” with PopSci asking, “WHY, GOD? WHY?” Several academic researchers conducting in-depth reports on military robotics similarly cited the SGR-A1 as fully autonomous.

  In the face of this negative publicity, Samsung backpedaled, saying that in fact a human was required to be in the loop. In 2010, a spokesperson for Samsung clarified that “the robots, while having the capability of automatic surveillance, cannot automatically fire at detected foreign objects or figures.” Samsung and the South Korean government have been tight-lipped about details, though, and one can understand why. The SGR-A1 is designed to defend South Korea’s demilitarized zone along its border with North Korea, with whom South Korea is technically still at war. Few countries on earth face as immediate and intense a security threat. One million North Korean soldiers and the threat of nuclear weapons loom over South Korea like a menacing shadow. In the same interview in which he asserted that a human would always remain in the loop, the Samsung spokesperson said, “the SGR-1 can and will prevent wars.”

  What are the actual specifications and design parameters for the SGR-A1? It’s essentially impossible to know without directly inspecting the robot. If Samsung says a human is in the loop, all we can do is take their word for it. If South Korea is willing to delegate more autonomy to their robots than other nations, however, it wouldn’t be surprising. Defending the DMZ against North Korea is a matter of survival for South Korea. Accepting the risks of a fully autonomous sentry gun may be more than worth it for South Korea if it enhances deterrence against North Korea.

 
