Army of None


by Paul Scharre


  Though they haven’t always succeeded in the past, great powers have worked together to avoid weapons that could cause excessive harm. This time, however, leading military powers aren’t trying, in part because the issue has been framed as a humanitarian one, not a strategic one. In CCW discussions, countries have heard expert views on the Martens Clause, which has never been used to ban a weapon before, but strategic considerations have gotten short shrift. A few experts have presented on offense-defense balance and arms races, but there has been virtually no discussion of how autonomous weapons might complicate crisis stability, escalation control, and war termination. John Borrie from the UN Institute for Disarmament Research is concerned about the risk of “unintended lethal effects” from autonomous weapons, but he acknowledged, “it’s not really a significant feature of the policy debate in the CCW.”

  This is unfortunate, because autonomous weapons raise important issues for stability. There may be military benefits to using fully autonomous weapons, but it would be facile and wrong to suggest that overall they are safer and more humane than semiautonomous weapons that retain a human in the loop. This argument conflates the benefits of adding automation, which are significant, with completely removing the human from the loop. There may be cases where their use would result in more-humane outcomes, provided they functioned properly, such as hostage rescue in communications-denied environments or destroying mobile missile launchers armed with WMD. On the whole, though, the net effects of introducing fully autonomous weapons on the battlefield are likely to be increased speed, greater consequences when accidents occur, and reduced human control.

  States have every incentive to cooperate to avoid a world where they have less control over the use of force. Mutual restraint is definitely in states’ interests. This is especially true for great powers, given the destruction that war among them would bring. Restraint doesn’t come from a treaty, though. The fear of reciprocity is what generates restraint. A treaty is merely a focal point for coordination. Is restraint possible? History suggests any attempt to restrain autonomous weapons must meet three essential conditions to succeed.

  First, a clear focal point for coordination is needed. The simpler and clearer the line, the better. This means that some rules like “no general intelligence” are dead on arrival. The open letter signed by 3,000 AI scientists called for a ban on “offensive autonomous weapons beyond meaningful human control.” Every single one of those words is a morass of ambiguity. If states could agree on the difference between “offensive” and “defensive” weapons, they would have banned offensive weapons long ago. “Meaningful human control” is even more vague. Preemptive bans that try to specify the exact shape of the technology don’t work either. The best preemptive bans focus on the key prohibited concept, like banning lasers intended to cause permanent blindness.

  Second, the horribleness of a weapon must outweigh its military utility for a ban to succeed. Regardless of whether the weapon is seen as destabilizing, a danger to civilians, or causing unnecessary suffering, it must be perceived as bad enough—or sufficiently useless militarily—that states are not tempted to breach the ban.

  Third, transparency is essential. States must trust that others are not secretly developing the weapon they themselves have forsworn. Bob Work told me that he thought countries “will move toward some type of broad international discussion on how far we should go on autonomous weapons.” The problem he saw was verification: “The verification of this regime is going to be very, very difficult because it’s just—it’s ubiquitous. It’s now exploding around us.” This is a fundamental problem for autonomous weapons. The essence of autonomy is software, not hardware, making transparency very difficult.

  Are there models for restraint with autonomous weapons that meet these criteria? Is there a military equivalent to Asimov’s Laws, a set of “lethal laws of robotics” that states could agree on? There are many possible places countries could draw a line. States could focus on physical characteristics of autonomous weapons: size, range, payload, etc. States could agree to refrain from certain types of machine intelligence, such as unsupervised machine learning on the battlefield. To illustrate the range of possibilities, here are four very different ways that nations could approach this problem.

  OPTION 1: BAN FULLY AUTONOMOUS WEAPONS

  The Campaign to Stop Killer Robots has called for “a comprehensive, pre-emptive prohibition on the development, production and use of fully autonomous weapons.” Assuming that states found it in their interests to do so, could they create a ban that is likely to result in successful restraint?

  Any prohibition would need to clearly distinguish between banned weapons and the many existing weapons that already use autonomy. It should be possible to clearly differentiate between the kind of defensive human-supervised autonomous weapons in use today and fully autonomous weapons that would have no human supervision. “Offensive” and “defensive” are distinctions that wouldn’t work, but “fixed” and “mobile” autonomous weapons could. The types of systems in use today all fall on the “fixed” side of that line: they are either static (immobile) or affixed to a vehicle occupied by people.

  Distinguishing between mobile, fully autonomous weapons and advanced missiles would be harder. The chief difference between the semiautonomous HARM and the fully autonomous Harpy is the Harpy’s ability to loiter over a wide area and search for targets. Debates over weapons like the LRASM and Brimstone show how difficult it can be to make this distinction without understanding details about not only the weapon’s functionality, but also its intended use. Drawing a distinction between recoverable robotic vehicles and nonrecoverable munitions would be easier.

  From the perspective of balancing military necessity against the horribleness of the weapon, these distinctions would be sensible. The most troubling applications of autonomy would be fully autonomous weapons on mobile robotic vehicles. Fixed autonomous weapons would primarily be defensive. They also would be lower risk, since humans could supervise engagements and physically disable the system if it malfunctioned. Nonrecoverable fully autonomous weapons (e.g., loitering munitions) would be permitted, but their risks would be mitigated by the fact that they can’t be sent on patrol. Militaries would want to have some indication that there is an enemy in the vicinity before launching them. There are other ways nations could draw lines on what is and isn’t allowed, but this is one set of choices that would seem sensible.

  Regardless of where nations draw the line, there are a number of factors that make restraint challenging. How would nations know that others were complying? The United States, the United Kingdom, France, Russia, China, and Israel are already developing experimental stealth drones. Operational versions of these aircraft would be sent into areas in which communications might be jammed. Even if nations agreed that these combat drones should not attack targets unless authorized by a human, there would be no way for them to verify each other’s compliance. Delegating full autonomy would likely be valuable in some settings. Even if in peacetime nations genuinely desired mutual restraint, in wartime the temptation might be great enough to overcome any reservations. After all, it’s hard to argue that weapons like the Harpy, TASM, or a radar-hunting combat drone shock the conscience. Using them may entail accepting a different level of risk, but it’s hard to see them as inherently immoral. Further complicating restraint, it might be difficult to even know whether nations were complying with the rules during wartime. If a robot destroyed a target, how would others know whether a human had authorized the engagement or the robot had chosen the target on its own?

  All of these factors (clarity, military utility, horribleness of the weapon, and transparency) suggest that a ban on fully autonomous weapons is unlikely to succeed. It is almost certain not to pass in the CCW, where consensus is needed, but even if it did, it is hard to see how such rules would remain viable in wartime. Armed robots that had a person in the loop would need only a flip of the switch, or perhaps a software patch, to become fully autonomous. Once a war begins, history suggests that nations will flip the switch, and quickly.

  OPTION 2: BAN ANTIPERSONNEL AUTONOMOUS WEAPONS

  A ban on autonomous weapons that target people may be another matter. The ban is clearer, the horribleness of the weapon greater, and the military utility lower. These factors may make restraint more feasible for antipersonnel autonomous weapons.

  It would be easier for states to distinguish between antipersonnel autonomous weapons and existing systems. There are no antipersonnel equivalents of homing missiles or automated defensive systems in use around the world. This could allow states to sidestep the tricky business of carving out exceptions for existing uses.

  The balance between military utility and the weapon’s perceived horribleness is also very different for antipersonnel autonomous weapons. Targeting people is much more problematic than targeting objects for a variety of reasons. Antipersonnel autonomous weapons are also significantly more hazardous than anti-matériel autonomous weapons. If the weapon malfunctions, there is no escape: a crew can abandon a tank that is being targeted, but a person can’t stop being human. Antipersonnel autonomous weapons also pose a greater risk of abuse by those deliberately seeking to attack civilians.

  Finally, the public may see machines that target and kill people on their own as genuinely horrific. Weapons that autonomously targeted people would tap into an age-old fear of machines rising up against their makers. Public revulsion could be a decisive factor in achieving political support for a ban. There is something clean and satisfying about the rule, to paraphrase Navy engineer John Canning: “let machines target machines; let people target people.”

  The military utility of antipersonnel autonomous weapons is also far lower than that of anti-matériel autonomous weapons. The reasons for moving to supervised autonomy (speed) or full autonomy (no communications) don’t generally apply when targeting people. Defensive systems like Aegis need a supervised autonomous mode to defend against salvos of high-speed missiles, but overwhelming defensive positions through waves of human attackers has not been an effective tactic since the invention of the machine gun. The additional half second it would take to keep a human in the loop for a weapon like the South Korean sentry gun is marginal. Antipersonnel autonomous weapons in communications-denied environments are also likely to be of marginal value for militaries. At the early stages of a war when communications are contested, militaries will be targeting objects such as radars, missile launchers, bases, airplanes, and ships, not people. Militaries would want the ability to use small, discriminating antipersonnel weapons to target specific individuals, such as terrorist leaders, but those would be semiautonomous weapons; a human would be choosing the target.

  Transparency would still be challenging. As is the case for weapons like the South Korean sentry gun, others would have to essentially trust countries when they say they have a human in the loop. Many nations are already fielding armed robotic ground vehicles, and they are likely to become a common feature of future militaries. It would be impossible to verify that these robotic weapons do not have a mode or software patch waiting on the shelf that would enable them to autonomously target people. Given the ubiquity of autonomous technology, it would also be impossible to prevent terrorists from creating homemade autonomous weapons. Large-scale industrial production of the kinds of antipersonnel weapons that Stuart Russell fears, however, would be hard to hide. If the military utility of these weapons were low enough, it isn’t clear that the risk of small-scale uses would compel other nations to violate a prohibition.

  Russell has argued that a treaty could be effective in “stopping an arms race and preventing large-scale manufacturing of such weapons.” The combination of low military utility and high potential harm may make restraint possible for antipersonnel autonomous weapons.

  OPTION 3: ESTABLISH “RULES OF THE ROAD” FOR AUTONOMOUS WEAPONS

  Different problems with autonomous weapons lend themselves to different solutions. A ban on antipersonnel autonomous weapons would reduce the risk of harm to civilians, but would not address the problems autonomous weapons pose for crisis stability, escalation control, and war termination. These are very real concerns, and nations will want to cooperate to ensure their robotic systems do not interact in ways that lead to unintended outcomes.

  Rather than a treaty, one solution could be to adopt a non-legally-binding code of conduct establishing “rules of the road” for autonomous weapons. The main goal of such a set of rules would be to reduce the potential for unintended interactions between autonomous systems in crises. The best rules would be simple and self-enforcing, like “robotic vehicles should not fire unless fired upon” and “return fire must be limited, discriminating, and proportionate.” Like maritime law, these rules would be intended to govern how autonomous agents interact when they encounter one another in unstructured environments, respecting the right of self-defense while also seeking to avoid unwanted escalation.

  Any ruleset could undoubtedly be manipulated by clever adversaries spoiling for a fight, but the main purpose would be to ensure predictable reactions from robotic systems among nations seeking to control escalation. The rules wouldn’t need to be legally binding, since it would be in states’ best interests to cooperate. These rules would likely collapse in war, as rules on submarine warfare did, but that wouldn’t matter since the intent would be to control escalation in circumstances short of war. Once a full-blown war is under way, the rules wouldn’t be needed.

  OPTION 4: CREATE A GENERAL PRINCIPLE ABOUT THE ROLE OF HUMAN JUDGMENT IN WAR

  The problem with the above approaches is that technology is always changing. Even the most thoughtful regulations or prohibitions will not be able to foresee all the ways that autonomous weapons could evolve over time. An alternative approach would be to focus on the unchanging element in war: the human.

  The laws of war do not specify what role(s) humans should play in lethal force decisions, but perhaps they should. Is there a place for human judgment in war, even if we had all the technology we could imagine? Should there be limits on what decisions machines make in war, not because they can’t, but because they shouldn’t?

  One approach would be to articulate a positive requirement for human involvement in the use of force. Phrases like “meaningful human control,” “appropriate human judgment,” and “appropriate human involvement” all seem to get at this concept. While these terms are not yet defined, they suggest broad agreement that there is some irreducible role for humans in lethal force decisions on the battlefield. Setting aside for the moment the specific label, what would be the underlying idea behind a principle of “_______ human _______”?

  IHL may help give us some purchase on the problem, if one adopts the viewpoint that the laws of war apply to people, not machines. This was the view captured in the U.S. Department of Defense Law of War Manual:

  The law of war rules on conducting attacks (such as the rules relating to discrimination and proportionality) impose obligations on persons. These rules do not impose obligations on the weapons themselves; . . . Rather, it is persons who must comply with the law of war.

  Humans are obligated under IHL to make a determination about the lawfulness of an attack and cannot delegate this obligation to a machine. This means that the human must have some information about the specific attack in order to make a determination about whether it complies with the principles of distinction, proportionality, and precautions in attack. The human must have sufficient information about the target(s), the weapon, the environment, and the context for the attack to determine whether that particular attack is lawful. The attack also must be bounded in time, space, targets, and means of attack for the determination about the lawfulness of that attack to be meaningful. There would presumably be some conditions (time elapsed, geographic boundaries crossed, circumstances changed) under which the human’s determination about the lawfulness of the attack might no longer be valid.

  How much information the person needs and what the bounds on autonomy should be are open for debate. This perspective would seem to suggest, though, that IHL requires some minimum degree of human involvement in lethal force decisions: (1) human judgment about the lawfulness of an attack; (2) some specific information about the target(s), weapon, environment, and context for attack in order to make a determination about the lawfulness of that particular attack; and (3) the weapon’s autonomy be bounded in space, time, possible targets, and means of attack.

  There may be other ways of phrasing this principle and reasonable people might disagree, but there could be merit in countries reaching agreement on a common standard for human involvement in lethal force. While an overarching principle along these lines would not tell states which weapons are permitted and which are not, it could be a common starting point for evaluating technology as it evolves. Many principles in IHL are open to interpretation: unnecessary suffering, proportionality, and precautions in attack, for example. These terms do not tell states which weapons cause unnecessary suffering or how much collateral damage is proportionate, but they still have value. Similarly, a broad principle outlining the role of human judgment in war could be a valuable benchmark against which to evaluate future weapons.

 
