Army of None

by Paul Scharre


  Carving out exceptions can make it easier to get more countries to sign on to a ban, but can be problematic if the technology is still evolving. One lesson from history is that it is very hard to predict the future path of technology. The 1899 Hague declarations banned gas-filled projectiles, but not poison gas in canisters, a technicality that Germany exploited in World War I in defense of its first large-scale poison gas attack at Ypres. On the other hand, the 1899 declarations also banned expanding bullets, a technology that turned out not to be particularly terrible. Expanding bullets are widely available for purchase by civilians in the United States for personal self-defense, although militaries have generally refrained from their use.

  Hague delegates were aware of these challenges and tried to mitigate them, particularly for rapidly evolving aerial weapons. The 1899 declarations banned projectiles launched from balloons “or by other new methods of a similar nature,” anticipating the possibility of aircraft, which came only four years later. The 1907 Hague rules attempted to solve the problem of evolving technology by prohibiting “attack or bombardment, by whatever means, of towns, villages, dwellings, or buildings which are undefended.” This still fell short, however. The focus on “undefended” targets failed to anticipate the futility of defending against air attack, and the reality that even with defenses, “the bomber will always get through.”

  Technology will evolve in unforeseen ways. Successful preemptive bans focus on the intent behind a technology rather than on specific technical restrictions. For example, the ban on blinding lasers prohibits lasers specifically designed to cause permanent blindness, rather than limiting lasers to a certain power level. The United States takes a similar intent-based interpretation of the ban on expanding bullets: they are prohibited only to the extent that they are intended to cause unnecessary suffering.

  Preemptive bans pose unique challenges and opportunities. Because the weapons are not yet in states’ inventories, the military utility of a new weapon, such as blinding lasers or environmental modification, may be amorphous. This can sometimes make it easier for a ban to succeed. States may not be willing to run the risk of sparking an arms race if the military utility of a new weapon seems uncertain. On the other hand, states often may not fully understand how terrible a weapon is until they see it on the battlefield. States correctly anticipated the harm that air-delivered weapons could cause in unprotected cities, but poison gas and nuclear weapons shocked the conscience in ways that contemporaries were not prepared for.

  VERIFICATION

  One topic that frequently arises in discussions about autonomous weapons is the role of verification regimes in treaties. Here the track record is mixed. A number of treaties, such as the Nuclear Non-Proliferation Treaty, Chemical Weapons Convention, INF Treaty, START, and New START have formal inspections to verify compliance. Others, such as the Outer Space Treaty’s prohibition against military installations on the moon, have de facto inspection regimes. The land mine and cluster munitions bans do not have inspection regimes, but do require transparency from states on their stockpile elimination.

  Not all successful bans include verification. The 1899 ban on expanding bullets, the 1925 Geneva Gas Protocol, the CCW, SORT, and the Outer Space Treaty’s ban on putting weapons of mass destruction (WMD) in orbit all lack verification regimes. The Environmental Modification Convention and Biological Weapons Convention (BWC) say only that states that suspect another of cheating should lodge a complaint with the UN Security Council. (The Soviet Union reportedly had a secret biological weapons program, making the BWC a mixed case.)

  In general, verification regimes are useful if there is a reason to believe that countries might be developing the prohibited weapon in secret. That could be the case if they already have it (chemical weapons, land mines, or cluster munitions) or if they might be close (nuclear weapons). Inspection regimes are not always essential. What is required is transparency. Countries need to know whether other nations are complying or not for mutual restraint to succeed. In some cases, the need for transparency can be met by the simple fact that some weapons are difficult to keep secret. Anti-ballistic missile facilities and ships cannot be easily hidden. Other weapons can be.

  WHY BAN?

  Finally, the motivation behind a ban seems to matter to its likelihood of success. Successful bans fall into a few categories. The first is weapons that are perceived to cause unnecessary suffering: by definition, weapons whose harm to combatants is excessive relative to their military value. Restraint with these weapons is self-reinforcing. Combatants have little incentive to use them and strong incentives not to, since the enemy would almost certainly retaliate.

  Bans on weapons that were seen as causing excessive civilian harm have also succeeded, but only when those bans prohibit possessing the weapon at all (cluster munitions and the Ottawa land mine ban), not when they permit use in some circumstances (air-delivered weapons, submarine warfare, incendiary weapons, and the CCW land mine protocol). Bans on weapons that are seen as destabilizing (Seabed Treaty, Outer Space Treaty, ABM Treaty, INF Treaty) have generally succeeded, at least when only a few parties are needed for cooperation. Arms limitation has been exceptionally difficult, even when there are only a few parties, but has some record of success. Prohibiting the expansion of war into new geographic areas has only worked when the focal point for cooperation is clear and there is little military utility in expanding into those areas, such as with weapons on the moon or in Antarctica. Attempts to regulate or restrict warfare from undersea or the air failed, most likely because the regulations were too nuanced. “No submarines” or “no aircraft” would have been clearer, for example.

  Ultimately, even in the best of cases, bans aren’t perfect. Even for highly successful bans, there will be some nations who don’t comply. This makes military utility a decisive factor. Nations want to know they aren’t giving up a potentially war-winning weapon. This is a profound challenge for those seeking a ban on autonomous weapons.

  21

  ARE AUTONOMOUS WEAPONS INEVITABLE?

  THE SEARCH FOR LETHAL LAWS OF ROBOTICS

  In the nearly ten years I have spent working on the issue of autonomous weapons, almost every person I have spoken with has argued there ought to be some limits on what actions machines can take in war, although they draw this line in very different places. Ron Arkin said he could potentially be convinced to support a ban on unsupervised machine learning to generate new targets in the field. Bob Work drew the line at a weapon with artificial general intelligence. There are clearly applications of autonomy and machine intelligence in war that would be dangerous, unethical, or downright illegal. Whether nations can cooperate to avoid those harmful outcomes is another matter.

  Since 2014, countries have met annually at the United Nations Convention on Certain Conventional Weapons (CCW) in Geneva to discuss autonomous weapons. The glacial progress of diplomacy is in marked contrast to the rapid pace of technology development. After three years of informal meetings, the CCW agreed in 2016 to establish a Group of Governmental Experts (GGE) to discuss autonomous weapons. The GGE is a more formal forum, but has no mandate to negotiate a multinational treaty. Its main charge is to establish a working definition for autonomous weapons, a sign of how little progress countries have made.

  Definitions matter, though. Some envision autonomous weapons as simple robotic systems that could search over a wide area and attack targets on their own. Such weapons could be built today, but compliance with the law of war in many settings would be difficult. For others, “autonomous weapon” is a general term that applies to any kind of missile or weapon that uses autonomy in any fashion, from an LRASM to a torpedo. From this perspective, concern about autonomous weapons is ill-founded (since they’ve been around for seventy years!). Some equate “autonomy” with self-learning and adapting systems, which, although possible today, have yet to be incorporated into weapons. Others hear the term “autonomous weapons” and envision machines with human-level intelligence, a development that is unlikely to happen any time soon and would raise a host of other problems if it did. Without a common lexicon, countries can have heated disagreements while talking about completely different things.

  The second problem is common to any discussions about emerging technologies, which is that it is hard to foresee how these weapons might be used, under what conditions, and to what effect in future wars. Some envision autonomous weapons as more reliable and precise than humans, the next logical evolution of precision-guided weapons, leading to more-humane wars with fewer civilian casualties. Others envision calamity, with rogue robot death machines killing multitudes. It’s hard to know which vision is more likely. It is entirely possible that both come true, with autonomous weapons making war more precise and humane when they function properly, but causing mass lethality when they fail.

  The third problem is politics. Countries view autonomous weapons through the lens of their own security interests. Nations have very different positions depending on whether or not they think autonomous weapons might benefit them. It would be a mistake to assume that discussions are generating momentum toward a ban.

  Still, international discussions have made some progress. An early consensus has begun to form around the notion that the use of force requires some human involvement. This concept has been articulated in different ways, with some NGOs and states calling for “meaningful human control.” The United States, drawing on language in DoD Directive 3000.09, has used the term “appropriate human judgment.” Reflecting these divergent views, the CCW’s final report from its 2016 expert meetings uses the neutral phrase “appropriate human involvement.” But no country has suggested that it would be acceptable for there to be no human involvement whatsoever in decisions about the use of lethal force. Weak though it may be, this common ground is a starting point for cooperation.

  One of the challenges in current discussions on autonomous weapons is that the push for a ban is being led by NGOs, not states. Only a handful of states have said they support a ban, and none of them are major military powers. When viewed in the context of historical attempts to regulate weapons, this is unusual. Most attempts at restricting weapons have come from great powers.

  The fact that the issue’s framing has been dominated by NGOs campaigning to ban “killer robots” affects the debate. Potential harm to civilians has been front and center in the discussion. Strategic issues, which have been the rationale for many bans in the past, have taken a back seat. The NGOs campaigning for a ban hope to follow in the footsteps of bans on land mines and cluster munitions, but there are no successful examples of preemptive bans on weapons motivated by concerns about civilian harm. It is easy to see why. Bans motivated by concern about excessive civilian casualties pit what is, for militaries, an incidental concern against a fundamental priority: military necessity. Even when countries genuinely care about avoiding civilian harm, they can justifiably say that law-abiding nations will follow the existing rules of international humanitarian law (IHL), while those who do not respect IHL will not. What more would a ban accomplish, other than needlessly tie the hands of those who already respect the law? Advocates for the bans on cluster munitions and land mines could point to actual harm caused by those weapons, but for emerging technologies both sides have only hypotheticals.

  When weapons have been seen as causing excessive civilian casualties, the solution has often been to regulate their use, such as avoiding attacks in populated areas. In theory, these regulations allow militaries to use weapons for legitimate purposes while protecting civilians. In practice, these prohibitions have almost always failed in war. In analyzing Robert McNamara’s call for a “no cities” nuclear doctrine, Thomas Schelling pointed out the inherent problems with these rules: “How near to a city is a military installation ‘part’ of a city? If weapons go astray, how many mistakes that hit cities can be allowed for before concluding that cities are ‘in’ the war? . . . there is no such clean line.”

  Supporters of an autonomous weapons ban have wisely argued against such an approach, sometimes called a “partition,” which would permit autonomous weapons in environments without civilians, such as undersea, but not in populated areas. Instead, the Campaign to Stop Killer Robots has called for a complete ban on the development, production, and use of fully autonomous weapons. Opponents of a ban sometimes counter that the technology is too diffuse to be stopped, but this wrongly equates a ban with a nonproliferation regime. There are many examples of successful bans (expanding bullets, environmental modification, chemical and biological weapons, blinding lasers, the Mine Ban Treaty, and cluster munitions) that do not attempt to restrict the underlying technologies that would enable these weapons.

  What all these bans have in common and what current discussions on autonomous weapons lack, however, is clarity. Even if no one has yet built a laser intended to cause permanent blindness, the concept is clear. As we’ve seen, there is no widespread agreement on what an autonomous weapon is. Some leaders in the NGO community have actually argued against creating a working definition. Steve Goose from Human Rights Watch told me that it’s “not a wise campaign strategy at the very beginning” to come up with a working definition. That’s because a definition determines “what’s in and what’s out.” He said, “when you start talking about a definition, you almost always have to begin the conversation of potential exceptions.” For prior efforts like land mines and cluster munitions, this was certainly true. Countries defined these terms at the end of negotiations. The difference is that countries could get on board with the general principle of a ban and leave the details to the end because there was a common understanding of what a land mine or a cluster munition was. There is no such common understanding with autonomous weapons. It is entirely reasonable that states and individuals who care a great deal about avoiding civilian casualties are skeptical of endorsing a ban when they have no idea what they would actually be banning. Automation has been used in weapons for decades, and states need to identify which uses of autonomy are truly concerning. Politics gets in the way of solving these definitional problems, though. When the starting point for discussions is that some groups are calling for a ban on “autonomous weapons,” then the definition of “autonomous weapons” instantly becomes fraught.

  The result is a dynamic that is fundamentally different than other attempted weapons bans. This one isn’t being led by great powers, and it isn’t being led by democratic nations concerned about civilian harm either, as was the case with land mines and cluster munitions. The list of nations that support a ban on autonomous weapons is telling: Pakistan, Ecuador, Egypt, the Holy See, Cuba, Ghana, Bolivia, Palestine, Zimbabwe, Algeria, Costa Rica, Mexico, Chile, Nicaragua, Panama, Peru, Argentina, Venezuela, Guatemala, Brazil, Iraq, and Uganda (in order of when they endorsed a ban). Do Cuba, Zimbabwe, Algeria, and Pakistan really care more about human rights than countries like Canada, Norway, and Switzerland, who have not endorsed a ban? What the countries supporting a ban have in common is that they are not major military powers. With a few exceptions, like the Holy See, for most of these countries their support for a ban isn’t about protecting civilians; it’s an attempt to tie the hands of more-powerful nations. Most of the countries on this list don’t need to know what autonomous weapons are to be against them. Whatever autonomous weapons may be, these countries know they aren’t the ones building them.

  The prevailing assumption in international discussions seems to be that autonomous weapons would most benefit advanced militaries. In the short term, this is likely true, but as autonomous technology diffuses across the international system, the dynamic is likely to reverse. Fully autonomous weapons would likely benefit the weak. Keeping a human in the loop in contested environments will require protected communications, which is far more challenging than building a weapon that can hunt targets on its own. Nevertheless, these countries likely have the perception that a ban would benefit them.

  This sets up a situation where the NGOs and smaller states advocating for a ban would asymmetrically benefit, at least in the near term, and would not be giving up anything. This only generates resistance from states that are leaders in military robotics, many of whom see their technology development proceeding in an entirely reasonable and prudent fashion. The more that others want to take autonomous weapons away, the more appealing those weapons look to the countries that might build them.

  This is particularly the case when ban supporters have no answer for how law-abiding nations could defend themselves against those who do develop fully autonomous weapons. Steve Goose acknowledged this problem: “You know you’re not going to get every country in the world to sign something immediately, but you can get people to be affected by the stigma that would accompany a comprehensive prohibition,” he said. “You have to create this stigma that you don’t cross the line.” This can be a powerful tool in encouraging restraint, but it isn’t foolproof. There is a strong stigma against chemical weapons, but they continue to be used by dictators who care nothing for the rule of law or the suffering of civilians. Thus, for many the case against a ban is simple: it would disarm only the law-abiding states who signed it. This would be the worst of all possible outcomes, empowering the world’s most odious regimes with potentially dangerous weapons, while leaving nations who care about international law at a disadvantage. Proponents of a ban have yet to articulate a strategic rationale for why it would be in a leading military power’s self-interest to support a ban.

 
