Army of None


by Paul Scharre


  THE BRIMSTONE MISSILE

  Similar to the U.S. LRASM, the United Kingdom’s Brimstone missile has come under fire from critics who have questioned whether it has too much autonomy. The Brimstone is an aircraft-launched fire-and-forget missile designed to destroy ground vehicles or small boats. It can accomplish this mission in a variety of ways.

  Brimstone has two primary modes of operation: Single Mode and Dual Mode. In Single Mode, a human “paints” the target with a laser and the missile homes in on the laser reflection. The missile will go wherever the human points the laser, allowing the human to provide “guidance all the way to the target.” Dual Mode combines the laser guidance with a millimeter-wave (MMW) radar seeker for “fast moving and maneuvering targets and under narrow Rules of Engagement.” The human designates the target with a laser; then there is a “handoff” from the laser to the MMW seeker in the final stage so the weapon can home in on fast-moving targets. In both modes of operation, the missile is clearly engaging targets that have been designated by a human, making it a semiautonomous weapon.

  However, the developer also advertises another mode of operation, “a previously-developed fire-and-forget, MMW-only mode” that can be enabled “via a software role change.” The developer explains:

  This mode provides through-weather targeting, kill box-based discrimination and salvo launch. It is highly effective against multi-target armor formations. Salvo-launched Brimstones self-sort based on firing order, reducing the probability of overkill for increased one-pass lethality.

  This targeting mode would allow a human to launch a salvo of Brimstones against a group of enemy tanks, letting the missiles sort out among themselves which missile hits which tank. According to a 2015 Popular Mechanics article, in this mode the Brimstone is fairly autonomous:

  It can identify, track, and lock on to vehicles autonomously. A jet can fly over a formation of enemy vehicles and release several Brimstones to find targets in a single pass. The operator sets a “kill box” for Brimstone, so it will only attack within a given area. In one demonstration, three missiles hit three target vehicles while ignoring nearby neutral vehicles.

  On the Brimstone’s spec sheet, the developer also describes similar functionality against fast-moving small boats, also known as fast inshore attack craft (FIAC):

  In May 2013, multiple Brimstone missiles operating in an autonomous [millimeter] wave (MMW) mode completed the world’s first single button, salvo engagement of multiple FIAC, destroying three vessels (one moving) inside a kill box, while causing no damage to nearby neutral vessels.

  When operating in MMW-only mode, is the Brimstone an autonomous weapon? While the missile has a reported range in excess of 20 kilometers, it cannot loiter to search for targets. This means that the human operator must know there are valid targets—ground vehicles or small boats—within the kill box before launch in order for the missile to be effective.

  The Brimstone can engage these targets using some innovative features. A pilot can launch a salvo of multiple Brimstones against a group of targets within a kill box and the missiles themselves “self-sort based on firing order” to hit different targets. This makes the Brimstone especially useful for defending against enemy swarm attacks. For example, Iran has harassed U.S. ships with swarming small boats that could overwhelm ship defenses and carry out a USS Cole–type suicide attack. Navy helicopters armed with Brimstones would be an extremely effective defense against boat swarms, allowing pilots to take out an entire group of enemy boats at once without having to individually target each one.
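  MBDA has not published how this self-sorting actually works, but the behavior described on the spec sheet, kill box discrimination plus deconfliction by firing order, can be illustrated with one plausible scheme: each missile filters its detections to the kill box, ranks them in a canonical order that every missile in the salvo would compute identically, and takes the slot matching its own position in the firing sequence. The Python sketch below is purely illustrative; all names and logic are hypothetical, not the actual seeker software.

    from dataclasses import dataclass

    @dataclass
    class Contact:
        x: float  # seeker-detected position within the engagement area, km
        y: float

    def in_kill_box(c, box):
        # Kill box discrimination: anything outside the human-set box is ignored.
        (xmin, ymin), (xmax, ymax) = box
        return xmin <= c.x <= xmax and ymin <= c.y <= ymax

    def choose_target(contacts, box, firing_order):
        # Every missile computes the same canonical ranking of valid contacts,
        # then takes the slot matching its firing order. Identical inputs plus
        # a unique firing order spread the salvo across distinct targets with
        # no inter-missile communication required.
        valid = sorted((c for c in contacts if in_kill_box(c, box)),
                       key=lambda c: (c.x, c.y))
        if not valid:
            return None  # nothing valid in the box; the shot is wasted
        return valid[firing_order % len(valid)]

    box = ((0.0, 0.0), (5.0, 5.0))
    contacts = [Contact(3.0, 1.0), Contact(1.0, 2.0), Contact(9.0, 9.0)]
    for k in range(3):  # a three-missile salvo; the third contact is outside the box
        print(k, choose_target(contacts, box, k))

  In this toy version, a second missile is assigned to a target only after every valid target already has one, which is consistent with the claim that self-sorting reduces the probability of overkill.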

  Even with all of the Brimstone’s features, the human user still needs to launch it at a known group of targets. Because it cannot loiter, if there weren’t targets in the kill box when the missile activated its seeker, the missile would be wasted. Unlike a drone, the missile couldn’t return to base. The salvo launch capability allows the pilot to launch multiple missiles against a swarm of targets, rather than select each one individually. This makes a salvo of Brimstones similar to the Sensor Fuzed Weapon that is used to take out a column of tanks. Even though the missiles themselves might self-sort which missile hits which target, the human is still deciding to attack that specific cluster of targets. Even in MMW-only mode, the Brimstone is a semiautonomous weapon.

  The line between the semiautonomous Brimstone and a fully autonomous weapon that would choose its own targets is a thin one. It isn’t based on the seeker or the algorithms. The same seeker and algorithms could be used on a future weapon that could loiter over the battlespace—a missile with an upgraded engine or a drone that could patrol an area. A future weapon that patrolled a kill box, rather than entered one at a snapshot in time, would be an autonomous weapon, because the human could send the weapon to monitor the kill box without knowledge of any specific targets. It would allow the human to fire the weapon “blind” and let the weapon decide if and when to strike targets.

  Even if the Brimstone doesn’t quite cross the line to an autonomous weapon, it takes one more half step toward it, to the point where all that is needed is a light shove to cross the line. An MMW-only Brimstone could be converted into a fully autonomous weapon simply by upgrading the missile’s engine so that it could loiter for longer. Or the MMW-only mode algorithms and seeker could be placed on a drone. Notably, the MMW-only mode is enabled in the missile by a software change. As autonomous technology continues to advance, more missiles around the globe will step right up to—or cross—that line.

  Would the United Kingdom be willing to cross that line? The debate surrounding another British program, the Taranis drone, shows the difficulty in ascertaining how far the British might be willing to push the technology.

  THE TARANIS DRONE

  The Taranis is a next-generation experimental combat drone similar to those being developed by the United States, India, Russia, China, France, and Israel. BAE Systems, developer of the Taranis, has given one of the most extensive descriptions of how a combat drone’s autonomy might work for weapons engagements. Similar to the X-47B, the Taranis is a demonstrator airplane, but the British military intends to carry the demonstration further than the United States and conduct simulated weapons engagements with the Taranis.

  Information released by BAE shows how Taranis might be employed. It describes a simulated weapons test that “will demonstrate the ability of [an unmanned combat aircraft system] to: fend off hostile attack; deploy weapons deep in enemy territory and relay intelligence information.” In the test:

  1. Taranis would reach the search area via a preprogrammed flight path in the form of a three-dimensional corridor in the sky. Intelligence would be relayed to mission command.

  2. When Taranis identifies a target, it would be verified by mission command.

  3. On the authority of mission command, Taranis would carry out a simulated firing and then return to base via the programmed flight path.

  At all times, Taranis will be under the control of a highly-trained ground crew. The Mission Commander will both verify targets and authorise simulated weapons release.
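  In control-flow terms, the sequence BAE describes is a state machine in which the aircraft can search and track on its own but cannot enter the engagement state without an input from mission command. A minimal sketch of that gate, with hypothetical state names (BAE has not published implementation details):

    from enum import Enum, auto

    class Phase(Enum):
        TRANSIT = auto()                 # flying the preprogrammed corridor
        SEARCH = auto()                  # hunting for targets, relaying intelligence
        AWAITING_AUTHORIZATION = auto()  # target found, mission command verifying
        ENGAGE = auto()                  # simulated weapons release
        RETURN_TO_BASE = auto()

    def next_phase(phase, at_search_area=False, target_found=False, authorized=False):
        # The only path into ENGAGE runs through an explicit human decision;
        # a denial drops the aircraft back into SEARCH rather than firing.
        if phase is Phase.TRANSIT and at_search_area:
            return Phase.SEARCH
        if phase is Phase.SEARCH and target_found:
            return Phase.AWAITING_AUTHORIZATION
        if phase is Phase.AWAITING_AUTHORIZATION:
            return Phase.ENGAGE if authorized else Phase.SEARCH
        if phase is Phase.ENGAGE:
            return Phase.RETURN_TO_BASE
        return phase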

  This protocol keeps the human in the loop to approve each target, which is consistent with other statements by BAE leadership. In a 2016 panel at the World Economic Forum in Davos, BAE Chairman Sir Roger Carr described autonomous weapons as “very dangerous” and “fundamentally wrong.” Carr made clear that BAE only envisioned developing weapons that kept a connection to a human who could authorize and remain responsible for lethal decision-making.

  In a 2016 interview, Taranis program manager Clive Marrison made a similar statement that “decisions to release a lethal mechanism will always require a human element given the Rules of Engagement used by the UK in the past.” Marrison then hedged, saying, “but the Rules of Engagement could change.”

  The British government reacted swiftly. Following multiple media articles alleging BAE was building in the option for Taranis to “attack targets of its own accord,” the UK government released a statement the next day stating:

  The UK does not possess fully autonomous weapon systems and has no intention of developing or acquiring them. The operation of our weapons will always be under human control as an absolute guarantee of human oversight, authority and accountability for their use.

  The British government’s full-throated denial of autonomous weapons would appear to be as clear a policy statement as there could be, but an important asterisk is needed regarding how the United Kingdom defines an “autonomous weapon system.” In its official policy, expressed in UK Joint Doctrine Note 2/11, “The UK Approach to Unmanned Aircraft Systems,” the British military describes an autonomous system as one that “must be capable of achieving the same level of situational understanding as a human.” Short of that, a system is defined as “automated.” This definition of autonomy hinges on the complexity of the system rather than its function, and it differs from how many others in the debate over autonomous weapons, including the U.S. government, use the term. The United Kingdom’s stance is not a product of sloppy language; it’s a deliberate choice. The UK doctrine note continues:

  As computing and sensor capability increases, it is likely that many systems, using very complex sets of control rules, will appear and be described as autonomous systems, but as long as it can be shown that the system logically follows a set of rules or instructions and is not capable of human levels of situational understanding, then they should only be considered to be automated.

  This definition shifts the lexicon on autonomous weapons dramatically. When the UK government uses the term “autonomous system,” it is describing systems with human-level intelligence that are more analogous to the “general AI” described by U.S. Deputy Defense Secretary Work. The effect of this definition is to shift the debate on autonomous weapons to far-off future systems and away from potential near-term weapon systems that may search for, select, and engage targets on their own—what others might call “autonomous weapons.” Indeed, in its 2016 statement to the United Nations meetings on autonomous weapons, the United Kingdom stated: “The UK believes that [lethal autonomous weapon systems] do not, and may never, exist.” That is to say, Britain may develop weapons that would search for, select, and engage targets on their own; it simply would call them “automated weapons,” not “autonomous weapons.” In fact, the UK doctrine note refers to systems such as the Phalanx gun (a supervised autonomous weapon) as “fully automated weapon systems.” The doctrine note leaves open the possibility of their development, provided they pass a legal weapons review showing they can be used in a manner compliant with the laws of war.

  In practice, the British government’s stance on autonomous weapons is not dissimilar from that expressed by U.S. defense officials. Humans will remain involved in lethal decision-making . . . at some level. That might mean a human operator launching an autonomous/automated weapon into an area and delegating to it the authority to search for and engage targets on its own. Whether the public would react differently to such a weapon if it were rebranded an “automated weapon” is unclear.

  Even if the United Kingdom’s stance retains some flexibility, there is still a tremendous amount of transparency into how the U.S. and UK governments are approaching the question of autonomous weapons. Weapons developers like BAE, MBDA, and Lockheed Martin have detailed descriptions of their weapon systems on their websites, which is not uncommon for defense companies in democratic nations. DARPA describes its research programs publicly and in detail. Defense officials in both countries openly engage in a dialogue about the boundaries of autonomy and the appropriate role of humans and machines in lethal force. This transparency stands in stark contrast to authoritarian regimes.

  RUSSIA’S WAR BOTS

  While the United States has been very reluctant to arm ground robots, with only one short-lived effort during the Iraq war and no current development programs, Russia has shown no such hesitation. Russia is developing a fleet of ground combat robots for a variety of missions, from protecting critical installations to urban combat. Many of Russia’s ground robots are armed, ranging from small robots to augment infantry troops to robotic tanks. How much autonomy Russia is willing to place into its ground robots will have a profound impact on the future of land warfare.

  The Platform-M, a tracked vehicle roughly the size of a four-wheeler and armed with a grenade launcher and an assault rifle, sits at the smaller end of Russia’s war bots. In 2014, the Platform-M took part in an urban combat exercise alongside Russian troops. According to an official statement from the Russian military, “the military robots were assigned to eliminate provisional illegal armed formations in urban conditions and striking stationary and mobile targets.” The Russian military did not describe the degree of the Platform-M’s autonomy, although according to the developer:

  Platform-M . . . is used for gathering intelligence, for discovering and eliminating stationary and mobile targets, for firepower support, for patrolling and for guarding important sites. The unit’s weapons can be guided, it can carry out supportive tasks and it can destroy targets in automatic or semiautomatic control systems; it is supplied with optical-electronic and radio reconnaissance locators.

  The phrase “can destroy targets in automatic . . . control” makes it sound like an autonomous weapon. This claim should be viewed with some skepticism. For one, videos of Russian robots show soldiers selecting targets on a computer screen. More importantly, detecting targets autonomously in a ground combat environment is far more technically challenging than targeting enemy radars as the Harpy does or enemy ships on the high seas as TASM does. The weapons Platform-M carries—a grenade launcher and assault rifle—would be effective against people, not armored vehicles like tanks or armored personnel carriers. People don’t emit in the electromagnetic spectrum the way radars do. They aren’t “cooperative targets.” When this claim was made in 2014, autonomously finding a person in a cluttered ground combat environment would have been difficult. Advances in neural networks have since made it much easier to detect people, but discerning friend from foe remains a challenge.
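  The asymmetry is easy to demonstrate. An off-the-shelf pretrained detector will find people in an image in a few lines of code, but its output is only a bounding box, the class label “person,” and a confidence score; nothing in it distinguishes combatant from civilian, friend from foe. A sketch using torchvision’s COCO-pretrained Faster R-CNN (assuming torchvision 0.13 or later; the image file name is a stand-in):

    import torch
    from PIL import Image
    from torchvision.models.detection import fasterrcnn_resnet50_fpn
    from torchvision.transforms.functional import to_tensor

    PERSON = 1  # COCO class index for "person"

    model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
    image = to_tensor(Image.open("scene.jpg").convert("RGB"))  # hypothetical input

    with torch.no_grad():
        pred = model([image])[0]

    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if label == PERSON and score > 0.8:
            # The model reports that there is a person and where, with a
            # confidence score. It says nothing about who that person is.
            print(f"person at {box.tolist()} (confidence {score:.2f})")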

  The autonomous target identification problem Russian war bots face is far more challenging than the South Korean sentry gun on the DMZ. In a demilitarized zone such as that separating North and South Korea, a country might decide to place stationary sentry guns along the border and authorize them to shoot anything with an infrared (heat) signature coming across. Such a decision would not be without its potential problems. Sentry guns that lack any ability to discriminate valid military targets from civilians could senselessly murder innocent refugees attempting to flee an authoritarian regime. In general, though, a DMZ is a more controlled environment than offensive urban combat operations. Authorizing static, defensive autonomous weapons that are fixed in place would be far different from authorizing roving autonomous weapons intended to maneuver in urban areas where combatants are mixed in among civilians.

  Technologies exist today that could be used for automatic responses against military targets, if the Russians wanted to give such a capability to the Platform-M. The technology is fairly crude, though. For example, the U.S. Boomerang shot detection system uses an array of microphones to detect incoming bullets and calculate their origin. According to the developer, “Boomerang uses passive acoustic detection and computer-based signal processing to locate a shooter in less than a second.” By comparing the relative time of arrival of a bullet’s shock wave at the various microphones, Boomerang and other shot detection systems can pinpoint a shooter’s direction. The system can then call out the location of a shot, for example, “Shot. Two o’clock. 400 meters.” Alternatively, an acoustic shot detection system can be connected directly to a camera or remote weapon station and automatically aim it at the shooter. Going the next step to allow the gun to automatically fire back at the shooter would not be technically challenging. Once the shot has been detected and the gun aimed, all that remains is to pull the trigger.
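  The geometry behind such systems is straightforward. Treating the sound as a plane wave crossing the microphone array, the difference in arrival times between any two microphones is proportional to the projection of their separation onto the wave’s direction of travel, so the direction can be recovered by least squares. The sketch below shows only that time-difference-of-arrival geometry; a real system like Boomerang also exploits the bullet’s supersonic shock wave and does far more signal processing.

    import numpy as np

    C = 343.0  # speed of sound in air, m/s

    def bearing_from_arrival_times(mic_positions, arrival_times):
        # Far-field plane-wave model: t_i - t_0 = (p_i - p_0) . u / C,
        # where u is the unit vector of the wave's direction of travel.
        p = np.asarray(mic_positions, dtype=float)
        t = np.asarray(arrival_times, dtype=float)
        A = p[1:] - p[0]          # microphone baselines relative to mic 0
        b = C * (t[1:] - t[0])    # path-length differences
        u, *_ = np.linalg.lstsq(A, b, rcond=None)
        u /= np.linalg.norm(u)    # unit propagation direction
        toward_source = -u        # the shot came from the opposite direction
        return np.degrees(np.arctan2(toward_source[1], toward_source[0]))

    # Four microphones on a one-meter square; simulate a shot from due +x,
    # so the wave travels in the -x direction and reaches the +x mics first.
    mics = [(0.5, 0.5), (0.5, -0.5), (-0.5, -0.5), (-0.5, 0.5)]
    u_true = np.array([-1.0, 0.0])
    times = [np.dot(m, u_true) / C for m in mics]
    print(bearing_from_arrival_times(mics, times))  # ~0 degrees: shooter toward +x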

  It’s possible this is what Russia means when it says the Platform-M “can destroy targets in automatic . . . control.” From an operational perspective, however, authorizing automatic return fire would be quite hazardous. It would require extreme confidence in the shot detection system’s ability to weed out false positives and to not be fooled by acoustic reflections and echoes, especially in urban areas. Additionally, the gun would have no ability to account for collateral damage—say, to hold fire because the shooter is using human shields. Finally, such a system would be a recipe for fratricide, with robots potentially shooting friendly troops or other friendly robots automatically. Two robots on the same side could become trapped in a never-ending loop of automatic fire and response, mindlessly exchanging gunfire until they exhausted their ammunition or destroyed each other. It is unclear whether this is what Russia intends, but from a technical standpoint it would be possible.

  Russia’s other ground combat robots scale up in size and sophistication from the Platform-M. The MRK-002-BG-57 “Wolf-2” is the size of a small car and outfitted with a 12.7 mm heavy machine gun. According to David Hambling of Popular Mechanics, “In the tank’s automated mode, the operator can remotely select up to 10 targets, which the robot then bombards. Wolf-2 can act on its own to some degree (the makers are vague about what degree), but the decision to use lethal force is ultimately under human control.” The Wolf-2 sits among a family of similarly sized robotic vehicles. The amphibious Argo is roughly the size of a Mini Cooper, sports a machine gun and rocket-propelled grenade launcher, and can swim at speeds up to 2.5 knots. The A800 Mobile Autonomous Robotic System (MARS) is an (unarmed) infantry support vehicle the size of a compact car that can carry four infantry soldiers and their gear. Pictures online show Russian soldiers riding on the back, looking surprisingly relaxed as the tracked robot cruises through an off-road course.

 
