Army of None


by Paul Scharre


  The origin of the X-47 was in the Joint Unmanned Combat Air Systems (J-UCAS) program, a joint effort among DARPA, the Navy, and the Air Force in the early 2000s to design an uninhabited combat aircraft. J-UCAS led to the development of two experimental X-45A aircraft, which in 2004 demonstrated the first drone designed for combat missions. Most drones today are intended for surveillance missions, which means they are designed for soaring and staying aloft for long periods of time. The X-45A, however, sported the same sharply angled wings and smooth top surfaces that define stealth aircraft like the F-117, B-2 bomber, and F-22 fighter. Designed to penetrate enemy air defenses, the X-45A was intended to perform close-in jamming and strike missions in support of manned aircraft. The program was never completed, though. In the Pentagon’s 2006 Quadrennial Defense Review, a major strategy and budget review conducted every four years, the J-UCAS program was scrapped and restructured.

  J-UCAS’s cancellation was curious because it came at the height of the post-9/11 defense budget boom and at a time when the Defense Department was waking up to the potential of robotic systems more broadly. Even while the military was deploying thousands of drones to Iraq and Afghanistan, the Air Force was highly resistant to the idea of uninhabited aircraft taking on combat roles in future wars. In the decade since J-UCAS’s cancellation, despite repeated opportunities, the Air Force has not restarted a program to build a combat drone. Drones play important roles in reconnaissance and counterterrorism, but when it comes to dogfighting against other enemy aircraft or taking down another country’s air defense network, those missions are currently reserved for traditional manned aircraft.

  The reality is that what may look from the outside like an unmitigated rush toward robotic weapons is, in actuality, a much more muddled picture inside the Pentagon. There is intense cultural resistance within the U.S. military to handing over combat jobs to uninhabited systems. Robotic systems are frequently embraced for support roles such as surveillance or logistics, but rarely for combat applications. The Army is investing in logistics robots, but not frontline armed combat robots. The Air Force uses drones heavily for surveillance, but is not pursuing air-to-air combat drones. Pentagon vision documents such as the Unmanned Systems Roadmaps or the Air Force’s 2013 Remotely Piloted Aircraft Vector often articulate ambitious dreams for robots in a variety of roles, but these documents are often disconnected from budgetary realities. Without funding, these visions are more hallucinations than reality. They articulate goals and aspirations, but do not necessarily represent the most likely future path.

  The downscoping of the ambitious J-UCAS combat aircraft to the plodding MQ-25 tanker is a case in point. In 2006, when the Air Force abandoned the J-UCAS experimental drone program, the Navy continued a program to develop a combat aircraft. The X-47B was supposed to mature the technology for a successor stealth drone, but in a series of internal Pentagon memoranda issued in 2011 and 2012, the Navy took a sharp turn away from a combat aircraft. Designs were scaled back in favor of a less ambitious nonstealthy surveillance drone. Concept sketches shifted from the futuristic, sleek, and stealthy lines of the X-45A and X-47B to something resembling the more pedestrian Predator and Reaper drones, already over a decade old at that point. The Navy, it appears, wasn’t immune to the same cultural resistance to combat drones found in the Air Force.

  The Navy’s resistance to developing an uninhabited combat aerial vehicle (UCAV) is particularly notable because it comes in the face of pressure from Congress and a compelling operational need. China has developed anti-ship ballistic and cruise missiles that can outrange carrier-based F-18 and F-35 aircraft. Only uninhabited aircraft, which can stay aloft far longer than would be possible with a human in the airplane, have sufficient range to keep the carrier relevant in the face of advanced Chinese missiles. Sea power advocates outside the Navy in Congress and think tanks have argued that without a UCAV on board, the aircraft carrier itself would be of limited utility against a high-technology opponent. Yet the Navy’s current plan is for its carrier-based drone, the MQ-25, to ferry gas for human-inhabited jets. For now, the Navy is deferring any plans for a future UCAV.

  The X-47B is an impressive machine and, to an outside observer, it may seem to portend a future of robot combat aircraft. Its appearance belies the reality that within the halls of the Pentagon there is little enthusiasm for combat drones, much less fully autonomous ones that would target on their own. Neither the Air Force nor the Navy has a program under way to develop an operational UCAV. The X-47B is a bridge to a future that, at least for now, doesn’t exist.

  THE LONG-RANGE ANTI-SHIP MISSILE

  The Long-Range Anti-Ship Missile (LRASM) is a state-of-the-art missile pushing the boundaries of autonomy. It is a joint DARPA-Navy-Air Force project intended to fill a gap in the U.S. military’s ability to strike enemy ships at long ranges. Since the retirement of the TASM, the Navy has relied on the shorter-range Harpoon anti-ship missile, which has a range of only 67 nautical miles. The LRASM, on the other hand, can fly up to 500 nautical miles. LRASM also sports a number of advanced survivability features, including the ability to autonomously detect and evade threats while en route to its target.

  LRASM uses autonomy in several novel ways, which has alarmed some opponents of autonomous weapons. The LRASM has been featured in no fewer than three New York Times articles, with some critics claiming it exhibits “artificial intelligence outside human control.” In one of the articles, Steve Omohundro, a physicist and leading thinker on advanced artificial intelligence, stated that “an autonomous weapons arms race is already taking place.” It is a leap, though, to assume that these advances in autonomy mean states intend to pursue autonomous weapons that would hunt for targets on their own.

  The actual technology behind LRASM, while cutting edge, hardly warrants these breathless treatments. LRASM has many advanced features, but the critical question is who chooses LRASM’s targets—a human or the missile itself? On its website, Lockheed Martin, the developer of LRASM, states:

  LRASM employs precision routing and guidance. . . . The missile employs a multi-modal sensor suite, weapon data link, and enhanced digital anti-jam Global Positioning System to detect and destroy specific targets within a group of numerous ships at sea. . . . This advanced guidance operation means the weapon can use gross target cueing data to find and destroy its pre-defined target in denied environments.

  While the description speaks of advanced precision guidance, it doesn’t say much that would imply artificial intelligence that would hunt for targets on its own. What was the genesis of the criticism? Well . . . Lockheed used to describe LRASM differently.

  Before the first New York Times article in November 2014, Lockheed’s description of LRASM boasted much more strongly of its autonomous features. It used the word “autonomous” three times in the description, describing it as an “autonomous, precision-guided anti-ship” missile that “cruises autonomously” and has an “autonomous capability.” What exactly the weapon was doing autonomously was somewhat ambiguous, though.

  After the first New York Times article, the description changed, substituting “semi-autonomous” for “autonomous” in multiple places. The new description also clarified the nature of the autonomous features, stating “The semi-autonomous guidance capability gets LRASM safely to the enemy area.” Eventually, even the words “semi-autonomous” were removed, leading to the description online today which only speaks of “precision routing and guidance” and “advanced guidance.” Autonomy isn’t mentioned at all.

  What should we make of this shifting story line? Presumably the weapon’s functionality hasn’t changed, merely the language used to describe it. So how autonomous is LRASM?

  Lockheed has described LRASM as using “gross target cueing data to find and destroy its predefined target in denied environments.” If “predefined” means that the specific target has been chosen in advance by a human operator, LRASM would be a semiautonomous weapon. On the other hand, if “predefined” means that the human has chosen only a general class of targets, such as “enemy ships,” and given the missile the freedom to hunt for these targets over a wide area and engage them on its own, then it would be an autonomous weapon.

  Helpfully, Lockheed posted a video online that explains LRASM’s functionality. In a detailed combat simulation, the video shows precisely which engagement-related functions would be done autonomously and which by a human. In the video, a satellite identifies a hostile surface action group (SAG)—a group of enemy ships—and relays their location to a U.S. destroyer. The video shows a U.S. sailor looking at the enemy ships on his console. He presses a button and two LRASMs leap from their launching tubes in a blast of flame into the air. The text on the video explains the LRASMs have been launched against the enemy cruiser, part of the hostile SAG. Once airborne, the LRASMs establish a line-of-sight datalink with the ship. As they continue to fly out toward the enemy SAG, they transition to satellite communications. A U.S. F/A-18E fighter aircraft then fires a third LRASM (this one air-launched) against an enemy destroyer, another ship in the SAG. The LRASMs enter a “communications and GPS-denied environment.” They are now on their own.

  The LRASMs maneuver via planned navigational routing, moving from one predesignated waypoint to another. Then, unexpectedly, the LRASMs encounter a “pop-up threat.” In the video, a large red bubble appears in the sky, a no-go zone for the missiles. The missiles now execute “autonomous routing,” detouring around the red bubble on their own. A second pop-up threat appears and the LRASMs modify their route again, moving around the threat to continue on their mission.
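The video does not reveal how LRASM actually replans its route, but the core idea, detouring a straight flight leg around a circular no-go zone, can be sketched in a few lines. Everything below (the function name, the geometry, the margin) is a hypothetical illustration, not the missile’s real logic:

```python
import math

def reroute(start, goal, threat_center, threat_radius, margin=1.0):
    """Toy detour planner: if the straight leg from start to goal passes
    through a circular no-go zone, insert one detour waypoint pushed just
    outside the bubble. Illustrative only; LRASM's routing is not public."""
    sx, sy = start
    gx, gy = goal
    cx, cy = threat_center
    dx, dy = gx - sx, gy - sy
    seg_len = math.hypot(dx, dy)
    # Project the threat center onto the leg to find the closest approach.
    t = max(0.0, min(1.0, ((cx - sx) * dx + (cy - sy) * dy) / seg_len ** 2))
    px, py = sx + t * dx, sy + t * dy
    dist = math.hypot(px - cx, py - cy)
    if dist >= threat_radius:
        return [start, goal]            # leg is clear; no detour needed
    if dist < 1e-9:                     # center sits on the leg: step sideways
        ux, uy = -dy / seg_len, dx / seg_len
    else:                               # otherwise push radially out of the bubble
        ux, uy = (px - cx) / dist, (py - cy) / dist
    detour = (cx + ux * (threat_radius + margin),
              cy + uy * (threat_radius + margin))
    return [start, detour, goal]
```

A threat bubble appearing directly on the planned leg yields a three-point route with a detour waypoint just outside the bubble; a bubble well off the leg leaves the route unchanged, which mirrors the video’s “autonomous routing” behavior.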

  As the LRASMs approach their target destination, the video shifts to a new perspective focusing on a single missile, simulating what the missile’s sensors see. Five dots appear on the screen representing objects detected by the missile’s sensors, labeled “ID:71, ID:56, ID:44, ID:24, ID:19.” The missile begins a process the video calls “organic [area of uncertainty] reduction.” That’s military jargon for a bubble of uncertainty. When the missile was launched, the human launching it knew where the enemy ship was located, but ships move. By the time the missile arrives at the ship, the ship could be somewhere else. The “area of uncertainty” is the bubble within which the enemy ship could be, a bubble that gets larger over time.
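The area of uncertainty grows in a simple, predictable way: the longer the missile flies, the farther the ship could have sailed from its last known fix. A minimal model of that bubble (an illustration with hypothetical parameter names, not the missile’s actual math):

```python
def area_of_uncertainty_radius(initial_error_nm, max_ship_speed_kts, elapsed_hours):
    """Radius (in nautical miles) of the circle the target ship could be in:
    the initial position-fix error plus the farthest the ship could have
    sailed since the fix. Illustrative only."""
    return initial_error_nm + max_ship_speed_kts * elapsed_hours
```

For example, a ship fixed to within 1 nautical mile that can make 30 knots could be anywhere inside a 31-nautical-mile circle an hour later, which is why the missile must shrink the bubble with its own sensors before striking.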

  Since there could be multiple ships in this bubble, the LRASM begins to narrow down its options to determine which ship was the one it was sent to destroy. How this occurs is not specified, but on the video a large “area of uncertainty” appears around all the dots, then quickly shrinks to surround only three of them: ID:44, ID:24, and ID:19. The missile then moves to the next phase of its targeting process: “target classification.” The missile scans each object, finally settling on ID:24. “Criteria match,” the video states, “target classified.” ID:24, the missile has determined, is the ship it was sent to destroy.
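How the missile narrows its options is not specified, but the two-stage process the video depicts, a geometric filter against the uncertainty bubble followed by classification against the human-chosen target, can be sketched as follows. All of the field names and the matching criterion here are hypothetical; the real classification criteria are not public:

```python
import math

def select_target(contacts, launch_fix, aou_radius, signature):
    """Toy two-stage narrowing, mirroring the video's sequence:
    1) keep only contacts inside the area of uncertainty;
    2) among those, keep contacts matching the signature of the specific
       target the human chose at launch.
    Engage only on an unambiguous match; otherwise report no target."""
    # Stage 1: geometric filter against the uncertainty bubble.
    in_bubble = [c for c in contacts
                 if math.dist(c["position"], launch_fix) <= aou_radius]
    # Stage 2: "target classification" against the predefined signature.
    matches = [c for c in in_bubble if c["features"] == signature]
    return matches[0]["id"] if len(matches) == 1 else None
```

Note the design point this toy version makes explicit: the signature is an input supplied by the human at launch. The function can only confirm which contact matches the target it was given; it has no way to decide that some other ship is worth attacking.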

  Having zeroed in on the right target, the missiles begin their final maneuvers. Three LRASMs descend below the enemy ships’ radars to skim just above the water’s surface. On their final approach, the missiles scan the ships one last time to confirm their targets. The enemy ships fire their defenses to try to hit the incoming missiles, but it’s too late. Two enemy ships are hit.

  The video conveys the LRASM’s impressive autonomous features, but is it an autonomous weapon? The autonomous/semiautonomous/advanced guidance described on the website is clearly on display. In the video, midway through the flight the missiles enter a “communications and GPS-denied environment.” Within this bubble, the missiles are on their own; they cannot call back to human controllers. Any actions they take are autonomous, but the types of actions they can take are limited. Just because the weapon is operating without a communications link to human controllers doesn’t mean it has the freedom to do anything it wishes. The missile isn’t a teenager whose parents have left town for the weekend. It has only been programmed to perform certain tasks autonomously. The missile can identify pop-up threats and autonomously reroute around them, but it doesn’t have the freedom to choose its own targets. It can identify and classify objects to confirm which object was the one it was sent to destroy, but that isn’t the same as being able to choose which target to destroy.

  Screenshots from the LRASM video. In a video simulation depicting how the LRASM functions, a satellite transmits the location of enemy ships to a human, who authorizes the attack on those specific enemy ships.

  The LRASMs are launched against specific enemy ships, in this case a “SAG Cruiser.”

  While en route to their human-designated targets, the LRASMs employ autonomous routing around pop-up threats (shown as a bubble).

  Because the human-designated target is a moving ship, by the time the LRASM arrives at the target area there is an “area of uncertainty” that defines the ship’s possible location. Multiple objects are identified within this area of uncertainty. LRASM uses its onboard (“organic”) sensors to reduce the area of uncertainty and identify the human-designated target. LRASM confirms “ID:24” is the target it was sent to destroy. While the missile has many advanced features, it does not choose its own target. The missile uses its sensors to confirm the human-selected target.

  It is the human who decides which enemy ship to destroy. The critical point in the video isn’t at the end of the missile’s flight as it zeroes in on the ship—it’s at the beginning. When the LRASMs are launched, the video specifies that they are launched against the “SAG cruiser” and “SAG destroyer.” The humans are launching the missiles at specific ships, which the humans have tracked and identified via satellites. The missiles’ onboard sensors are then used to confirm the targets before completing the attack. LRASM is only one piece of a weapon system that consists of the satellite, ship/aircraft, human, and missile. The human is “in the loop,” deciding which specific targets to engage in the broader decision cycle of the weapon system. The LRASM merely carries out the engagement.

  BREAKING THE SPEED LIMIT: FAST LIGHTWEIGHT AUTONOMY

  Dr. Stuart Russell is a pioneering researcher in artificial intelligence. He literally wrote the textbook that is used to teach AI researchers around the world. Russell is also one of the leaders in the AI community calling for a ban on “offensive autonomous weapons beyond meaningful human control.” One research program Russell has repeatedly raised concerns about is DARPA’s Fast Lightweight Autonomy (FLA).

  FLA is a research project to enable high-speed autonomous navigation in congested environments. Researchers outfit commercial off-the-shelf quadcopters with custom sensors, processors, and algorithms with the goal of making them autonomously navigate through the interior of a cluttered warehouse at speeds up to forty-five miles per hour. In a press release, DARPA compared the zooming quadcopters to the Millennium Falcon zipping through the hull of a crashed Star Destroyer in Star Wars: The Force Awakens. (I would have gone with the Falcon maneuvering through the asteroid field in The Empire Strikes Back . . . or the Falcon zipping through the interior of Death Star II in Return of the Jedi. But you get the idea: fast = awesome.) In a video accompanying the press release, shots of the flying quadcopters are set to peppy instrumental music. It’s incongruous because in the videos released so far the drones aren’t actually moving through obstacles at 45 mph . . . yet. For now, they are creeping their way around obstacles, but they are doing so fully autonomously. FLA’s quadcopters use a combination of high-definition cameras, sonar, and laser light detection and ranging (LIDAR) to sense obstacles and avoid them all on their own.
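The sense-and-avoid loop at the heart of this can be imagined in miniature: from a fan of range readings, steer toward the bearing with the most open space, or stop if nothing is clear. This is a toy sketch under assumed sensor inputs, not the FLA teams’ actual algorithms:

```python
def pick_heading(scan, safe_range=3.0):
    """Given a LIDAR-like scan as (bearing_degrees, range_meters) pairs,
    return the bearing with the greatest measured clearance, or None if no
    direction is safe (hover/stop). A toy illustration only."""
    clear = [(bearing, rng) for bearing, rng in scan if rng >= safe_range]
    if not clear:
        return None                      # no safe direction: stop and hover
    # Head toward the largest open gap the sensors can see.
    best_bearing, _ = max(clear, key=lambda pair: pair[1])
    return best_bearing
```

The real systems are far harder than this sketch suggests, because, as the next paragraph describes, the obstacles must be detected and tracked in real time on limited onboard computing while the vehicle moves.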

  Autonomous navigation around obstacles, even at slow speeds, is no mean feat. The quadcopter’s sensors need to detect potential obstacles and track them as the quadcopter moves, a processor-hungry task. Because the quadcopter can only carry so much computing power, it is limited in how quickly it can process the obstacles it sees. The program aims to speed this up in the coming months. As DARPA program manager Mark Micire explained in a press release, “The challenge for the teams now is to advance the algorithms and onboard computational efficiency to extend the UAVs’ perception range and compensate for the vehicles’ mass to make extremely tight turns and abrupt maneuvers at high speeds.” In other words, to pick up the pace.

  FLA’s quadcopters don’t look menacing, but that isn’t because of the up-tempo music or the cutesy Star Wars references. It’s because nothing in FLA has anything to do with weapons engagements. Not only are the quadcopters unarmed, they aren’t performing any tasks associated with searching for and identifying targets. DARPA explains FLA’s intended use as indoor reconnaissance:

  FLA technologies could be especially useful to address a pressing surveillance shortfall: Military teams patrolling dangerous overseas urban environments and rescue teams responding to disasters such as earthquakes or floods currently can use remotely piloted unmanned aerial vehicles (UAVs) to provide a bird’s-eye view of the situation, but to know what’s going on inside an unstable building or a threatening indoor space often requires physical entry, which can put troops or civilian response teams in danger. The FLA program is developing a new class of algorithms aimed at enabling small UAVs to quickly navigate a labyrinth of rooms, stairways and corridors or other obstacle-filled environments without a remote pilot.

  To better understand what FLA was doing, I caught up with one of the project’s research teams from the University of Pennsylvania’s General Robotics Automation Sensing and Perception (GRASP) lab. Videos of GRASP’s nimble quadcopters have repeatedly gone viral online, showing swarms of drones artfully zipping through windows, seemingly dancing in midair, or playing the James Bond theme song on musical instruments. I asked Dr. Daniel Lee and Dr. Vijay Kumar, the principal investigators of GRASP’s work on FLA, what they thought about the criticism that the program was paving the way toward autonomous weapons. Lee explained that GRASP’s research was “very basic” and focused on “fundamental capabilities that are generally applicable across all of robotics, including industrial and consumer uses.” The technology GRASP focused on was “localization, mapping, obstacle detection and high-speed dynamic navigation.” Kumar added that their motivations for this research were “applications to search and rescue and first response where time-critical response and navigation at high speeds are critical.”

 
