Unmanned: Drones, Data, and the Illusion of Perfect Warfare
From then on, everyone clamored to adopt the model of finding, fixing, and finishing the target, what the air force had been striving to do since Desert Storm in trying to find Scuds and then Serbs and then the Sheikh, what the army wanted in becoming more like the air force in adopting precision and targeting, what the counter-IED empire was doing in acquiring its own drones and inventing “attack the network” to thwart later roadside bombs, and even what special operations forces and the CIA were learning to perfect in their perpetual pursuit of each individual high-value target to be ticked off an endless list. The Zarqawi mission would augur and be a miniature version of the killing of bin Laden that would take place seven years later. And it would set the stage for President Obama to hold back on action in Iraq almost a decade later; “boots on the ground” was no longer the standard for measuring American military capability, certainly not in the case of going after the ISIS fighters. But, as in Vietnam, finding, fixing, and finishing the target can also be magnificently immediate while taking on a kind of vacant quality separated from any larger human endeavor or achievement.
CHAPTER FIFTEEN
Beyond the Speed of War
He is fair in manhood, dignified in bearing,
graced with charm in his whole person.
He has a strength more mighty than yours,
unsleeping he is by day and by night.
TABLET I, EPIC OF GILGAMESH
In an obscure office building tucked behind the Fair Oaks Mall in Fairfax, Virginia, and at highly secure data centers in a half dozen locations from Maryland to California, is the national signatures pool, a massive electronic library that catalogs hundreds of thousands of signatures, the digital marks of our entire world.1 The signatures database, which has been meticulously collected for decades, catalogs the distinguishing features of everything, civilian and military, foreign and domestic, from weapons to vehicles to fabrics to vegetation to individual people.
Everything gives off a spectral signature—a house, a car, a knife—observable in countless regions of the electromagnetic spectrum. As one official of this secretive world says, a signature “is a distinctive basic characteristic or set of characteristics that consistently re-occurs and uniquely identifies a piece of equipment, activity, individual, or event.”2 Because all material reflects, absorbs, or emits photons as a consequence of its molecular makeup, a high-resolution deconstruction of the intensity of these interactions can form a rendering unique to any given material.3
The collection of signatures goes back to the days when the Soviet enemy was behind an iron curtain and the intelligence wizards needed to come up with innovative and even elliptical methods to acquire information. The earliest days of atomic fission spawned a special type of sleuthery, with aircraft and satellites sniffing out rare isotopic concentrations to discover the existence of nuclear tests and then even to characterize the makeup and capabilities of hidden nuclear weapons. These techniques of scientific detection and technical intelligence took on the name measurements and signature intelligence (or MASINT).4 MASINT collection and analysis never had the allure of human intelligence, the wonder of imagery, or the capacious plenty of signals intelligence, even if it did provide the possibility of seeing into the beyond. Instead, it served as a kind of technical back end of the nuclear age, with every enemy weapon given added character by its radioactive return or other chemical signature; every target characterized not just by location and size but also by its physical composition. Finally, friendly weapons systems were made “signature dependent,” that is, sent off to find and attack through progressions of sensing enemy signatures and making arcane calculations to precisely find, locate, attack, and assess—each act building on the last.
Detection augmented normal seeing and hearing: infrared detection of the plume of a missile, acoustic detection of the sound emitted by a submarine, electrooptical detection of laser light, materials sampling to detect the presence of chemical or biological agents. At the height of the Cold War, the emerging “INTs” that built into the whole of MASINT spawned highly qualified scientists with a wide array of specialties.
Scientists are needed because MASINT differs from “normal” intelligence in that with MASINT, what is seen is inferred from the physical characteristics—it is not just what something looks like to the naked eye. By way of explanation, photographs rely on the literal extraction of information by a human. MASINT deals with nonliteral exploitation.5
Any kind of sensor, as determined by its size, weight, and sophistication, measures reflective energy based upon spectral and spatial resolution, observation time, and frequency of observation, processing the resulting data to highlight different spectra against a static background.6 Every sensor collects energy that bounces off an object. And in the electronic era, every sensor converts its returns into digits, a series of picture elements (or pixels), which are themselves just zeros and ones.7
Unless a target is visibly observed by the human eye, something has to translate what a nonliteral sensor detects into what we “see” when we think of seeing. When normal people think of radar, they imagine pulses of energy sent out and a simulation of a physical shape formed in the reflective returns: it’s an airplane in the sky, a tank on the ground, etc. And that’s indeed how it all started. What is physical and can be seen, even at long distances, is what is reflected. But fast-forward to the modern day: What if the object you are trying to “see” is tiny, or nonreflective, or moving? And what if you can’t send out a beam of energy to pulse it because that would make you vulnerable to observation and attack yourself? There are a gazillion permutations and steps in the underlying physics, but that’s basically how nonliteral detection emerged as a supplement to the visual and the physically reflective.
Now to see it: the reflected energy travels in wavelengths and must be received by a sensor that can translate those waves into something understandable to humans. What is visible to the human eye are three bands of electromagnetic energy almost in the middle of the electromagnetic spectrum: red, green, and blue (known as RGB). An infrared sensor measures wavelengths adjacent to the visible bands in the spectrum.8 A multispectral sensor can monitor reflected energy in ten spectral bands of visible and infrared light. Hyperspectral imagers, the most complex and with the broadest view, monitor spectral bands numbering up to 200 or more. This includes reflected energy in the ultraviolet (UV), visible, near-infrared (NIR), and short-wave infrared (SWIR) portions of the electromagnetic spectrum, as well as the emitted energy in the mid-wave infrared (MWIR) and long-wave infrared (LWIR) portions of the infrared spectrum.9
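The difference in data volume among these sensor classes can be made concrete with a toy sketch. The band counts follow the text (three visible bands, roughly ten multispectral, 200 hyperspectral); the frame geometry is an invented example value, not a real sensor specification.

```python
# Toy illustration: per-frame sample counts for RGB, multispectral,
# and hyperspectral imagers. Frame size is an assumed example value.

def samples_per_frame(width, height, bands):
    """Number of raw measurements in one frame: one value per pixel per band."""
    return width * height * bands

WIDTH, HEIGHT = 1024, 1024  # assumed frame geometry, for illustration only

rgb = samples_per_frame(WIDTH, HEIGHT, 3)      # visible: red, green, blue
multi = samples_per_frame(WIDTH, HEIGHT, 10)   # multispectral: ~10 bands
hyper = samples_per_frame(WIDTH, HEIGHT, 200)  # hyperspectral: 200+ bands

# A hyperspectral frame carries tens of times more raw data than a
# conventional image of the same scene.
print(hyper // rgb)  # -> 66
```

The ratio, not the absolute numbers, is the point: every pixel in a hyperspectral frame is a full spectrum rather than a single color value.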
Multispectral imaging (MSI) has been used in the civil world for decades to observe everything from general land cover to detailed species identification. In the 1960s, scientists confirmed that reflectance measurements by multispectral airborne and space sensors permitted the identification of the mineral makeup of rocks, soils, and vegetation. In weather forecasting, MSI is used to detect cloud droplets, ice particles, dust, ash, and smoke, each of which can then be associated with specific frequencies. MSI can also monitor wavelengths over broad areas to characterize terrain and man-made features, a technique in widespread use by the military in mapmaking.
Hyperspectral imaging (HSI), on the other hand, collects the energy of a wider section of the electromagnetic spectrum and from many narrower bands simultaneously, from infrared across the visible to ultraviolet. Because hyperspectral sensors can sample spectral signals reflected and emitted from the same area, a sensor can even separate atmospheric signals from ground signals, thus allowing the sensor to essentially “see” through clouds.10 It wasn’t until 1989 that the first hyperspectral imager was flown,11 and today, it is the most complex form of MASINT.12
To fully understand the world of signature-derived intelligence, it is useful to think of the domain name system that orders the Internet. When a Web address (a URL) is typed into a browser or clicked on as a hyperlink, an international library of numeric Internet Protocol (IP) addresses is instantly searched, returning the Web server associated with each address. That way, the IP address—the site’s signature, if you will—can be reached through a public and easily remembered name, the domain name. And given that the DNS system is its own network, if one server is unable to translate a particular domain name, it asks another at a higher echelon, and so on, until the correct IP address is returned.
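The escalation described here—answer from a local table if you can, otherwise ask a higher echelon—can be sketched with a toy resolver. The domain names and addresses below are invented examples; real DNS resolution involves root, top-level, and authoritative servers.

```python
# Toy model of hierarchical name resolution: each resolver holds a local
# table and defers to a parent ("higher echelon") on a miss.
# All domain names and IP addresses here are invented examples.

class ToyResolver:
    def __init__(self, table, parent=None):
        self.table = table    # locally known name -> address mappings
        self.parent = parent  # next resolver up the hierarchy, if any

    def resolve(self, name):
        if name in self.table:
            return self.table[name]
        if self.parent is not None:
            return self.parent.resolve(name)  # escalate the query upward
        raise KeyError(f"cannot resolve {name}")

# A two-level hierarchy: the local resolver knows one name, the
# upstream resolver knows another.
upstream = ToyResolver({"example.org": "203.0.113.7"})
local = ToyResolver({"intranet.local": "10.0.0.5"}, parent=upstream)

print(local.resolve("intranet.local"))  # answered locally
print(local.resolve("example.org"))     # answered by the higher echelon
```

The analogy to signatures is direct: the query is the observed spectrum, the table is the signatures library, and the escalation is the hand-off from an onboard processor to a ground station with the full database.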
A hyperspectral signature can be thought of in the same way. Since every object reflects and absorbs light in different ways, the amount and type of radiation reflected directly relate to an object’s surface chemical and physical characteristics, illumination factors, and atmospheric properties.13 In intelligence terms, the ultimate goal in imaging is to produce a complete reflectance spectrum for each pixel, an achievement that can only come from hyperspectral imaging.14
Hyperspectral imaging, then, can simply be described as a type of remote sensing that uses powerful information contained in the full-spectrum signature of an object (that is, its total reflective makeup). Not surprisingly, the most important feature of hyperspectral imaging for military and intelligence purposes, in addition to its complexity, is that there are just a few highly expensive black boxes on a small population of highly classified platforms that are able to practically collect hyperspectral data and translate it into images.15 HSI is a multistep process involving an enormous amount of imaging and computing, but in the end, say for instance in the case of IEDs, spectral signatures related to bombs and techniques to hide them are collected and validated against ground truth to populate data sets of objects of interest. What makes the data instantly available and militarily relevant is the database of spectral signatures that underlies the whole process.
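Matching an observed spectrum against a library of validated signatures can be sketched with the spectral angle, a standard similarity measure in hyperspectral analysis: treat each spectrum as a vector and find the library entry at the smallest angle to it. The signatures and band values below are invented four-band placeholders, not real library entries.

```python
import math

# Sketch of signature-library matching via spectral angle: the smaller
# the angle between two reflectance spectra, the better the match.
# All spectra here are invented four-band placeholders.

def spectral_angle(a, b):
    """Angle (radians) between two spectra treated as vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

def best_match(pixel, library):
    """Return the library entry whose signature is closest to the pixel."""
    return min(library, key=lambda name: spectral_angle(pixel, library[name]))

library = {                      # hypothetical validated signatures
    "disturbed_soil": [0.31, 0.28, 0.45, 0.60],
    "asphalt":        [0.10, 0.11, 0.12, 0.13],
    "vegetation":     [0.05, 0.09, 0.04, 0.55],
}

observed = [0.30, 0.27, 0.44, 0.62]   # one pixel's reflectance spectrum
print(best_match(observed, library))  # -> disturbed_soil
```

Run across every pixel of a frame, this is the core loop the text describes: the library does the identifying, and the sensor merely supplies spectra to test against it.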
While a multispectral sensor might indicate the presence of an object such as a vehicle, a hyperspectral sensor can also detect whether it’s metal or plastic, what kind of metal it’s made from, the color and type of paint it has, and the amount of moisture it contains. A multispectral image might differentiate between desert and farmland, separating features in the near-infrared region because the chlorophyll in plants reflects near-infrared energy to a far greater extent than any other feature. A hyperspectral image of the same farmland can differentiate a barley crop from potatoes, detect stressed vegetation, and even determine soil composition.
The Pentagon first actively initiated research on a hyperspectral sensor that would be able to return near-real-time data in 1991, initially to fill a need experienced in Cold War Europe and later in Bosnia and Kosovo, which was to see what was hidden in shadows and under trees.16 The Hycorder black box was flown in October 1994 and June 1995, the first of an unmanned hyperspectral generation that would begin to open the way for the fighting man to see in a completely new manner. The imaging radio-spectrometer was fitted on board a navy Pioneer drone that flew over the White Sands Missile Range in New Mexico and Yuma, Arizona. Reflective targets with known signatures were precisely placed on the ground, and the spectral information was downlinked to a visualization and analysis system that processed the continuously running video using the finest computer of the day, a Pentium Pro PC. Desert Radiance, as the experiments were called, proved the feasibility of detection of a tactical target by use of its unique spectral signature.17 Desert Radiance was followed by Forest Radiance, Island Radiance, and Littoral Radiance, each planned collection operation a proving ground for the calibration of aerial sensors and processors and the building of a larger and larger signature library.
As part of the Hyperspectral MASINT Support to Military Operations (HYMSMO) umbrella program started in the late 1990s, different hyperspectral sensors were flown to explore tactical detection and classification of potential military targets. Hyperspectral imagers were placed on manned aircraft and in space,18 each attempting to increase spatial resolution and signal-to-noise ratios to militarily useful levels. In each case, a series of runs were flown in which tanks and other military vehicles were precisely placed on a targeted terrain, or fabric and painted target panels were used to simulate camouflage. Ground truth measurements were also taken simultaneously, from towers and other airborne platforms, to compare the reflectance of the surface to the energy recorded by the imaging sensor. And in 1997, blind testing was introduced, that is, hyperspectral imaging used to find hidden objects. Overall detection success rates were nowhere near the level needed for combat.19 And HSI continued to be conceived in Cold War terms, detecting the evidence of weapons of mass destruction manufacture or deployment through the presence of plumes or runoff; or in strictly conventional military terms, as countercamouflage—detecting objects that were intentionally hidden from sight.
The WARHORSE black box flew on board Predator a year before 9/11.20 It’s another acronym, of course, for Wide Area Reconnaissance Hyperspectral Overhead Real-Time Surveillance Experiment. It is a hyperspectral sensor that images from approximately 10,000 feet, with a collection process that entails21 a massive amount of data, far beyond anything seen with Global Hawk imagery or synthetic aperture radars or even operational multispectral sensors.
Just one frame of a hyperspectral imager is on the scale of 20 gigabytes or more. Before WARHORSE, this huge amount of data was stored on digital tapes, which were then mailed to the appropriate organizations for processing, with intelligence returning days or weeks or even months after the image was taken.22 With WARHORSE, the data were collected, calibrated, corrected, and presented so that when a spectral signature in the stored database harmonized with something being processed (or in other words, when the IP address of a sought-after website was matched), a camera on board Predator simultaneously took a still image (or “chip”) of the same scene, the image itself being modified to create false color variations so that it was visible to the human eye. In this way, WARHORSE could provide tip-offs or cueing of other sensors. However, the hyperspectral data was still so complex that it needed to be processed elsewhere for further exploitation.
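The tip-off logic—flag a scene location when a processed spectrum harmonizes with a stored signature, then cue another sensor to take a chip of that spot—can be sketched as a threshold test. The signature, scene, scoring function, and threshold below are all assumptions for illustration, not WARHORSE internals.

```python
import math

# Sketch of onboard tip-off: score each pixel's spectrum against one
# stored signature and cue a still image ("chip") when a threshold is
# crossed. Signature, scene, and threshold are invented for illustration.

def cosine_score(a, b):
    """Similarity of two spectra: 1.0 means identical shape."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def tip_offs(scene, signature, threshold=0.99):
    """Return (row, col) of pixels matching the stored signature."""
    hits = []
    for r, row in enumerate(scene):
        for c, spectrum in enumerate(row):
            if cosine_score(spectrum, signature) >= threshold:
                hits.append((r, c))  # cue: take a chip of this location
    return hits

signature = [0.2, 0.4, 0.8]                 # hypothetical stored signature
scene = [
    [[0.9, 0.1, 0.1], [0.21, 0.39, 0.81]],  # row 0: clutter, then a match
    [[0.5, 0.5, 0.5], [0.8, 0.2, 0.1]],     # row 1: no matches
]
print(tip_offs(scene, signature))  # -> [(0, 1)]
```

The design point the text makes is visible here: only the hit coordinates (and a chip) need to leave the aircraft, while the gigabytes of raw spectra stay aboard for later exploitation.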
Enter the Signatures Support Program. A program that had always been dominated by strategic nuclear and “national” collection shifted to conventional and even unconventional war in the late 1990s. The decades-old collection of signatures started to look at dynamic phenomena, that is, the signatures of real-time events and activities immediately relevant to the fighting man and woman.23 Hyperspectral imaging, if it could be made practical and cost effective, would allow a way to see through clouds and under trees, to detect what was underground or underwater, to find what the enemy was trying to hide, and even to rescue a friendly downed pilot hiding behind enemy lines (by detecting a previously applied reflective “tag”).24 But the only way that hyperspectral imaging could be turned into anything beyond a science project was to rely on onboard processing of the enormous amount of data generated to extract only the bits needed to identify prospective targets. WARHORSE was the first step.
When war in Afghanistan began in 2001, every form of intelligence collection in this remote and unknown land was employed, including experimental hyperspectral sensors. First to enlist in Afghanistan was NASA’s satellite-based Hyperion sensor, which was used to assess pre- and postbomb damage by comparing before-and-after scenes of difficult places that had been bombed, such as tunnels and caves.25 Hyperspectral imaging was also able to detect concentrations of carbon dioxide in cave-riddled areas and thereby possibly signal the presence of humans.26 When the first sensors were applied for tactical detection, in Afghanistan and then in Iraq, they were also shown to be able to detect buried IEDs by detecting the presence of disturbed dirt or by using “change detection” techniques to go back and see anomalies of military significance. Common types of IEDs were also directly detected through signature matching, particularly as the IED library grew.27
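The “change detection” technique mentioned above—revisit a scene and flag spots whose spectra have shifted since an earlier pass, such as freshly disturbed dirt over a buried device—can be sketched as a per-pixel comparison against a baseline. The spectra and threshold are invented placeholders.

```python
# Sketch of spectral change detection: compare each pixel's current
# spectrum against a baseline pass and flag those that moved more than
# a threshold. All values are invented placeholders.

def changed_pixels(baseline, current, threshold=0.1):
    """Return indices of pixels whose spectra differ beyond the threshold."""
    flagged = []
    for i, (before, after) in enumerate(zip(baseline, current)):
        # Euclidean distance between the two reflectance spectra
        dist = sum((a - b) ** 2 for a, b in zip(before, after)) ** 0.5
        if dist > threshold:
            flagged.append(i)  # anomaly: this spot changed between passes
    return flagged

baseline = [[0.30, 0.40], [0.10, 0.12], [0.50, 0.52]]  # earlier pass
current  = [[0.31, 0.41], [0.35, 0.40], [0.50, 0.51]]  # later pass
print(changed_pixels(baseline, current))  # -> [1]
```

This also shows why baseline surveys matter: without a registered earlier pass of the same ground, there is nothing to difference against.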
Hyperspectral products were not quite in the hands of the war-fighter because of security classifications and scarcity, and a real challenge to overcome was bandwidth, given how much data was demanded and had to move through the networks. But the Pentagon, sufficiently optimistic about the prospect of real-time imaging, in 2002 approved the HyCAS or Hyperspectral Collection and Analysis System technology demonstration, a five-year program that would assess the feasibility of spectral data as a source of regular tactical intelligence, while also figuring out ways of incorporating HSI sensors into the day-to-day workings of the Data Machine.28 As Sue Payton, the Pentagon’s head of advanced systems, said upon unveiling HyCAS, the United States now had hyperspectral sensors on aircraft and even in space. HyCAS included sensors on Global Hawk, on Predator, and on manned navy P-3 aircraft.
In 2007, the US Geological Survey conducted HALO Falcon, a sweeping hyperspectral survey of Afghanistan that collected data from an altitude of 50,000 feet.29 The public announcement was that the mission was designed to assess Afghanistan’s natural resources, such as coal, water, and minerals, and that no less than President Karzai had requested the mission.30 The true purpose of the mission was to build a complete snapshot signature of the country in order to form a baseline that intelligence collection of the future could rely upon. In other words, the purpose was to create a library of the entire country’s broad signature.
Experimentation in the United States continued as collection accelerated in Afghanistan and Iraq. The signatures experts processing the volumes of newly arriving data purchased and fabricated the materials that made up such things as military vehicles, camouflage, fabrics, and paints, in order to conduct spectral characterization and add to the library. At black box laboratories, work accelerated not just on new means of collecting and processing hyperspectral data, but also on improving signal-to-noise ratios (and thereby reducing false alarm rates), on improving spectral and radiometric stability and image quality at high altitudes, and on improvements in computational capabilities and communication that would make it possible to overlay hyperspectral data with imagery or eavesdropping. MASINT was becoming the new everything, and new standards were created for all kinds of multi- and hyperspectral collection.31 Partly driven by war, partly driven by the promise—any promise—of support for the troops in the counter-IED battle, partly just reflecting the incredible pace of technological change in the information field, and partly prompted by the unappeasable ambitions of the Data Machine, a new vibrancy pulsated through the signatures world. The main air force signatures data center in Tennessee filled to capacity, and “automated scene detection” was slowly developed to ease the processing burden.