
Accessory to War


by Neil deGrasse Tyson


  Whether you’re a fighter or an astrophysicist, you can’t do much without hard information. Fighters use information in real time, whereas we astrophysicists want our information saved for later—sometimes even years later. Because we analyze at leisure what our observatories have detected in passing, preservation is a huge concern. Galileo could only draw what he saw. Photography was the big breakthrough of the nineteenth century, producing a record of what would otherwise be unprovable. Come the twentieth century, there were multiple breakthroughs. Special-purpose emulsions, the baking of film, spectral filters, photomultiplier tubes, CCDs and their pixels—jointly they yielded a vast archive of information awaiting the engagement, or re-engagement, of ingenious analysts.

  Envision a rectangular digital image, a picture. Now envision the smallest possible section of it. That’s a picture element, a “pix-el.” This represents the fundamental unit of detection for charge-coupled devices, or CCDs, which began to transform image-making in the 1970s and had swept away all other approaches by the 1990s. While still in graduate school, I was eyewitness to this revolution, and its impact on my field cannot be overstated.

  When the CCD is exposed to light, whether from a nearby street scene or a faraway galaxy, each of its pixels stores some number of electrons, depending on the intensity of the light hitting each of the tiny locations on the CCD’s light-responsive computer chip. The more intense the light, the more electrons get stored—although if the light is too bright, you will saturate the detector and the excess electrons will spill over into neighboring pixels, contaminating their data. Double the exposure, and you get double the number of electrons. The electrons that congregate in each pixel are then collected from the chip, tabulated, and turned into a single electronic tile in the mosaic that constitutes the complete image. The more pixels, the more resolution available to you. Nowadays you can easily download a street scene from Wikimedia Commons that measures 2592 columns × 1944 rows, which translates into a grid of more than 5,000,000 pixels—a crisply detailed photo. But that’s nothing: if you’re not worried about overtaxing your computer, you can download an image of the Orion Nebula from the HubbleSite Gallery that’s 18,000 × 18,000—a grid of 324,000,000 pixels, packed to the gills with detail.
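
  To make that bookkeeping concrete, here is a small Python sketch, illustrative only, with invented values for the quantum efficiency and the full-well capacity (the point at which a pixel saturates). It mimics electrons accumulating across a 2592 × 1944 grid and shows the doubling of signal with a doubled exposure:

```python
import numpy as np

rng = np.random.default_rng(0)

full_well = 100_000        # invented full-well capacity, electrons per pixel
quantum_efficiency = 0.6   # invented fraction of photons converted to electrons

def expose(photon_flux, seconds):
    """Return per-pixel electron counts for a grid of photon fluxes (photons/s)."""
    expected = photon_flux * seconds * quantum_efficiency
    electrons = rng.poisson(expected)        # photon arrivals are statistical
    # A real chip blooms: excess charge spills into neighbors. Here it is simply clipped.
    return np.minimum(electrons, full_well)

scene = rng.uniform(100.0, 5_000.0, size=(1944, 2592))  # a 2592 x 1944 pixel grid
one_second = expose(scene, 1.0)
two_seconds = expose(scene, 2.0)             # roughly double the electrons
print(one_second.size, "pixels")                            # 5,038,848
print(round(two_seconds.mean() / one_second.mean(), 2), "times the signal")
```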

  There’s also the issue of “quantum efficiency.” In the most efficient detector possible, one photon would give you one electron. Reality isn’t quite so cooperative, although CCDs massively outperform film. For every hundred photons of light that landed on the silver halide crystals in Eastman Kodak’s now-obsolete astrophotographic emulsion IIIaJ, only about three triggered the necessary chemical reaction to produce an image. That was 3 percent quantum efficiency. What’s the quantum efficiency of a CCD today? Some astronomical CCDs are more than 60 percent efficient across a wide band of visible wavelengths. That’s a factor-of-twenty improvement in detection power. Other CCDs top out at 90 percent quantum efficiency in selected wavelengths. They also pick up near-infrared and near-ultraviolet. In addition, the CCD can be used with any lens. All these benefits mean that astrophysicists can acquire information from far deeper in space, and from many more regions, than ever before.
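
  The arithmetic behind that factor of twenty is short enough to check in a few lines of Python, using the round numbers quoted above:

```python
photons = 1_000_000          # photons striking the detector during one exposure
qe_film = 0.03               # Kodak IIIa-J emulsion, roughly 3 percent
qe_ccd = 0.60                # a good astronomical CCD, roughly 60 percent

print("film records:", int(photons * qe_film))   # about 30,000
print("CCD records: ", int(photons * qe_ccd))    # about 600,000
print("gain factor: ", qe_ccd / qe_film)         # 20.0
```

  Put another way, the film would need an exposure roughly twenty times longer to collect the same signal the CCD gathers.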

  Noise can be a problem, though. When a telescope targets something dim, it might not collect enough light to trip the detection threshold. On the other hand, some of what seems to be light might just be noise. Every telescope, every detector, has inherent noise. A CCD, too, has noise—its own warmth is enough to kick some electrons into the pixels—and so the best CCDs and cameras are now chilled during use. In the old days, astrophysicists would have been using photographic plates to record what our telescopes detected, and we would have needed long exposures to get our images. Knowing there were still dimmer things we weren’t detecting, we would have yearned for bigger telescopes to collect more light. We would have needed money, engineers, another dome, another mountaintop.
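
  A standard, simplified signal-to-noise estimate shows why the chilling matters; the numbers below are invented for illustration, not taken from any particular instrument:

```python
import math

def snr(signal_rate, dark_rate, read_noise, seconds):
    """Simplified CCD signal-to-noise: source counts against the shot noise of
    source plus dark current, plus the readout noise of the electronics."""
    signal = signal_rate * seconds
    dark = dark_rate * seconds
    return signal / math.sqrt(signal + dark + read_noise ** 2)

faint_source = 0.5    # electrons per second from the target (invented)
read_noise = 5.0      # electrons per readout (invented)

for dark_rate in (20.0, 0.02):    # a warm chip versus a chilled chip
    value = snr(faint_source, dark_rate, read_noise, 600)
    print(f"dark current {dark_rate} e-/s -> SNR after 10 minutes: {value:.1f}")
```

  The same faint source that barely registers on the warm chip stands out clearly on the chilled one.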

  In the early days of CCD technology, chips were small, with few pixels. Some were manufactured in university or industrial laboratories specifically to serve the astrophysicist. But as the CCD became commoditized, especially because of demand for digital cameras, the price fell even as the quality and the pace of improvement rose rapidly. The CCD transformed astrophysics, giving new life to small telescopes and endowing large ones with previously unimaginable powers of detection. Some researchers made entire careers of redoing earlier brilliant work whose authors had approximated and speculated about what could be lurking beyond the available data. In the era of the CCD, astrophysicists can tackle the same problems but with greater success. We can push past the earlier limits on data and speculate at yet another level.

  Anyone who can’t afford to depend on serendipity would say you have to identify your target or goal in advance. Which leads us to the military potential of the CCD.

  Knowing what you’re looking for is integral to ISR: intelligence, surveillance, reconnaissance. The advent of the CCD did wonders for America’s ISR, just as it did wonders for America’s astrophysicists. After all, astrophotography and photoreconnaissance differ only in their choice of target, their distance from the target, and the direction of their gaze. In December 1976 the KH-11 KENNAN—one of the KEYHOLE series—became the first spy satellite equipped with CCD technology.67

  The change was transformative. No longer would the National Reconnaissance Office have to wait days for a spy satellite’s parachute-equipped, heat-shielded film canisters to be grabbed in mid-air during a rendezvous with an airplane or, worse, dropped in the ocean and collected by (preferably) US ships, then processed, and finally delivered to the right person’s desk.68 Now the images captured by a KH-11—for instance, of a Soviet aircraft carrier under construction at a shipyard on the Black Sea—could be almost instantaneously transmitted via a data-relay satellite to a ground station near Washington, DC.

  The earliest spy satellites, developed under the CORONA program, were set up to search; their cameras focused on broad coverage. KEYHOLE and GAMBIT satellites, next in line, captured a closer look at specific targets already identified by their CORONA predecessors. HEXAGON satellites further sharpened the resolution of individual targets and improved the search capability. Most carried both a main camera for broad imaging of otherwise inaccessible areas and a mapping camera to assist in war planning. As HEXAGON’s maker, Lockheed Martin, described its role in a press release, the country “depended on these search and surveillance satellites to understand the capabilities, intentions, and advancements of those who opposed the U.S. during the Cold War. Together they became America’s essential eyes in space.”69

  The camera on the first CORONA spy satellite, launched in 1960 and retroactively renamed the KH-1, could detect objects as small as eight meters wide. A mere six years later, the KH-8 GAMBIT’s camera could refine this to fifteen centimeters. A decade later the KH-11 KENNAN, the first to have a CCD, offered much broader coverage, greater recording capacity, and a considerably longer lifetime, but at the cost of lower resolution: two meters. The so-called Advanced KH-11, however, offered both infrared capability and high resolution.

  Not surprisingly, there’s also a long list of spy satellites launched during the Cold War by the Soviet Union and a short list launched by China. Equally unsurprising is that, although the US programs have usually retained their classified status for decades, there have also been periodic leaks, unintentional disclosures, and episodes of quasi-involuntary declassification. In 1981 a respected aeronautics publication showed a leaked KH-11 image of a Soviet bomber; in 1984 an American naval analyst leaked the KH-11 image of a Soviet aircraft carrier to a respected military publication. KH-11 itself, along with its progeny and cousins, remains classified.70

  Today there are no more film canisters suspended from parachutes. Rochester, New York—home of Eastman Kodak—is sunk in joblessness, and high-res CCDs are the global standard. There likely now exists a continually updated optical, infrared, and radar image bank of every square foot of every conflict zone and potential conflict zone on the planet. One oft-reproduced Advanced KH-11 image from the 1990s shows a pharmaceutical plant in Sudan said to have been connected with the making of chemical weapons. Another shows a mountain camp in Afghanistan described as an al-Qaeda training facility. More recent satellites—reconnaissance, geospatial, commercial, communications, weather—have imaged and re-imaged such militarily significant targets as Osama bin Laden’s compound in Abbottabad, Pakistan. They have detected the sudden appearance of numerous armored vehicles at a military base in Aleppo, Syria, and recorded increased activity just prior to a rocket launch at the Sohae Satellite Launching Station in North Korea.

  But spy satellites monitoring conflict zones aren’t the only source of such images. Uncountable numbers of commercial satellite images can now be bought by whoever wishes to pay. As William E. Burrows puts it,

  The intelligence establishment itself regularly supplements its own systems’ “take” with commercial satellite imagery, and the use of civilian spacecraft for routine intelligence collection and potential war-fighting is increasing because it’s cheaper than maneuvering their classified counterparts and processing the avalanche of digital data that keeps coming down in near real time. . . . If the intelligence establishment can in effect use a credit card to buy excellent commercial imagery, so can tyrants and terrorists.71

  Yes, but so can humanitarian aid agencies and environmental groups.

  Will satellite images, whatever their source, never be misused and always make us safer? Probably not. But is it good to have records of the extent of deforestation in the Amazon between 1975 and 2012, and to have been alerted to the breakup of the largest ice shelf in the Arctic in 2003? Probably so. There’s now an organization called International Charter: Space and Major Disasters, which supplies free satellite imagery to emergency responders across the world so that they can act more quickly and effectively. Like GPS, those eyes in the sky are dual use.

  Q: What do you get when you cross a spy satellite with a ballistic missile, and then launch the result into interplanetary space? A: NASA’s Deep Impact mission to comet Tempel 1, the first time an intentional collision, rather than a mere flyby, was a mission’s main agenda.

  On July 3, 2005, after traversing more than 400 million kilometers in less than six months, the Deep Impact spacecraft released an eight-hundred-pound hunk of mass—its “smart” impactor—that smashed into Tempel 1 the following day with the explosive energy of five tons of TNT. It excavated a deep crater, purposefully kicking up loads of dust that could be observed and recorded by the orbiting spacecraft’s camera and infrared spectrometer as well as by numerous telescopes around the world. We can now definitively say that Tempel 1 has water ice on its surface, a “very fluffy structure that is weaker than a bank of powder snow,” and an abundance of carbon-containing molecules. Those molecules tell us that a comet not unlike Tempel 1 could, in passing, have deposited organic material on Earth during our planet’s first billion-plus years of existence, when it was being regularly bombarded from space by all manner of rocks, including comets.72
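
  The quoted explosive energy is just the impactor’s kinetic energy at its closing speed, which a back-of-the-envelope calculation can confirm; the speed used here is an assumed round figure of about 10.3 kilometers per second:

```python
mass_kg = 800 * 0.4536           # the eight-hundred-pound impactor, in kilograms
speed_m_s = 10_300               # assumed closing speed at impact, ~10.3 km/s
tnt_joules_per_ton = 4.184e9     # standard TNT energy equivalence

energy_joules = 0.5 * mass_kg * speed_m_s ** 2
print(round(energy_joules / tnt_joules_per_ton, 1), "tons of TNT equivalent")  # ~4.6
```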

  Obviously the impactor had to hit its target—a very dark (0.06 albedo) blob of comet-matter less than four miles in diameter—or the mission would have been for naught, just as a fighting force’s artillery has to hit its targets or lose the battle. All concerned parties were in motion: Earth as a launch platform, the spacecraft, the impactor, and the comet. The impactor was fitted with a telescope, a medium-res multi-spectral CCD camera, target sensors, a battery to sustain it during its final day of life, and a dose of hydrazine fuel for brief bouts of propulsion to adjust course. This ballistic projectile had to be released from the spacecraft at a time and angle that would guarantee its subsequent close approach to the comet. Plus, the ultimate collision had to occur on the comet’s sunlit side so that the resulting dust could be seen.

  Rather than relying on the usual time-consuming practice of ground-based navigation—transmission of data down to Earth, human analysis and execution of commands, relaying of commands back up to the spacecraft—the mission used an onboard system called AutoNav to orchestrate the actual collision. Activated two hours before that final moment, AutoNav took four images per minute so that it could stay current with the position and velocity of both the comet and the impactor. Being smart about keeping the impactor on course, it initiated three targeting maneuvers: at ninety minutes, thirty-five minutes, and twelve minutes before impact.73 The mission was a success—not because of luck, but because astrophysicists as well as warfighters know how to use multi-spectral data to deploy a ballistic projectile to hit a moving target. We are independent. We are interdependent. We are allies.
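
  The shape of that closed loop is simple to caricature: image the target, estimate the miss, burn to shrink it, and repeat on a fixed schedule. The toy Python loop below is only a cartoon with invented numbers, not JPL’s AutoNav, but it shows why a few well-timed corrections are enough:

```python
from dataclasses import dataclass

@dataclass
class Impactor:
    miss_km: float    # current estimated lateral miss distance

def take_image(craft: Impactor) -> float:
    """Stand-in for optical navigation: return the apparent miss distance."""
    return craft.miss_km

def correction_burn(craft: Impactor, measured_miss: float) -> None:
    """Null out most of the measured miss; thruster execution is never perfect."""
    craft.miss_km -= 0.9 * measured_miss

craft = Impactor(miss_km=50.0)      # invented initial targeting error
for minutes_out in (90, 35, 12):    # the three scheduled maneuvers
    correction_burn(craft, take_image(craft))
    print(f"T-{minutes_out} min: residual miss about {craft.miss_km:.1f} km")
```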

  6

  DETECTION STORIES

  Each band of wavelengths in the electromagnetic spectrum is a window to a different component of cosmic reality. As the tally of detectable wavelengths grew, so too did the tally of exploitable collaborations between astrophysics and the military. Some of these were widely known in their day. Others were secret. Still others were accidental alliances that could not have been scheduled, planned, or predicted.

  I.

  Our first story is about Jodrell Bank—a few muddy acres of fields in Cheshire, England, twenty-odd miles south of Manchester, that at the end of World War II were being overseen by a botanist at the University of Manchester but were shortly turned into the site of a major observatory. The area’s suitability as a site for the world’s first large steerable radio telescope lay in its low population and especially in its lack of public electricity lines. As Bernard Lovell wrote in his account of the logistical, financial, and political nightmares connected with bringing the observatory’s Mark I radio dish into existence, “Electrical gadgets used in and around houses often spark and radiate more energy into a radio telescope than an entire extragalactic nebula.” What made the Mark I steerable was repurposed wartime hardware: two bearing assemblies that had borne the big rotating guns on two British battleships during World War I but could be bought for a song in 1950 from the Admiralty’s Gunnery Establishment.1

  On the night of October 4, 1957, a couple of months after the Mark I became operational—though just barely, as the project was steeped in debt—the Soviet Union launched Sputnik 1. Suddenly the huge radio dish, capable of receiving as well as transmitting signals, and designed for research into cosmic rays, meteors, and the Moon, became the only instrument on Earth capable of radar-tracking the core stage of the intercontinental ballistic missile, the R-7 rocket, that had launched the satellite and had itself achieved Earth orbit. During twilight, a skywatcher observing in the deepening darkness might manage to see the gleam of the satellite as it passed overhead, high above and still in sunshine. A ham radio operator could easily pick up the satellite’s radio beeps on a frequency of 20.005 megahertz. But only the Mark I could detect the radar echoes bouncing off the rocket.

  For the sake of England’s prestige and the whole world’s benefit, there was no question of refusing to take on the task. Intensive work began on October 7; initial intimations of success came on the 11th; unmistakable triumph occurred on the 12th. Here is Lovell’s account of the 12th:

  Just before midnight there was suddenly an unforgettable sight on the cathode ray tube as a large fluctuating echo, moving in range, revealed to us what no man had yet seen—the radar track of the launching rocket of an earth satellite, entering our telescope beam as it swept across England a hundred miles high over the Lake District, moving out over the North Sea at a speed of 5 miles per second. We were transfixed with excitement. A reporter who claimed to have had a view of the inside of the laboratory where we were, wrote that I had leapt into the air with joy.2

  Soon the Mark I (which Lovell calls the bowl, and which was later renamed the Lovell Telescope), along with Jodrell Bank’s newer telescopes, proved indispensable in verifying the telemetry of the earliest Soviet and American space probes. The observatory’s cooperation with verification requests from both the US and the USSR during the space race loomed large in attracting desperately needed funds and thereby sustaining its own science agenda.3 An unsavory bargain? No, realpolitik.

  On the first day of 1958, Lovell received a telegram from Moscow, saying, “Every success in your work. Best thanks for satellite operations.”4 Soon there would be more satellite operations. The Soviet Union’s subsequent requests involved their pioneering Luna (Lunik) and Venera probes of the Moon and Venus. Confronted with professed international skepticism that it had actually launched Luna 1 on January 2, 1959—and disappointed that Jodrell Bank hadn’t managed to locate it (the spacecraft missed the Moon by more than two diameters)—the Soviet Union sent Jodrell Bank a telex an hour after the Luna 1 launch with transmission frequencies and exact coordinates for its next Moon probe, Luna 2. The Soviets wanted Jodrell Bank to independently verify what they predicted would be a successful lunar landing.

  This time the Mark I succeeded in its appointed task, in part because its antenna was already set up to capture the transmission band being used by Luna. By local midnight on September 12, 1959, the Brits were receiving Luna 2’s signals on two frequencies. Clearly the rocket was on the right course. Predicted time of impact was 10:01 the following day. At 10:02 they began to worry, but twenty-three seconds later, the signals stopped. Human-created hardware had made it to the Moon. Some high-profile US politicians persisted in their public skepticism, but the facts were the facts. Less than a month later, and precisely two years after Sputnik 1, Luna 3 reached and photographed the far side of the Moon, a first, while the following month a US Pioneer spacecraft (P-3) designed to orbit that same body exploded on the launch pad. One unnamed American even commented that “it was only necessary for an announcement to be made of American intentions for the Russians to do it first.”5

 
