
Accessory to War


by Neil deGrasse Tyson


  Germany’s path from prewar to postwar was more dramatic. A formidable prewar exporter not only of superb glass and optics but of steel, chemicals, and electrical goods, Germany had swiftly gained export ground on cotton-and-coal Britain since the 1890s, raising British fears of being overtaken and undercut. In 1897, the year Barr and Stroud set up the world’s only rangefinder factory, Britain was the world’s top exporter at $1.4 billion, the United States was a close second at $1.2 billion, and Germany a lagging third at $865 million. By 1913, while Britain’s exports had doubled, Germany’s had more than tripled.83

  War and its blockades, followed by defeat, armistice, and the Treaty of Versailles, should have decisively halted Germany’s race toward the top. Under the terms of the treaty, signed in June 1919, every business enterprise engaged in “the manufacture, preparation, storage or design of arms, munitions, or any war material whatever” was to be closed. Both import into and export from Germany of “arms, munitions and war material of every kind” were to be “strictly forbidden.” Aside from specified allowable quotas, all German armaments, munitions, and war material, including “aiming apparatus” and the “component parts” of various guns (both of which lie within the bailiwick of optics), were to be swiftly “surrendered to the Governments of the Principal Allied and Associated Powers to be destroyed or rendered useless.”84

  Ah, but what is “war material”? That question kept the members of the Inter-Allied Military Control Commission (IAMCC), the Treaty’s disarmament inspector-overseers, awake at night and drawing up lists all day.85 As the exasperated British brigadier general who served as second-in-command on the IAMCC’s Armaments Subcommission later wrote,

  The thing defies definition. Is a field-kitchen war material? Or a field ambulance? Or a motor-lorry? All three are capable of civilian use. When are you to “call a spade a spade,” and when should you call it an entrenching tool? How are you to distinguish between war explosives and “commercial” explosives? The dynamite which serves to blast a quarry is as useful to the sapper in war as to the quarryman in peace. . . .

  Our categories of war material grew and grew until they filled scores of pages of print. The species and sub-species extended to hundreds of articles. The list of “optical” war material, from periscopes to range-finders, alone ran to fifty-two items. “Signaling material” was almost equally multitudinous. In both cases many of the incriminated articles, such as field-glasses, telephones, and wireless apparatus, were unquestionably ambiguous in character, equally susceptible of use for war and for peace.86

  Brigadier General J. H. Morgan and his fellow overseers found it equally frustrating to decide which factories to close. While the war had decimated France’s industrial capacity, it had left Germany’s largely intact. More than 7,500 engineering, electrical, and chemical factories had been tasked with the production of war material; as of the war’s end, claimed Germany, most were “reconverted” to production for civilian purposes. Obligated by the treaty to permit Germany to continue production of the stipulated quantities of armaments, and feeling pressured not to constrain Germany’s capacity to pay reparations, the overseers ended up deciding to “spare every factory and workshop which could establish . . . its re-conversion. The result was that Germany was left with every lathe that ever turned a shell”—and, though General Morgan doesn’t mention it, probably every grinder that had ever polished a periscope lens, including those at Zeiss. Plus, he and his colleagues discovered “in due course, that vast stocks of arms which never appeared in the official returns made to us by the German Government were being concealed all over Germany.”87

  Thus the halt was more like a brief interruption. In 1913 Zeiss was among Germany’s largest business endeavors, with total assets triple those of Schott; together, these twin companies were safely ensconced among the top hundred. By then, Zeiss was not simply a German enterprise but an international conglomerate; it not only exported its products but managed a web of foreign sales agencies, foreign manufacturing licenses (including some held by Bausch and Lomb Optical Company, founded in upstate New York by German immigrants), and foreign factories (including a very lucrative one near London). Schott, with a simpler business model, nonetheless exported more than half its total glass production and about a quarter of its optical glass before the war. The outcome of the war did cramp their international style—Zeiss’s London factory was sold in 1918 for a mere £10,000, for instance, and Schott’s exports to the UK in 1920–21 fell to just one percent, rather than the steady prewar five or six percent. But by the mid-1920s, despite treaty restrictions and increased tariffs, both Zeiss and Schott were again making deals with British companies.88

  More important, in Jena they went deep into R & D and were soon pushing the limits of optical technology again, with high-profile civilian achievements alongside those of use to the military.89 In 1925 the world’s first planetarium opened in Munich, equipped with the world’s first star projector, designed and built by Zeiss. In 1930 America’s first planetarium opened in Chicago, again with a Zeiss projector. And in 1933, as audiences were being enraptured by the sight of the stars in Zeiss-equipped planetarium domes from Stockholm to Rome to Moscow, a heavily remilitarized Germany made clear its displeasure at disarmament by finally withdrawing altogether from the League of Nations, that high-minded pioneering world association brought into being by the Treaty of Versailles.

  What about America’s part in the wartime production of optical glass? Before the United States joined the war, its imports of optical glass cost about half a million dollars a year.90 Bausch and Lomb, the major domestic producer (of which Zeiss had bought a 25 percent share), made barely one ton of optical glass a month. Yet, upon joining, America was expected to supply one ton of optical glass a day to the Allies. While US citizens loaned their binoculars to the military, the country’s glassmakers geared up.91 Again, the transformation resulted from a public–private partnership, but unlike Britain’s piecemeal approach, based on voluntary cooperation, the American solution was top-down and carefully focused.

  By late spring 1917 the Council of National Defense had dispatched silicate scientists (silica being the main component of common sand, the main component of glass) from the Carnegie Institution’s Geophysical Laboratory to the nation’s glass factories. The US Army Ordnance Department made the scientist in charge, F. E. Wright, a lieutenant colonel. Consequently, the Army itself was, as Wright put it later, “the court of last appeal,” which he found “a useful lever” in wartime conditions. So the Army ran the show, the scientists obeyed, and the factories ramped up production as fast as they could, with the assistance (and coercion) of other government agencies. Given the strict controls and tight deadlines, the experts opted for basics and high volume—just six types of glass, adequate for most instruments—rather than range, innovation, and top quality. In September 1917 US factories produced more than five tons of optical glass, in December more than twenty tons. In 1918 the total US production of “satisfactory” glass for optical munitions amounted to nearly three hundred tons, two-thirds of which came from Bausch and Lomb.92

  In World War I, unlike in its successor, the air was not initially a strategic battlefield. Space was decades away from becoming a site for surveillance and reconnaissance. Radio and aircraft were still rudimentary. The intimate alliance between astrophysics and the military would not be forged until just before the next world war.

  The modern, Western offspring of astronomy, astrophysics is not even a century and a half old. Its midwives were two nineteenth-century technological innovations. The more widely known of the two, photography—literally “light drawing”—stemmed from a welter of investigations into the image-forming proclivities of light. The lesser-known and more specialized innovation, spectroscopy—which separates light into its component colors, yielding heaps of information about its source—derived from the prismatic study of the Sun’s spectrum and the discovery that every substance radiates a characteristic and unique combination of colors. Jointly, photography and spectroscopy empowered the astronomer to record and analyze whatever light the available telescopes could gather from the sky.

  The inception of photography during the 1830s and 1840s changed the ground rules of representation and the concept of evidence. Astronomers had long needed a convincing way to record their observations. In the seventeenth and eighteenth centuries they could talk about, write about, compose anagrams about, or draw what they saw. Their audience had to trust in their honor and take their word. Drawings were the best anybody could do, but they have inherent limitations. As long as a human hand holding a pencil is recording the photons, the record is susceptible to error: human beings, especially sleepy ones with eyestrain and variable artistic skill, are not reliable recorders. On occasion, Galileo circumvented the problem by using symbols. In Sidereus Nuncius (“The Starry Messenger”), rushed to publication in February 1610, his drawings of the movements of Jupiter and its largest moons consist simply of a large circle and several dots; his drawings of stars are either six-pointed asterisks (small or medium-sized) or six-pointed cookie-cutter stars with a dot in the middle.93

  Finally, in the mid-nineteenth century, a presumptively unbiased recording device came to the rescue: the camera. By employing one of the multifarious new light-drawing techniques, you could record the terrestrial and celestial worlds with minimal interference from eye, hand, brain, or personality. Your quirks and limitations would fade to irrelevancy, whether you used a silver-plated, highly polished sheet of copper exposed to iodine vapors and mercury fumes or a glass plate coated with a gelatin concoction.

  One of photography’s inventors, Louis-Jacques-Mandé Daguerre, and many of its first commentators were concerned primarily with art, specifically painting, which they thought would be either facilitated or nullified by the miraculous mechanical invention. One writer hailed the daguerreotype as “equally valuable to art as the power-loom and steam-engine to manufactures, and the drill and steam-plough to agriculture.”94 Others contended that photography heralded the death of painting. Soon photography would, in fact, unshackle artists from any remaining obligation to capture visual reality, thus clearing a broad path for modernist painters such as Gauguin, van Gogh, and Picasso, not to mention early art photographers such as Julia Margaret Cameron. While scientists embraced photography as a tool to gather data and remove the observer’s impression of a scene, artists embraced it as another good reason to convey subjective impressions, internally generated visions, or the essence of their medium.

  Among photography’s pioneers and proponents were several high-profile scientists. William Henry Fox Talbot, inventor of the light-sensitive paper negative in 1834–35, was a Royal Society gold medalist in mathematics and a Fellow of the Royal Astronomical Society.95 Another Englishman, Sir John Frederick William Herschel, president of the Royal Astronomical Society, coined the word “photography” in 1839. He also coined the word “snapshot” in 1860, introduced the photographic usage of the words “positive” and “negative,” discovered that sodium hyposulfite—“hypo” for short—could be used as a photographic fixative (rendering the emulsion no longer sensitive to light), made the acquaintance of Fox Talbot, corresponded with Daguerre, and, all in all, threw himself so early and so thoroughly into the new endeavor of drawing with light that he practically ranks as one of photography’s inventors.

  Even more influential than Sir John Herschel during the early months of photography’s official existence was the French astronomer and physicist François Arago, director of the Paris Observatory, perpetual secretary of the French Academy of Sciences, and, following the Revolutions of 1848, the provisional government’s colonial minister as well as its minister of war. He was also a great publicist. On January 7, 1839, acting as Daguerre’s spokesperson and scientific advocate, Arago announced the invention of the daguerreotype at the Academy. It was a thrilling moment for science, art, commerce, national heritage, and much else besides. “Monsieur Daguerre,” said Arago, “has discovered special surfaces on which an optical image will leave a perfect imprint—surfaces on which every feature of the object is visually reproduced, down to the most minute details, with incredible exactitude and subtlety.”96

  Arago also asserted that the new technique was “bound to furnish physicists and astronomers with extremely valuable methods of investigation.” Together with two noted physicists of his day, Arago himself had tried but failed to make an image of the Moon by projecting moonlight onto a screen coated with silver chloride. Now, at the urging of several members of the Academy, Daguerre had managed to “cast an image of the Moon, formed by a very ordinary lens, onto one of his specially prepared surfaces, where it left an obvious white imprint,” and had thus become “the first to produce a perceptible chemical change with the help of the luminous rays of Earth’s satellite.”97 To modern eyes, the image is unimpressive; to mid-nineteenth-century eyes, it was mind-blowing. Anyone who knew anything about chemistry or physics now rushed to attempt un daguerréotype.

  In early July, on behalf of a commission charged with assessing the wisdom of granting Daguerre a lifetime government pension in exchange for permitting France to present the discovery to the world, Arago reported to the Chamber of Deputies that the daguerreotype would rank with the telescope and the microscope in its potential range of applications:

  We do not hesitate to say that the reagents discovered by M. Daguerre will accelerate the progress of one of the sciences, which most honors the human spirit. With its aid the physicist will be able henceforth to proceed to the determination of absolute intensities; he will compare the various lights by their relative effects. If needs be, this same photographic plate will give him the impressions of the dazzling rays of the sun, of the rays of the moon which are three hundred thousand times weaker, or of the rays of the stars.98

  By August 19, Daguerre’s pension was a done deal, and Arago announced the details of the process. Every aspiring daguerreotypist could now just follow directions.99

  The first impressive daguerreotype of a celestial object dates to early 1840. It was a portrait of the Moon one inch in diameter, the product of a twenty-minute exposure from the roof of a building in New York City, made by a physician–chemist named John William Draper. In 1845, by exposing a silvered plate for a mere sixtieth of a second, two French physicists—Léon Foucault and Armand-Hippolyte-Louis Fizeau—produced a respectable image of the Sun. In 1850 two Bostonians—John Adams Whipple, a professional photographer, and William Cranch Bond, first director of the Harvard College Observatory—daguerreotyped Vega, the fifth brightest star in the nighttime sky, by exposing their plate for a hundred seconds. The next year, another professional photographer, Johann Julius Friedrich Berkowski, in collaboration with the director of the Royal Observatory in Königsberg, Prussia, used an exposure of eighty-four seconds to daguerreotype a total solar eclipse. Astrophotography was well and truly under way.

  Meanwhile, inventive individuals were hard at work making photography more user-friendly. Within a few years, the one-off daguerreotype positive would be a relic, replaced by a glass plate coated with a light-sensitive emulsion that yielded a negative, thereby ushering in a new era of reproducibility. In 1880 hand-craftsmanship gave way to mechanization when the Eastman Dry Plate and Film Company opened in Rochester, New York. By the end of the decade, photography had become an essential tool of the astronomer’s trade.100

  Compared with photography, spectroscopy—the other midwife of astrophysics—might seem an arcane development. No populist fanfare or breathless newspaper accounts attended its birth.

  As soon as telescopes became standard equipment, piles of people began to spend gobs of time finding dim blips of light, mapping their positions, estimating their brightness and colors, and adding them to the ballooning catalogue of stars, nebulas, and comets. The task was limitless. But no sky map says much about the stuff of which the stars are made, or about their life cycles or their motions. For that you need to know their chemistry and understand their physics. That’s where spectroscopy comes in.

  Every element, every molecule—each atom of calcium or sodium, each molecule of methane or ammonia, no matter where it exists in the universe—absorbs and emits light in a unique way. That’s because each electron in a calcium atom, and each electron bond between atoms in a methane molecule, makes the same wiggles and jiggles as its counterpart in every other calcium atom or methane molecule, and each of those wiggles absorbs or emits the same amount of energy. That energy announces itself to the universe as a specific wavelength of light. Combine all the wiggles of all the electrons, and you’ve got the atom’s or molecule’s electromagnetic signature, its very own rainbow. Spectroscopy is how astrophysicists capture and interpret that rainbow.
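
  To put an illustrative number on that “specific wavelength of light” (a standard physics relation offered as an aside, not a calculation from the text), the energy carried by a single photon of wavelength \(\lambda\) is

  \[
  E \;=\; \frac{hc}{\lambda} \;\approx\; \frac{1240\ \text{eV}\cdot\text{nm}}{\lambda},
  \]

  so sodium’s familiar yellow emission near \(\lambda \approx 589\ \text{nm}\) corresponds to a photon energy of roughly 2.1 electron volts. Every sodium atom, whether in a streetlight or in the atmosphere of a distant star, emits and absorbs at essentially that same energy, which is what makes the line a reliable fingerprint.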

  Spectroscopy’s prehistory begins with Isaac Newton in 1666, when he showed, using prisms, that a visible beam of “white” sunlight harbors a continuous spectrum of seven visible colors, which he named Red, Orange, Yellow, Green, Blue, Indigo, and Violet (playfully known to students as ROY G. BIV). For the next couple of centuries, investigators on several continents followed his lead. In 1752 a Scot named Thomas Melvill found that when he burned a chunk of sea salt (think sodium) and passed the firelight through a slit onto a prism, it yielded a striking bright yellow line; two and a half centuries later, sodium would be the active ingredient in yellow-tinged sodium vapor streetlights.101 In 1785 a Pennsylvanian named David Rittenhouse devised a way to produce spectra with something other than a prism: a screen made of stretched hairs, densely packed in parallel lines and arranged to provide a series of slits that could disperse a beam of light into its constituent wavelengths. In 1802 an Englishman named William Hyde Wollaston found that the Sun’s spectrum includes not only the seven colors that met Newton’s eye but also seven dark lines or gaps amid the colors. It was now evident that visible light held a lot of hidden information, reinforcing the prior two years’ discoveries of infrared and ultraviolet, which had shown that light itself could be hidden from human view.
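
  As a rough sketch of how such a grid of slits sorts light by wavelength (the slit spacing below is invented for the example and is not Rittenhouse’s actual figure), slits a distance \(d\) apart, with light arriving head-on, produce bright interference maxima of order \(m\) at angles \(\theta_m\) satisfying

  \[
  d\,\sin\theta_m \;=\; m\,\lambda .
  \]

  With a hypothetical spacing \(d = 2\ \mu\text{m}\) and sodium’s yellow line at \(\lambda \approx 589\ \text{nm}\), the first-order maximum lands at \(\sin\theta_1 \approx 0.29\), or about \(17^\circ\), while shorter wavelengths emerge at smaller angles; the grating thus fans white light out into a spectrum, much as a prism does.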

 
