
Strange Glow


by Timothy J Jorgensen


  The definitive experiment came on December 2, 1942, when enough uranium was amassed to attain a critical mass and criticality was reached. As soon as the resulting nuclear chain reaction was achieved, it was quickly terminated. It was a proof-of-principle experiment, and the principle had now been proved.43 Szilárd’s nuclear chain reaction hypothesis of just 10 years earlier was absolutely correct, and production of an atomic bomb was possible.

  Criticality is the driving force behind both nuclear power plants and nuclear bombs. And uncontrolled criticality is the biggest hazard for all nuclear activities. Still, as we will see, nuclear power plants cannot become nuclear bombs even if humans lose all control over the chain reaction. If Fermi had lost control of the Chicago Pile, it would have caused a nuclear meltdown, not a nuclear explosion. There are major engineering obstacles to making a nuclear bomb; one cannot be produced by accident. You have to intentionally build one. So power plants are subject to the risk of a meltdown, but they are not latent nuclear bombs waiting to explode.
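
  The difference between a power plant and a runaway reaction comes down to how the chain reaction grows, which physicists summarize with the neutron multiplication factor k: the average number of next-generation fission neutrons each fission produces. Below is a minimal sketch in Python (our illustration, not the book’s; the k values are assumptions chosen for demonstration) showing why k below, at, or above 1 means a dying reaction, a steady reactor, or runaway growth:

    # Sketch: neutron population over fission generations for an assumed
    # multiplication factor k (illustrative values only).
    def neutron_population(k, generations, n0=1000):
        """Average neutron count per generation, if each neutron's
        fission yields k next-generation neutrons on average."""
        counts = [float(n0)]
        for _ in range(generations):
            counts.append(counts[-1] * k)
        return counts

    for k, label in [(0.98, "subcritical: reaction dies out"),
                     (1.00, "critical: steady output, as in a power plant"),
                     (1.02, "supercritical: runaway growth")]:
        final = neutron_population(k, generations=200)[-1]
        print(f"k={k:.2f} ({label}): {final:,.0f} neutrons after 200 generations")

  Even the runaway case corresponds to a meltdown rather than an explosion: a weapon must hold a highly supercritical mass together long enough for the energy release to build, which is precisely the kind of engineering obstacle described above.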

  SUN STROKE: FUSION

  We began Part One with the sun and its visible electromagnetic radiation. Fittingly, we have come full circle and will now close out Part One back at the sun, with a brief discussion of the sun’s visible particulate radiation. Yes, the sun also produces particulate radiation, and it’s all made possible by nuclear fusion.

  Fusion is the opposite of fission. Strange as it may seem, while very large atoms can split apart through fission and thus release energy, very small atoms can fuse together and also release energy. The mechanism of a fusion reaction is too complicated for us to deal with here.44 But an important distinction is that while fission can occur spontaneously, fusion must be forced to occur. That is, it requires an enormous input of energy to initiate a fusion reaction, which ultimately yields even more energy as output. While there is a net gain in energy from the fusion reaction, the input energy needs are extremely high, requiring temperatures found only in stars. That is why the only place fusion naturally occurs in our solar system is the sun, our closest star, where temperatures are extremely high. These fusion events release energy that pushes through the sun’s surface in the form of solar flares, emitting large quantities of high-energy particles into space. These high-energy particles coming from the sun constitute cosmic radiation and are a major source of the radiation that reaches Earth.45
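
  To make that net gain concrete, consider the fusion of two hydrogen isotopes, deuterium and tritium (the book does not single out a specific reaction; this is simply the textbook example). A short Python calculation using standard atomic masses shows the energy released per fusion event:

    # Energy released by D + T -> He-4 + n, computed from the mass defect.
    # Masses in unified atomic mass units (u), standard reference values.
    m_deuterium = 2.014102   # u
    m_tritium   = 3.016049   # u
    m_helium4   = 4.002602   # u
    m_neutron   = 1.008665   # u

    U_TO_MEV = 931.494       # energy equivalent of 1 u, in MeV (E = mc^2)

    mass_defect = (m_deuterium + m_tritium) - (m_helium4 + m_neutron)
    print(f"Mass defect: {mass_defect:.6f} u")
    print(f"Energy released: about {mass_defect * U_TO_MEV:.1f} MeV per fusion")
    # Roughly 17.6 MeV per event; the enormous input energy is what it
    # takes to push the two positively charged nuclei close enough to fuse.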

  Cosmic radiation is part of our background radiation. It is responsible for the nighttime color displays seen at high northern and southern latitudes on Earth, known respectively as the aurora borealis (northern lights) and aurora australis (southern lights). The auroras occur when cosmic particles are deflected by Earth’s magnetic field and shower down through converging entry points in the atmosphere near the poles.

  Cosmic particles contribute to our natural background radiation exposure. But even at the poles, cosmic particles contribute only slightly to our annual radiation dose, because Earth’s atmosphere shields us from most of the impinging particles. In contrast, astronauts in outer space lack atmospheric protection and receive significant cosmic particle exposure during space missions. Some of these cosmic particles are so large that when one crosses the retina of an astronaut’s eye, the astronaut sees a small flash. This is reminiscent of Rutherford’s microscope experiments, in which he could actually see alpha particles as flashes on a fluorescent screen, except that the astronauts can dispense with the fluorescent screen. Their screen is their own retina.

  The enormous heat required to induce fusion reactions is the major reason we don’t have any fusion-based power plants. It is currently not feasible to generate enough power to sustain the temperatures required for a fusion reaction, and no material on Earth could withstand that heat. So even though the net energy output of a fusion reaction exceeds the input energy, the obstacles to realizing that energy are substantial.

  This is why claims of cold fusion attract so much attention. Cold fusion claims imply that fusion has been achieved by some novel mechanism that doesn’t require input heat. This would mean that we could get fusion energy out without putting any heat energy in—a major technological advance that would promise easy and unlimited power forever. So far, all claims of achieving cold fusion have been debunked.46

  In contrast, nuclear weapons based on hot fusion do exist. They are often called hydrogen bombs, or H-bombs, because they are based on the fusion of hydrogen nuclei;47 but they are also sometimes referred to as thermonuclear bombs. Each bomb can release 1,000 times or more the energy of the atomic bombs detonated at Hiroshima and Nagasaki. Due to the input energy required to initiate fusion, a fission bomb is needed to trigger a fusion bomb; so each fusion bomb is actually a combined fission/fusion bomb. Fortunately, a fusion bomb has never been detonated in warfare, although several countries have test detonated them. The United States test detonated the first fusion bomb, named Ivy Mike, on a coral atoll in the Pacific in 1952. The number of fusion bombs currently in existence worldwide is not known exactly, but it likely numbers in the thousands.
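
  To put that factor of 1,000 in perspective, commonly cited yield estimates (our illustrative figures, not the book’s) allow a quick comparison:

    # Rough scale comparison using commonly cited yield estimates.
    HIROSHIMA_KT  = 15        # "Little Boy", kilotons of TNT equivalent
    IVY_MIKE_KT   = 10_400    # ~10.4 megatons, first U.S. fusion test
    TSAR_BOMBA_KT = 50_000    # ~50 megatons, largest bomb ever detonated

    for name, kt in [("Ivy Mike", IVY_MIKE_KT), ("Tsar Bomba", TSAR_BOMBA_KT)]:
        print(f"{name}: roughly {kt / HIROSHIMA_KT:,.0f} times the Hiroshima yield")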

  CODA TO PART ONE

  This brings us to the close of Part One. We’ve learned the distinction between radiation and radioactivity and how each was discovered. And we’ve learned how Rutherford and his colleagues coaxed the atomic nucleus into revealing its structural secrets, thus toppling “plum pudding” as the standard model of atomic architecture. In the process, we’ve seen that radiation comes either in the form of electromagnetic waves of different wavelengths or as various types of high-speed subatomic particles. Amazingly, both forms of radiation can carry their energy through solid objects, a phenomenon that Roentgen was the first to observe. But a portion of that energy is left within the object that the radiation traverses, thereby delivering to that object a radiation dose. Our knowledge of dose at this point in our story, however, is still vague. We need to explore dose further if we are to understand radiation’s health risks. Still, we now know enough nuclear physics concepts to engage in a meaningful conversation about the health effects of radiation.

  So far, the health aspects of our story have been merely anecdotal and limited to the small group of scientists who worked with radiation shortly after its discovery at the beginning of the twentieth century. Although they had little knowledge of dose and scant understanding of the potential dangers of radiation, most of these scientists took some limited precautions to protect themselves from exposure, at least as much as was practicable for the work they conducted. If radiation work had remained confined to this small cadre of researchers, the effects of radiation on health would likely never have become a major public health issue. But radioactive materials soon became a commodity and were put to many uses in various consumer products. Public exposure to concentrated sources of radiation became increasingly prevalent, and there were bound to be health consequences for the public, and particularly for the workers who handled radiation-producing devices and radioactivity. Before long, those health consequences became very evident.

  PART TWO

  THE HEALTH EFFECTS OF RADIATION

  CHAPTER 5

  PAINTED INTO A CORNER: RADIATION AND OCCUPATIONAL ILLNESS

  No wonder you’re late. Why, this watch is exactly two days slow.

  —The Mad Hatter in Alice’s Adventures in Wonderland (Lewis Carroll)

  THE MYSTERIOUS ILLNESS OF SCHNEEBERG’S MINERS

  Today we would call H. E. Müller a victims’ advocate. Although not a medical doctor, he was a respected man of integrity, and the mining village of Schneeberg, Germany, hired him around 1900 to track down the source of a mysterious lung ailment that plagued the miners of the community.1 He would not let them down.2

  Lung disease among miners had been known for many centuries, and the term bergsucht (disease of the mountains) had entered the vernacular through the medical writings of the Swiss Renaissance physician Paracelsus (1493–1541). Bergsucht encompassed a collection of miners’ lung ailments that modern physicians would collectively call chronic obstructive pulmonary diseases (COPD); but they also included lung cancers, as well as infectious lung diseases, such as tuberculosis and pneumonia, that were endemic among mine workers. The most prevalent form of bergsucht among Schneeberg’s miners seemed to be clinically different from that endemic to other mining regions: it showed less variation in its symptoms, it had an earlier age of onset, and it killed more quickly.

  In 1879 two physicians, F. H. Härting and W. Hesse—a family practitioner in Schneeberg and a public health official from nearby Schwarzenberg, respectively—decided to systematically study the disease. They performed autopsies on more than 20 bergsucht victims from Schneeberg and found that 75% of them had died in an identical manner. It turned out that the clinical symptoms of Schneeberg’s bergsucht were better defined than elsewhere because the underlying disease was predominantly of a single deadly type—lung cancer. This discovery was made at a time when smoking was rare and lung cancer rarer. The autopsy findings were astounding. But what was the cause?

  Two known toxins, arsenic and nickel, were considered likely causal agents for the lung cancer. Both were present in high concentrations in Schneeberg’s ore. Cobalt and silica dust were also suspected. Unfortunately, none of these agents could be conclusively linked to the disease. Perhaps some specific combination of these candidates was required to produce illness. Decades passed, yet a definitive conclusion on the cause of Schneeberg’s lung cancers remained elusive.

  It is a sad truism of public health practice that the first indication of toxicity from any hazardous agent will always appear among those who are occupationally exposed. This is because workers are typically exposed to higher levels for longer periods of time than the general public, and higher and longer exposures often shorten the time to onset of disease and increase the severity of symptoms.

  A classic example is mercury poisoning among hat workers. Mercury solutions were used in the hat-making industry during the nineteenth century to preserve the animal pelts used in hat production (e.g., beaver-fur top hats). Workers, called hatters, breathed the fumes, and the mercury accumulated in their bodies. They developed a wide range of neurological symptoms that were misunderstood as signs of insanity. It soon became generally known that hatters often suffered mental difficulties, and this developed into the colloquialism “mad as a hatter” to describe anyone who seemed a little daft. We now know that the hatters’ apparent madness was due to the neurotoxicity resulting from their mercury exposure, and we have since adopted strict regulations limiting the general public’s exposure to mercury.3

  Mercury was by no means the only chemical to cause a particular illness in workers. As early as the eighteenth century, doctors were well aware of other occupational diseases, such as painter’s colic (lead in paint), brass founder’s ague (metal oxide fumes), and miner’s asthma (rock dust).4

  Even cancer was then known to be an occupational illness. In 1775, the surgeon Percivall Pott (1714–1788) reported that scrotal cancer (euphemistically called soot warts) was endemic among the chimney sweeps of England.5 He claimed that their unusual affliction was a woeful consequence of prolonged exposure to chimney soot. It was thought that body sweat running down their torsos caused soot to accumulate around their scrotums, and that this prolonged contact put them at risk. Finding the cancer even among teenage workers brought attention to the fact that the chimney sweep industry was employing boys as young as four years old; they were hired because their small size allowed them access to even the narrowest chimneys. The British Chimney Sweepers Act of 1788 raised the minimum age for chimney sweep apprenticeships to eight years old and required employers to provide clean clothing; otherwise it did nothing to protect the workers from soot exposure. It took more than a century to experimentally confirm Pott’s contention that soot contains cancer-causing compounds (i.e., carcinogens).

  Thus, workers like hatters and chimney sweeps often served as the canaries in the coal mine for the general public,6 warning the rest of us of otherwise hidden hazards so that we could take precautions.7 In the early years, the general public had a romantic view of radiation. Even though people didn’t understand it and didn’t fully appreciate its technological potential, they did appreciate its novelty and entertainment value. Unlike the radiation researchers, however, the general public was not fully cognizant of the fact that radiation exposure could pose significant health problems. That would soon change. What was needed was radiation’s equivalent of a miner’s canary to warn the public to proceed with caution. Ironically, the miners’ canaries for radiation’s hazards turned out to be actual miners themselves. And the major health hazard of radiation had been documented and characterized well before radiation had even been discovered. Still, someone needed to put all the salient facts together and find the cause of the miners’ ailments.

  Time plodded on, and the miners continued to suffer from their mysterious plague. Then a fortuitous finding in the field of geology broke the case. Professor Carl Schiffner (1865–1950) of the Mining College of Freiberg was intrigued by the discovery of radon-222 by the German chemist Friedrich Dorn (1848–1916).8 Schiffner decided to perform a geological survey of the radon concentrations in all the natural waters of the Saxon region, including Schneeberg. His survey revealed very high concentrations of radon in Schneeberg’s waters and even in the air of its mines.

  Learning of Schiffner’s findings, Müller, who was still working on the Schneeberg mine problem, immediately suspected that radon was the cause of the miners’ lung cancers. Müller knew that x-ray exposure had previously been associated with cancer in those who worked with x-ray machines. He also knew that radioactive substances, such as radon, emitted x-ray-like radiation. Putting these two observations together, it was no great leap to suppose that radon caused lung cancer by virtue of its emitted radiation. And that is exactly what Müller proposed in 1913. He wrote, “I consider Schneeberg lung carcinoma [i.e., lung cancer] to be a particular kind of occupational disease, acquired in the mines having rock strata containing radium, and thus showing high [radon] levels.”

  Radon is produced when radium decays. A radon atom is born whenever a radium atom dies as part of a long decay chain, the lineage of which starts with uranium-238 and ultimately ends with stable lead-206. Radon is called the progeny and radium is its parent.9 All of the radioactive isotopes in this decay chain are solids except for radon, which is a gas. That is why radon alone was a problem for the Schneeberg miners; only radon could escape from the soil and be inhaled.
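
  A condensed sketch of that decay chain (with half-lives from standard reference tables; several short-lived intermediate isotopes are omitted for brevity) makes radon’s special status plain:

    # Condensed U-238 decay chain; radon is the only gas in the lineage.
    decay_chain = [
        ("U-238",  "4.5 billion years", "solid"),
        ("Ra-226", "1,600 years",       "solid"),
        ("Rn-222", "3.8 days",          "gas"),
        ("Po-218", "3.1 minutes",       "solid"),
        ("Pb-206", "stable",            "solid"),
    ]

    for isotope, half_life, state in decay_chain:
        note = "  <- can escape the soil and be inhaled" if state == "gas" else ""
        print(f"{isotope:7s} half-life: {half_life:17s} ({state}){note}")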

  Müller’s hypothesis about the association between lung cancer and mining had many skeptics, but it could not be entirely dismissed. Besides, even if radon was not the culprit, some type of airborne toxin likely was. Either way, the remedy seemed the same: improved mine ventilation and other air containment precautions to reduce miners’ exposure to radon or other airborne mine contaminants. With these occupational hygiene changes in place, the problem was thought to be resolved. Moreover, World War I was beginning, and the burdens of war made the threat posed by radon seem trivial. People stopped worrying about radon.

  After the war, physician Margaret Uhlig revisited the issue of radon in the Schneeberg mines. She showed that the limited remediation efforts that had been deployed were inadequate and that miners were still dying of lung cancer in large numbers. Many studies followed, and any remaining skepticism was put to rest. A major review of 57 radon studies published through 1944 concluded, “There is a growing conviction in the United States and abroad that radiation emitted by radon inhaled over a long period of time … will cause cancer of the lung in man.”10 In the years since, the fundamental truth of this statement has never come under any serious doubt.

  Although the fact that radon causes lung cancer is unquestionable, what has come into question is whether the levels of radon often found in typical homes pose a significant lung cancer risk to residents. As you can imagine, in addition to the fact that home radon levels are typically much lower than mine levels, homes and their residents differ from mines and miners in many ways that can affect cancer risk. We will visit the issue of radon in homes in Part Three. For now, just know that radon was the first radioactive isotope to be associated with cancer in humans. Its ability to cause cancer has been a generally accepted fact since as early as 1944.

  BAD TO THE BONE

  Frances Splettstocher (1904–1925) never thought of herself as a radiation worker, although she knew her work involved the use of radioactive material. She was a factory worker in the watch-making industry, and her employer was the Waterbury Clock Company in Waterbury, Connecticut.11 She loved her job, as did all her female coworkers. For a 17-year-old girl without higher education, it was one of the best jobs available in Waterbury. In addition, the factory was less than a mile from her home on Oak Street, where she lived with her parents, three sisters, and three brothers. Her father also worked at the factory, so she likely walked to and from work with him every day. Thus, Frances perfectly typified the women working at the plant, who were mostly in their midteens to early twenties and came from working-class families. Life was good for Frances Splettstocher in 1921. Who could have imagined that by 1925 she would be dead?

 
