
Strange Glow


by Timothy J Jorgensen


  The Manhattan Project recruited the best and brightest nuclear physicists of the day, sequestered them at secret locations (mostly Los Alamos, New Mexico), and sent out secret agents around the world to buy up every available quantity of uranium they could get their hands on, much of it from Africa.59 The accumulation and concentration of radioactive materials at these quantities was unprecedented and worrisome. It made the radium levels that killed the dial painters look small. Nuclear physicist Arthur Holly Compton (1892–1962) recalled the mindset of the Manhattan Project scientists in 1942: “Our physicists became worried. They knew what had happened to the early experimenters with radioactive material. Not many of them had lived very long. They were themselves to work with materials millions of times more active than those of the earlier experimenters. What was their own life expectancy?”60 No one could answer that question.

  The United States’ entry into World War II was looming.61 The Manhattan Project couldn’t wait until more radiation biology data accumulated to refine the project’s safety procedures. The physicists decided to press on using the current radiation safety standards of the day, even modeling their fume hood and ventilation systems after designs that had been employed to protect the radium dial painters.62 To cover all bases, they also planned a crash program in radiation biology research within the Manhattan Project itself, to run concurrently with the nuclear physics program. So as not to reveal its mission, the radiation biology research program was cryptically named the Chicago Health Division (CHD).

  The CHD immediately recognized that existing radiation protection standards, which had been designed purely for routine radiation work, were inadequate to safeguard against the enormously diverse and novel exposures that workers would experience in the Manhattan Project. For one thing, little was known about acceptable tolerance levels for the known particulate radiations, let alone for the highly novel particles produced through fission reactions. Furthermore, the experience of the radium girls had shown that ingested or inhaled radioisotopes could concentrate in tissues and create their own unique problems. Virtually nothing was known about whether other internalized radioisotopes would also seek out particular organs. And some of the radioisotopes being produced had no counterparts in nature.

  Another big problem was the actual measurement of the amount of radiation to which individual personnel were exposed. What was needed was some type of instrument that was small enough to be attached to a worker’s body, or placed in a pocket, to measure cumulative radiation doses throughout a normal workday. Such compact personal (“pocket”) ionization chambers had become commercially available from the Victoreen Company as early as 1940, but they tended to give unreliably high readings if dropped or exposed to static electricity. Because of this problem, it was routine to carry two of them and only record the lower reading of the two. Dental x-ray films, carried in a pocket, also allowed for crude monitoring of the radiation exposures of personnel, but they were even worse when it came to reliability.63 The CHD thus considered improvements in the technology of dose measurement (i.e., dosimetry) as essential to their mission, and the research team made major advances in this area.

  The CHD scientists also had qualms about the concept of maximum tolerable dose, because it was still an unverified assumption. They further recognized that exposure to radiation, as measured by gas ionization, was only an imperfect estimate of the amount of energy deposited in body tissue (i.e., the dose).64 They understood that dose, not exposure, was driving biological effects. The same exposures from different types of radiation could result in different doses. They were more interested in dose than exposure, and they worked to find better ways of measuring it.

  Since human cells are approximately 98% water, CHD scientists found the amount of energy deposited in water was a much better estimate of the radiation dose to human tissues than could be achieved by measurement of energy deposited in a gas (which was the way roentgens were quantified). They defined a new dose unit called a rad, an acronym of radiation absorbed dose, which represented the amount of energy deposited in a specific mass of tissue. The rad soon replaced the roentgen as a radiation protection unit.
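  The rad's definition can be made concrete with a little arithmetic. The sketch below uses the standard historical definition (not figures from this book): 1 rad corresponds to 100 ergs of energy deposited per gram of tissue, which is 0.01 joule per kilogram.

```python
# Absorbed dose in rads, from the historical definition:
# 1 rad = 100 erg deposited per gram of tissue = 0.01 joule per kilogram.
# (The modern SI unit, the gray, is defined as 1 J/kg, so 1 Gy = 100 rad.)

ERG_PER_JOULE = 1e7  # 1 joule = 10^7 erg

def absorbed_dose_rad(energy_joules, mass_kg):
    """Absorbed dose in rads for a given energy (J) deposited in a mass (kg)."""
    erg_per_gram = (energy_joules * ERG_PER_JOULE) / (mass_kg * 1000.0)
    return erg_per_gram / 100.0  # 100 erg/g per rad

# Depositing 0.01 J in 1 kg of tissue gives a dose of 1 rad (0.01 Gy):
print(round(absorbed_dose_rad(0.01, 1.0), 6))  # → 1.0
```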

  But they also soon found that rads from different types of radiation could differ in their tissue-damaging efficiency. To account for this, they applied weighting factors to doses from different radiation types. This allowed them to standardize the radiation doses so that they would all produce the same level of biological effect regardless of the radiation type. These new weighted units were called rems (an acronym for rad equivalence in man), and were referred to as dose equivalent units.
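  In code, the rem calculation is just a multiplication. The sketch below uses modern ICRP-style radiation weighting factors for illustration; the Manhattan Project's original factors differed in detail, but the principle is the same.

```python
# Illustrative radiation weighting factors (modern ICRP-style values,
# shown for the principle only; the original wartime factors differed).
# Photons and electrons count at face value, while alpha particles are
# weighted about 20x per rad of absorbed dose.
WEIGHTING_FACTOR = {
    "x-ray": 1.0,
    "gamma": 1.0,
    "beta": 1.0,
    "alpha": 20.0,
}

def dose_equivalent_rem(absorbed_dose_rad, radiation_type):
    """Dose equivalent (rem) = absorbed dose (rad) x weighting factor."""
    return absorbed_dose_rad * WEIGHTING_FACTOR[radiation_type]

# One rad of x-rays and one rad of alpha particles deposit the same
# energy, but the alpha dose equivalent is 20 times higher:
print(dose_equivalent_rem(1.0, "x-ray"))  # → 1.0
print(dose_equivalent_rem(1.0, "alpha"))  # → 20.0
```

The weighting is what makes doses from different radiation types directly comparable on a single health-risk scale.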

  THE NEW GUY IN TOWN: MILLISIEVERTS

  If you find all this business about roentgens, rads, and rems confusing, you are not alone. Even the experts have a hard time juggling them. This ever-changing metric for measuring radiation risk has been a major obstacle to conveying risk levels to an increasingly risk-attuned public. As stated previously, the dose equivalent is the only valid measure of conveying health risk information because it accounts for the different biological effects of the various types of ionizing radiations and puts them all on a level playing field. For example, the health risk of neutrons can be directly compared to the risk of x-rays using the dose equivalent unit. So the dose equivalent unit is really the only unit needed to measure health risks from ionizing radiations. The exposure and dose units can be entirely dispensed with, if the question is solely health risk.

  This insight about the utility of dose equivalent units was largely the contribution of the Swedish medical physicist Rolf Maximilian Sievert (1896–1966), who pioneered the development of biologically relevant radiation measurements. In 1979, a unit for dose equivalent was added to the International System of Units (SI) by the General Conference on Weights and Measures, the international body that sets the standards for scientific measurement units. To honor Sievert, the SI unit for dose equivalent was named after him.

  The sievert (Sv) and its more popular offspring the millisievert (mSv)—1/1,000th of a sievert—are very practical units for radiation protection practice because they are tied directly to health effects. Although the mSv is really just a derivative of the older working unit, the millirem (mrem),65 it has some practical advantages over the mrem.66 Namely, within the range of typical human radiation experience, from background to lethal levels, dose equivalents can be expressed in mSv without resorting to decimals. For example, the lowest whole-body dose equivalents are around 3 mSv per year for background radiation, while lethal dose equivalents are slightly in excess of 5,000 mSv. So there’s no need to wander far outside this dose equivalent range if your interest is strictly human health risks.67

  It is hard to overstate the importance the concept of dose equivalence has had on the radiation protection field. Unfortunately, the term “dose equivalent” is cumbersome to use in everyday speech. No wonder it often gets truncated to simply “dose.” In fact, we will do the same throughout the rest of this book for that very reason. Just remember, when a radiation quantity is expressed in mSv, it’s a dose equivalent measurement and nothing else. And remember, you don’t need to know which type of ionizing radiation it is in order to evaluate the health risks from radiation doses expressed in mSv. All those considerations are worked into the mSv unit.68

  WANTED: MECHANISTIC INSIGHT

  The early deployment of the dose equivalence concept to radiation protection practice is particularly impressive given that the scientists who developed the approach had no basic understanding of the underlying radiation biology principles involved. In fact, Robert S. Stone (1895–1966), the scientist who spearheaded the approach within the Manhattan Project’s radiation protection group, lamented: “Beneath all observable effects was the mechanism by which radiations, no matter their origin, caused changes in biological material. If this mechanism could have been discovered, many of the problems would have been simpler.”69

  Although they lacked knowledge of the mechanism by which radiation damaged living tissue, the radiation protection group’s conclusions and approach to the problem were exactly correct. More than 70 years later, we now know the mechanistic details of how radiation damages tissues (which we will explore in later chapters). And those mechanisms actually explain the empirical findings of the Manhattan Project scientists. Over the years, the dose equivalence concept has been refined and recalibrated. Nevertheless, the general approach has stood the test of time, supporting the fundamental principle that radiation doses, when adjusted by a biological-effectiveness weighting factor for the radiation type, are accurate predictors of human health outcomes.70 Or more simply, as Paracelsus might have put it: “The [radiation] dose makes the poison.”

  The Manhattan Project scientists were also prescient in their distrust of the maximum tolerable dose (MTD), and they decided to put the concept aside. They reasoned that until the MTD was validated as applicable to radiation effects, they would work under the premise that the lower the radiation dose the better, and they promoted the notion that no worker should receive any higher dose than was absolutely necessary to complete his or her job.71 And for ingested and inhaled radioisotopes, their goal was zero exposure. They believed that enforcing strict industrial hygiene practices in the workplace could prevent workers from being contaminated with radioactivity from their work.

  MURPHY’S LAW

  Although the Manhattan Project scientists had largely solved the most pressing radiation protection problems for the world’s largest and most complex radiation job sites, which included thousands of radiation workers, their protection solutions primarily addressed only the radiation hazards of routine work, in which all workers complied with safety regulations. Unfortunately, the physicists working on the bomb were pushing the envelope on what occupational activities could be considered routine. Sometimes the physicists pushed too hard, creating unique radiation hazards with consequences for themselves that no radiation protection program could have easily foreseen.

  On the evening of August 21, 1945, Harry K. Daghlian Jr. (1921–1945), a Manhattan Project physicist, was working alone after hours; this was a major violation of safety protocol. He was experimenting with neutron-reflector bricks, composed of a material (tungsten carbide) that could reflect neutrons back toward their source. Reflecting neutrons was considered one possible means of inducing criticality in an otherwise subcritical mass of plutonium. Unfortunately for Daghlian, the reflection approach turned out to be correct, and he had slippery fingers. While handling one of these bricks, he accidentally dropped it on top of the plutonium core. There was a flash of blue light in the room, and Daghlian instantly knew what that meant: the core had gone critical.72 He also knew what he needed to do and what it would mean for his life. He reached into the pile with his hand and removed the brick, thereby returning the core to subcritical. Then he called in the authorities and awaited his fate.73

  This criticality accident resulted in a burst of neutrons, as well as gamma rays, which irradiated Daghlian’s entire body. It is estimated that he received a whole-body dose of 5,100 mSv—more than 1,000 times a typical annual background dose. This dose resulted in severe anemia that would ultimately kill him. His hands, which had actually reached into the core to remove the brick, received a much higher dose. They soon became blistered and gangrenous, causing him excruciating pain. His team supervisor, Louis Slotin (1910–1946), sat by his bedside in the hospital every day until Daghlian finally died on September 15.74

  Not only had Daghlian broken a standard safety rule by working alone, he had also broken a cardinal rule of core assembly: never use a method that would result in criticality if some component were dropped on it. Rather, criticality should only be tested by raising that component; then, if it slips, gravity will pull it away from, not toward, a critical mass situation.75

  Regrettably, it seems that Slotin, who should have known better, did not learn from Daghlian’s experience. One year later, Slotin was lowering a hemisphere of beryllium over the exact same plutonium core with the help of a screwdriver.76 The screwdriver slipped, the beryllium dropped, and the core went critical, just as it had for Daghlian. Within just a few minutes, Slotin began vomiting—an indication that he had received a fatal dose (i.e., much higher than 5,000 mSv). He was taken to the hospital to face his destiny. He died nine days later in the same hospital room that Daghlian had.

  Seven other men were working in the laboratory at the time of the accident, but only Slotin received a fatal dose. One of the seven, Alvin C. Graves (1909–1965), was also hospitalized with symptoms of radiation sickness, but he recovered and was released from hospital care after a few weeks, suggesting that he had received a whole-body dose somewhere between 2,000 and 5,000 mSv. Unfortunately, this would not be the last time that Graves was involved in a fatal nuclear accident.

  FIGURE 5.3. THE LOUIS SLOTIN ACCIDENT. This photograph depicts a re-enactment of the accident that killed Louis Slotin. It accurately shows the spatial arrangement of every item in the room when the accident took place. Ignoring safety rules, Slotin lowered a beryllium hemisphere over a plutonium core using a screwdriver for leverage. Unfortunately, the screwdriver slipped, the beryllium hemisphere fell, and the core went supercritical, causing a burst of radiation to be emitted. By reenacting the circumstances of the accident, scientists determined that the whole-body radiation dose that Slotin received must have exceeded 5,000 mSv (a lethal dose). Slotin died 9 days after the accident.

  HOT CHOCOLATE

  Few know that the Manhattan Project was not the only secret radiation project going on in the United States during World War II. There was another one, of nearly equal military importance, being conducted in the Radiation Laboratory at MIT. The focus of this project was on improving the deployment of radar, but it would ultimately have much more widespread influence.

  Radar is an acronym for “radio detection and ranging.” It is a technology that uses radio waves to detect planes and ships. It can also determine their altitude, speed, and direction of movement. It is indispensable to modern warfare for monitoring movements of enemy forces, as well as for civilian activities, such as aviation.

  Radar exploits the same property of radio waves that allowed Marconi to transmit his signals across the Atlantic Ocean; that is, their tendency to bounce. Marconi’s radio waves bounced off the inside layer of the atmosphere, skipping their way around the globe as a pebble skips across the surface of a pond. In addition to this skipping phenomenon, when radio waves directly hit large objects in their path, some bounce back to where they came from. Measuring the time it takes for radio waves to return to their source is the basis of radar. If you can detect the radio waves bouncing back, you can estimate the size of the object from which they bounced and, since they travel at a constant speed (i.e., the speed of light), you can calculate how far away the object is from how long the signal takes to return to its source.77
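  The ranging arithmetic is simple enough to sketch: the pulse travels out and back at the speed of light, so the distance to the target is half the round-trip path.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0  # radio waves travel at the speed of light

def target_distance_m(round_trip_seconds):
    """Distance to a radar target, given the echo's round-trip time.
    The pulse covers the distance twice (out and back), hence the /2."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2.0

# An echo arriving 1 millisecond after transmission places the target
# about 150 km away:
print(round(target_distance_m(1e-3) / 1000.0))  # → 150 (km)
```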

  Radar was developed simultaneously and in secrecy by different nations in the years just prior to World War II. At the core of the technology was a magnetron that generated microwave signals. Microwaves are radio waves with wavelengths ranging from about one meter (about one yard) down to one millimeter (about 1/25 of an inch). They are “micro”—from the Greek word for small—only with respect to other radio waves, which can be as long as an American football field. They have mega wavelengths compared to x-rays and gamma rays. The military required a lot of magnetrons to satisfy their radar needs, and they had contracted with the Raytheon Company to produce magnetrons for the Radiation Laboratory at MIT. Raytheon was charged with building as many magnetrons as possible, as fast as they could, and delivering them to MIT. But “as fast as they could” turned out to be just 17 per day, which was woefully inadequate. Enter the hero of this story, Percy Spencer.
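  The wavelength range quoted above follows directly from the relation wavelength = speed of light / frequency. As a quick check (the 2.45 GHz figure is the band modern microwave ovens use, a reference point not taken from the text):

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def wavelength_m(frequency_hz):
    """Wavelength of an electromagnetic wave: lambda = c / f."""
    return SPEED_OF_LIGHT_M_S / frequency_hz

# A 2.45 GHz signal (the band modern microwave ovens operate in) has a
# wavelength of about 12 cm, squarely within the 1 mm to 1 m microwave
# range described above:
print(round(wavelength_m(2.45e9) * 100.0, 1))  # → 12.2 (cm)
```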

  Percy Spencer (1894–1970) was a Raytheon engineer and a world expert on radar tube design. Spencer had originally become interested in radio technology at the age of 18, when the Titanic sank and he heard how a Marconi shipboard radio had broadcast the distress signals that brought other ships to the rescue. Shortly thereafter, he entered the US Navy to obtain training in marine radio and was sent to the Navy Radio School. When he finished his tour of duty, he got a job in the private sector, working for a company that made radio equipment for the military. In the 1920s he joined Raytheon.78

  It was largely on the reputation of Spencer that Raytheon had gotten the magnetron contract in the first place. Spencer soon developed a way to mass-produce the magnetrons. Instead of using machined parts, he punched out smaller parts from sheet metal, just as a cookie cutter cuts cookies, and then soldered the punched-out parts together to make the required magnetron components, which were then assembled into a magnetron. This change eliminated the need for skilled machinists, who were then in short supply, and allowed unskilled workers to take over magnetron production. Eventually, 5,000 Raytheon employees were working in production, and Raytheon’s magnetron output increased to 2,600 per day.

  There were few concerns about health risks from radar equipment at the time, since radar tubes emitted only radio waves and not ionizing radiation. Consequently, there were no protective procedures to guard against the microwaves emitted by the radar. One day, while Spencer was working with a live radar apparatus, he noticed that a candy bar he had in his pocket had completely melted. Intrigued, he set up a contraption to do a little experiment on an egg. He focused a high intensity microwave beam on a raw egg to see what would happen. The egg exploded due to rapid heating. He extended the experiment to popcorn kernels and found he could make popcorn with the microwaves. (By now, you should be able to see where this story is going, so we’ll dispense with further details.) On October 8, 1945, just two months after the atomic bombs were dropped on Japan, Raytheon lawyers filed a patent for a microwave oven. They appropriately named their first commercial product the Radarange (a contraction of Radar Range).

 
