
Strange Glow


by Timothy J Jorgensen


  AT THE END OF THE DAY

  After more than half a century of counting, what does the LSS have to show for itself? In terms of cancer, plenty! We now know the average cancer risk for a unit dose of ionizing radiation (i.e., the rate). And we know this very accurately, at least for higher doses. That is, we now know the percentage increase in the lifetime risk of cancer per mSv of whole-body radiation dose.23 And what is that value? It is 0.005% per mSv.24 And what can we do with this number? We can use it to convert any known whole-body dose into a cancer risk estimate.25 For example, the risk of cancer to a patient from a whole-body spiral CT diagnostic radiology scan,26 which delivers a whole-body dose of about 20 mSv, is determined as follows:

  20 mSv × 0.005% per mSv = 0.1% increased lifetime risk of contracting cancer
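  For readers who like to see the arithmetic spelled out, here is a minimal sketch of that conversion in Python. Only the 0.005% per mSv rate and the 20 mSv spiral CT example come from the text; the function name and structure are illustrative.

      # Dose-to-risk conversion sketch. Only the rate (0.005% per mSv) and
      # the 20 mSv spiral CT example come from the text; everything else
      # here is illustrative.
      RISK_RATE_PERCENT_PER_MSV = 0.005  # added lifetime cancer risk, % per mSv

      def added_cancer_risk_percent(whole_body_dose_msv):
          """Convert a whole-body dose (mSv) to an added lifetime cancer risk (%)."""
          return whole_body_dose_msv * RISK_RATE_PERCENT_PER_MSV

      print(added_cancer_risk_percent(20))  # -> 0.1 (% added lifetime risk)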

  Now comes the hard part, which is to interpret that risk metric. What exactly does it mean to us?

  The risk percentage can be interpreted in many different ways, but most people find one of the following two interpretations to be the most comprehensible:

  1. A risk of 0.1% is the same as saying that there is a 1 in 1,000 chance of getting a cancer from this radiation dose. To put it another way, if 1,000 patients received a spiral CT scan, only one of those patients would be expected to get a cancer from having had the scan. This interpretation raises the question: “Is 1 in 1,000 too much risk?”

  2. We can alternatively compare the 0.1% risk with the baseline risk of getting a cancer. The unfortunate truth, which most people fail to appreciate, is that cancer death is quite common. Although the exact number is debatable, a figure of 25% (1 in 4) is a reasonable estimate of an average American’s risk of dying from cancer. (It’s much higher than 25% for a heavy smoker.) So the 0.1% risk from radiation needs to be added to that baseline risk we all share. This means that if your overall risk of death by cancer started at 25% before the spiral CT scan, then after the scan it’s 25.1% because of the added radiation exposure. This interpretation raises the question: “Is moving your risk level from 25.0% to 25.1% too dangerous?” (Both framings are worked through in the short sketch following this list.)
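  The two framings are just different arrangements of the same arithmetic. A short sketch, using only the figures from the text, makes that explicit:

      # Two framings of the same 0.1% added risk (figures from the text).
      added_risk = 0.1              # % added lifetime risk from a ~20 mSv scan
      one_in_n = 100 / added_risk   # framing 1: 0.1% is 1 chance in 1,000
      baseline = 25.0               # framing 2: average lifetime cancer death risk, %
      print(f"1 in {one_in_n:.0f}")                      # -> 1 in 1000
      print(f"{baseline}% -> {baseline + added_risk}%")  # -> 25.0% -> 25.1%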

  Now, having a handle on the level of risk, you need to ask yourself a question: “Is this cancer risk acceptable to me?” And the answer probably depends on the benefit of the CT scan to you. If you’ve been in severe internal pain, and a whole-body spiral CT scan (sometimes called a “cat” scan) represents the most informative procedure for diagnosing the source of that pain, you may feel that the risk is well worth the benefit. But if you have no symptoms of anything, and the procedure is being done just to screen for possible disease, you may not consider it worth the risk. But only you can decide that.

  In any event, the ability to convert a radiation dose to a risk estimate levels the playing field. Armed with an accurate estimate of risk for a whole variety of exposure scenarios, we now have the tools we need to make decisions regarding which radiation exposures are acceptable and which are not. We can also compare dose levels from different types of carcinogens for the same level of risk. (For example, we can ask questions like: “How does the cancer risk of radiation compare with the cancer risk of cigarette smoking?”) Most importantly, we now have the means to make an informed decision about controlling our radiation risks, and we can dispense with naive approaches, like the wife or daughter standard for radium ingestion mentioned earlier. We owe all this to the “most important people alive”—the atomic bomb survivors—for allowing us to learn from their tragic experience. Now it’s up to us to be worthy stewards of this knowledge, and put it to good use for the betterment of public health.

  As you ponder these risks, you may ask: “How relevant is this risk estimate, driven largely by the atomic bomb survivor data, to me and my situation? I haven’t been exposed to such high doses. Are the estimated rates of cancer, based on high-dose bomb survivors, even relevant to the low doses that I’m exposed to?” Excellent questions! Let’s explore this issue.

  The overall higher doses of the bomb survivors could certainly make a difference to the accuracy of low-dose risk estimates. Although it is often pointed out that most of the atomic bomb survivors received nowhere near the doses required to produce radiation sickness (i.e., greater than 1,000 mSv), the fact is that they still got a lot more radiation than you’ll receive from something like a dental x-ray. When we talk about risk per unit dose, as we do above, we assume that risk is directly proportional to dose. That is, if you halve the dose you halve the risk, and if you double the dose you double the risk. But is that true? Some scientists think that at very low doses this linear relationship doesn’t hold. They point out that many laboratory studies show cells can repair low levels of radiation damage, and that it is only when damage overwhelms those repair mechanisms that risk becomes directly proportional to dose. This is a fair enough criticism and, if it’s true, our risk estimates inferred from the higher doses sustained by atomic bomb survivors may overestimate the risk from lower-dose exposures, such as chest x-rays. But if the lower doses really aren’t as dangerous, because of repair processes, then the risk rates derived from atomic bomb survivors that are used to set radiation protection standards will be overprotective, not underprotective. We have certainly not underestimated the risk.27
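  To see what is at stake in the linearity assumption, consider a toy comparison between a purely linear (no-threshold) model and a hypothetical repair model in which no risk accrues until repair mechanisms are overwhelmed. Only the 0.005% per mSv rate comes from the text; the threshold value and functional form are invented purely for illustration:

      # Linear no-threshold vs. a toy repair-threshold model.
      RATE = 0.005        # % added lifetime cancer risk per mSv (from the text)
      THRESHOLD = 100.0   # mSv below which repair is assumed to cope -- hypothetical

      def risk_linear(dose_msv):
          return dose_msv * RATE

      def risk_with_repair(dose_msv):
          # No added risk until repair is overwhelmed; linear above the threshold.
          return max(0.0, dose_msv - THRESHOLD) * RATE

      for dose in (20, 100, 1000):  # spiral CT, toy threshold, sickness-level dose
          print(dose, risk_linear(dose), risk_with_repair(dose))

  If the repair model were correct, the linear estimate would overstate the risk at every dose below the threshold, which is exactly why the linear estimates would be overprotective rather than underprotective.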

  The cancer risk rate of 0.005% per mSv, specified above, does not directly account for cellular repair and protection mechanisms that may be in play at low radiation doses.28 For this reason, most reputable scientists believe the radiation risk estimates we use for purposes of radiation protection and risk-benefit analysis, driven as they are by relatively high-dose atomic bomb survivor data, represent the worst-case scenario. There is no credible scientific evidence to suggest that we have significantly understated the risk of cancer from radiation, nor is there any valid theoretical basis to suggest that we have done so. Since these risk estimates represent the worst case, when we set radiation dose limits to protect against that worst case, we are being highly conservative in terms of minimizing risk and, therefore, affording the highest level of protection. To put it simply, when radiation scientists aren’t sure about something, they tend to recommend policy measures that overprotect the public, in order to compensate for that uncertainty.

  This is not unlike the way civil engineers might approach the design of a suspension bridge to ensure the public is protected from its potential collapse. They would likely first calculate the strength of the steel cables needed to support the weight of the roadbed and vehicle traffic, based on fundamental static engineering principles and materials research, and then double that cable strength in the building specifications, just in case the real cable that shows up at the construction site doesn’t meet its theoretical strength specification. In effect, the project has been “overengineered” to provide a margin of safety. Radiation protection standards can be thought of in the same way. They have been overengineered by using worst-case atomic bomb survivor risk rates, coupled with added safety factors to account for uncertainties.
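  The safety-factor logic amounts to a one-line calculation. A sketch, with all of the engineering numbers invented for illustration:

      # Over-engineering sketch: all numbers are invented for illustration.
      required_strength = 5_000  # load the cables must carry, per the design calculation
      SAFETY_FACTOR = 2.0        # the "just in case" doubling described in the text
      specified_strength = required_strength * SAFETY_FACTOR
      print(specified_strength)  # -> 10000.0, the figure written into the specification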

  We’ll further investigate cancer risk calculations and how they can be used to weigh risks and benefits in Part Three. But for now, let’s just absorb the take-home message of this chapter. The message is that the LSS of the atomic bomb survivors is one of the strongest epidemiology studies ever conducted and has given us a very accurate estimate of cancer risk per unit of radiation dose. It is so strong that it is unlikely to be undermined by any other epidemiological study within our lifetimes.

  To reiterate, it is important to remember that the LSS is so reliable because it is a large cohort study with many years of follow-up. In the epidemiology game, you want a cohort study—the gold standard—to provide the evidence. And the bigger the cohort study, the stronger the evidence. So the atomic bomb survivor cohort study, comprising over 94,000 study subjects, recruited simultaneously and followed for over 65 years, is about the best evidence you can get, and it provides tremendous reassurance as to the reliability of the health risk estimates generated from its findings.
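  Why does cohort size translate into strength of evidence? Roughly speaking, the statistical uncertainty of an estimated risk shrinks with the square root of the number of subjects. A sketch under a deliberately simple binomial assumption (the actual LSS dose-response modeling is far more sophisticated):

      import math

      # Standard error of an estimated risk proportion under a simple
      # binomial model: it shrinks as 1/sqrt(n), so bigger cohorts pin
      # the risk down more tightly.
      def standard_error(risk, n):
          return math.sqrt(risk * (1 - risk) / n)

      for n in (1_000, 10_000, 94_000):      # 94,000 ~ the LSS cohort size
          print(n, standard_error(0.25, n))  # 0.25 ~ baseline lifetime risk from the text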

  This also explains why epidemiologists pay little attention to the small case-control studies that occasionally appear in the scientific literature or popular press, claiming to contradict the findings of the LSS. For example, when the media reports on some newly published small case-control study that appears to show dental x-rays are producing brain tumors at an alarming rate, all it elicits from the epidemiologic community is a collective yawn. What authors of such reports often fail to mention is that it has been well established that people with brain tumors not only report more dental x-rays, but also report more exposure to hair dryers, hair dyes, and electric razors.29 The likely reason is that people with brain tumors are psychologically sensitized to any exposures related to their heads. So which is it that caused their brain tumors: the dental x-rays, the hair dryers, the hair dyes, or the electric razors? In all likelihood, none of them. The best explanation for the apparent associations is simply that these case-control studies are tainted by recall or other biases. But at least these are known biases. In a well-designed case-control study, there are ways of statistically adjusting for the biases that you can anticipate, but it is impossible to correct for biases that you don’t even know exist. Alas, unknown biases are the Achilles heel of case-control studies.

  So what do these contrarian case-control studies do to our established risk estimates for radiation-induced cancer? Nothing. The LSS is so strong that a small case-control study is never going to unseat its findings. We may improve the precision and accuracy of our current risk estimates as more information comes out of the LSS. We may even augment that evidence with data from some large ongoing case-control and cohort studies of medically exposed patients, and even larger studies of radiation workers with decades of exposure history.30 But these studies will likely only improve the precision of our risk estimates at low doses, and allow us to confirm that the atomic bomb survivor data are relevant to people who are exposed to radiation under different circumstances and at different dose rates. For the most part, we have what we have, in terms of cancer risk information, and what we have is pretty darn good.

  In short, we know much more about the cancer risks of radiation than about those of any other carcinogen.31 So, if we, as reasonable and intelligent people concerned about our health and the health of others, can’t come to a workable consensus about acceptable levels of cancer risk from the use of radiation, there is little hope that we will be able to reach a consensus on any other cancer hazards, for which much less is known and for which there is virtually no possibility that a huge cohort study will ever suddenly appear and provide the needed information. On the flip side, however, if we can agree on general principles by which the cancer risks of radiation will be addressed, these same principles can likely be applied to comparable risk situations, where the data are scarcer but the need no less compelling. Radiation is thus an excellent test case for how best to handle environmental cancer threats in general.

  With all this focus on radiation and cancer, one might presume that cancer was the main interest of the scientists who started the atomic bomb survivor studies. Remarkably, in the beginning, virtually no one was interested in measuring radiation-induced cancer risk. That was just a side activity. The original goal of the atomic bomb survivor studies was to measure the horrific genetic effects that certain people had predicted would occur among the progeny of the atomic bomb survivors. Some of these predictions of a population explosion of mutant children were the fantasy of science fiction writers with overactive imaginations, but others were based on the sound laboratory research of geneticists, radiation biologists, and other reputable scientists. Public interest and concern about genetic effects were extremely high, and cancer risks seemed to be of minor concern. Fortunately, the mutant progeny that had been predicted did not appear, although the survivor studies continue to search for them. In the next chapter we’ll explore exactly why these mutants were expected, and why they never showed.

  CHAPTER 10

  BREEDING SEASON: GENETIC EFFECTS

  When an atomic bomb is set off … and kills hundreds of thousands of people directly, enough mutations have been implanted in the survivors’ [reproductive cells] to cause a comparable amount of genetic deaths … from now into the very distant future.

  —Hermann J. Muller, 1946 Nobel Laureate in Physiology or Medicine

  There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.

  —Mark Twain

  LORD OF THE FLIES

  Hermann Joseph Muller Jr. (1890–1967) was a man of humble origins. The grandson of European immigrants to New York City, he spent his childhood first in the neighborhood of Harlem in Manhattan and then in the Bronx. His father was a metal artisan, and his mother a housewife. They could offer him no more than overcrowded city public schools for his education, but Hermann made the most of those schools. He inherited from his father a love of the natural world, particularly all things biological. His aptitude for science combined with his strong work ethic won him a scholarship to Columbia University to study biology. At Columbia, his biological interests turned specifically to genetics and he eventually earned a PhD in the subject. He remained in academia, moving from student to faculty member, and climbed the academic ladder from one university to the next. Then, on December 10, 1946, at age 56, he stood in his formal jacket with tails to accept his Nobel Prize in Physiology or Medicine. Was his another story of living the American dream? Hardly.

  The first half of the twentieth century had been the heyday for the radiation physicists. By 1946, there had been 21 Nobel Prizes in Physics awarded for discoveries related to radiation. Muller, a biologist, was being awarded a Nobel Prize in Physiology or Medicine “for the discovery of the production of mutations by means of x-ray irradiation.” Radiation biology had arrived.

  The experiments that won Muller his prize had actually been completed in 1927, well before World War II. Yet it was only 20 years later, after the war had ended and in the wake of the atomic bombings of Japan, that they caught the public’s attention. The world had changed dramatically in those 20 years, and so had Muller. He was not the man he had been before. It was as though he were accepting his Nobel on behalf of a man who no longer existed.

  The question that needed to be answered back in 1927 was whether radiation could produce inheritable mutations. An inheritable mutation is a permanent change to a gene that causes it to transmit an altered trait (i.e., a new trait, not found in the parents) to the offspring.1 Muller showed that radiation could and did produce inheritable mutations, at least in his biological model, the common fruit fly (Drosophila melanogaster). But why study fruit flies?

  Before fruit flies became the model of choice for genetic research, Gregor Mendel (1822–1884) had done groundbreaking hereditary experiments with ordinary garden peas. Mendel, by all accounts, was a washout at most everything he did. He had failed at an ambition to be a parish priest because he was too squeamish to visit sick parishioners. He spent two years at the University of Vienna studying natural history, but failed his examinations to get a teaching certificate. He even acknowledged that he had joined his order of Augustinian monks, in Brünn, Moravia (now Brno, Czech Republic), in 1843, largely to escape the pressures of working for a living.2

  The abbot of the monastery had assigned Mendel gardening duties, likely because of his prior experience with natural history, which suited Mendel just fine. In his spare time, he started performing hybrid crosses between pea plants that displayed opposite traits (e.g., smooth vs. wrinkled seeds; green vs. yellow seeds; tall vs. short height) and was bemused to find that offspring of the crosses tended to have the trait of either one or the other parent plant, but not an intermediate trait.3 For example, a hybrid cross between a tall plant (six-foot height) and a short plant (one-foot height) produced offspring that were either six feet tall or one foot tall, but never three feet. This phenomenon may have been news to Mendel, but botanists of the time were well aware of it, although they couldn’t provide an explanation.

  But Mendel, like John Snow, appreciated the value of counting and measuring things. So he kept meticulous records of his pea hybridization experiments. In the winter, when there was no gardening to be done, he amused himself by studying his pea records. At some point he noticed that no matter what traits he crossed, there were always constant ratios of the two traits among the offspring, and the exact ratio depended upon whether it was a first-generation or a second-generation cross. The constant ratios followed very consistent patterns, regardless of the trait being examined. Mendel reported his findings at the 1865 meeting of the Brünn Society for the Study of Natural History, but no one at the meeting was interested in Mendel’s ratios. His report was published in the meeting’s proceedings and forgotten.

  Then in 1900, three botanists who were working independently with different species of plants and different traits, rediscovered the ratios. To their disappointment, their subsequent searches of the literature revealed that Mendel had scooped them 35 years earlier.4 Nevertheless, they had demonstrated that Mendel’s ratios were not restricted to just peas. Still, were they restricted just to plants?

  Mendel’s ratios were ultimately codified, along with other concepts of inheritance that he revealed in his writings, into the grander Mendelian principles of inheritance, a set of rules that defines how hereditary traits are transmitted from one generation to the next. The ramifications of the Mendelian principles for the science of genetics were completely revolutionary. A full description of those principles is well beyond the scope of this book. Nevertheless, we can say that they have certain mechanistic implications as to how heredity works and suggest that inheritable traits are transmitted from parent to offspring in the form of discrete units of information that we now know as genes. Mendel’s work implied that some versions of genes, known as variants,5 were associated with dominant traits (e.g., smooth seeds, yellow seeds, and tall plants), while other genetic variants were associated with recessive traits (e.g., wrinkled seeds, green seeds, and short plants). Whenever a dominant gene was present it suppressed expression of the recessive gene; therefore, recessive genes could only be expressed in the absence of dominant genes. Mendel’s fundamental conclusion—that the male and female parents could each contribute different versions of the same gene to their offspring, and that some variants dominated over others—was tremendously insightful. His peas had given him the awareness that genes control heredity; but, at that point, genes existed solely as an abstract concept that enabled scientists to predict the distribution of traits among offspring, nothing more.
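  Mendel’s dominant/recessive logic is easy to make concrete with a simulated cross. A sketch, using invented labels (“T” for a dominant tall variant, “t” for a recessive short one):

      import random
      from collections import Counter

      # Simulated cross between two Tt parents. Each parent passes one
      # variant at random; the offspring is short only if it inherits two
      # recessive copies (tt), since a dominant T suppresses the recessive t.
      def offspring_trait():
          genotype = random.choice("Tt") + random.choice("Tt")
          return "short" if genotype == "tt" else "tall"

      counts = Counter(offspring_trait() for _ in range(100_000))
      print(counts)  # roughly 3 tall : 1 short -- a constant Mendelian ratio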

 
