Mad Science: The Nuclear Power Experiment
Page 6
These spent fuel assemblies, also known as high-level radioactive waste, must be constantly cooled with water in the pools. Any loss of cooling water, whether from mechanical failure, an act of sabotage, or a natural disaster, would be catastrophic. The average US nuclear plant contains about five times the radioactivity released during the 1986 Chernobyl meltdown, and hundreds of times more than was produced by the bombs used at Hiroshima and Nagasaki. Because many US reactors have operated for thirty years or more, pools at these aging plants can house no more spent fuel, and assemblies are transferred to “dry cask” storage: steel-and-concrete casks, placed outside the reactor, that contain the assemblies. At least five years must elapse between the time fuel is removed from a reactor and its placement in a dry cask, to allow the radioactivity, and the heat it generates, to at least partially decay.
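Why years of pool cooling are needed can be sketched with the Way–Wigner approximation, a rough textbook formula for fission-product decay heat as a fraction of a reactor's operating power. This is an illustration only, not the analysis regulators actually use; the 4.5-year in-core residence time below is an assumed, hypothetical value, and real spent-fuel thermal calculations rely on detailed isotopic inventories.

```python
# Way-Wigner approximation: decay heat as a fraction of full reactor power.
# Illustrative sketch only; real spent-fuel analysis uses detailed isotopics.

def decay_heat_fraction(t_shutdown_s: float, t_operating_s: float) -> float:
    """Fraction of full power emitted as decay heat, t_shutdown_s seconds
    after shutdown, following t_operating_s seconds of operation."""
    return 0.0622 * (t_shutdown_s ** -0.2
                     - (t_shutdown_s + t_operating_s) ** -0.2)

YEAR = 365.25 * 24 * 3600   # seconds in a year
T_OP = 4.5 * YEAR           # assumed in-core residence time (hypothetical)

for label, t in [("1 day", 86400.0), ("1 year", YEAR), ("5 years", 5 * YEAR)]:
    print(f"{label:>8}: {decay_heat_fraction(t, T_OP):.2e} of full power")
```

The fraction falls steeply but never reaches zero on human timescales: even five years after shutdown, a large power reactor's spent fuel still emits enough heat that it must be managed, which is why the transfer to passively cooled dry casks waits that long.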
Changing fuel assemblies is not the only activity that takes place during refueling. A number of mechanical inspections occur while the reactor is out of service. A scheduled release of radioactive water into the environment also typically occurs. When refueling is complete, the reactor is restarted gradually, just as it must be shut down gradually, to reduce safety risks. Any other unscheduled shutdown of a reactor to address mechanical problems follows the same procedures.
The final step in operating a nuclear reactor actually occurs after it closes permanently. A total of twenty nuclear power reactors in the US have been shut down, mostly in the 1980s and 1990s. Operators of these reactors are required by federal law to “decommission” the plant, a process of disassembling the reactor’s parts and storing them away from humans, animals, and plants. Decommissioning has taken much longer than expected for reactors already closed – typically at least a decade. It has also been very costly; federal law requires all reactor operators, during the time the reactor operates, to maintain a minimum fund specifically earmarked for decommissioning, an amount that often runs into the hundreds of millions of dollars.
The preceding description included 1) the “front end” of the nuclear fuel cycle (uranium mining, milling, conversion, enrichment, and fabrication), 2) the process of generating electrical power in reactors, and 3) decommissioning nuclear reactors that have shut down permanently. Each raises questions about safety and health. There are numerous ways in which humans can be exposed to radiation in the production of atomic electrical power, including:
– Occupational Exposure: Nuclear workers can be exposed to radiation in each phase of the process of generating electricity, from the time that miners dig uranium out of the ground to the decommissioning efforts after a reactor shuts down permanently. Standards that impose maximum levels of occupational exposure have existed for years, and monitoring worker exposure is mandated by law, but several problems exist. Permissible exposure levels do not necessarily mean safe exposure levels. Worker exposures have not always been monitored, and government oversight has not always been adequate. Exposures measured in workers are usually just a gross total of radiation, and do not distinguish specific types of radioactive chemicals. Finally, health studies of workers are hampered by the fact that nuclear workers are typically healthier than the general population and have better access to health care (the “healthy worker effect”); comparing disease rates of workers with those of local residents therefore often shows lower rates among workers, and is not particularly helpful.
– Accidents/Meltdowns: Since the first nuclear reactors were built over sixty years ago, mechanical failures at these complex machines have been a concern. Of greatest concern is the potential for a catastrophic meltdown that would emit large amounts of radioactivity, exposing not just workers and local residents, but populations far beyond. This concern has become a reality on several occasions, the worst of which occurred at the Chernobyl plant in the former Soviet Union in 1986. A total loss of coolant at one of the plant’s reactors, caused by human error during an experiment, triggered an explosion that blew the concrete lid completely off the reactor, which had no outer containment building – a design error. Massive amounts of radiation were released and detected around the globe, thousands of miles from the site. The damage to human health was devastating, and may take years to fully assess. The multiple meltdowns at the Fukushima plant in Japan beginning March 2011, after a powerful earthquake and tsunami, are still in progress as of this writing, and may eventually rival Chernobyl in contamination and human casualties. Other accidents include the 1979 Three Mile Island meltdown, in which over half of the reactor core melted – again a result of human error during operations.
– Routine Releases: Even the best-designed and best-managed nuclear reactor must release some amount of radiation into the environment outside the plant. There are planned releases, like those during refueling about every eighteen months. There are also accidental releases from mechanical problems. Plant personnel attempt to minimize these releases, but no reactor can operate with zero releases of radioactivity. In 1971, the AEC held public hearings on guidelines for “effluents” from nuclear plants. The Commission later proposed a standard for liquid and gaseous emissions, then also proposed that no action was needed against any reactor operator unless it doubled the limit in a single quarter or quadrupled the annual limit over twelve months (AEC/NRC). Nobody denies that routine releases from nuclear plants occur, nor that these emissions enter air, food, and water. But there has been a long debate, examined later in this book, over whether these legally sanctioned releases have harmed humans.
– Exposure to Stored Waste: Nuclear power reactors store most of the waste that they generate, in deep pools of water or in dry casks. These storage methods are designed to keep humans from being exposed to any of this radioactivity. But no method is foolproof – safe storage is a promise, not a guarantee – and the threat of exposure, from mechanical failure, an act of terrorism, a natural disaster, or simply transporting these materials, will remain for thousands of years.
The issues listed above – that is, understanding risks from exposure to radiation from nuclear power plants – are essentially matters of health research addressed by scientists. But every aspect of nuclear power was, and remains, highly politicized, and the effort to understand health risk is perhaps the most politicized of all. Attempts to document truths about atomic energy risks were met head-on by a culture of secrecy and deception.
Soothing Big Bang Fears
An understanding of how secrecy, deception, and outright lies became a part of the nuclear power industry must begin by recognizing the historical roots of this culture. Specifically, the development of the atomic bomb during World War II represented the first use of the novel technology of atomic fission. Spurred by the concern that Nazi Germany might develop and use a similar weapon first, President Franklin D. Roosevelt ordered the US military to conduct an all-out effort to develop the bomb. (Actually, after the discovery of fission, about six nations began researching an atomic bomb.) This effort, famously known as the Manhattan Project, required less than three years to successfully develop and test the new weapon.
Thorough secrecy was an absolute necessity, a built-in fact of life of the Manhattan Project, to keep sensitive information from falling into the wrong hands. The American public was told nothing about it, and even most government officials knew nothing about it (Harry Truman, who had been Vice President for three months, was only informed about the project after he assumed the presidency following Roosevelt’s sudden death). The large budget appropriations (a total of $2 billion) were described in vague terms to Congressmen, who voted for the funds anyway with few questions. People living near weapons production sites were not told of their true purpose, only that they were plants aiding in the war effort. All but the highest-ranking workers at these plants had no idea about the product; again, they were told that the work was simply part of an effort to win the war.
When the test bomb known as Trinity was successfully exploded in the desert of southern New Mexico in the pre-dawn hours of July 16, 1945, the flash could be seen as far as 200 miles away. The official explanation to people who saw and reported it was that the flash was just an explosion at a munitions site. Not until the Hiroshima bomb was dropped twenty-one days later did the military remove the cover of secrecy from the Manhattan Project, and for the first time, Americans realized the true nature of what had been going on at places like Oak Ridge, Hanford, and Los Alamos.
The use of the bomb and the end of the war brought relief to war-weary Americans. But in addition, a discussion over use of the new device began within scientific, political, and military circles, as well as among everyday citizens. Had it been necessary to use the weapon at all? Should it have been used on civilian targets? Was a second bomb needed to end the war? Truman and his advisors held that the decisions had been morally correct. But many others, including military leaders such as Eisenhower, disagreed. These alternative viewpoints did not change any minds in the White House.
Just six weeks after the bombs were dropped, US troops entered Hiroshima and Nagasaki to inspect and “clean up” the devastated cities. The soldiers later related that they were given no specific instructions about what the cities would look like or what condition the survivors would be in. No precautions were taken regarding clothing or diet; the soldiers not only breathed the local air, but used the Nagasaki municipal water supply for drinking and bathing. They also wore no badges to measure the radiation to which they were exposed, because a military team had visited the two cities two weeks earlier and found contamination on the ground “below hazardous limits.” Over 1,000 Army, Navy, and Marine troops participated in this mission for six weeks before being reassigned.
But the mission’s implications continued, as reports of health problems among the troops followed. Some complaints came immediately after the mission, including skin sores that itched and burned. Others did not surface until years later, when the troops were still only middle-aged, otherwise healthy men. An unusually high number of diseases was reported: not just cancer, but disorders of the bloodstream, lung, heart, skin, and bone as well. Reports of these problems were aired by the media, as a number of soldiers spoke out publicly. The federal response, however, was to exclude the Nagasaki group from studies of soldiers stationed close to atom bomb tests in Nevada and the Pacific, and to deny requests from afflicted soldiers seeking Veterans Administration benefits. Protests from members of Congress forced a 1980 report by the Defense Department – which ignored any potential health risks to the troops from service in post-bomb Hiroshima and Nagasaki; the only mention was of four cases of multiple myeloma, a number considered not abnormal in this population. The years went on, anecdotes piled up, but the deceptive party line held fast.
Those who worked for the Manhattan Project were stationed at Los Alamos, Hanford, and Oak Ridge. Security was extremely tight; only those with clearance were allowed into the factories and laboratories. Access in and out of the towns that sprang up near the three sites was also greatly restricted. High level scientists were instructed not to divulge any information about the project to their families. Even with these extreme measures, information was leaked. After the Soviet Union exploded its first atomic bomb in 1949, it was learned that the Russian effort had benefited from information obtained from Klaus Fuchs, a physicist on the British team at Los Alamos who spied for the Soviets, smuggling details of the Manhattan Project to the Stalin regime in Moscow. Complete secrecy, along with the supporting lies and deceptions, was now an even greater priority for the atomic weapons program.
The Japanese surrender ended World War II, but the American effort to expand its nuclear arsenal continued. The war had rearranged the balance of world power, and the US and Soviet Union emerged as the dominant nations. While the Americans and Soviets had been allies against Nazi Germany, after the surrender the relationship became an uneasy alliance at best and an outright hostile relationship at worst. The Soviet takeover of Eastern Europe at the end of the war fueled worry that Stalin had still greater ambitions on his mind. American intelligence had reason to believe that the Soviets were developing nuclear weapons, and for his part Stalin knew that the US would stockpile them. The Cold War, with nuclear weapons squarely in its midst, was on.
US bomb tests resumed after the war at a relatively slow pace, with just five explosions between 1946 and 1950, and the production of several dozen weapons during this time. The US worldwide monopoly on nuclear weapons continued. But during this period of relative calm, the tight secrecy surrounding the bomb program continued as well. Safety precautions for nuclear weapons workers were poor, but these mattered far less to leaders than maintaining secrecy and ensuring the success of the bomb program, so no concerns about them were publicly raised.
The first successful atomic bomb test by the Soviets in August 1949 shattered any lasting hope of avoiding hostilities. Hiroshima and Nagasaki were no longer seen as oddities, but the start of a very real, and very threatening trend. The Truman Administration, at the urging of military leaders, stepped up its program of testing and manufacturing atomic bombs. It expanded bomb testing from remote south Pacific locations to include a site in Nevada, just seventy miles northwest of Las Vegas. Production of bombs was also sped up, and the test-and-build race caused the number of nuclear weapons to soar from 1,000 in 1952 to a peak of 32,000 in 1967.
The Soviets, even after Stalin died in 1953, joined in the sprint. The duel for nuclear “superiority” was now a chilling reality, one not eased until the end of the Cold War nearly four decades later. In that time, the US military conducted 1,051 nuclear weapons tests (the Soviets, 719), in the atmosphere and underground:
[Table of US nuclear tests omitted. Source: Norris RS and Cochran TB. United States Nuclear Tests, July 1945 to 31 December 1992. Washington, DC: Natural Resources Defense Council, 1994. Excludes the weapons used on Hiroshima and Nagasaki in 1945.]
Much has been written about the Cold War nuclear arms race, how it was conducted, and how it affected American public policies. A series of secretive and deceptive practices by officials became an integral part of the race as it developed; they are listed here:
Nuclear War “Winnable.” A number of US leaders viewed nuclear war as inevitable, and the arms race as an opportunity for America to stockpile more weapons than the Soviets in order to “win” such a war. Predictions abounded that America could achieve a military victory even at the cost of tens of millions of its citizens. This viewpoint was perhaps the greatest distortion of the atomic age. Nuclear war, like any war, is not an inevitable force of nature, but a conscious choice of leaders. In addition, as the Soviet Union quickly stockpiled weapons, it became clear that all-out nuclear war between the two nations would result in massive casualties for current and future generations, endangering life on the planet – clearly a war without “winners.” Yet numerous American leaders continued to preach the gospel of winnable nuclear war for years.
Selection of Nevada as Test Site. After the Soviets became an atomic power, the Truman administration decided to step up the number and frequency of bomb tests. Sites in the south Pacific would continue to be used, but officials wanted another location in the continental US, to shorten the time needed to shift materials and personnel. Government planners chose a remote location in the Nevada desert, on land already owned and secured by the military. Officials agreed that the site was far enough from large populations to not pose any health risk. But two days after the first Nevada test in January 1951, fallout was detected by Geiger counters used by Eastman Kodak employees in distant Rochester, New York, during a snowstorm, at five to ten times typical background levels. Officials may have been surprised, but denied that the fallout posed any health hazard, as they did for all 100 aboveground tests in Nevada over the next dozen years (another 106 atmospheric tests were conducted in the Pacific) – with no health studies or other evidence to back their claim.
Missile Gap Exaggeration. When the Soviet Union joined the atomic club, the US had a much greater number of weapons in its stockpile. This “lead” continued well into the 1950s. As late as 1960, the US had about 20,000 weapons to about 2,000 for the Soviets. The Soviet total only exceeded the American one in the late 1970s. However, the highly charged atmosphere of the 1950s led a number of political leaders to fabricate the notion that the Soviet Union had greater numbers of weapons, in spite of evidence to the contrary secured by intelligence agencies. This “missile gap” was propagated by the Eisenhower administration, by Presidential candidate John F. Kennedy, and by military leaders. These distortions made Americans excessively fearful at the time, and generated support for an expanded US nuclear program based on false information.
Hiding Atomic Tests. Aboveground atomic bomb tests were publicly announced by the military as they occurred. Some were even televised. After 1963, tests continued at the Nevada site, below the ground, until late 1992, when the last test occurred. But in 1993, Energy Department officials made a belated revelation of 204 tests that the public had never been told about, amounting to about 20% of the 1,051 total bomb tests. Most were small-scale tests conducted in the 1960s and 1970s that could escape detection by seismic instruments. “We were shrouded and clouded in an atmosphere of secrecy,” stated Energy Secretary Hazel O’Leary.
Suppression of H-Bomb Opponents. After the Soviets joined the nuclear arms race, the Truman administration began a program to develop a hydrogen bomb, a nuclear device roughly 1,000 times more powerful than the atomic bomb. The H-bomb was tested successfully in the Pacific in 1952 and 1954. Some scientists were disturbed and voiced their dissent, most notably Dr. J. Robert Oppenheimer, who had led the scientific team at Los Alamos that developed the first nuclear weapons. The AEC advisory committee chaired by Oppenheimer recommended against pursuing an H-bomb. In a climate marked by fear of anything remotely linked to communism, this was unacceptable to US leaders. In the late 1930s, Oppenheimer had been briefly engaged to a Communist party member, and had attended several party meetings and made small donations – all of which had ended by 1941. The AEC convened a special hearing; some witnesses called Oppenheimer a patriot and humanist, while others warned he was a threat. Oppenheimer’s security clearance, which allowed him access to restricted data, was revoked under Eisenhower.