by Al Gore
Citing concerns that publication would reveal the detailed design of a virus that was only a few mutations away from a form that could be spread by human-to-human transmission, bioterrorism officials tried to dissuade scientists from publishing the full genetic sequence that accompanied their papers. Although publication was allowed to proceed after a full security review, the U.S. government remains actively involved in monitoring genetic research that could lead to new bioweapons. Under U.S. law, the FBI screens the members of research teams working on projects considered militarily sensitive.
HUMAN CLONING
Among the few lines of experimentation specifically banned by the U.S. government is federally funded research into the cloning of human beings. The birth of the first cloned sheep, Dolly, in 1996 made clear that human cloning was imminently feasible. As vice president, I strongly supported this interim ban pending a much fuller exploration of the implications for humanity of proceeding down that path, and I called for the creation of a new National Bioethics Advisory Commission to review the ethical, moral, and legal implications of human cloning.
A few years earlier, as chairman of the Senate Subcommittee on Science, I had pushed successfully for a commitment of 3 percent of the funding for the Human Genome Project to be allocated to the study of its ethical, legal, and social implications (the awards are now referred to as ELSI grants), in an effort to ensure careful study of the difficult questions that were emerging more quickly than their answers. This set-aside has become the largest government-financed research program into ethics ever established. James Watson, the co-discoverer of the double helix, who by then had been named to head the Genome Project, was enthusiastically supportive of the ethics program.
The ethics of human cloning has been debated almost since the very beginning of the DNA era. The original paper published by Watson and Crick in 1953 included this sentence: “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.” As chairman of the Science Investigations Subcommittee in the U.S. House of Representatives, I conducted a series of hearings in the early 1980s about the emerging science of cloning, genetic engineering, and genetic screening. Scientists were focused at that stage on cloning animals, and fifteen years later they succeeded with Dolly. Since then, they have cloned many other animals, including a variety of livestock.
But from the start of their experiments, the scientists were clear that all of the progress they were making on the cloning of animals was directly applicable to the cloning of people—and that it was only ethical concerns that had prevented them from attempting such procedures. Since 1996, human cloning has been made illegal in almost every country in Europe, and the then director-general of the World Health Organization called the procedure “ethically unacceptable as it would violate some of the basic principles which govern medically assisted reproduction. These include respect for the dignity of the human being and the protection of the security of human genetic material.”
Nevertheless, most observers anticipate that with the passage of time, and with further development and refinement of the technique, human cloning will eventually take place—at least in circumstances where a clear medical benefit can be gained without causing clear harm to the individual who is cloned or to society at large. In 2011, scientists at the New York Stem Cell Foundation Laboratory announced that they had cloned human embryos by reprogramming an adult human egg cell, engineering it to return to its embryonic stage, and then creating from it a line of identical embryonic stem cells that reproduced themselves. Although the DNA of these cells is not identical to that of the patient who donated the egg cell, the cells are identical to one another, which makes research carried out on them more reliable.
Several countries, including Brazil, Mexico, and Canada, have banned the cloning of human embryos for research. The United States has not done so, and several Asian countries seem to have far fewer misgivings about moving forward aggressively with the science of cloning human embryos—if not actual humans. From time to time, there are reports that one or another fertility doctor working at some secret laboratory, located in a nation without a ban on human cloning, has broken this modern taboo. Most if not all of these stories, however, are suspected of being fraudulent. To date, there has been no confirmed birth of a human clone.
In general, those in favor of proceeding with experiments in human cloning believe that the procedure is not really different from other forms of technological progress, that it is inevitable in any case, and that it is significantly more promising than most experiments because of the medical benefits that can be gained. They believe that the decision on whether or not to proceed with a specific clone should—like a decision on abortion—be in the hands of the parent or parents.
Those who oppose cloning of people fear that its use would undermine the dignity of individuals and run the risk of “commoditizing” human beings. Cloning does theoretically create the possibility of mass-producing many genetic replicas of the same original—a process that would be as different from natural reproduction as manufacturing is from craftsmanship.
Some base their argument on religious views of the rights and protections due to every person, though many who oppose human cloning base their views not on religion, but on a more generalized humanist assertion of individual dignity. In essence, they fear that the manipulation of humanity might undermine the definition of those individuals who have been manipulated as fully human. This concern seems to rest, however, on an assumption that human beings are reducible to their genetic makeup—a view that is normally inconsistent with the ideology of those who make the protection of individual dignity a top priority.
Both the temporary delay in the public release of details concerning how to create dangerous mutations in the H5N1 bird flu virus and the temporary ban on government-funded research into human cloning represent rare examples of thoughtful—if controversial—oversight of potentially problematic developments in order to assess their implications for humanity as a whole. Both were examples of U.S. leadership that led to at least a temporary global consensus. In neither case was there a powerful industry seeking to push forward quickly in spite of the misgivings expressed by representatives of the public.
ANTIBIOTICS BEFORE SWINE
Unfortunately, when there is a strong commercial interest in influencing governments to make a decision that runs roughshod over the public interest, business lobbies are often able to have their way with government—which once again raises the question: who is the “we” that makes decisions about the future course of the Life Sciences Revolution when important human values are placed at risk? In the age of Earth Inc., Life Inc., and the Global Mind, the record of decision making includes troubling examples of obsequious deference to the interests of multinational corporations and a reckless disregard of sound science.
Consider the shameful acquiescence by the U.S. Congress in the livestock industry’s continuing absurd dominance of antibiotic use in the United States. In yet another illustration of the dangerous imbalance of power in political decision making, a truly shocking 80 percent of all antibiotics sold in the United States are still allowed to be legally used on farms, in livestock feed and for injections, in spite of grave threats to human health. In 2012, the FDA began an effort to limit this use of antibiotics with a new rule requiring a veterinarian’s prescription.
Since the discovery of penicillin in 1928 by Alexander Fleming, antibiotics have become one of the most important advances in the history of health care. Although Fleming said his discovery was “accidental,” the legendary Irish scientist John Tyndall (who first discovered that CO2 traps heat) reported to the Royal Society in London in 1875 that a species of Penicillium had destroyed some of the bacteria he was working with, and Ernest Duchesne wrote in 1897 on the destruction of bacteria by another species of Penicillium. Duchesne had recommended research into his discovery but entered the army and went to war immediately after publication of his paper. He died of tuberculosis before he could resume his work.
In the wake of penicillin, which was not used in a significant way until the early 1940s, many other potent antibiotics were discovered in the 1950s and 1960s. In the last few decades, though, the discoveries have slowed to a trickle. The inappropriate and irresponsible use of this limited arsenal of life-saving antibiotics is rapidly eroding their effectiveness. Pathogens that are stopped by antibiotics mutate and evolve over time in ways that circumvent the effectiveness of the antibiotic.
Consequently, doctors and other medical experts have urged almost since the first use of these miracle cures that they be used sparingly and only when they are clearly needed. After all, the more they are used, the more opportunities the pathogens have to evolve through successive generations before they stumble upon new traits that make the antibiotics impotent. Some antibiotics have already become ineffective against certain diseases. And with the slowing discovery of new antibiotics, the potency of the ones we use in our current arsenal is being weakened at a rate that is frightening to many health experts. The effectiveness of our antibiotic arsenal—like topsoil and groundwater—can be depleted quickly but regenerated only at an agonizingly slow rate.
One of the most serious new “superbugs” is multidrug-resistant tuberculosis, which, according to Dr. Margaret Chan, director-general of the World Health Organization, is extremely difficult and expensive to treat. At present, 1.34 million people die from tuberculosis each year. Of the 12 million cases in 2010, Chan estimated that 650,000 involved strains of TB that were multidrug-resistant. The prospect of a “post-antibiotic world,” Chan said, means that “things as common as strep throat or a child’s scratched knee could once again kill.” In response to these concerns, the FDA formed a new task force in 2012 to support development of new antibacterial drugs.
But in spite of these basic medical facts, many governments—including, shockingly, the United States government—allow the massive use of antibiotics by the livestock industry as a growth stimulant. The mechanism by which the antibiotics cause a faster growth rate in livestock is not yet fully understood, but the impact on profits is very clear and sizable. The pathogens in the guts of the livestock are evolving quickly into superbugs that are immune to the impact of antibiotics. Since the antibiotics are given in subtherapeutic doses and are not principally used for the health of the livestock anyway, the livestock companies don’t particularly care. And of course, their lobbyists tendentiously dispute the science while handing out campaign contributions to officeholders.
Last year, scientists confirmed that a staphylococcus germ that was vulnerable to antibiotics jumped from humans to pigs whose daily food ration included tetracycline and methicillin. Then, the scientists confirmed that the same staph germ, after becoming resistant to the antibiotics, found a way to jump back from pigs into humans.
The particular staph germ that was studied—CC398—has spread in populations of pigs, chickens, and cattle. Careful analysis of the genetic structure of the germ proved that it was a direct descendant of an antibiotic-susceptible germ that originated in people. It is now present, according to the American Society for Microbiology, in almost half of all meat that has been sampled in the U.S. Although it can be killed with thorough cooking of the meat, it can nevertheless infect people if it cross-contaminates kitchen utensils, countertops, or pans.
Again, the U.S. government’s frequently obsequious approach to regulatory decision making when a powerful industry exerts its influence stands in stark contrast to the approach it takes when commercial interests are not yet actively engaged. In the latter case, it seems to be easier for government to sensitively apply the precautionary principle. But this controversy illustrates the former case: those who benefit from the massive and reckless use of antibiotics in the livestock industry have fought a rearguard action for decades and have thus far been successful in preventing a ban or even, until recently, a regulation limiting this insane practice.
The European Union has already banned antibiotics in livestock feed, but in a number of other countries the practice continues unimpeded. The staph germ that has jumped from people to livestock and back again is only one of many bacteria that are now becoming resistant to antibiotics because of our idiotic acceptance of the livestock industry’s insistence that it is perfectly fine for it to reduce some of its costs by becoming a factory for turning out killer germs against which antibiotics have no effect. In a democracy that actually functioned as it is supposed to, this would not be a close question.
Legislators have also repeatedly voted down a law that would prevent the sale of animals with mad cow disease (bovine spongiform encephalopathy, or BSE)—a neurodegenerative brain disease whose human form is contracted by eating beef contaminated during the slaughtering process with brain or spinal cord tissue from an animal infected by the pathogen (a misfolded protein, or prion) that causes the disease. Animals in the later stages of the disease can carry the prions in other tissues as well. When an animal on the way to the slaughterhouse begins stumbling, staggering, and falling down, it is fifty times more likely than other animals to have the disease.
The struggle in repeated votes in the Congress has been over whether those specific animals manifesting those specific symptoms should be diverted from the food supply. At least three quarters of the confirmed cases of mad cow disease in North America were traced to animals that had manifested those symptoms just before they were slaughtered. But the political power and lobbying muscle of the livestock industry has so intimidated and enthralled elected representatives in the U.S. that lawmakers have repeatedly voted to put the public at risk in order to protect a tiny portion of the industry’s profits. The Obama administration has issued a regulation that embodies the intent of the law rejected by Congress. However, because it is merely a regulation, it could be reversed by Obama’s successor as president. Again, in a working democracy, this would hardly be a close question.
THE INABILITY OF Congress to free itself from the influence of special interests has implications for how the United States can make the difficult and sensitive judgments that lie ahead in the Life Sciences Revolution. If the elected representatives of the people cannot be trusted to make even obvious choices in the public interest—as in the mad cow votes or the decisions that fritter away the effectiveness of antibiotics in order to enrich the livestock industry—then where else can these choices be made? Who else can make them? And even if such decisions are made sensitively and well in one country, what is to prevent the wrong decision being made elsewhere? And if the future of human heredity is affected in perpetuity, is that an acceptable outcome?
EUGENICS
The past record of decisions made by government about genetics is far from reassuring. History sometimes resembles Greek mythology: just as the gods marked forbidden boundaries with warnings, our past mistakes often do the same. The history of the eugenics movement 100 years ago provides such a warning: a profound misunderstanding of Darwinian evolution was used as the basis for misguided efforts by government to engineer the genetic makeup of populations according to racist and other unacceptable criteria.
In retrospect, the eugenics movement should have been vigorously condemned at the time—all the more so because of the stature of some of its surprising proponents. A number of otherwise thoughtful Americans came to support active efforts by their government to shape the genetic future of the U.S. population through the forcible sterilization of individuals who they feared would otherwise pass along undesirable traits to future generations.
In 1922, a “model eugenical sterilization law” (originally written in 1914) was published by Harry Laughlin, superintendent of the recently established “Eugenics Record Office” in New York State, to authorize sterilization of people regarded as:

(1) Feeble-minded; (2) Insane (including the psychopathic); (3) Criminalistic (including the delinquent and wayward); (4) Epileptic; (5) Inebriate (including drug-habitués); (6) Diseased (including the tuberculous, the syphilitic, the leprous, and others with chronic, infectious, and legally segregable diseases); (7) Blind (including those with seriously impaired vision); (8) Deaf (including those with seriously impaired hearing); (9) Deformed (including the crippled); and (10) Dependent (including orphans, ne’er-do-wells, the homeless, tramps, and paupers).
Between 1907 and 1963, over 64,000 people were sterilized under laws similar to Laughlin’s design. He argued that such individuals were burdensome to the state because of the expense of taking care of them. He and others also made the case that the advances in sanitation, public health, and nutrition during the previous century had led to the survival of more “undesirable” people who were reproducing at rates not possible in the past.
What makes the list of traits in Laughlin’s “model law” bizarre as well as offensive is that he obviously believed they were heritable. Ironically, Laughlin was himself an epileptic; under his own model legislation, he would have been a candidate for forced sterilization. Laughlin’s malignant theories also had an impact on U.S. immigration law: his work on evaluating recent immigrants from Southern and Eastern Europe was influential in shaping the highly restrictive quota system enacted in 1924.
As Jonathan Moreno points out in his book The Body Politic, the eugenics movement was influenced by deep confusion over what evolution really means. The phrase “survival of the fittest” did not originate with Charles Darwin but with Herbert Spencer, whose rival theory of evolution was based on the crackpot ideas of Jean-Baptiste Lamarck; it was Darwin’s cousin Sir Francis Galton who founded the eugenics movement itself. Lamarck argued that characteristics developed by individuals during their lifetimes were passed on to their offspring in the next generation.