The Disappearing Spoon: And Other True Tales of Madness, Love, and the History of the World from the Periodic Table of the Elements


by Sam Kean


  Meanwhile, scientists at the Pasteur Institute in France had dug up Domagk’s obscure journal article. In a froth that was equal parts anti–intellectual property (because they hated how patents hindered basic research) and anti-Teuton (because they hated Germans), the Frenchmen immediately set about busting the IGF patent. (Never underestimate spite as a motivator for genius.)

  Prontosil worked as well as advertised on bacteria, but the Pasteur scientists noticed some odd things when they traced its course through the body. First, it wasn’t prontosil that fought off bacteria, but a derivative of it, sulfonamide, which mammal cells produce by splitting prontosil in two. This explained instantly why bacteria in test tubes had not been affected: no mammal cells had biologically “activated” the prontosil by cleaving it. Second, sulfonamide, with its central sulfur atom and hexapus of side chains, disrupts the production of folic acid, a nutrient all cells use to replicate DNA and reproduce. Mammals get folic acid from their diets, which means sulfonamide doesn’t hobble their cells. But bacteria have to manufacture their own folic acid, and without it they can’t divide and spread. In effect, then, the Frenchmen proved that Domagk had discovered not a bacteria killer but bacteria birth control!

  This breakdown of prontosil was stunning news, and not just medically stunning. The important bit of prontosil, sulfonamide, had been invented years before. It had even been patented in 1909—by I. G. Farbenindustrie*—but had languished because the company had tested it only as a dye. By the mid-1930s, the patent had expired. The Pasteur Institute scientists published their results with undisguised glee, giving everyone in the world a license to circumvent the prontosil patent. Domagk and IGF of course protested that prontosil, not sulfonamide, was the crucial component. But as evidence accumulated against them, they dropped their claims. The company lost millions in product investment, and probably hundreds of millions in profits, as competitors swept in and synthesized other “sulfa drugs.”

  Despite Domagk’s professional frustration, his peers understood what he’d done, and they rewarded Pasteur’s heir with the 1939 Nobel Prize in Physiology or Medicine, just seven years after the Christmas mice experiment. But if anything, the Nobel made Domagk’s life worse. Hitler hated the Nobel committee for awarding the 1935 Peace Prize to an anti-Nazi journalist and pacifist, and the Führer had made it basically illegal for any German to win a Nobel Prize. As such, the Gestapo arrested and brutalized Domagk for his “crime.” When World War II broke out, Domagk redeemed himself a little by convincing the Nazis (they refused to believe at first) that his drugs could save soldiers suffering from gangrene. But the Allies had sulfa drugs by then, too, and it couldn’t have increased Domagk’s popularity when his drugs saved Winston Churchill in 1943, a man bent on destroying Germany.

  Perhaps even worse, the drug Domagk had trusted to save his daughter’s life became a dangerous fad. People demanded sulfonamide for every sore throat and sniffle and soon saw it as some sort of elixir. Their hopes became a nasty joke when quick-buck salesmen in the United States took advantage of this mania by peddling sulfas sweetened with antifreeze. Hundreds died within weeks—further proof that when it comes to panaceas the credulity of human beings is boundless.

  Antibiotics were the culmination of Pasteur’s discoveries about germs. But not all diseases are germ-based; many have roots in chemical or hormonal troubles. And modern medicine began to address that second class of diseases only after embracing Pasteur’s other great insight into biology, chirality. Not long after offering his opinion about chance and the prepared mind, Pasteur said something else that, if not as pithy, stirs a deeper sense of wonder, because it gets at something truly mysterious: what makes life live. After determining that life has a bias toward handedness on a deep level, Pasteur suggested that chirality was the sole “well-marked line of demarcation that at the present can be drawn between the chemistry of dead matter and the chemistry of living matter.”* If you’ve ever wondered what defines life, chemically there’s your answer.

  Pasteur’s statement guided biochemistry for a century, during which doctors made incredible progress in understanding diseases. At the same time, the insight implied that curing diseases, the real prize, would require chiral hormones and chiral biochemicals—and scientists realized that Pasteur’s dictum, however perceptive and helpful, subtly highlighted their own ignorance. That is, in pointing out the gulf between the “dead” chemistry that scientists could do in the lab and the living cellular chemistry that supported life, Pasteur simultaneously pointed out there was no easy way to cross it.

  That didn’t stop people from trying. Some scientists obtained chiral chemicals by distilling essences and hormones from animals, but in the end that proved too arduous. (In the 1920s, two Chicago chemists had to puree several thousand pounds of bull testicles from a stockyard to get a few ounces of the first pure testosterone.) The other possible approach was to ignore Pasteur’s distinction and manufacture both right-handed and left-handed versions of biochemicals. This was actually fairly easy to do because, statistically, reactions that produce handed molecules are equally likely to form righties and lefties. The problem with this approach is that mirror-image molecules have different properties inside the body. The zesty odor of lemons and oranges derives from the same basic molecules, one right-handed and one left-handed. Wrong-handed molecules can even destroy left-handed biology. A German drug company in the 1950s began marketing a remedy for morning sickness in pregnant women, but the benign, curative form of the active ingredient was mixed in with the wrong-handed form because the scientists couldn’t separate them. The freakish birth defects that followed—especially children born without legs or arms, their hands and feet stitched like turtle flippers to their trunks—made thalidomide the most notorious pharmaceutical of the twentieth century.*

  As the thalidomide disaster unfolded, the prospects of chiral drugs seemed dimmer than ever. But at the same time people were publicly mourning thalidomide babies, a St. Louis chemist named William Knowles began playing around with an unlikely elemental hero, rhodium, in a private research lab at Monsanto, an agricultural company. Knowles quietly circumvented Pasteur and proved that “dead” matter, if you were clever about it, could indeed invigorate living matter.

  Knowles had a flat, two-dimensional molecule he wanted to inflate into three dimensions, because the left-handed version of the 3D molecule had shown promising effects on brain diseases such as Parkinson’s. The sticking point was getting the proper handedness. Notice that 2D objects cannot be chiral: after all, a flat cardboard cutout of your right hand can be flipped over to make a left hand. Handedness emerges only with the z-axis. But inanimate chemicals in a reaction don’t know to make one hand or the other.* They make both, unless they’re tricked.

  Knowles’s trick was a rhodium catalyst. Catalysts speed up chemical reactions to degrees that are hard to comprehend in our poky, everyday human world. Some catalysts improve reaction rates by millions, billions, or even trillions of times. Rhodium works pretty fast, and Knowles found that one rhodium atom could inflate countless 2D molecules. So he affixed the rhodium to the center of an already chiral compound, creating a chiral catalyst.

  The clever part was that both the chiral catalyst with the rhodium atom and the target 2D molecule were sprawling and bulky. So when they approached each other to react, they did so like two obese animals trying to have sex. That is, the chiral compound could poke its rhodium atom into the 2D molecule only from one position. And from that position, with arms and belly flab in the way, the 2D molecule could unfold into a 3D molecule in only one direction.

  That limited maneuverability during coitus, coupled with rhodium’s catalytic ability to fast-forward reactions, meant that Knowles could get away with doing only a bit of the hard work—making a chiral rhodium catalyst—and still reap bushels of correctly handed molecules.

  The year was 1968, and modern drug synthesis began at that moment—a moment later honored with a Nobel Prize in Chemistry for Knowles in 2001.

  Incidentally, the drug that rhodium churned out for Knowles is levo-dihydroxyphenylalanine, or L-dopa, a compound since made famous in Oliver Sacks’s book Awakenings. The book documents how L-dopa shook awake eighty patients who’d developed extreme Parkinson’s disease after contracting sleeping sickness (Encephalitis lethargica) in the 1920s. All eighty were institutionalized, and many had spent four decades in a neurological haze, a few in continuous catatonia. Sacks describes them as “totally lacking energy, impetus, initiative, motive, appetite, affect, or desire… as insubstantial as ghosts, and as passive as zombies… extinct volcanoes.”

  In 1967, a doctor had had great success in treating Parkinson’s patients with L-dopa, a precursor of the brain chemical dopamine. (Like Domagk’s prontosil, L-dopa must be biologically activated in the body.) But the right- and left-handed forms of the molecule were knotty to separate, and the drug cost upwards of $5,000 per pound. Miraculously—though he didn’t know why at the time—Sacks notes that “towards the end of 1968 the cost of L-dopa started a sharp decline.” Freed by Knowles’s breakthrough, Sacks began treating his catatonic patients in New York not long after, and “in the spring of 1969, in a way… which no one could have imagined or foreseen, these ‘extinct volcanoes’ erupted into life.”

  The volcano metaphor is accurate, as the effects of the drug weren’t wholly benign. Some people became hyperkinetic, with racing thoughts, and others began to hallucinate or gnaw on things like animals. But these forgotten people almost uniformly preferred the mania of L-dopa to their former listlessness. Sacks recalls that their families and the hospital staff had long considered them “effectively dead,” and even some of the victims considered themselves so. Only the left-handed version of Knowles’s drug revived them. Once again, Pasteur’s dictum about the life-giving properties of proper-handed chemicals proved true.

  11

  How Elements Deceive

  No one could have guessed that an anonymous gray metal like rhodium could produce anything as wondrous as L-dopa. But even after hundreds of years of chemistry, elements continually surprise us, in ways both benign and not. Elements can muddle up our unconscious, automatic breathing; confound our conscious senses; even, as with iodine, betray our highest human faculties. True, chemists have a good grasp of many features of elements, such as their melting points or abundance in the earth’s crust, and the eight-pound, 2,804-page Handbook of Chemistry and Physics—the chemists’ Koran—lists every physical property of every element to far more decimal places than you’d ever need. On an atomic level, elements behave predictably. Yet when they encounter all the chaos of biology, they continue to baffle us. Even blasé, everyday elements, if encountered in unnatural circumstances, can spring a few mean surprises.

  On March 19, 1981, five technicians at NASA’s launch facilities at Cape Canaveral undid a panel on the space shuttle Columbia and entered a cramped rear chamber above the engines. A thirty-three-hour “day” had just ended with a perfectly simulated liftoff, and with Columbia—the most advanced spacecraft ever designed—set to launch on its first mission in April, the agency had understandable confidence. The hard part of their day over, the technicians, satisfied and tired, crawled into the compartment for a routine systems check. Seconds later, eerily peacefully, they slumped over.

  Until that moment, NASA had lost no lives on the ground or in space since 1967, when three astronauts had burned to death during training for Apollo 1. At the time, NASA, always concerned about cutting payload, allowed only pure oxygen to circulate in its spacecraft, not air, which contains 80 percent nitrogen (i.e., 80 percent deadweight). Unfortunately, as NASA recognized in a 1966 technical report, “in pure oxygen [flames] will burn faster and hotter without the dilution of atmospheric nitrogen to absorb some of the heat or otherwise interfere.” As soon as oxygen molecules (O2) absorb heat, they dissociate, and the freed atoms raise hell by stealing electrons from nearby atoms, a spree that makes fires burn hotter. Oxygen doesn’t need much provocation either. Some engineers worried that even static electricity from the Velcro on astronauts’ suits might ignite pure, vigorous oxygen. Nevertheless, the report concluded that although “inert gas has been considered as a means of suppressing flammability… inert additives are not only unnecessary but also increasingly complicated.”

  Now, that conclusion might be true in space, where the atmospheric pressure outside is nonexistent and a spacecraft needs only a little interior gas, since there’s no outside air pressing in on its walls. But when training on the ground, in earth’s heavy air, NASA technicians had to pump the simulators with far more oxygen to keep the walls from crumpling inward—which meant far more danger, since even small fires combust wildly in pure oxygen. When an unexplained spark went off one day during training in 1967, fire engulfed the module and cremated the three astronauts inside.

  A disaster has a way of clarifying things, and NASA decided inert gases were necessary, complicated or not, in all shuttles and simulators thereafter. By the 1981 Columbia mission, NASA filled any compartment prone to produce sparks with inert nitrogen (N2). Electronics and motors work just as well in nitrogen, and if sparks do shoot up, nitrogen—which is locked into molecular form more tightly than oxygen—will smother them. Workers who enter an inert compartment simply have to wear masks with their own air supply or wait until the nitrogen is pumped out and breathable air seeps back in—a precaution not taken on March 19. Someone gave the all clear too soon, the technicians crawled into the chamber unaware, and they collapsed as if choreographed. The nitrogen not only prevented their neurons and heart cells from absorbing new oxygen; it pickpocketed the little oxygen that cells store up for hard times, accelerating the technicians’ demise. Rescue workers dragged all five men out but could revive only three. John Bjornstad was dead, and Forrest Cole died in a coma on April Fools’ Day.

  In fairness to NASA, in the past few decades nitrogen has asphyxiated miners in caves and people working in underground particle accelerators,* too, and always under the same horror-movie circumstances. The first person to walk in collapses within seconds for no apparent reason. A second and sometimes third person dash in and succumb as well. The scariest part is that no one struggles before dying. Panic never kicks in, despite the lack of oxygen. That might seem incredible if you’ve ever been trapped underwater. The instinct not to suffocate will buck you to the surface. But our hearts, lungs, and brains actually have no gauge for detecting oxygen. Those organs judge only two things: whether we’re inhaling some gas, any gas, and whether we’re exhaling carbon dioxide. Carbon dioxide dissolves in blood to form carbonic acid, so as long as we purge CO2 with each breath and tamp down the acid, our brains will relax. It’s an evolutionary kludge, really. It would make more sense to monitor oxygen levels, since that’s what we crave, but it’s easier—and usually good enough—for cells to check that carbonic acid stays low, so they do the minimum.
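
  (A minimal sketch, in standard chemical shorthand, of the equilibrium at work here:

$$\mathrm{CO_2 + H_2O \rightleftharpoons H_2CO_3 \rightleftharpoons H^+ + HCO_3^-}$$

  The more CO2 lingers in the blood, the further this equilibrium shifts to the right, and it is the resulting hydrogen ions, the acidity, that the brain’s sensors actually register.)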

  Nitrogen thwarts that system. It’s odorless and colorless and causes no acid buildup in our veins. We breathe it in and out easily, so our lungs feel relaxed, and it snags no mental trip wires. It “kills with kindness,” strolling through the body’s security system with a familiar nod. (Fittingly, the traditional group name for the elements in nitrogen’s column, the “pnictogens,” comes from a Greek word for “choking” or “strangling.”) The NASA workers—the first casualties of the doomed space shuttle Columbia, which would disintegrate over Texas twenty-two years later—likely felt light-headed and sluggish in their nitrogen haze. But anyone might feel that way after thirty-three hours of work, and because they could exhale carbon dioxide just fine, little more happened mentally before they blacked out and nitrogen shut down their brains.

  Because it has to combat microbes and other living creatures, the body’s immune system is more biologically sophisticated than its respiratory system. That doesn’t mean it’s savvier about avoiding deception. At least, though, with some of the chemical ruses against the immune system, the periodic table deceives the body for its own good.

 
In 1952, Swedish doctor Per-Ingvar Brånemark was studying how bone marrow produces new blood cells. Having a strong stomach, Brånemark wanted to watch this directly, so he chiseled out holes in the femurs of rabbits and covered the holes with an eggshell-thin titanium “window,” which was transparent to strong light. The observation went satisfactorily, and Brånemark decided to snap off the expensive titanium screens for more experiments. To his annoyance, they wouldn’t budge. He gave up on those windows (and the poor rabbits), but when the same thing happened in later experiments—the titanium always locked like a vise onto the femur—he examined the situation a little closer. What he saw made watching juvenile blood cells suddenly seem vastly less interesting and revolutionized the sleepy field of prosthetics.

  Since ancient times, doctors had replaced missing limbs with clumsy wooden appendages and peg legs. During and after the industrial revolution, metal prostheses became common, and disfigured soldiers after World War I sometimes even got detachable tin faces—masks that allowed the soldiers to pass through crowds without drawing stares. But no one was able to integrate metal or wood into the body, the ideal solution. The immune system rejected all such attempts, whether made of gold, zinc, magnesium, or chromium-coated pig bladders. As a blood guy, Brånemark knew why. Normally, posses of blood cells surround foreign matter and wrap it in a straitjacket of slick, fibrous collagen. This mechanism—sealing the hunk off and preventing it from leaking—works great with, say, buckshot from a hunting accident. But cells aren’t smart enough to distinguish between invasive foreign matter and useful foreign matter, and a few months after implantation, any new appendages would be covered in collagen and start to slip or snap free.

 
