The Disappearing Spoon: And Other True Tales of Madness, Love, and the History of the World from the Periodic Table of the Elements
But to progress further, they needed a better way to “see” those infinitesimal particles and track how they behaved. Over his beer, Glaser—who had short, wavy hair, glasses, and a high forehead—decided bubbles were the answer. Bubbles in liquids form around imperfections or incongruities. Microscopic scratches in a champagne glass are one place they form; dissolved pockets of carbon dioxide in beer are another. As a physicist, Glaser knew that bubbles are especially prone to form as liquids heat up and approach their boiling point (think of a pan of water on the stove). In fact, if you hold a liquid just below its boiling point, it will burst into bubbles if anything agitates it.
This was a good start but still basic physics. What made Glaser stand out were the next mental steps he took. Those rare kaons, muons, and pions appear only when an atom’s nucleus, its dense core, is splintered. In 1952, a device called a cloud chamber existed, in which a “gun” shot ultra-fast atomic torpedoes at cold gas atoms. Muons and kaons and so on sometimes appeared in the chamber after direct strikes, and the gas condensed into liquid drops along the particles’ track. But substituting a liquid for the gas made more sense, Glaser thought. Liquids are thousands of times denser than gases, so aiming the atomic gun at, say, liquid hydrogen would cause far more collisions. Plus, if liquid hydrogen was held a shade below its boiling point, even a little kick of energy from a ghostly particle would lather up the hydrogen like Glaser’s beer. Glaser also suspected he could photograph the bubble trails and then measure how different particles left different trails or spirals, depending on their size and charge…. By the time he swallowed the final bubble in his own glass, the story goes, Glaser had the whole thing worked out.
It’s a story of serendipity that scientists have long wanted to believe. But sadly, like most legends, it’s not entirely accurate. Glaser did invent the bubble chamber, but through careful experimentation in a lab, not on a pub napkin. Happily, though, the truth is even stranger than the legend. Glaser designed his bubble chamber to work as explained above, but with one modification.
Depending on their size and charge, different subatomic particles make different swirls and spirals as they blast through a bubble chamber. The tracks are actually finely spaced bubbles in a frigid bath of liquid hydrogen. (Courtesy of CERN)
For Lord knows what reason—perhaps lingering undergraduate fascination—this young man decided beer, not hydrogen, was the best liquid to shoot the atomic gun at. He really thought that beer would lead to an epochal breakthrough in subatomic science. You can almost imagine him smuggling Budweiser into the lab at night, perhaps splitting a six-pack between science and his stomach as he filled thimble-sized beakers with America’s finest, heated them almost to boiling, and bombarded them to produce the most exotic particles then known to physics.
Unfortunately for science, Glaser later said, the beer experiments flopped. Nor did lab partners appreciate the stink of vaporized ale. Undaunted, Glaser refined his experiments, and his colleague Luis Alvarez—of dinosaur-killing-asteroid fame—eventually determined the most sensible liquid to use was in fact hydrogen. Liquid hydrogen boils at −423°F, so even minute amounts of heat will make a froth. As the simplest element, hydrogen also avoided the messy complications that other elements (or beer) might cause when particles collided. Glaser’s revamped “bubble chamber” provided so many insights so quickly that in 1960 he appeared among the fifteen “Men of the Year” in Time magazine with Linus Pauling, William Shockley, and Emilio Segrè. He also won the Nobel Prize at the disgustingly young age of thirty-three. Having moved on to Berkeley by then, he borrowed Edwin McMillan and Segrè’s white vest for the ceremony.
Bubbles aren’t usually counted as an essential scientific tool. Despite—or maybe because of—their ubiquity in nature and the ease of producing them, they were dismissed as toys for centuries. But when physics emerged as the dominant science in the 1900s, physicists suddenly found a lot of work for these toys in probing the most basic structures in the universe. Now that biology is ascendant, biologists use bubbles to study the development of cells, the most complex structures in the universe. Bubbles have proved to be wonderful natural laboratories for experiments in all fields, and the recent history of science can be read in parallel with the study of these “spheres of splendor.”
One element that readily forms bubbles—as well as foam, a state where bubbles overlap and lose their spherical shape—is calcium. Cells are to tissues what bubbles are to foams, and the best example of a foam structure in the body (besides saliva) is spongy bone. We usually think of foams as no sturdier than shaving cream, but when certain air-infused substances dry out or cool down, they harden and stiffen, like durable versions of bath suds. NASA actually uses special foams to protect space shuttles on reentry, and calcium-enriched bones are similarly strong yet light. What’s more, sculptors for millennia have carved tombstones and obelisks and false gods from pliable yet sturdy calcium rocks such as marble and limestone. These rocks form when tiny sea creatures die and their calcium-rich shells sink and pile up on the ocean floor. Like bones, shells have natural pores, but calcium’s chemistry enhances their supple strength. Most natural water, such as rainwater, is slightly acidic, while calcium’s minerals are slightly basic. When water leaks into calcium’s pores, the two react like a mini grade-school volcano to release small amounts of carbon dioxide, which softens up the rock. On a large and geological scale, reactions between rainwater and calcium form the huge cavities we know as caves.
Beyond anatomy and art, calcium bubbles have shaped world economics and empires. The many calcium-rich coves along the southern coast of England aren’t natural, but originated as limestone quarries around 55 BC, when the limestone-loving Romans arrived. Scouts sent out by Julius Caesar spotted an attractive, cream-colored limestone near modern-day Beer, England, and began chipping it out to adorn Roman facades. English limestone from Beer later was used in building Buckingham Palace, the Tower of London, and Westminster Abbey, and all that missing stone left gaping caverns in the seaside cliffs. By 1800, a few local boys who’d grown up sailing ships and playing tag in the labyrinths decided to marry their childhood pastimes by becoming smugglers, using the calcium coves to conceal the French brandy, fiddles, tobacco, and silk they ran over from Normandy in fast cutters.
The smugglers (or, as they styled themselves, free traders) thrived because of the hateful taxes the English government levied on French goods to spite Napoleon, and the scarcity of the taxed items created, inevitably, a demand bubble. Among many other things, the inability of His Majesty’s expensive coast guard to crack down on smuggling convinced Parliament to liberalize trade laws in the 1840s—which brought about real free trade, and with it the economic prosperity that allowed Great Britain to expand its never-darkening empire.
Given all this history, you’d expect a long tradition of bubble science, but no. Notable minds like Benjamin Franklin (who discovered why oil calms frothy water) and Robert Boyle (who experimented on and even liked to taste the fresh, frothy urine in his chamber pot) did dabble in bubbles. And primitive physiologists sometimes did things such as bubbling gases into the blood of half-living, half-dissected dogs. But scientists mostly ignored bubbles themselves, their structure and form, and left the study of bubbles to fields that they scorned as intellectually inferior—what might be called “intuitive sciences.” Intuitive sciences aren’t pathological, merely fields such as horse breeding or gardening that investigate natural phenomena but that long relied more on hunches and almanacs than controlled experiments. The intuitive science that picked up bubbles research was cooking. Bakers and brewers had long used yeasts—primitive bubble-making machines—to leaven bread and carbonate beer. But eighteenth-century haute cuisine chefs in Europe learned to whip egg whites into vast, fluffy foams and began to experiment with the meringues, porous cheeses, whipped creams, and cappuccinos we love today.
Still, chefs and chemists tended to distrust one another, chemists seeing cooks as undisciplined and unscientific, cooks seeing chemists as sterile killjoys. Only around 1900 did bubble science coalesce into a respectable field, though the men responsible, Ernest Rutherford and Lord Kelvin, had only dim ideas of what their work would lead to. Rutherford, in fact, was mostly interested in plumbing what at the time were the murky depths of the periodic table.
Shortly after moving from New Zealand to Cambridge University in 1895, Rutherford devoted himself to radioactivity, the genetics or nanotechnology of the day. Natural vigorousness led Rutherford to experimental science, for he wasn’t exactly a clean-fingernails guy. Having grown up hunting quail and digging potatoes on a family farm, he recalled feeling like “an ass in lion’s skin” among the robed dons of Cambridge. He wore a walrus mustache, toted radioactive samples around in his pockets, and smoked foul cigars and pipes. He was given to blurting out weird euphemisms—perhaps his devout Christian wife discouraged him from swearing—and the bluest curses in the lab, because he couldn’t help damning his equipment to hell when it didn’t behave. Perhaps to make up for his cursing, he also sang, loudly and quite off-key, “Onward, Christian Soldiers” as he marched around his dim lab. Despite that ogre-like description, Rutherford’s outstanding scientific trait was elegance. Nobody was better, possibly in the history of science, at coaxing nature’s secrets out of physical apparatus. And there’s no better example than the elegance he used to solve the mystery of how one element can transform into another.
After moving from Cambridge to Montreal, Rutherford grew interested in how radioactive substances contaminate the air around them with more radioactivity. To investigate this, Rutherford built on the work of Marie Curie, but the New Zealand hick proved cagier than his more celebrated female contemporary. According to Curie (among others), radioactive elements leaked a sort of gas of “pure radioactivity” that charged the air, just as lightbulbs flood the air with light. Rutherford suspected that “pure radioactivity” was actually an unknown gaseous element with its own radioactive properties. As a result, whereas Curie spent months boiling down thousands of pounds of black, bubbling pitchblende to get microscopic samples of radium and polonium, Rutherford sensed a shortcut and let nature work for him. He simply left active samples beneath an inverted beaker to catch escaping bubbles of gas, then came back to find all the radioactive material he needed. Rutherford and his collaborator, Frederick Soddy, quickly proved the radioactive bubbles were in fact a new element, radon. And because the sample beneath the beaker shrank in proportion as the radon sample grew in volume, they realized that one element actually mutated into another.
Not only did Rutherford and Soddy find a new element, they discovered novel rules for jumping around on the periodic table. Elements could suddenly move laterally as they decayed and skip across spaces. This was thrilling but blasphemous. Science had finally discredited and excommunicated the chemical magicians who’d claimed to turn lead into gold, and here Rutherford and Soddy were opening the gate back up. When Soddy finally let himself believe what was happening and burst out, “Rutherford, this is transmutation!” Rutherford had a fit.
“For Mike’s sake, Soddy,” he boomed. “Don’t call it transmutation. They’ll have our heads off as alchemists!”
The radon sample soon midwifed even more startling science. Rutherford had arbitrarily named the little bits that flew off radioactive atoms alpha particles. (He also discovered beta particles.) Based on the weight differences between generations of decaying elements, Rutherford suspected that alphas were actually helium atoms breaking off and escaping like bubbles through a boiling liquid. If this was true, elements could do more than hop two spaces on the periodic table like pieces on a typical board game; if uranium emitted helium, elements were jumping from one side of the table to the other like a lucky (or disastrous) move in Snakes & Ladders.
To test this idea, Rutherford had his physics department’s glassblowers blow two bulbs. One was soap-bubble thin, and he pumped radon into it. The other was thicker and wider, and it surrounded the first. The alpha particles had enough energy to tunnel through the first glass shell but not the second, so they became trapped in the vacuum cavity between them. After a few days, this wasn’t much of an experiment, since the trapped alpha particles were colorless and didn’t really do anything. But then Rutherford ran a battery current through the cavity. If you’ve ever traveled to Tokyo or New York, you know what happened. Like all noble gases, helium glows when excited by electricity, and Rutherford’s mystery particles began glowing helium’s characteristic green and yellow. Rutherford basically proved that alpha particles were escaped helium atoms with an early “neon” light. It was a perfect example of his elegance, and also his belief in dramatic science.
With typical flair, Rutherford announced the alpha-helium connection during his acceptance speech for the 1908 Nobel Prize. (In addition to winning the prize himself, Rutherford mentored and hand-trained eleven future prizewinners, the last in 1978, more than four decades after Rutherford died. It was perhaps the most impressive feat of progeny since Genghis Khan fathered hundreds of children seven centuries earlier.) His findings intoxicated the Nobel audience. Nevertheless, the most immediate and practical application of Rutherford’s helium work probably escaped many in Stockholm. As a consummate experimentalist, however, Rutherford knew that truly great research didn’t just support or disprove a given theory, but fathered more experiments. In particular, the alpha-helium experiment allowed him to pick the scab off the old theological-scientific debate about the true age of the earth.
The first semi-defensible guess for that age came in 1650, when Irish archbishop James Ussher worked backward from “data” such as the begats list in the Bible (“... and Serug lived thirty years, and begat Nahor… and Nahor lived nine and twenty years, and begat Terah,” etc.) and calculated that God had finally gotten around to creating the earth on October 23, 4004 BC. Ussher did the best he could with the available evidence, but within decades that date was proved laughably late by most every scientific field. Physicists could even pin precise numbers on their guesses by using the equations of thermodynamics. Just as hot coffee cools down in a freezer, physicists knew that the earth constantly loses heat to space, which is cold. By measuring the rate of lost heat and extrapolating backward to when every rock on earth was molten, they could estimate the earth’s date of origin. The premier scientist of the nineteenth century, William Thomson, known as Lord Kelvin, spent decades on this problem and in the late 1800s announced that the earth had been born twenty million years before.
It was a triumph of human reasoning—and about as dead wrong as Ussher’s guess. By 1900, Rutherford among others recognized that however far physics had outpaced other sciences in prestige and glamour (Rutherford himself was fond of saying, “In science, there is only physics; all the rest is stamp collecting”—words he later had to eat when he won a Nobel Prize in Chemistry), in this case the physics didn’t feel right. Charles Darwin argued persuasively that humans could not have evolved from dumb bacteria in just twenty million years, and followers of Scottish geologist James Hutton argued that no mountains or canyons could have formed in so short a span. But no one could unravel Lord Kelvin’s formidable calculations until Rutherford started poking around in uranium rocks for bubbles of helium.
Inside certain rocks, uranium atoms spit out alpha particles (which have two protons) and transmute into element ninety, thorium. Thorium then begets radium by spitting out another alpha particle. Radium begets radon with yet another, and radon begets polonium, and polonium begets stable lead. This was a well-known deterioration. But in a stroke of genius akin to Glaser’s, Rutherford realized that those alpha particles, after being ejected, form small bubbles of helium inside rocks. The key insight was that helium never reacts with or is attracted to other elements. So unlike carbon dioxide in limestone, helium shouldn’t normally be inside rocks. Any helium that is inside rocks was therefore fathered by radioactive decay. Lots of helium inside a rock means that it’s old, while scant traces indicate it’s a youngster.
Rutherford had thought about this process for a few years by 1904, when he was thirty-three and Kelvin was eighty. By that age, despite all that Kelvin had contributed to science, his mind had fogged. Gone were the days when he could put forward exciting new theories, like the one that all the elements on the periodic table were, at their deepest levels, twisted “knots of ether” of different shapes. Most detrimentally to his science, Kelvin never could incorporate the unsettling, even frightening science of radioactivity into his worldview. (That’s why Marie Curie once pulled him, too, into a closet to look at her glow-in-the-dark element—to instruct him.) In contrast, Rutherford realized that radioactivity in the earth’s crust would generate extra heat, which would bollix the old man’s theories about a simple heat loss into space.
Excited to present his ideas, Rutherford arranged a lecture in Cambridge. But however dotty Kelvin got, he was still a force in scientific politics, and demolishing the old man’s proudest calculation could in turn jeopardize Rutherford’s career. Rutherford began the speech warily, but luckily, just after he started, Kelvin nodded off in the front row. Rutherford raced to get to his conclusions, but just as he began knocking the knees out from under Kelvin’s work, the old man sat up, refreshed and bright.
Trapped onstage, Rutherford suddenly remembered a throwaway line he’d read in Kelvin’s work. It said, in typically couched scientific language, that Kelvin’s calculations about the earth’s age were correct unless someone discovered extra sources of heat inside the earth. Rutherford mentioned that qualification, pointed out that radioactivity might be that latent source, and with masterly spin ad-libbed that Kelvin had therefore predicted the discovery of radioactivity dozens of years earlier. What genius! The old man glanced around the audience, radiant. He thought that Rutherford was full of crap, but he wasn’t about to disregard the compliment.