The Disappearing Spoon: And Other True Tales of Madness, Love, and the History of the World from the Periodic Table of the Elements


by Sam Kean


  Rutherford lay low until Kelvin died, in 1907, then he soon proved the helium-uranium connection. And with no politics stopping him now—in fact, he became an eminent peer himself (and later ended up as scientific royalty, too, with a box on the periodic table, element 104, rutherfordium)—the eventual Lord Rutherford got some primordial uranium rock, eluted the helium from microscopic bubbles inside, and determined that the earth was at least 500 million years old—twenty-five times greater than Kelvin’s guess and the first calculation correct to within a factor of ten. Within years, geologists with more experience finessing rocks took over for Rutherford and determined that the helium pockets proved the earth to be at least two billion years old. This number was still 50 percent too low, but thanks to the tiny, inert bubbles inside radioactive rocks, human beings at last began to face the astounding age of the cosmos.

  After Rutherford, digging for small bubbles of elements inside rocks became standard work in geology. One especially fruitful approach uses zircon, a mineral that contains zirconium, the pawnshop heartbreaker and knockoff jewelry substitute.

  For chemical reasons, zircons are hardy—zirconium sits below titanium on the periodic table and makes convincing fake diamonds for a reason. Unlike soft rocks such as limestone, many zircons have survived since the early years of the earth, often as hard, poppy-seed grains inside larger rocks. Due to their unique chemistry, when zircon crystals formed way back when, they vacuumed up stray uranium and packed it into atomic bubbles inside themselves. At the same time, zircons had a distaste for lead and squeezed that element out (the opposite of what meteors do). Of course, that didn’t last long, since uranium decays into lead, but the zircons had trouble working the lead slivers out again. As a result, any lead inside lead-phobic zircons nowadays has to be a daughter product of uranium. The story should be familiar by now: after measuring the ratio of lead to uranium in zircons, it’s just a matter of graphing backward to year zero. Anytime you hear scientists announcing a record for the “world’s oldest rock”—probably in Australia or Greenland, where zircons have survived the longest—rest assured they used zircon-uranium bubbles to date it.
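The "graphing backward to year zero" the passage describes reduces to one line of algebra: if every lead atom in a zircon began as uranium, then the lead-to-uranium ratio fixes the elapsed time. Here is a minimal Python sketch of that arithmetic; the single-isotope simplification is mine (real labs cross-check the U-238/Pb-206 clock against the U-235/Pb-207 one), not the book's.

```python
import math

# Minimal uranium-lead "clock" sketch, assuming (as the passage argues)
# that every lead atom in the zircon is a daughter product of uranium.
# Uses only the U-238 decay chain for simplicity.
HALF_LIFE_U238_GYR = 4.468                      # billions of years
DECAY_CONST = math.log(2) / HALF_LIFE_U238_GYR  # per billion years

def zircon_age_gyr(pb_to_u_ratio):
    """Age in billions of years from a measured Pb-206/U-238 atom ratio.

    Decay gives U(t) = U0 * exp(-lambda * t); every decayed atom ends up
    as lead, so Pb/U = exp(lambda * t) - 1, and therefore
    t = ln(1 + Pb/U) / lambda.
    """
    return math.log(1.0 + pb_to_u_ratio) / DECAY_CONST

# A zircon holding equal numbers of lead and uranium atoms is exactly
# one half-life old:
print(round(zircon_age_gyr(1.0), 3))  # 4.468
```

The logarithm is why the method stays usable across billions of years: the ratio grows without bound, but the inferred age grows only gently with it.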

  Other fields adopted bubbles as a paradigm, too. Glaser began experimenting with his bubble chamber in the 1950s, and around that same time, theoretical physicists such as John Archibald Wheeler began speaking of the universe as foam on its fundamental level. On that scale, billions of trillions of times smaller than atoms, Wheeler dreamed that “the glassy smooth spacetime of the atomic and particle worlds gives way…. There would literally be no left and right, no before and after. Ordinary ideas of length would disappear. Ordinary ideas of time would evaporate. I can think of no better name than quantum foam for this state of affairs.” Some cosmologists today calculate that our entire universe burst into existence when a single submicronanobubble slipped free from that foam and began expanding at an exponential rate. It’s a handsome theory, actually, and explains a lot—except, unfortunately, why this might have happened.

  Ironically, Wheeler’s quantum foam traces its intellectual lineage to the ultimate physicist of the classical, everyday world, Lord Kelvin. Kelvin didn’t invent froth science—that was a blind Belgian with the fitting name (considering how little influence his work had) of Joseph Plateau. But Kelvin did popularize the science by saying things like he could spend a lifetime scrutinizing a single soap bubble. That was actually disingenuous, since according to his lab notebooks, Kelvin formulated the outline of his bubble work one lazy morning in bed, and he produced just one short paper on it. Still, there are wonderful stories of this white-bearded Victorian splashing around in basins of water and glycerin, with what looked like a miniature box spring on a ladle, to make colonies of interlocking bubbles. And squarish bubbles at that, reminiscent of the Peanuts character Rerun, since the box spring’s coils were shaped into rectangular prisms.

  Plus, Kelvin’s work gathered momentum and inspired real science in future generations. Biologist D’Arcy Wentworth Thompson applied Kelvin’s theorems on bubble formation to cell development in his seminal 1917 book On Growth and Form, a book once called “the finest work of literature in all the annals of science that have been recorded in the English tongue.” The modern field of cell biology began at this point. What’s more, recent biochemical research hints that bubbles were the efficient cause of life itself. The first complex organic molecules may have formed not in the turbulent ocean, as is commonly thought, but in water bubbles trapped in Arctic-like sheets of ice. Water is quite heavy, and when water freezes, it crushes together dissolved “impurities,” such as organic molecules, inside bubbles. The concentration and compression in those bubbles might have been high enough to fuse those molecules into self-replicating systems. Furthermore, recognizing a good trick, nature has plagiarized the bubble blueprint ever since. Regardless of where the first organic molecules formed, in ice or ocean, the first crude cells were certainly bubble-like structures that surrounded proteins or RNA or DNA and protected them from being washed away or eroded. Even today, four billion years later, cells still have a basic bubble design.

  Kelvin’s work also inspired military science. During World War I, another lord, Lord Rayleigh, took on the urgent wartime problem of why submarine propellers were so prone to disintegrate and decay, even when the rest of the hull remained intact. It turned out that bubbles produced by the churning propellers turned around and attacked the metal blades like sugar attacks teeth, and with similarly corrosive results. Submarine science led to another breakthrough in bubble research as well—though at the time this finding seemed unpromising, even dodgy. Thanks to the memory of German U-boats, studying sonar—sound waves moving in water—was as trendy in the 1930s as radioactivity had been before. At least two research teams discovered that if they rocked a tank with jet engine–level noise, the bubbles that appeared would sometimes collapse and wink at them with a flash of blue or green light. (Think of biting wintergreen Life Savers in a dark closet.) More interested in blowing up submarines, scientists didn’t pursue so-called sonoluminescence, but for fifty years it hung on as a scientific parlor trick, passed down from generation to generation.

  It might have remained just that if not for a colleague taunting Seth Putterman one day in the mid-1980s. Putterman worked at the University of California at Los Angeles in fluid dynamics, a fiendishly tricky field. In some sense, scientists know more about distant galaxies than about turbulent water gushing through sewer pipes. The colleague was teasing Putterman about this ignorance, when he mentioned that Putterman’s ilk couldn’t even explain how sound waves can transmute bubbles into light. Putterman thought that sounded like an urban legend. But after looking up the scant research that existed on sonoluminescence, he chucked his previous work to study blinking bubbles full-time.*

  For Putterman’s first, delightfully low-tech experiments, he set a beaker of water between two stereo speakers, which were cranked to dog-whistle frequencies. A heated toaster wire in the beaker kicked up bubbles, and sound waves trapped and levitated them in the water. Then came the fun part. Sound waves vary between barren, low-intensity troughs and crests of high intensity. The tiny, trapped bubbles responded to low pressure by swelling a thousand times, like a balloon filling a room. After the sound wave bottomed out, the high-pressure front tore in and crushed the bubble’s volume by half a million times, at forces 100 billion times greater than gravity. Not surprisingly, it’s that supernova crush that produces the eerie light. Most amazingly, despite being squished into a “singularity,” a term rarely used outside the study of black holes, the bubble stays intact. After the pressure lifts, the bubble billows out again, unpopped, as if nothing had happened. It’s then squished again and blinks again, with the process repeating thousands of times every second.

  Putterman soon bought more sophisticated equipment than his original garage-band setup, and upon doing so, he had a run-in with the periodic table. To help determine what exactly caused the bubbles to sparkle, he began trying different gases. He found that although bubbles of plain air produced nice crackles of blue and green, pure nitrogen or oxygen, which together make up 99 percent of air, wouldn’t luminesce, no matter what volume or shrillness he cranked the sound to. Perturbed, Putterman began pumping trace gases from air back into the bubbles until he found the elemental flint—argon.

  That was odd, since argon is an inert gas. What’s more, the only other gases Putterman (and a growing cadre of bubble scientists) could get to work were argon’s heavier chemical cousins, krypton and especially xenon. In fact, when rocked with sonar, xenon and krypton flared up even brighter than argon, producing “stars in a jar” that sizzled at 35,000°F inside water—far hotter than the surface of the sun. Again, this was baffling. Xenon and krypton are often used in industry to smother fires or runaway reactions, and there was no reason to think those dull, inert gases could produce such intense bubbles.

  Unless, that is, their inertness is a covert asset. Oxygen, carbon dioxide, and other atmospheric gases inside bubbles can use the incoming sonar energy to divide or react with one another. From the point of view of sonoluminescence, that’s energy squandered. Some scientists, though, think that inert gases under high pressure cannot help but soak up sonar energy. And with no way to dissipate the energy, bubbles of xenon or krypton collapse and have no choice but to propagate and concentrate energy in the bubbles’ cores. If that’s the case, then the noble gases’ nonreactivity is the key to sonoluminescence. Whatever the reason, the link to sonoluminescence will rewrite what it means to be an inert gas.

  Unfortunately, tempted by harnessing that high energy, some scientists (including Putterman) have linked this fragile bubble science with desktop fusion, a cousin of that all-time favorite pathological science. (Because of the temperatures involved, it’s not cold fusion.) There has long been a vague, free-association link between bubbles and fusion, partly because Boris Deryagin, an influential Soviet scientist who studied the stability of foams, believed strongly in cold fusion. (Once, in an inconceivable experiment, the antithesis of one of Rutherford’s, Deryagin supposedly tried to induce cold fusion in water by firing a Kalashnikov rifle into it.)

  The dubious link between sonoluminescence and fusion (sonofusion) was made explicit in 2002 when the journal Science ran a radioactively controversial paper on sonoluminescence-driven nuclear power. Unusually, Science also ran an editorial admitting that many senior scientists thought the paper flawed if not fraudulent; even Putterman recommended that the journal reject this one. Science printed it anyway (perhaps so everyone would have to buy a copy to see what all the fuss was about). The paper’s lead author was later hauled before the U.S. House of Representatives for faking data.

  Thankfully, bubble science had a strong enough foundation* to survive that disgrace. Physicists interested in alternative energy now model superconductors with bubbles. Pathologists describe AIDS as a “foamy” virus, for the way infected cells swell before exploding. Entomologists know of insects that use bubbles like submersibles to breathe underwater, and ornithologists know that the metallic sheen of peacocks’ plumage comes from light tickling bubbles in the feathers. Most important, in 2008, in food science, students at Appalachian State University finally determined what makes Diet Coke explode when you drop Mentos into it. Bubbles. The grainy surface of Mentos candy acts as a net to snag small dissolved bubbles, which are stitched into large ones. Eventually, a few gigantic bubbles break off, rocket upward, and whoosh through the nozzle, spurting up to twenty magnificent feet. This discovery was undoubtedly the greatest moment in bubble science since Donald Glaser eyed his lager more than fifty years before and dreamed of subverting the periodic table.

  18

  Tools of Ridiculous Precision

  Think of the most fussy science teacher you ever had. The one who docked your grade if the sixth decimal place in your answer was rounded incorrectly; who tucked in his periodic table T-shirt, corrected every student who said “weight” when he or she meant “mass,” and made everyone, including himself, wear goggles even while mixing sugar water. Now try to imagine someone whom your teacher would hate for being anal-retentive. That is the kind of person who works for a bureau of standards and measurement.

  Most countries have a standards bureau, whose job it is to measure everything—from how long a second really is to how much mercury you can safely consume in bovine livers (very little, according to the U.S. National Institute of Standards and Technology, or NIST). To scientists who work at standards bureaus, measurement isn’t just a practice that makes science possible; it’s a science in itself. Progress in any number of fields, from post-Einsteinian cosmology to the astrobiological hunt for life on other planets, depends on our ability to make ever finer measurements based on ever smaller scraps of information.

  For historical reasons (the French Enlightenment folk were fanatic measurers), the Bureau International des Poids et Mesures (BIPM) just outside Paris acts as the standards bureau’s standards bureau, making sure all the “franchises” stay in line. One of the more peculiar jobs of the BIPM is coddling the International Prototype Kilogram—the world’s official kilogram. It’s a two-inch-wide, 90 percent platinum cylinder that, by definition, has a mass of exactly 1.000000… kilogram (to as many decimal places as you like). I’d say that’s about two pounds, but I’d feel guilty about being inexact.

  The two-inch-wide International Prototype Kilogram (center), made of platinum and iridium, spends all day every day beneath three nested bell jars inside a humidity- and temperature-controlled vault in Paris. Surrounding the Kilogram are six official copies, each under two bell jars. (Reproduced with permission of BIPM, which retains full international protected copyright)

  Because the Kilogram is a physical object and therefore damageable, and because the definition of a kilogram ought to stay constant, the BIPM must make sure it never gets scratched, never attracts a speck of dust, never loses (the bureau hopes!) a single atom. For if any of that happened, its mass could spike to 1.000000… 1 kilograms or plummet to 0.999999… 9 kilograms, and the mere possibility induces ulcers in a national bureau of standards type. So, like phobic mothers, they constantly monitor the Kilogram’s temperature and the pressure around it to prevent microscopic bloating and contracting, stress that could slough off atoms. It’s also swaddled within three successively smaller bell jars to prevent humidity from condensing on the surface and leaving a nanoscale film. And the Kilogram is made from dense platinum (and iridium) to minimize the surface area exposed to unacceptably dirty air, the kind we breathe. Platinum also conducts electricity well, which cuts down on the buildup of “parasitic” static electricity (the BIPM’s word) that might zap stray atoms.

  Finally, platinum’s toughness militates against the chance of a disastrous fingernail nick on the rare occasions when people actually lay a hand on the Kilogram. Other countries need their own official 1.000000… cylinder to avoid having to fly to Paris every time they want to measure something precisely, and since the Kilogram is the standard, each country’s knockoff has to be compared against it. The United States has had its official kilogram, called K20 (i.e., the twentieth official copy), which resides in a government building in exurban Maryland, calibrated just once since 2000, and it’s due for another calibration, says Zeina Jabbour, group leader for the NIST mass and force team. But calibration is a multimonth process, and security regulations since 2001 have made flying K20 to Paris an absolute hassle. “We have to hand-carry the kilograms through the flight,” says Jabbour, “and it’s hard to get through security and customs with a slug of metal, and tell people they cannot touch it.” Even opening K20’s customized suitcase in a “dusty airport” could compromise it, she says, “and if somebody insists on touching it, that’s the end of the calibration.”

  Usually, the BIPM uses one of six official copies of the Kilogram (each kept under two bell jars) to calibrate the knockoffs. But the official copies have to be measured against their own standard, so every few years scientists remove the Kilogram from its vault (using tongs and wearing latex gloves, of course, so as not to leave fingerprints—but not the powdery kind of gloves, because that would leave a residue—oh, and not holding it for too long, because the person’s body temperature could heat it up and ruin everything) and calibrate the calibrators.* Alarmingly, scientists noticed during calibrations in the 1990s that, even accounting for atoms that rub off when people touch it, in the past few decades the Kilogram had lost an additional mass equal to that of a fingerprint(!), half a microgram per year. No one knows why.
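The half-a-microgram-a-year figure sounds negligible until it is expressed as a fraction of the unit it defines. A quick back-of-the-envelope, sketched in Python (the hundred-year span is my own illustrative choice, not a figure from the text):

```python
# Scale of the drift the passage reports: roughly half a microgram per
# year lost from an artifact whose mass is, by definition, exactly 1 kg.
LOSS_PER_YEAR_KG = 0.5e-9  # 0.5 micrograms, expressed in kilograms

def fractional_drift(years):
    """Fraction of the Kilogram's defined mass lost after the given span."""
    return LOSS_PER_YEAR_KG * years  # the artifact masses exactly 1 kg

# Over a century, that comes to about 50 parts per billion of the
# definition of the kilogram itself:
print(fractional_drift(100))
```

Fifty parts per billion is invisible on any kitchen scale, but when the object in question *is* the kilogram, there is no external standard to absorb the error, which is exactly the worry the next paragraphs describe.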

  The failure—and it is that—to keep the Kilogram perfectly constant has renewed discussions about the ultimate dream of every scientist who obsesses over the cylinder: to make it obsolete. Science owes much of its progress since about 1600 to adopting, whenever possible, an objective, non-human-centered point of view about the universe. (This is called the Copernican principle, or less flatteringly the mediocrity principle.) The kilogram is one of seven “base units” of measurement that permeate all branches of science, and it’s no longer acceptable for any of those units to be based on a human artifact, especially if it’s mysteriously shrinking.

  The goal with every unit, as England’s bureau of national standards cheekily puts it, is for one scientist to e-mail its definition to a colleague on another continent and for the colleague to be able to reproduce something with exactly those dimensions, based only on the description in the e-mail. You can’t e-mail the Kilogram, and no one has ever come up with a definition more reliable than that squat, shiny, pampered cylinder in Paris. (Or if they have, it’s either too impossibly involved to be practical—such as counting trillions of trillions of atoms—or requires measurements too precise for even the best instruments today.) The inability to solve the kilogram conundrum, to either stop it from shrinking or superannuate it, has become an increasing source of international worry and embarrassment (at least for us anal types).

 
