One of the most provocative and wondrous ways h appears in nature arises from the so-called uncertainty principle, first articulated in 1927 by the German physicist Werner Heisenberg. The uncertainty principle sets forth the terms of an inescapable cosmic trade-off: for various related pairs of fundamental, variable physical attributes—location and speed, energy and time—it is impossible to measure both quantities exactly. In other words, if you reduce the indeterminacy for one member of the pair (location, for instance), you’re going to have to settle for a looser approximation of its partner (speed). And it’s h that sets the limit on the precision you can attain. The trade-offs don’t have much practical effect when you’re measuring things in ordinary life. But when you get down to atomic dimensions, h rears its profound little head all around you.
IT MAY SOUND more than a bit contradictory, or even perverse, but in recent decades scientists have been looking for evidence that constants don’t hold for all eternity. In 1938 the English physicist Paul A. M. Dirac proposed that the value of no less a constant than Newton’s G might decrease in proportion to the age of the universe. Today there’s practically a cottage industry of physicists desperately seeking fickle constants. Some are looking for a change across time; others, for the effects of a change in location; still others are exploring how the equations operate in previously untested domains. Sooner or later, they’re going to get some real results. So stay tuned: news of inconstancy may lie ahead.
TWELVE
SPEED LIMITS
A few things in life, including the space shuttle and Superman, travel faster than a speeding bullet. But nothing moves faster than the speed of light in a vacuum. Nothing. Yet as fast as light moves, its speed is decidedly not infinite. Because light has a speed, astrophysicists know that looking out in space is the same as looking back in time. And with a good estimate for the speed of light, we can come close to a reasonable estimate for the age of the universe.
These concepts are not exclusively cosmic. True, when you flick on a wall switch, you don’t have to wait around for the light to reach the floor. Some morning while you’re eating breakfast and you need something new to think about, though, you might want to ponder the fact that you see your kids across the table not as they are but as they once were, about three nanoseconds ago. Doesn’t sound like much, but stick the kids in the nearby Andromeda galaxy, and by the time you see them spoon their Cheerios they will have aged more than 2 million years.
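If you'd like to check that breakfast-table arithmetic yourself, here is a minimal sketch in Python; the three-foot distance to the kids is an assumed figure, not one from the text:

```python
# A quick sketch (distances assumed, not from the book) of light travel times.
c_miles_per_second = 186_282.0

# Across a breakfast table, say 3 feet away:
table_miles = 3 / 5280
print(f"{table_miles / c_miles_per_second * 1e9:.0f} nanoseconds")  # ~3 ns

# Andromeda lies roughly 2.5 million light-years away, so by definition
# its light takes roughly 2.5 million years to reach us -- "more than 2 million."
```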
Minus its decimal places, the speed of light through the vacuum of space, in Americanized units, is 186,282 miles per second—a quantity that took centuries of hard work to measure with such high precision. Long before the methods and tools of science reached maturity, however, deep thinkers had thought about the nature of light: Is light a property of the perceiving eye or an emanation from an object? Is it a bundle of particles or a wave? Does it travel or simply appear? If it travels, how fast and how far?
IN THE MID-FIFTH century B.C. a forward-thinking Greek philosopher, poet, and scientist named Empedocles of Acragas wondered if light might travel at a measurable speed. But the world had to wait for Galileo, a champion of the empirical approach to the acquisition of knowledge, to illuminate the question through experiment.
He describes the steps in his book Dialogues Concerning Two New Sciences, published in 1638. In the dark of night, two people, each holding a lantern whose light can be rapidly covered and uncovered, stand far apart from each other, but in full view. The first person briefly flashes his lantern. The instant the second person sees the light, he flashes his own lantern. Having done the experiment just once, at a distance of less than a mile, Galileo writes:
I have not been able to ascertain with certainty whether the appearance of the opposite light was instantaneous or not; but if not instantaneous it is extraordinarily rapid—I should call it momentary. (p. 43)
Fact is, Galileo’s reasoning was sound, but he stood much too close to his assistant to time the passage of a light beam, particularly with the imprecise clocks of his day.
A few decades later the Danish astronomer Ole Rømer diminished the speculation by observing the orbit of Io, the innermost moon of Jupiter. Ever since January 1610, when Galileo and his brand-new telescope first caught sight of Jupiter’s four brightest and largest satellites, astronomers had been tracking the Jovian moons as they circled their huge host planet. Years of observations had shown that, for Io, the average duration of one orbit—an easily timed interval from the moon’s disappearance behind Jupiter, through its reemergence, to the beginning of its next disappearance—was just about 42.5 hours. What Rømer discovered was that when Earth was closest to Jupiter, Io disappeared about 11 minutes earlier than expected, and when Earth was farthest from Jupiter, Io disappeared about 11 minutes later.
Rømer reasoned that Io’s orbital behavior was not likely to be influenced by the position of Earth relative to Jupiter, and so surely the speed of light was to blame for any unexpected variations. The 22-minute range must correspond to the time needed for light to travel across the diameter of Earth’s orbit. From that assumption, Rømer derived a speed of light of about 130,000 miles a second. That’s within 30 percent of the correct answer—not bad for a first-ever estimate, and a good deal more accurate than Galileo’s “If not instantaneous….”
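To see how the estimate falls out of the numbers, here is a rough sketch, assuming the 22-minute range quoted above and a modern value of about 93 million miles for the Earth–Sun distance (a figure Rømer did not have; his smaller estimate of Earth's orbit is largely why his answer came out near 130,000 miles a second):

```python
# A rough sketch of Romer's reasoning, with an assumed modern value
# for the diameter of Earth's orbit (Romer's own figure was smaller).
orbit_diameter_miles = 2 * 93_000_000   # Earth-Sun distance ~ 93 million miles
lag_seconds = 22 * 60                   # Io early vs. late: the 22-minute range

speed = orbit_diameter_miles / lag_seconds
print(f"{speed:,.0f} miles per second")  # ~141,000 with these inputs
```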
James Bradley, the third Astronomer Royal of Great Britain, laid to rest nearly all remaining doubts that the speed of light was finite. In 1725 Bradley systematically observed the star Gamma Draconis and noticed a seasonal shift in the star’s position on the sky. It took him three years to figure it out, but he eventually credited the shift to the combination of Earth’s continuous orbital movement and the finite speed of light. Thus did Bradley discover what is known as the aberration of starlight.
Imagine an analogy: It’s a rainy day, and you’re sitting inside a car stuck in dense traffic. You’re bored, and so (of course) you hold a big test tube out the window to collect raindrops. If there’s no wind, the rain falls vertically; to collect as much water as possible, you hold the test tube in a vertical position. The raindrops enter at the top and fall straight to the bottom.
Finally the traffic clears, and your car hits the speed limit again. You know from experience that the vertically falling rain will now leave diagonal streaks on the car’s side windows. To capture the raindrops efficiently, you must now tip the test tube to the angle that matches the rain streaks on the windows. The faster the car moves, the larger the angle.
In this analogy, the moving Earth is the moving car, the telescope is the test tube, and incoming starlight, because it does not move instantaneously, can be likened to the falling rain. So to catch the light of a star, you’ll have to adjust the angle of the telescope—aim it at a point that’s slightly different from the actual position of the star on the sky. Bradley’s observation may seem a bit esoteric, but he was the first to confirm—through direct measurement rather than by inference—two major astronomical ideas: that light has a finite speed and that Earth is in orbit around the Sun. He also improved on the accuracy of light’s measured speed, giving 187,000 miles per second.
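For a sense of scale, here is a back-of-the-envelope sketch of the tilt angle, assuming Earth's orbital speed of roughly 18.5 miles per second (a number not quoted in the chapter); the result, about 20 arcseconds, is roughly the size of the seasonal shift Bradley saw:

```python
# A back-of-the-envelope sketch of Bradley's aberration angle, assuming
# Earth's orbital speed of ~18.5 miles per second (not from the book).
import math

v_earth = 18.5          # Earth's orbital speed, miles per second (assumed)
c = 186_282.0           # speed of light, miles per second

angle_rad = math.atan(v_earth / c)            # the tilt of the "test tube"
angle_arcsec = math.degrees(angle_rad) * 3600
print(f"{angle_arcsec:.1f} arcseconds")       # ~20.5"
```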
BY THE LATE nineteenth century, physicists were keenly aware that light—just like sound—propagates in waves, and they presumed that if traveling sound waves need a medium (such as air) in which to vibrate, then light waves need a medium too. How else could a wave move through the vacuum of space? This mystical medium was named the “luminiferous ether,” and the physicist Albert A. Michelson, working with chemist Edward W. Morley, took on the task of detecting it.
Earlier, Michelson had invented an apparatus known as an interferometer. One version of this device splits a beam of light and sends the two parts off at right angles. Each part bounces off a mirror and returns to the beam splitter, which recombines the two beams for analysis. The precision of the interferometer enables the experimenter to make extremely fine measurements of any differences in the speeds of the two light beams: the perfect device for detecting the ether. Michelson and Morley thought that if they aligned one beam with the direction of Earth’s motion and made the other transverse to it, the first beam’s speed would combine with Earth’s motion through the ether, while the second beam’s speed would remain unaffected.
Turns out, M & M got a null result. Going in two different directions made no difference to the speed of either light beam; they returned to the beam splitter at exactly the same time. Earth’s motion through the ether simply had no effect on the measured speed of light. Embarrassing. If the ether was supposed to enable the transmission of light, yet it couldn’t be detected, maybe the ether didn’t exist at all. Light turned out to be self-propagating: neither medium nor magic was needed to move a beam from one position to another in the vacuum. Thus, with a swiftness approaching the speed of light itself, the luminiferous ether entered the graveyard of discredited scientific ideas.
And thanks to his ingenuity, Michelson also further refined the value for the speed of light, to 186,400 miles per second.
BEGINNING IN 1905, investigations into the behavior of light got positively spooky. That year, Einstein published his special theory of relativity, in which he ratcheted up M & M’s null result to an audacious level. The speed of light in empty space, he declared, is a universal constant, no matter the speed of the light-emitting source or the speed of the person doing the measuring.
What if Einstein is right? For one thing, if you’re in a spacecraft traveling at half the speed of light and you shine a light beam straight ahead of the spacecraft, you and I and everybody else in the universe who measures the beam’s speed will find it to be 186,282 miles per second. Not only that, even if you shine the light out the back, top, or sides of your spacecraft, we will all continue to measure the same speed.
Odd.
Common sense says that if you fire a bullet straight ahead from the front of a moving train, the bullet’s ground speed is the speed of the bullet plus the speed of the train. And if you fire the bullet straight backward from the back of the train, the bullet’s ground speed will be its own minus that of the train. All that is true for bullets, but not, according to Einstein, for light.
Einstein was right, of course, and the implications are staggering. If everyone, everywhere and at all times, is to measure the same speed for the beam from your imaginary spacecraft, a number of things have to happen. First of all, as the speed of your spacecraft increases, the length of everything—you, your measuring devices, your spacecraft—shortens in the direction of motion, as seen by everyone else. Furthermore, your own time slows down exactly enough so that when you haul out your newly shortened yardstick, you are guaranteed to be duped into measuring the same old constant value for the speed of light. What we have here is a cosmic conspiracy of the highest order.
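The amount of shrinking and slowing follows a simple rule. Here is a minimal sketch, assuming the standard Lorentz factor from special relativity (the chapter itself never spells the formula out), evaluated for that spacecraft at half the speed of light:

```python
# A minimal sketch (not from the book) of the Lorentz factor from
# special relativity, evaluated at half the speed of light.
import math

def gamma(v_over_c: float) -> float:
    """Lorentz factor: 1 / sqrt(1 - (v/c)^2)."""
    return 1.0 / math.sqrt(1.0 - v_over_c**2)

g = gamma(0.5)  # the spacecraft at half the speed of light
print(f"gamma = {g:.3f}")                                      # 1.155
print(f"yardsticks shrink to {1/g:.1%} of rest length")        # 86.6%
print(f"onboard clocks tick at {1/g:.1%} of the outside rate") # 86.6%
```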
IMPROVED METHODS OF measuring soon added decimal place upon decimal place to the speed of light. Indeed, physicists got so good at the game that they eventually dealt themselves out of it.
Units of speed always combine units of length and time—50 miles per hour, for instance, or 800 meters per second. When Einstein began his work on special relativity, the definition of the second was coming along nicely, but definitions of the meter were completely clunky. As of 1791, the meter was defined as one ten-millionth the distance from the North Pole to the equator along the line of longitude that passes through Paris. After subsequent efforts to make that definition workable, in 1889 the meter was redefined as the length of a prototype bar made of platinum-iridium alloy, stored at the International Bureau of Weights and Measures in Sèvres, France, and measured at the temperature at which ice melts. In 1960, the basis for defining the meter shifted again, and the exactitude increased further: 1,650,763.73 wavelengths, in a vacuum, of light emitted by the unperturbed atomic energy-level transition 2p10 to 5d5 of the krypton-86 isotope. Obvious, when you think about it.
Eventually it became clear to all concerned that the speed of light could be measured far more precisely than could the length of the meter. So in 1983 the General Conference on Weights and Measures decided to define—not measure, but define—the speed of light at the latest, best value: 299,792,458 meters per second. In other words, the definition of the meter was now forced into units of the speed of light, turning the meter into exactly 1/299,792,458 of the distance light travels in one second in a vacuum. And so tomorrow, anyone who measures the speed of light even more precisely than the 1983 value will be adjusting the length of the meter, not the speed of light itself.
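As a quick consistency check, assuming the standard conversion of 1,609.344 meters to the mile (not given in the chapter), the defined value of c reproduces the “Americanized” figure quoted at the start of the chapter:

```python
# A minimal check (conversion factor assumed, not from the book) that the
# defined value of c matches the figure quoted earlier in the chapter.
c_meters_per_second = 299_792_458   # exact, by the 1983 definition
meters_per_mile = 1609.344          # international mile

print(f"{c_meters_per_second / meters_per_mile:,.0f} miles per second")
# -> 186,282 miles per second, "minus its decimal places"
```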
Don’t worry, though. Any refinements in the speed of light will be too small to show up in your school ruler. If you’re an average European guy, you’ll still be slightly less than 1.8 meters tall. And if you’re an American, you’ll still be getting the same bad gas mileage in your SUV.
THE SPEED OF LIGHT may be astrophysically sacred, but it’s not immutable. In all transparent substances—air, water, glass, and especially diamonds—light travels more slowly than it does in a vacuum.
But the speed of light in a vacuum is a constant, and for a quantity to be truly constant it must remain unchanged, regardless of how, when, where, or why it is measured. The light-speed police take nothing for granted, though, and in the past several years they have sought evidence of change in the 13.7 billion years since the big bang. In particular, they’ve been measuring the so-called fine-structure constant, which is a combination of the speed of light in a vacuum and several other physical constants, including Planck’s constant, pi, and the charge of an electron.
This derived constant is a measure of the small shifts in the energy levels of atoms, which affect the spectra of stars and galaxies. Since the universe is a giant time machine, in which one can see the distant past by looking at distant objects, any change in the value of the fine-structure constant with time would reveal itself in observations of the cosmos. For cogent reasons, physicists don’t expect Planck’s constant or the charge of an electron to vary, and pi will certainly keep its value—which leaves only the speed of light to blame if discrepancies arise.
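For the curious, here is a sketch of that combination, assuming the standard SI formula for the fine-structure constant (the chapter doesn't spell it out) and standard CODATA values for the constants involved:

```python
# A minimal sketch (not from the book): the fine-structure constant
# computed from standard SI values, alpha = e^2 / (4*pi*eps0*hbar*c).
import math

c = 299_792_458.0        # speed of light in a vacuum, m/s (exact)
h = 6.62607015e-34       # Planck's constant, J*s
e = 1.602176634e-19      # charge of an electron, C
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

hbar = h / (2 * math.pi)
alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"fine-structure constant ~ 1/{1/alpha:.3f}")  # ~ 1/137.036
```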
One of the ways astrophysicists calculate the age of the universe assumes that the speed of light has always been the same, so a variation in the speed of light anywhere in the cosmos is not just of passing interest. But as of January 2006, physicists’ measurements show no evidence for a change in the fine-structure constant across time or across space.
THIRTEEN
GOING BALLISTIC
In nearly all sports that use balls, the balls go ballistic at one time or another. Whether you’re playing baseball, cricket, football, golf, lacrosse, soccer, tennis, or water polo, a ball gets thrown, smacked, or kicked and then briefly becomes airborne before returning to Earth.
Air resistance affects the trajectory of all these balls, but regardless of what set them in motion or where they might land, their basic paths are described by a simple equation found in Newton’s Principia, his seminal 1687 book on motion and gravity. Several years later, Newton interpreted his discoveries for the Latin-literate lay reader in The System of the World, which includes a description of what would happen if you hurled stones horizontally at higher and higher speeds. Newton first notes the obvious: the stones would hit the ground farther and farther away from the release point, eventually landing beyond the horizon. He then reasons that if the speed were high enough, a stone would travel Earth’s entire circumference, never hit the ground, and return to smack you in the back of the head. If you ducked at that instant, the object would continue forever in what is commonly called an orbit. You can’t get more ballistic than that.
The speed needed to achieve low Earth orbit (affectionately called LEO) is a little less than 18,000 miles per hour sideways, making the round trip in about an hour and a half. Had Sputnik 1, the first artificial satellite, or Yury Gagarin, the first human to travel beyond Earth’s atmosphere, not reached that speed after being launched, they would have come back to Earth’s surface before one circumnavigation was complete.
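Here is a back-of-the-envelope sketch of where those numbers come from, assuming Newton's circular-orbit formula v = sqrt(GM/r) and standard values for Earth's mass and radius, none of which are quoted in the chapter:

```python
# A rough sketch (constants assumed, not from the book) of circular-orbit
# speed and period in low Earth orbit: v = sqrt(G*M/r).
import math

G = 6.674e-11         # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24          # Earth's mass, kg
r = 6.371e6 + 300e3   # Earth's radius plus ~300 km of altitude, m

v = math.sqrt(G * M / r)                  # orbital speed, m/s
period = 2 * math.pi * r / v              # orbital period, s
print(f"{v * 3600 / 1609.344:,.0f} mph")  # ~17,300 mph: "a little less than 18,000"
print(f"{period / 60:.0f} minutes")       # ~90 minutes: "about an hour and a half"
```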
Newton also showed that the gravity exerted by any spherical object acts as though all the object’s mass were concentrated at its center. Indeed, anything tossed between two people on Earth’s surface is also in orbit, except that the trajectory happens to intersect the ground. This was as true for Alan B. Shepard’s 15-minute ride aboard the Mercury spacecraft Freedom 7, in 1961, as it is for a golf drive by Tiger Woods, a home run by Alex Rodriguez, or a ball tossed by a child: all of them execute what are sensibly called suborbital trajectories. Were Earth’s surface not in the way, all these objects would execute perfect, albeit elongated, orbits around Earth’s center. And though the law of gravity doesn’t distinguish among these trajectories, NASA does. Shepard’s journey was mostly free of air resistance, because it reached an altitude where there’s hardly any atmosphere. For that reason alone, the media promptly crowned him America’s first space traveler.
SUBORBITAL PATHS ARE the trajectories of choice for ballistic missiles. Like a hand grenade that arcs toward its target after being hurled, a ballistic missile “flies” only under the action of gravity after being launched. These weapons of mass destruction travel hypersonically, fast enough to traverse half of Earth’s circumference in 45 minutes before plunging back to the surface at thousands of miles an hour. If a ballistic missile is heavy enough, the thing can do more damage just by falling out of the sky than can the explosion of the conventional bomb it carries in its nose.
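A quick sketch of the average speed implied by that trip, assuming Earth's equatorial circumference of about 24,901 miles (a figure not given in the chapter):

```python
# A back-of-the-envelope sketch (circumference assumed, not from the book)
# of the average speed needed to cover half of Earth in 45 minutes.
earth_circumference_miles = 24_901
half_trip_miles = earth_circumference_miles / 2
trip_hours = 45 / 60

speed_mph = half_trip_miles / trip_hours
print(f"{speed_mph:,.0f} mph")   # ~16,600 mph: hypersonic indeed
```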
The world’s first ballistic missile was the V-2 rocket, designed by a team of German scientists under the leadership of Wernher von Braun and used by the Nazis during World War II, primarily against England. As the first object to be launched above Earth’s atmosphere, the bullet-shaped, large-finned V-2 (the “V” stands for Vergeltungswaffe, or “vengeance weapon”) inspired an entire generation of spaceship illustrations. After surrendering to the Allied forces, von Braun was brought to the United States, where in 1958 he directed the launch of Explorer 1, the first U.S. satellite. Shortly thereafter, he was transferred to the newly created National Aeronautics and Space Administration. There he developed the Saturn V, the most powerful rocket ever created, making it possible to fulfill the American dream of landing on the Moon.