The Story of Western Science


by Susan Wise Bauer


  E = mc²

  where c equals the speed of light.

  Einstein thought that the paper he completed on June 30, 1905, might be of special interest. Although the paper’s title (“On the Electrodynamics of Moving Bodies”) didn’t suggest anything particularly revolutionary, it was (as Einstein wrote to a friend) nothing more than “a modification of the theory of space and time”: Einstein’s first exploration of what would later be known as the special theory of relativity.

  The paper set out to reconcile two apparently contradictory laws. The first was the principle of relativity, which had been known since Galileo. A cornerstone of Enlightenment thinking, a classic Baconian assumption, the principle of relativity decrees that a law of physics must work in the same way across all related frames of reference. In his later version of the paper, intended for a general readership, Einstein used the example of a railway car, traveling along tracks next to an embankment, at a regular rate of speed. The car is constantly changing (“translating”) its position relative to the embankment, but it doesn’t rotate at the same time, so this is called “uniform translation.” At the same time, a raven is flying through the air, also in a straight line relative to the embankment, and also at a steady rate of speed—another uniform translation.

  An observer standing on the embankment sees the raven flying at a certain rate of speed. An observer standing on the moving railway car, though, sees the raven flying at a different rate of speed.

  If the embankment is called “coordinate system K” and the railroad car is “coordinate system K′,” we can make this statement:

  If, relative to K, K′ is a uniformly moving coordinate system devoid of rotation, then natural phenomena run their course with respect to K′ according to exactly the same general laws as with respect to K.12

  25.1 EINSTEIN’S RAILWAY

  In other words, both observers see the raven flying in the same direction and at a constant speed, even though each of them measures a different value for that speed.

  Simple enough; but another law of physics contradicts the principle of relativity in a very fundamental way. “There is hardly a simpler law in physics,” Einstein wrote, “than that . . . light is propagated in empty space . . . in straight lines with a velocity c = 300000 km./sec.” The constant speed of light in a vacuum (air, water, and other transparent media slow it down) had been tested repeatedly since physicist Albert Michelson and chemist Edward Morley had accidentally discovered it in the early 1880s: “Let us assume,” Einstein proposed, “that the simple law . . . is justifiably believed.” What, then, was the problem?13

  Imagine that a vacuum exists above the railway tracks, and that a ray of light travels through it, in the same direction as the raven. If speeds combine the way the raven’s did, an observer on the embankment and an observer on the railway car will measure the light traveling at two different speeds—which means that the speed of light is not constant.

  What to do? It seems that either the principle of relativity or the constant speed of light needs to be abandoned; and, as Einstein pointed out, most physicists were inclined to abandon relativity (“in spite of the fact that no empirical data had been found which were contradictory to this principle”). But in fact, neither needs to be given up—as long as we are willing to adjust our ideas about time and space.14

  The two observers were measuring the speed of light per second. Einstein suggested that what was changing was not the speed per second—but the second itself. Time, assumed to be a constant everywhere in the universe, was not constant at all. Time itself dilated, slowed down, as the observer moved faster. So the two observers were both measuring the speed per second of light, but for the observer who was moving, a second was longer. Time was the fourth dimension, the non-Euclidean addition; it turned three-dimensional Euclidean space into four-dimensional “space-time.”
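  The arithmetic behind this resolution is compact. Here is a minimal Python sketch (mine, not Bauer’s; the function names and test speeds are illustrative): the Lorentz factor measures how much a moving observer’s second stretches, and the relativistic rule for combining velocities guarantees that a ray of light is always measured at exactly c.

      import math

      C = 299_792_458.0  # speed of light in a vacuum, m/s

      def lorentz_factor(v):
          # A clock moving at speed v ticks 1/gamma as fast as a clock at rest.
          return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

      def combine_speeds(u, v):
          # Relativistic velocity addition: speeds never combine past c.
          return (u + v) / (1.0 + u * v / C ** 2)

      # A light ray approached at half the speed of light is still measured at c:
      print(combine_speeds(C, 0.5 * C))  # 299792458.0
      # And the moving observer's second is about 15 percent longer:
      print(lorentz_factor(0.5 * C))     # ~1.1547

  However fast the railway car moves toward or away from the light, the measured speed comes out to c; the difference is absorbed by the stretching second.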

  The special principle of relativity didn’t take gravity into account (hence the adjective “special,” or “limited”). But over the next ten years, Einstein struggled with gravity, looking for a theory of relativity that would incorporate gravitational pull.

  By 1916 he had concluded that Bernhard Riemann was correct: gravity was a result, not a force. The presence of mass or energy (his formula allowed him to equate them) caused space-time (established by the 1905 special theory) to curve; objects traveling freely along the curves appeared to be falling but in fact were simply following “straight” along the surface of space-time. (Imagine that Riemann’s bookworm existed, not on a crumpled sheet of paper, but on the surface of a rubber ball; the bookworm, unable to sense the curvature of its universe, crawls along the ball in what it thinks to be a straight line, but to an observer outside the curvature of the ball, the bookworm appears to be heading downward.)

  The theory could be checked against effects caused by the sun, the most massive object nearby. Relativity explained an existing problem: the perihelion of Mercury, the point in its orbit that was closest to the sun, had shifted, or “precessed,” over the previous centuries, and the precession was too large to be accounted for by the gravitational pull of the other planets. Einstein’s new theory would account for it.
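  The size of that unexplained residue, about 43 arcseconds per century, falls out of the general-relativistic formula for the extra perihelion advance per orbit, 6πGM/(c²a(1−e²)). A rough Python check (the orbital constants are standard modern values, supplied here for illustration; they are not from Bauer’s text):

      import math

      GM_SUN = 1.32712440018e20  # sun's gravitational parameter, m^3/s^2
      C = 299_792_458.0          # speed of light, m/s
      A = 5.791e10               # Mercury's semi-major axis, m
      ECC = 0.2056               # Mercury's orbital eccentricity
      PERIOD_DAYS = 87.969       # Mercury's orbital period, days

      # Extra perihelion advance per orbit predicted by general relativity (radians):
      shift = 6 * math.pi * GM_SUN / (C ** 2 * A * (1 - ECC ** 2))

      orbits_per_century = 36525 / PERIOD_DAYS
      arcsec = math.degrees(shift * orbits_per_century) * 3600
      print(f"{arcsec:.1f} arcseconds per century")  # ~43.0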

  But there was a second test, one that would predict a phenomenon. If Einstein was correct, light from stars would be “pulled” toward the mass of the sun; starlight would be, observably, bent by the sun’s mass.

  Checking this theory required a total solar eclipse. The General Theory of Relativity was published in 1916, but Einstein’s prediction was not confirmed until the British astronomer Arthur Eddington took measurements during a solar eclipse in 1919. Eddington’s calculations showed that the starlight passing by the sun had shifted, to the exact degree that Einstein had foreseen.
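  For a ray grazing the sun’s edge, the general theory predicts a deflection of 4GM/(c²R), about 1.75 arcseconds, which is the figure Eddington’s measurements were checked against. A back-of-envelope version (the constants are standard modern values, mine rather than the book’s):

      import math

      GM_SUN = 1.32712440018e20  # sun's gravitational parameter, m^3/s^2
      C = 299_792_458.0          # speed of light, m/s
      R_SUN = 6.957e8            # solar radius, m

      # Deflection of starlight grazing the solar limb, per general relativity:
      theta = 4 * GM_SUN / (C ** 2 * R_SUN)
      print(math.degrees(theta) * 3600)  # ~1.75 arcseconds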

  The general theory of relativity told us that Baconian observation had its limits; that what we can see is not always what is; that common sense can lead us astray; that our senses can deceive us, although we’d better not ignore them. “Science is not just a collection of laws,” Einstein wrote twenty years later. “It is a creation of the human mind, with its freely invented ideas and concepts. Physical theories try to form a picture of reality and to establish its connection with the wide world of sense impressions. Thus the only justification for our mental structures is whether and in what way our theories form such a link.”15

  Eddington’s measurements had linked the “creation” of Einstein’s mind, the general theory of relativity, to the world. Riemann’s geometric theories had been put to use to describe actual sense impressions; physics had caught up to abstract mathematics, and had changed our picture of reality.

  ALBERT EINSTEIN

  The General Theory of Relativity

  (1916)

  Despite the equations, Einstein’s paper is clear, elegantly written, and accessible even to nonmathematicians. Robert W. Lawson’s 1920 English translation is widely available in several formats; most editions place Einstein’s summary of his findings on the special theory first. Read both, since the general theory builds on the special.

  Albert Einstein, Relativity: The Special and the General Theory, trans. Robert W. Lawson, with introduction by Roger Penrose, commentary by Robert Geroch, and historical essay by David C. Cassidy, Pi Press (hardcover, paperback, and e-book, 2005, ISBN 978-0131862616).

  * * *

  * The mathematics of Gauss’s challenge are well outside the scope of this book, but a useful explanation for the nonspecialist (complete with figures and illustrations) can be found in Eli Maor’s To Infinity and Beyond: A Cultural History of the Infinite (Princeton University Press, 1991), 108–34.

  TWENTY-SIX

  Damn Quantum Jumps

  The discovery of subatomic random swerves

  The quantum . . . [will] play a fundamental role in physics, heralding the advent of a new state of things, destined . . . to transform completely our physical concepts.

  —Max Planck, “The Origin and Development of the Quantum Theory,” 1922

  The great revelation of quantum theory was that features of discreteness were discovered in the Book of Nature, in a context in which anything other than continuity seemed to be absurd.

  —Erwin Schrödinger, What Is Life?, 1944

  Albert Einstein had a high capacity for new ideas. He could conceptualize the invisible bending of space; he could contemplate a space-time continuum that was quite unlike the three-dimensional reality in which he lived; he could make the imaginative leap into a world where time slowed to a standstill.

  But he couldn’t cope with quantum jumps. “I cannot seriously believe in it,” he wrote to his friend Max Born, not long before Born won the Nobel Prize for his work in quantum mechanics. “The theory is incompatible with the principle that physics is to represent reality in time and space, without spookish long-distance effects.”1

  •

  Those “spookish long-distance effects” were only one branch of the quantum physics that developed in the early twentieth century. But all quantum physics grew out of the same deep root: work done by chemists and physicists on the properties of atoms.

  Those atoms had first been proposed by the Greek philosophers Leucippus and Democritus, who had suggested that all matter was made up of tiny particles, too small for the eye to see: atomos, the “undivided.” Lacking proof, the hypothesis remained one among many. In the seventeenth century, Robert Boyle’s experiments had suggested that the medieval version of atomic theory, a world constructed of corpuscles, was more likely true than not. But it would be another 150 years before the chemist John Dalton, building on experiments with gases done by many others (Joseph Black, Henry Cavendish, Joseph Priestley, and Antoine Lavoisier, to name a few), could restate the theory with conviction. Atoms, Dalton proposed, were indivisible; different atoms had different masses, and when only one type of atom was present, the matter in question was an element. Atoms of different types, mixed together, produced compounds.

  Dalton’s indivisible atom was an uncomplicated solid, but by the last quarter of the nineteenth century, a handful of physicists—among them, Joseph Thomson in Cambridge, Pieter Zeeman in Leiden, Walter Kaufmann in Bonn, and Emil Wiechert in Königsberg—were theorizing that the behavior of cathode rays (glowing beams observed when voltage was applied to vacuum tubes) could best be explained by the presence of smaller particles within atoms. Two Irish physicists, George Stoney and his nephew George Fitzgerald, were responsible for naming these smaller particles electrons: fundamental electrical units, carrying a negative charge.

  This was not exactly the “discovery of the electron,” as it is often described in textbooks. Atoms, as science philosopher Theodore Arabatzis points out, were not “observable entities” like bugs under rocks, so indisputable evidence of the existence of electrons, or atoms, was still missing. Rather, atomic theory was an effort to explain observable phenomena (like the bend in starlight as it passes the sun) by proposing underlying causes. But these proposals were very much hypothetical. They could be granted more or less weight, depending on how well mathematical models based on them predicted the behavior of observable physical phenomena. But they could not be proved—certainly not in any sense that Francis Bacon would have signed off on. In fact, Max Planck, the theoretical physicist who would later pioneer quantum theory, expressed his doubts about electrons; at the turn of the twentieth century, he still did not have “complete confidence in that theory.”2

  But over the next decade or so, calculations based on various aspects of atomic theory began to yield amazingly accurate results. In one of his 1905 papers (“On the Motion of Small Particles Suspended in Liquids at Rest, Required by the Molecular-Kinetic Theory of Heat”), Albert Einstein came up with a mathematical formula that predicted the properties of the apparently random movement of particles in water (“Brownian motion,” first observed by Robert Brown in 1827) by relating them to the movements of those putative atoms.*
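  Einstein’s result is now known as the Stokes–Einstein relation: the diffusion of a visible grain depends only on the temperature, the fluid’s viscosity, and the grain’s size, so the jiggling seen under a microscope directly reflects the invisible molecular bombardment. A sketch of the calculation for a one-micron grain in room-temperature water (the values are illustrative choices of mine, not figures from the text):

      import math

      K_B = 1.380649e-23  # Boltzmann constant, J/K
      T = 293.0           # room temperature, K
      ETA = 1.0e-3        # viscosity of water, Pa*s
      R = 0.5e-6          # radius of a one-micron grain, m

      # Stokes-Einstein diffusion coefficient:
      D = K_B * T / (6 * math.pi * ETA * R)
      # The mean-square displacement along one axis grows as 2*D*t:
      drift = math.sqrt(2 * D * 1.0)
      print(f"D = {D:.2e} m^2/s; ~{drift * 1e6:.2f} micrometers of drift per second")

  A drift on the order of a micrometer per second is just the scale of jitter visible through an optical microscope, which is what made the prediction testable.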

  Einstein’s calculations made it possible, theoretically, to estimate the number of atoms in a given substance, but his figures remained untested until 1908, when the French physicist Jean Perrin carried out two series of experiments that confirmed their validity. This research earned Perrin both the Nobel Prize and Einstein’s thanks: “It is a piece of good luck for this subject that you undertook to study it,” Einstein told Perrin the following year. It also convinced most physicists that the existence of the atom was no longer conjecture. “The atomic hypothesis has recently acquired enough credence to cease being a mere hypothesis,” wrote another French physicist, Henri Poincaré. “Atoms are no longer just a useful fiction; we can rightfully claim to see them, since we can actually count them.”3

  The next big question was the structure of the atom. Joseph Thomson had speculated (with no evidence whatsoever) that an atom was like a plum pudding, with the electrons (“corpuscles”) sprinkled evenly throughout it. The problem was that, so far as he could see, electrons were all negatively charged, but atoms were electrically neutral.† “When they are assembled in a neutral atom,” he wrote, visibly struggling with the missing piece of the puzzle, “the negative effect is balanced by something which causes the space through which the corpuscles are spread to act as if it had a charge of positive electricity equal in amount to the sum of the negative charges of the corpuscles.”4

  Jean Perrin guessed that there was another kind of particle within each atom:

  Each atom would consist, on one hand, of one or several positively charged masses—a kind of positive sun, the electric charge of which would greatly exceed that of a particle—and, on the other hand, by numerous particles acting as tiny negatively charged planets orbiting under the action of the electric forces, their negative total charge balancing exactly the total positive charge, thereby making the atom electrically neutral.5

  Our solar system was a powerfully attractive metaphor; it made sense that Thomson’s vague “something” might be a positive nucleus, orbited by the electrons. But Perrin acknowledged that this was only one of a number of possible models; it was merely a hypothesis, untested, unprovable.

  A young German physicist named Hans Geiger came up with a way to look for the nucleus. Working with two colleagues—the distinguished Ernest Rutherford (last seen in Chapter 16, estimating the age of radioactive minerals) and a very young physics student named Ernst Marsden—he invented an instrument that could count the particles thrown off by decaying elements. This “Geiger counter” measured the amount of radiation being emitted, but Geiger and Marsden noticed something odd: if the particles were passed through various kinds of metal plates, they changed direction in a way that couldn’t be accounted for by random motion. Some of them even went backward.

  “It seems very surprising,” Rutherford remarked, when reviewing these results. Something inside the atoms of the metal plates appeared to be colliding with the particles and bouncing them off into different trajectories. Thomson’s “plum pudding” model suggested that particles should simply shoot right through atoms in their path, like buckshot passing through jelly; Rutherford concluded that an atom had to contain something more massive than an electron, something large enough to account for the deflection of the particles. This, he proposed in a 1911 paper, was “a central electric charge concentrated at a point and surrounded by a uniform spherical distribution of opposite electricity equal in amount.” Working with this model, the “Rutherford atom,” he was able to predict the movement of those pass-through particles—proof that each atom contained a nucleus, orbited by electrons.6
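  The geometry of the model can be made concrete: the closer an alpha particle’s path comes to the concentrated central charge, the harder it is turned aside. A minimal sketch of Rutherford’s point-charge scattering rule (the energy and impact parameters below are illustrative choices of mine, not figures from the text):

      import math

      K = 1.44                 # Coulomb constant e^2/(4*pi*eps0), in MeV*fm
      Z_ALPHA, Z_GOLD = 2, 79  # charges of an alpha particle and a gold nucleus
      ENERGY_MEV = 5.0         # kinetic energy of the alpha particle, MeV

      # Distance of closest approach in a head-on collision (femtometers):
      d = Z_ALPHA * Z_GOLD * K / ENERGY_MEV

      def deflection(b):
          # Scattering angle, in degrees, for impact parameter b (in fm).
          return math.degrees(2 * math.atan(d / (2 * b)))

      for b in (1000.0, 100.0, 10.0):
          print(f"aiming error {b:6.0f} fm -> deflected {deflection(b):6.1f} degrees")
      # A near miss (b around 10 fm) sends the particle back the way it came.

  Most particles pass far from any nucleus and barely bend; the rare near miss is thrown through more than ninety degrees. That lopsided pattern is what Geiger and Marsden observed, and it is exactly what the plum-pudding model could not produce.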

  It was an elegant, intuitive model. It had a beautifully Platonic quality: the smallest particles in the universe mirroring the massive planetary movements in the heavens. Over a century later, the Rutherford atom is still the first picture that every chemistry student sees.

  26.1 RUTHERFORD’S ATOM

  It turned out to be wrong. Sort of.

  •

  A decade before, the physicist Max Planck had been working on something called “blackbody radiation,” radiation emitted by bodies that absorb the electromagnetic radiation that strikes them. (Hypothetically, a perfect blackbody object, entirely black, would suck in all the electromagnetic radiation coming its way.) In order to correctly predict the behavior of the radiation coming out of the blackbodies, Planck discovered that he had to fiddle with the properties of energy.7

  According to every physical theory known, energy was a wave. It should be radiating out of those blackbodies smoothly, evenly, constantly. But Planck’s calculations worked only if, instead, energy was pulsing out in chunks—not in waves, but in discrete units. If energy could be treated like separate particles (Planck called these hypothetical particles quanta, from the Latin quantus, “how much”), he could come up with a formula, built around a new constant of nature (now known as Planck’s constant), that explained the behavior of the radiation perfectly.‡
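  Planck’s radiation law, built on the new constant h (each quantum of frequency ν carries energy E = hν), can be set beside the classical wave-only prediction to show what the quanta fix. A sketch (the constants and test frequencies are my own illustrative choices):

      import math

      H = 6.62607015e-34  # Planck's constant, J*s
      K_B = 1.380649e-23  # Boltzmann constant, J/K
      C = 299_792_458.0   # speed of light, m/s

      def planck(nu, t):
          # Planck's law: spectral radiance of a blackbody at temperature t.
          return (2 * H * nu ** 3 / C ** 2) / math.expm1(H * nu / (K_B * t))

      def classical(nu, t):
          # The Rayleigh-Jeans (wave-only) prediction, which grows without
          # bound at high frequency: the "ultraviolet catastrophe."
          return 2 * nu ** 2 * K_B * t / C ** 2

      for nu in (1e13, 1e14, 1e15):  # from the infrared up into the ultraviolet
          print(f"nu = {nu:.0e} Hz: Planck {planck(nu, 5800):.3e}, "
                f"classical {classical(nu, 5800):.3e}")

  At low frequencies the two formulas agree; at high frequencies the classical curve runs off toward infinity, while Planck’s quanta cut the radiation off, matching what real blackbodies emit.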

  Planck wasn’t particularly happy about this solution. If a quantum were the size of a rock, it would be seen progressing forward in a series of jumps, not a smooth forward motion; this kind of movement seemed to contradict some of the most basic principles of physics and mechanics. Thirty years later, reflecting on his first formulation of these “quantum jumps,” Planck wrote to a friend, “What I did can be described as simply an act of desperation. . . . It was clear to me that classical physics could offer no solution to this problem . . . [so] I was ready to sacrifice every one of my previous convictions about physical laws.” But he pledged to keep looking for a more satisfactory solution; as far as he was concerned, quanta were “purely a formal assumption,” a mathematical hat trick that yielded the correct answers. Like scientists in the centuries before him, Planck was merely “saving the phenomena.”8

 
