
Asimov's New Guide to Science


by Isaac Asimov


  Trying to account for the travel of light-waves through space, physicists decided that light, too, must be conducted by the supposed ether. They began to speak of the luminiferous (“light-carrying”) ether. But this idea at once ran into a serious difficulty. Light-waves are transverse waves: that is, they undulate at right angles to the direction of travel, like the ripples on the surface of water, in contrast to the longitudinal motion of sound waves, which vibrate back and forth in the direction of travel. Now physical theory said that only a solid medium could convey transverse waves. (Transverse water waves travel on the water surface—a special case—but cannot penetrate the body of the liquid.) Therefore the ether had to be solid, not gaseous or liquid—and an extremely rigid solid, too. To transmit waves at the tremendous speed of light, it had to be far more rigid than steel. What is more, this rigid ether had to permeate ordinary matter—not merely the vacuum of space but gases, water, glass, and all the other transparent substances through which light can travel. To cap it all, this solid, super-rigid material had to be so frictionless, so yielding, that it did not interfere in the slightest with the motion of the smallest planetoid or the flicker of an eyelid!

  Yet, despite the difficulties introduced by the notion of the ether, it seemed useful. Faraday, who had no mathematical background at all but had marvelous insight, worked out the concept of lines of force (lines that trace, at every point, the direction along which the magnetic force acts) and, by visualizing these lines as elastic distortions of the ether, used the ether to explain magnetic phenomena, too.

  In the 1860s, Clerk Maxwell, a great admirer of Faraday, set about supplying the mathematical analysis to account for the lines of force. In doing so, he evolved a set of four simple equations that among them described almost all phenomena involving electricity and magnetism. These equations, advanced in 1864, not only described the interrelationship of the phenomena of electricity and magnetism, but showed the two cannot be separated. Where an electric field exists, there has to be a magnetic field, too, at right angles; and vice versa. There is, in fact, only a single electromagnetic field. (This was the original unified field theory which inspired all the work that followed in the next century.)
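
  In the compact vector notation later introduced by Oliver Heaviside—not the form in which Maxwell himself published them—the four equations, with charge density ρ and current density J as sources, are usually written:

```latex
\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}
```

  Here E and B are the electric and magnetic fields, and ε₀ and μ₀ are the electric and magnetic constants; the last two equations make the mutual dependence of the two fields explicit.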

  In considering the implications of his equations, Maxwell found that a changing electric field has to induce a changing magnetic field, which in turn has to induce a changing electric field, and so on; the two leapfrog, so to speak, and the field progresses outward in all directions. The result is a radiation possessing the properties of a wave-form. In short, Maxwell predicted the existence of electromagnetic radiation with a frequency equal to that at which the electromagnetic field waxes and wanes.

  It was even possible for Maxwell to calculate the velocity at which such an electromagnetic wave would have to move. He did this by taking into consideration the ratio of certain corresponding values in the equations describing the force between electric charges and the force between magnetic poles. This ratio turned out to be precisely equal to the velocity of light, and Maxwell could not accept that as a mere coincidence. Light was an electromagnetic radiation, and along with it were other radiations with wavelengths far longer, or far shorter, than that of ordinary light—and all these radiations involved the ether.
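
  As a modern check on that ratio argument, the speed implied by the electric and magnetic constants can be computed directly; the sketch below uses today's standard values, which are not figures given in the text:

```python
import math

mu_0 = 4 * math.pi * 1e-7    # magnetic constant, newtons per ampere squared
epsilon_0 = 8.854187817e-12  # electric constant, farads per meter

# Maxwell's theory gives the wave speed as 1 / sqrt(mu_0 * epsilon_0).
wave_speed = 1 / math.sqrt(mu_0 * epsilon_0)   # meters per second
print(f"{wave_speed:.4e} m/s = {wave_speed / 1609.344:,.0f} miles per second")
# about 2.998e8 m/s, or 186,282 miles per second -- the measured speed of light
```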

  THE MAGNETIC MONOPOLES

  Maxwell’s equations, by the way, introduced a problem that is still with us. They seemed to emphasize a complete symmetry between the phenomena of electricity and magnetism: what is true of one is true of the other. Yet in one fundamental way, the two seemed different—a difference that grew all the more puzzling once subatomic particles were discovered and studied. Particles exist that carry one or the other of the two opposed electric charges—positive or negative—but not both. Thus, the electron carries a negative electric charge only, while the positron carries a positive electric charge only. Analogously, ought there not to be particles with a north magnetic pole only, and others with a south magnetic pole only? These magnetic monopoles, however, have long been sought in vain. Every object—large or small, galaxy or subatomic particle—that has a magnetic field has both a north pole and a south pole.

  In 1931, Dirac, tackling the matter mathematically, came to the conclusion that if magnetic monopoles exist (if even one exists anywhere in the universe), it would be necessary for all electric charges to be exact multiples of some smallest charge—as, in fact, they are. And since all electric charges are exact multiples of some smallest charge, must not magnetic monopoles therefore exist?
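
  Dirac's result is usually written (in Gaussian units, a form the text does not spell out) as a relation between the elementary electric charge e and any magnetic pole strength g:

```latex
e\,g = \frac{n \hbar c}{2}, \qquad n = 1, 2, 3, \ldots
```

  If even one pole of strength g exists, every electric charge must then be an integer multiple of ħc/2g—which is exactly the quantization of charge that is observed.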

  In 1974, a Dutch physicist, Gerard ’t Hooft, and a Soviet physicist, Alexander Polyakov, independently showed that it could be reasoned from the grand unified theories that indeed magnetic monopoles must exist, and that they must be enormous in mass. Although a magnetic monopole would be even smaller than a proton, it would have to have a mass of anywhere from 10 quadrillion to 10 quintillion times that of the proton. It would have the mass of a bacterium, all squeezed into a tiny subatomic particle.
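
  The comparison with a bacterium is easy to check; the sketch below uses the standard proton mass and reads quadrillion and quintillion in the American sense (10^15 and 10^18):

```python
proton_mass = 1.673e-27   # kilograms

low_mass  = 1e16 * proton_mass   # 10 quadrillion proton masses
high_mass = 1e19 * proton_mass   # 10 quintillion proton masses

print(f"{low_mass:.1e} kg to {high_mass:.1e} kg")
# about 1.7e-11 kg to 1.7e-8 kg -- tens of picograms to tens of nanograms,
# the sort of mass a bacterium has, concentrated in a single particle
```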

  Such particles could only have been formed at the time of the big bang. Never since has there been a sufficiently high concentration of energy to form them. Such huge particles would be moving at 150 miles a second or so, and the combination of huge mass and tiny size would allow them to slip through matter without leaving any signs to speak of. This property may account for the failure to detect magnetic monopoles hitherto.

  If, however, a magnetic monopole managed to pass through a coil of wire, it would send a momentary surge of electric current through that coil (a well-known phenomenon that Faraday first demonstrated; see chapter 5). If the coil were at ordinary temperatures, the surge would come and go so quickly it might be missed. If it were superconductive, the surge would remain for as long as the coil was kept cold enough.
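
  The size of the expected surge can be sketched from Faraday's law, under the standard assumption that a Dirac monopole carries a total magnetic flux of h/e; the coil inductance below is purely illustrative, not a figure from any actual apparatus:

```python
h = 6.626e-34   # Planck's constant, joule-seconds
e = 1.602e-19   # elementary charge, coulombs

flux_step = h / e          # flux added when one Dirac monopole threads the loop, webers

coil_inductance = 1e-6     # hypothetical inductance of the coil, henries (illustrative only)
current_step = flux_step / coil_inductance   # permanent change in the supercurrent

print(f"flux step {flux_step:.2e} Wb -> current step {current_step:.2e} A")
# about 4e-15 webers, giving a few nanoamperes for this assumed coil --
# a tiny surge, but one that persists indefinitely in a superconducting loop
```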

  The physicist Blas Cabrera, at Stanford University, set up a superconductive niobium coil, kept it thoroughly isolated from stray magnetic fields, and waited four months. On 14 February 1982, at 1:53 P.M., there came a sudden flow of electricity, in almost exactly the amount one would expect if a magnetic monopole had passed through. Physicists are now trying to set up devices to confirm this finding; until they do, we cannot be certain that the magnetic monopole has been detected at last.

  ABSOLUTE MOTION

  But back to the ether which, at the height of its power, met its Waterloo as a result of an experiment undertaken to test another classical question as knotty as action at a distance—namely, the question of absolute motion.

  By the nineteenth century, it had become perfectly plain that the earth, the sun, the stars, and, in fact, all objects in the universe were in motion. Where, then, could you find a fixed reference point, one that was at absolute rest, to determine absolute motion—the foundation on which Newton’s laws of motion were based? There was one possibility. Newton had suggested that the fabric of space itself (the ether, presumably) was at rest, so that one could speak of absolute space. If the ether was motionless, perhaps one could find the absolute motion of an object by determining its motion in relation to the ether.

  In the 1880s, Albert Michelson conceived an ingenious scheme to find just that. If the earth is moving through a motionless ether, he reasoned, then a beam of light sent in the direction of its motion and reflected back should travel a shorter distance than one sent out at right angles and reflected back. To make the test, Michelson invented the “interferometer,” a device with a semimirror that lets half of a light beam through in the forward direction and reflects the other half at right angles. Both beams are then reflected back by mirrors to an eyepiece at the source. If one beam has traveled a slightly longer distance than the other, they arrive out of phase and form interference bands (figure 8.3). This instrument is an extremely sensitive measurer of differences in length—so sensitive, in fact, that it can measure both the growth of a plant from second to second and the diameter of some stars that seem to be dimensionless points of light in even the largest telescope.

  Figure 8.3. Michelson’s interferometer. The semimirror (center) splits the light beam, reflecting one half and letting the other half go straight ahead. If the two reflecting mirrors (at right and straight ahead) are at different distances, the returning beams of light will arrive at the observer out of phase.

  Michelson’s plan was to point the interferometer in various directions with respect to the earth’s motion and detect the effect of the ether by the amount by which the split beams were out of phase on their return.

  In 1887, with the help of the American chemist Edward Williams Morley, Michelson set up a particularly delicate version of the experiment. Stationing the instrument on a stone floating on mercury, so that it could be turned in any direction easily and smoothly, they projected their beam in various directions with respect to the earth’s motion. They discovered practically no difference! The interference bands were virtually the same no matter in what direction Michelson and Morley pointed the instrument or how many times they performed the experiment. (It should be said here that more recent experiments along the same line with still more delicate instruments have shown the same negative results.)
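
  To see how sensitive the test was, it helps to work out the shift that a motionless ether would have produced; the arm length, wavelength, and orbital speed below are the round figures usually quoted for the 1887 apparatus, not numbers from the text:

```python
c = 3.0e8              # speed of light, meters per second
v = 3.0e4              # Earth's orbital speed, meters per second
arm_length = 11.0      # effective light path of the 1887 instrument, meters (approximate)
wavelength = 5.9e-7    # yellow light, meters

# On the ether theory, rotating the apparatus through 90 degrees should shift
# the interference pattern by roughly (2 * L / lambda) * (v / c)^2 fringes.
expected_shift = (2 * arm_length / wavelength) * (v / c) ** 2
print(f"expected shift: about {expected_shift:.2f} of a fringe")
# roughly 0.4 fringe -- easily visible; Michelson and Morley saw nothing remotely that large
```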

  The foundations of physics tottered. Either the ether was moving with the earth, which made no sense at all, or there was, perhaps, no such thing as the ether. In either case there was no absolute motion or absolute space. The physics of Newton had had the rug pulled out from under it. Newtonian physics still held in the ordinary world: planets still moved in accordance with his law of gravitation, and objects on earth still obeyed his laws of inertia and of action and reaction. It was just that the classical explanations were incomplete, and physicists had to be prepared to find phenomena that did not obey the classical “laws.” The observed phenomena, both old and new, would remain, but the theories accounting for them would have to be broadened and refined.

  The Michelson-Morley experiment is probably the most important experiment-that-did-not-work in the whole history of science. Michelson was awarded the Nobel Prize in physics in 1907—the first American scientist to receive a Nobel Prize, though not for this experiment specifically.

  Relativity

  THE LORENTZ-FITZGERALD EQUATIONS

  In 1893, the Irish physicist George Francis FitzGerald came up with a novel explanation to account for the negative results of the Michelson-Morley experiment.

  He suggested that all matter contracts in the direction of its motion and that the amount of contraction increases with the rate of motion. According to this interpretation, the interferometer is always shortened in the direction of the earth’s “true” motion by an amount that exactly compensates for the difference in distance that the light beam has to travel. Moreover, all possible measuring devices, including human sense organs, would be “foreshortened” in just the same way, so that, if we moved along with the object, the foreshortening could in no possible way be measured. FitzGerald’s explanation almost made it look as if nature conspires to keep us from measuring absolute motion by introducing an effect that just cancels out any differences we might try to use to detect that motion.

  This frustrating phenomenon became known as the FitzGerald contraction. FitzGerald worked out an equation for it. An object moving at 7 miles per second (about the speed of our fastest present rockets) would contract by only about two parts per billion in the direction of flight. But, at really high speeds, the contraction would be substantial. At 93,000 miles per second (half the speed of light), it would be 15 percent; at 163,000 miles per second (⅞ the speed of light), 50 percent: that is, a 1-foot ruler moving past us at 163,000 miles per second would seem only 6 inches long to us—provided we were not moving along with it and knew a method of measuring its length as it flew by. And at the speed of light, 186,282 miles per second, its length in the direction of motion would be zero. Since presumably there can be no length shorter than zero, it would follow that the speed of light in a vacuum is the greatest possible velocity in the universe.
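
  The figures above come out of FitzGerald's formula, in which a length shrinks by the factor √(1 − v²/c²); a quick sketch using the speeds quoted in the text:

```python
import math

c = 186_282.0   # speed of light, miles per second

def contraction(v):
    """Fractional shortening predicted by the FitzGerald formula at speed v."""
    return 1.0 - math.sqrt(1.0 - (v / c) ** 2)

for v in (93_000.0, 163_000.0):
    print(f"{v:>9,.0f} mi/s: shortened by {contraction(v):.1%} in the direction of motion")
# about 13% at half the speed of light and about 52% at seven-eighths of it --
# close to the rounded 15 percent and 50 percent quoted above
```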

  The Dutch physicist Hendrik Antoon Lorentz soon carried FitzGerald’s idea one step further. Thinking about cathode rays, on which Lorentz was working at the time, he reasoned that if the charge of a charged particle were compressed into a smaller volume, the mass of the particle should increase. Therefore a flying particle foreshortened in the direction of its travel by the FitzGerald contraction would have to increase in mass.

  Lorentz presented an equation for the mass increase that turned out to be very similar to FitzGerald’s equation for shortening. At 93,000 miles per second, an electron’s mass would be increased by 15 percent; at 163,000 miles per second, by 100 percent (that is, its mass would be doubled); and at the speed of light, its mass would be infinite. Again it seemed that no speed greater than that of light could be possible, for how could mass be more than infinite?

  The FitzGerald length effect and the Lorentz mass effect are so closely connected that the equations are often lumped together as the Lorentz-FitzGerald equations.
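
  In modern symbols—not used in the text—the paired equations are:

```latex
L = L_0 \sqrt{1 - \frac{v^2}{c^2}}, \qquad
m = \frac{m_0}{\sqrt{1 - \dfrac{v^2}{c^2}}}
```

  where L₀ and m₀ are the length and mass measured at rest, v is the speed of the object, and c the speed of light in a vacuum.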

  The change of mass with speed can be measured by a stationary observer far more easily than can the change in length. The ratio of an electron’s mass to its charge can be determined from its deflection by a magnetic field. As an electron’s velocity increased, the mass would increase, but there was no reason to think that the charge would; therefore, its mass-charge ratio should increase, and its path should become less curved. By 1900, the German physicist Walter Kaufmann had discovered that this ratio increased with velocity in such a way as to indicate that the electron’s mass increases just as predicted by the Lorentz-FitzGerald equations. Later and better measurements showed the agreement to be just about perfect.
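
  The connection between mass and curvature can be sketched with the standard expression r = mv/(qB) for the radius of a charged particle's circular path in a magnetic field; the field strength below is an illustrative value, not one from Kaufmann's experiments:

```python
import math

c = 2.998e8                      # speed of light, meters per second
electron_rest_mass = 9.109e-31   # kilograms
electron_charge = 1.602e-19      # coulombs
B = 0.01                         # illustrative magnetic field, teslas

def path_radius(v):
    """Radius of curvature r = m v / (q B), using the speed-dependent mass."""
    m = electron_rest_mass / math.sqrt(1.0 - (v / c) ** 2)
    return m * v / (electron_charge * B)

for fraction in (0.5, 0.875):
    v = fraction * c
    print(f"v = {fraction:.3f} c: radius of curvature {path_radius(v) * 100:.1f} cm")
# the faster electron, being effectively heavier, follows the larger,
# less sharply curved path -- the effect Kaufmann measured
```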

  In discussing the speed of light as a maximum velocity, we must remember that it is the speed of light in a vacuum (186,282 miles per second) that is important here. In transparent material media, light moves more slowly. Its velocity in such a medium is equal to its velocity in a vacuum divided by the index of refraction of the medium. (The index of refraction is a measure of the extent by which a light-beam, entering the material obliquely from a vacuum, is bent.)

  In water, with an index of refraction of about 1.3, the speed of light is 186,282 divided by 1.3, or about 143,000 miles per second. In glass (index of refraction about 1.5), the speed of light is 124,000 miles per second; while in diamond (index of refraction, 2.4) the speed of light is a mere 78,000 miles per second.
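
  The same division, spelled out:

```python
c_vacuum = 186_282   # speed of light in a vacuum, miles per second

for medium, index in (("water", 1.3), ("glass", 1.5), ("diamond", 2.4)):
    print(f"{medium:8s} (n = {index}): {c_vacuum / index:>8,.0f} miles per second")
# water ~143,000, glass ~124,000, diamond ~78,000 -- the figures quoted above
```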

  RADIATION AND PLANCK’S QUANTUM THEORY

  It is possible for subatomic particles to travel through a particular transparent medium at a velocity greater than that of light in that medium (though not greater than that of light in a vacuum). When particles travel through a medium in this fashion, they throw back a wake of bluish light much as an airplane traveling at supersonic velocities throws back a wake of sound.
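
  In modern notation, the requirement and the geometry of the wake (not spelled out in the text) are: a particle of speed v radiates in a medium of refractive index n only when v exceeds c/n, and the light forms a cone whose half-angle θ satisfies

```latex
v > \frac{c}{n}, \qquad \cos\theta = \frac{c}{n\,v}
```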

  The existence of such radiation was observed by the Russian physicist Paul Alekseyevich Cherenkov (his name is also spelled Cerenkov) in 1934; in 1937, the theoretical explanation was offered by the Russian physicists Ilya Mikhailovich Frank and Igor Yevgenevich Tamm. All three shared the Nobel Prize for physics in 1958 as a result.

  Particle detectors have been devised to detect Cerenkov radiation, and these Cerenkov counters are particularly well adapted to the study of very fast particles, such as those making up the cosmic rays.

  While the foundations of physics were still rocking from the Michelson-Morley experiment and the FitzGerald contraction, a second explosion took place. This time the innocent question that started all the trouble had to do with the radiation emitted by matter when it is heated. (Although the radiation in question is usually in the form of light, physicists speak of the problem as black-body radiation: that is, they are thinking of an ideal body that absorbs light perfectly—without reflecting any of it away, as a perfectly black body would do—and, in reverse, also radiates perfectly in a wide band of wavelengths.) The Austrian physicist Josef Stefan showed, in 1879, that the total radiation emitted by a body depends only on its temperature (not at all on the nature of its substance), and that, in ideal circumstances, the radiation is proportional to the fourth power of the absolute temperature: that is, doubling the absolute temperature would increase its total radiation 2 × 2 × 2 × 2, or sixteen-fold (Stefan’s law). It was also known that, as the temperature rises, the predominant radiation moves toward shorter wavelengths. As a lump of steel is heated, for instance, it starts by radiating chiefly in the invisible infrared, then glows dim red, then bright red, then orange, then yellow-white, and finally, if it could somehow be kept from vaporizing at that point, it would be blue-white.
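
  Stefan's law says the radiated power grows as the fourth power of the absolute temperature; the short sketch below, using the modern Stefan-Boltzmann constant (not quoted in the text), shows the sixteen-fold jump on doubling the temperature:

```python
STEFAN_BOLTZMANN = 5.670e-8   # watts per square meter per kelvin to the fourth

def black_body_power(temperature_kelvin):
    """Power radiated per square meter by an ideal black body (Stefan's law)."""
    return STEFAN_BOLTZMANN * temperature_kelvin ** 4

for T in (300.0, 600.0):
    print(f"T = {T:>5.0f} K: {black_body_power(T):7.1f} watts per square meter")
# 459.3 W at 300 K versus 7348.3 W at 600 K -- doubling the absolute
# temperature multiplies the total radiation by 2 x 2 x 2 x 2 = 16
```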

  In 1893, the German physicist Wilhelm Wien worked out a theory that yielded a mathematical expression for the energy distribution of black-body radiation—that is, of the amount of energy radiated at each particular wavelength range. This theory provided a formula that accurately described the distribution of energy at the violet end of the spectrum but not at the red end. (For his work on heat, Wien received the Nobel Prize in physics in 1911.) On the other hand, the English physicists Lord Rayleigh and James Jeans worked up an equation that described the distribution at the red end of the spectrum but failed completely at the violet end. In short, the best theories available could explain one-half of the radiation or the other, but not both at once.

  The German physicist Max Karl Ernst Ludwig Planck tackled the problem. He found that, in order to make the equations fit the facts, he had to introduce a completely new notion. He suggested that radiation consists of small units or packets, just as matter is made up of atoms. He called the unit of radiation the quantum (after the Latin word for “how much?”). Planck argued that radiation can be absorbed only in whole numbers of quanta. Furthermore, he suggested that the amount of energy in a quantum depends on the wavelength of the radiation. The shorter the wavelength, the more energetic the quantum; or, to put it another way, the energy content of the quantum is inversely proportional to the wavelength.
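
  In the form Planck's relation is written today, the energy of a single quantum is E = hν, or equivalently hc/λ; a brief numerical illustration (the constant and the sample wavelengths are standard values, not figures from the text):

```python
h = 6.626e-34   # Planck's constant, joule-seconds
c = 2.998e8     # speed of light, meters per second

def quantum_energy(wavelength_m):
    """Energy of one quantum of radiation: E = h * c / wavelength."""
    return h * c / wavelength_m

for label, wavelength in (("red light (700 nm)", 700e-9), ("violet light (400 nm)", 400e-9)):
    print(f"{label}: {quantum_energy(wavelength):.2e} joules per quantum")
# 2.84e-19 J for red versus 4.97e-19 J for violet -- the shorter the
# wavelength, the more energetic the quantum
```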

 
