To Explain the World: The Discovery of Modern Science


by Steven Weinberg


  Just as happened with gravitation, the notion of currents and magnets exerting forces on each other was replaced with the idea of a field, in this case a magnetic field. Each magnet and current-carrying wire contributes to the total magnetic field at any point in its vicinity, and this magnetic field exerts a force on any magnet or electric current at that point. Michael Faraday attributed the magnetic forces produced by an electric current to lines of magnetic field encircling the wire. He also described the electric forces produced by a piece of rubbed amber as due to an electric field, pictured as lines emanating radially from the electric charges on the amber. Most important, Faraday in the 1830s showed a connection between electric and magnetic fields: a changing magnetic field, like that produced by the electric current in a rotating coil of wire, produces an electric field, which can drive electric currents in another wire. It is this phenomenon that is used to generate electricity in modern power plants.
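
  As an aside in modern notation rather than Weinberg's prose, Faraday's discovery of induction is usually summarized by his law: the voltage driven around a closed loop of wire equals the rate at which the magnetic flux through the loop changes, which is why a coil rotating near a magnet generates a current.

```latex
% Faraday's law of induction (modern notation, added for illustration):
% the electromotive force around a loop equals minus the rate of change
% of the magnetic flux threading the loop.
\mathcal{E} \;=\; -\,\frac{d\Phi_B}{dt},
\qquad
\Phi_B \;=\; \int_S \mathbf{B}\cdot d\mathbf{A}.
```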

  The final unification of electricity and magnetism was achieved a few decades later, by James Clerk Maxwell. Maxwell thought of electric and magnetic fields as tensions in a pervasive medium, the ether, and expressed what was known about electricity and magnetism in equations relating the fields and their rates of change to each other. The new thing added by Maxwell was that, just as a changing magnetic field generates an electric field, so also a changing electric field generates a magnetic field. As often happens in physics, the conceptual basis for Maxwell’s equations in terms of an ether has been abandoned, but the equations survive, even on T-shirts worn by physics students.*

  Maxwell’s theory had a spectacular consequence. Since oscillating electric fields produce oscillating magnetic fields, and oscillating magnetic fields produce oscillating electric fields, it is possible to have a self-sustaining oscillation of both electric and magnetic fields in the ether, or as we would say today, in empty space. Maxwell found around 1862 that this electromagnetic oscillation would propagate at a speed that, according to his equations, had just about the same numerical value as the measured speed of light. It was natural for Maxwell to jump to the conclusion that light is nothing but a mutually self-sustaining oscillation of electric and magnetic fields. Visible light has a frequency far too high for it to be produced by currents in ordinary electric circuits, but in the 1880s Heinrich Hertz was able to generate waves in accordance with Maxwell’s equations: radio waves that differed from visible light only in having much lower frequency. Electricity and magnetism had thus been unified not only with each other, but also with optics.
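
  In modern notation, not part of the text above, the number Maxwell obtained is fixed by two constants measured in laboratory experiments on electricity and magnetism; the arithmetic is a one-line check.

```latex
% Speed of electromagnetic waves in vacuum, from the measured
% permeability and permittivity of free space (modern SI values).
c \;=\; \frac{1}{\sqrt{\mu_0\,\varepsilon_0}}
  \;=\; \frac{1}{\sqrt{(4\pi\times10^{-7}\,\mathrm{T\,m/A})\,(8.854\times10^{-12}\,\mathrm{F/m})}}
  \;\approx\; 3.00\times10^{8}\ \mathrm{m/s},
```

  in agreement with the measured speed of light.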

  As with electricity and magnetism, progress in understanding the nature of matter began with quantitative measurement, here measurement of the weights of substances participating in chemical reactions. The key figure in this chemical revolution was a wealthy Frenchman, Antoine Lavoisier. In the late eighteenth century he identified hydrogen and oxygen as elements and showed that water is a compound of hydrogen and oxygen, that air is a mixture of elements, and that fire is due to the combination of other elements with oxygen. Also on the basis of such measurements, it was found a little later by John Dalton that the weights with which elements combine in chemical reactions can be understood on the hypothesis that pure chemical compounds like water or salt consist of large numbers of particles (later called molecules) that themselves consist of definite numbers of atoms of pure elements. The water molecule, for instance, consists of two hydrogen atoms and one oxygen atom. In the following decades chemists identified many elements: some familiar, like carbon, sulfur, and the common metals; and others newly isolated, such as chlorine, calcium, and sodium. Earth, air, fire, and water did not make the list. The correct chemical formulas for molecules like water and salt were worked out, in the first half of the nineteenth century, allowing the calculation of the ratios of the masses of the atoms of the different elements from measurements of the weights of substances participating in chemical reactions.
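
  A worked illustration, not drawn from the text: once water was known to have the formula H2O, the measured combining weights fixed the ratio of the oxygen and hydrogen atomic masses.

```latex
% Combining weights: water contains roughly 8 grams of oxygen for
% every 1 gram of hydrogen. With two hydrogen atoms per oxygen atom
% (H2O), the atomic mass ratio follows:
\frac{m_{\mathrm{O}}}{m_{\mathrm{H}}}
  \;=\; \underbrace{\frac{8}{1}}_{\text{mass ratio O:H in water}}
        \times
        \underbrace{\frac{2}{1}}_{\text{H atoms per O atom}}
  \;\approx\; 16.
```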

  The atomic theory of matter scored a great success when Maxwell and Ludwig Boltzmann showed how heat could be understood as energy distributed among vast numbers of atoms or molecules. This step toward unification was resisted by some physicists, including Pierre Duhem, who doubted the existence of atoms and held that the theory of heat, thermodynamics, was at least as fundamental as Newton’s mechanics and Maxwell’s electrodynamics. But soon after the beginning of the twentieth century several new experiments convinced almost everyone that atoms are real. One series of experiments, by J. J. Thomson, Robert Millikan, and others, showed that electric charges are gained and lost only as multiples of a fundamental charge: the charge of the electron, a particle that had been discovered by Thomson in 1897. The random “Brownian” motion of small particles suspended in liquids was interpreted by Albert Einstein in 1905 as due to collisions of these particles with individual molecules of the liquid, an interpretation confirmed by experiments of Jean Perrin. Responding to the experiments of Thomson and Perrin, the chemist Wilhelm Ostwald, who earlier had been skeptical about atoms, expressed his change of mind in 1908, in a statement that implicitly looked all the way back to Democritus and Leucippus: “I am now convinced that we have recently become possessed of experimental evidence of the discrete or grained nature of matter, which the atomic hypothesis sought in vain for hundreds and thousands of years.”4
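
  The quantitative core of the Maxwell–Boltzmann picture can be stated in one line, an added illustration in modern notation: temperature measures the average kinetic energy of the molecules.

```latex
% Kinetic theory of heat: for a monatomic ideal gas the average
% kinetic energy per molecule is proportional to the absolute
% temperature, with Boltzmann's constant k_B as the conversion factor.
\left\langle \tfrac{1}{2}\, m v^2 \right\rangle \;=\; \tfrac{3}{2}\, k_B T .
```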

  But what are atoms? A great step toward the answer was taken in 1911, when experiments in the Manchester laboratory of Ernest Rutherford showed that the mass of gold atoms is concentrated in a small, heavy, positively charged nucleus, around which revolve lighter, negatively charged electrons. The electrons are responsible for the phenomena of ordinary chemistry, while changes in the nucleus release the large energies encountered in radioactivity.

  This raised a new question: what keeps the orbiting atomic electrons from losing energy through the emission of radiation, and spiraling down into the nucleus? Not only would this rule out the existence of stable atoms; the frequencies of the radiation emitted in these little atomic catastrophes would form a continuum, in contradiction with the observation that atoms can emit and absorb radiation only at certain discrete frequencies, seen as bright or dark lines in the spectra of gases. What determines these special frequencies?

  The answers were worked out in the first three decades of the twentieth century with the development of quantum mechanics, the most radical innovation in physical theory since the work of Newton. As its name suggests, quantum mechanics requires a quantization (that is, a discreteness) of the energies of various physical systems. Niels Bohr in 1913 proposed that an atom can exist only in states of certain definite energies, and gave rules for calculating these energies in the simplest atoms. Following earlier work of Max Planck, Einstein had already in 1905 suggested that the energy in light comes in quanta, particles later called photons, each photon with an energy proportional to the frequency of the light. As Bohr explained, when an atom loses energy by emitting a single photon, the energy of that photon must equal the difference in the energies of the initial and final atomic states, a requirement that fixes its frequency. There is always an atomic state of lowest energy, which cannot emit radiation and is therefore stable.
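
  The two rules just described can be written compactly in modern notation; the hydrogen numbers below are standard textbook values, included only as an added illustration. Einstein's relation fixes each photon's energy, and Bohr's condition then fixes the frequencies an atom can emit or absorb.

```latex
% Planck–Einstein relation and Bohr's frequency condition.
E_{\text{photon}} \;=\; h\nu,
\qquad
h\nu \;=\; E_{\text{initial}} - E_{\text{final}}.

% Illustration for hydrogen, with Bohr's levels E_n = -13.6 eV / n^2:
% the transition from n = 2 to n = 1 gives
h\nu \;=\; 13.6\,\mathrm{eV}\,\bigl(1 - \tfrac{1}{4}\bigr) \;=\; 10.2\,\mathrm{eV}
\quad\Rightarrow\quad
\nu \approx 2.47\times10^{15}\ \mathrm{Hz}
\quad (\lambda \approx 122\ \mathrm{nm}).
```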

  These early steps were followed in the 1920s with the development of general rules of quantum mechanics, rules that can be applied to any physical system. This was chiefly the work of Louis de Broglie, Werner Heisenberg, Wolfgang Pauli, Pascual Jordan, Erwin Schrödinger, Paul Dirac, and Max Born. The energies of allowed atomic states are calculated by solving an equation, the Schrödinger equation, of a general mathematical type that was already familiar from the study of sound and light waves. A string on a musical instrument can produce just those tones for which a whole number of half wavelengths fit on the string; analogously, Schrödinger found that the allowed energy levels of an atom are those for which the wave governed by the Schrödinger equation just fits around the atom without discontinuities. But as first recognized by Born, these waves are not waves of pressure or of electromagnetic fields, but waves of probability: a particle is most likely to be near where the wave function is largest.
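
  The musical-string analogy can be made concrete with the simplest textbook case, a particle confined to a box of length L. This is an added illustration in modern notation, not Weinberg's example, but it shows how requiring whole numbers of half wavelengths to fit produces discrete energies.

```latex
% Standing-wave quantization for a particle in a box of length L:
% a whole number n of half wavelengths must fit, L = n*lambda/2.
% Combining this with de Broglie's relation p = h/lambda gives
% discrete allowed energies, just as a string gives discrete tones.
E_n \;=\; \frac{p^2}{2m} \;=\; \frac{h^2}{8\,m\,L^2}\, n^2,
\qquad n = 1, 2, 3, \ldots
```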

  Quantum mechanics not only solved the problem of the stability of atoms and the nature of spectral lines; it also brought chemistry into the framework of physics. With the electrical forces among electrons and atomic nuclei already known, the Schrödinger equation could be applied to molecules as well as to atoms, and allowed the calculation of the energies of their various states. In this way it became possible in principle to decide which molecules are stable and which chemical reactions are energetically allowed. In 1929 Dirac announced triumphantly that “the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known.”5

  This did not mean that chemists would hand over their problems to physicists, and retire. As Dirac well understood, for all but the smallest molecules the Schrödinger equation is too complicated to be solved, so the special tools and insights of chemistry remain indispensable. But from the 1920s on, it would be understood that any general principle of chemistry, such as the rule that metals form stable compounds with halogen elements like chlorine, is what it is because of the quantum mechanics of nuclei and electrons acted on by electromagnetic forces.

  Despite its great explanatory power, this foundation was itself far from being satisfactorily unified. There were particles: electrons and the protons and neutrons that make up atomic nuclei. And there were fields: the electromagnetic field, and whatever then-unknown short-range fields are presumably responsible for the strong forces that hold atomic nuclei together and for the weak forces that turn neutrons into protons or protons into neutrons in radioactivity. This distinction between particles and fields began to be swept away in the 1930s, with the advent of quantum field theory. Just as there is an electromagnetic field, whose energy and momentum are bundled in particles known as photons, so there is also an electron field, whose energy and momentum are bundled in electrons, and likewise for other types of elementary particles.

  This was far from obvious. We can directly feel the effects of gravitational and electromagnetic fields because the quanta of these fields have zero mass, and they are particles of a type (known as bosons) that in large numbers can occupy the same state. These properties allow large numbers of photons to build up to form states that we observe as electric and magnetic fields that seem to obey the rules of classical (that is, non-quantum) physics. Electrons, in contrast, have mass and are particles of a type (known as fermions) no two of which can occupy the same state, so that electron fields are never apparent in macroscopic observations.

  In the late 1940s quantum electrodynamics, the quantum field theory of photons, electrons, and antielectrons, scored stunning successes, with the calculation of quantities like the strength of the electron’s magnetic field that agreed with experiment to many decimal places.* Following this achievement, it was natural to try to develop a quantum field theory that would encompass not only photons, electrons, and antielectrons but also the other particles being discovered in cosmic rays and accelerators and the weak and strong forces that act on them.

  We now have such a quantum field theory, known as the Standard Model. The Standard Model is an expanded version of quantum electrodynamics. Along with the electron field there is a neutrino field, whose quanta are fermions like electrons but with zero electric charge and nearly zero mass. There is a pair of quark fields, whose quanta are the constituents of the protons and neutrons that make up atomic nuclei. For reasons that no one understands, this menu is repeated twice, with much heavier quarks and much heavier electron-like particles and their neutrino partners. The electromagnetic field appears in a unified “electroweak” picture along with other fields responsible for the weak nuclear interactions, which allow protons and neutrons to convert into one another in radioactive decays. The quanta of these fields are heavy bosons: the electrically charged W+ and W−, and the electrically neutral Z0. There are also eight mathematically similar “gluon” fields responsible for the strong nuclear interactions, which hold quarks together inside protons and neutrons. In 2012 the last missing piece of the Standard Model was discovered: a heavy electrically neutral boson that had been predicted by the electroweak part of the Standard Model.

  The Standard Model is not the end of the story. It leaves out gravitation; it does not account for the “dark matter” that astronomers tell us makes up five-sixths of the matter in the universe; and it involves far too many unexplained numerical quantities, like the ratios of the masses of the various quarks and electron-like particles. But even so, the Standard Model provides a remarkably unified view of all types of matter and force (except for gravitation) that we encounter in our laboratories, in a set of equations that can fit on a single sheet of paper. We can be certain that the Standard Model will appear as at least an approximate feature of any better future theory.

  The Standard Model would have seemed unsatisfying to many natural philosophers from Thales to Newton. It is impersonal; there is no hint in it of human concerns like love or justice. No one who studies the Standard Model will be helped to be a better person, as Plato expected would follow from the study of astronomy. Also, contrary to what Aristotle expected of a physical theory, there is no element of purpose in the Standard Model. Of course, we live in a universe governed by the Standard Model and can imagine that electrons and the two light quarks are what they are to make us possible, but then what do we make of their heavier counterparts, which are irrelevant to our lives?

  The Standard Model is expressed in equations governing the various fields, but it cannot be deduced from mathematics alone. Nor does it follow straightforwardly from observation of nature. Indeed, quarks and gluons are attracted to each other by forces that increase with distance, so these particles can never be observed in isolation. Nor can the Standard Model be deduced from philosophical preconceptions. Rather, the Standard Model is a product of guesswork, guided by aesthetic judgment, and validated by the success of many of its predictions. Though the Standard Model has many unexplained aspects, we expect that at least some of these features will be explained by whatever deeper theory succeeds it.

  The old intimacy between physics and astronomy has continued. We now understand nuclear reactions well enough not only to calculate how the Sun and stars shine and evolve, but also to understand how the lightest elements were produced in the first few minutes of the present expansion of the universe. And as in the past, astronomy now presents physics with a formidable challenge: the expansion of the universe is speeding up, presumably owing to dark energy that is contained not in particle masses and motions, but in space itself.

  There is one aspect of experience that at first sight seems to defy understanding on the basis of any unpurposeful physical theory like the Standard Model. We cannot avoid teleology in talking of living things. We describe hearts and lungs and roots and flowers in terms of the purpose they serve, a tendency that was only increased with the great expansion after Newton of information about plants and animals due to naturalists like Carl Linnaeus and Georges Cuvier. Not only theologians but also scientists including Robert Boyle and Isaac Newton have seen the marvelous capabilities of plants and animals as evidence for a benevolent Creator. Even if we can avoid a supernatural explanation of the capabilities of plants and animals, it long seemed inevitable that an understanding of life would rest on teleological principles very different from those of physical theories like Newton’s.

  The unification of biology with the rest of science first began to be possible in the mid-nineteenth century, with the independent proposals by Charles Darwin and Alfred Russel Wallace of the theory of evolution through natural selection. Evolution was already a familiar idea, suggested by the fossil record. Many of those who accepted the reality of evolution explained it as a result of a fundamental principle of biology, an inherent tendency of living things to improve, a principle that would have ruled out any unification of biology with physical science. Darwin and Wallace instead proposed that evolution acts through the appearance of inheritable variations, with favorable variations no more likely than unfavorable ones, but with the variations that improve the chances of survival and reproduction being the ones that are likely to spread.*

  It took a long time for natural selection to be accepted as the mechanism for evolution. No one in Darwin’s time knew the mechanism for inheritance, or for the appearance of inheritable variations, so there was room for biologists to hope for a more purposeful theory. It was particularly distasteful to imagine that humans are the result of millions of years of natural selection acting on random inheritable variations. Eventually the discovery of the rules of genetics and of the occurrence of mutations led in the twentieth century to a “neo-Darwinian synthesis” that put the theory of evolution through natural selection on a firmer basis. Finally this theory was grounded on chemistry, and thereby on physics, through the realization that genetic information is carried by the double helix molecules of DNA.

  So biology joined chemistry in a unified view of nature based on physics. But it is important to acknowledge the limitations of this unification. No one is going to replace the language and methods of biology with a description of living things in terms of individual molecules, let alone quarks and electrons. For one thing, even more than the large molecules of organic chemistry, living things are too complicated for such a description. More important, even if we could follow the motion of every atom in a plant or animal, in that immense mass of data we would lose the things that interest us—a lion hunting antelope or a flower attracting bees.

 
