
Cracking the Particle Code of the Universe


by John W. Moffat


  STRONG INTERACTIONS BEFORE THE QUARK MODEL

  During the late 1950s and early 1960s, quantum field theory as the fundamental language of particle physics fell into decline in the high-energy physics community. It was not possible to perform quantum field theory calculations successfully for strong interactions. The reason was that the strong-interaction coupling constant is larger than unity, in contrast to the coupling constant of QED—the fine-structure constant—which is much smaller, approximately 1/137. Because of the size of the strong-interaction coupling constant, any perturbation theory calculation in strong-interaction quantum field theory failed, and no known methods in quantum field theory could overcome this obstacle at the time.
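  To see why, recall that perturbation theory expands a physical quantity in powers of the coupling constant. In standard notation (an illustration added here, not in the original text), a QED amplitude takes the form

  $$A = A_0\left(1 + c_1\,\alpha + c_2\,\alpha^2 + \cdots\right), \qquad \alpha = \frac{e^2}{4\pi\epsilon_0\hbar c} \approx \frac{1}{137},$$

  so each successive term is roughly a hundred times smaller than the last, and the first few terms give an excellent approximation. If the coupling is instead a strong-interaction constant of order one or larger, every term is as big as the first, and truncating the series tells you nothing.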

  Theorists turned to the S-matrix approach to strong interactions. Developed by Werner Heisenberg in the 1940s, building on an earlier idea of the American physicist John Wheeler, the S-matrix was a way of solving problems in the scattering of strongly interacting particles. The idea was that free, noninteracting particles—hadrons, such as protons—would scatter off other hadrons. The physics at the actual scattering site in accelerators was treated as a “black box”—that is, exactly how the particles scattered was taken to be unknowable to the theorists. This is in contrast to the quantum field theory approach, in which everything should be knowable about the interactions of the particles at the collision events in spacetime. S-matrix theory became a veritable industry, centered mainly at the University of California at Berkeley, in a group headed by Geoffrey Chew and Stanley Mandelstam. The program led to the phenomenological Regge pole models of strong interactions founded by the Italian theorist Tullio Regge.

  With the S-matrix approach, it was not necessary to know about the physical processes at the actual scattering site. To calculate scattering cross-sections, one needed to know only the incoming particles and the outgoing particles. The S-matrix was a mathematical way of connecting the incoming particles to the outgoing particles after the scattering had occurred. The idea was that the mathematical properties of the S-matrix were sufficient to explain all the needed physics of strong-interaction scattering in accelerators, obviating the need for quantum field theory. Two mathematical properties of the S-matrix, called analyticity and unitarity, played important roles in the development of S-matrix theory. Analyticity had to do with the mathematics of complex variables, and unitarity with the demand that the scattering probabilities described by the S-matrix not exceed 100 percent.
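  In standard notation (added here for illustration), the amplitude for an incoming state i to scatter into an outgoing state f is the matrix element S_{fi} = ⟨f|S|i⟩, and unitarity is the condition

  $$S^\dagger S = \mathbb{1} \quad\Longrightarrow\quad \sum_f |S_{fi}|^2 = 1,$$

  which says that the probabilities of all possible outcomes of a collision add up to exactly 100 percent.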

  The S-matrix program became the dominant activity in the particle physics community during the early 1960s, before the appearance of the quark–parton model. An important motivation for this work was the “bootstrapping” mechanism within the S-matrix formalism. Instead of seeking to explain hadron physics in terms of basic building blocks, the hadrons interacted in a democratic way, each with equal importance. The proponents of S-matrix theory gave up the idea that some particles were more “elementary” than others. All hadrons—protons, neutrons, and hadron resonances—these proponents claimed, were created equal, and bootstrapped themselves to produce a self-consistent solution to strong-interaction physics. In other words, exchanges between the particles produced the strong forces that held them together. Experimentalists observed that when the spins, or angular momenta, of the hadron resonances were plotted against the squares of their masses, the points lay on straight lines rising from zero, and theorists conjectured that the lines continued indefinitely, implying particles of ever-higher spin at higher and higher energies. Those promoting the bootstrapping program claimed that one should not base particle physics on a fundamental particle unit such as the quark; rather, the mutual interactions of the hadrons (considered to be elementary particles at the time) bootstrapped themselves to produce other hadrons. All of this was expressed in terms of the S-matrix.
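  These straight lines, known as Chew–Frautschi plots, can be summarized in a single standard formula (added here for illustration): a resonance of mass M and spin J lies on a linear Regge trajectory,

  $$J = \alpha(M^2) = \alpha(0) + \alpha'\,M^2,$$

  with a roughly universal slope α′ ≈ 0.9 GeV⁻² for the hadrons. Extending the line upward predicts resonances of ever-higher spin at ever-higher mass, which is the infinite tower of spins the theorists conjectured.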

  Tullio Regge introduced the idea of Regge theory in 1957. By allowing the angular momentum, or spin, of the hadron resonances to take on continuous complex values as functions of energy, rather than only integer and half-integer values, Regge identified the hadron resonances as poles in the complex angular momentum plane. Regge poles became a flourishing phenomenological activity during the 1960s and 1970s, occupying particle physicists more than quantum field theory did. The quark–parton model eventually became more popular in the particle physics community because the S-matrix approach to strong interactions, including Regge pole theory, did not prove as promising a route to understanding particle physics as its pioneers had hoped.

  GETTING A GRIP ON THE STRONG FORCE WITH THE QUARK–PARTON MODEL

  Along with the program of S-matrix theory and Regge poles, physicists pursued other ideas in attempting to understand how the hadrons discovered up until then interacted with one another and how they could be categorized into groups of particles. Physicists such as Abdus Salam and John Ward, Sheldon Glashow, and Jun John Sakurai extended the ideas of Yang and Mills into strong-interaction physics. They used the symmetry group SU(3) to describe the Yang–Mills interactions, which led them to eight intermediate vector bosons carrying the strong force, extending the original Yang–Mills proposal of a triplet of intermediate vector bosons, as occurs in SU(2). These eight bosons are the “octet” in SU(3). Yuval Ne’eman, who was a member of Salam’s group at Imperial College London, independently published a paper21 in which he assigned the known baryons and mesons to octets of SU(3).

  A month after Ne’eman’s paper, Murray Gell-Mann sent out a preprint of his paper called “The Eightfold Way: A Theory of Strong Interaction Symmetry,” which independently contained the same ideas as those proposed by Ne’eman. The octets of SU(3) baryons that both Gell-Mann and Ne’eman were discussing accommodated the “flavors” of baryons and mesons. Flavor, we recall, is a characteristic that categorizes different hadrons. Gell-Mann never published this preprint because he doubted whether experiments could verify his proposals. However, he eventually published a paper in which, in his characteristically clever and cautious way, he discussed the possible symmetries of strong interactions between particles.22 Only in one of the last sections of the paper did he actually discuss his idea of the Eightfold Way in SU(3) as a possible description of strong interactions.
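  In the later quark-model language (an anachronism here, because quarks had not yet been proposed, but it makes the counting transparent), the octets arise from standard SU(3) group decompositions:

  $$3 \otimes \bar{3} = 8 \oplus 1 \ \ (\text{mesons}), \qquad 3 \otimes 3 \otimes 3 = 10 \oplus 8 \oplus 8 \oplus 1 \ \ (\text{baryons}).$$

  A quark–antiquark pair yields a meson octet plus a singlet, and three quarks yield a decuplet, two octets, and a singlet, which are exactly the groupings the Eightfold Way required.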

  In 1974, when Gell-Mann visited the University of Toronto as my guest, he advised me that when writing physics papers one should attempt to include all possible variants of the proposed theory, thereby preventing some future author from generalizing your theory and leaving you behind without credit. His paper on the symmetries of baryons and mesons, which was revised twice before it was finally published, was a good example of that philosophy because, in it, Gell-Mann managed to cover the most reasonable ways of grouping hadrons. One of the interesting aspects of the paper is that Gell-Mann appeared to renounce the idea of a gauge quantum field theory for describing strongly interacting particles, sticking to his use of group theory to classify the hadrons. The possibility of a gauge quantum field theory came only later, with his proposal of QCD.

  Richard Feynman, one of the inventors of QED, which enjoyed remarkable experimental success, wanted to get back into the theoretical game of particle physics during the early 1970s. He had been preoccupied mainly with weak interactions and, in collaboration with Gell-Mann, had discovered the V minus A (V – A) theory of weak interactions, which incorporated the violation of parity (left–right symmetry). Robert Marshak and George Sudarshan had independently discovered the V – A theory of weak interactions. V – A stands for vector minus axial vector, referring to the two currents of weak interactions.

  Feynman sought out James Bjorken at Stanford and learned about his theoretical analysis of the detection of hard-core particles inside the proton and neutron at SLAC. Feynman reinterpreted the physical meaning of Bjorken’s scaling laws, which described the SLAC results on deep inelastic scattering of electrons and protons, and invented a mathematical way of describing the probabilities of how many of these core particles were embedded in the proton and neutron. Although he knew that Gell-Mann’s three quarks formed the basis of the SU(3) quark model, Feynman wanted to approach the problem from a more general point of view, based on Bjorken’s analysis of the experimental data. Despite their earlier collaboration on weak-interaction theory, Feynman and Gell-Mann were quite competitive, and Feynman wanted to demonstrate his own brilliance in the field of strong interactions. He called his particles “partons,” and established the probabilities for how many would exist inside the proton for a given energy and momentum transfer in the scattering of protons and electrons in the accelerator.
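  Feynman’s probabilities are what we now call parton distribution functions. In modern notation (a sketch added here, not Feynman’s original formulation), f_q(x) is the probability density for finding a parton of type q carrying a fraction x of the proton’s momentum, and Bjorken scaling is the statement that the measured structure function depends only on x, not separately on the collision energy:

  $$F_2(x) = \sum_q e_q^2\, x\, f_q(x),$$

  where e_q is the electric charge of the parton in units of the proton charge.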

  Feynman conceived of the proton as being filled with many of his tiny, hard partons. He assumed that these particles were not interacting with one another—that is, they were freely moving particles inside the proton. He gave no explanation of why these free partons did not escape the proton to become visible in accelerators. He pictured the proton as a flat pancake, the flatness being caused by the contraction of the proton along its direction of motion because it was moving close to the speed of light (the Lorentz–FitzGerald contraction of material rods in special relativity).

  When one proton moves past another, there are two flat pancakes of partons moving relative to each other at close to the speed of light. Feynman then calculated the probability of a parton in one proton interacting with a parton or partons in the other proton when the two collide. This interaction could be electromagnetic, resulting from the exchange of photons between the electrically charged partons. Moreover, a parton in one proton could also interact with partons in the other proton through strong interactions by exchanging mesons. From his calculations of the electromagnetic interactions of the partons in the protons, Feynman was able to obtain Bjorken’s scaling law and explain the high-energy diffractive behavior of the electron–proton cross-sections measured at SLAC. His explanation was much simpler and more intuitive than Bjorken’s esoteric one based on particle physics and quantum field theory calculations. However, what did this parton model of Feynman’s have to do with the quark model, in which the protons and neutrons were made up of three quarks, not many quarks? Could the partons be identified with the fractionally charged quarks?

  Experiments were then begun at SLAC and CERN to discover how many quarks or partons there were in a proton. The initial results were not promising: they could not establish that there were only three hard, pointlike objects in the proton. How could theorists account for the many partons in Feynman’s model of the proton? They were not long in coming up with explanations. In analogy with models of the atom, they postulated that the three quarks of Gell-Mann, or the aces of Zweig, were valence quarks. Analogous to the cloud of electrons surrounding the nucleus in an atom, they hypothesized a “sea” of quarks and antiquarks buzzing around the three basic valence quarks. The experimentalists still did not find results that could identify partons with quarks. Something was missing. So theorists postulated further that the interactions binding the valence quarks inside the proton at lower energies made it difficult to distinguish how many partons or quarks were inside the proton. Later, when color charge for quarks was invented, and the interaction binding quarks was understood to be caused by colored gluons, the experimental results of proton collisions at CERN gave a comprehensible picture of partons being identified with quarks.
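  One simple consistency check in the parton language (a standard result, not spelled out in the original text) is the momentum sum rule: the momentum fractions of all the partons must add up to that of the whole proton,

  $$\sum_i \int_0^1 x\, f_i(x)\, dx = 1.$$

  Measurements showed that the quarks and antiquarks alone carry only about half the proton’s momentum; the other half is carried by the gluons, consistent with the picture of gluons binding the valence quarks.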

  Feynman’s parton model eventually became an important technical method for analyzing proton–proton and proton–antiproton collisions. The quarks and antiquarks that sprayed out as jets from these collisions were analyzed using his parton distribution functions. Bjorken’s scaling laws were also reconfirmed experimentally; the same techniques were applied both to electron–proton scattering and to weak-interaction collisions involving neutrinos.

  Two schools of thought emerged during the 1970s concerning progress in high-energy physics. One held that Feynman’s parton model succeeded in explaining the short-distance scaling behavior of the SLAC experiments precisely because it ignored quantum field theory. The other held that one should incorporate the Feynman and Gell-Mann parton–quark model and the scaling laws into quantum field theory, in particular into a Yang–Mills field theory.

  During these investigations, David Gross and his graduate student Frank Wilczek, and, independently, David Politzer, discovered what they called “asymptotic freedom.” They found that the partons and quarks become essentially free, noninteracting particles at high energies because the coupling strength between them decreases as the energy increases. This is in contrast to pure QED of photons and electrons. In QED, the cloud of virtual electron–positron pairs surrounding an electron screens its charge, so as a probing particle penetrates the cloud and approaches the target electron, the electromagnetic force increases; this is the screening, or shielding, effect. In contrast, Gross, Wilczek, and Politzer, and also, independently, Gerard ’t Hooft, discovered that the strong force is antiscreened: it decreases in strength as the probe plows its way through the virtual cloud of quark–antiquark pairs and gluons and closely approaches the target quark. The reason for the difference between QED and QCD is the color charge carried by the gluons, which, unlike photons, interact among themselves. This asymptotic freedom of quarks was confirmed experimentally; at very high energies, quarks can behave as if they are “free” particles. The discovery strengthened the case for describing strong interactions within Yang–Mills gauge theory. Gross, Politzer, and Wilczek won the Nobel Prize in 2004 for discovering asymptotic freedom in QCD.
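  Quantitatively, asymptotic freedom is captured by the running of the strong coupling with the momentum transfer Q of a collision. To one loop (the standard QCD result, quoted here for illustration):

  $$\alpha_s(Q^2) = \frac{12\pi}{(33 - 2n_f)\,\ln\!\left(Q^2/\Lambda_{\rm QCD}^2\right)},$$

  where n_f is the number of quark flavors and Λ_QCD ≈ 200 MeV sets the scale of the strong interactions. For n_f ≤ 16 the coefficient 33 − 2n_f is positive, so α_s shrinks as Q grows; in QED the corresponding coefficient has the opposite sign, and the coupling grows instead.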

  Scattering experiments during the early 1970s confirmed the color charge of quarks. Color charge avoided the violation of Pauli’s exclusion principle, which says that two identical fermions cannot occupy the same physical state. The numerical factor of three that occurred in cross-section calculations, indicating that there were three “colors”—red, blue, and green—associated with each quark, was found to be real, although, of course, quarks have no actual colors. There had been a healthy amount of skepticism about the reality of this hypothesized characteristic of quarks. According to QCD, the rate of decay of a pi meson depends on the color charges of the quark and antiquark that compose it. The rate of decay of an electrically neutral pi meson into two photons is nine times faster with colored quarks than it would be if the quarks were colorless. These predictions of QCD were first verified experimentally at the Adone electron–positron collider in Italy. This was a strong impetus for the particle physics community to accept, finally, that although quarks could never be observed outside the proton and neutron, they truly did exist as real objects, confined inside the proton and neutron.
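  The factor of nine is easy to see with a standard counting argument (added here for illustration): the amplitude for the neutral pion to decay into two photons is proportional to the number of quark colors N_c, so the rate scales as its square,

  $$\Gamma(\pi^0 \to \gamma\gamma) \propto N_c^2, \qquad \left(\frac{3}{1}\right)^2 = 9,$$

  and the measured rate agrees with N_c = 3.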

  Originally, Gell-Mann contended that, in studying quarks, one should be concerned only with the white, colorless baryons and mesons, not with what they were possibly composed of. This was because he was still skeptical about the reality of quarks. Physicists have now abandoned this position. Today, when experimentalists at the high-energy accelerators study the debris of particles smashing together, they have to consider the color aspects of quarks and gluons. It is not enough to consider just a colorless gluon. The disturbing psychological aspect of all these colored attributes of quarks and gluons is that you cannot “see” them experimentally. However, the indirect evidence that emerges through the scattering of protons at energies such as those now achievable at the LHC is overwhelming. Seeing is believing, or believing is seeing, as David Gross has said. When two protons collide at the LHC, jets of hadrons are seen to emerge from the point of collision. Experimentalists are now able to interpret these jets as originating from quarks and gluons. The experimental evidence that we have to consider eight colored gluons, and not just one colorless gluon, is now substantial and incontrovertible. Software has been developed for the huge grid of computers used by the LHC to analyze the collision data that probe the quark and gluon content of these jets. The jets issuing from the spray of particles after the collisions can be identified as gluons with all their eight color combinations taken into account. The idea of quarks and partons can no longer be considered just a mathematical construction. It has become part and parcel of the everyday life of experimentalists at the LHC.

  The remarkable story of the discovery of quarks and gluons illustrates the necessary and fruitful interplay between theory and experiment in physics and in science in general. The imagination and sometimes wild speculation of the theorists are restrained by the straitjacket of experimental physics through the extraordinary efforts of the experimentalists at high-energy accelerators such as SLAC, the Tevatron, and the LHC.

  2

  Detecting Subatomic Particles

  Our understanding of matter has developed through reducing it into smaller and smaller units. Starting with the Greeks, this process has continued through the centuries, except for a lengthy hiatus from the Middle Ages to the 19th century. Einstein’s explanation of Brownian motion, which indicated the existence of atoms, and then the detection of the electron by J. J. Thomson sped up the whole process of reductionism. The invention of the accelerator meant that particles such as protons could collide at higher and higher energies, enabling us to “see” subatomic particles at smaller and smaller distance scales. We will soon be able to probe the structure of matter down to distances of only about 10⁻¹⁸ cm, corresponding to an energy of 14 TeV. This should allow us to observe the physical properties of what we now believe are the ultimate units of matter—namely, the quarks and the leptons.
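  The quoted distance follows from the quantum relation between collision energy and resolvable length (a rough estimate added here, using ħc ≈ 197 MeV·fm):

  $$\Delta x \sim \frac{\hbar c}{E} \approx \frac{197\ \text{MeV}\cdot\text{fm}}{14\ \text{TeV}} \approx 1.4\times10^{-5}\ \text{fm} \approx 10^{-18}\ \text{cm},$$

  so higher collision energy translates directly into finer spatial resolution.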

 
