Cracking the Particle Code of the Universe


by Moffat, John W.


  The biggest accelerator is our own universe. The universe produces cosmic rays, the discovery of which in the early 1910s is credited to Austrian physicist Victor Hess and to Theodor Wulf, a Jesuit priest and physics teacher. Wulf constructed an electroscope, an instrument that was sensitive to charged particles. He discovered that his electroscope gradually lost its charge, indicating that ionizing radiation was discharging it. It was originally assumed that this radiation emanated from the earth. But when Wulf asked French physicist Paul Langevin to verify this assumption by doing an experiment with the electroscope on top of the Eiffel Tower, Langevin discovered that the electroscope discharged faster at the higher altitude of the Eiffel Tower than lower down on the ground. The radiation seemed to be coming from the sky.

  Then, Hess investigated this phenomenon by ascending in a balloon to see whether the electroscope discharged more significantly as the balloon increased its altitude. Above a height of 5 km, the discharge effect was much stronger than on the ground, from which Hess concluded that the source of radiation was the universe itself.

  American physicist Robert Millikan decided to investigate this phenomenon, too, and named the radiation cosmic rays. Perhaps because of this catchy name, Millikan became better known for the discovery of cosmic rays than Hess, Wulf, or Langevin. However, in 1936, Stockholm awarded Hess one-half of the Nobel Prize for discovering cosmic rays, and the other half went to Carl Anderson, who discovered the positron. For reasons unknown, Wulf got nothing.

  Recent evidence from spacecraft experiments indicates that the sources of cosmic rays are violent supernova explosions. The cosmic rays consist primarily of protons. They are accelerated to very high energies, up to 10²¹ eV, which is more than a hundred million times higher than the energies that man-made accelerators such as the LHC can reach. These cosmic rays travel enormous distances through space and finally hit the earth. However, on their way through space, they collide with particles and produce secondary and tertiary cascades of particles. In these cascades, one can detect the positively and negatively charged pi mesons, K-mesons, and muons. It was by using cosmic rays that Anderson discovered the positron in 1932, confirming Dirac’s prediction of antimatter.

  One of the problems with using cosmic rays as one would use proton beams in accelerators for particle physics experiments, however, is that they bring with them a huge, unknown background. The signal-to-noise ratio is too small to allow a proper control of cosmic ray experiments, compared with experiments performed in the laboratory on earth. Nevertheless, cosmic rays are used today to perform experiments on neutrinos, to determine the oscillation of neutrino flavors into one another, and, potentially, to determine the masses of neutrinos. Present-day ongoing experiments with cosmic rays attempt to identify new particles, despite the serious background problems, and also to obtain an experimental understanding regarding whether Einstein’s special relativity is valid at the very high energies of cosmic rays. So far, no experiment has detected conclusively a violation of special relativity theory.

  MODERN ACCELERATORS AND COLLIDERS

  We now turn our attention to the modern machines that speed up particles very close to the speed of light. When such particles collide with one another, they produce cascades of particles with properties that can be detected by those devices that we have examined earlier. The word accelerator signifies generically a machine for accelerating particles to higher energies, whereas the word collider denotes the head-on collision between two particles or a particle and an antiparticle. Particles can also be accelerated and then hit a fixed target, consisting of a material containing nuclei. In practice, the terms accelerator and collider are used interchangeably.

  Accelerators come in two basic kinds: linear and circular. The first linear accelerator prototype was built by Norwegian engineer Rolf Widerøe in 1928, and it became the progenitor of all high-energy particle accelerators. He used a radio-frequency electric field in resonance with the particles’ motion, adding energy to them on each traversal of the field. Widerøe published the details of his invention in an article in 1928.5

  Ernest Lawrence studied this article and used the information in it as a basis for building the first circular cyclotron accelerator during the early 1930s at Berkeley. This cyclotron consisted of a round metal chamber separated into two pieces, with a magnetic field perpendicular to the chamber. An alternating voltage applied across the gap between the two halves created an electric field there. Protons were injected into the middle of the chamber, and the magnetic field controlled their circular orbits, keeping them in curved paths. Switching the electric field at the right moments accelerated the protons each time they passed from one half of the instrument to the other. After the protons were accelerated sufficiently, they were extracted from the apparatus and focused on a target. This cyclotron was the grandfather of all circular accelerators, including the LHC today. Lawrence received the Nobel Prize in 1939 for developing the cyclotron (Figure 2.5).
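The reason the cyclotron works is that, for speeds well below the speed of light, a charged particle’s orbital frequency in a uniform magnetic field does not depend on its speed, so a fixed-frequency alternating voltage stays in step with the protons as they spiral outward. A minimal sketch of this relation, f = qB/(2πm), with illustrative values rather than Lawrence’s actual machine parameters:

```python
import math

# Cyclotron frequency f = qB / (2*pi*m): independent of the particle's
# speed as long as it remains non-relativistic.
Q_PROTON = 1.602176634e-19   # proton charge in coulombs
M_PROTON = 1.67262192e-27    # proton mass in kilograms

def cyclotron_frequency(b_field_tesla, charge=Q_PROTON, mass=M_PROTON):
    """RF frequency needed to keep accelerating the particle each half-turn."""
    return charge * b_field_tesla / (2 * math.pi * mass)

# For a 1-tesla field, the alternating voltage must oscillate at ~15 MHz.
f = cyclotron_frequency(1.0)
print(f"{f / 1e6:.1f} MHz")   # prints "15.2 MHz"
```

At the higher, relativistic energies of later machines the mass effectively grows with speed and this simple constant frequency no longer works, which is one reason synchrotrons superseded the cyclotron.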

  The first modern linear accelerator was constructed by John Cockcroft and Ernest Walton during the early 1930s; they received the Nobel Prize in 1951 for splitting the atomic nucleus. Their machine accelerated protons to about 700,000 electron volts (700 keV). This, of course, is low energy compared with present-day accelerators, but was sufficient to study nuclear physics. Only lower energies are needed to probe the protons and neutrons of the atomic nuclei, as opposed to the subatomic quark constituents of the proton and neutron that are studied today. The Cockcroft–Walton accelerator was a small linear accelerator.

  The most famous example of a linear collider is the one built at Stanford University during the mid-1960s: the Stanford Linear Accelerator Center (or SLAC). This 2-mi-long collider, the largest linear accelerator in the world, consists essentially of two linear accelerators, with the beam in one going in one direction and the beam in the other going in the opposite direction; one beam consists of electrons and the other of positrons. These particles do not experience significant energy loss, referred to technically as synchrotron radiation, because their trajectories are kept linear and not bent. However, this statement is not strictly true, because at SLAC the beams are bent in a slightly curved shape to force the particles to collide head-on.

  Figure 2.5 Lawrence’s original 11-in cyclotron. Photo courtesy of the Science Museum (London) and the Lawrence Berkeley National Laboratory.

  Other linear electron–positron accelerators have been built in Italy, Siberia, and France, and a new one is planned, to be called the international linear collider (ILC). Despite the fame and the usefulness of SLAC and other linear accelerators, currently this type of accelerator is used mainly to give an initial boost of acceleration to particle beams as they enter a main circular accelerator. In the future, however, very large linear accelerators such as the planned ILC will be able to obtain more precise data than the circular accelerators because they will collide positrons and electrons, avoiding much of the hadronic background that plagues proton machines such as the LHC. They will be able to measure electromagnetic particle collisions and to determine precisely the properties of newly discovered particles such as the Higgs boson.

  During the years after World War II, the cyclotron, or circular accelerator, was engineered into a machine that accelerated protons to higher and higher energies, culminating in 1983 with the Tevatron machine at Fermilab near Chicago, which accelerated protons and antiprotons to an energy of almost 2 TeV. Cyclotron machines were also built at Brookhaven National Laboratory and at Argonne National Laboratory near Chicago.

  The idea of the cyclotron is an important concept underlying all future accelerators because it can accelerate protons from low energies in circular orbits to very high energies, keeping them in tight orbits using magnetic fields. Keeping these tight orbits stable requires what is called strong focusing, and this demands building special magnetic field apparatus. The machines designed for strong focusing are called synchrotrons, and they have played an important role in the history of accelerators. For example, the Alternating Gradient Synchrotron at Brookhaven and the Proton Synchrotron (PS) at CERN boost the energies of the accelerators at these sites. The round synchrotrons have gaps along the perimeter, where electric fields are applied to accelerate the protons or antiprotons continuously. At some point, strong magnetic fields are used to extract the protons so they can be focused on a nuclear target. Most synchrotrons use protons as the accelerated particle because protons lose less energy in the form of synchrotron radiation when their orbits are bent by magnetic fields. In comparison, electrons lose a lot of energy and therefore are less valuable as initial “kicker” accelerating particles. The energy loss decelerates the particle, which is exactly what accelerator designers try to avoid.

  In 1961, Italian physicist Bruno Touschek developed the first collider with a single storage ring. In this collider, oppositely charged particles are accelerated within the storage ring in opposite directions and then are focused to collide at a point, producing a cascade of particle debris. The energy available when two particle beams collide in the storage ring is much greater than the energy obtained by a single particle beam hitting a fixed target. The reason is that when a particle hits a fixed target, conservation of momentum forces much of the kinetic energy to be carried forward by the collision debris, leaving less energy available to produce new particles. With colliding beams, on the other hand, the total momentum is close to zero and much less energy is wasted in this way, making the Touschek storage ring collider a valuable advance in the development of accelerators and colliders.
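The size of this advantage follows from special relativity. For two beams of equal energy E colliding head-on, essentially all of 2E is available to make new particles, whereas a beam hitting a fixed target makes available only about the square root of twice the beam energy times the target’s rest energy. A rough sketch (particle masses neglected next to the beam energy):

```python
import math

M_PROTON = 0.938  # proton rest energy in GeV

def cm_energy_collider(e_beam_gev):
    # Head-on beams of equal energy: all of it goes into the collision.
    return 2.0 * e_beam_gev

def cm_energy_fixed_target(e_beam_gev, m_target_gev=M_PROTON):
    # sqrt(s) ~ sqrt(2 * E_beam * m_target) when E_beam >> m_target;
    # the rest of the beam energy is carried forward as motion of the debris.
    return math.sqrt(2.0 * e_beam_gev * m_target_gev)

e = 1000.0  # a 1-TeV proton beam
print(cm_energy_collider(e))       # prints 2000.0 (GeV)
print(cm_energy_fixed_target(e))   # ~43 GeV
```

A 1-TeV beam on a stationary proton thus yields only about 43 GeV of usable collision energy; two 1-TeV beams colliding head-on yield 2,000 GeV.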

  In the ring colliders, one beam of particles is injected in one direction and the other beam is injected in the opposite direction. The same electric and magnetic fields can be used to accelerate, for example, electrons and positrons, which have opposite electric charges. The applied electric field accelerates electrons and positrons in opposite directions, and during this acceleration they bend in opposite directions in the same magnetic field. In ring colliders, one can use protons in both beams or protons and antiprotons. Recall that these particles can then be accelerated to much higher energies than the electrons and positrons.

  The most famous circular electron–positron collider was located at CERN during the 1990s and was called simply the large electron–positron collider (LEP). It reached a maximum energy of 209 GeV, which was the combined energy of the two beams at collision, each having an energy of 104.5 GeV. LEP was closed down in 2000 to make way for the world’s largest ring collider, the large hadron collider (or LHC). In a ring collider such as LEP, the beams of the particles and antiparticles are kept separate until they are made to cross at certain intersection points. The Intersecting Storage Rings (ISR) at CERN was also such an intersecting ring collider, and in both machines, ingenious technological inventions were required to produce sufficient luminosity, or beam intensity, at high enough energies.

  In quantum mechanics, particles can be described both as particles and as waves, whose physical behavior is determined by probability theory. That is, the probability of the particle being at a certain point in space and time can be calculated using quantum mechanics. Short-wavelength particles can be used to examine particle properties at extremely small distances, whereas with longer wavelength particles the experimental results become coarser. The wavelength of these particles is inversely proportional to their energy, which means that a proton, for example, can have either a short or a longer wavelength, depending on its energy: a short-wavelength particle has a higher energy than a longer wavelength particle.

  In quantum mechanics, the wavelength of the particle is inversely related to its energy or momentum. In atomic physics, this energy is in the range of thousands of electron volts (an atom has a scale of 10⁻⁸ cm). At the scale of an atomic nucleus, the energy is in the millions of electron volts. To study the physics of particles such as quarks inside atomic nuclei, we need energies exceeding billions of electron volts. We use accelerators and colliders to produce particles such as short-lived quarks that do not exist as stable particles on their own in a natural state. However, as we know, we still cannot “see” individual quarks in accelerators; we detect them only indirectly, as hadronizing jets in detectors.
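The energy scales quoted above follow from the de Broglie relation. For a highly relativistic particle, the reduced wavelength is approximately ħc divided by the energy; using the standard value ħc ≈ 197 MeV·fm (a convention of high-energy physics, not stated in the text), a short sketch gives the distance scale a probe of a given energy can resolve:

```python
HBAR_C_MEV_FM = 197.327  # hbar * c in MeV * femtometers (1 fm = 1e-13 cm)

def reduced_wavelength_fm(energy_mev):
    """Reduced de Broglie wavelength of a highly relativistic particle."""
    return HBAR_C_MEV_FM / energy_mev

# Roughly the scales described in the text:
print(reduced_wavelength_fm(200.0))   # ~1 fm: the size of a nucleon
print(reduced_wavelength_fm(2e5))     # ~1e-3 fm: deep inside the proton at 200 GeV
```

A probe of a few hundred MeV resolves the size of a proton; only beams of many GeV have wavelengths short enough to reveal the quarks inside it.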

  American physicist M. Stanley Livingston produced an impressive plot in 1954 showing how the laboratory energy of particle beams produced by accelerators has increased over time (Figure 2.6). This plot has been updated to account for modern accelerators and, remarkably, the energy has increased by factors of 10 every six to eight years. This amazing energy increase is the result of the ingenuity of physicists and engineers creating new technologies, which in turn allow for the construction of bigger machines.

  Figure 2.6 Updated Livingston plot showing the exponential increase in accelerator energies over time. When explaining the units, the Snowmass report stated: “Energy of colliders is plotted in terms of the laboratory energy of particles colliding with a proton at rest to reach the same center of mass energy.” This is why the collision energy at the LHC appears to be almost 100,000 TeV on the graph. Adapted from 2001 Snowmass Accelerator R&D report; graphic by symmetry magazine.

  Yet, increasing the energy of accelerators is not the only significant factor in building these machines. We need higher energies to reach beyond the horizon of known physics, but there are other important factors involved in our study of particle physics. For example, we need beams of colliding particles with a high intensity or luminosity—that is, with a large number of particles accelerated per second. When a particle hits a target, it produces particle reactions whose likelihood is measured by a cross-section. The cross-section is the effective area that a target nucleus presents to the incoming particle for a given reaction; the larger this area, the more often the reaction occurs. The important number for physicists is the rate of reactions actually produced. This rate is the product of the beam intensity, the density of target particles, the cross-section of the reaction being studied, and the length of target material that the colliding particle can penetrate.
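The product described above can be written out directly for a thin fixed target: the reaction rate is the beam flux times the target density times the cross-section times the target length. A sketch with illustrative numbers (the beam, target, and cross-section values here are assumptions for the example, not from the text):

```python
def event_rate(beam_per_sec, target_density_cm3, cross_section_cm2, length_cm):
    """Expected reactions per second in a thin fixed target:
    rate = flux * density * cross-section * length."""
    return beam_per_sec * target_density_cm3 * cross_section_cm2 * length_cm

# Illustrative: 1e12 protons/s hitting 10 cm of liquid hydrogen,
# for a reaction with a 40-millibarn cross-section (1 barn = 1e-24 cm^2).
rate = event_rate(
    beam_per_sec=1e12,
    target_density_cm3=4.2e22,   # hydrogen nuclei per cm^3
    cross_section_cm2=40e-27,    # 40 mb
    length_cm=10.0,
)
print(f"{rate:.2e} events per second")   # prints "1.68e+10 events per second"
```

The same logic motivates the term luminosity for colliders: it is the factor that, multiplied by a reaction’s cross-section, gives the expected event rate.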

  Another important number is the so-called duty cycle of the machine, which is the fraction of time that the accelerator can actually be running during the course of a period of experimental investigation. Modern accelerators do not produce a continuous flow of particles because, for one thing, the electric power required would be huge and unsustainable. Thus, the accelerators run in pulses that go on and off; consequently, the circulating beams pulse on and off as well. If the duty cycle is too short, not enough collision events are accumulated for data analysis.

  A major problem in making sense of cross-section data is the background debris produced by collisions of particles, either in a collider or on a fixed target. This background determines the signal-to-noise ratio, in which the noise is the unwanted background of particle collisions that obscures the signal we are trying to isolate, such as the trace of a new particle. These backgrounds have to be modeled by large computer simulations and then subtracted from the data to produce a large enough signal-to-background ratio to be useful. The background is one of the most important and difficult obstacles to identifying new physics. As the energy and intensity of the particle beams in the accelerators increase and approach those of the LHC at its maximum collision energy of 14 TeV, the background noise increases enormously. Very sophisticated computer codes are required to simulate the backgrounds so they can be subtracted during the final analysis of the collisions.
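The logic of background subtraction can be shown with a deliberately simplified toy calculation (counting statistics only, not the LHC’s actual analysis chain): if simulation predicts B background events in some region and the experiment records N, the estimated signal is S = N − B, and its statistical weight is roughly S divided by the square root of B, the size of the background’s random fluctuation.

```python
import math

def signal_significance(observed, expected_background):
    """Toy estimate: the excess over the modeled background, and that
    excess in units of the background's Poisson fluctuation sqrt(B)."""
    signal = observed - expected_background
    return signal, signal / math.sqrt(expected_background)

# A 400-event excess over an expected background of 10,000 events:
s, sigma = signal_significance(10400, 10000)
print(s, round(sigma, 1))   # prints "400 4.0"
```

The same 400-event excess over a background of a million events would be well under one standard deviation, which is why growing backgrounds at higher energies make new signals so much harder to establish.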

  During the 1950s and 1960s, the designing and building of accelerators was a major undertaking around the developed world. Many new machines were added to the roster, producing higher and higher energy collisions of particles, and major new discoveries in particle physics were made. A big fixed-target machine was built at Brookhaven in 1952, with an energy of about 1 GeV. In 1957, the Russians built a machine with an energy of 7 GeV at Dubna, near Moscow. These were followed at CERN by the PS at 23 GeV and then the SPS at an energy of 400 GeV. The DESY accelerator, begun in Hamburg, Germany, in 1959/1960, accelerated electrons up to an energy of about 7 GeV and was instrumental in confirming QED. Important figures in machine building were Wolfgang Panofsky, who helped build SLAC, and Robert Wilson and John Adams. Wilson began building Fermilab in 1967, and Adams built both the PS and the SPS at CERN. Wilson was lab director at Fermilab for several years, and Adams was CERN’s director from 1971 to 1980. The SPS at CERN discovered the W and Z bosons in 1983. Simon van der Meer, who had invented a method for accumulating intense antiproton beams, which were made to counterrotate and collide with protons, and Carlo Rubbia, the head of the project, won the Nobel Prize in 1984 for this discovery. The Fermilab machine accelerated protons up to an energy of 500 GeV; eventually, using superconducting magnets, that machine, called the Tevatron, succeeded in doubling this energy to about 1 TeV per beam, or 2 TeV in total energy. The Tevatron had 900 superconducting magnets along the ring, with huge currents running through their magnetic coils; the currents could reach about 5,000 amp. It was important to keep these magnetic coils at extremely low temperatures to reduce their electrical resistance to zero. This was done by using liquid helium at about −270°C, only a few degrees above absolute zero. The Tevatron, which, among other things, discovered the top quark in 1995, was decommissioned in September 2011.

 
