The God Particle

by Leon Lederman


  I worried a lot about the process of research when I was director of Fermilab. How could students and young postdocs experience the joy, the learning, the exercise of creativity experienced by Rutherford's students, by the founders of quantum theory, by my own small group of colleagues as we sweated out the problems on the floor of the Nevis cyclotron? But the more I looked into what was happening at the lab, the better I felt. The nights I visited the CDF (when old Democritus wasn't there), I found students enormously excited as they ran their experiments. On a giant screen events were flashing, reconstructed by the computer to make sense to the dozen or so physicists on shift. Occasionally, an event would be so suggestive of "new physics" that an audible gasp would be heard.

  Each large research collaboration consists of many groups of five or ten people: a professor or two, several postdocs, and several graduate students. The professor looks after his brood, making sure they are not lost in the crowd. Early on they are wrapped up in the design, building, and testing of equipment. Later on comes the data analysis. There is so much data in one of these collider experiments that much of it must wait for some group to complete one analysis before getting around to tackling the next problem. The individual young scientist, perhaps advised by her professor, selects a specific problem that receives the consensual agreement of the council of group leaders. And problems abound. For example, when W+ and W− particles are produced in proton-antiproton collisions, what is the precise form of the process? How much energy do the Ws take away? At what angles are they emitted? And so on. This could be an interesting detail, or it could be a clue to a crucial mechanism in the strong and weak forces. The most exciting task for the 1990s is to find the top quark and measure its properties. Up to mid-1992 this search was carried out by four subgroups of the CDF collaboration at Fermilab doing four independent analyses.

  Here the young physicists are on their own, fighting complex computer programs and the inevitable distortions introduced by an imperfect apparatus. Their problem is to extract a valid conclusion about how nature works, to establish one more piece of the jigsaw puzzle of the microworld. They have the benefit of a huge support group: experts in software, in theoretical analysis, in the art of seeking confirming evidence for tentative conclusions. If there is an interesting glitch in the way W's are thrown out of collisions, is it an artifact of the apparatus (metaphorically, a small crack in the microscope lens)? Is it a bug in the software? Or is it real? And if it is real, wouldn't colleague Harry see a similar effect in his analysis of Z particles—or perhaps Marjorie in her analysis of recoil jets?

  Big Science is not the sole province of particle physicists. Astronomers share giant telescopes, pooling their observations in order to draw valid conclusions about the cosmos. Oceanographers share research ships elaborately equipped with sonar, diving vessels, and special cameras. Genome research is the microbiologists' Big Science program. Even chemists require mass spectrometers, expensive dye lasers, and huge computers. Inevitably, in one discipline after another, scientists are sharing the expensive facilities that are necessary to make progress.

  Having said all this, I must emphasize that it is also extremely important for young scientists to be able to work in more traditional modes, clustered around a tabletop experiment with their peers and a professor. There they have the splendid option of pulling a switch, turning out the lights, and going home to think, perchance to sleep. "Small science" has also been a source of discovery, variety, and innovation, which contribute enormously to the advancement of knowledge. We must strike the proper balance in our science policy and be prayerfully grateful that both options exist. As for high-energy practitioners, one can tsk, tsk, and wish for the good old days when the lonely scientist sat in his folksy laboratory, mixing colorful elixirs. It's a charming vision, but it will never get us to the God Particle.

  BACK TO THE MACHINES: THREE TECHNICAL BREAKTHROUGHS

  Of the many technical breakthroughs that permitted acceleration to essentially unlimited energy (unlimited, that is, except by budgets) we'll look at three up close.

  The first was the concept of phase stability, discovered by V. I. Veksler, a Soviet genius, and independently and simultaneously by Edwin McMillan, a Berkeley physicist. Our ubiquitous Norwegian engineer Rolf Wideröe independently patented the idea. Phase stability is important enough to call in a metaphor. Think of two identical hemispherical bowls with very small flat bottoms. Turn one bowl upside down, and place a ball on the small flat bottom, which is now the top. Place a second ball at the bottom of the noninverted bowl. Both balls are at rest. Are both stable? No. The test is to give each ball a nudge. Ball No. 1 rolls down the outside of the bowl, changing its condition radically. That's unstable. Ball No. 2 rolls up the side a bit, returns to the bottom, overshoots, and oscillates around its equilibrium position. That's stable.

  The mathematics of particles in accelerators has much in common with the two conditions. If a small disturbance—for example, a particle's gentle collision with a residual gas atom or with a fellow accelerated particle—results in large changes in motion, there is no basic stability, and sooner or later the particle will be lost. On the other hand, if these perturbations result in small oscillatory excursions around the ideal orbit, we have stability.

  Progress in the design of accelerators was an exquisite mixture of analytic (now highly computerized) study and the invention of ingenious devices, many of them building on the radar technology developed during World War II. The concept of phase stability was implemented in a variety of machines by applying radio frequency (rf) electrical forces. Phase stability in an accelerator happens when we organize the accelerating radio frequency so that a particle arrives at a gap at slightly the wrong time, resulting in a slight change in the particle's trajectory; the next time the particle hits the gap, the error is corrected. An example was given earlier with the synchrotron. What actually happens is that the error is overcorrected, and the particle's phase, relative to the radio frequency, oscillates around an ideal phase in which good acceleration is achieved, like a ball at the bottom of the bowl.
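
  In symbols, the mechanism looks like this (a minimal sketch; the notation is mine, not the book's). Let the gap voltage swing as V_0 sin φ, and let the ideal, or synchronous, particle cross the gap at phase φ_s, gaining energy eV_0 sin φ_s on every turn. A particle arriving slightly off, at phase φ_s + Δφ, gains a slightly different energy,

  \Delta E \;\approx\; e V_0 \cos\varphi_s \,\Delta\varphi ,

  and, provided the working phase sits on the proper side of the voltage crest, that energy error changes the particle's revolution time in just the sense that pushes its phase back toward φ_s. For small errors the phase executes simple harmonic motion about φ_s, the so-called synchrotron oscillation, which is the algebra behind the ball rocking at the bottom of the bowl.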

  The second breakthrough occurred in 1952, when Brookhaven Laboratory was completing its 3 GeV Cosmotron accelerator. The accelerator group was expecting a visit from colleagues at the CERN lab in Geneva, where a 10 GeV machine was being designed. Three physicists preparing for the meeting made an important discovery. Stanley Livingston (a student of Lawrence's), Ernest Courant, and Hartland Snyder were a new breed of cat: accelerator theorists. They hit on a principle known as strong focusing. Before I describe this second breakthrough, I should make the point that particle accelerators had become a sophisticated and scholarly discipline. It pays to review the key ideas. We have a gap, or radio-frequency cavity, which is what gives the particle its increase in energy at each crossing. To use it over and over we guide the particles into an approximate circle, using magnets. The maximum energy of particles that can be achieved in an accelerator is determined by two factors: (1) the largest radius that the magnet can provide and (2) the strongest magnetic field possible at that radius. We can build higher-energy machines by making the radius bigger, by making the maximum magnetic field stronger, or by doing both.
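
  The two factors combine in a rule of thumb the text does not spell out: a particle of unit charge bending at radius r in a field B carries momentum p = eBr, which in convenient units reads

  p\,[\mathrm{GeV}/c] \;\approx\; 0.3 \times B\,[\mathrm{tesla}] \times r\,[\mathrm{meters}] .

  With illustrative numbers on roughly the Cosmotron's scale (not the machine's official parameters), B of about 1.4 tesla and r of about 9 meters give a momentum near 4 GeV/c, that is, a proton of a few GeV.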

  Once these parameters are set, giving the particles too much energy would drive them outside of the magnet. Cyclotrons in 1952 could accelerate particles to no more than 1,000 MeV. Synchrotrons provided magnetic fields to guide the particles at a fixed radius. Recall that the synchrotron magnet strength starts out very low (to match the low energy of the injected particles) at the beginning of the acceleration cycle and increases gradually to its maximum value. The machine is doughnut-shaped, and the radius of the doughnut in the various machines constructed during this era varied from 10 to 50 feet. The energies achieved were up to 10 GeV.

  The problem that occupied the clever theorists at Brookhaven was how to keep the particles tightly bunched and stable relative to an idealized particle moving without disturbances in magnetic fields of mathematical perfection. Since the transits are so long, extremely small disturbances and magnetic imperfections can drive the particle away from the ideal orbit. Soon we have no beam. So we must provide conditions for stable acceleration. The mathematics was complicated enough, one wag said, "to curl a rabbi's eyebrows."

  Strong focusing involves shaping the magnetic fields that guide the particles so that they are held much closer to an ideal orbit. The key idea is to machine the pole pieces into appropriate curves so that the magnetic forces on the particle generate rapid oscillations with tiny amplitudes around the ideal orbit. That is stability. Before strong focusing, the doughnut-shaped vacuum chambers had to be 20 to 40 inches wide, requiring magnet poles of similar sizes. The Brookhaven breakthrough permitted reduction in the size of the magnet's vacuum chamber to 3 to 5 inches. The result? A huge savings in cost per MeV of accelerated energy.

  Strong focusing changed the economics and, early on, made it thinkable to build a synchrotron with a radius of almost 200 feet. Later we'll talk about the other parameter: the strength of the magnetic field. As long as iron is used for guiding the particles, this is limited to 2 tesla, the strongest magnetic field that iron can support without turning purple. Breakthrough is a correct description of strong focusing. Its first application was a 1 GeV electron machine built by Robert Wilson the Quick at Cornell. Brookhaven's proposal to the AEC to build a strong-focusing proton machine was said to have been a two-page letter! (Here we can lament the growth of bureaucracy, but it would do no good.) This was approved, and the result was the 30 GeV machine known as AGS, completed at Brookhaven in 1960. CERN scrapped its plans for a 10 GeV weak-focusing machine and used the Brookhaven strong-focusing idea to build a 25 GeV strong-focusing accelerator for the same price. They turned it on in 1959.

  By the late 1960s, the idea of using tortured pole pieces to achieve strong focusing had given way to a separated-function concept. One installs a "perfect" dipole guide magnet and segregates the focusing function in quadrupole magnets symmetrically arrayed around the beam pipe.
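
  Why an alternating sequence of such lenses focuses at all can be seen in a thin-lens sketch (the symbols here are mine, not the book's). A quadrupole that focuses horizontally with focal length f necessarily defocuses vertically with focal length −f. Place a focusing and a defocusing quadrupole a distance d apart, and the pair acts like a single lens with

  \frac{1}{f_{\text{pair}}} \;=\; \frac{1}{f} + \frac{1}{-f} - \frac{d}{f\,(-f)} \;=\; \frac{d}{f^{2}} \;>\; 0 ,

  that is, net focusing, and by symmetry the same holds in the vertical plane. Stringing many such pairs around the ring is what keeps the beam oscillating gently about the ideal orbit in both directions at once.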

  Using mathematics, physicists learned how complex magnetic fields direct and focus particles; magnets with larger numbers of north and south poles—sextupoles, octupoles, decapoles—became components of sophisticated accelerator systems designed to exercise precise control over the particle orbits. From the 1960s on, computers were more and more important in operating and controlling the currents, voltages, pressures, and temperatures in the machines. Strong-focusing magnets and computer automation made possible the remarkable machines that were built in the 1960s and '70s.

  The first GeV (billion-electron-volt) machine was the modestly named Cosmotron, which began operation at Brookhaven in 1952. Cornell followed with a 1.2 GeV machine. Here are the other stars of that era...

  ACCELERATOR   ENERGY      LOCATION             YEAR

  Bevatron      6 GeV       Berkeley             1954
  AGS           30 GeV      Brookhaven           1960
  ZGS           12.5 GeV    Argonne (Chicago)    1964
  The "200"     200 GeV     Fermilab             1972 (upgraded to 400 GeV in 1974)
  Tevatron      900 GeV     Fermilab             1983

  Elsewhere in the world there were the Saturne (France, 3 GeV), Nimrod (England, 10 GeV), Dubna (USSR, 10 GeV), KEK PS (Japan, 13 GeV), PS (CERN/Geneva, 25 GeV), Serpukhov (USSR, 70 GeV), SPS (CERN/Geneva, 400 GeV).

  The third breakthrough was cascade acceleration, a concept attributed to Cal Tech physicist Matt Sands. Sands decided that, when one is going for high energy, it is inefficient to do it all in one machine. He envisioned a sequence of different accelerators, each optimized for a particular energy interval, say 0 to 1 MeV, 1 to 100 MeV, and so on. The various stages can be compared to gears on a sports car, with each gear designed to raise the speed to the next level in the optimal manner. As the energy increases, the accelerated beam gets tighter. At the higher energy stages, the smaller transverse dimensions thus require smaller and cheaper magnets. The cascade idea has dominated all machines since the 1960s. Its highest exemplars are the Tevatron (five stages) and the Super Collider under construction in Texas (six stages).
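
  The remark that the beam gets tighter as it gains energy is standard accelerator lore, and a single relation (mine, not the book's) captures it: during acceleration the quantity that stays fixed is the so-called normalized emittance, so the beam's geometric spread shrinks as the momentum rises, roughly as

  \sigma \;\propto\; \frac{1}{\sqrt{\beta\gamma}} ,

  where βγ is the usual relativistic factor, essentially the momentum in units of mc. Double the momentum and the transverse size drops by about a factor of 1.4, which is why each later stage in the cascade can get away with a smaller, cheaper aperture.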

  IS BIGGER BETTER?

  A point that may have been lost in the preceding discussion of technical considerations is why it helps to make cyclotrons and synchrotrons big. Wideröe and Lawrence demonstrated that one doesn't have to produce enormous voltages, as earlier pioneers believed, to accelerate particles to high energies. One just sends the particles through a series of gaps, or designs a circular orbit so that one gap can be reused. Thus in circular machines there are but two parameters: magnet strength and the radius of the orbiting particles. Accelerator builders adjust these two factors to get the energy they want. The radius is limited by money, mostly. Magnet strength is limited by technology. If we can't boost the magnetic field, we make the circle bigger to increase the energy. In the Super Collider we know that we want to produce 20 TeV in each beam. And we know (or we think we know) how strong a magnet we can build. From that we can extrapolate how big around the tube must be: 53 miles.
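
  The arithmetic behind that extrapolation takes one line (the dipole field and the packing fraction below are my assumptions, not figures quoted in the text). Using the rule of thumb p ≈ 0.3 B r with p = 20,000 GeV/c and a superconducting dipole field of roughly 6.6 tesla, the bending radius comes out near

  r \;\approx\; \frac{20{,}000}{0.3 \times 6.6} \;\approx\; 10{,}000 \ \text{meters},

  or about 63 kilometers of circumference if the ring were nothing but dipoles. Since the dipoles must share the tunnel with focusing magnets, rf cavities, and straight sections, only something like three quarters of the circumference actually bends the beam, and the ring grows to the neighborhood of 85 to 90 kilometers, which is where the quoted 53 miles comes from.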

  A FOURTH BREAKTHROUGH: SUPERCONDUCTIVITY

  Back in 1911 a Dutch physicist discovered that certain metals, when cooled to extremely low temperatures—just a few degrees above absolute zero on the Kelvin scale (−273 degrees centigrade)—lose all their resistance to electricity. A loop of wire at that temperature would carry a current forever with no use of energy.

  In your house, electrical power is supplied via copper wires from the friendly power company. The wires get warm because of the frictional resistance they offer to the flow of current. This waste heat uses power and adds to your bill. In conventional electromagnets for motors, generators, and accelerators, copper wires carry currents that produce magnetic fields. In a motor the magnetic field turns bundles of current-carrying wires. Feel the warm motor. In an accelerator the magnetic field steers and focuses the particles. The magnet's copper wires get hot and are cooled by a powerful flow of water, usually through holes in the thick copper windings. To give you some idea of where the money goes, the 1975 electric bill for the Fermilab accelerator was about $15 million, some 90 percent of which was for the power used in running the magnets for the 400 GeV main ring.
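
  The physics behind that bill is nothing more exotic than resistive heating (the numbers here are illustrative, not Fermilab's actual figures). A winding of resistance R carrying current I dissipates power

  P \;=\; I^{2} R ,

  so even a tiny-looking resistance becomes expensive at magnet currents: five thousand amperes pushed through a hundredth of an ohm turns 250 kilowatts into heat in a single magnet, heat the cooling water must carry away and the power company must be paid for. Multiply by hundreds of magnets running around the clock and a $15 million electric bill stops being surprising.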

  Early in the 1960s a technical breakthrough took place. New alloys of exotic metals were able to maintain the fragile state of superconductivity while conducting huge currents and producing high magnetic fields. All of this at the more civilized temperatures of 5 to 10 degrees above absolute zero rather than the very difficult 1 to 2 degrees required for common metals. Helium is a true liquid at 5 degrees (everything else solidifies at this temperature), so the possibility of practical superconductivity emerged. Most of the large laboratories began working with wire made of such alloys as niobium-titanium or niobium-tin (Nb3Sn) in place of copper and surrounding the wires with liquid helium to cool them to superconducting temperatures.

  Large magnets using the new alloys were built for particle detectors—for example, to surround a bubble chamber—but not for accelerators, which required that magnetic fields increase in strength as the particles gain energy. The changing currents in the magnets generate frictional effects (eddy currents) that normally destroy the superconducting state. Much research was addressed to this problem in the 1960s and '70s, with Fermilab, under Robert Wilson, serving as a leader in the field. Wilson's team began R&D in superconducting magnets in 1973, shortly after the original "200" accelerator began operating. One motivation was the exploding costs of electrical power due to the oil crisis of that era. The other was competition from the European consortium, CERN, based in Geneva.

  The 1970s were lean years for research funds in the United States. After World War II the world leadership in research had been solidly in this country, as the rest of the world labored to rebuild war-shattered economies and scientific infrastructures. By the late 1970s, balance had begun to be restored. The Europeans were building a 400 GeV machine, the Super Proton Synchrotron (SPS), which was better funded and better supplied with the expensive detectors that determine the quality of the research. (This machine marked the beginning of another cycle in international collaboration and competition. In the 1990s Europe and Japan remain ahead of the United States in some research fields and not far behind in most others.)

  Wilson's idea was that if one could solve the problem of varying magnetic fields, a superconducting ring would save an enormous amount of electrical power while producing more powerful magnetic fields, which for a given radius would translate to higher energy. Aided by Alvin Tollestrup, a Cal Tech professor spending a sabbatical year at Fermilab (he eventually extended this to permanence), Wilson studied in great detail how changing currents and fields create local heating. Research going on in other labs, especially the Rutherford Lab in England, helped the Fermilab group build hundreds of models. They worked with metallurgists and materials scientists and, between 1973 and 1977, succeeded in solving the problem. One could ramp the model magnets from zero current to 5,000 amperes in 10 seconds, and the superconductivity persisted. In 1978–79 a production line began producing twenty-one-foot magnets with excellent properties, and in 1983 the Tevatron began operating as a superconducting "afterburner" at the Fermilab complex. The energy went from 400 GeV to 900 GeV, and the power consumption was reduced from 60 megawatts to 20 megawatts, with most of that used to produce liquid helium.

  When Wilson began his R&D program in 1973, the annual production of superconducting material in the United States was a few hundred pounds. Fermilab's consumption of 125,000 pounds of superconducting material stimulated producers and radically changed the posture of the industry. Today the biggest customers are firms that make magnetic resonance imaging (MRI) devices, for medical diagnosis. Fermilab can take a modicum of credit for this $500-million-a-year industry.

 
