The Higgs Boson: Searching for the God Particle


by Scientific American Editors


  * * *

  LARGE ELECTRON-POSITRON COLLIDER creates Z bosons by bringing electrons and positrons into collision in a storage ring 27 kilometers in circumference. The particles countercirculate in bunches. Magnets confine the two beams to their proper orbits, and radio-frequency power accelerates them to a combined energy near 90 billion electron volts, equivalent to the Z mass. The bunches meet head-on 45,000 times a second at points inside the Aleph, Opal, Delphi and L3 detectors.

  Illustration by Ian Warpole

  * * *

  Researchers at CERN attacked the problem by developing the Large Electron-Positron (LEP) Collider, a traditional storage-ring design built on an unprecedented scale. The ring, which measures 27 kilometers in circumference, is buried between 50 and 150 meters under the plain that stretches from Geneva to the French part of the Jura Mountains. Resonance cavities accelerate the two beams with radio-frequency power. The beams move in opposite directions through a roughly circular tube. Electromagnets bend the beams around every curve and direct them to collisions in four areas, each of which is provided with a large detector.

  The ring design has the advantage of storing the particles indefinitely, so that they can continue to circulate and collide. It has the disadvantage of draining the beams of energy in the form of synchrotron radiation, an emission made by any charged particle that is diverted by a magnetic field. Such losses, which at these energies appear as X rays, increase as the fourth power of the beam's energy and are inversely proportional to the ring's radius. Designers can therefore increase the power of their beams by either pouring in more energy or building larger rings, or both. If optimal use is made of resources, the cost of such storage rings scales as the square of beam energy. The LEP is thought to approach the practical economic limit for accelerators of this kind.
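  A rough numerical sketch of that scaling, using the textbook formula for the energy an electron radiates per turn and an assumed LEP-like bending radius of about 3,100 meters (a figure not quoted in the text), might look like this:

```python
# Sketch of the synchrotron-radiation scaling described above.
# Assumed: the standard formula for the energy lost per turn by one electron,
#   U0 [keV] ~= 88.5 * E^4 [GeV^4] / rho [m],
# and an illustrative bending radius of ~3,100 m (not a figure from the article).

def energy_loss_per_turn_keV(beam_energy_GeV, bending_radius_m):
    """Energy radiated per turn: fourth power of the beam energy,
    inversely proportional to the bending radius."""
    return 88.5 * beam_energy_GeV**4 / bending_radius_m

for E in (20.0, 45.0, 90.0):   # doubling the energy multiplies the loss sixteenfold
    loss_keV = energy_loss_per_turn_keV(E, 3100.0)
    print(f"E = {E:5.1f} GeV  ->  ~{loss_keV / 1e3:8.1f} MeV lost per turn")
```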

  At Stanford, the problem of making electrons and positrons collide at high energy was attacked in a novel way in the Stanford Linear Collider (SLC). The electrons and positrons are accelerated in a three-kilometer-long linear accelerator, which had been built for other purposes. They are sent into arcs a kilometer long, brought into collision and then dumped. The electrons and positrons each lose about 2 percent of their energy because of synchrotron radiation in the arcs, but this loss is tolerable because the particles are not recirculated. A single detector is placed at the point of collision.

  The LEP is an efficient device: as the electron and positron beams recirculate, about 45,000 bunch crossings per second occur. The SLC beams cross, at most, only 120 times per second. The SLC must therefore compensate by making each crossing more efficient. This task can be accomplished by reducing the beams' cross section to an extremely small area. The smaller the cross section becomes, the more likely it is that an electron will collide head-on with a positron. The SLC has produced beam diameters of four-millionths of a meter, about one fifth the thickness of a human hair.
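  As a back-of-the-envelope comparison (assuming, purely for illustration, equal bunch populations at the two machines), the rarer crossings at the SLC must be offset by a proportionally smaller beam spot:

```python
# Hedged comparison of the two machines' crossing rates.  The equal-bunch-
# population assumption is an illustration, not a figure from the article.

lep_crossings_per_s = 45_000
slc_crossings_per_s = 120

# Collision rate scales roughly as (crossings per second) / (beam overlap area),
# so to match the LEP the SLC needs an overlap area smaller by this factor:
area_shrink_factor = lep_crossings_per_s / slc_crossings_per_s
print(f"SLC beam area must be ~{area_shrink_factor:.0f} times smaller, all else equal")
```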

  One of the main justifications for building the SLC was that it would serve as a prototype for this new kind of collider. Indeed, the SLC has shown that useful numbers of collisions are obtainable in linear colliders, and it has thus encouraged developmental research in this direction, both at SLAC and at CERN. The present Z production rates at the SLC are, however, still more than 100 times smaller than those at the LEP.

  Large teams of physicists analyze the collision products in big detectors. The SLC's detector is called Mark II, and the LEP's four detectors are called Aleph, Opal, Delphi and L3. The SLAC team numbers about 150 physicists; each of the CERN teams numbers about 400 people, drawn from research institutes and universities of two dozen countries.

  The function of a detector is to measure the energies and directions of as many as possible of the particles constituting a collision event and to identify their nature, particularly that of the charged leptons. Detectors are made in onionlike layers, with tracking devices on the inside and calorimeters on the outside. Tracking devices measure the angles and momenta of charged particles. The trajectories are located by means of the ionization trails the collision products leave behind in a suitable gas. Other media, such as semiconductor detectors and light-emitting plastic fibers, are also used.

  The tracking devices are generally placed in strong magnetic fields that bend the particles' trajectories inversely with respect to their momenta. Measurement of the curves yields the momenta, which in turn provide close estimates of the energy. (At the energies encountered in these experiments, the energy and the momentum of a particle differ very little.)
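  A minimal sketch of that curvature-to-momentum conversion, assuming the standard rule of thumb for a singly charged particle and the 15,000-gauss (1.5-tesla) field of the Aleph solenoid mentioned below; the radius used is purely illustrative:

```python
# Sketch of how a tracking detector turns curvature into momentum, using the
# standard relation p [GeV/c] ~= 0.3 * B [tesla] * r [m] for unit charge
# (strictly, the momentum component transverse to the field).  The 1.5 T field
# matches the 15,000-gauss Aleph solenoid; the radius is illustrative.

def momentum_GeV(radius_of_curvature_m, field_tesla=1.5):
    return 0.3 * field_tesla * radius_of_curvature_m

r = 100.0   # a track bent on a 100-meter radius is nearly straight inside the detector
print(f"p ~ {momentum_GeV(r):.0f} GeV/c")   # ~45 GeV/c, half the combined beam energy
```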

  Calorimeters measure the energies of both neutral and charged particles by dissipating these energies in successive secondary interactions in some dense medium. This energy is then sampled in a suitable way and localized as precisely as the granularity of the calorimeter allows. Calorimeters perform their function in a number of ways. The most common method uses sandwiches of thin sheets of dense matter, such as lead, uranium or iron, which are separated by layers of track-sensitive material.

  Particles leave their mark in such materials by knocking electrons from their atoms. Argon, either in liquid form or as a gas combined with organic gases, is the usual medium. Plastic scintillators work differently: when a reaction particle traverses them, it produces a flash of light whose intensity is then measured. The calorimeter usually has two layers, an inner one optimized for the measurement of electrons and photons and an outer one optimized for hadrons.

  To gather all the reaction products, the ideal detector would cover the entire solid angle surrounding the interaction point. Such detectors were pioneered in the 1970s at SLAC. In the LEP's Aleph detector the tracking of the products from the annihilation of a positron and an electron proceeds in steps.

  A silicon-strip device adjoining the reaction site fixes the forward end point of each trajectory to within ten-millionths of a meter (about half the breadth of a human hair). Eight layers of detection wires then track the trajectory through an inner chamber 60 centimeters in diameter. Finally, a so-called time-projection chamber, 3.6 meters in diameter, uses a strong electric field to collect electrons knocked from gas molecules by the traversing particles. The field causes the electrons to drift to the cylindrical chamber's two ends, where they are amplified and detected on 50,000 small pads. Each electron's point of origin is inferred from the place it occupies on the pads and the time it takes to get there.
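  The time-projection step amounts to a one-line calculation; the drift velocity used here (roughly five centimeters per microsecond, typical of argon-based gas mixtures) is an assumption, not a figure from the article:

```python
# Sketch of the time-projection idea: the position along the chamber axis is
# inferred from how long the ionization electrons take to drift to the end pads.
# The drift velocity below is an assumed, typical value, not one from the text.

DRIFT_VELOCITY_CM_PER_US = 5.0

def axial_position_cm(drift_time_us):
    """Distance from the end pads to the point where the ionization was produced."""
    return DRIFT_VELOCITY_CM_PER_US * drift_time_us

# The pad that fires gives the transverse position; the drift time gives the depth:
print(f"depth ~ {axial_position_cm(20.0):.0f} cm for a 20-microsecond drift")
```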

  The next step outward brings the reaction products to the electron-photon calorimeter. The products traverse the superconducting coil, which creates a 15,000-gauss magnetic field at the axis of the device, and then enter the hadron calorimeter. This device, a series of iron plates separated by gas counters, also returns the magnetic flux, just as an iron core does in a conventional electromagnet. Aleph weighs 4,000 tons and cost about $60 million to build. Half a million channels of information must be read for each event, and the computer support necessary for the acquisition and later evaluation of the data is considerable.

  The data gathered in the first few months of operation of the two colliders have provided the best support yet adduced for the predictions of the electroweak theory. More important, they have delineated the curve describing the Z width with great precision.

  The overwhelming majority of observed electron-positron annihilations give rise to four sets of products: 88 percent produce a quark and an antiquark; the remaining 12 percent are divided equally among the production of a tau lepton and antitau lepton, muon and antimuon, and electron and positron. (The last case simply reverses the initial annihilation.)

  In the decays into electrons and muons, two tracks are seen back to back, with momenta (and energies) corresponding to half of the combined beam energy. The two products are easily distinguished by their distinct behavior in the calorimeters. The decays to tau leptons are more complex because the taus subsist for a mere instant, during which they travel about a millimeter, before decaying into tertiary particles that alone can be observed. A tau lepton leaves either closely packed tracks or just one track; in both cases, the signature is mirrored by that of another tau lepton moving in the opposite direction (thus conserving momentum).

  The quarks that account for most reactions cannot be seen in their free, or "naked," state, because at birth they undergo a process called hadronization. Each quark "clothes" itself in a jet of hadrons, numbering 15 on average, two thirds of which are charged. This, the most complex of the four main decay events, usually manifests itself as back-to-back jets, each containing many tracks. The results described here are based on the analysis of about 80,000 Z decays into quarks, the combined result of the four LEP teams and the one SLAC team.

  The Z production curve is determined in an energy scan. Production probability is measured at a number of energies: at the peak energy, as well as above and below it. A precise knowledge of the beam energy is of great importance here. It was obtained at the two colliders very differently, in both cases with a good deal of ingenuity and with a precision of three parts in 10,000.

  As was pointed out earlier, the total width of the Z resonance can be determined from either the height at the peak energy or the width of the resonance curve. The height has the smaller statistical error but requires knowledge not only of the rate at which events occur but also of the rate at which particles from the two beams cross. The latter rate is called the luminosity of the collider.

  In the simple case of two perfectly aligned beams of identical shape and size, the luminosity equals the product of the number of electrons and the number of positrons in each crossing bunch, multiplied by the number of bunches crossing each second, divided by the cross-sectional area of the beams. In practice, luminosity is determined only by observing the rate of the one process that is known with precision: the scattering of electrons and positrons that glance off one another at very small angles without combining or otherwise changing state. To record such so-called elastic collisions, two special detectors are placed in small angular regions just off the axis of the beam pipe. One of the detectors is in front of the collision area; the other is behind it. In the case of Aleph, these detectors are electron-photon calorimeters of high granularity.
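  Translating that definition directly into a short calculation, with order-of-magnitude numbers that are assumptions rather than figures from the text:

```python
# The luminosity formula stated in the paragraph above, for two perfectly
# aligned, identical beams.  All the numerical inputs below are assumed,
# order-of-magnitude values, not figures quoted in the article.

def luminosity_per_cm2_per_s(n_electrons, n_positrons, crossings_per_second, beam_area_cm2):
    """L = N_e- * N_e+ * (bunch crossings per second) / (beam cross-sectional area)."""
    return n_electrons * n_positrons * crossings_per_second / beam_area_cm2

L = luminosity_per_cm2_per_s(
    n_electrons=1e11,             # particles per bunch (assumed)
    n_positrons=1e11,
    crossings_per_second=45_000,  # the LEP bunch-crossing rate quoted earlier
    beam_area_cm2=2.5e-4,         # effective beam-overlap area (assumed)
)
print(f"L ~ {L:.1e} per square centimeter per second")
```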

  The elastically scattered electrons and positrons are identified by the characteristic pattern in which they deposit energy in the detectors and by the way they strike the two detectors back to back, producing a perfectly aligned path. The essence here is to understand precisely the way in which particles are registered, especially in those parts of the detectors that correspond to exceedingly small scattering angles. This is important because the detection rate is extremely sensitive to changes in the angle.

  When the resulting data are fitted to the theoretical resonance shape, three parameters are considered: the height at the peak, the total width and the Z mass. The data, in fact, agree well with the shape of the theoretically expected distribution. The next step, then, is to determine the number of neutrino families from two independent parameters: the width and the peak height.
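  A minimal sketch of that three-parameter shape, using a simple Breit-Wigner curve rather than the fully corrected distribution the teams actually fit, with illustrative values near the known Z mass and width:

```python
# Simplified three-parameter resonance shape (peak height, total width, mass).
# A plain Breit-Wigner curve stands in for the radiatively corrected shape
# used in the real fits; the scan energies and parameter values are illustrative.

def breit_wigner(energy_GeV, peak_height, total_width_GeV, mass_GeV):
    """Maximal at the mass; falls to half the peak height when the energy
    is half the total width away from the mass."""
    half_width = total_width_GeV / 2.0
    return peak_height * half_width**2 / ((energy_GeV - mass_GeV)**2 + half_width**2)

for E in (88.0, 90.0, 91.2, 92.0, 94.0):   # an energy scan around the peak
    rate = breit_wigner(E, peak_height=30.0, total_width_GeV=2.5, mass_GeV=91.2)
    print(f"{E:5.1f} GeV  ->  {rate:5.1f} (arbitrary units)")
```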

  * * *

  RESONANCE CURVES predicted for the Z particle vary according to the number of families of matter. Thousands of Z decays into quarks, observed at CERN, appear as points. The measurements agree with the expectation for three families of matter.

  Illustration by Ian Warpole

  * * *

  The combined results of the five teams produced an average estimate of 3.09 neutrino varieties, with an experimental uncertainty of 0.09. This number closely approaches an integer, as it should, and matches the number of neutrino varieties that are already known. A fourth neutrino could exist without contradicting these findings only if its mass exceeded 40 billion eV, a most unlikely possibility, given the immeasurably small masses of the three known neutrinos.
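  The logic of the count can be sketched with round Standard Model partial widths; the numbers below are assumptions for illustration, not the measured values that produced the 3.09 result:

```python
# Back-of-the-envelope neutrino count.  The partial widths (in millions of
# electron volts) are round, assumed values, not the teams' measurements.

total_width    = 2490.0   # full width of the Z resonance
hadronic_width = 1740.0   # Z decaying to a quark-antiquark pair
leptonic_width = 84.0     # Z decaying to one charged-lepton pair (three such families)
per_neutrino   = 167.0    # invisible width contributed by one light neutrino family

invisible = total_width - hadronic_width - 3 * leptonic_width
print(f"invisible width ~ {invisible:.0f} MeV")
print(f"implied number of light neutrino families ~ {invisible / per_neutrino:.2f}")
```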

  The Z result fits the cosmological evidence gathered by those who study matter on galactic and supergalactic scales. Astronomers have measured the ratio of hydrogen to helium and other light elements in the universe. Cosmologists and astrophysicists have tried to infer the processes by which these relative abundances came about.

  Shortly after the big bang, the cataclysmic explosion that created the universe and began its expansion, matter was so hot that a neutron was as likely to decay into a proton-electron pair as the latter was to combine to form a neutron. Consequently, as many neutrons as protons existed. But as the universe expanded and cooled, the slightly heavier neutrons changed into protons more readily than protons changed into neutrons. The neutron-proton ratio therefore fell steadily.

  When the expansion brought the temperature of the universe below one billion kelvins, protons and neutrons were for the first time able to fuse, thereby forming some of the lighter elements, mainly helium. The resulting abundances depend critically on the ratio of neutrons to protons at the time light elements were forming. This ratio, in turn, depends on the rate at which the universe expanded and cooled. At this stage, each light neutrino family-that is, any whose constituents have a mass smaller than about a million eV-contributes appreciably to the energy density and cooling rate. The measured abundances of light elements are consistent with cosmological models that assume the existence of three light neutrino families but tend to disfavor those that assume four or more.
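  A rough sketch of that neutron-to-proton argument, with assumed round numbers (the 1.29-MeV neutron-proton mass difference, a freeze-out temperature near 0.75 MeV and a few minutes of free-neutron decay before fusion), none of which are quoted in the article:

```python
import math

# Boltzmann factor when neutron-proton interconversion effectively stops
# (the freeze-out temperature is an assumed value):
n_over_p_freeze = math.exp(-1.29 / 0.75)            # roughly one neutron per five protons

# Free neutrons decay during the few minutes before fusion begins
# (assumed ~200 s of delay against an ~880 s neutron lifetime):
n_over_p_fusion = n_over_p_freeze * math.exp(-200.0 / 880.0)

# Essentially every surviving neutron ends up bound in helium-4:
helium_mass_fraction = 2 * n_over_p_fusion / (1 + n_over_p_fusion)
print(f"n/p at fusion ~ {n_over_p_fusion:.2f}, helium mass fraction ~ {helium_mass_fraction:.2f}")
```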

  Many questions remain unanswered. Why are there just three families of particles? What law determines the masses of their members, decreeing that they shall span 10 powers of 10? These problems lie at the center of particle physics today. They have been brought one step closer to solution by the numbering of the families of matter.

  -Originally published: Scientific American 264(2), 70-75. (February 1991)

  The Structure of Quarks and Leptons

  By Haim Harari

  In the past 100 years the search for the ultimate constituents of matter has penetrated four layers of structure. All matter has been shown to consist of atoms. The atom itself has been found to have a dense nucleus surrounded by a cloud of electrons. The nucleus in turn has been broken down into its component protons and neutrons. More recently it has become apparent that the proton and the neutron are also composite particles; they are made up of the smaller entities called quarks. What comes next? It is entirely possible that the progression of orbs within orbs has at last reached an end and that quarks cannot be more finely divided. The leptons, the class of particles that includes the electron, could also be elementary and indivisible. Some physicists, however, are not at all sure the innermost kernel of matter has been exposed. They have begun to wonder whether the quarks and leptons too might not have some internal composition.

  * * *

  HIERARCHY OF PARTICLES in the structure of matter currently has four levels. All matter is made up of atoms; the atom consists of a nucleus surrounded by electrons; the nucleus is composed of protons and neutrons; each proton and neutron is thought to be composed of three quarks. Recent speculations might add a fifth level: the quark might be a composite of hypothetical finer constituents, which can be generically called prequarks. The leptons, the class of particles that includes the electron, could also consist of prequarks.

  Illustration by Jerome Kuhl

  * * *

  The main impetus for considering still another layer of structure is the conviction (or perhaps prejudice) that there should be only a few fundamental building blocks of matter. Economy of means has long been a guiding principle of physics, and it has served well up to now. The list of the basic constituents of matter first grew implausibly long toward the end of the 19th century, when the number of chemical elements, and hence the number of species of atoms, was approaching 100. The resolution of atomic structure solved the problem, and in about 1935 the number of elementary particles stood at four: the proton, the neutron, the electron and the neutrino. This parsimonious view of the world was spoiled in the 1950's and 1960's; it turned out that the proton and the neutron are representatives of a very large family of particles, the family now called hadrons. By the mid-1960's the number of fundamental forms of matter was again roughly 100. This time it was the quark model that brought relief. In the initial formulation of the model all hadrons could be explained as combinations of just three kinds of quarks.

  Now it is the quarks and leptons themselves whose proliferation is beginning to stir interest in the possibility of a simpler scheme. Whereas the original model had three quarks, there are now thought to be at least 18, as well as six leptons and a dozen other particles that act as carriers of forces. Three dozen basic units of matter are too many for the taste of some physicists, and there is no assurance that more quarks and leptons will not be discovered. Postulating a still deeper level of organization is perhaps the most straightforward way to reduce the roster. All the quarks and leptons would then be composite objects, just as atoms and hadrons are, and would owe their variety to the number of ways a few smaller constituents can be brought together. The currently observed diversity of nature would be not intrinsic but combinatorial.

  It should be emphasized that as yet there is no evidence quarks and leptons have an internal structure of any kind. In the case of the leptons, experiments have probed to within 10⁻¹⁶ centimeter and found nothing to contradict the assumption that leptons are pointlike and structureless. As for the quarks, it has not been possible to examine a quark in isolation, much less to discern any possible internal features. Even as a strictly theoretical conception, the subparticle idea has run into difficulty: no one has been able to devise a consistent description of how the subparticles might move inside a quark or a lepton and how they might interact with one another. They would have to be almost unimaginably small: if an atom were magnified to the size of the earth, its innermost constituents could be no larger than a grapefruit. Nevertheless, models of quark and lepton substructure make a powerful appeal to the aesthetic sense and to the imagination: they suggest a way of building a complex world out of a few simple parts.

 
