Figure 7.1 Branching ratio magnitudes for possible decay channels of the Higgs boson.
SOURCE: © CERN.
There are nine observable decay channels for a Higgs boson with a mass of 125 GeV. The dominant decay channel is into bottom and antibottom quarks (b-bar-b); the Higgs decays this way about 60 percent of the time. The next most dominant decay channel is the decay of the Higgs into two W bosons, followed by two gluons. The smallest decay channel is the Higgs decaying into two photons, which happens only about 0.3 percent of the time. In most of these decay channels, there is a very large hadronic, strong-interaction background, so the signal-to-noise ratio is very small, making it difficult to detect the Higgs boson. However, the two-photon and ZZ decay channels are virtually free of this hadronic background, which is why they are called the golden channels.
A “branching ratio” for a decay channel is the rate of decay of the boson into its lighter decay products divided by the sum of all the decay channel rates. Figure 7.1 shows graphically the magnitudes of the branching ratios for different decay channels (y-axis) for possible masses of the Higgs boson (x-axis).
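In symbols, writing Γi for the decay rate into channel i, the definition just given reads (a standard formula, consistent with the description above):

\[
\mathrm{BR}_i \;=\; \frac{\Gamma_i}{\sum_j \Gamma_j} \;=\; \frac{\Gamma_i}{\Gamma_{\mathrm{total}}},
\]

so that the branching ratios of all channels add up to 1. For example, BR(H → b-bar-b) ≈ 0.6 corresponds to the roughly 60 percent of decays into bottom and antibottom quarks mentioned above.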
From the LHC experimental results up to March 2012, both the CMS and ATLAS detector groups claimed that there were hints of a Higgs particle at 125 GeV in the channel of the Higgs decaying into two photons. At that time, the particle physics community eagerly awaited the results of the LHC startup in April 2012 either to reinforce the hints of a 125-GeV Higgs boson or to exclude it. Then, on July 4, 2012, the 125-GeV bump was statistically vindicated with the CERN announcement of the discovery of a “new boson.” The experimentalists are now working on determining whether this new boson is, in fact, a standard-model elementary Higgs boson or some other kind of particle. This entails determining experimentally the spin and the parity of the newly discovered boson: Is it a scalar or a pseudoscalar particle? The scalar or pseudoscalar property of the particle is determined by whether the particle’s field changes sign under a parity inversion. That is, when the coordinates x, y, and z are turned into −x, −y, and −z, the scalar field does not change sign, whereas the pseudoscalar field does. It also remains to be determined into what particles the new boson actually decays, and whether the measured rates agree with the predicted standard-model branching ratios.
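In symbols, the distinction can be written as a transformation rule under the parity operation P (a standard textbook convention, not a formula specific to this book):

\[
P:\ (x,\,y,\,z)\ \to\ (-x,\,-y,\,-z), \qquad
\phi_{\mathrm{scalar}}(t,\mathbf{x}) \to +\,\phi_{\mathrm{scalar}}(t,-\mathbf{x}), \qquad
\phi_{\mathrm{pseudoscalar}}(t,\mathbf{x}) \to -\,\phi_{\mathrm{pseudoscalar}}(t,-\mathbf{x}).
\]

Measuring which sign the new boson carries is one way of deciding whether it is the standard-model Higgs (a scalar) or something else.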
The problem with the Higgs-into-two-photons decay channel is that there is a large photon background (as opposed to hadronic background), and the standard-model predicted width of the resonance at 125 GeV representing the decay of a light Higgs boson into two photons is only about 30 keV. This small signal has to be extracted from the background. Indeed, because the experimental resolution of the resonance is currently about 10⁹ electron volts, or 1 GeV, it may never be possible actually to measure this diphoton decay width of the resonance. However, the very sensitive electromagnetic calorimeter detectors at ATLAS and CMS can still measure the total number of events of all particles decaying into two photons, including those from the purported Higgs boson. Experimentalists are then able to extract the Higgs boson decay signal from the total number of events.
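To make the “counting events on top of a background” idea concrete, here is a minimal toy sketch in Python. Everything in it is invented for illustration: the exponential background shape, the event yields, and the assumed 1.5-GeV detector resolution are stand-ins, not the real ATLAS or CMS analysis. The point it demonstrates is that the intrinsic ~30-keV width is completely washed out by the detector resolution, yet the excess of diphoton events near 125 GeV can still be counted against a fit to the smooth background.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# ---- Toy "data": smoothly falling background plus a narrow signal at 125 GeV ----
m_lo, m_hi = 105.0, 155.0
n_bkg, n_sig = 100_000, 500
m_higgs = 125.0
resolution = 1.5   # assumed detector resolution (GeV); the ~30 keV intrinsic
                   # width is negligible next to this, so it is ignored here

bkg = rng.exponential(scale=35.0, size=4 * n_bkg) + m_lo
bkg = bkg[bkg < m_hi][:n_bkg]
sig = rng.normal(loc=m_higgs, scale=resolution, size=n_sig)
data = np.concatenate([bkg, sig])

# ---- Histogram the diphoton invariant-mass spectrum ----
bins = np.linspace(m_lo, m_hi, 101)            # 0.5-GeV bins
counts, edges = np.histogram(data, bins=bins)
centers = 0.5 * (edges[:-1] + edges[1:])

# ---- Fit a falling exponential to the sidebands only (ignoring the peak region) ----
sideband = np.abs(centers - m_higgs) > 5.0

def model(m, a, b):
    return a * np.exp(-b * (m - m_lo))

popt, _ = curve_fit(model, centers[sideband], counts[sideband], p0=[counts[0], 0.03])

# ---- Compare the data with the background-only expectation near 125 GeV ----
window = np.abs(centers - m_higgs) < 2.0
expected = model(centers[window], *popt).sum()
observed = counts[window].sum()
excess = observed - expected
print(f"observed: {observed}, expected background: {expected:.0f}")
print(f"excess: {excess:.0f} events, rough significance: {excess / np.sqrt(expected):.1f} sigma")
```

Run as-is, the toy reports an excess of order a few hundred events on top of several thousand background events in the 4-GeV window, that is, a bump visible only statistically, not as a resolvable narrow line.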
We must also measure the more dominant decay channels, such as the decay of the Higgs particle into two W bosons and its dominant decay into bottom and antibottom quarks (b-bar-b). However, with these decay channels, new problems arise because of the large hadronic background. Many particles other than the Higgs boson decay into WW and b-bar-b pairs, making it difficult to avoid misidentifying the Higgs boson. In the case of the b-bar-b decay channel, the hadronic background is more than a million times bigger than the Higgs boson signal would be.
The other golden decay channel is the decay of the Higgs boson into two Z bosons, with masses of about 90 GeV each, which both then decay individually into electron–positron (e+, e−) or muon–antimuon (mu+, mu−) lepton pairs. The sensitive detectors must identify the final four leptons and measure the invariant masses of the individual lepton pairs, which should match the masses of the parent Z bosons. If the Higgs mass is below the threshold of two Z bosons at 180 GeV, then one or both of the Zs must be a virtual particle, with a mass less than 90 GeV. In quantum mechanics, a virtual particle is not a “real” particle on the mass shell (refer back to Chapter 6, footnote 1 for a further explanation). As with the diphoton decay channel, the two-Z-boson and four-lepton channel is effectively free of hadronic background.
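The reconstruction step described here, combining measured lepton momenta into pair masses and a total four-lepton mass, amounts to computing invariant masses from four-vectors. The short Python sketch below illustrates the arithmetic; the four-vectors are invented example values (with the leptons treated as massless), not real LHC data.

```python
import numpy as np

def invariant_mass(p4s):
    """Invariant mass of a group of particles from their (E, px, py, pz)
    four-vectors, in GeV: m^2 = E^2 - |p|^2 for the summed four-vector."""
    E, px, py, pz = np.sum(np.asarray(p4s, dtype=float), axis=0)
    m_squared = E**2 - (px**2 + py**2 + pz**2)
    return np.sqrt(max(m_squared, 0.0))

# Hypothetical four-lepton event (illustrative numbers only):
# one lepton pair from an on-shell Z, one from a lighter virtual Z*.
electron = (50.0,  30.0,   0.0,  40.0)
positron = (45.0, -27.0,   0.0, -36.0)
mu_minus = (20.0,  12.0,  16.0,   0.0)
mu_plus  = (13.0,  -5.0, -12.0,   0.0)

m_ee   = invariant_mass([electron, positron])
m_mumu = invariant_mass([mu_minus, mu_plus])
m_4l   = invariant_mass([electron, positron, mu_minus, mu_plus])

print(f"e+e-   pair mass: {m_ee:.1f} GeV  (compare with the Z mass, about 91 GeV)")
print(f"mu+mu- pair mass: {m_mumu:.1f} GeV  (well below 91 GeV: a virtual Z*)")
print(f"four-lepton mass: {m_4l:.1f} GeV  (the candidate boson mass)")
```

In a real analysis one histograms the four-lepton mass for many such events and looks for a peak; the point here is only the kinematic bookkeeping.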
Because the new boson is observed to decay into two photons, it follows from a theorem published by Landau in 1948 and by Yang in 1950 that the new boson cannot have spin 1.¹ By using conservation of angular momentum, Landau and Yang showed that an on-shell spin-1 boson cannot decay into two photons, each with spin 1. Therefore, the new boson has to have spin 0 or spin 2. A spin-2 boson is like a graviton. It seems unlikely that a graviton-like particle would be able to fit all the observational data for the new boson, so we are left with the conclusion that the new boson is either a scalar or a pseudoscalar boson with spin 0. All the experimentally confirmed particles of the standard model have either spin ½ or spin 1, so the question arises: What spin-0 particle can be identified with the new boson? The only elementary particle in the standard model that has spin 0 is the scalar Higgs boson. However, composite bosons made of quarks and antiquarks can have spin 0 and be either scalar or pseudoscalar resonance states.
In two papers, I proposed a model of the 125-GeV resonance bump as a quark–antiquark bound state that I call zeta (ζ).² This bound state is based on the super-heavy “quarkonium” model, which was developed during the 1970s and 1980s on the basis of meson spectroscopy at lower energies, using the nonrelativistic quark model. The term quarkonium is borrowed from the word positronium, which denotes a bound state of an electron and a positron. I calculated the partial decay rates of the 125-GeV zeta resonance, which I identified as an electrically neutral pseudoscalar boson, and found that for the zeta resonance decaying into two photons, the result is approximately the same as that expected for a light Higgs boson at 125 GeV decaying into two photons. I found similar results for the channel of the quarkonium zeta resonance decaying into ZZ* and then into four leptons. (Remember, Z* denotes a virtual Z boson.) However, the decays into fermion–antifermion pairs such as b-bar-b quarks and tau+ tau− leptons were suppressed, that is, much smaller than the predicted decays of the Higgs boson. This was a significant difference in predictions between the standard-model Higgs boson and my zeta resonance model. In fact, my prediction of the lack of a signal of the new boson decaying into fermion–antifermion pairs such as b-bar-b and tau+ tau− was consistent with the latest data from the LHC.
The message from my quarkonium paper is that it is possible that a bump such as the one at 125 GeV in the two-photon decay channel is the result of already-established QCD quark spectroscopy recurring at higher energies. The quark–antiquark pseudoscalar particle is bound by QCD gluon interactions. If, indeed, the quarkonium interpretation of the new boson turns out to be true experimentally, this would constitute new physics, in that it would repeat at these higher energies the well-established quarkonium spectroscopy seen at lower energies, such as the observed “charmonium” and “bottomonium” resonances at energies of about 3 GeV and 9.5 GeV, respectively. This would certainly call into question the identification of the 125-GeV bump as the elementary standard-model light Higgs particle. At any rate, the paper issues a caution: any definitive statement about a 125-GeV resonance bump being a Higgs boson should be questioned. Only further experimental investigation will be able to decide this issue. Such investigations at the LHC could take one or two years after the machine starts up again in 2015.
By March 2012, there was excitement in the media and on physics blogs about the “hints” of a Higgs particle with a 125-GeV mass. Comments ranged from those who claimed that the Higgs boson had obviously been observed to those who were skeptical about its discovery. It appeared that the majority of experimentalists at CERN and elsewhere at that time were skeptical that they had, indeed, discovered the Higgs boson. The theorists, on the other hand, were either enthusiastic about its discovery or only somewhat skeptical because of the considerable theoretical prejudice in favor of the Higgs that had built up for almost half a century of electroweak physics.
If the Higgs boson were not discovered, then a kind of nightmare scenario would begin to unfold: How could we construct a consistent electroweak theory that permits us to do finite calculations of cross-sections of scattering particles without a spontaneous symmetry-breaking Higgs mechanism? We would have to go back more than 50 years and start all over again with our theories! This would be a wonderful opportunity for younger theoretical physicists who have not spent their whole careers publishing papers on the standard Higgs mechanism model and the Higgs boson. On the other hand, it would be a threatening prospect for older physicists who have devoted much of their careers to the standard electroweak theory based on spontaneous symmetry breaking and the Higgs mechanism. However, theoretical physics thrives and progresses on crises. This possible crisis could be compared with the one Max Planck confronted in 1900, when he discovered the formula for black-body radiation, which forced him to conceive of the radical idea that energy is discontinuous, emitted in discrete packets of radiation.
There is an important difference between the hunt for the W and Z bosons in 1983 at CERN and today’s search for the Higgs boson. The standard model predicted the masses of the W and the Z; the experimentalists in 1983 had this crucial information when they began searching for the W and Z bosons. Indeed, the experimental discovery of the W and the Z by Rubbia and his collaborators agreed remarkably well with the mass predictions of these particles in the standard model. In contrast, the standard model does not predict the mass of the Higgs boson. Only indirect clues about its mass coming from fits to electroweak data have provided any guidance regarding where to look for the Higgs boson.
Needless to say, as the failure to detect conclusively the Higgs particle at the Tevatron and the LHC continued until March 2012, theorists started to show some nervousness. Papers appeared on the electronic archive speculating about how to explain away the possible absence of the Higgs boson. One popular gambit in theoretical particle physics: if you don’t see something that you really believe in, assume that it is “hiding”—make it invisible! How does this work? One can hide the Higgs boson by having it decay into particles that are not easily detected at the Tevatron or the LHC. Or one can provide explanations regarding why the decay of the “hidden” Higgs boson cannot be detected with today’s technology.
Amid the flurry of interest in March 2012 over the possible bump at 125 GeV in the two-photon decay channel, there was a problem. The standard model also predicted that the Higgs must decay into leptons, such as electrons and positrons, muons and tau leptons, as well as quarks. These fermion decay channels, which are far more dominant than the two-photon decay channel, had not yet been observed conclusively. So theorists came up with the idea of a “fermiophobic boson,” meaning that the Higgs would somehow contrive not to decay into the fermions that one would expect, which again would make the Higgs effectively invisible in these particular decay processes.
Among other possible scenarios for the Higgs decays was the “vectorphobic” model, motivated before 2012 by the absence of a strong signal for the Higgs decaying into the vector bosons W and Z. Such models would appear to be contrived, because they would not agree with the standard-model explanation of symmetry breaking in electroweak theory, nor with the standard-model calculation of the Higgs boson decay into two photons.
It has even been suggested recently that the Higgs boson can decay into dark-matter particles, which of course will probably never be observed! Detecting dark-matter particles is difficult, if not impossible, as has become clear from the many null results of underground experiments searching for dark-matter candidates such as WIMPs.
The decay of the Higgs into ZZ and WW gauge bosons is important to detect, because observing its coupling to these bosons is necessary to confirm that the Higgs gives mass to the W and Z bosons through the standard Higgs mechanism. Of course, the coupling of the Higgs to the leptons and quarks is also important to observe, to confirm that the Higgs is responsible for giving fermions their masses. In addition, the Higgs boson cannot decay directly into two photons, because photons are massless, and a direct coupling of the Higgs to the photons would give them mass. So for the Higgs to decay into two photons, it must do so indirectly, through a top/antitop quark loop and a W+ W− boson loop. Therefore, if a strong signal of the Higgs boson decaying into two photons is observed, this means that the Higgs couples to fermions and to the W bosons.
MOVING BEYOND THE STANDARD MODEL
Given the lack of experimental data during the past three decades, theoreticians have been speculating wildly about fundamental issues in particle physics without any experimental restraint on their vivid imaginations. Their goal has been to go “beyond the standard model,” inventing new particles and new symmetries—anything, in fact, that would keep the standard model with the Higgs particle intact, but would remove the pesky fine-tuning problems associated with the Higgs boson.
However, since September 2010, experimental restraints have started coming into play from the data pouring out of the LHC, and many of these beyond-the-standard-model (BSM) speculations are suffering a sudden death. If it turns out, in the end, that the bump at 125 GeV is something other than the Higgs boson, then we will have to reinvent the standard model. The question of how particles get mass would have to be reconsidered. This is a problem that requires a deeper understanding of the origins of mass, and how gravity and inertia may also play a role in unraveling this mystery.
There have been many kinds of BSM proposals. Most involve a larger symmetry group than the one on which the standard model is based. The grand unified theories (GUTs) of the 1970s were the precursors of this type of model. The objective of GUTs was to unify the strong, weak, and electromagnetic forces within one larger symmetry group; gravity was not included in these schemes.
These BSM ideas involve increasing the number of undetected particles that are supposedly lurking at high energies not yet reached by accelerators. The most prolific increase of particles is generated by the idea of supersymmetry, which enlarges the symmetry group of spacetime. We recall that with the introduction of supersymmetry, the number of particles is doubled by having a supersymmetric partner for every observed elementary particle. There is also a huge increase in the number of free parameters that describe the properties of these hypothesized superpartners, from about 20 to about 120.
Even if we do solve the problem of how to unify the forces of nature in some grand group structure based on some hypothesized symmetry of particles and fields, this will not necessarily resolve the problem of the infinities encountered in the calculations performed in particle physics. This has been a recurring problem in particle physics during the past six decades and still has to be addressed when attempting to formulate BSM proposals.
One attempt to address the problem of infinities in calculations goes under the rubric of “effective field theory.” An effective field theory is not a complete theory that is valid to all energies; rather, it allows one to work up to a certain high energy by means of an energy “cutoff.” Take any speculative unified field theory and deal with the problem of the infinities by cutting off the high energies in the calculations, so that you do not have to face what is called the ultraviolet energy catastrophe. Physics below about 300 GeV is customarily termed infrared, with long wavelengths, whereas physics above 1 TeV is in the ultraviolet energy range, with short wavelengths. These short wavelengths correspond to the high energies of particles in accelerators. When the ultraviolet energy extends to infinity in the calculations in quantum field theory, the results of the calculations become infinite and meaningless. This is the ultraviolet energy catastrophe.
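As a schematic illustration of how the cutoff works (a generic textbook-style estimate, not a formula from this book), a typical logarithmically divergent loop integral in quantum field theory, written in Euclidean momentum space and cut off at momentum Λ, behaves as

\[
\int^{\Lambda} \frac{d^{4}k}{(2\pi)^{4}}\,\frac{1}{\left(k^{2}+m^{2}\right)^{2}}
\;\approx\; \frac{1}{16\pi^{2}}\,\ln\!\frac{\Lambda^{2}}{m^{2}}
\qquad (\Lambda \gg m),
\]

which is perfectly finite for any finite cutoff Λ but grows without bound as Λ → ∞; other loop integrals grow even faster, like Λ² or Λ⁴. This unbounded growth is the ultraviolet catastrophe described above.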
Above this ultraviolet energy cutoff, we simply do not understand how to do the mathematics of particle physics, such as calculating the cross-sections of the scattering of particles, and we hope that some future development will rescue us from this impasse. Everything below the energy cutoff is just fine. You can do renormalization theory and all the calculations come out finite, and the particle physicists can happily calculate their cross-sections. However, there is a built-in trap in the weak-interaction theory—namely, without the Higgs boson, using the standard methods of quantum field theory based on local interactions of particles, things can go badly wrong at around 1 to 2 TeV; the scattering of particles is no longer unitary. So even if you put in this energy cutoff at some higher energy—say, 10 TeV—the weak-interaction theory breaks down anyway. In the event that the LHC does not discover any new particles beyond the Higgs boson, and no such new particles exist in nature, then for a light Higgs boson, it can be claimed that the theory is renormalizable and valid to all energies. This means that, in such a renormalizable theory with a light Higgs boson, the cutoff can be made to go to infinity without destroying the calculations of physical quantities such as cross-sections.
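To give a rough sense of where this breakdown occurs (a standard back-of-envelope estimate, not a calculation from this book): without a Higgs boson, the leading partial-wave amplitude for the scattering of longitudinally polarized W bosons grows with the collision energy squared, s, roughly as

\[
a_{0} \;\sim\; \frac{s}{16\pi v^{2}},
\]

where v ≈ 246 GeV is the electroweak energy scale. Unitarity, the requirement that probabilities not exceed 100 percent, demands |a₀| ≲ 1/2, and this bound is violated once the collision energy √s reaches roughly 1.2 TeV, which is why the breakdown is quoted as happening at around 1 to 2 TeV.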
The problem with BSM theories in which new particles are claimed to exist is that they may not cure the diseases of the weak interactions, namely, the lack of renormalizability of the theories. Exit the Higgs, and you may be faced with this terminal disease. There have been proposals to keep adding in undetected particles, starting at 1 TeV or 2 TeV, whose contributions cancel out the fatal probabilities that would otherwise add up to more than 100 percent. The problem with this is that more and more particles have to be added into the calculations to keep preventing the unitarity problem from recurring as the energy increases. However, these arguments rest on the application of perturbation theory to the electroweak theory and on the assumption that the electroweak theory is valid to all energies. This may not be the case; the theory could become nonperturbative and require a new approach to solving the unitarity problem.