Cracking the Particle Code of the Universe
Alexey Drozdetskiy, a member of the particle physics group at the University of Florida, Gainesville, spoke on behalf of the CMS collaboration in Kyoto and presented the results of a spin–parity analysis. The analysis of the golden-channel decay of what we are now calling the “X boson” into ZZ* and then into four leptons relied on sophisticated statistical methods, called multivariate analysis, designed to determine the parity of the X boson. It compared the standard-model Higgs boson prediction for the angular correlations of the four leptons with the prediction of an effective pseudoscalar Higgs boson for the same decay. The effective pseudoscalar Higgs boson model did not follow naturally from the standard Higgs boson model, and it was not renormalizable. In addition, we recall that this effective pseudoscalar Higgs model would only fit the ZZ* decay data if its coupling strengths were manipulated artificially, increasing the interaction strength more than 100-fold. The Florida group claimed a 2.5-sigma result favoring a scalar, spin-0 Higgs boson. The ATLAS collaboration also claimed that scalar, positive parity is favored over pseudoscalar, negative parity for the new boson.
Other talks given at the Kyoto meeting still showed a deficit in the signal strength reported by both the CMS and ATLAS collaborations for the tau+–tau− and bottom–antibottom decay channels of the X boson. In view of this, whether the X boson is indeed the much-sought standard-model Higgs boson remains an unresolved question.
Yet, theorists and experimentalists alike are now increasingly claiming that the overall data accumulated in the search for the Higgs boson at the LHC is consistent with a standard-model Higgs boson. Already, an institute is being formed and prizes are being awarded as if the new discovery is the Higgs boson. With the supposed dramatic discovery of the Higgs boson at CERN, Peter Higgs’s name suddenly became famous throughout the world. The University of Edinburgh, where Peter is a retired professor, quickly capitalized on this and inaugurated the new Higgs Centre for Theoretical Physics in January 2013. On this occasion, the university held a Higgs Symposium attended by several well-known particle physicists. Among them was Joe Incandela, a senior experimentalist in the CMS collaboration, who gave a detailed review of the CMS results in the search for the Higgs boson. These results did not contain anything essentially new beyond what had been announced at the Kyoto meeting.
In addition, Russian billionaire Yuri Borisovich Milner, an entrepreneur and venture capitalist, started the Fundamental Physics Prize Foundation in July 2012 and chose the first nine winners of the $3 million annual award. This prize is now the largest academic award in the world, beating both the Nobel and Templeton prizes. In addition to the nine winners, two $3 million special prizes were awarded in 2012 to Stephen Hawking and to seven scientists who led the effort to discover a Higgs-like particle at the LHC, including Joe Incandela and Fabiola Gianotti.
10
Do We Live in a Naturally Tuned Universe?
When constructing models of nature, theoretical physicists expect that calculating fundamental constants, such as the mass and charge of elementary particles, involves only simple arithmetical operations, such as the subtraction of two real numbers with a few decimal places. Numbers like 2.134 are of order one, or about the size of unity, compared with 21,340,000, which is nowhere near the size of unity. To obtain a number of order one from the much larger number requires fine-tuning—in other words, subtracting a number of similar size. A theory that needs no fine-tuning when determining physical parameters, such as the mass and charge of the particles, is considered “natural.” Theoretical physics contains paradigms, such as electromagnetism, QED, and classical general relativity, that satisfy this naturalness criterion. If it is not satisfied—and the calculations in the theory involve canceling two numbers to many decimal places, rather than a few—then we can expect that the theory is flawed. The current electroweak theory in the standard model suffers from this lack of naturalness. A theory that has this fine-tuning problem is not falsified by this failing, but it signals strongly that the theory is incomplete and needs serious modification.
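To make this concrete with an invented numerical example (the numbers are chosen only for illustration): extracting a result of order one from two inputs of order ten million requires the inputs to agree to about eight significant figures,

\[
21{,}340{,}002.134 \;-\; 21{,}340{,}000 \;=\; 2.134 .
\]

If either input were uncertain by even one part in a million, the small difference would be swamped entirely by that uncertainty; this is what physicists mean when they call such a cancelation finely tuned.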
FINE-TUNING AND THE HIGGS MASS HIERARCHY PROBLEM
In the event that future data confirm beyond a doubt that the new boson discovered at the LHC at 125 GeV is the standard-model Higgs boson, and at the same time no other new particles or new physics are discovered, then we face a crisis in particle physics. When the Higgs mass is calculated from the standard model, a serious fine-tuning problem arises from the divergent, or infinite, nature of the calculation. This is called the Higgs mass hierarchy problem, which we encountered earlier in this book. The theoretical prediction for the Higgs mass comes from subtracting one potentially infinite term from another.
The scalar-field Higgs model is renormalizable, which means that infinities that occur in the calculations of the mass and charge of a particle can be absorbed by unobservable quantities, such as the bare mass and the self-interaction mass of the particle. This occurs during renormalization to yield a finite measured mass and charge. The renormalization involves infinities that grow significantly with increasing energy, to such an extent that it becomes difficult to control them in the renormalization scheme. Theories are not necessarily expected to be valid up to infinitely high energies. They have to be formulated with a built-in energy cutoff—the highest energy at which the theory remains valid. We cannot have an infinite quantity in the calculation of a constant, so we “cut off” the energy and thereby limit the size of the energy in the calculation. After a theory is defined with such a cutoff, the potentially divergent or infinite terms appear as powers of the cutoff. In the jargon of theoretical physicists, we then have logarithmic, linear, quadratic, and so on, divergences. These quantities, which are dependent on the cutoff, become infinite when the cutoff goes to infinity.
The least harmful of such infinities are those that involve the logarithm of the energy cutoff needed to make sense of the renormalization calculation. The calculation of the quantum corrections to the Higgs boson mass gives a result proportional to the square of the energy cutoff, which increases drastically as we increase the energy, whereas the logarithm of the cutoff increases much more slowly. If the standard model is valid all the way to the Planck mass–energy of 10¹⁹ GeV, then this leads to a fine-tuning of about one part in 10¹⁷ for a Higgs mass of 125 GeV! For this size of quantum correction, the bare mass of the Higgs has to have a miraculously negative value of this size, so that an improbable cancelation occurs to give the Higgs boson a mass of 125 GeV.
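In rough numbers, as a back-of-the-envelope sketch in my own notation (writing Λ for the energy cutoff, m_H for the physical Higgs mass, and m_bare for the bare mass):

\[
m_H^2 \;=\; m_{\mathrm{bare}}^2 + \delta m^2, \qquad \delta m^2 \sim \Lambda^2 \sim \left(10^{19}\ \mathrm{GeV}\right)^2,
\]
\[
\frac{m_H}{\Lambda} \;\sim\; \frac{125\ \mathrm{GeV}}{10^{19}\ \mathrm{GeV}} \;\sim\; 10^{-17}.
\]

The two terms on the right must therefore cancel to roughly one part in 10¹⁷ in the masses (about one part in 10³⁴ in the squared masses) to leave behind the observed 125 GeV.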
Observation tells us that the Higgs-like boson mass is about 125 GeV, but the calculated quantum energy in the electroweak theory makes the Higgs boson mass bigger. This quantum energy is the result of the interactions of the Higgs boson with virtual particles such as the top quark and the W and Z bosons. Quantum physics can keep the Higgs boson light only through a ridiculous degree of fine-tuning. In the perturbative scheme used to calculate the masses of the elementary particles, a difference occurs between the formally infinite “bare” mass, which is the particle’s mass in the absence of interactions with other particles, and the infinite quantum energy mass. If the quantum energy behaves as a logarithm of the energy cutoff, then it can be controlled; it produces a result that does not require a fine-tuned cancelation between the unknown bare mass and the amount of quantum energy. On the other hand, if the quantum energy is the square or a higher power of the cutoff, then the amount of quantum energy becomes uncontrollable, resulting in a fine-tuning to many decimal places in its cancelation with the bare mass to produce the experimentally observed mass.
We have an analogous situation in classical electromagnetism when an electron interacts with other electrons through the Coulomb force. The self-energy of the electron, which is produced by the electron’s interaction with its own electromagnetic field, produces a large mass as a result of its large charge density. The contribution to the electron mass that comes from self-energy adds to the intrinsic bare mass of the electron (the mass that the electron would have in the absence of interaction). In the theory of classical electromagnetism, it is not difficult to see that the electron self-energy becomes infinite as the radius of the electron becomes zero. To get the small measured electron mass, a precise, fine-tuned cancelation must occur between the bare electron mass and the self-mass. However, another particle—the positively charged electron, or positron—comes to the rescue. Pairs of virtual electrons and positrons create a cloud around the negatively charged electron that smears out its charge and reduces the amount of electrostatic self-energy to a manageable size, thus avoiding an unacceptable fine-tuning in the calculation of the electron mass. Quantum physics resolves the fine-tuning problem for the electron mass.
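A standard textbook estimate (not a calculation from this book) makes the divergence explicit: for the electron’s charge e spread over a sphere of radius r, the electrostatic self-energy is of order

\[
E_{\mathrm{self}} \;\sim\; \frac{e^2}{4\pi\epsilon_0\, r},
\]

which grows without bound as r goes to zero (the exact numerical factor depends on how the charge is assumed to be distributed). Setting this energy equal to m_e c² gives the classical electron radius of roughly 2.8 × 10⁻¹⁵ m, far larger than any experimental bound on electron substructure, which is why the classical picture forces the fine-tuned cancelation described above.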
THE GAUGE HIERARCHY PROBLEM
The Higgs mass hierarchy problem is closely related to the gauge hierarchy problem, which refers to the enormous difference between the electroweak energy scale of about 250 GeV and the Planck energy scale of 10¹⁹ GeV. Based on experimental results, we know that the symmetry breaking in the electroweak theory occurs at about 250 GeV. However, if the electroweak theory is valid up to the Planck energy, which is considered the ultimate energy reachable in particle physics, where gravity becomes as strong as the other three forces of nature, then one has to ask why no other physically significant energy scale occurs between those two scales. At lower energies, the strong QCD force has an energy scale of about 150 MeV, where the confinement of quarks occurs.
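In numbers, the gap is about seventeen orders of magnitude:

\[
\frac{E_{\mathrm{Planck}}}{E_{\mathrm{electroweak}}} \;\approx\; \frac{10^{19}\ \mathrm{GeV}}{250\ \mathrm{GeV}} \;\approx\; 4\times 10^{16}.
\]

By comparison, the QCD scale of 150 MeV sits only about three orders of magnitude below the electroweak scale.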
Without new physics, there is no explanation for this enormous energy gap between the electroweak and Planck energies, which seems very unnatural. You would expect something to occur between these two energy scales. If there are indeed no new particles between the electroweak energy scale and the Planck energy scale, then this constitutes a “desert” in particle physics.
THE SPECTRUM OF FERMION MASSES
A third hierarchy problem in the standard model has to do with the huge mass spectrum of quarks and leptons. The ratio of the electron mass of 0.5 MeV to the top quark mass of 173 GeV is about 10⁻⁶. The situation is even worse for the neutrino mass, which is about 0.2 eV; here, the ratio to the top quark mass is about 10⁻¹². The standard model does not provide an explanation of why there is such an enormous range in the mass spectrum of the quarks and leptons. After the spontaneous symmetry breaking of the electroweak theory, the Higgs boson is supposed to impart mass to the quarks and leptons, but the actual experimental values of their masses are adjusted by hand in an ad hoc manner. We need a theoretical calculation that predicts the experimental values of the quark and lepton masses, and that explains the large, unnatural mass difference between the neutrino or the electron and the top quark.
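Spelled out with the round numbers quoted above:

\[
\frac{m_e}{m_t} \;=\; \frac{0.5\ \mathrm{MeV}}{173\ \mathrm{GeV}} \;=\; \frac{0.5\times 10^{-3}\ \mathrm{GeV}}{173\ \mathrm{GeV}} \;\approx\; 3\times 10^{-6},
\qquad
\frac{m_\nu}{m_t} \;\approx\; \frac{0.2\ \mathrm{eV}}{173\ \mathrm{GeV}} \;\approx\; 10^{-12}.
\]

In the standard model these ratios are simply put in by hand, through the strength of each particle’s coupling to the Higgs field; nothing in the theory explains why they span roughly twelve orders of magnitude.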
These three related hierarchy problems in the standard model—the Higgs mass hierarchy problem, the gauge hierarchy problem, and the fermion mass hierarchy problem—along with the cosmological constant problem, all exhibit extreme fine-tuning and are known collectively as the naturalness problem. The naturalness problem has been a constant thorn in the side of the standard model, and has been discussed contentiously for decades in the particle physics community.
TRYING TO SOLVE THE NATURALNESS PROBLEM WITH SUPERSYMMETRY
One of the first attempts to solve the naturalness problem in the standard model was supersymmetry, proposed during the 1970s. Here, as in our example of the calculation of the electron mass, other particles come to the rescue to solve the Higgs mass hierarchy problem. These are the superpartners, which differ from the known standard-model particles by half a unit of spin, producing a fermion partner for every boson and vice versa. Because fermion loops and boson loops contribute to the quantum self-energy with opposite signs, the large quantum self-energy contribution to the Higgs boson mass is neatly canceled. This solution to the Higgs mass hierarchy problem is only valid for an exact, rather than broken, supersymmetry, and for superpartner masses that are not too far removed from the observed standard-model particle masses. When the Higgs boson mass is calculated in supersymmetry, each fermion contribution is matched by the contribution of superpartner bosons with approximately the same mass and coupling strength, and a cancelation occurs, reducing the fine-tuning of the Higgs boson mass to an acceptable level.
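Schematically, in the notation used in standard supersymmetry reviews rather than in this book: a fermion that couples to the Higgs with strength λ_f shifts the squared Higgs mass by roughly −(|λ_f|²/8π²)Λ², while a scalar coupling to the Higgs with strength λ_S shifts it by roughly +(λ_S/16π²)Λ². With supersymmetry supplying two scalar superpartners per fermion and enforcing λ_S = |λ_f|², the dangerous pieces cancel:

\[
\delta m_H^2 \;\sim\; -\frac{|\lambda_f|^2}{8\pi^2}\,\Lambda^2 \;+\; 2\times\frac{\lambda_S}{16\pi^2}\,\Lambda^2 \;=\; 0 \qquad \text{when } \lambda_S = |\lambda_f|^2 .
\]

What remains depends only logarithmically on the cutoff and on the mass splitting between particles and their superpartners, which is why the superpartners must not be too heavy.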
Martinus Veltman published a paper in 1981,1 in which he investigated the naturalness problem in the standard model, concentrating in particular on the Higgs mass hierarchy problem. Using supersymmetry, he formulated a mathematical condition involving the masses of the standard-model particles, which, when satisfied, would solve the naturalness problem.
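The condition usually quoted from this line of work, stated here from the standard literature in my own notation (v ≈ 246 GeV is the Higgs field’s vacuum value, Λ the energy cutoff, and scheme-dependent details are ignored), is that the one-loop quadratic correction

\[
\delta m_H^2 \;\simeq\; \frac{3\Lambda^2}{16\pi^2 v^2}\left(m_H^2 + 2m_W^2 + m_Z^2 - 4m_t^2\right)
\]

vanishes when m_H² + 2m_W² + m_Z² ≈ 4m_t². With the measured masses the combination on the left falls well short of 4m_t², so the condition is not satisfied, and the quadratic sensitivity to the cutoff remains.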
Yet, this supersymmetric resolution of the Higgs mass hierarchy problem has more or less been invalidated, because the LHC has not found any supersymmetric partners, or “spartners” in supersymmetry-speak, such as stops and gluinos, which are the superpartners of the top quark and the gluon, respectively, up to energy bounds of 1 to 2 TeV. As a result of this lack of evidence for superpartners below 1 to 2 TeV, attempts to solve the hierarchy problems by using supersymmetry models lead, again, to an unacceptable level of fine-tuning.
TRYING TO SOLVE THE NATURALNESS PROBLEM WITH THE MULTIVERSE
Because there is no evidence so far of supersymmetry, physicists have been searching for other possible solutions to the hierarchy and fine-tuning problems. A recent spate of papers by notable particle physicists claims that we must consider the multiverse model. Here, within the posited vast (and possibly infinite) number of different universes, our standard model of particle physics occurs in a finely tuned way in only one—the universe in which we live. In other words, this is just the way our particular universe is, so there’s no sense worrying about it. This ad hoc “solution” to the standard-model hierarchy and fine-tuning problems has been widely criticized in the physics community. For one thing, because we cannot access any of the other universes in the multiverse, we can never observe what goes on in any universe except our own, with the consequence that we cannot verify or falsify any part of the multiverse paradigm.
The multiverse is related to the anthropic principle, which holds that many of the laws of physics are based on very finely tuned constants. The idea of the anthropic principle was first applied to physics and cosmology by Robert Dicke of Princeton University during the late 1950s and early 1960s. He said that the age of the universe was just right—namely, the universe chose a “Goldilocks” age, which made it possible for humans to have evolved to observe it. If the universe were 10 times younger than its 13.7 billion years, then much of astrophysics and cosmology would not be valid as we observe it today. In such a young universe, there would not have been time to build up crucial elements for life such as carbon and oxygen by nuclear synthesis. If the universe were 10 times older than it is measured to be, then most stars would have burned out and become white dwarfs. Planetary systems such as ours would have become unstable a long time ago.
In 1973, cosmologist and astrophysicist Brandon Carter promoted the so-called weak anthropic principle at a symposium in Kraków honoring the 500th birthday of Copernicus. The Copernican principle, flowing from Copernicus’s discovery that the earth was not the center of the universe, states that human beings do not occupy a privileged place in the universe. Carter used the anthropic principle in reaction to the Copernican principle. He said at the symposium, “Although our situation is not necessarily central, it is inevitably privileged to some extent.” Carter disagreed with using the Copernican principle to justify the so-called perfect cosmological principle.
This principle was the basis of the steady-state theory of cosmology proposed by Fred Hoyle, Hermann Bondi, and Thomas Gold in 1948, and it held that all the regions of space and time are identical. That is, the large-scale universe looks the same everywhere, the same as it always has and always will. The steady-state theory was falsified eventually in 1965 by the discovery of the cosmic microwave background radiation. This discovery provided clear evidence that the universe has changed radically over time, as described by the Big Bang theory, and is not the same everywhere, neither in space nor in time. There is a weaker form of this idea called the cosmological principle, which states that we do not occupy a special position in space, and that the universe changes its gross features with time but not in space (in contrast to the perfect cosmological principle, in which the universe never changes in time or space). This cosmological principle is the basis of the standard model of cosmology.
A book originally published in 1986 by John Barrow and Frank Tipler2 distinguished between the weak and strong anthropic principles. The weak anthropic principle states that life as we know it exists because we are here as observers in our universe. Roger Penrose has described the weak anthropic principle as follows: “The argument can be used to explain why the conditions happen to be just right for the existence of (intelligent) life on the earth at the present time. For if they were not just right, then we should not have found ourselves to be here now, but somewhere else, at some other appropriate time.”3
The strong anthropic principle says that we are here in the universe because many constants and laws of physics are finely tuned. For example, if the electric charge of the electron were a tiny bit different from its measured value, or if the mass difference between the proton and the neutron were a bit different from its measured value, then the laws of atomic and nuclear physics would fail, and we could not exist as a life form in the universe. Evolutionary biologist Alfred Russel Wallace, who, independently of Charles Darwin, proposed the theory of evolution, anticipated the modern version of the anthropic principle back in 1904: “Such a vast and complex universe as that which we know exists around us, may have been absolutely required… in order to produce a world that should be precisely adapted in every detail for the orderly development of life culminating in man.”4