According to the multiverse paradigm, which is related to the anthropic principle, we simply have to accept that this is the way the universe is. We should not attempt to explain away or overcome the extreme fine-tuning that occurs in the standard model, such as the hierarchy problems related to the Higgs boson, because many of the universes in the multiverse would have the same problems. In other words, the multiverse “solves” the naturalness problem by not considering it a problem.
More than 400 years ago, when Johannes Kepler attempted to explain the relative distances of the six planets known in his time from the sun and from each other, he was attempting to do it scientifically. He used the five Platonic solids to demonstrate why the distances between the planets were what they were (Figure 10.1). In effect, he was aiming for a “natural” explanation for the sizes of the planetary orbits, and he assumed that the planets existed independently of human observers on earth. His Platonic solids model gave approximately the right answer for the relative sizes of the orbits. Yet, from a modern astronomical perspective, Kepler’s use of the Platonic solids to explain the planetary orbital radii appears quaint.
Figure 10.1 Kepler’s five Platonic solids, a pleasing but wrong-headed idea that correlated the distances between the orbits of the known planets with the dimensions of the nested octahedron, icosahedron, dodecahedron, tetrahedron, and cube.
SOURCE: Wikipedia.
However, proponents of the anthropic principle today go beyond “quaint.” They say that it was not even meaningful for Kepler to try to obtain an objective physical explanation of the relative distances of the planets. From their point of view, it was disappointing that Newton’s gravity theory had failed to determine and constrain the radii of planetary orbits, but that was just the way things were in our particular universe. It made no sense to ask why.
Instead, they claim, the radii of the planetary orbits are what they are simply because of a historical accident during the formation of the solar system. Rather than focusing on the planets as objective and related celestial bodies, the anthropic principle proponents focus on the earth. They explain that the earth is just the right distance from the sun to create an environment for human beings to exist because, for one thing, the surface temperature allows for liquid water. The important point to them is that human beings developed on the earth, and the orbits of earth and the other planets contributed to that event. If we suppose that our planet is the only inhabited planet in the universe, their argument continues, then it would be miraculous for the earth to be located at exactly the right distance from the sun to support life. However, if we take into account the other planets in our solar system and the myriad extrasolar planets in other systems, then the “fine-tuning” of the earth’s distance from the sun does not seem so unique. Many other planets in the universe might be as favorably situated.
In the anthropic view, the existence of human beings is central. Our existence explains away any concerns about puzzling planetary orbits or unwieldy constants or the necessity for arithmetical fine-tuning. We should just accept the conditions that allowed for our evolution.
Well-known scientists today are espousing the anthropic view, and science and the scientific method that we have known for centuries may be morphing into philosophy or mysticism. A natural universe is one in which there are no unnatural fine-tunings and there is no need to invoke the anthropic principle to explain the laws of nature. The basic question in considering the multiverse “solution” is: Should we continue to try to remove the fine-tuning problems in deriving the laws of physics—that is, should we continue to try to improve our theories about nature—or should we just accept these problems and accept that we live in an unnatural anthropic universe that might be part of a multiverse?
THE HIGGS BOSON AND THE NATURALNESS PROBLEM
Given the fine-tuning in the standard model and the proposed resolution by means of the multiverse and anthropic principle, we must ask the fundamental question: Do we live in a natural universe or not? If the new boson discovered at the LHC is, in fact, the standard-model Higgs boson responsible for the fine-tuning, the particle physics community cannot ignore the trouble this discovery brings with it. The trouble arises because of the failure to detect new physics beyond the standard model at the LHC, such as supersymmetric superpartners. Such new physics could alleviate the fine-tuning or naturalness problem. However, this requires a complicated conspiracy among any new particles such that the hierarchy problem gets canceled out. If the particles do not conspire to allow this cancelation, then, ironically, the situation could be even worse because the large masses of the new particles would increase the size of the quantum corrections responsible for the Higgs mass hierarchy problem.5
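As a rough, schematic illustration of why heavy new particles make matters worse (this estimate is a standard back-of-the-envelope one, not a calculation given in the text): the dominant quantum correction to the Higgs mass squared comes from the top-quark loop and grows with the square of the highest mass scale Λ at which new physics enters,

\[ \delta m_H^2 \;\sim\; -\frac{3\,y_t^2}{8\pi^2}\,\Lambda^2 , \]

where y_t is the top-quark Yukawa coupling. If Λ lies anywhere near the Planck scale, this correction exceeds the observed (125 GeV)² by dozens of orders of magnitude, and an equally enormous bare mass must be tuned against it to leave behind the light Higgs boson we see.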
Is there a way, within the current formulation of particle physics, to resolve the hierarchy problems and produce a natural explanation of the standard model, without new physics beyond the standard model, without recourse to severe fine-tuning, and without embracing the multiverse paradigm?
The source of the fine-tuning of the Higgs boson mass and the gauge hierarchy problem is the way the electroweak symmetry SU(2) × U(1) is broken in the standard model by a Higgs boson field. The Higgs field is introduced into the standard model to break the symmetry through a special potential, which contains a scalar field mass squared multiplied by the square of the scalar field. Added to this is the scalar field raised to the fourth power and then multiplied by a coupling constant, lambda. Choosing the scalar mass squared to be negative allows the potential as a function of the scalar field to have a maximum and two minima. The maximum occurs for the vanishing of the scalar field and the minima occur for constant, nonzero values of the scalar field (refer back to Figure 5.2). It is the minima in the ground state that break the basic SU(2) × U(1) symmetry and yield the positive physical Higgs mass (initially not including quantum mass corrections) as well as the W and Z boson masses.
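In symbols, and in one common normalization (conventions for the coefficients vary), the potential described verbally above is

\[ V(\phi) \;=\; \mu^2\,\phi^2 \;+\; \lambda\,\phi^4 , \qquad \mu^2 < 0 , \]

which has a maximum at φ = 0 and minima at the nonzero values φ² = −μ²/(2λ). The ground state sits at one of these minima, breaking the SU(2) × U(1) symmetry; the curvature of V there sets the physical Higgs mass, and the nonzero field value gives the W and Z bosons their masses.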
Because the initial form of the Lagrangian in the electroweak theory has massless W and Z bosons, massless quarks and leptons, and a massless photon, the only dimensionful constant in the Lagrangian is the Higgs mass. This mass breaks the scale invariance, or conformal symmetry, of the classical Lagrangian. Theories containing only massless particles, such as photons, satisfy a special symmetry called conformal invariance. The original and best-known theory satisfying conformal invariance is Maxwell’s theory of electromagnetism, with massless photons traveling at the speed of light.
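A quick way to see why the mass term is the culprit (a standard dimensional argument, sketched here rather than taken from the text): under a scale transformation x → sx, a scalar field in four dimensions transforms as φ(x) → s⁻¹φ(x/s), so

\[ \int d^4x\,\lambda\,\phi^4 \;\to\; \int d^4x\,\lambda\,\phi^4 , \qquad \int d^4x\,\mu^2\phi^2 \;\to\; s^2 \int d^4x\,\mu^2\phi^2 . \]

The quartic self-interaction is scale invariant, but the mass term is not; any dimensionful parameter spoils conformal symmetry.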
In 1980, Gerard ’t Hooft proposed a resolution to the electroweak hierarchy problems in which the Higgs mass would be protected by a symmetry that would naturally make it light. In 1995, William Bardeen proposed, at a conference held at Fermilab, a way to avoid the hierarchy problems. To keep the initial classical Lagrangian of the electroweak model “conformally invariant,” he set the Higgs mass in the potential to zero. This means that the original form of the mechanism for producing spontaneous breaking of the electroweak symmetry is lost. However, in 1973 Sidney Coleman and Erick Weinberg, both at Harvard, had proposed that the symmetry breaking of the Weinberg–Salam electroweak model could be achieved by the purely quantum self-interaction of the Higgs boson.6 These quantum self-energy contributions would produce an effective potential without a classical mass contribution in the electroweak Lagrangian. A minimization of this effective potential breaks the electroweak symmetry. If there are no new particles beyond the ones in the standard model that have already been observed, then this could solve the hierarchy problems. In particular, the Higgs mass quantum contributions can be tamed to small logarithmic corrections that can be accounted for without a fine-tuning of the Higgs mass calculation. Other authors, such as Krzysztof Meissner at the University of Warsaw, Hermann Nicolai at the Max Planck Institute for Gravitational Physics in Potsdam, and Mikhail Shaposhnikov and collaborators at the École Polytechnique Fédérale de Lausanne, have developed these ideas further.
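Schematically (the precise coefficients depend on the particle content and are not given in the text), the Coleman–Weinberg one-loop effective potential for a classically massless field has the form

\[ V_{\rm eff}(\phi) \;\approx\; \lambda\,\phi^4 \;+\; B\,\phi^4\!\left[\ln\frac{\phi^2}{M^2} + \text{const}\right] , \]

where B is a combination of the couplings of the particles running in the quantum loops and M is a renormalization scale. Minimizing this potential generates a nonzero vacuum value of φ, and hence electroweak symmetry breaking, even though no mass term was put in by hand; the resulting corrections grow only logarithmically, rather than quadratically, with the high scale.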
There is a serious problem with this potential solution to the hierarchy problems. The top quark mass is large (173 GeV), and it couples strongly to the Higgs boson in the postulated Yukawa Lagrangian, which is the part of the standard (Steven) Weinberg–Salam model introduced to give the quarks and leptons their masses. Because of this coupling, the final calculation of the quantum field contributions to the Coleman–(Erick) Weinberg effective potential becomes unphysical, which illustrates how the standard model is tightly constrained and not easy to modify without invoking unwanted unphysical consequences.
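A quick numerical check of “couples strongly” (using a standard relation assumed here, not quoted in the text): in the standard model the top quark mass is set by its Yukawa coupling y_t through m_t = y_t v/√2, where v ≈ 246 GeV is the vacuum value of the Higgs field, so

\[ y_t \;=\; \frac{\sqrt{2}\,m_t}{v} \;\approx\; \frac{1.41 \times 173\ \text{GeV}}{246\ \text{GeV}} \;\approx\; 1.0 . \]

An order-one coupling means the top-quark loop dominates the quantum contributions to the effective potential, which is why it can spoil the Coleman–Weinberg mechanism in the pure standard model.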
Another source of fine-tuning is the neutrino masses. A massive fermion has both a left-handed (isospin doublet) component and a right-handed (isospin singlet) component of the Dirac field operator. However, the right-handed neutrino has never been observed. Because the theory demands its existence, particle physicists hypothesize that it must have a mass so large that it has not yet been detected at the LHC.
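The usual way this hypothesis is made quantitative is the seesaw mechanism, which the text does not spell out; as an illustration only, a Dirac mass m_D of roughly electroweak size combined with a very large right-handed mass M_R yields light observed neutrinos,

\[ m_\nu \;\approx\; \frac{m_D^2}{M_R} \;\;\Longrightarrow\;\; M_R \;\approx\; \frac{(100\ \text{GeV})^2}{0.1\ \text{eV}} \;\approx\; 10^{14}\ \text{GeV} \]

for an observed neutrino mass of about 0.1 eV, a scale hopelessly beyond the reach of the LHC.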
COSMOLOGICAL BEARINGS ON PARTICLE PHYSICS
The vexing problem of the cosmological constant—namely, having to explain why it is zero or, as is required observationally by the standard ΛCDM cosmological model, why it has a tiny but nonzero value—arises in particle physics only when the standard-model particles couple to gravity. In the absence of gravitational interactions between the particles, there is no cosmological constant problem. The cosmological constant problem has not yet been solved in a convincing way, and it remains a puzzle that must be resolved before we can confidently accept the standard model of particle physics and cosmology as the final word. Recently, the Planck satellite mission collaboration published remarkably precise data revealing new features in early-universe cosmology.7 In particular, these data demonstrate that the early universe is simpler than we had anticipated in early theoretical cosmological models. The new precise data eliminate many of the more complicated inflationary models, such as hybrid models, which involve several scalar fields with additional ad hoc free parameters. These new data bring into relief the long-standing naturalness problem: How can the universe begin with the Big Bang without extreme fine-tuning of its initial conditions?
The most famous model for solving the naturalness problem of the beginning of the universe is the inflationary model, originally proposed by Alan Guth in 1981. The Planck data restrict the possible inflationary models to the simplest ones based on a single scalar field called the inflaton. In a recent paper, Anna Ijjas, Paul Steinhardt, and Abraham Loeb have pointed out that the simple inflationary model is in trouble.8 Although a simple scalar inflaton model can fit the Planck cosmological data, these authors argue, the data make the inflationary model an unlikely paradigm.
It has been known for some years from the published papers of Andrei Linde9 at Stanford University, Alexander Vilenkin10 at Tufts University, and Alan Guth at MIT11 that the inflationary models must suffer “eternal inflation.”12 That is, once inflation is induced just after the Big Bang, it will continue producing a multiverse eternally in which, as Guth states, “anything that can happen will happen, and it will happen an infinite number of times.”13 The result is that all possible cosmological models can occur, rendering the inflationary paradigm totally unpredictive. If future observations continue to support the results of the Planck mission, then the inflationary paradigm may be doomed, which opens up the possibility of considering alternative cosmological models that can solve the initial value problems of the universe, such as the variable speed of light (VSL) models and periodic cyclical models.
In the first papers I published on VSL in 1992/1993, I raised criticisms of the inflationary models, which have been borne out by the latest Planck mission data. VSL models were also published in 1999 by Andreas Albrecht and João Magueijo.14 The periodic cyclic models, in which the universe expands and contracts, with a new birth at the Big Bang and a death at the Big Crunch, were considered by Richard Tolman during the 1930s. More recently, the cyclic model was revived by Paul Steinhardt and Neil Turok in 2002.15
Trying to make the standard model of particle physics and cosmology natural without unacceptable fine-tuning problems has become a vital effort in particle physics today. It is possible that the LHC, at much higher energies after 2015, will not detect any new physics. Nevertheless, I believe that there does exist a resolution to the unwanted hierarchy problems in the standard model. It is just extremely challenging to find it. Further research must be done on how electroweak symmetry is broken and how particles get their masses. Historically, some problems in physics have proved to be especially hard to solve. Often, it takes decades to find a solution that is natural and does not resort to speculative, unverifiable, and unfalsifiable proposals such as the multiverse and the anthropic paradigms.
11
The Last Word until 2015
In February 2013, the LHC was shut down for two years of maintenance and upgrading to higher energy levels. The “last word” from CERN about the discovery of the new boson was delivered at the Rencontres de Moriond meeting March 2–16, 2013.
The Moriond workshops consisted of two consecutive sessions: the Electroweak Session and the QCD Session. Highly anticipated talks took place at both of these sessions, with representatives from the ATLAS and CMS collaborations giving updates on the latest analyses of the 2011 and 2012 data. On March 14, during the Moriond workshop, the CERN press office made the following seemingly unequivocal announcement: “New results indicate that the particle discovered at CERN is a Higgs boson.” The hints of the discovery of the Higgs boson on July 4, 2012, followed by stronger hints at the Kyoto meeting in November 2012, had now culminated in the experimentalists’ new analyses of the data. Increasing numbers of papers discussing the LHC data and the properties of the new boson since 2012 appeared to be accepting as fact that the LHC had indeed discovered the Higgs boson at a mass of approximately 125 GeV.
For example, a paper submitted to the electronic archive by a veteran CERN theorist in March 2013 stated: “Beyond any reasonable doubt, the H particle is a Higgs boson.” (Here the “H particle” refers to the new particle discovered at the LHC; other physicists refer to it as the X boson.) Clearly, not only the popular media but the particle physics community has decided that the Higgs boson has been discovered. One can only assume that the Nobel Prize for Physics will be awarded to Peter Higgs and one or two other theorists in 2013 or possibly in 2014. Perhaps the Nobel committee will act conservatively, awarding two or three experimentalists at the LHC the prize in 2013 for discovering a new boson, and presenting the theorists with the prize the following year.
How convincing are these latest data and analyses?
THE TWO-PHOTON DECAY CHANNEL DATA
At the Moriond meeting, experimentalists presented new results updating the various decay rates, branching ratios, and signal strengths of the new boson’s decay channels. In particular, the CMS collaboration finally updated their data for the two-photon decay channel for the X boson. This update, presented by Christophe Ochando of the Laboratoire Leprince-Ringuet in Palaiseau, France, during the QCD session, had been eagerly anticipated since the July 4 announcement; rumors had been swirling about why the CMS team had not updated these data in November.
It appeared that the team had had problems analyzing the 2011 data and accounting for an unexpected excess in the signal strength of the two-photon decay compared with the theoretical prediction for the Higgs boson decay into two photons. (See Figure 11.1 for a candidate event.) In 2011, the energy in the accelerator was 7 TeV, with a luminosity of about 5 inverse femtobarns. As Ochando noted, based on the new 2012 data, the statistical significance of the two-photon decay signal had decreased since the July 4 announcement. For the new 8-TeV data, with a luminosity of about 19 inverse femtobarns, the CMS results for the two-photon decay channel gave a signal strength of 0.93 +0.34 or −0.32 for one method of analysis. Another method of analysis of the same data produced a significant decrease in the signal strength of the diphoton channel, giving 0.55 +0.29 or −0.27. This amounts to a drop in the significance of the signal to about 2 sigma, compared with the 4-sigma result originally announced on the fourth of July.
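As a rough reading of the quoted numbers (the collaborations’ own significance estimates come from full likelihood fits, so this is only an illustrative check), a measured signal strength of 0.55 with an uncertainty of roughly 0.27 lies about

\[ \frac{0.55}{0.27} \;\approx\; 2.0 \]

standard deviations above zero, which is where the “2 sigma” figure comes from.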
What could be done about these conflicting data, for which it appeared that an excess in signal strength had now become a deficit? Would it help to combine the results over the two years of data gathering? There are two ways of interpreting the results of the experiments. One way is to determine the signal strength of the decay products by multiplying the production cross-section of Higgs bosons by the branching ratio of the decay into two photons; the more statistically significant the signal strength, the more likely that the new boson is the Higgs boson. The other way to interpret the results of the experiments is to determine the ratio of the observed cross-section to the cross-section predicted by the standard model; ideally, this ratio should equal unity (one), with a strong statistical certainty.
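In symbols (the notation below is assumed here; the text itself stays verbal), the second quantity is the signal-strength ratio

\[ \mu \;=\; \frac{(\sigma \times \mathrm{BR})_{\text{observed}}}{(\sigma \times \mathrm{BR})_{\text{standard model}}} , \]

where σ is the Higgs production cross-section and BR is the branching ratio into two photons; μ = 1 would mean perfect agreement with the standard-model Higgs boson.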
Ochando revealed that when the CMS collaboration combined the 7-TeV and 8-TeV data from 2011 and 2012, respectively, this resulted in the ratio of the observed cross-section to the predicted standard-model cross-section being equal to 0.78 +0.28 or −0.26 for a Higgs boson mass of 125 GeV. This result is statistically consistent with the standard-model prediction of unity. On the other hand, a different analysis produced, again for the ratio of the observed cross-section to the standard-model predicted cross-section, the combined two-year result of 1.11 +0.32 or −0.30, a result also consistent with the standard model within experimental error. Combining the 2011 data and the 2012 data would also produce a signal strength with a larger statistical standard deviation consistent with the standard-model prediction.
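To see the consistency claim in numbers (a rough check using the quoted uncertainties, not a statement from the CMS analysis itself), the combined values differ from the standard-model expectation of unity by

\[ \frac{1 - 0.78}{0.28} \;\approx\; 0.8\,\sigma \qquad \text{and} \qquad \frac{1.11 - 1}{0.30} \;\approx\; 0.4\,\sigma , \]

both comfortably within one standard deviation of the standard-model prediction.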
Figure 11.1 A candidate event for a Higgs boson decaying into two photons. © CERN for the benefit of the CMS Collaboration