The Trouble With Physics: The Rise of String Theory, The Fall of a Science, and What Comes Next
The idea of grand unification was not only to bring the forces together but to invent a symmetry that turned quarks (the particles ruled by the strong force) into leptons (the particles ruled by the electroweak force), hence unifying the two basic kinds of particles and leaving just one kind of particle and one gauge field. The simplest candidate for this grand unification was known as the symmetry SU(5). The name is code for the five kinds of particles mixed by the symmetry: the three colored quarks of each kind and two leptons (the electron and its neutrino). SU(5) not only unified quarks and leptons, it did so with unparalleled elegance, explaining concisely all that went into the standard model and making a necessity out of much that was previously arbitrary. SU(5) reproduced all the predictions of the standard model and, even better, made some new ones.
One of these new predictions was that there had to be processes by which quarks can change into electrons and neutrinos, because in SU(5), quarks, electrons, and neutrinos are just different manifestations of the same underlying kind of particle. As we have seen, when two things are unified, there have to be new physical processes by which they can turn into one another. SU(5) indeed predicts such a process, which is similar to radioactive decay. This is a wonderful prediction, characteristic of grand unification. It is required by the theory and is unique to it.
The decay of a quark into electrons and neutrinos would have a visible consequence. A proton containing that quark is no longer a proton; it falls apart into simpler things. Thus, protons are no longer stable particles—they undergo a kind of radioactive decay. Of course, if this happened very often, our world would fall apart, as everything stable in it is made of protons. So if protons do decay, the rate must be very small. And that is exactly what the theory predicted: a rate of less than one decay per proton every 10³³ years.
But even though this effect is extremely rare, it is within the range of a doable experiment, because there are an enormous number of protons in the world. So in SU(5) we had the best kind of unified theory, one with a surprising consequence that didn’t contradict what we knew and could be tested right away. We could compensate for the extreme rarity of proton decay by building a huge tank and filling it with ultrapure water, on the chance that, somewhere in the tank, protons would decay as often as a few times a year. You would have to shield the tank from cosmic rays, because these rays, which constantly bombard the earth, can blast protons apart. After that, because the decay of a proton produces a lot of energy, all you had to do was surround the tank with detectors and wait. Funds were raised, huge tanks were built in mines deep underground, and the results were impatiently awaited.
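To see why a big enough tank makes the experiment feasible, here is a rough estimate (the 3,000-ton figure is illustrative, not a description of any particular detector). A tank holding 3,000 tons of water contains about

\[
N \;\approx\; \frac{3\times10^{9}\ \text{g}}{18\ \text{g/mol}} \times 6.0\times10^{23}\ \text{mol}^{-1} \times 10 \;\approx\; 10^{33}
\]

protons, since each water molecule (18 grams per mole) carries 10 of them. If the average proton lives about \(10^{33}\) years, the expected number of decays in the tank is \(N/\tau \approx\) one per year: hopelessly rare per proton, but visible in bulk.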
After some twenty-five years, we are still waiting. No protons have decayed. We have been waiting long enough to know that SU(5) grand unification is wrong. It’s a beautiful idea, but one that nature seems not to have adopted.
Recently, I ran into a friend from graduate school—Edward Farhi, who has since become the director of the Center for Theoretical Physics at MIT. We hadn’t had a serious conversation in perhaps twenty years, but we found we had a lot to talk about. We had both been reflecting on what had happened and not happened in particle physics during the twenty-five years since we got our PhDs. Eddie made important contributions to particle theory but now works mostly in the rapidly moving field of quantum computers. I asked him why, and he said that in quantum computing, unlike particle physics, we know what the principles are, we can work out the implications, and we can do experiments to test the predictions we make. He and I found ourselves trying to pinpoint when particle physics had ceased to be the fast-moving field that had excited us in graduate school. We both concluded that the turning point was the discovery that protons don’t decay within the time predicted by the SU(5) grand unified theory. “I would have bet my life—well, maybe not my life, but you know what I mean—that protons would decay,” was how he put it. “SU(5) was such a beautiful theory, everything fit into it perfectly—then it turned out not to be true.”
Indeed, it would be hard to overestimate the implications of this negative result. SU(5) is the most elegant way imaginable of unifying quarks with leptons, and it leads to a codification of the properties of the standard model in simple terms. Even after twenty-five years, I still find it stunning that SU(5) doesn’t work.
Not that it’s hard for us theorists to get around the current failure. You can just add a few more symmetries and particles to the theory, so that there are more constants to adjust. With more constants to adjust, you can then arrange for the decay of the proton to be as rare as you like. So you can easily make the theory safe from experimental failure.
That said, the damage is done. We lost the chance to observe a striking and unique prediction of a deep new idea. In its simplest version, grand unification made a prediction for the rate of proton decay. If grand unification is right but complicated, so that the proton-decay rate can be adjusted to anything we like, it has ceased to be explanatory. The hope was that unification would account for the values of the constants in the standard model. Instead, grand unification, if valid, introduces new constants that must be tuned by hand to hide effects that would disagree with experiment.
We see here an illustration of the general lesson described earlier. When you unify different particles and forces, you risk introducing instabilities into the world. This is because there are new interactions by which the unified particles can morph into each other. There is no way to avoid these instabilities; indeed, such processes are the very proof of unification. The only question is whether we are dealing with a good case—like the standard model, which made unambiguous predictions that were quickly confirmed—or with an unkind case, in which we have to fiddle with the theory to hide the consequences. This is the dilemma of modern theories of unification.
5
From Unification to Superunification
THE FAILURE OF the first grand unified theories gave rise to a crisis in science that continues to this day. Before the 1970s, theory and experiment had developed hand in hand. New ideas were tested within a few years, ten at most. Each decade from the 1780s to the 1970s saw a major advance in our knowledge of the foundations of physics, and in each advance theory complemented experiment. But since the end of the 1970s, there has not been a single genuine breakthrough in our understanding of elementary-particle physics.
When a big idea fails, there are two ways to respond. You can lower the bar and retreat to incremental science, slowly probing the borders of knowledge with new theoretical and experimental techniques. Many particle physicists did this. The result is that the standard model has been very well tested experimentally. The most consequential finding of the last quarter century is that neutrinos have mass, but this revelation can be accommodated by a minor adjustment of the standard model. Apart from that, no modifications have been made.
The other way to respond to the failure of a big idea is to try for an even bigger one. At first a few theorists, then a growing number of them, took this road. It is a road we have had to take alone; so far, none of the new ideas have any support from experiment.
Of the big ideas that have been invented and studied during these years, the one that has gotten the most attention is called supersymmetry. If true, it will become as fundamental a part of our understanding of nature as relativity theory and the gauge principle.
We have seen that the big unifications find hidden connections between aspects of nature that were previously thought distinct. Space and time were originally two very different concepts; the special theory of relativity unified them. Geometry and gravity were once quite unrelated; general relativity unified them. But there are still two big classes of things that make up the world we inhabit: the particles (quarks, electrons, etc.) that comprise matter and the forces (or fields) by which they interact.
The gauge principle unifies three of the forces. But we’re still left with those two distinct entities: particles and forces. Their unification was the goal of two previous attempts, the aether theory and the unified-field theory, and each failed. Supersymmetry is the third attempt.
Quantum theory says that particles are waves and waves are particles, but this does not really unify the particles with the forces. The reason is that in quantum theory there remain two broad classes of elementary objects. These are called fermions and bosons.
All the particles that make up matter, such as electrons, protons, and neutrinos, are fermions. All the forces consist of bosons. The photon is a boson, and so are the particles, like the W and Z particles, associated with the other gauge fields. The Higgs particle is also a boson. Supersymmetry offers a way to unify these two big classes of particles, the bosons and the fermions. And it does so in a very creative way, by proposing that every known particle has a heretofore unseen superpartner.
Roughly speaking, a supersymmetry is a process in which you can replace one of the fermions by a boson in some experiment without changing the probabilities of the various possible outcomes. This is tricky to do, because fermions and bosons have very different properties. Fermions must obey the exclusion principle, invented by Wolfgang Pauli in 1925, which says that two fermions cannot occupy the same quantum state. This is why all the electrons in an atom do not sit in the lowest orbital; once an electron is in a particular orbit, or quantum state, you cannot put another electron in the same state. The Pauli exclusion principle explains many properties of atoms and materials. Bosons, however, behave in the opposite way: They like to share states. When you put a photon into a certain quantum state, you make it more likely that another photon will find its way to that same state. This affinity explains many properties of fields, like the electromagnetic field.
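The difference between the two families can be put in one line of quantum mechanics. The combined state of two identical fermions must change sign when the particles are swapped, so for two single-particle states \(a\) and \(b\),

\[
\psi(1,2) \;=\; \frac{1}{\sqrt{2}}\bigl[\phi_a(1)\,\phi_b(2) \;-\; \phi_b(1)\,\phi_a(2)\bigr],
\]

which vanishes when \(a=b\): two fermions simply cannot occupy the same state. For identical bosons the minus sign becomes a plus, and configurations with \(a=b\) are enhanced rather than forbidden, which is exactly the state-sharing described above.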
So it seemed at first crazy that you could invent a theory in which you could replace a boson with a fermion and still get a stable world. But plausible or not, four Russians found that they could write down a consistent theory with just such a symmetry, which we now call supersymmetry. They were Evgeny Likhtman and Yuri Golfand in 1971 and Vladimir Akulov and Dmitri Volkov in 1972.
In those days, scientists in the West were fairly out of touch with scientists in the Soviet Union. Soviet scientists were only rarely allowed to travel, and they were discouraged from publishing in non-Soviet journals. Most Western physicists did not read the translations of Soviet journals, so there were several discoveries made in the U.S.S.R. that went unappreciated in the West. The discovery of supersymmetry was one of them.
So supersymmetric theories were invented twice more. In 1973, several kinds were discovered by two European physicists, Julius Wess and Bruno Zumino. Unlike the Russians’ work, theirs was noticed, and the ideas were quickly developed. One of their new theories was an extension of electromagnetism in which the photon was unified with a particle much like a neutrino. The other discovery of supersymmetry is connected with string theory, and we’ll explore it in more detail later.
Could supersymmetry be true? Not in its initial form, which posited that for each fermion there is a boson with the same mass and charge. This means there must be a boson with the same mass and charge as the electron. This particle, were it to exist, would be called a selectron, for superelectron. But if it existed, we would have already seen it in an accelerator.
This problem can be fixed, however, by applying the idea of spontaneous symmetry breaking to supersymmetry. The result is straightforward. The selectron gets a large mass, so it becomes much heavier than the electron. By adjusting free constants of the theory—of which it turns out to have many—you can make the selectron as heavy as you like. But there is an upper limit to how massive a particle any given particle accelerator can produce. Thus you can explain why it has not yet been seen in any existing particle accelerator. This is precisely what was done.
Notice that this story has a similar arc to others we have described. Someone posits a new unification. There are big consequences for experiment. Unfortunately, experiment disagrees. Scientists then complicate the theory, in a way that incorporates several adjustable constants. Finally, they adjust those constants to hide the missing predicted phenomena, thus explaining why the unification, if true, has not resulted in any observations. But such maneuvering makes the theory hard to falsify, because you can always explain away any negative result by adjusting the constants.
The story of supersymmetry is one in which, from the beginning, the game has been to hide the consequences of the unification. This does not mean that supersymmetry isn’t valid, but it does explain why, even after more than three decades of intensive development, there are still no unambiguous testable predictions.
I can only imagine how Wess, Zumino, and Akulov (the only one of their Russian colleagues still alive) must feel. They may have made the most important discovery of their generation. Or they may simply have invented a theoretical toy that has nothing to do with nature. So far, there is no evidence either way. In the last thirty years, the first thing done with every new elementary-particle accelerator that has come on line has been to look for the particles that supersymmetry predicts. None has been found. The constants are just adjusted upward, and we wait again for the next experiment.
Today that means keeping an eye on the Large Hadron Collider (or LHC), presently under construction at CERN. If all goes according to plan, it should turn on in 2007. There is great hope among particle physicists that this machine will rescue us from the crisis. First of all, we want the LHC to see the Higgs particle, the massive boson responsible for carrying the Higgs field. If it doesn’t, we will be in big trouble.
But the idea with the most at stake is supersymmetry. If the LHC sees supersymmetry, there will certainly be Nobel Prizes for its inventors. If not, there will be dunce caps—not for them, for there is no shame in inventing a new kind of theory, but for all those of my generation who have spent their careers extending that theory.
So many hopes are riding on the LHC because what it finds will also tell us a lot about one of the five key problems mentioned in chapter 1: how to explain the values of the free constants of the standard model. To see why, we need to understand one very striking feature of these values, which is that they are either very large or very small. One example is the wildly differing strengths of the forces. The electrical repulsion between two protons is stronger than their gravitational attraction by a huge factor, around 10³⁶. There are also huge differences in the masses of the particles. For example, the electron has 1/1,800 the mass of a proton. And the Higgs boson, if it exists, has a mass of at least 120 times that of the proton.
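The factor of 10³⁶ comes from a one-line estimate. Both forces fall off as the square of the distance, so the distance cancels in the ratio, leaving only the constants of nature (here evaluated with their standard measured values):

\[
\frac{F_{\text{electric}}}{F_{\text{gravity}}} \;=\; \frac{e^{2}/4\pi\varepsilon_{0}}{G\,m_{p}^{2}} \;\approx\; \frac{2.3\times10^{-28}}{1.9\times10^{-64}} \;\approx\; 1.2\times10^{36}.
\]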
A way to summarize the data is to say that particle physics seems hierarchical rather than democratic. The four forces span a large range of strengths, forming a hierarchy from strong to weak, which is to say from nuclear physics to gravity. The various masses in physics also form a hierarchy. At the top is the Planck mass, which is the energy (recall that mass and energy are really the same thing) at which quantum-gravity effects will become important. Perhaps ten thousand times lighter than the Planck mass is the scale at which the difference between electromagnetism and the nuclear forces should disappear. Experiments conducted at that energy, which is called the unification scale, would see not three forces but one single force. Moving down the hierarchy, 10⁻¹⁶ times the Planck scale is a TeV (for tera-electron volt, or 10¹² electron volts), the energy at which the unification of the weak and electromagnetic forces takes place. This is called the weak interaction scale. This is the region in which we should see the Higgs boson, and it is also where many theorists expect to see supersymmetry. The LHC is being built to probe the physics at this scale. A proton’s mass is about 1/1,000 of that, another 1/1,000 brings us down to the electron, and perhaps 1/1,000,000 of that is the neutrino. Then, way down at the bottom, is the vacuum energy, which exists throughout space even in the absence of matter.
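Put in energy units, the ladder runs roughly as follows (orders of magnitude only):

\[
\begin{aligned}
\text{Planck scale} &\sim 10^{19}\ \text{GeV}\\
\text{unification scale} &\sim 10^{15}\text{–}10^{16}\ \text{GeV}\\
\text{weak scale} &\sim 10^{3}\ \text{GeV} = 1\ \text{TeV}\\
\text{proton mass} &\sim 1\ \text{GeV}\\
\text{electron mass} &\sim 5\times10^{-4}\ \text{GeV}\\
\text{neutrino mass} &\lesssim 10^{-9}\ \text{GeV}\\
\text{vacuum energy scale} &\sim 10^{-12}\ \text{GeV}
\end{aligned}
\]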
This makes a beautiful but puzzling picture. Why is nature so hierarchical? Why is the difference between the strength of the strongest and weakest force so huge? Why are the masses of protons and electrons so tiny compared with the Planck mass or the unification scale? This problem is generally referred to as the hierarchy problem, and we hope the LHC will shed light on it.
So what exactly should we see at the LHC? This has been the central question of particle physics since the triumph of the standard model in the early 1970s. Theorists have had three decades to prepare for the day the LHC goes on line. Are we ready? Embarrassingly, the answer is no.
Were we ready, we would have a compelling theoretical prediction for what the LHC will see, and we’d simply be awaiting confirmation. Given everything we do know about particle physics, it is surprising that thousands of the smartest people on the planet have been unable to come up with a compelling guess as to what the next great experimental leap will reveal. But apart from the hope that the Higgs boson will be seen, we have no clear, unambiguous prediction.
You might think that in the absence of consensus, there would be at least a few rival theories making such a prediction. But the reality is far messier. We have several different unification proposals on the table. All potentially work, to some extent, but none has emerged as uniquely simpler or more explanatory than the others. None yet has the ring of truth. To explain why thirty years have not sufficed to put our theoretical house in order, we need to look more closely at the hierarchy problem. Why is there such an enormous range of masses and other constants?
The hierarchy problem contains two challenges. The first is to determine what sets the constants: what makes some of these ratios so large. The second is to understand how they stay there. This stability is puzzling, because quantum mechanics has a strange tendency to pull all the masses together toward the value of the Planck mass. We don’t need to explore why here, but the result is as if some of the dials we use to tune the constants were connected by rubber bands that are steadily tightening.
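For readers who want the rubber-band image in symbols, the standard statement (a sketch, not a derivation) is that quantum corrections add to the squared mass of a scalar particle such as the Higgs a term proportional to the square of the highest energy \(\Lambda\) at which the theory applies:

\[
m_{H}^{2} \;=\; m_{0}^{2} \;+\; c\,\frac{\Lambda^{2}}{16\pi^{2}},
\]

where \(c\) is a number of order one. If \(\Lambda\) is the Planck scale and the physical \(m_{H}\) is to sit near a TeV, the bare value \(m_{0}^{2}\) must cancel the correction to roughly one part in 10³⁰. That delicate cancellation is the tightening rubber band.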