Hiding in the Mirror: The Quest for Alternate Realities, From Plato to String Theory (By Way of Alice in Wonderland, Einstein, and the Twilight Zone)

by Lawrence M. Krauss


  Now, one may wonder about this asymmetry in nature (i.e., why forces are associated with bosons, while matter is associated with both fermions and bosons). The investigation of this asymmetry followed a long and convoluted trail that ultimately ended up in—you guessed it—extra dimensions. It began in 1970, when it was realized (even before their prospects were dashed by the development of QCD) that the dual string models in twenty-six dimensions, which had appeared to be consistent, actually harbored a serious flaw: They predict particles called “tachyons.”

  Tachyons may be familiar to people who like to watch Star Trek, but in the real universe of physics, tachyons are bad news. As the name (from the Greek tachys, “swift”) suggests, they have something to do with speed, and through relativity, with time. Strictly speaking, tachyons are particles that can appear to move backward in time, which is, at the very least, embarrassing. Alternatively, one can think of this behavior as arising because these are particles restricted to always travel faster than the speed of light. Because of the relation between relative time and velocity for different observers in special relativity, particles that are somehow forever moving faster than light (nothing can cross the threshold from slower to faster in the theory) would appear to other observers to be moving backward in time.
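  For readers who like to see the symbols, the connection can be sketched with the standard special-relativity relations (textbook results, not anything specific to string theory):

```latex
% Energy of a particle of mass m at speed v, and the equivalent
% energy-momentum relation:
E \;=\; \frac{m c^{2}}{\sqrt{1 - v^{2}/c^{2}}},
\qquad
E^{2} \;=\; p^{2}c^{2} + m^{2}c^{4}.
```

  For v > c the square root becomes imaginary, so the energy E stays real only if the mass m is imaginary as well, i.e., if m² < 0. A quantum theory “predicts tachyons” precisely when some particle’s mass-squared comes out negative, and, as described next, that is usually a symptom of an instability rather than of genuine faster-than-light matter.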

  Now, it turns out that the laws of classical physics do not forbid such unusual particles from existing, but all sensible theories tend not to predict them (not to mention the fact that no tachyon has ever been observed in nature). Generally, if a theory predicts a tachyonic particle, that is usually a mathematical indication of some instability in the ground state of the theory—a reflection of the fact that one has somehow misidentified the true stable particles. If the instability is removed, so is the tachyon. So, on the surface, the 1970s would seem to have been a very bad time for string theory. First, QCD came along as the correct theory of the strong interaction, and second, the dual string model appeared to be unstable anyway. But, as has happened numerous times since, string theory demonstrated an almost chameleon-like ability to morph into something new, its flaws transforming into virtues. The roots of such a novel version of string theory date back to 1971, when physicists André Neveu and John Schwarz, and independently Pierre Ramond, investigated ways of incorporating half-integer-spin particles (fermions) into dual string models. Their motivation at the time was to enable these models to incorporate quarks, which by then had been demonstrated to exist inside protons, neutrons, and other strongly interacting particles. If the dual models were supposed to describe strongly interacting particles, then they would have to allow for the existence of such objects.

  The mechanism for doing this is somewhat technical and may seem rather unusual on first, and probably second, glance. Normally we describe distances along a string, or any other object, in terms of regular numbers. We would say, for example: “Move 5.5 units (i.e., feet, miles, whatever) along the string.” However, the mechanism that Neveu, Schwarz, and Ramond investigated did not involve using normal numbers to describe such distances along the strings, but instead quantities called Grassmann variables, which obey rather strange relations. For normal numbers, say, 5 and 4, 5 × 4 = 4 × 5. However, for two such Grassmann quantities, A and B, it turns out that AB = −BA. Moreover, since this same relation must hold when the two quantities are one and the same, it means that A² = −A², and since zero is the only quantity equal to its own negative, A² = 0 and likewise B² = 0.
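  In symbols, the whole chain is:

```latex
AB = -BA
\quad\Longrightarrow\quad
A\,A = -A\,A
\quad\Longrightarrow\quad
2A^{2} = 0
\quad\Longrightarrow\quad
A^{2} = 0 .
```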

  I mention this not because it is particularly illuminating, but because it gives a sense of the sometimes highly nonintuitive mathematical manipulations associated with some string miracles, many of which seem unphysical, at least until one gets used to them. In any case, one of the first important developments to occur when fermions were added to strings via this strange mechanism was the realization that the critical dimension in which quantum dual string theories might make sense could be reduced from twenty-six to ten. Now, ten is close to eleven, which is the number of dimensions that pure Kaluza-Klein-type arguments seemed to favor, as I discussed earlier, but as the saying goes, close only counts in horseshoes and hand grenades. However, this development was not the end of the story. Once fermions were added to strings, it was realized that another remarkable bit of mathematical wizardry was possible: There could exist a brand-new symmetry that related bosons (integer spin) on the string to fermions (half-integer spin) on the string. Interestingly, it had previously been thought to be impossible to have such a symmetry interchanging bosons and fermions in one’s description of nature, and in fact a theorem to this effect had been proved in 1967 by the brilliant physicist and raconteur Sidney Coleman at Harvard (who you may recall was David Politzer’s supervisor) and his student Jeffrey Mandula.

  It turned out, however, that by introducing those weird Grassmann quantities into the picture, one could in fact circumvent the famous Coleman-Mandula theorem and instead have such a symmetry interchanging bosons and fermions in a single physical description of the natural world. Moreover, such an extended symmetry—or “supersymmetry,” as it became known—ultimately seemed to be an essential part of theories of strings that contained both fermions and bosons.

  Now, interestingly enough, it wasn’t until the 1970s that anyone explored the idea of applying supersymmetry beyond dual strings (i.e., two-dimensional objects moving around in ten or twenty-six dimensions) to our good old four-dimensional universe, with elementary particles such as quarks and photons. In 1974, Julius Wess and Bruno Zumino wrote a pivotal paper in which they extended the symmetry that had held on two-dimensional strings to theories of fermions and bosons in our four-dimensional spacetime.

  The history of supersymmetry is a somewhat convoluted one, primarily because it appeared in several different places in the literature as a mathematical idea in search of a physical application. Such ideas tend to lie dormant until circumstances arise that cause physicists to latch onto them. Once they do, there tends to be an explosion of activity, as theorists smell new opportunities like sharks smell blood.

  Recall that in 1974, following the establishment a year earlier of QCD as the theory of the strong force, we appeared to have, for the first time, a full quantum mechanical understanding of all the nongravitational forces in nature. Prompted by this development, Sheldon Glashow and Howard Georgi made the first proposal that same year to unify these forces in a grand unified theory.

  Glashow and Georgi had written down a simple extension of the existing theories that not only appeared to unify the three nongravitational forces within a single mathematical framework, but also nicely classified all of the known elementary particles at the same time.

  On the surface, it might seem like folly to try to unify three forces whose intrinsic strengths are so different. The electric force between quarks is tens of thousands of times less powerful than the strong force between quarks within a proton, for example. However, the beauty of asymptotic freedom was that it demonstrated that the strong force gets weaker as you measure it on smaller scales. Perhaps on some very small scale the strengths of all the forces might become similar.

  Just such a calculation was first performed by Georgi and Weinberg, along with physicist Helen Quinn; it demonstrated that the quantum dynamics of the known forces was such that the difference in their strengths should indeed diminish if one examined nature on ever-smaller scales, with the strong force becoming weaker and the electromagnetic force stronger, for example. If one extrapolated to much smaller scales the known behavior at scales one could measure in the laboratory and assumed this behavior persisted without any fundamentally new physical phenomena entering in to change the results along the way, then on a scale approximately one million billion times smaller than the size of the proton, the three known forces would have approximately the same strength. What better signature of possible unification could one expect?
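  To give a feel for how such an extrapolation works, here is a schematic version of the calculation (mine, not Georgi, Quinn, and Weinberg’s; the one-loop coefficients and the coupling values at the Z-boson mass are standard textbook numbers, assumed here purely for illustration). At this level of approximation, each inverse coupling strength changes linearly with the logarithm of the energy scale:

```python
import math

# One-loop running: 1/alpha_i(mu) = 1/alpha_i(mu0) - (b_i / 2pi) * ln(mu/mu0).
# Standard Model one-loop coefficients (with the usual GUT normalization
# for the hypercharge force):
b = {"U(1)": 41 / 10, "SU(2)": -19 / 6, "SU(3)": -7.0}

# Approximate inverse coupling strengths at mu0 = 91 GeV (the Z mass);
# representative values, assumed for illustration:
inv_alpha_0 = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.5}
MU0 = 91.0  # GeV

def inv_alpha(group: str, mu: float) -> float:
    """Inverse coupling of `group`, extrapolated to energy scale mu (in GeV)."""
    return inv_alpha_0[group] - b[group] / (2 * math.pi) * math.log(mu / MU0)

# The strong force weakens (its inverse coupling grows) while the U(1)
# force strengthens as the scale rises; near 1e15 GeV the three crowd together.
for mu in (1e2, 1e8, 1e14, 1e16):
    row = ", ".join(f"{g} {inv_alpha(g, mu):5.1f}" for g in b)
    print(f"mu = {mu:8.0e} GeV: {row}")
```

  Run as written, the three numbers approach one another near 10¹⁵ GeV without quite coinciding, anticipating the precision problem described later in this chapter.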

  Everything now pointed to a simple unification of the strong, weak, and electromagnetic interactions, which, I believe, Glashow dubbed “grand unification.” Moreover, this theory was not merely a convenient form of taxonomy but actually made new predictions. The boldest was that the basic building block of all matter, the proton, might not be stable, but could decay within a period of time that, while far longer than the current age of the universe, might nevertheless be measurable. A host of huge experiments was soon underway to attempt such a measurement. As Glashow put it: “Diamonds are not forever!”

  As I have already described, the theoretical exuberance associated with the development of GUTs, following on the flush of success in explaining the electroweak and strong forces, was contagious. The response of the physics community followed a standard trend. Strings were largely forgotten, except by an earnest few, and there was a stampede to explore the possibilities of a new Theory of “Almost” Everything. Suddenly physicists were boldly extrapolating known physics onto scales of energy, space, and time that had previously been unimaginable. These theories promised not just to explain the known forces, but also to answer longstanding puzzles such as how matter in the universe originated and whether matter is absolutely stable. Physicists were now seriously discussing questions associated with the earliest moments of the big bang, and experimentalists were building detectors to explore possible new phenomena on scales a million billion times smaller than the size of a proton!

  Of course, following the first flush of romantic love invariably comes the recognition that the object of one’s affections is not quite perfect. So it was with grand unification. As I have indicated, one of its key predictions was that the proton should not be absolutely stable, but should decay after a lifetime of about 10³⁰ years. This is comfortably longer than the current age of the universe (by a factor of about a hundred billion billion), so we don’t have to sell our diamond rings at a loss quite yet. However, long as it is, this lifetime was within the reach of large experiments, with tanks of thousands of tons of water containing enough protons that one might expect, given the laws of probability, to find a few decaying each year. (With an average lifetime of 10³⁰ years, if one assembles 10³⁰ protons in one place, then on average one will decay each year.)
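  The same counting, carried one step further (the thousand-ton tank and the tally of protons per gram of water are illustrative assumptions of mine, not figures quoted from the experiments):

```python
AVOGADRO = 6.022e23     # molecules per mole
LIFETIME_YEARS = 1e30   # assumed mean proton lifetime, as in the text

# A water molecule (H2O, about 18 g/mol) contains 10 protons:
# 8 in the oxygen nucleus and 1 in each hydrogen.
grams_of_water = 1e9    # a hypothetical thousand-ton tank
protons = grams_of_water / 18.0 * AVOGADRO * 10

# With N protons of mean lifetime tau, one expects roughly N/tau decays per year.
decays_per_year = protons / LIFETIME_YEARS
print(f"protons in tank      : {protons:.1e}")          # ~3.3e32
print(f"expected decays/year : {decays_per_year:.0f}")  # a few hundred
```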

  Unhappily, while these beautiful experiments have been launched, we have yet to witness the decay of a single proton. This failure has ruled out the GUT of Glashow and Georgi, although, as you can imagine, theorists have proposed other possibilities that still make the cut. Another experimental problem has arisen, however, for even the simplest GUTs. Since 1974 the strengths of the weak and strong forces have been measured with greater precision. Taking account of the new, more precise values, and examining theoretically what should happen as one probes ever smaller scales, one finds that the strengths of the three forces would not converge precisely at a single scale, as had seemed possible within the earlier, less accurate estimates. Does this mean that grand unification is ultimately untenable? Not at all. For, as many physicists suggested even at the time, the assumption that no new physics enters to change the scaling behavior of the fundamental forces as they evolve over fifteen orders of magnitude in scale, from the size of the proton to the presumed scale of grand unification, was a remarkably conservative supposition. To come upon such a vast “desert,” as it became known, and to encounter no new or interesting physics anywhere within it, would at the very least defy a well-established historical tradition in the field.

  But what could be the source of such new scaling behavior? It turned out that another problem, this time a theoretical one associated with the possible existence of grand unification, pointed the way. The hierarchy problem, as it has become known, can be simply stated: Why are the energy (and mass and length) scales at which grand unification might occur, and the scale of the masses of the known elementary particles, so different? To put it another way, if grand unification indeed occurs at a scale a million billion times smaller than the size of the proton, why does nature choose to produce such a dramatic difference in scales? Now, one perfectly good answer might simply be the same answer that parents give their children when the children keep nagging them with the question, “Why?” The answer? “Because!”

  Indeed, it could be just an accident of nature that we would have to live with, except that within the framework of the standard model of elementary particle physics, as it was formulated in 1974, such an accident should not happen! For it turns out that when one calculates the effects of virtual particles—the same objects that allow such good predictions for quantum electrodynamics, and also produce such nonsensical predictions for the energy of empty space—such a hierarchy would be unstable. By unstable I mean that one can show that the virtual particles associated with the GUT can affect the measured values of some elementary particle masses at the weak scale, just as virtual particles in QED affect the magnitude of the spacing between energy levels in hydrogen atoms in a way that can be both calculated and measured. However, unlike the case in QED, where the corrections are extremely small, it turns out that the effect of virtual particles at a very high GUT-scale energy can be large enough to raise the masses of all the known particles up to this scale. The only way this can generally be avoided within the standard model would be if some very careful fine-tuning of parameters at the high-energy scale occurred, so that various large numbers canceled each other out to high precision, leaving a remainder that might be fifteen to thirty orders of magnitude smaller. There are no known mechanisms in physics to make such cancellations occur in any natural way. Indeed, this particular feature of the hierarchy problem is known as a “naturalness” problem. Now, as I like to say, unnatural acts probably don’t seem unnatural at the time to those engaged in them. But naturalness in this sense has a well-defined meaning: It is “unnatural” to have a huge hierarchy between the masses of everyday particles and the mass scale associated with grand unification if quantum mechanical corrections to the former due to the latter might be large.
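  Schematically (a textbook-style rendering of the naturalness argument, with illustrative numbers of my own rather than anything computed in the text):

```latex
% A light particle's mass-squared receives quantum corrections from
% GUT-scale virtual particles, roughly of the form
m^{2}_{\text{observed}}
\;\approx\;
m^{2}_{\text{bare}} \;+\; c\,\frac{\alpha}{\pi}\,M^{2}_{\text{GUT}} .
% Keeping m_observed near 100 GeV while M_GUT is near 10^{15} GeV then
% requires the two terms on the right to cancel to roughly one part
% in 10^{26}, which is the fine-tuning described above.
```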

  This problem has not been fully resolved, and it continues to present a tremendous challenge to theorists as they attempt to build models of reality. In fact, the vast difference in scales between the proton size and the scale at which grand unification might occur is itself dwarfed by another, larger hierarchy. The predicted GUT energy scale is, in fact, several orders of magnitude smaller still than the energy scale where quantum mechanical effects in gravity should become important, and where, presumably, the gravitational force might unify with the other forces. This latter scale, as I have mentioned, is called the Planck mass, and it is the ultimate bogeyman in physics. Once again, we can ask the question: Why is the Planck energy scale so vastly different from the scales of all the known elementary particles?

  A glimmer of hope regarding these conundrums was elaborated by a number of authors, receiving its widest impact in a 1981 paper by Edward Witten, who had just moved to Princeton University, on his way ultimately to the Institute for Advanced Study in Princeton, where he is now one of the most highly regarded and accomplished mathematical physicists and string theorists in the world.

  The hope appeared in the form of supersymmetry. Following 1974, a growing number of physicists became interested in the possible implications of space-time supersymmetry in nature, beyond its utility for dual string models alone.

  In order to understand the reasons for this interest, I should briefly present the key feature of supersymmetry as a symmetry of space-time. By connecting bosons and fermions, supersymmetry requires that for every boson in nature, there be a fermionic partner of exactly the same mass, electric charge, and so on.

  However, in the world as we know it, this is manifestly not the case!

  No “superpartners,” as they are called, of ordinary elementary particles have ever been seen. There is no evidence for a bosonic version of the electron, or for a fermionic version of the photon. Why on earth, then, would any physicist in her right mind suggest that such a symmetry might be appropriate to our understanding of nature? Well, an optimistic physicist, of whom there have been many in recent years, would counter this argument by insisting that it is not that we haven’t discovered all the particles predicted by supersymmetry, but rather that we have discovered precisely half of the particles! Isn’t that progress?

  This is not a completely facetious argument, because it turns out there are many symmetries in nature that are not manifest at first sight. For example, as I have already described, the laws of electromagnetism, which govern much of what we experience on a daily basis, do not distinguish between left and right. Yet, when I look out the window I can clearly distinguish the landscape to the left of me, where there happens to be a mountain at the moment, from the landscape to the right of me, where there doesn’t happen to be one.

  This is an example of what physicists call “spontaneous symmetry breaking,” but it could just as justifiably be called an environmental accident. Namely, while an underlying law of nature may possess some symmetry, like left–right symmetry, that symmetry need not be manifest in the particular circumstances in which we find ourselves, such as me sitting in my office.

  This may sound almost trivial, but the recognition that spontaneous symmetry breaking can occur in nature, along with an investigation of the physical implications of this possibility, has played a central role in many of the fundamental developments in a host of areas of physics over the past four decades. It certainly influenced the formulation of the electroweak theory by Glashow, Salam, and Weinberg. In that theory a fundamental symmetry relates certain facets of the weak force and the electromagnetic force—namely, the two different forces turn out to be based in part on different mathematical realizations of a single theory. However, due to an accident of our circumstances—which, as we shall see momentarily, one can quantitatively and precisely probe—environmental factors cause the weak force to end up looking much weaker than the electromagnetic force. This happens because, owing to differing interactions with a background field that is postulated to exist throughout space today, one of the particles that conveys the weak force (as its cousin, the massless photon, conveys the electromagnetic force) ends up behaving differently from the photon. In particular, the interactions of this “weak photon” with the background field make it behave like a very massive particle, almost a hundred times as massive as the proton. This particle acts like a marble being dragged through mud, while a photon is like a marble rolling on a smooth surface: The two marbles may be intrinsically identical, but they behave very differently due to the accidental circumstances in which they find themselves. As a result, since the weak force is conveyed by an apparently massive particle, while the massless photon conveys the electromagnetic force, from our perspective the two forces look quite different.

  This phenomenon is quite reminiscent of a much more familiar one on earth. We distinguish “north” from all the other directions because of a background magnetic field that makes our compasses point in that direction. However, if the earth had no magnetic field, there would be no such fundamental way to distinguish north from east.
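  To attach modern numbers to the “weak photon” story above (standard electroweak values, supplied here for concreteness rather than quoted from the text): the background field takes a value v of about 246 GeV everywhere in space, and the weak photon, now known as the W boson, acquires its mass through its coupling g of about 0.65 to that background:

```latex
% Mass acquired through interaction with the background field:
m_{W} \;=\; \tfrac{1}{2}\, g\, v
\;\approx\; \tfrac{1}{2}\,(0.65)(246\ \text{GeV})
\;\approx\; 80\ \text{GeV} ,
% about 85 proton masses, while the photon, which does not interact
% with the background value v, remains exactly massless.
```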

 
