The Perfect Theory

by Pedro G. Ferreira


  The rise of physical cosmology in the past forty years transformed the way we look at spacetime and the universe. In mining general relativity on the grandest of scales and carefully teasing out the large-scale properties of the universe, Jim Peebles and his contemporaries opened up a completely new window on reality. Allied with the stupendous successes in mapping out the distribution of galaxies and the relic radiation, their work has revealed a bizarre universe, full of exotic substances that remain poorly understood. It is a far cry from the cosmology of the 1960s, a “pretty dismal” science, as Peebles called it, with just three numbers. Modern cosmology has been one of the great successes of Einstein’s general theory of relativity and modern science as a whole, raising as many questions about the universe as it answers.

  12

  The End of Spacetime

  STEPHEN HAWKING was offered the Lucasian Professorship of Mathematics at Cambridge in 1979. One of the most prestigious chairs in theoretical physics in the world, it had been held by Isaac Newton and Paul Dirac and was now being offered to a relativist not yet in his forties. Hawking deserved it. In just under two decades of research, he had made lasting contributions touching on the birth of the universe and black hole physics. His crowning achievement had been, without a doubt, the proof that black holes would radiate, had entropy and a temperature, and would ultimately evaporate. Hawking radiation had taken the world of physics by surprise. Black holes were supposed to be black and simple. Building on Jacob Bekenstein’s conjecture, Hawking had shown that black holes must contain a vast amount of disorder, and that disorder is directly related to the black hole’s area and not, as it is in all other familiar physical systems, its volume. The question on everyone’s mind was, How is the entropy housed in a black hole? And deep down, everyone thought that quantum gravity, surely, should have the answer.
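
  For reference, the formula Hawking arrived at, standard in the literature though not written out in the text, ties a black hole’s entropy S directly to the area A of its horizon:

\[
S_{\mathrm{BH}} = \frac{k_{B}\, c^{3} A}{4\, G \hbar} ,
\]

  with Boltzmann’s constant k_B, the speed of light c, Newton’s constant G, and Planck’s constant ħ setting the scale.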

  The quest for quantum gravity seemed to have come to a standstill. By the time of the Oxford symposium in 1974, when Hawking had announced his discovery of black hole radiation, it was already becoming obvious that general relativity wasn’t renormalizable and that it was plagued with infinities that couldn’t be hidden away. General relativity was profoundly different from the other theories of fundamental forces and resistant to the conventional methods that had been used to build the standard model of particles and forces. Something radically different had to be done, and Hawking and his fellow physicists faced a bewildering array of options. By the end of the 1970s, a barrage of new ideas and techniques had flooded the field of quantum gravity, ideas that would cause deep rifts over the following decades. Opposing camps would cling passionately to their own set of rules on how to quantize general relativity, dogmatically refusing to accept other approaches. The community of physicists working on quantum gravity would break into opposing tribes, caught up in what some would call a veritable war. Yet out of this turbulent and sometimes fractious environment, a common view would emerge: the old idea of spacetime as a continuum would have to be abandoned, and a radically new view of reality adopted.

  Stephen Hawking has always been one to make bold and controversial statements, often visionary but sometimes mischievous. On taking up the post of Lucasian Professor, Hawking used his inaugural lecture, “Is the End in Sight for Theoretical Physics?” to present his view of the future of physics, announcing that “the goal of theoretical physics might be achieved in the not too distant future, say, by the end of the century.” In Hawking’s mind, the unification of the laws of physics and a quantum theory of gravity was just around the corner.

  He had good reasons for making his bold claim, founded in promising developments in a new idea called supersymmetry. Supersymmetry imagines a deep symmetry in nature that inextricably links all the particles and forces in the universe. Each elementary particle is supposed to have a twin, a superpartner: for every fermion there is a twin boson and vice versa. A theory first proposed in 1976 took supersymmetry one step further and mirrored spacetime itself, creating supergravity. When Hawking gave his lecture, supergravity seemed to be the solution everyone was hoping for: a viable candidate for the quantum theory of gravity. But supergravity proved unwieldy. It extended spacetime into additional dimensions, requiring a vastly more complicated set of equations than those Einstein originally proposed. Calculating anything took months of work, and the results were plagued with infinities and particles that just didn’t fit. A small group of diehards continued plugging away at it, but, at least as a theory of quantum gravity, it quickly faded away. Hawking would have to look elsewhere for the end of theoretical physics.

  While Hawking had been optimistic in his inaugural talk at Cambridge in 1979, he had been mulling over a strange problem that he had come across while working out that black holes would radiate. This problem hovered ominously over all attempts to quantize gravity and would blow one of the most basic beliefs in physics to smithereens. Hawking would choose a meeting at the mansion of a rich entrepreneur, Werner Erhard, to foist it onto a select group of colleagues.

  Erhard had made his money and fame running self-empowerment courses throughout the United States. He had been influenced by a mishmash of pundits and religions, from Zen Buddhism to Scientology, but had a penchant for physics. Every year he organized a series of lectures on physics and invited illustrious physicists such as Hawking and Richard Feynman. When, in 1981, Hawking was invited to deliver a lecture, he decided to talk about a bizarre result he had first published in 1976 that had been bothering him ever since. The talk was actually delivered by one of Hawking’s young graduate students—by that point, Hawking was unable to give talks himself—and it was called the “Black Hole Information Paradox.”

  The talk addressed the hallowed belief in physics that given complete information about a physical system, it should always be possible to reconstruct that system’s past. Imagine a ball flying by your head. If you knew how fast it was moving and its direction of flight, it would be possible for you to reconstruct exactly where it came from and what it passed along the way. Or take a box filled with gas molecules. If you could measure the positions and velocities of every molecule of gas in the box, it would be possible to determine where every particle had been at any moment in the past. More realistic situations are often much more complicated. Take the laptop I’m using to write this chapter. I would need to know a lot of information about the world to be able to exactly reconstruct how the laptop came into being, but in principle the laws of physics tell me it’s possible. At an even greater level of complication, knowing all the information about a quantum state should make it possible to reconstruct the state’s past. In fact, it is hard-wired into the laws of quantum physics: information is always conserved. Information is at the heart of predictability, and physicists held fast to the fundamental rule that information is never destroyed.
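
  In quantum theory this conservation is the statement of unitarity. Schematically, in standard textbook notation (not spelled out in the text), a state evolves as

\[
|\psi(t)\rangle = \hat{U}(t)\,|\psi(0)\rangle , \qquad \hat{U}^{\dagger}\hat{U} = 1 ,
\]

  and because the evolution operator U can always be undone, the earlier state can be recovered exactly from the later one: nothing is ever lost.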

  Information is never destroyed, that is, until it encounters a black hole. If you were to throw a copy of this book into a black hole, the book would disappear from sight. The black hole’s mass and area would increase slightly, and the black hole would radiate light. Eventually the black hole would completely evaporate and disappear, leaving behind a featureless bath of radiation. If you were to throw in a bag of air with the same mass as the book, exactly the same thing would happen: the black hole’s area would increase, it would emit light and eventually disappear, and ultimately you would be left with an identical bath of radiation. The end product would be exactly the same in both situations even though you started off in very different ways. In fact, we don’t even have to wait for the black holes to disappear. While the black holes are radiating, they look exactly the same, and it is impossible to reconstruct whether the starting point was this book or a bag of air. Information will have disappeared.
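
  The reason the final bath of radiation carries no record of what fell in is that it is thermal, with a temperature fixed by the black hole’s mass alone (for a hole that is neither spinning nor charged). The standard expression, not quoted in the text, is

\[
T_{\mathrm{H}} = \frac{\hbar c^{3}}{8 \pi G M k_{B}} ,
\]

  so two black holes of the same mass M radiate in exactly the same way, whether they swallowed a book or a bag of air.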

  Hawking had identified a paradox: if black holes existed, they would radiate and evaporate, but that meant that the universe was unpredictable. The idea that there was a direct connection between cause and effect, a basic assumption of Newtonian, Einsteinian, and quantum physics, would have to be thrown away. Hawking’s announcement shocked his colleagues. Many of them simply refused to accept what he was saying. If information was lost, there was no future for physics as a predictive science. The only way it could be salvaged was for a black hole to be much richer than first thought, with some new type of microphysics allowing it to store information and ensuring that, by the end of its lifetime, that information was released back to the outside world. The answer could only come from quantum gravity.

  In 1967 Bryce DeWitt spelled out two opposing manifestos for quantizing general relativity. Already in his forties and having spent almost twenty years trying to tackle the seemingly impossible problem, he held in his hands a trio of manuscripts summarizing his work. They became known as the “Trilogy,” and to many they would become the sacred creed for quantum gravity. DeWitt was careful to acknowledge all the work that had been done on quantum gravity before him, but his manuscripts laid the foundations for marrying quantum physics and general relativity in a completely self-contained way, distilling his own efforts and those of everyone who had tried before him.

  The first paper of the trilogy described what he called the canonical approach. It was an approach that others, including Peter Bergmann, Paul Dirac, Charles Misner, and John Wheeler, had proposed before. As in general relativity, geometry took center stage. The canonical approach breaks spacetime down into two distinct parts: space and time. General relativity stops being a theory about spacetime as one indivisible whole and becomes a theory of how space evolves in time. DeWitt then showed that it was possible to introduce quantum physics by finding an equation that could be used to calculate the probability of a given geometry of space as it evolved in time. Just as Schrödinger had done for the quantum physics of ordinary systems, DeWitt found a wave function for the geometry of space.
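
  Written in the schematic form that later became standard (it is not given in the text), the equation constrains a wave function of the geometry of space:

\[
\hat{\mathcal{H}}\,\Psi[h_{ij}] = 0 ,
\]

  where the wave function Ψ depends on the metric h_ij of three-dimensional space and Ĥ is the Hamiltonian constraint of general relativity; strikingly, no time variable appears in it explicitly.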

  While DeWitt would soon reject this canonical approach himself, it was quickly embraced by John Wheeler. The two met in the Raleigh-Durham airport and DeWitt shared his equation. As DeWitt recalls, “Wheeler got tremendously excited at this and began to lecture about it on every occasion.” For many years, DeWitt would call it the Wheeler equation and Wheeler would call it the DeWitt equation. Everyone else simply called it the Wheeler-DeWitt equation.

  The second and third papers in DeWitt’s trilogy were where his heart was. They mapped out the other path, the covariant approach. In this approach, geometry was completely forgotten and gravity was just another force, carried by its messenger particle, the graviton. It was this approach that tried to mimic the successes of QED and the standard model but had led to the devastating infinities that had halted progress so dramatically at the time of the Oxford Symposium on Quantum Gravity in 1974.

  The canonical and covariant approaches embodied two very different philosophies and tackled the problem of quantizing gravity in correspondingly different spirits. The canonical approach had geometry at its heart, while the covariant approach was all about particles, fields, and unification. The two would pit very different communities against each other.

  The banner for the covariant approach would ultimately be carried by a radically new approach to unification called string theory. String theory started off as a cottage industry in the late 1960s, trying to explain the behavior of a whole zoo of exotic new particles that were appearing in particle accelerator experiments. The basic idea was that these particles, until then pictured as tiny pointlike objects, were better described in terms of microscopic, wiggly pieces of string. Particles with different masses would be nothing more than different vibrations of minute strings floating through space. The trick was that only one kind of object, the string, was needed to describe all the particles. The more a string wiggled, the more energetic it was and the heavier the particle it described. It was a unification of sorts, but of a completely different kind from anything that had been proposed before.
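
  Schematically, and as a standard result not derived in the text, the mass of a string state grows with its level of oscillation n,

\[
m_{n}^{2} \sim \frac{n}{\alpha'} , \qquad n = 0, 1, 2, \ldots
\]

  where the Regge slope α′ is set by the string’s tension: the higher the harmonic, the heavier the particle it represents.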

  The idea of fundamental strings was fascinating but initially flawed. Whenever anyone tried to work out physical predictions, infinite numbers kept popping up, and they couldn’t be renormalized away as in QED or the standard model. Furthermore, the theory predicted the existence of a particle that behaved exactly like the graviton, the particle thought to be responsible for the gravitational force. While such a particle would be useful in a quantum theory of gravity, it had no place in what string theory had set out to do: explain the exotic new particles being found in accelerators.

  After an initial burst of interest, string theory fell into oblivion in the mid-1970s, dismissed by most of mainstream physics. One of its few supporters, the Nobel Prize–winning physicist Murray Gell-Mann, described himself as “a sort of patron of string theory” and “a conservationist.” As he recalls, “I set up a nature reserve for endangered superstring theorists at Caltech, and from 1972 to 1984 a lot of the work in string theory was done there.”

  In 1984, one of Murray Gell-Mann’s endangered Caltech string theorists, John Schwarz, teamed up with a young British physicist from London named Michael Green. The two proposed that string theory might actually be more useful as a theory of quantum gravity. They showed how string theory in a ten-dimensional universe could incorporate quantum gravity if it satisfied certain restrictions and obeyed certain symmetries. The following year, a collective of particle physicists and relativists, composed of Edward Witten from Princeton, Philip Candelas from Austin, Texas, and Andrew Strominger and Gary Horowitz from Santa Barbara, went even further. They showed that if the six extra dimensions of the universe had a very particular type of geometry known as Calabi-Yau geometry, the equations of string theory had solutions that looked exactly like a supersymmetric version of the standard model. The real standard model had to be only a short step away.
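
  The geometric picture behind this construction, standard in the string theory literature though only implicit here, splits the ten dimensions into the four we experience and six curled up into a tiny Calabi-Yau space,

\[
\mathcal{M}_{10} \simeq \mathcal{M}_{4} \times X_{6} ,
\]

  with the shape of the compact space X_6 determining which particles and forces appear in the four large dimensions.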

  By the late 1980s, string theory had become a juggernaut. It seemed to have something for everyone. The mathematics seemed new and exciting, much as non-Euclidean geometry must have seemed to Einstein as he wielded it to understand general relativity. Mathematicians used their newest tools—not only geometry but also number theory and topology—to see what string theory could yield.

  As the twentieth century came to a close, string theory hit its stride, becoming more fascinating and coherent and at the same time more complex and puzzling. At the annual string theory conference in California in 1995, Edward Witten announced that the string theory models that had emerged over the previous decade were all connected and were in fact different aspects of one underlying, much richer theory, which he called M-theory. As he put it, “M stands for Magic, Mystery, or Membrane, according to taste.” Indeed, Witten’s M-theory contained not only strings but also higher-dimensional objects, called membranes or branes for short, that could float around in the higher-dimensional universe.

  Despite the euphoria and the hubris, string theory couldn’t avoid an almost existential problem. There seemed to be too many versions of string theory available. And even if you stuck to a single version of string theory, there were many, many possible solutions that could correspond to the real world. A rough estimate led to the possible existence of 10^500 solutions for each version of string theory, a truly obscene panorama of possible universes that became known as the landscape. String theory remained unable to make unique predictions.
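
  A rough version of the counting behind that number (an illustration, not a calculation from the book): if each of roughly five hundred independent fluxes threading the curled-up dimensions can be set to about ten different values, the number of distinct combinations is

\[
\underbrace{10 \times 10 \times \cdots \times 10}_{\sim 500\ \mathrm{choices}} \;=\; 10^{500} ,
\]

  each combination describing, in effect, a different possible universe with its own constants of nature.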

  A number of prominent skeptics argued that string theory promised too much and delivered too little. “I think all this superstring stuff is crazy and is in the wrong direction,” Richard Feynman said in a 1987 interview, shortly before his death. “I don’t like that they’re not calculating anything. I don’t like that they don’t check their ideas. I don’t like that for anything that disagrees with an experiment, they cook up an explanation. . . . It doesn’t look right.”

  Feynman’s view was echoed by Sheldon Glashow, who, along with Steven Weinberg and Abdus Salam, had constructed the extremely successful standard model. He wrote that “superstring physicists have not yet shown that their theory really works. They cannot demonstrate that the standard theory is a logical outcome of string theory. They cannot even be sure that their formalism includes a description of such things as protons and electrons.”

  Daniel Friedan, a prominent string theorist in the first string revolution of the 1980s, acknowledges string theory’s shortcomings. As Friedan admits, “The long-standing crisis of string theory is its complete failure to explain or predict any large distance physics. . . . String theory cannot give any definite explanations of existing knowledge of the real world and cannot make any definite predictions. The reliability of string theory cannot be evaluated, much less established. String theory has no credibility as a candidate theory of physics.” Yet these skeptics remained in the minority and were easily drowned out. Anyone entering the field of quantum gravity in the 1980s or 1990s might have been forgiven for thinking that the covariant approach had won and that string theory was all there was.

  There was one thing that really riled many of the general relativists about string theory: in string theory, as in any covariant approach to quantum gravity, the geometry of spacetime, the be-all and end-all of general relativity, seemed to disappear. It was all about describing a force, like the other three forces brought together into the standard model, and how to quantize it. To a small band of relativists, the way forward was by another route, which Wheeler had embraced and DeWitt had discarded: the canonical approach. There it should be possible to cook up a quantum theory of geometry itself. In the mid-1980s, an Indian relativist named Abhay Ashtekar found a way forward.

 
