
About Time


by Adam Frank


  Life was moving much, much faster. The electronic datebook living on desktops, laptops and, soon enough, mobile phones would become the new medium for standardizing the personal universe and synchronizing it with the social network beyond. The personal universe, relentlessly networked to the social and cultural cosmos we were embedded within, was accelerating. And all the while, the collective universe of cosmological science was running headlong into its own changes and its own new meaning for the term acceleration.

  SETTING STANDARDS FOR THE UNIVERSE: STANDARD MODELS IN COSMOLOGY AND PARTICLE PHYSICS

  By the mid-1970s, the triumph of the hot Big Bang was complete. Three unassailable pillars of evidence now supported the Big Bang: the expansion of the universe, the abundance of light elements and the astonishing presence of the cosmic microwave background. To deny the Big Bang, an opponent needed to topple each and every pillar. As the years went on and supporting data mounted, critics faced a Herculean task. The Big Bang, with its implied beginning for space and time, had become the standard model for cosmology. Everything else became a kind of crazy speculation, a heresy against a growing mountain of observational facts.

  But two of these pillars (the abundance of light elements and the cosmic microwave background, or CMB) rest on the quantum mechanical description of subatomic physics. The Big Bang’s triumph was just as much a victory for particle physics—the study of subatomic structure—as it was for astrophysics. In this fundamental domain of science, another standard model had also emerged.

  Throughout the 1950s and 1960s, physicists had poured enormous effort and resources into the development of giant particle accelerators, machines designed to bring the smallest specks of matter to the highest possible speeds (which really means the highest energies).17 The goal was to smash particles into each other to probe their inner structure. Scanning the remains gave physicists hints of the internal constitution of particles such as the proton and neutron. Physicist Richard Feynman described the effort this way, roughly comparing atoms to watches: “All we can do is smash them together and see all the funny pieces (gears, wheels and springs) which fly out. Then we have to guess how the watch is put together.”18 By the mid-1960s, accelerator-based studies produced a remarkably complete and successful description of the subatomic menagerie known as the standard model of particle physics. Its successes as well as its limitations remain relevant forty years later and are still shaping options for the cosmological revolution we stand poised before today.

  The spray of particles emerging from countless accelerator collisions allowed physicists to see the bedrock on which all forms of mass-energy rest. In their efforts, they found that only two fundamental classes of matter exist: leptons and quarks. The electron is the most familiar of the leptons, but two other lepton “generations”—the muon and tau—fill out the family. Quarks constitute the other classification of matter and are the building blocks of the ubiquitous protons and neutrons inside every atomic nucleus. Each and every particle, quark or lepton, comes with an antimatter twin. The anti- and normal particles have opposite electrical charges. Antimatter and matter are mortal enemies of a kind. When matter and antimatter particles collide, they completely annihilate one another in a spray of energy.

  FIGURE 8.2. The standard model of particle physics. This table shows all the fundamental elementary particles and force carriers of the standard model. There are three “generations” of quarks and leptons, along with the “force bosons” (the photon, the gluon, and the W and Z bosons) that carry the forces of the standard model.

  Taking a census of all the universe’s particles and antiparticles was not the only job of the standard model. Particles “feel” one another by exerting forces. As far as we know there are only four forces at work in the universe: gravity, electromagnetism, the strong nuclear force and the weak nuclear force. Gravity, known since Newton, is remarkably weak and requires large collections of matter to exert significant forces. Electromagnetism is the domain of electric charges and electric currents. The strong nuclear force binds nuclei together, while the weak nuclear force is intimately connected to radioactive decay. Each, in its own way, has been part of our material engagement with the world. We have used knowledge of gravity in everything from harnessing the energy of falling water to accurately tracking the trajectory of an artillery shell. Our material engagement with the electromagnetic force is directly responsible for all the electronic miracles we have come to live with, from mobile phones to the Internet. Our material engagement with the nuclear forces led directly to nuclear weapons and all the changes those devices entailed.

  The standard model provides a description of all these forces and their effects on subatomic particles. The forces are mediated by a separate class of particles called force bosons. The photon, for example, is the particle that carries electromagnetic force. Exchanging photons is the way charged particles “feel” the electromagnetic force. The magnets hanging on your fridge are kept in place by unseen photons dashing back and forth between magnet and metal refrigerator door. Force bosons called gluons mediate the strong nuclear force, while so-called W and Z bosons carry the weak nuclear force between particles.19

  This exquisitely articulated web of particles, antiparticles and forces was the triumph of the standard model. With it, physicists had a grand map of matter at a fundamental level. But it was not the fundamental level. The standard model could not answer a long and important list of questions. Why were there just four forces? Why did each force have a different strength? Why were there two different families of particles, the quarks and the leptons?

  Of particular importance was the set of numbers that had to be fed into the standard model from experiments. These numbers were the so-called constants of the standard model and, among other things, they set the strength of the forces. There were about twenty of these constants scattered across the theory. They were not predicted by the standard model but had to be directly measured. It was like getting a job where you know you will be paid hourly but don’t know the pay rate until you look at your paycheck. Having twenty unpredicted numbers running around your best theory for matter was something of a letdown for physicists. In their hearts, they were hoping for an ultimate and final theory of reality that predicted everything that could and did happen, including all the values for their constants.

  The standard model also seemed to be delicately tuned to the values of these constants. The constants could take only very specific values or else the universe would have evolved in an entirely different direction, one in which the development of life would have been impossible. This “fine-tuning problem” was anathema to physicists, because they not only had to find a way to predict the values but also had to explain why only these exact values were possible. It would have been easier if there were more “slop” in the system, meaning that a large range of values for the constants would still lead to the world we see. As the years progressed, the fine-tuning dilemma only got worse.

  Most important was that throughout the standard model’s development, gravity remained stubbornly outside its purview. Physicists adamantly believed a theory of quantum gravity must exist, with a particle (which they took to calling a “graviton”) to mediate its force, but that prospect remained far over the horizon. By the end of the 1970s that horizon began to be obscured by gathering storm clouds.

  TROUBLE AT THE DAWN OF TIME

  The Big Bang had become the dominant model for the birth of time and the universe. But as the decade of disco and punk rock raced along, new problems surfaced. The questions facing the Big Bang came from a variety of directions. Some had emerged from the domains of observational astronomy, while others sprouted from the growing interface between cosmology and particle physics. By the end of the 1970s, at least three of these questions turned out to be especially pressing.

  – The Causality Problem –

  Using the cosmic microwave background, astronomers could make detailed measurements of the properties of the early universe. In particular, they could extract the temperature of the early cosmic plasma with great precision. Looking at the sky in different directions, they could compare conditions such as temperature in widely separated regions of the early universe. To their astonishment, they found temperatures unchanging regardless of which direction they looked. Even when their instruments were pointed in opposite directions on the sky, they still found the cosmic plasma’s temperature to be the same, down to one part in ten thousand.20 Why would this temperature be exactly the same everywhere? Since the universe began as a chaotic, expanding fireball, it would be reasonable to expect that some parts of the fireball ended up with slightly different conditions than others.

  The fundamental problem astronomers faced with a universe of almost perfectly uniform conditions can be posed simply as one of cause and effect. Regions of the universe that are now a hemisphere apart in the sky were so far apart when the CMB photons decoupled that they never could have been in contact. A light signal could not bridge their separation in the time from the Big Bang to the moment when CMB light waves were freed. This is the root of the cause-and-effect dilemma.

  Astronomers know how fast the universe is expanding now. They also thought they had a pretty good idea of how fast that expansion progressed from the first few seconds after the Big Bang onward by using the Friedmann-Lemaître solutions. With this understanding, it was easy to see that regions of space now on opposite sides of the sky could never have been close enough together to be causally connected. That meant the perfectly constant temperatures implied by the CMB were an amazing cosmic coincidence. For astronomers, it was like opening the morning paper to find that every city on the planet had the exact same temperature down to four decimal places.
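
  To put rough numbers on this (a back-of-the-envelope sketch; the distances below are standard modern values assumed for illustration, not figures quoted in this chapter), the patch of sky that light could have crossed by the time the CMB was released spans only a degree or two:

\[
\theta_{\text{horizon}} \sim \frac{d_{\text{horizon}}(t_{\text{decoupling}})}{d_{\text{to the CMB}}} \approx \frac{\sim 250\ \text{Mpc (comoving)}}{\sim 14{,}000\ \text{Mpc (comoving)}} \approx 0.02\ \text{radian} \approx 1\text{–}2^{\circ}
\]

  Regions separated by a full hemisphere on the sky therefore lie far outside one another’s causal reach under the standard expansion history.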

  But astronomers do not like profound coincidences. Coincidences are the kind of thing that keeps them up at night wondering what else is going on. If light did not even have time to make it from one cosmic neighbourhood to the next, then the different neighbourhoods could not have known about one another. They did not have time to pass information—in the form of light waves—back and forth to allow for such perfect synchronization in temperature. This causal conundrum was a monster problem for astronomers and physicists. If they could not solve it, their hopes for a rational cosmology would collapse.

  – The Flatness Problem –

  Recall that the Friedmann-Lemaître solutions for cosmic evolution contain a critical number determining the shape of the universe as well as its fate. As we saw in Chapter 6, omega (Ω) is the matter-energy density of the universe expressed in terms of a special or “critical” value. Finding omega’s true value had become a holy grail of Big Bang cosmology. By the 1970s, the best answer pointed to omega being slightly less than 1. (The values seemed to hover around Ω = 0.05.) That was close enough to the magic number 1 to cause a real problem for Big Bang theorists.

  According to the theory, omega will, in general, change its value as the universe evolves. Only if omega is exactly 1 will it stay exactly 1 for all cosmic time. If omega equalled exactly 1, it also meant the geometry of cosmic space was, and always had been, exactly flat.

  If the universe began with omega greater or less than 1 by even a tiny amount, then cosmic expansion dramatically alters its value over time, sweeping it to extremely small values like 10⁻⁴⁰ or extremely large values like 10⁴⁰.21 While Ω = 0.05 might not seem so close to 1 to most of us, for a cosmologist expecting a number like a million trillion billion, it’s close enough to the critical value of Ω = 1 to make it seem that something fishy was going on. Either the cosmic density really had the critical value of 1 and our measurements were missing the rest of the matter-energy, or omega at the Big Bang had a specific value that changed over time in just such a way that it was close to 1 at this particular moment in cosmic history. If omega was less than 1 at the start of cosmic evolution, it would have to be set with a special value out to sixty decimal places to end up with what astronomers were seeing in the 1970s. This demand was certainly a fine-tuning of cosmic initial conditions, and astronomers were stuck with yet another coincidence. Everything astronomers knew told them an omega of almost 1 was bad news for the theory, and it demanded an explanation.
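
  For readers who want to see where that “sixty decimal places” comes from, here is a minimal sketch using a standard textbook relation drawn from the Friedmann equations (assumed background, not a formula from this chapter). The deviation of omega from 1 is tied to the expansion through

\[
\Omega(t) - 1 = \frac{k\,c^{2}}{a^{2}(t)\,H^{2}(t)},
\]

  where a is the cosmic scale factor, H the expansion rate and k the curvature constant. In the ordinary decelerating Friedmann-Lemaître solutions the product aH keeps falling, so any departure from Ω = 1 grows as the universe ages (roughly in proportion to the age itself during the radiation era). Run that growth backward from an omega of order 1 today to the universe’s earliest instants and the initial deviation has to be smaller than about one part in 10⁶⁰, which is the fine-tuning described above.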

  – The Magnetic Monopole Problem –

  The third problem for Big Bang cosmology arose from the domain of particle physics. The universe is full of electric charges. Some are negative, like those of electrons, and some are positive, like those of protons. Physicists call these different, separate polarities electric monopoles. One of the strange things about our world is that there are no magnetic monopoles—every magnetic field comes with north and south poles connected. There are no particles that carry only a north or only a south magnetic “charge”. Physicists have always wondered why no separate magnetic charges were ever seen. By the 1970s, theorists attempting to dig deeper into the structure of the world were trying to develop what they called grand unified theories, or GUTs, of particle physics. GUTs were designed to go beyond the standard model. They were the grand attempt by theoretical physicists to understand the structure of matter and its interactions at a deeper level. Particle physicists had long suspected that the four known forces shaping the universe were simply different facets of a single, as yet undiscovered “superforce”. They were passionate about searching for a unification of these forces. Their best grand unified theories demanded the creation of magnetic monopoles in the soup of particles that developed after the Big Bang. In spite of years of searching, however, no monopole has ever been seen.

  For particle physicists the lack of monopoles was an embarrassment. Along with the causality and flatness problems, the monopole problem stood as an immovable challenge to Big Bang orthodoxy.

  INFLATION: COSMOLOGY AND PARTICLE PHYSICS RENEW THEIR VOWS

  Inflation theory began as a bold attempt to solve the panoply of paradoxes that plagued cosmologists at the end of the 1970s. The basic idea of inflationary cosmology was simultaneously radical and elegant. With inflation, cosmologists imagined that the part of the universe we can see underwent a brief period of rapid expansion in its very early history. Here, “early” is an understatement. The era of inflation began when the universe was a mere 10⁻³³ second old. That is less than a million billion billion billionth of a second after the Big Bang.22

  During inflation, the universe increased in size by a factor of approximately 10⁴⁰ (that is, 1 followed by forty zeros), enlarging its scale from a fraction of the size of a subatomic particle all the way up to the diameter of a softball. This expansion on steroids takes place in just 10⁻³³ of a second. After inflation shuts down, the universe resumes the more leisurely expansion we see today. For comparison, in the last half of the universe’s life (the last seven billion years) the cosmic scale has increased by less than a factor of ten.
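
  Taking the chapter’s own numbers at face value, the arithmetic behind this “expansion on steroids” looks roughly like this (an illustrative sketch, nothing more):

\[
N = \ln\!\left(10^{40}\right) = 40\,\ln 10 \approx 92\ \text{e-folds} \approx 133\ \text{doublings of the cosmic scale}.
\]

  Packed into roughly 10⁻³³ second, that works out to one doubling of the universe’s size about every 10⁻³⁵ second, which is why the later expansion, less than a factor of ten over seven billion years, looks leisurely by comparison.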

  While it may seem like a crazy and even unnecessary addition to the Big Bang, inflation’s brief period of hyperexpansion solved all the problems with standard cosmology. With inflation, the causality problem disappears because every part of the universe we see today was in causal contact before space was stretched to the extreme. The flatness problem is also solved: inflation’s rapid expansion naturally pulls space out flat, so omega is forced to a value of 1 no matter what its original value was. Physicists were happy to have a theory requiring no fine-tuning of the universe’s initial conditions (it was assumed that future measurements would eventually find the “missing mass-energy” and yield a flat-space value of Ω = 1). Finally, inflation also did away with the magnetic monopole problem. Space is stretched so enormously during its brief burst of inflation that the density of monopoles is diluted to next to nothing. Monopoles are pulled so far away from one another as space expands wildly that the odds of our observing one now become essentially zero.

  Thus, with a single change to the Big Bang (an early, brief period of hyperexpansion) all the problems were resolved. It was hard for physicists and astronomers not to take notice.

  Every good idea needs a champion, and inflation found its hero in Alan Guth, a physicist at MIT. In 1981, Guth wrote a paper describing the inflationary model for cosmology.23 While some of his ideas had been proposed before, Guth brought them together in a coherent, accessible way and added the catchy brand name of inflation. What mattered most, however, was that Guth was not an astronomer. He was, instead, a particle physicist, and he built his inflation theory using tools from the empire of grand unified theories. Guth would use ideas from the same GUT framework that demanded monopoles to explain why they could never be observed.

  FIGURE 8.3. Inflation saves cosmology. Inflation imagines taking a small, causally connected patch of the universe and blowing it up into everything we can see, resolving all the classic Big Bang paradoxes in the process.

  GUTs predicted that if one could “heat up” the universe to ever higher energies, the different forces we see today would sequentially “melt” into the superforce, just as ice crystals melt into liquid water. Heating up a tiny speck of the universe is exactly what particle physicists try to do in their giant accelerators, smashing subatomic particles together with tremendous energy.

  Of course the entire universe had already been through a stage when it was as hot as any particle physicist could ever desire. Cosmology naturally gave physicists the GUT laboratory they wanted. The critical link between inflationary cosmology and grand unification was the energy source that drove inflation, something inflation cosmologists called the “false vacuum”.

 
