Inflation may seem like the ultimate free lunch—taking a speck of the natal universe and blowing it up into everything we can see. But there has to be some mechanism, some source of energy to make it work. Drawing on grand unification theories, Guth and others imagined the early universe to be pervaded by a field of energy, a background mimicking empty space, but with energy to burn. This energy-rich background was the “false vacuum”. Physicists were used to this kind of all-pervading energy field from their quantum explorations of matter. Extending uniformly throughout space, the false vacuum energy field was unstable, and sooner or later it would have to “decay” into a real vacuum, releasing the energy in the field.
Inflation was the result of that decay. Kicking in just as the universe cooled to the right temperature, the energy given up in the decay of the false vacuum acted as a kind of antigravity. It ripped space-time apart, accelerating its expansion by providing just the right force at just the right time to inflate a speck of the cosmos into the entire observable universe—a big rip right after the Big Bang.
Uniting astronomy and the hairy edge of theories from particle physics, inflation became a new “standard model” for the birth of the universe—an inescapable prologue to element cooking, the CMB and all the rest. But inflation wasn’t the only addition to standard Big Bang cosmology in the last decades of the twentieth century. Throughout the 1980s and 1990s, physicists and astronomers would be forced to struggle with new and unexpected discoveries, and each of these new actors would have to be added into the cosmic story of creation.
THE DARK UNIVERSE, PART I: MATTER
The last decades of the twentieth century would mean a profound step into the dark for cosmology. To understand this movement, however, we have to briefly step back fifty years. Though few astronomers recognized it, their first introduction to the dark universe came long before the triumph of the Big Bang. In 1933, an iconoclastic Swiss expatriate astronomer stumbled upon the discovery of what would eventually come to be called dark matter.
Fritz Zwicky has been described as the “most unrecognized genius of twentieth-century astronomy”.24 A man of both terrible anger and uncompromising humanity, Zwicky was prone to heated arguments with colleagues. A famous galaxy catalogue compiled by Zwicky opens with a rant against other astronomers (names provided), calling them “fawners” and “thieves” who stole his ideas and hid their own errors. A fellow astronomer at Mount Wilson once feared Zwicky was going to kill him after a particularly nasty exchange. But Zwicky had a profoundly human and charitable side as well. He was devoted to those he counted as friends and after World War II he led efforts to ship books to the ravaged libraries of Europe. For all his personal contradictions, however, it was his science that made Zwicky stand out. He was the essence of creativity in science, seeing astonishing solutions to problems others had yet to even recognize. In no domain of science was Zwicky’s leadership more apparent than in the recognition of dark matter.
FIGURE 8.4. Astronomer Fritz Zwicky, the man who first “saw” dark matter.
Galaxies took on new importance in the wake of Hubble’s 1925 discovery that they were separate systems of stars. By 1933, it was recognized that galaxies were not uniformly distributed in space but were grouped together in large-scale structures of various sizes.25 At the time, Zwicky was studying what appeared to be the largest of these galaxy clusters. As part of his work he first added up the mass in all the cluster’s component galaxies. Then he examined the velocities of the galaxies within the cluster. Comparing the two, he found a paradox: the galaxies were all moving so fast that they could not be bound by the cluster’s total gravitational force. If the cluster’s visible mass (the galaxies) held it together by gravity the way a planet holds on to objects at its surface, then the individual galaxies were like rocket ships moving so fast they had reached escape velocity.
According to Zwicky’s calculations, the cluster should have flown apart billions of years ago. Yet there it was in the sky, an obviously dense collection of galaxies in the midst of a cosmic void. Zwicky concluded that if the “luminating matter” he could see (the galaxies) was not enough to keep the cluster together, then there must be more matter in the cluster he couldn’t see. “The average density of the Coma system must be at least four hundred times greater than what is derived from observations of the luminating matter”, he wrote. “Should this be confirmed, the surprising result thus follows that dark matter is present in very much larger density than luminating matter” (emphasis added). It was a stunning result. It was also, for the most part, forgotten, or at least put on the shelf of as yet unexplained results. Astronomers were not ready to admit that what they saw was not all that was.
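The bones of Zwicky’s argument can be captured in a few lines of arithmetic. The sketch below uses round, purely illustrative numbers (not Zwicky’s 1933 figures) to compare the escape velocity implied by a cluster’s luminous mass with the speeds of its galaxies, and to ask how much mass would actually be needed to keep the cluster bound.

```python
# Illustrative sketch of Zwicky's reasoning. If observed galaxy speeds exceed
# the escape velocity implied by the visible mass alone, the cluster needs
# extra, unseen mass to stay bound. All input values are rough assumptions.

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30        # solar mass, kg
LIGHT_YEAR = 9.461e15   # metres

luminous_mass = 1e13 * M_SUN        # assumed mass inferred from starlight alone
cluster_radius = 3e6 * LIGHT_YEAR   # assumed size: a few million light-years
observed_speed = 1_000_000.0        # typical galaxy speed, m/s (~1,000 km/s)

# Escape velocity from the luminous mass alone: v_esc = sqrt(2 G M / R)
v_escape = (2 * G * luminous_mass / cluster_radius) ** 0.5

print(f"escape velocity from luminous mass: {v_escape / 1000:.0f} km/s")
print(f"observed galaxy speeds:             {observed_speed / 1000:.0f} km/s")

# Mass needed to bind galaxies moving at the observed speed:
required_mass = observed_speed**2 * cluster_radius / (2 * G)
print(f"mass needed to bind them: about {required_mass / luminous_mass:.0f}x the luminous mass")
```

With these made-up inputs the galaxies outrun the escape velocity several times over, which is the shape of the discrepancy Zwicky found (his own numbers implied a far larger factor).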
By the 1970s, astronomers such as Vera Rubin had begun to rediscover dark matter in their studies of individual galaxies and their rotation. Compiling data for one galaxy after another, Rubin found each one was spinning too fast to be explained by the matter emitting light. Rubin’s data suggested that the luminous parts of a galaxy are pulled into rapid rotation by vast halos of dark matter surrounding them. Her results implied that the galactic whirlpools that had so enchanted us in image after image were not the whole story. The bright stars and luminous clouds of galactic gas were nothing but bright Christmas lights hung on a vast, invisible tree. In truth, galaxies were mostly dark.
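The logic of Rubin’s rotation curves can be sketched the same way, again with illustrative rather than historical numbers: if a galaxy’s luminous mass were all there was, orbital speeds should fall off with distance from the centre, whereas the roughly flat speeds Rubin measured require the enclosed mass to keep growing outward.

```python
# Illustrative rotation-curve comparison. Treating the luminous mass as
# concentrated near the centre, Newton predicts v(r) = sqrt(G * M / r),
# which declines with radius. A flat observed curve instead demands an
# unseen halo whose enclosed mass grows with r. Numbers are assumptions.

import math

G = 6.674e-11          # m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
KPC = 3.086e19         # one kiloparsec in metres

luminous_mass = 1e11 * M_SUN   # assumed luminous mass of a bright spiral galaxy
v_observed = 220.0             # km/s, roughly flat with radius (illustrative)

for r_kpc in (5, 10, 20, 40):
    r = r_kpc * KPC
    v_keplerian = math.sqrt(G * luminous_mass / r) / 1000        # km/s
    m_needed = (v_observed * 1000) ** 2 * r / G / M_SUN          # mass required inside r
    print(f"r = {r_kpc:2d} kpc: Keplerian v = {v_keplerian:4.0f} km/s, "
          f"observed ~ {v_observed:.0f} km/s, mass needed = {m_needed:.1e} M_sun")
```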
The term dark matter was more a description of ignorance than anything else. All astronomers knew about dark matter was that it produced a gravitational force driving luminous matter to orbit galactic centres at high speeds. As evidence piled up that dark matter outweighed normal matter by a factor of ten to one hundred, the race was on to understand both its nature and its place in physics. By the mid-1990s, scientists had concluded that the omnipresent dark matter was essentially different from the material you and I are made of. “Our kind of stuff” is called baryonic matter and it is composed of protons and neutrons. Using a variety of techniques, scientists had ruled out the possibility of a universe dominated by “dark baryons”—dead stars, or even huge quantities of rocks floating in deep space. Whatever dark matter was, it clearly did not respond to electromagnetism or the strong nuclear force. It was not “our kind of stuff”.
As computers reworked culture in the 1980s, they also reworked cosmology. Computer simulations rapidly became an important tool in all branches of astronomy, including the study of the universe as a whole. Using the fastest supercomputers of the time, astronomers began running simulations of cosmic history after the era when CMB photons would have decoupled. At this point, galaxy clusters would have begun to form across vast regions of space. The simulations, paired with new observational projects to map the cosmic distribution of galaxies, provided critical clues to the nature of dark matter.
Outweighing baryonic matter by a factor of as much as a hundred, the distribution of dark matter becomes the key player in the era of galaxy formation. Small lumps and bumps in the dark matter left over from the early universe become the seeds for galaxy clusters. Just as the computer simulations of cosmic-scale galaxy cluster formation were ramping up, observational astronomers began compiling their new large-scale surveys of the sky to hunt for traces of the original lumpiness. These maps revealed how galaxies and galaxy clusters were distributed across space at scales of hundreds of millions of light-years or more. The goal of the new computer simulations was to reproduce the observed distribution of galaxy clusters by tracking their formation and evolution in the first few billion years of cosmic history. The computer simulation project was a stunning success. Simulations could recover the data but only if certain kinds of dark matter were used. If the dark matter moved too fast, the gravitational collapse producing galaxies and galaxy clusters never happened. Whatever the dark matter was, it had to be “cold”, meaning it moved slowly relative to the speed of light. Hot (fast) dark matter was out. Cold (slow) dark matter was in.
FIGURE 8.5. Large-scale structure of the universe. This image shows the distribution of galaxies on a scale of 2 billion light-years (the outer circle). The distribution of galaxies has been mapped in two thin slices, one in the north galactic cap and one in the south. The slice in the south is thinner, which is why it looks sparser.
By the turn of the new millennium, cosmologists had to fit a new player, cold dark matter, into their models. Particle physicists were ready to supply many (some said too many) theoretical candidates for cold dark matter. Almost all were some form of weakly interacting massive particle (WIMP for short). These particles felt only the weak nuclear force and gravity. Physicists were heading into new territory, as the WIMPs were not part of the standard model. While the prospect of finding new physics beyond the standard model was exciting, the reality was that no form of dark matter particle had shown up in any laboratory.
The galaxy cluster observations and cluster formation simulations of the 1980s were the beginning of the heroic effort to map the cosmos across all visible space and time. It was an effort that required the ever faster and ever more sophisticated computers that appeared throughout the decade. A new tool for scientists, these supercomputers created unimagined possibilities for mapping cosmic evolution. The next, wholly unexpected step in cosmology would find map making and a new concept—cosmic acceleration—closely connected. And just at this moment a similar pairing of maps and acceleration would find its place in the construction of human time as well.
READING, ENGLAND · MARCH 8, 2004
“Keep the blue dot on the red line and everything will be OK. Blue dot on red line. That’s all.”
He was late already and that wasn’t good. The meeting started at two-thirty and here he was, miles outside Reading. The flight had been late into Gatwick, the paperwork at the rental car counter had taken too long and his directions had been marginal at best. The rental clerk asked if he wanted a SatNav with his car and in his panic he said yes. He hoped it was going to help him. The deal would fall through if he couldn’t make this presentation.
The bloke with the sleeve tattoos at the rental pickup gave him a three-minute tutorial on the GPS thingy mounted on the dashboard. Together they typed in the address, and bang! There it was—his current position was a blinking blue dot on the map, his route a red line snaking from the airport to the company’s local office. “That’s all you need,” Tattoo Guy said. “Just keep the blue dot on the red line and you’re going to be fine.”
And it was. Every turn was called out to him long before he needed to make it. “Left turn in one mile,” announced the SatNav’s pleasant-sounding female voice. “Yes, yes, yes,” he called back. “Cheers.”
He looked up at the sky, thinking about the web of satellites sailing somewhere overhead, beaming their signals every which way, tracking his position down to a metre or so. It was amazing. At his dentist’s office just a few months ago, he’d read about GPS in a popular science magazine. Now here it was, saving the day.
“Right turn in one hundred metres,” said the GPS. The dashboard clock read 2:41. He had already called ahead on his mobile. They said they could wait if he wasn’t too late. “Left turn in fifty metres.” He was going to make it.
“I love you,” he told the dashboard box, its blinking blue dot hovering on the red line.26
MAPPING TIME TO SPACE: THE GLOBAL POSITIONING SYSTEM FINDS ITS WAY
Reconstructing time has always been a process inseparable from new encounters with space. Time and space are paired in human life by the simple concept of travel: the time it takes for an edict from the Roman emperor to travel across Europe, the time it takes for a gold-laden ship to travel from the Americas to Spain, the time your family takes to reach Cornwall when travelling by car from London, the time it takes for that critical e-mail message to travel across the network.
Our encounters with space through travel, however, are always mediated by our ability to determine location. Travel requires knowing where you are going. And because the human experience of time is intimately connected with our experience of space, we must pay attention to the way space has always been mediated by maps. The long saga of longitude—and its connection with the determination of time—was a very public, culture-wide battle over the ability to make accurate maps and know location relative to them. There is also a more personal, closely experienced encounter with maps. As individuals, we deal with maps on a variety of levels, from the internal topologies of neighbourhoods we carry in our heads to the country-spanning maps we began using once automobile-powered travel became the norm in the 1920s and 1930s. The advent of silicon-based material engagement would change our encounter with maps on all levels. Just as e-mail and personal information management reshaped our experience of the day by changing our expectations of time, the radical electronic technologies associated with GPS would change our encounters with space, leading to its own changes in time. And, as with e-mail and the Outlook universe, GPS would mean a profound and ubiquitous acceleration of human culture.
It began, as these things often do, with the military. Just as the Royal Navy’s disastrous fogbound loss of warships in 1707 led to accurate methods for longitude determination, the U.S. military’s need for hyperaccurate navigation led to the Global Positioning System.
Just days after Sputnik achieved orbit in 1957, scientists at MIT’s Lincoln Laboratory discovered that they could use frequency changes in the satellite’s pulsing radio beacons—à la the Doppler shift—to accurately pin down Sputnik’s position.27 Recognizing that the process could be reversed by finding a position on the ground from a satellite in a known orbit, the U.S. Navy conceived its TRANSIT satellite-based navigation system. Launched in 1964, TRANSIT used signals from a network of six orbiting satellites to give nuclear submarines accurate positioning at sea.28 The TRANSIT concept was a success, but its coverage was too sparse; sometimes submarines had to wait hours before the satellites passed overhead and could be used to nail down a position.
Looking to build a more reliable and continuously available system, the Pentagon initiated the Navstar (Navigation System with Timing and Ranging) program in 1973. The theory behind Navstar was to ring the planet with a web of satellites and use their signals to allow position determination at any time and any place.29
Each satellite would broadcast its own location in orbit and the exact time, down to a billionth of a second. Ground-based receivers would pick up this information and use it to calculate their position on the planet to within a hundred metres or so. Four satellites were needed for each position determination: three to triangulate the three-dimensional position in space (the range) and one satellite to make any needed corrections for time differences.30 Range is found by comparing signal travel times. The receiver on the ground compares its own clock with the time measured by the satellite. The difference between the two times is then used to calculate its distance from the satellite. Since light travels at 186,000 miles (299,792 kilometres) per second, a time difference of one-thousandth of a second implies the ground-based receiver must be 186 miles from the satellite.31 By comparing the receiver’s time and the time signals sent by the three satellites, scientists could get hyperaccurate determinations of the receiver’s position in space—its longitude, latitude and altitude.32
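To make the timing arithmetic concrete, here is a minimal sketch of how four satellite time stamps pin down both a position and the receiver’s clock error. The satellite positions, receiver location and clock bias below are made-up illustrative values, and the solver is a bare-bones Newton iteration, not real GPS signal processing.

```python
# Sketch of position-from-timing: solve for (x, y, z) and the receiver clock
# bias from four "pseudoranges". All numbers are illustrative assumptions.

import numpy as np

C = 299_792_458.0  # speed of light, m/s

# Assumed satellite positions (metres, Earth-centred coordinates).
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])

true_receiver = np.array([1_111e3, 2_222e3, 6_000e3])  # made-up ground position
true_clock_bias = 3.2e-4                               # receiver clock error, seconds

# Each satellite broadcasts its position and send time; the receiver times the
# arrival with its own slightly wrong clock. The measured "pseudorange" is the
# true distance plus c times the receiver clock bias.
pseudoranges = np.linalg.norm(sats - true_receiver, axis=1) + C * true_clock_bias

# Newton iterations, starting from a rough guess on the Earth's surface.
x = np.array([0.0, 0.0, 6.37e6, 0.0])  # (x, y, z, clock bias)
for _ in range(10):
    pos, bias = x[:3], x[3]
    dists = np.linalg.norm(sats - pos, axis=1)
    residuals = dists + C * bias - pseudoranges
    # Jacobian: unit vectors from each satellite to the receiver, plus c for the bias.
    J = np.hstack([(pos - sats) / dists[:, None], np.full((4, 1), C)])
    x -= np.linalg.solve(J, residuals)

print("recovered position (m):", np.round(x[:3]))
print("recovered clock bias (s):", x[3])
print("a 1-nanosecond timing error shifts the range by", C * 1e-9, "metres")
```

The same arithmetic shows why clock quality matters so much: one millisecond of timing error is roughly 300 kilometres of range error, one nanosecond about thirty centimetres.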
FIGURE 8.6. Schematic of the GPS satellite network in Earth orbit taken from an early RAND Corporation study.
The story of GPS is a story of material engagement involving both of the twentieth century’s great scientific revolutions: relativity and quantum physics. Hyperprecise position measurement requires hyperprecise clocks. This is where quantum mechanics enters the story. Each GPS satellite carries an atomic clock on board. In the 1950s, scientists had started to use quantum-based understanding to control atoms with exquisite accuracy. Rapid-fire transitions of electrons within atoms (the fabled quantum jumps) were manipulated to yield a steady atomic pulse accurate to one second every thirty thousand years. By the 1970s, engineers devising the GPS system could make good use of this quantum time technology. Each satellite would carry four onboard atomic clocks, maintaining time with a precision of a few parts in ten billion over a few hours, or better than ten nanoseconds. Expressed as the distance travelled by the satellite’s signal, each nanosecond corresponds to about thirty centimetres.33
Relativity made its appearance in the technology because the precision required for distance determination was also GPS’s own worst enemy. Each satellite moves at high speed in its orbit (about 3.9 kilometres per second). Each satellite is also moving through the gravitational distortion of space-time created by the Earth’s mass. Thus, Einstein’s theory of relativity (both the special and general versions) could not be ignored. Relativistic time dilation from the satellite’s motion (in the form of special relativity) causes the onboard satellite clocks to fall behind ground clocks by seven microseconds per day.34 Because of their distance from the Earth and the gravitational bending of space-time, the satellites’ clocks will also tick faster than identical clocks on the ground. General relativity predicts that GPS clocks will run ahead of ground-based clocks by about forty-five microseconds per day.35 Thus, the same theory of relativity that mapped out cosmic structure could not be ignored in mapping out terrestrial positions with GPS. Without correcting for relativistic effects, GPS would quickly become useless. As Richard Pogge of Ohio State University put it, ignoring relativity leads to an error “akin to measuring my location while standing on my front porch in Columbus, Ohio, one day, and then making the same measurement a week later and having my GPS receiver tell me that my porch and I are currently about 5,000 metres in the air somewhere over Detroit.”36
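A quick back-of-the-envelope check, using the standard textbook figures quoted above rather than any calculation of the book’s own, shows why the corrections cannot be skipped: the two relativistic effects leave a net drift of tens of microseconds per day, which translates into kilometres of ranging error.

```python
# Net relativistic clock drift and the ranging error it would cause if ignored.
# The per-day figures are the standard quoted values, used here as assumptions.

C = 299_792_458.0  # speed of light, m/s

special_relativity_us = -7.0   # orbital motion slows the clocks, microseconds/day
general_relativity_us = +45.0  # weaker gravity speeds them up, microseconds/day

net_us_per_day = special_relativity_us + general_relativity_us
print(f"net clock drift: about {net_us_per_day:.0f} microseconds per day")

# Uncorrected, the timing drift becomes a ranging error of c * (time error):
range_error_km = net_us_per_day * 1e-6 * C / 1000
print(f"equivalent position error: roughly {range_error_km:.0f} km per day")
```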
By combining quantum mechanics, relativity and space-age rocket science, the Navstar/GPS system promised a new age of position determination for the military. Throughout the 1980s the network of orbiting GPS satellites was slowly put into place. The success of GPS for military applications got a very human face in 1995 when a U.S. Air Force pilot, Captain Scott O’Grady, was shot down over Bosnia during the Bosnian conflict. After six days of hiding from Bosnian Serb forces and living on insects and rainwater, O’Grady was dramatically rescued by the 24th Marine Expeditionary Unit. Behind the heroism of O’Grady and his rescuers was the GPS device in his life vest allowing the Marine unit to precisely determine his position and swiftly sweep him up and out of harm’s way.37