by Max Tegmark
Table 4.1: By combining cosmic microwave–background maps with 3-D galaxy maps, we can measure key cosmic quantities to percent-level precision.
Honestly, though, this data wasn’t a breakthrough at all, just a reflection of the slow but steady progress that the worldwide cosmology community had made in recent years. Our work wasn’t in any way revolutionary and didn’t discover anything surprising—rather, it simply contributed to making cosmology more believable, to its growing up into a more mature science. To me, the most surprising result was that there was no surprise.
The famous Soviet physicist Lev Landau once said that “cosmologists are often wrong, but never in doubt,” and we’ve seen many examples of this, from Aristarchos claiming the Sun was eighteen times too close, to Hubble claiming our Universe was expanding seven times too fast. This Wild West phase is now over: we saw how both Big Bang nucleosynthesis and cosmic clustering gave the same measurement of the atom density, and how both supernovae Ia and cosmic clustering gave the same measurement of the dark-energy density. Of all cross-checks, my favorite is the one in Figure 4.6: here I’ve plotted five different measurements of the power-spectrum curve, and even though the data, the people, and the methods involved are totally different for all five, you can see that they all agree.
The Ultimate Map of Our Universe
A Lot Left to Explore
So here I am sitting in my bed, typing these words and thinking about how cosmology has changed. Back when I was a postdoc, we used to talk about how cool it was going to be to get all that precision data and finally measure those cosmological parameters accurately. Now we can say, “Been there, done that”: the answers are in Table 4.1. So now what? Is cosmology over? Do we cosmologists need to find something else to do?
Here’s my answer: “No!” To appreciate how much fun cosmology research remains, let’s be honest about how little we cosmologists have accomplished: we’ve mainly just parameterized our ignorance, in the sense that behind each parameter in Table 4.1 lies an unexplained mystery. For example:
• We’ve measured the density of dark matter, but what is it?
• We’ve measured the density of dark energy, but what is it?
• We’ve measured the density of atoms (there’s about one atom for every two billion photons)—but what process produced that amount?
• We’ve measured that the seed fluctuations were at the level of 0.002%—but what process created them?
As the data continue to improve, we’ll be able to use them to measure the numbers in Table 4.1 even more accurately, to more decimal places. But I’m a lot more excited about using the better data to measure new parameters. For example, we can try to measure other properties of dark matter and dark energy besides their densities. Does the dark matter have a pressure? A velocity? A temperature? This would shed light on its nature. Is the dark-energy density really exactly constant, as it so far seems? If we can measure that it changes even slightly over time or from place to place, this will be a crucial clue about its nature and about how dark energy will affect the future of our Universe. Do the seed fluctuations have any patterns or properties besides their 0.002% amplitude? This would provide clues about the origins of our Universe.
I’ve thought a lot about what we need to do to tackle these questions, and interestingly, the answer is the same for all of them: map our Universe! Specifically, we need to map as much as possible of our Universe in 3-D. The largest volume we can possibly map is the part of space from which light has had time to reach us so far. This volume is essentially the interior of the plasma sphere (Figure 4.7, left) that we have explored, and as you can see in the middle panel of this figure, over 99.9% of the volume remains unexplored. You can also see that our most ambitious 3-D galaxy map from the Sloan Digital Sky Survey covers only our cosmological backyard—our Universe is simply huge! If I added the most distant galaxies ever discovered by astronomers to this figure, they’d be just over halfway to the edge, and way too few and far between to represent a useful 3-D map.
If we could somehow map these unexplored parts of our Universe, it would be terrific for cosmology. Not only would it increase our cosmological information a thousandfold, but because far away equals long ago, it would also reveal in great detail what happened during the first half of our cosmic history. But how? All the techniques we’ve discussed will continue to improve in various exciting ways, but it unfortunately doesn’t look like they’ll be able to map a large fraction of that uncharted 99.9% of the volume anytime soon. Cosmic microwave–background experiments map mainly the edge of this volume, since the interior is mostly transparent to microwaves. At such huge distances, most galaxies are very faint and difficult to see even with our best telescopes. Worse still: most of the volume is so far away that it contains almost no galaxies at all—we’re looking so far back in time that most galaxies hadn’t formed yet!
Figure 4.7: The fraction of our observable Universe (left) that has been mapped (center) is tiny, covering less than 0.1% of the volume. Just as for Australia in 1838 (right), we’ve mapped a thin strip around the perimeter while most of the interior remains unexplored. In the middle panel, the circular region is plasma (the cosmic microwave–background radiation we see comes only from the thin gray inner edge), and the tiny structure near the center is the largest 3-D galaxy map from the Sloan Digital Sky Survey.
Hydrogen Mapping
Fortunately, there’s another mapping technique that might work better. As we discussed earlier, what we call empty space isn’t really empty: it’s filled with hydrogen gas. Moreover, physicists have long known that hydrogen gas emits radio waves with a wavelength of 21 centimeters, which can be detected with a radio telescope. (When my classmate Ted Bunn was teaching this back in Berkeley, a student asked him a question that became an instant classic: “What’s the wavelength of the 21-centimeter line?”) This means that you can in principle “see” hydrogen with a radio telescope throughout most of our Universe, even long before it formed stars and galaxies, back while it was invisible to ordinary telescopes. Better still, we can make a 3-D map of the hydrogen gas, using the redshift idea we discussed in Chapter 2: since these radio waves are stretched by the expansion of our Universe, the wavelength they have when reaching Earth tells us how far away (and hence how long ago) they come from. For example, waves that arrive with a wavelength of 210 centimeters have been stretched to 10 times their original length, so they were emitted when our Universe was 10 times smaller than it is now. This technique has become known as 21-centimeter tomography, and since it has the potential to become the next big thing in cosmology, it’s attracted lots of recent attention. Many teams around the world are currently racing to become the first to convincingly detect this elusive signal from hydrogen halfway across our Universe, but so far, nobody has succeeded.
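The arithmetic above can be sketched in a few lines of code. This is only an illustration of the stretch-factor calculation described in the text (the function names are my own, not any survey’s pipeline):

```python
# Minimal sketch: convert an observed 21-cm wavelength into the stretch
# factor (1 + redshift) and the relative size of the Universe at emission.

REST_WAVELENGTH_CM = 21.0  # rest-frame wavelength of the hydrogen line


def stretch_factor(observed_wavelength_cm):
    """How much the wave has been stretched in transit, i.e. 1 + redshift."""
    return observed_wavelength_cm / REST_WAVELENGTH_CM


def universe_size_fraction(observed_wavelength_cm):
    """Size of the Universe at emission, relative to its size today."""
    return 1.0 / stretch_factor(observed_wavelength_cm)


# The example from the text: waves arriving at 210 cm were stretched
# 10-fold, so they were emitted when our Universe was 10 times smaller.
print(stretch_factor(210.0))          # → 10.0
print(universe_size_fraction(210.0))  # → 0.1
```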
What Is a Telescope, Really?
Why is it so hard? Because the radio signal is very faint. What do you need to detect a really faint signal? A really big telescope. A square kilometer size would be nice. What do you need to build a really big telescope? A really big budget. But how big exactly? This is where it gets interesting! For a traditional radio telescope like the one in Figure 4.8 (background), its cost more than doubles if you double its size, and gets absurd beyond a certain point. If you ask a friend who’s a mechanical engineer to build a square kilometer radio dish with motors that can point it toward arbitrary sky directions, she’ll no longer be your friend.
Figure 4.8: Radio astronomy on a big (background) and small (foreground) budget. My grad student Andy Lutomirski is tinkering with our electronics unit, which we put in a tent for rain protection during our expedition to Green Bank, West Virginia.
For this reason, all the experiments racing toward 21-centimeter tomography are using a more modern type of radio telescope called an interferometer. Since light and radio waves are electromagnetic phenomena, they create a voltage between different points in space as they fly by. Very faint voltages for sure, vastly lower than the 1.5 volts you have between the two ends of a flashlight battery, but still strong enough that they can be detected with good antennas and amplifiers. The basic idea of interferometry is to measure lots of such voltages using an array of radio antennas, and then have a computer reconstruct what the sky looks like. If all the antennas are in a horizontal plane as in Figure 4.8 (foreground), then a wave from straight above will reach them simultaneously. Other waves reach some antennas before others, and the computer uses this fact to figure out what directions they’re coming from. Your brain uses the same method to figure out where sound waves are coming from: if your left ear detects a sound before your right ear, the sound is clearly coming from the left, and by measuring the exact time difference, your brain can even estimate whether it’s coming from straight off to your left or from some other angle. Since you have only two ears, you can’t pinpoint the angle very accurately, but you’d do (albeit perhaps not look…) much better if you mimicked a large radio interferometer by having hundreds of ears all over your body. This interferometer idea has been enormously successful ever since Martin Ryle pioneered it in 1946, and it earned him a Nobel Prize in 1974.
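The two-ear trick can be written down explicitly. Here’s a small sketch of the geometry, assuming a plane wave and illustrative values I’ve chosen myself (a 20-cm ear spacing and the approximate speed of sound in air); a radio interferometer does the same thing with antennas and the speed of light:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air, approximate room-temperature value


def arrival_angle(time_delay_s, spacing_m=0.2, speed=SPEED_OF_SOUND):
    """Source direction, in degrees off 'straight ahead'.

    A plane wave hitting two receivers a distance d apart from an angle
    theta arrives at the nearer one d*sin(theta)/speed seconds earlier,
    so theta = arcsin(speed * delay / d).
    """
    x = speed * time_delay_s / spacing_m
    x = max(-1.0, min(1.0, x))  # clamp against measurement noise
    return math.degrees(math.asin(x))


# Zero delay: the sound comes from straight ahead.
print(arrival_angle(0.0))  # → 0.0
# Maximum possible delay: the sound comes from directly to one side.
print(round(arrival_angle(0.2 / SPEED_OF_SOUND)))  # → 90
```

With only two receivers, nearby angles give nearly identical delays, which is why two ears (or two antennas) localize poorly and large arrays do much better.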
However, the slowest computing step, which corresponds to measuring these time differences, needs to be done once for every pair of antennas (or ears), and if you increase the number of antennas, the number of pairs grows roughly as the number of antennas squared. This means that if you make the number of antennas a thousand times larger, the computer cost gets a million times larger—ouch! You want the telescope to be astronomical, not the budget! For this reason, interferometers have so far been limited to tens or hundreds of antennas, not the million or so that we really need for 21-centimeter tomography.
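The quadratic scaling above is just the number of ways to pick a pair out of n antennas. A quick sketch of the count, confirming the thousand-fold/million-fold claim in the text:

```python
def number_of_pairs(n_antennas):
    """Each pair of antennas needs its own time-difference measurement:
    there are n*(n-1)/2 ways to choose a pair, which grows roughly as n**2."""
    return n_antennas * (n_antennas - 1) // 2


small, large = 1_000, 1_000_000
print(number_of_pairs(small))  # → 499500
# A 1,000-fold bigger array costs about a million times more to correlate:
print(number_of_pairs(large) // number_of_pairs(small))  # → 1001000
```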
When I moved to MIT, I was generously allowed to join an American-Australian 21-centimeter-tomography experiment spearheaded by my colleague Jackie Hewitt. At our project meetings, I’d sometimes daydream about whether there might be a way of building huge telescopes cheaper. And then one afternoon, during one of our meetings at Harvard, it suddenly clicked for me: there is a cheaper way!
The Omniscope
I think of a telescope as a wave-sorting machine. If you look at your hand and measure the intensity of light across it, this won’t reveal what your face looks like, because light waves from every part of your face are mixed together at each point on your skin. But if you can somehow sort all these light waves by the direction in which they travel, so that waves going in different directions land on different parts of your hand, then you’ll recover an image of your face. This is exactly what a lens does in a camera, in a telescope, or in an eye, and what the curved mirror does in the radio telescope in Figure 4.8. In mathematics, we have a fancy and intimidating name for wave sorting: Fourier transforming. So a telescope is essentially a Fourier transformer. Whereas a traditional telescope does this Fourier transform by analog means, using lenses or curved mirrors, an interferometer does it digitally, using some form of computer. The waves are sorted not only by their travel direction, but also by their wavelength, which for visible light corresponds to its color. My idea that afternoon at Harvard was to design a huge radio interferometer where the antennas were arranged not more or less at random, as in our current project, but in a simple, regular pattern. For a telescope with a million antennas, this would allow the Fourier transform to be computed 25,000 times faster using some clever numerical tricks exploiting the pattern—basically making such a telescope 25,000 times cheaper.
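The source of that speedup is worth spelling out. A regular antenna pattern lets the computer replace the pairwise (roughly n² operations) correlation with a fast Fourier transform (roughly n·log₂ n operations). Here’s a toy cost comparison, a sketch of the scaling argument only; the exact factor (25,000 in the text) depends on implementation constants that simple operation counts ignore:

```python
import math


def naive_cost(n):
    """Pairwise correlations: roughly n**2 elementary operations."""
    return n ** 2


def fft_cost(n):
    """Fast Fourier transform on a regular grid: roughly n * log2(n)."""
    return n * math.log2(n)


n = 1_000_000  # a million antennas, as in the text
speedup = naive_cost(n) / fft_cost(n)
print(f"{speedup:,.0f}")  # ≈ 50,000 — the same ballpark as the quoted 25,000x
```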
After I’d managed to convince my friend Matias Zaldarriaga that the idea would work, we explored it in detail and published two papers about it, showing that the basic trick worked for a wide range of antenna patterns. We called our proposed telescope an “Omniscope” because it was both omnidirectional (imaging essentially the whole sky at once) and omnichromatic (imaging a broad range of wavelengths/“colors” all at once).
Albert Einstein allegedly said: “In theory, theory and practice are the same. In practice, they are not.” We therefore decided to build a small prototype to see if it really worked. I discovered that the basic idea for the Omniscope had been tried already twenty years earlier by a Japanese group, for different purposes, but they were limited by the electronics of the time to sixty-four antennas. Thanks to the subsequent cell-phone revolution, the key components needed for our prototype had now dropped dramatically in price, so we could do the whole thing on a shoestring budget. I was also very lucky to get help from an amazing group of MIT students, some of whom came from our Electrical Engineering Department and knew the sort of wizardry needed for electronic circuit-board design and digital signal processing. One of them, Nevada Sanchez, taught me the magic-smoke theory of electronics, which we’ve subsequently verified in our lab: electronic components work because they contain magic smoke. So if you accidentally do something to them that lets their magic smoke out, they stop working.…
Having spent my whole academic career doing merely theory and data analysis, suddenly building an experiment was something completely different, and I loved it. It brought back fond memories of tinkering in the basement as a teenager, except that we were now building something much more exciting, and we were having fun doing it as a team. So far, our fledgling Omniscope is doing well, but it’s too early to tell whether we or anyone else will ultimately succeed in making 21-centimeter tomography live up to its full potential.
However, the Omniscope has already taught me something else—something about myself. For me, the most fun part of all has been our team’s expeditions: when we load all our gear into a van and drive to a remote location far from radio stations, cell phones and other human sources of radio waves. During those magic days, my normally so-fragmented life of emails, teaching, committees and family obligations gets replaced by a blissful Zen-like state of total focus: there’s no cell-phone reception, no Internet, no interruption, and every single one of us in the team is 100% focused on this one common goal of making our experiment work. Sometimes I wonder whether we try to multitask too much in our day and age, and whether I should disappear like this more often for other reasons. Like finishing this book…
Where Did Our Big Bang Come From?
In this chapter, we’ve explored how an avalanche of precision data has transformed cosmology from a speculative, philosophical field into the precision science that it is today, where we’ve measured the age of our Universe to 1% accuracy. As is usual in science, answering old questions has uncovered new ones, and I predict an exciting decade ahead as cosmologists around the world build new theories and experiments attempting to shed light on the nature of dark matter, dark energy and other mysteries. In Chapter 13, we’ll return to this quest and its implications for the ultimate fate of our Universe.
To me, one of the most striking lessons from precision cosmology is that simple mathematical laws govern our Universe all the way back to its fiery origins. For example, the equations that constitute Einstein’s theory of general relativity appear to accurately govern the gravitational force over distances ranging from a millimeter up to a hundred trillion trillion (10²⁶) meters, and the equations of atomic and nuclear physics appear to have accurately governed our Universe from the first second after our Big Bang until today, 14 billion years later. And not just crudely, like the equations of economics, but with stunning precision, as illustrated by Figure 4.2. So precision cosmology highlights the mysterious utility of mathematics for understanding our world. We’ll return to this mystery in Chapter 10 and explore a radical explanation for it.
Another striking lesson from precision cosmology is that it’s incomplete. We saw that everything that we observe in our Universe today evolved from a hot Big Bang where nearly uniform gas as hot as the core of our Sun expanded so fast that it doubled its size in under a second. But who ordered that? I like to think of this as the “Bang Problem”: what put the bang into the Big Bang? Where did this hot expanding gas come from? Why was it so uniform? And why was it imprinted with these 0.002%–level seed fluctuations that eventually grew into the galaxies and the large-scale structure that we see around us in our Universe today? In short, how did it all begin? As we’ll see, extrapolating Friedmann’s expanding-universe equations even further back in time leads to embarrassing problems, suggesting that we need a radical new idea before we can understand our ultimate origins. That’s what the next chapter is about.
THE BOTTOM LINE
• A recent avalanche of data about the cosmic microwave background, galaxy clustering, etc., has transformed cosmology into a precision science; for example, we’ve gone from arguing about whether our Universe is 10 or 20 billion years old to arguing about whether it’s 13.7 billion or 13.8 billion years old.
• Einstein’s gravity theory arguably broke the record as the most mathematically beautiful theory, explaining gravity as a manifestation of geometry. It shows that the more mass space contains, the more curved space gets. This curvature of space causes things to move not in straight lines, but in a motion that curves toward massive objects.
• By measuring the geometry of universe-sized triangles, Einstein’s theory has let us infer the total amount of mass in our Universe. Remarkably, the atoms that were thought to be the building blocks of everything were found to make up only 4% of this mass, leaving 96% unexplained.