Dancing With Myself
Before 1800, that was not a worry. The universe was believed to be only a few thousand years old. (Archbishop Ussher of Armagh, working backwards through the genealogy of the Bible, announced in 1650 that the time of creation was 4004 B.C., at nightfall on October 22. No messing about with uncertainty for him.)
In the eighteenth century, the scriptural time-scale provided by religion prevented anyone worrying much about the age of the Sun. It presumably started out very hot and burning in 4000 B.C., and so it hadn’t had time to cool down yet. If it were made entirely of burning coal, it would have lasted long enough.
Around 1800, the geologists started to ruin things. In 1785, James Hutton read his Theory of the Earth to the Royal Society of Edinburgh, advancing the idea of uniformitarianism in geology. He published it in the Proceedings of the society three years later, and then an amplified form in two volumes in 1795. It did not make great waves, but after Hutton’s death John Playfair published (in 1802) a shorter and more accessible version.
Uniformitarianism, in spite of its ugly name, is a beautiful and simple idea. According to Hutton and Playfair, the processes that built the world in the past are exactly those at work today: the uplift of mountains, the tides, the weathering effects of rain and air and water flow, these shape the surface of the Earth. This is in sharp distinction to the idea that the world was created just as it is now, except for occasional great catastrophic changes like the Biblical Flood.
The great virtue of Hutton’s theory is that it removes the need for assumptions. The effectiveness of anything that shaped the past can be measured by looking at its effectiveness today.
The great disadvantage of the theory, from the point of view of anyone pondering what keeps the Sun hot, is the amount of time it takes for all this to happen. We can no longer accept a universe only a few thousand years old. Mountain ranges could not form, seabeds could not be raised, chalk deposits could not be laid down, and solid rocks could not erode to powder, in so short a time. Hutton and Playfair, and later Charles Lyell, who further developed and promoted the same ideas, needed many millions of years in which to work. And it was clearly preposterous to imagine the Sun as orders of magnitude younger than the Earth.
A Sun made of burning coal would not do. Hermann von Helmholtz and Lord Kelvin independently proposed a solution that could give geology more time. They suggested that the source of the Sun’s heat was not burning, but gravitational contraction. If the material of the Sun were slowly falling inward on itself, that would release energy.
If the source of such energy feels obscure, take a hammer and smack a flat stone hard with it, many times in succession. After thirty or forty blows, feel the hammer and the stone. They will both be warmer, heated by the energy of the hammer blows. Now instead of holding the hammer, raise it by a pulley and then drop it. Do this a number of times, and the hammer and stone still become warmer. The heat is now produced by gravity, pulling the hammer down. That was exactly, according to Lord Kelvin, what was heating the Sun.
The amount of energy produced by the Sun’s contraction could be precisely calculated. Unfortunately it was not enough. While Lord Kelvin was proposing an age for the Sun of 20 million years, the ungrateful geologists, and still more so the biologists, were asking for considerably more. Charles Darwin’s Origin of Species came out in 1859, and evolution seemed to need much longer than mere tens of millions of years to do its work. The biologists wanted hundreds of millions at a minimum, and they preferred a few billion.
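The arithmetic behind Kelvin’s figure is simple enough to sketch. Taking, as an order-of-magnitude assumption, the total gravitational energy released by contraction to be about GM²/R, and dividing by the Sun’s power output, gives what is now called the Kelvin-Helmholtz timescale. The numerical values below are modern measurements, so the result differs somewhat from Kelvin’s own 20 million years, but it lands in the same range:

```python
# Order-of-magnitude Kelvin-Helmholtz timescale for the Sun.
# E ~ G*M^2/R is an assumed rough estimate of the energy
# released by gravitational contraction.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m
L_sun = 3.828e26     # solar luminosity, W

E_grav = G * M_sun**2 / R_sun    # energy from contraction, ~4e41 J
t_seconds = E_grav / L_sun       # how long that could power the Sun
t_years = t_seconds / 3.156e7    # seconds per year

print(f"{t_years/1e6:.0f} million years")   # prints "31 million years"
```

Tens of millions of years, however the inputs are tweaked: far short of what the geologists and biologists were demanding.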
No one could give it to them during the whole of the nineteenth century. Lord Kelvin in particular became an arch-enemy of the evolutionists; no matter what he did, he could not come up with an age for the Sun greater than 100 million years, and he favored a number far less. An “odious spectre” is what Darwin called him. But no one could refute his physical arguments. A new scientific revolution was needed before an explanation was available for a multi-billion-year age of the Sun.
That revolution began in the 1890s, when in quick succession Röntgen discovered X rays (1895), Becquerel discovered radioactivity (1896) and Thomson (J.J. Thomson, not William Thomson—no relation) discovered the electron (1897). Together, these discoveries implied that the atom, previously thought to be an indivisible particle, had an interior structure and could be broken into smaller pieces.
Skipping as lightly over the next quarter of a century of enormous intellectual effort as we did over the period 1600-1800, by the end of the 1920s we reach a time when it was realized that not only could atoms split, to form smaller atoms and subatomic particles, but light atoms could combine, to form heavier atoms. In particular, four atoms of hydrogen could fuse together to form one atom of helium; and if that happened, huge amounts of energy could be produced. (That’s what a hydrogen bomb does.)
Perhaps the first person to realize that nuclear fusion was the key to what makes the Sun go on shining was Arthur Eddington. Certainly he was one of the first to develop the idea systematically, and equally certainly he believed that he was the first to think of it. There is a story of Eddington sitting out one balmy evening with a girlfriend. She said, “Aren’t the stars pretty?” And he said, “Yes, and I’m the only person in the world who knows what makes them shine.”
It’s a nice story, but it’s none too likely. Eddington was a lifelong bachelor, a Quaker, and a workaholic, by all accounts too busy to have much time for idle philandering. (Just as damning for the anecdote, Rudolf Kippenhahn, in his book 100 Billion Suns, tells exactly the same story with minor word changes—but about Fritz Houtermans, who also played an important part in explaining the hydrogen-to-helium energy conversion.)
Astronomy, mathematics, and physics formed Eddington’s life, and he was a superb theoretician. But even he could not say how the hydrogen fused to form helium. That insight came more than a decade later, with the work of Hans Bethe and Carl von Weizsäcker, who in 1938 discovered the “carbon cycle” for nuclear fusion.
However, Eddington didn’t have to know how. He had all the information that he needed, because he knew how much energy would be released when four hydrogen nuclei changed to one helium nucleus. All he required was the mass of hydrogen, the mass of helium, and Einstein’s most famous formula, E = mc².
From that, Eddington worked out how much hydrogen would have to be converted to provide the Sun’s known energy output. The answer is around 600 million tons a second.
Before we say, wow, what a huge amount, we ought to think of it as a fraction of the total mass of the Sun. That mass is around 2 × 10²⁷ tons, a number big enough to be almost meaningless. But we can put both numbers into a useful perspective, by noting that 600 million tons a second for one billion years is about 2 × 10²⁵ tons—or one percent of the Sun’s mass. Now, the Sun is about 65 percent hydrogen. So to keep the Sun shining as brightly as it shines today for five billion years would require only that during that period, less than eight percent of the Sun’s hydrogen be converted to helium.
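Eddington’s bookkeeping can be checked in a few lines. When four hydrogen nuclei fuse into one helium nucleus, about 0.7 percent of the hydrogen’s mass disappears, and E = mc² converts that into the Sun’s power output. The luminosity below is a modern value, and the 65 percent hydrogen fraction is the one used in the text:

```python
# Rough check of the solar fusion budget via E = m*c^2.
# Assumes the ~0.7% mass deficit of the 4H -> He reaction.
c = 2.998e8          # speed of light, m/s
L_sun = 3.828e26     # solar luminosity, W
M_sun = 2e27         # solar mass in metric tons, as in the text

mass_to_energy = L_sun / c**2              # kg/s actually destroyed
hydrogen_burned = mass_to_energy / 0.007   # kg/s of hydrogen consumed
tons_per_second = hydrogen_burned / 1000   # metric tons per second

seconds_per_gyr = 3.156e16                 # seconds in a billion years
burned_5gyr = tons_per_second * 5 * seconds_per_gyr

fraction_of_hydrogen = burned_5gyr / (0.65 * M_sun)
print(f"{tons_per_second:.2e} tons/s")   # ~6e8: the 600 million tons a second
print(f"{fraction_of_hydrogen:.1%}")     # prints "7.4%": under eight percent
```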
Why pick a period of five billion years? Because other evidence suggests an age for the Earth of about 4.5 billion years. Nuclear fusion is all we need in the Sun to provide the right time-scale for geology and biology on Earth. More than that, the Sun can go on shining just as brightly for another five billion years, without totally depleting its source of energy.
But how typical a star is the Sun? It certainly occupies a unique place in our lives. All the evidence, however, suggests that the Sun is a rather normal star. There are stars scores of times as massive, and stars with only a small fraction of its mass. The upper limit is set by stability, because a contracting ball of gas of more than about 90 solar masses will oscillate more and more wildly, until parts of it are blown off into space and what’s left is 90 solar masses or less. At the lower end, below a certain size, maybe one-twelfth of the Sun’s mass, a starlike object cannot generate enough internal pressure to initiate nuclear fusion, and so should perhaps not be called a “star” at all.
The Sun sits comfortably in the middle range, designated by astronomers as a G2-type dwarf star, in what is known as the main sequence because most of the stars we see can be fitted into that sequence.
The life history of a star depends more than anything else on its mass. That story also started with Eddington, who in 1924 discovered the mass-luminosity law. The more massive a star, the more brightly it shines. This law does not merely restate the obvious, that more massive stars are bigger and so radiate more simply because they are of larger area. If that were true, because the mass of a star grows with the cube of its radius, and its surface area like the square of its radius, we might expect to find that brightness goes roughly like mass to the two-thirds power. (Multiply the mass by eight, and expect the brightness to increase by a factor of four.) In fact, the brightness goes up rather faster than the cube of the mass (multiply the mass by eight, and the brightness increases by a factor of more than a thousand).
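The contrast between the two scalings is easy to verify numerically. A commonly quoted textbook approximation for main-sequence stars (an assumption here; the true exponent varies with mass) is luminosity proportional to mass to the 3.5 power, against the naive surface-area guess of the two-thirds power:

```python
# Compare the naive area-only scaling with an approximate
# mass-luminosity relation for an eight-fold increase in mass.
# The exponent 3.5 is a common textbook approximation.
mass_ratio = 8

naive = mass_ratio ** (2 / 3)   # brightness if it only tracked surface area
observed = mass_ratio ** 3.5    # approximate empirical scaling

print(round(naive, 2))    # prints 4.0: the factor-of-four guess in the text
print(round(observed))    # prints 1448: "a factor of more than a thousand"
```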
The implications of this for the evolution of a star are profound. Dwarf stars can go on steadily burning for a hundred billion years. Massive stars squander their energy at a huge rate, running out of available materials for fusion in just millions of years.
The interesting question is, what happens to massive stars when their central regions no longer have hydrogen to convert to helium? Detailed (and difficult) models, beginning with Fred Hoyle and William Fowler’s fundamental work on stellar nucleosynthesis in the 1940s, have allowed that question to be answered in the past forty years.
Like a compulsive gambler running out of chips, stars coming to the end of their supply of hydrogen seek other energy sources. At first they find it through other nuclear fusion processes. Helium in the central core “burns” to form carbon, carbon burns to make oxygen and neon and magnesium. These processes call for higher and higher temperatures before they are significant. Carbon burning starts at about 600 million degrees (as usual, we are talking degrees Celsius). Neon burning begins around a billion degrees. Such a temperature is available only in the cores of massive stars, so for a star of less than nine solar masses that is the end of the road. Many such stars settle down to old age as cooling lumps of dense matter. Stars above nine solar masses can keep going, burning neon and then oxygen. Finally, above 3 billion degrees, silicon, which is produced in a process involving collisions of oxygen nuclei, begins to burn, and all the elements are produced up to and including iron. By the time that we reach iron, the different elements form spherical shells about the star’s center, with the heaviest (iron) in the middle, surrounded by shells of successively lighter elements until we get to a hydrogen shell on the outside.
Now we come to a fact of great significance. No elements heavier than iron can be produced through this nuclear synthesis process in stars. Iron, element 26, is the place on the table of elements where nuclear binding energy per nucleon is at its maximum. If you try to “burn” iron, fusing it to make heavier elements, you use energy, rather than producing it. Notice that this has nothing to do with the mass of the star. It is decided only by nuclear forces.
The massive star that began as mainly hydrogen has reached the end of the road. The final processes have proceeded faster and faster, and they are much less efficient at producing energy than the hydrogen to helium reaction. Hydrogen burning takes millions of years for a star of, say, a dozen solar masses. But carbon burning is all finished in a few thousand years, and the final stage of silicon burning lasts only a day or so.
There are obvious next questions: What happens now to the star? Does it sink into quiet old age, like most small stars? Or does it find some new role?
Actually, we have one more question to ask. We can explain through stellar nucleosynthesis the creation of every element lighter than iron. But more than 60 elements heavier than iron are found on Earth. If they are not formed by nuclear fusion within stars, where did they come from?
They were not, as you might argue, “there from the beginning”; but in order to prove that assertion we need some additional facts.
In the best cliff-hanger tradition, therefore, I am going to leave our massive star, running faster and faster through its sources of energy, down to the last one left (silicon to iron), and wondering where it can possibly go next.
We will come back to that mystery, and also to the problem of the source of heavy elements, in a little while. First, however, we must explore another important piece of the universe.
3. …AND GALAXIES
The ancient astronomers, observing without benefit of telescopes, knew and named many of the stars. They also noted the presence of a hazy glow that extends across a large fraction of the sky, and they called it the Milky Way. Finally, those with the most acute vision had noted that the constellation of Andromeda contained within it a much smaller patch of haze.
The progress from observation of the stars to the explanation of hazy patches in the sky came in stages. Galileo started the ball rolling in 1610, when he examined the Milky Way with his telescope and found that he could see huge numbers of stars there, far more than were visible with the unaided eye. He asserted that the Milky Way was nothing more than stars, in vast numbers. William Herschel carried this a stage farther, counting how many stars he could see in different parts of the Milky Way, and beginning to build towards the modern picture, of a great disk made out of billions of separate stars, with the Sun in the plane of the disk but well away (30,000 light-years) from the center.
At the same time, the number of hazy patches in the sky visible with a telescope went up and up as telescope power increased. Lots of them looked like the patch in Andromeda, which had long been known as the Andromeda Nebula (nebula = mist, in Latin).
A dedicated comet hunter, Charles Messier, annoyed at constant confusion of hazy patches (uninteresting) with comets (highly desirable), had already plotted out their locations so as not to be bothered by them. This resulted in the Messier Catalog: the first, and almost inadvertent, catalog of galaxies (galaxy = milky).
But what were those fuzzy glows identified by Messier?
The suspicion that the Andromeda and other galaxies might be composed of stars, as our own galaxy, the Milky Way, is made up of stars, was there from Galileo’s time. Individual stars cannot be seen in most galaxies, but only because of their distance. The number of galaxies, though, probably exceeds anything that Galileo would have found credible. Today’s estimate is that there are about a hundred billion galaxies in the visible universe—roughly the same as the number of individual stars in a typical galaxy, such as our own. Galaxies, fainter and fainter as their distance increases, are seen as far as our telescopes can probe.
In most respects, the distant ones look little different from the nearest ones. But there is one crucial difference, and it is the main reason for introducing the galaxies at this point.
Galaxies increase in numbers as they decrease in apparent brightness, and it is natural to assume that these two go together: if we double the distance of a galaxy, it appears one-quarter as bright, but we expect to see four times as many like it if space is uniformly filled with galaxies.
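That pairing of counts and brightness can be sketched in a couple of lines, reading “four times as many like it” as the count in a thin shell at a given distance (an assumption about the author’s intent; the shell’s area grows as the square of its radius):

```python
# Uniformly filled space: a thin shell at distance d holds a number of
# galaxies proportional to d**2 (shell area), while the apparent
# brightness of each one falls off as 1/d**2 (inverse-square law).
d1, d2 = 1.0, 2.0   # double the distance

brightness_ratio = (d1 / d2) ** 2   # 0.25: one-quarter as bright
count_ratio = (d2 / d1) ** 2        # 4.0: four times as many in the shell

print(brightness_ratio, count_ratio)   # prints "0.25 4.0"
```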
What we would not expect to find, until it was suggested by Carl Wirtz in 1924 and confirmed by Edwin Hubble in 1929, is that more distant galaxies appear redder than nearer ones.
To be more specific, particular wavelengths of light emitted by galaxies have been shifted towards longer wavelengths in the fainter (and therefore presumably more distant) galaxies. The question is, what could cause such a shift?
The most plausible mechanism, to a physicist, is called the Doppler effect. According to the Doppler effect, light from a receding object will be shifted to longer (redder) wavelengths; light from an approaching object will be shifted to shorter (bluer) wavelengths. Exactly the same thing works for sound, which is why a speeding police car’s siren seems to drop in pitch as it passes by.
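For recession speeds well below the speed of light, the fractional shift in wavelength is simply v/c. A minimal sketch, applied to the hydrogen-alpha line (the 5-percent-of-light-speed recession velocity is an arbitrary illustrative choice):

```python
# Non-relativistic Doppler shift: wavelength change / wavelength = v/c.
# The recession speed chosen here is purely illustrative.
c = 2.998e5               # speed of light, km/s
v = 0.05 * c              # a galaxy receding at 5% of light speed

rest_wavelength = 656.3   # nm, the hydrogen-alpha emission line
shift = rest_wavelength * (v / c)
observed = rest_wavelength + shift

print(f"{observed:.1f} nm")   # prints "689.1 nm", shifted toward the red
```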
If we accept the Doppler effect as the cause of the reddened appearance of the galaxies, we are led (as was Hubble) to an immediate conclusion: the whole universe must be expanding, at a close to constant rate, because the red shift of the galaxies corresponds to their brightness, and therefore to their distance.
Note that this does not mean that the universe is expanding into some other space. There is no other space. It is the whole universe—everything there is—that has grown over time to its present dimension.
And from this we can draw another immediate conclusion. If the expansion has always proceeded at the rate we see today, there must have been a time when everything in the whole universe was drawn together to a single point. It is logical to call the time that has elapsed since everything was in that infinitely dense singularity the age of the universe. The Hubble galactic redshift allows us to calculate how long ago that happened.
Our estimate is bounded on the one hand by the constancy of the laws of physics (how far back can we go before the universe becomes totally unrecognizable, far from the conditions where we believe today’s physical laws are valid?); and on the other hand by our knowledge of the distance of the galaxies, as determined by other methods.
Curiously, it is the second problem that forms the major constraint. When we say that the universe is between ten and twenty billion years old, that uncertainty of a factor of two betrays our ignorance of galactic distances. When (and if) the Hubble Space Telescope performs as it was originally supposed to do, we will obtain a better estimate of the distance of the nearer galaxies; from that we will be able to reduce the uncertainty in the size and age of the universe.
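The factor-of-two spread maps directly onto the spread in measurements of the Hubble constant. To first order, ignoring any slowing of the expansion, the age of the universe is just the reciprocal of that constant. The two values below bracket the range debated at the time:

```python
# Age of the universe estimated as 1/H0, ignoring deceleration.
# H0 is in the conventional units of km/s per megaparsec;
# the values 50 and 100 bracket the historical disagreement.
km_per_mpc = 3.086e19      # kilometers in one megaparsec
seconds_per_gyr = 3.156e16

for H0 in (50, 100):
    t_seconds = km_per_mpc / H0          # 1/H0 in seconds
    t_gyr = t_seconds / seconds_per_gyr
    print(f"H0 = {H0}: about {t_gyr:.0f} billion years")
```

Run it and the two bounds come out near twenty and ten billion years, which is exactly the uncertainty quoted above.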
I want to pause here and point out how remarkable it is that observation of the faint agglomerations of stars known as galaxies leads us, very directly and cleanly, to the conclusion that we live in a universe of finite and determinable age. A century ago, no one could have offered even an approximate age for the universe. For an upper bound, most non-religious scientists would probably have said “forever.” For a lower bound, all they had was the age of the Earth.