
Asimov's New Guide to Science


by Isaac Asimov


  In 1924, Eddington pointed out that the interior of any star must be very hot. Because of a star’s great mass, its gravitational force is immense. If the star is not to collapse, this huge force must be balanced by an equal internal pressure—from heat and from radiation energy. The more massive the star, the higher the central temperature required to balance the gravitational force. To maintain this high temperature and radiation pressure, the more massive stars must be burning energy faster, and they must be brighter, than less massive ones; this is the mass-luminosity law. The relationship is a drastic one, for luminosity varies as the sixth or seventh power of the mass. If the mass is tripled, the luminosity increases by a factor of six or seven 3’s multiplied together—say, 750-fold.

  It follows that the massive stars are spendthrift with their hydrogen fuel and have a shorter life. Our sun has enough hydrogen to last it at its present radiation rate for billions of years. A bright star such as Capella must burn out in about 20 million years, and some of the brightest stars—for example, Rigel—cannot possibly last more than 1 or 2 million years. Hence, the very brightest stars must be very youthful. New stars are perhaps even now being formed in regions where space is dusty enough to supply the raw material.
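
  (To make the arithmetic of the last two paragraphs concrete, here is a minimal sketch in Python, using the text’s exponent of 6 or 7; the function names are mine, for illustration only.)

  # Mass-luminosity scaling as described in the text: L ~ M**n, n = 6 or 7.
  # Lifetime then goes as fuel over burn rate: t ~ M / L ~ M**(1 - n).

  def luminosity_ratio(mass_ratio, exponent=6):
      """Factor by which luminosity grows when mass grows by mass_ratio."""
      return mass_ratio ** exponent

  def lifetime_ratio(mass_ratio, exponent=6):
      """Factor by which lifetime changes for the same change in mass."""
      return mass_ratio ** (1 - exponent)

  print(luminosity_ratio(3))      # 729 -- the text's "say, 750-fold"
  print(luminosity_ratio(3, 7))   # 2187
  print(lifetime_ratio(3))        # ~0.004: triple the mass, ~1/250 the lifetime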

  Indeed, in 1955, the American astronomer George Herbig detected two stars in the dust of the Orion Nebula that were not visible in photographs of the region taken some years before. These may be stars that were actually born in our lifetime.

  By 1965, hundreds of stars had been located that were so cool they did not quite shine. They were detected by their infrared radiation and are called infrared giants, since they are made up of large quantities of rarefied matter. Presumably, these are quantities of dust and gas, gathering together and gradually growing hotter. Eventually, they will become hot enough to shine.

  The next advance in the study of the evolution of stars came from analysis of the stars in globular clusters. The stars in a cluster are all about the same distance from us, so their apparent magnitudes all differ from their absolute magnitudes by the same fixed amount (as in the case of the cepheids in the Magellanic Clouds). Therefore, with their magnitudes known, an H-R diagram of these stars can be prepared. It is found that the cooler stars (burning their hydrogen slowly) are on the main sequence, but the hotter ones tend to depart from it. In accordance with their high rate of burning, and their rapid aging, they follow a definite line showing various stages of evolution, first toward the red giants and then back, across the main sequence again, and down toward the white dwarfs.
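
  (That fixed amount is what astronomers call the distance modulus. A short Python sketch, with an assumed cluster distance, shows why equal distances make apparent magnitudes a faithful stand-in for absolute ones:)

  import math

  # Distance modulus: every star at distance d (in parsecs) has its
  # apparent magnitude m offset from its absolute magnitude M by the
  # same constant, 5 * log10(d / 10).

  def apparent_magnitude(absolute_mag, distance_pc):
      return absolute_mag + 5 * math.log10(distance_pc / 10)

  # Two stars in one cluster, at an assumed 10,000 parsecs:
  print(apparent_magnitude(0.0, 10_000))   # 15.0
  print(apparent_magnitude(3.0, 10_000))   # 18.0 -- same +15 offset for both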

  From this and from certain theoretical considerations about the manner in which subatomic particles can combine at certain high temperatures and pressures, Fred Hoyle has drawn a detailed picture of the course of a star’s evolution. According to Hoyle, in its early stages, a star changes little in size or temperature. (This is the position our sun is in now and will continue to be in for a long time.) As a star converts the hydrogen in its extremely hot interior into helium, the helium accumulates at its center. When this helium core reaches a certain size, the star starts to change in size and temperature dramatically. It expands enormously, and its surface becomes cooler. In other words, it leaves the main sequence and moves in the red-giant direction. The more massive the star, the more quickly it reaches this point. In the globular clusters, the more massive ones have already progressed varying lengths along the road.

  Despite its lower temperature, the expanded giant releases more heat because of its larger surface area. In the far distant future, when the sun leaves the main sequence, or even somewhat before, it will have heated to the point where life will be impossible on the earth. That point, however, is still billions of years in the future.

  But what precisely is the change in the helium core that brings about expansion to a red giant? Hoyle suggested that the helium core itself contracts and, as a result, rises to a temperature at which the helium nuclei can fuse to form carbon, with the liberation of additional energy. In 1959, the American physicist David Elmer Alburger showed in the laboratory that this reaction actually can take place. It is a very rare and unlikely sort of reaction, but there are so many helium atoms in a red giant that enough such fusions can occur to supply the necessary quantities of energy.
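
  (The energy bookkeeping can be checked from standard atomic masses. The back-of-envelope Python sketch below is mine, not a description of Alburger’s experiment:)

  # Three helium-4 nuclei fusing into carbon-12: the lost mass appears
  # as energy. Masses in unified atomic mass units (u).

  M_HE4 = 4.002602    # helium-4
  M_C12 = 12.000000   # carbon-12 (exact, by definition)
  U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV

  mass_defect = 3 * M_HE4 - M_C12    # ~0.0078 u
  print(mass_defect * U_TO_MEV)      # ~7.27 MeV per carbon nucleus formed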

  Hoyle goes further. The new carbon core heats up still more, and still more complicated atoms, such as those of oxygen and neon, begin to form. While this is happening, the star is contracting and getting hotter again; it moves back toward the main sequence. By now the star has begun to acquire a series of layers, like an onion. It has an oxygen-neon core, then a layer of carbon, then one of helium, and the whole is enveloped in a skin of still-unconverted hydrogen.

  However, in comparison with its long life as a hydrogen consumer, the star is on a quick toboggan slide through the remaining fuels. Its life cannot continue for long, since the energy produced by helium fusion and beyond is about one-twentieth that produced by hydrogen fusion. In a comparatively short time, the energy required to keep the star expanded against the inexorable pull of its own gravitational field begins to fall short, and the star contracts ever more swiftly. It contracts not only back to what would have been the size of a normal star, but beyond—to a white dwarf.
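
  (The same mass-defect arithmetic makes the point quantitatively. Per unit mass, helium burning alone returns roughly a tenth of what hydrogen burning does; the text’s one-twentieth figure covers helium fusion and all the later, still stingier stages together. A sketch:)

  # Fraction of mass converted to energy in each stage, from atomic masses (u).

  M_H1, M_HE4, M_C12 = 1.007825, 4.002602, 12.000000

  hydrogen_yield = (4 * M_H1 - M_HE4) / (4 * M_H1)    # ~0.71% of the mass
  helium_yield   = (3 * M_HE4 - M_C12) / (3 * M_HE4)  # ~0.065% of the mass

  print(hydrogen_yield)                  # ~0.0071
  print(helium_yield / hydrogen_yield)   # ~0.09: about a tenth as productive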

  During the contraction, the outermost layers of the star may be left behind or even blown off because of the heat developed by the contraction. The white dwarf is thus surrounded by an expanding shell of gas, which shows up in our telescopes at the edges, where the line of sight passes through the greatest depth of gas. Such white dwarfs seem to be surrounded by a small “smoke ring” or “doughnut” of gas. These are called planetary nebulae because the smoke surrounds the star like a planetary orbit made visible. Eventually, the ring of smoke expands and thins into invisibility, and we have white dwarfs such as Sirius B with no sign of any surrounding nebulosity.

  White dwarfs form, in this way, rather quietly; and such a comparatively quiet “death” lies in the future for stars like our sun and smaller ones. What’s more, white dwarfs, if undisturbed, have, in prospect, an indefinitely prolonged life—a kind of long rigor mortis—in which they slowly cool until, eventually, they are no longer hot enough to glow (many billions of years in the future) and then continue for further billions and billions of years as black dwarfs.

  On the other hand, if a white dwarf is part of a binary system, as Sirius B and Procyon B are, and if the other star is main-sequence and very close to the white dwarf, there can be exciting moments. As the main-sequence star expands in its own evolutionary development, some of its matter may drift outward under the pull of the white dwarf’s intense gravitational field and move into orbit about the latter. Occasionally, some of the orbiting material will spiral to the white dwarf’s surface, where the gravitational pull will compress it and cause it to undergo fusion so that it will emit a burst of energy. If a particularly large gout of matter drops to the white dwarf’s surface, the energy emission may be large enough to see from Earth, and astronomers record the existence of a nova. Naturally, this sort of thing can happen more than once, and recurrent novas do exist.

  But these are still not supernovas. Where do these come in? To answer that, we have to turn to stars that are distinctly more massive than our sun. These are relatively rare (in all classes of astronomical objects, large members are rarer than small ones), so that perhaps only one star in thirty is considerably more massive than our sun. Even so, there may be 7 billion such massive stars in our galaxy.

  In massive stars, the core is more compressed, under a gravitational pull greater than that in smaller stars. The core is therefore hotter, and fusion reactions can continue past the oxygen-neon stage of smaller stars. The neon can combine further to magnesium, which can combine in turn to form silicon, and then, in turn, iron. At a late stage in its life, the star may be built up of more than half a dozen concentric shells, in each of which a different fuel is being consumed. The central temperature may have reached 3 billion to 4 billion degrees by then. Once the star begins to form iron, it has reached a dead end, for iron atoms represent the point of maximum stability and minimum energy content. To alter iron atoms in the direction of more complex or less complex atoms requires, either way, an input of energy.

  Furthermore, as central temperatures rise with age, radiation pressure rises, too, and in proportion to the fourth power of the temperature. When the temperature doubles, the radiation pressure increases sixteenfold, and the balance between it and gravitation becomes ever more delicate. Eventually, the central temperatures may rise so high, according to Hoyle’s suggestion, that the iron atoms are driven apart into helium. But for this to happen, as I have just said, energy must be poured into the atoms. The only place the star can get this energy from is its gravitational field. When the star shrinks, the energy it gains can be used to convert iron to helium. The amount of energy needed is so great, however, that the star must shrink drastically to a fraction of its former volume, and must do so, according to Hoyle, in about a second.
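
  (This is the Stefan-Boltzmann fourth-power law at work; the arithmetic is a one-liner:)

  # Radiation pressure scales as the fourth power of temperature.
  def pressure_ratio(temp_ratio):
      return temp_ratio ** 4

  print(pressure_ratio(2))   # 16 -- double the temperature, sixteenfold the pressure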

  When such a star starts to collapse, its iron core is still surrounded by a voluminous outer mantle of atoms not yet built up to maximum stability. As the outer regions collapse and their temperature rises, these still-combinable substances “take fire” all at once. The result is an explosion that blasts the material away from the body of the star. This explosion is a supernova. It was such an explosion that created the Crab Nebula.

  The matter blasted into space as a result of a supernova explosion is of enormous importance to the evolution of the universe. At the time of the big bang, only hydrogen and helium atoms were formed. In the core of stars, other atoms, more complex ones, are formed—all the way up to iron. Without supernova explosions, these complex atoms would remain in the cores and, eventually, in white dwarfs. Only trivial amounts would make their way into the universe generally through the halos of planetary nebulas.

  In the course of the supernova explosion, material from the inner layers of stars would be ejected forcefully into surrounding space. The vast energy of the explosion would even go into the formation of atoms more complex than those of iron.

  The matter blasted into space would be added to the clouds of dust and gas already existing and would serve as raw material for the formation of new, second-generation stars, rich in iron and other metallic elements. Our own sun is probably a second-generation star, much younger than the old stars of some of the dust-free globular clusters. Those first-generation stars are low in metals and rich in hydrogen. The earth, formed out of the same debris of which the sun was born, is extraordinarily rich in iron—iron that once may have existed at the center of a star that exploded many billions of years ago.

  But what happens to the contracting portion of the stars that explode in supernova explosions? Do they form white dwarfs? Do larger, more massive stars simply form larger, more massive white dwarfs?

  The first indication that they cannot do so, and that we cannot expect larger and larger white dwarfs, came in 1939 when the Indian astronomer Subrahmanyan Chandrasekhar, working at Yerkes Observatory near Williams Bay, Wisconsin, calculated that no star of more than 1.4 times the mass of our sun (a figure now called Chandrasekhar’s limit) could become a white dwarf by the “normal” process Hoyle described. And, in fact, all the white dwarfs so far observed turn out to be below Chandrasekhar’s limit in mass.

  The reason for the existence of Chandrasekhar’s limit is that white dwarfs are kept from shrinking farther by the mutual repulsion of the electrons (subatomic particles I will discuss later, in chapter 7) contained in their atoms. With increasing mass, gravitational intensity increases; and at 1.4 times the mass of the sun, electron repulsion no longer suffices, and the white dwarf collapses to form a star even tinier and denser, with subatomic particles in virtual contact. The detection of such further extremes had to await new methods of probing the universe, taking advantage of radiations other than those of visible light.
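
  (The limit can be estimated from fundamental constants. The Python sketch below uses the standard modern textbook formula (not a reconstruction of Chandrasekhar’s own calculation) and lands near 1.4 solar masses:)

  # Chandrasekhar mass: M_ch ~ 3.1 * (hbar*c/G)**1.5 / (mu_e * m_H)**2,
  # where mu_e = 2 is the nucleons-per-electron ratio of helium/carbon matter.

  HBAR  = 1.0546e-34   # J*s
  C     = 2.9979e8     # m/s
  G     = 6.6743e-11   # m**3 kg**-1 s**-2
  M_H   = 1.6726e-27   # kg (hydrogen atom, essentially the proton)
  MU_E  = 2.0
  M_SUN = 1.989e30     # kg

  m_ch = 3.1 * (HBAR * C / G) ** 1.5 / (MU_E * M_H) ** 2
  print(m_ch / M_SUN)  # ~1.4 solar masses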

  The Windows to the Universe

  The greatest weapons in the conquest of knowledge are an understanding mind and the inexorable curiosity that drives it on. And resourceful minds have continually invented new instruments which have opened up horizons beyond the reach of our unaided sense organs.

  THE TELESCOPE

  The best-known example is the vast surge of new knowledge that followed the invention of the telescope in 1609. The telescope, essentially, is simply an oversized eye. In contrast to the quarter-inch pupil of the human eye, the 200-inch telescope on Palomar Mountain has more than 31,000 square inches of light-gathering area. Its light-collecting power intensifies the brightness of a star about a million times, compared with what the naked eye can see. This telescope, first put into use in 1948, is the largest in use today in the United States; but in 1976, the Soviet Union began observations with a 236.2-inch telescope (that is, one with a mirror that is 600 centimeters in diameter) located in the Caucasus mountains.
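
  (Both figures are easy to verify, since light-gathering power goes as the square of the aperture. A quick check in Python:)

  import math

  PUPIL_IN  = 0.25    # quarter-inch dark-adapted pupil, per the text
  MIRROR_IN = 200.0   # Palomar's 200-inch mirror

  print(math.pi * (MIRROR_IN / 2) ** 2)   # ~31,416 square inches of area
  print((MIRROR_IN / PUPIL_IN) ** 2)      # 640,000 -- on the order of a million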

  This is about as large as telescopes of this kind are likely to get; and, to tell the truth, the Soviet telescope does not work well. There are other ways, however, of improving telescopes than by simply making them larger. During the 1950s, Merle A. Tuve developed an image tube which electronically intensifies the faint light gathered by a telescope, tripling its power. Clusters of comparatively small telescopes, working in unison, can produce images that are equivalent to those produced by a single telescope larger than any of the components; and plans are in progress both in the United States and the Soviet Union to build clusters that will far outstrip the 200-inch and 236.2-inch telescopes. Then, too, a large telescope put into orbit about the earth would be able to scan the skies without atmospheric interference and to see more clearly than any telescope likely to be built on Earth. That, too, is in the planning stage.

  But mere magnification and light-intensification are not the full measure of the telescope’s gift to human beings. The first step toward making it something more than a mere light collector came in 1666, when Isaac Newton discovered that light could be separated into what he called a spectrum of colors. He passed a beam of sunlight through a triangular glass prism and found that the beam spread out into a band made up of red, orange, yellow, green, blue, and violet light, each color fading gently into the next (figure 2.6). (The phenomenon itself, of course, has always been familiar in the form of the rainbow, the result of sunlight passing through water droplets, which act like tiny prisms.)

  Figure 2.6. Newton’s experiment splitting the spectrum of white light.

  What Newton showed was that sunlight, or white light, is a mixture of many specific radiations (now recognized as wave forms of varying wavelengths) which impress the eye as so many different colors. A prism separates the colors because, on passing from air into glass, and from glass into air, light is bent, or refracted, and each wavelength undergoes a different amount of refraction—the shorter the wavelength, the greater the refraction. The short wavelengths of violet light are refracted most; the long wavelengths of red, least.
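
  (Snell’s law makes this quantitative. In the sketch below, the glass is described by a rough Cauchy fit, n = A + B/λ², where the coefficients are illustrative values assumed for a generic crown glass:)

  import math

  A, B = 1.5046, 0.0042   # assumed Cauchy coefficients; wavelength in micrometers

  def refraction_angle(wavelength_um, incidence_deg=45.0):
      """Angle (degrees) of the ray inside the glass; smaller = bent more."""
      n = A + B / wavelength_um ** 2
      return math.degrees(math.asin(math.sin(math.radians(incidence_deg)) / n))

  print(refraction_angle(0.40))   # violet: ~27.5 degrees, refracted most
  print(refraction_angle(0.70))   # red:    ~27.9 degrees, refracted least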

  This phenomenon explains, among other things, an important flaw in the very earliest telescopes, which was that objects viewed through them were surrounded by obscuring rings of color, which were spectra caused by the dispersion of light as it passed through the lenses.

  Newton despaired of correcting this effect as long as lenses of any sort were used. He therefore designed and built a reflecting telescope in which a parabolic mirror, rather than a lens, was used to magnify an image. Light of all wavelengths was reflected alike, so that no spectra were formed on reflection, and rings of color (chromatic aberration) were not to be found.

  In 1757, the English optician John Dollond prepared lenses of two different kinds of glass, one kind canceling out the spectrum-forming tendency of the other. In this way, achromatic (“no color”) lenses could be built. Using such lenses, refracting telescopes became popular again. The largest such telescope, with a 40-inch lens, is at Yerkes Observatory and was built in 1897. No larger refracting telescopes have been built since or are likely to be built, for still larger lenses would absorb so much light as to cancel their superior magnifying powers. The giant telescopes of today are, in consequence, all of the reflecting variety, since the reflecting surface of a mirror absorbs little light.

  THE SPECTROSCOPE

  In 1814, a German optician, Joseph von Fraunhofer, went beyond Newton. He passed a beam of sunlight through a narrow slit before allowing it to be refracted by a prism. The spectrum that resulted was actually a series of images of the slit in light of every possible wavelength. There were so many slit images that they melted together to form the spectrum. Fraunhofer’s prisms were so excellently made and produced such sharp slit images that it was possible to see that some of the slit images were missing. If particular wavelengths of light were missing in sunlight, no slit image would be formed at that wavelength, and the sun’s spectrum would be crossed by dark lines.

  Fraunhofer mapped the locations of the dark lines he detected and recorded over 700. They have been known as Fraunhofer lines ever since. In 1842, the lines of the solar spectrum were first photographed by the French physicist Alexandre Edmond Becquerel. Such photography greatly facilitated spectral studies; and, with the use of modern instruments, more than 30,000 dark lines have been detected in the solar spectrum, and their wavelengths measured.

  In the 1850s, a number of scientists toyed with the notion that the lines were characteristic of the various elements present in the sun. The dark lines would represent absorption of light, at the wavelengths in question, by certain elements; bright lines would represent characteristic emissions of light by elements. About 1859, the German chemists Robert Wilhelm Bunsen and Gustav Robert Kirchhoff worked out a system for identifying elements in this way. They heated various substances to incandescence, spread out their glow into spectra, measured the location of the lines (in this case, bright lines of emission against a dark background) on a background scale, and matched up each line with a particular element. Their spectroscope was quickly applied to discovering new elements by means of new spectral lines not identifiable with known elements. Within a couple of years, Bunsen and Kirchhoff discovered cesium and rubidium in this manner.
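
  (In spirit, Bunsen and Kirchhoff’s procedure is a lookup table. The toy Python sketch below carries only three reference lines, where real catalogs run to thousands; the values and names in it are merely illustrative:)

  # Match a measured wavelength (in nanometers) against known spectral lines.
  KNOWN_LINES = {
      656.3: "hydrogen (H-alpha)",
      589.0: "sodium (D2)",
      589.6: "sodium (D1)",
  }

  def identify(measured_nm, tolerance=0.2):
      for wavelength, element in KNOWN_LINES.items():
          if abs(measured_nm - wavelength) <= tolerance:
              return element
      return "unidentified -- perhaps a new element, as cesium and rubidium were"

  print(identify(656.2))   # hydrogen (H-alpha)
  print(identify(455.5))   # unidentified -- perhaps a new element...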

 
