Asimov's New Guide to Science


by Isaac Asimov


  It is natural to suppose, to begin with, that the sky is simply a hard canopy in which the shining heavenly bodies are set like diamonds. (Thus the Bible refers to the sky as the “firmament,” from the same Latin root as the word firm.) As early as the sixth to the fourth centuries B.C., Greek astronomers realized that there must be more than one canopy. For while the “fixed” stars moved around Earth in a body, apparently without changing their relative positions, this was not true of the sun, the moon, and five bright starlike objects (Mercury, Venus, Mars, Jupiter, and Saturn): in fact, each moved in a separate path. These seven bodies were called planets (from a Greek word meaning “wanderer”), and it seemed obvious that they could not be attached to the vault of the stars.

  The Greeks assumed that each planet was set in an invisible spherical vault of its own, and that the vaults were nested one above the other, the nearest belonging to the planet that moved fastest. The quickest motion belonged to the moon, which circled the sky in about twenty-seven and a third days. Beyond it lay in order (so thought the Greeks) Mercury, Venus, our sun, Mars, Jupiter, and Saturn.

  EARLY MEASUREMENTS

  The first scientific measurement of any cosmic distance came about 240 B.C. Eratosthenes of Cyrene, the head of the Library at Alexandria, then the most advanced scientific institution in the world, pondered the fact that on 21 June, when the noonday sun was exactly overhead at the city of Syene in Egypt, it was not quite at the zenith at noon in Alexandria, 500 miles north of Syene. Eratosthenes decided that the explanation must be that the surface of the earth curved away from the sun. From the length of the shadow in Alexandria at noon on the solstice, straightforward geometry could yield the amount by which the earth’s surface curved in the 500-mile distance from Syene to Alexandria. From that one could calculate the circumference and the diameter of the earth, assuming it to be spherical in shape—a fact Greek astronomers of the day were ready to accept (figure 2.1).

  Eratosthenes worked out the answer (in Greek units), and, as nearly as we can judge, his figures in our units came out at about 8,000 miles for the diameter and 25,000 miles for the circumference of the earth. These figures, as it happens, are just about right. Unfortunately, this accurate value for the size of the earth did not prevail. About 100 B.C. another Greek astronomer, Posidonius of Apamea, repeated Eratosthenes’ work but reached the conclusion that the earth was but 18,000 miles in circumference.
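The geometry above is simple enough to sketch in a few lines. The zenith angle of the noonday sun at Alexandria equals the arc of latitude between the two cities, so the full circumference scales up in proportion. The 7.2-degree angle used here (one-fiftieth of a full circle, the value traditionally attributed to Eratosthenes) is an assumption for illustration; it reproduces the 25,000-mile figure quoted in the text.

```python
import math

def circumference_from_shadow(angle_deg, arc_miles):
    """The sun's zenith angle at Alexandria equals the arc of latitude
    separating it from Syene, so the full circle scales accordingly."""
    return arc_miles * 360.0 / angle_deg

# One-fiftieth of a circle (7.2 degrees) over the 500-mile arc:
circumference = circumference_from_shadow(7.2, 500)   # 25,000 miles
diameter = circumference / math.pi                    # about 7,958 miles
```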

  It was the smaller figure that was accepted throughout ancient and medieval times. Columbus accepted the smaller figure and thought that a 3,000-mile westward voyage would take him to Asia. Had he known the earth’s true size, he might not have ventured. It was not until 1521-23, when Magellan’s fleet (or rather the one remaining ship of the fleet) finally circumnavigated the earth, that Eratosthenes’ correct value was finally established.

  In terms of the earth’s diameter, Hipparchus of Nicaea, about 150 B.C., worked out the distance to the moon. He used a method that had been suggested a century earlier by Aristarchus of Samos, the most daring of all Greek astronomers. The Greeks had already surmised that eclipses of the moon were caused by the earth coming between the sun and the moon. Aristarchus saw that the curve of the earth’s shadow as it crossed the moon should indicate the relative sizes of the earth and the moon. On this basis, geometric methods offered a way to calculate how far distant the moon was in terms of the diameter of the earth. Hipparchus, repeating this work, calculated that the moon’s distance from the earth was 30 times the earth’s diameter. If Eratosthenes’ figure of 8,000 miles for the earth’s diameter was correct, the moon must be about 240,000 miles from the earth. This figure again happens to be about correct.

  Figure 2.1. Eratosthenes measured the size of the earth from its curvature. At noon, on 21 June, the sun is directly overhead at Syene, which lies on the Tropic of Cancer. But, at the same time, the sun’s rays, seen from farther north in Alexandria, fall at an angle of 7.5 degrees to the vertical and therefore cast a shadow. Knowing the distance between the two cities and the length of the shadow in Alexandria, Eratosthenes made his calculations.

  But finding the moon’s distance was as far as Greek astronomy managed to carry the problem of the size of the universe—at least correctly. Aristarchus had made a heroic attempt to determine the distance to the sun. The geometric method he used was absolutely correct in theory, but it involved measuring such small differences in angles that, without the use of modern instruments, he was unable to get a good value. He decided that the sun was about 20 times as far as the moon (actually it is about 400 times). Although his figures were wrong, Aristarchus nevertheless did deduce from them that the sun must be at least 7 times larger than the earth. Pointing out the illogic of supposing that the large sun circled the small earth, he decided that the earth must be revolving around the sun.

  Unfortunately, no one listened to him. Later astronomers, beginning with Hipparchus and ending with Claudius Ptolemy, worked out all the heavenly movements on the basis of a motionless earth at the center of the universe, with the moon 240,000 miles away and other objects an undetermined distance farther. This scheme held sway until 1543, when Nicolaus Copernicus published his book, which returned to the viewpoint of Aristarchus and forever dethroned Earth’s position as the center of the universe.

  MEASURING THE SOLAR SYSTEM

  The mere fact that the sun was placed at the center of the solar system did not in itself help determine the distance of the planets. Copernicus adopted the Greek value for the distance of the moon, but he had no notion of the distance of the sun. It was not until 1650 that a Belgian astronomer, Godefroy Wendelin, repeated Aristarchus’ observations with improved instruments and decided that the sun was not 20 times the moon’s distance (5 million miles) but 240 times (60 million miles). The estimate was still too small, but it was much more accurate than before.

  In 1609, meanwhile, the German astronomer Johannes Kepler had opened the way to accurate distance determinations with his discovery that the orbits of the planets were ellipses, not circles. For the first time, it became possible to calculate planetary orbits accurately and, furthermore, to plot a scale map of the solar system: that is, the relative distances and orbit shapes of all the known planets in the system could be plotted. Thus, if the distance between any two planets in the system could be determined in miles, all the other distances could be calculated at once. The distance to the sun, therefore, need not be calculated directly, as Aristarchus and Wendelin had attempted to do. The determination of the distance of any nearer body, such as Mars or Venus, outside the Earth-moon system would do.

  One method by which cosmic distances can be calculated involves the use of parallax. It is easy to illustrate what this term means. Hold your finger about 3 inches before your eyes and look at it first with just the left eye and then with just the right. Your finger will shift position against the background, because you have changed your point of view. Now if you repeat this procedure with your finger farther away—say, at arm’s length—the finger again will shift against the background, but this time not so much. The amount of shift can be used to determine the distance of the finger from your eye.

  Of course, for an object 50 feet away, the shift in position from one eye to the other begins to be too small to measure; we need a wider “baseline” than just the distance between our two eyes. But all we have to do to widen the change in point of view is to look at the object from one spot, then move 20 feet to the right and look at it again. Now the parallax is large enough to be measured easily, and the distance can be determined. Surveyors make use of just this method for determining the distance across a stream or ravine.
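The surveyor's trick just described is ordinary triangle geometry: the object, and the two ends of the baseline, form an isosceles triangle whose apex angle is the observed shift. A minimal sketch (the 20-foot baseline and 2-degree shift are illustrative numbers, not from the text):

```python
import math

def distance_from_parallax(baseline, shift_deg):
    """Distance to an object from the total angular shift it shows
    when viewed from the two ends of a known baseline."""
    half_shift = math.radians(shift_deg) / 2.0
    return (baseline / 2.0) / math.tan(half_shift)

# A 20-foot baseline and a 2-degree shift put the object ~573 feet away:
d = distance_from_parallax(20, 2.0)
```

The smaller the shift for a given baseline, the farther the object; this is why a longer baseline is needed as distances grow.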

  The same method, precisely, can be used to measure the distance to the moon, with the stars playing the role of background. Viewed from an observatory in California, for instance, the moon will be in one position against the stars. Viewed at the same instant from an observatory in England, it will be in a slightly different position. From this change in position, and the known distance between the two observatories (in a straight line through the earth), the distance of the moon can be calculated. Of course, we can, in theory, enlarge the baseline by making observations from observatories at directly opposite sides of the earth; the length of the baseline is then 8,000 miles. The resulting angle of parallax, divided by two, is the geocentric parallax.

  The shift in position of a heavenly body is measured in degrees or in subunits of a degree—minutes and seconds. One degree is 1/360 of the circuit around the sky; each degree is split into 60 minutes of arc, and each minute into 60 seconds of arc. A minute of arc is therefore 1/(360 × 60) or 1/21,600 of the circuit of the sky, while a second of arc is 1/(21,600 × 60) or 1/1,296,000 of the circuit of the sky.
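The unit arithmetic above is worth pinning down, since the rest of the chapter quotes parallaxes in these units:

```python
CIRCLE_ARCMIN = 360 * 60            # 21,600 minutes of arc in a full circle
CIRCLE_ARCSEC = CIRCLE_ARCMIN * 60  # 1,296,000 seconds of arc

# A minute of arc is 1/21,600 of the sky's circuit,
# a second of arc 1/1,296,000 of it:
arcmin_fraction = 1 / CIRCLE_ARCMIN
arcsec_fraction = 1 / CIRCLE_ARCSEC
```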

  Using trigonometry (the interrelationship of the sides and angles of triangles), Claudius Ptolemy was able to measure the distance of the moon from its parallax, and his result agreed with the earlier figure of Hipparchus. It turned out that the geocentric parallax of the moon is 57 minutes of arc (nearly a full degree). The shift is about equal to the width of a twenty-five-cent piece as seen at a distance of five feet. This is easy enough to measure even with the naked eye. But when it came to measuring the parallax of the sun or a planet, the angles involved were too small. The only conclusion that could be reached was that the other bodies were much farther than the moon. How much farther, no one could tell.
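Since the geocentric parallax is the angle the earth's radius subtends as seen from the moon, the 57-minute figure converts directly into a distance. A sketch, assuming the chapter's round value of 4,000 miles for the earth's radius:

```python
import math

def distance_from_geocentric_parallax(earth_radius_miles, parallax_arcmin):
    """The geocentric parallax is the angle subtended by the earth's
    radius at the body's distance, so distance = radius / sin(parallax)."""
    p = math.radians(parallax_arcmin / 60.0)
    return earth_radius_miles / math.sin(p)

# 57 minutes of arc recovers the familiar ~240,000-mile figure:
moon = distance_from_geocentric_parallax(4000, 57)
```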

  Trigonometry alone, in spite of its refinement by the Arabs during the Middle Ages and by European mathematicians of the sixteenth century, could not give the answer. But measurement of small angles of parallax became possible with the invention of the telescope (which Galileo first built and turned to the sky in 1609, after hearing of a magnifying tube that had been made some months earlier by a Dutch spectaclemaker).

  The method of parallax passed beyond the moon in 1673, when the Italian-born French astronomer Jean Dominique Cassini determined the parallax of Mars. He determined the position of Mars against the stars while, on the same evenings, the French astronomer Jean Richer, in French Guiana, was making the same observation. Combining the two, Cassini obtained his parallax and calculated the scale of the solar system. He arrived at a figure of 86 million miles for the distance of the sun from the earth—a figure only 7 percent less than the actual one.

  Since then, various parallaxes in the solar system have been measured with increasing accuracy. In 1931, a vast international project was made out of the determination of the parallax of a small planetoid named Eros, which at that time approached the earth more closely than any heavenly body except the moon. Eros on this occasion showed a large parallax that could be measured with considerable precision, and the scale of the solar system was determined more accurately than ever before. From these calculations and by the use of methods still more accurate than those involving parallax, the distance of the sun from the earth is now known to average approximately 92,965,000 miles, give or take a thousand miles or so. (Because the earth’s orbit is elliptical, the actual distance varies from 91,400,000 to 94,600,000 miles.)

  This average distance is called an astronomical unit (A.U.), and other distances in the solar system are given in this unit. Saturn, for instance, turned out to be, on the average, 887 million miles from the sun, or 9.54 A.U. As the outer planets—Uranus, Neptune, and Pluto—were discovered, the boundaries of the solar system were successively enlarged. The extreme diameter of Pluto’s orbit is 7,300 million miles, or 79 A.U. And some comets are known to recede to even greater distances from the sun. By 1830, the solar system was known to stretch across billions of miles of space, but obviously this was by no means the full size of the universe. There were still the stars.
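Expressing distances in astronomical units is just division by the mean earth-sun distance. A quick check of the figures quoted above:

```python
AU_MILES = 92_965_000   # the chapter's mean earth-sun distance

def miles_to_au(miles):
    return miles / AU_MILES

saturn = miles_to_au(887_000_000)        # ~9.54 A.U.
pluto_orbit = miles_to_au(7_300_000_000) # ~78.5 A.U., quoted as 79
```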

  THE FARTHER STARS

  The stars might, of course, still exist as tiny objects set into the solid vault of the sky that formed the boundary of the universe just outside the extreme limits of the solar system. Until about 1700, this remained a rather respectable view, although there were some scholars who did not agree.

  As early as 1440, a German scholar, Nicholas of Cusa, maintained that space was infinite, and that the stars were suns stretching outward in all directions without limit, each with a retinue of inhabited planets. That the stars did not look like suns but appeared as tiny specks of light, he attributed to their great distance. Unfortunately, Nicholas had no evidence for these views but advanced them merely as opinion. The opinion seemed a wild one, and he was ignored. In 1718, however, the English astronomer Edmund Halley, who was working hard to make accurate telescopic determinations of the position of various stars in the sky, found that three of the brightest stars—Sirius, Procyon, and Arcturus—were not in the positions recorded by the Greek astronomers. The change was too great to be an error, even allowing for the fact that the Greeks were forced to make naked-eye observations. Halley concluded that the stars are not fixed to the firmament after all, but that they move independently, like bees in a swarm. The movement is so slow that, until the telescope became available, it went unnoticed and the stars seemed fixed.

  The reason this proper motion is so small is that the stars are so distant from us. Sirius, Procyon, and Arcturus are among the nearer stars, and their proper motion eventually became detectable. Their relative proximity to us makes them seem so bright. Dimmer stars are, in general, farther away, and their proper motion remained undetectable even over the time that elapsed between the Greeks and ourselves.

  The proper motion itself, while testifying to the distance of the stars, did not actually give us the distance. Of course, the nearer stars should show a parallax when compared with the more distant ones. However, no such parallax could be detected. Even when the astronomers used as their baseline the full diameter of the earth’s orbit around the sun (186 million miles), looking at the stars from the opposite ends of the orbit at half-year intervals, they still could observe no parallax. Hence, even the nearest stars must be extremely distant. As better and better telescopes failed to show a stellar parallax, the estimated distance of the stars had to be increased more and more. That they were visible at all at the vast distances to which they had to be pushed made it plain that they must be tremendous balls of flame like our own sun. Nicholas of Cusa was right.

  But telescopes and other instruments continued to improve. In the 1830s, the German astronomer Friedrich Wilhelm Bessel made use of a newly invented device, called the heliometer (“sun measure”) because it was originally intended to measure the diameter of the sun with great precision. It could be used equally well to measure other distances in the heavens, and Bessel used it to measure the distance between two stars. By noticing the change in this distance from month to month, he finally succeeded in measuring the parallax of a star (figure 2.2). He chose a small star in the constellation Cygnus, called 61 Cygni. His reason for choosing it was that it showed an unusually large proper motion from year to year against the background of the other stars and thus must be nearer than the others. (This steady proper motion should not be confused with the back-and-forth shift against the background that indicates parallax.) Bessel pinpointed the successive positions of 61 Cygni against “fixed” neighboring stars (presumably much more distant) and continued observations for more than a year. Then, in 1838, he reported that 61 Cygni had a parallax of 0.31 second of arc—the width of a twenty-five-cent piece as seen from a distance of 10 miles! This parallax, observed with the diameter of the earth’s orbit as the baseline, meant that 61 Cygni was about 64 trillion (64,000,000,000,000) miles away—9,000 times the width of our solar system. Thus, compared with the distance of even the nearest stars, the solar system shrinks to an insignificant dot in space.
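The stellar case is the same triangle as before, with the earth's orbit supplying the baseline. Taking the quoted parallax as the angle subtended by one astronomical unit (half the orbit's diameter), a minimal sketch recovers a distance of the same order as the 64 trillion miles quoted; the exact figure depends on the precise parallax and earth-sun distance adopted.

```python
import math

AU_MILES = 92_965_000

def star_distance_miles(parallax_arcsec):
    """Stellar parallax: the angle subtended by 1 A.U. at the star,
    so distance = 1 A.U. / tan(parallax)."""
    p = math.radians(parallax_arcsec / 3600.0)
    return AU_MILES / math.tan(p)

cygni = star_distance_miles(0.31)   # roughly 6 x 10**13 miles
```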

  Figure 2.2. Parallax of a star measured from opposite points on the earth’s orbit around the sun.

  Because distances in trillions of miles are inconvenient to handle, astronomers shrink them by giving them in terms of the speed of light—186,282 miles per second. In a year, light travels 5,880,000,000,000 (nearly 6 trillion) miles. That distance is therefore called a light-year. In terms of this unit, 61 Cygni is about 11 light-years away.
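The light-year figure follows directly from the speed of light and the length of a year, and it converts Bessel's result into the unit used for the rest of the chapter:

```python
LIGHT_MPS = 186_282                    # speed of light, miles per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600
LIGHT_YEAR_MILES = LIGHT_MPS * SECONDS_PER_YEAR  # ~5.88 x 10**12 miles

# 61 Cygni's 64 trillion miles, in light-years:
cygni_ly = 64e12 / LIGHT_YEAR_MILES    # ~10.9, i.e. about 11 light-years
```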

  Two months after Bessel’s success (so narrow a margin by which to lose the honor of being the first!), the British astronomer Thomas Henderson reported the distance of the star Alpha Centauri. This star, located low in the southern skies and not visible north of the latitude of Tampa, Florida, is the third brightest in the heavens. It turned out that Alpha Centauri has a parallax of 0.75 second of arc, more than twice that of 61 Cygni. Alpha Centauri is therefore correspondingly closer. In fact, it is only 4.3 light-years from the solar system and is our nearest stellar neighbor. Actually it is not a single star, but a cluster of three.
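Because distance is inversely proportional to parallax, Henderson's result can be cross-checked against Bessel's without any trigonometry. This rough scaling (using the text's rounded figures) lands close to the 4.3 light-years quoted:

```python
# Distance scales as 1/parallax, so scaling from 61 Cygni
# (0.31 second of arc, about 11 light-years):
alpha_centauri_ly = 11 * (0.31 / 0.75)   # ~4.5 light-years
```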

  In 1840, the German-born Russian astronomer Friedrich Wilhelm von Struve announced the parallax of Vega, the fourth brightest star in the sky. He was a little off in his determination, as it turned out, but understandably, because Vega’s parallax is very small and it is much farther away—27 light-years.

  By 1900, the distances of about seventy stars had been determined by the parallax method (and by the 1980s, many thousands). One hundred light-years is about the limit of the distance that can be measured with any accuracy, even with the best instruments. And beyond are countless stars at much greater distances.
