Great Calculations: A Surprising Look Behind 50 Scientific Inquiries
Similar geometric reasoning concerning the areas in the diagram in figure 4.6(b) corresponding to the equal time intervals AC, CI, and IO leads Galileo to his famous result that in a series of equal time intervals, the distances traveled “will be to one another as are the odd numbers from unity, that is, as 1, 3, 5, 7,…” The total distances traveled are of course like 1, 1+3 = 4, 1+3+5 = 9, 1+3+5+7 = 16,…, “that is as the squares of the times.”
These examples illustrate Galileo's geometric way of working; the geometric figure with its line lengths and areas replaces the algebra in a modern calculation.
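This rule is easy to check numerically in modern terms. The short Python sketch below (an illustration in today's notation, not Galileo's geometric argument) assumes the constant-acceleration law s = at²/2 for a body starting from rest and prints the distances covered in successive equal time intervals, together with the running totals.

```python
# Numerical check of Galileo's odd-number rule, using the modern
# constant-acceleration law s = (1/2) * a * t**2 for a body starting from rest.
a = 1.0                      # acceleration, in arbitrary units
times = [0, 1, 2, 3, 4]      # the ends of four equal time intervals
totals = [0.5 * a * t**2 for t in times]
steps = [totals[i + 1] - totals[i] for i in range(len(totals) - 1)]

print(steps)        # [0.5, 1.5, 2.5, 3.5] -> in the ratios 1 : 3 : 5 : 7
print(totals[1:])   # [0.5, 2.0, 4.5, 8.0] -> in the ratios 1 : 4 : 9 : 16, the squares of the times
```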
4.5.2 Projectile Motion
Day Four of the dialogues is called On Projectile Motion, and here Galileo settles an old question: how to determine the path of a projectile and discover its form. Galileo has already treated equable motion in a horizontal plane, and now he must describe motion in a vertical plane. He explains that
a heavy body has from nature an intrinsic principle of moving toward the center of heavy objects (that is, of our terrestrial globe) with a continually accelerated movement, and always equally accelerated.16
Hence Galileo can use his results for uniformly accelerated motion for movement toward the center of the earth. He can now deal with a projectile by considering together, but independently, its horizontal and vertical motions. He has one of the debaters clearly make this point:
Assuming that the transverse motion is always kept equable, and that the natural downward motion likewise maintains its tenor of always accelerating according to the squared ratio of times; and also that such motions, or their speeds, in mixing together, do not alter, disturb, or impede one another.17
Because it involves equable or uniform motion, a horizontal distance traveled is a measure of the time taken. Galileo thus traces out a motion in which the vertical distance traveled in a given time is proportional to the square of the horizontal distance traveled in that same time. Comparing this with the geometry of a parabola, he reaches his monumental conclusion:
When a projectile is carried in motion compounded from equable horizontal and from naturally accelerated downward motions, it describes a semiparabolic line in its movement.18
The great question is answered: using his geometric method of calculation Galileo has shown that projectiles trace out parabolic paths.
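In modern algebraic notation (a restatement of the result, not Galileo's own geometric reasoning), the conclusion takes only a line. With constant horizontal speed v and constant downward acceleration g, the two independent motions combine as

```latex
\[
  x = vt, \qquad y = \tfrac{1}{2} g t^{2}
  \quad\Longrightarrow\quad
  y = \frac{g}{2v^{2}}\, x^{2},
\]
```

so the vertical drop grows as the square of the horizontal distance, which is the defining property of a parabola.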
4.5.3 The Maximum Range Property
Using his result, Galileo can now settle another question: At what angle should a projectile be launched to give maximum range? Galileo uses his geometric method to show that
the maximum projection, or amplitude of semiparabola (or whole parabola) will be that corresponding to the elevation of half a right angle.19
Thus the launch angle should be 45° to get the maximum range. Galileo gives a table of numerical results giving the range for launch angles at one-degree intervals, and he limits the extent of the table by using another of his results: the range is reduced by the same amount whether the launch angle is increased or decreased from 45° by a given number of degrees. Thus the range is the same for launch angles of 48° and 42°, for angles 35° and 55°, or for angles 30° and 60°.
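A modern reader can reproduce the gist of Galileo's table using the standard range formula R = v²sin(2θ)/g for level ground with no air resistance (a later algebraic result, not Galileo's geometric calculation). The Python sketch below, with an illustrative launch speed, shows both the maximum at 45° and the equal ranges for angles equidistant from it.

```python
import math

v, g = 100.0, 9.8   # illustrative launch speed (m/s) and gravitational acceleration (m/s^2)

def projectile_range(theta_deg):
    """Range on level ground, ignoring air resistance: R = v^2 sin(2*theta) / g."""
    return v**2 * math.sin(math.radians(2 * theta_deg)) / g

for angle in (30, 35, 42, 45, 48, 55, 60):
    print(f"{angle:2d} degrees -> range {projectile_range(angle):7.1f} m")

# The printed ranges peak at 45 degrees, and the pairs 42/48, 35/55, and 30/60
# come out equal, because sin(90 + x) = sin(90 - x).
```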
Galileo was clearly proud of his achievement, and he has one debater saying the demonstration “is full of marvel and delight.” He has also demonstrated how “the knowledge of one single effect acquired through its causes opens the mind to the understanding and certainty of other effects without need of recourse to experiments.” Undoubtedly a worthy addition to my list is calculation 11, Galileo describes projectile motion.
4.5.4 Afterward
Galileo recognizes that his model for projectile motion does not include air resistance. A theory including air resistance had to wait for Newton and the idea of forces in dynamics. Except for a very special case, the force due to air resistance couples the horizontal and vertical motions and destroys the parabolic nature of the trajectory. Eventually other refinements, like the effect of projectile shape and spin, were built into calculations. However, the most basic result of all—a first-approximation parabolic trajectory—was established by Galileo with his crystal-clear explanation of its dependence on independent horizontal and vertical motions. His use of his theory to probe properties of projectile motion and give numerical tables provided a fine methodological example for other scientists to follow.
4.6 PREDICTING TIDES
Tides have always been of importance for sailors and those managing port facilities. People living near the oceans and seas need to know when their houses are safe from floods and how best to fish, maintain oyster beds, and otherwise interact with water and beaches. Along with this, there has always been a curiosity about the origin of tides and a desire to find a useful understanding of the mechanism driving them. (The book by Cartwright is an excellent introduction to this subject. The 1882 paper by Lord Kelvin and the 1953 paper by Doodson tell the story behind this topic as seen by two of the scientists involved.)
The basic mechanism for the tides was set out by Isaac Newton in his Philosophiae Naturalis Principia Mathematica: it is the slight variation over the surface of the earth in the gravitational force exerted on water by the moon and the sun, along with the rotation of the earth, that produces the tides. Many people followed Newton in developing the mathematics of tides, with Laplace making outstanding advances. The problem is difficult and made almost impossibly complex by the great variations in topography limiting the motion of oceans and rivers. Only the very simplest of situations can be analyzed from first principles in any detail. However, the theory does supply crucial data to be used in methods for predicting water levels generated by tides.
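The key point, in modern notation (not spelled out in this section), is that the tide-raising effect depends not on the pull itself but on how the pull varies across the earth. For a body of mass M at distance d, the difference between its gravitational acceleration at the near side of the earth's surface and at the earth's center scales as

```latex
\[
  \Delta g \;\approx\; \frac{GM}{(d-R)^{2}} - \frac{GM}{d^{2}}
  \;\approx\; \frac{2GMR}{d^{3}},
\]
```

where R is the earth's radius. The 1/d³ dependence is why the nearby moon contributes more to the tides than the far more massive but much more distant sun.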
4.6.1 A Simpler, Pragmatic Approach
Suppose we ask for the water level at some particular place; the port of Dover, for example. We can use a tide gauge to make records of the level over a long time period and, although such records will reveal intricate variations, we may hope to use them to predict levels at some future time. But how to do that? The idea of using harmonic analysis was introduced by Sir William Thomson, later Lord Kelvin. He was a great believer in the general methods set out by Fourier (which I will discuss in detail in chapter 12), so naturally he suggested that at time t the water level or height H(t) should be represented by a sum of components each of which varies harmonically with a particular frequency. Mathematically, assuming ten components are required, we write H(t) in terms of sine functions as

H(t) = A1 sin(ω1t + θ1) + A2 sin(ω2t + θ2) + … + A10 sin(ω10t + θ10).   (4.3)
The nth component contributes an amount oscillating in time with angular frequency ωn (and hence period 2π/ωn). The strength of its contribution is measured by its amplitude An. The phase angle θn specifies how it is shifted in time. If the set of frequencies is given, then the amplitudes and phases are to be chosen so that the total sum of components matches the tidal record for a particular site. Two questions obviously arise: How do we choose the frequencies? How do we find the appropriate amplitudes and phases?
It may not be possible to predict tides using the complete gravitational theory, but knowing that the moon and the sun are the drivers of the tides tells us that the frequencies ωn to use in equation (4.3) are those involved in their various orbital motions. There are also shallow-water motions that introduce nonlinear effects and double-frequency terms. In Thomson's (Kelvin's) language (see appendix B in his book coauthored with Tait), the constituents in the theory are:
The mean lunar semi-diurnal.
The mean solar semi-diurnal.
The larger elliptic semi-diurnal.
The luni-solar diurnal declinational.
The lunar diurnal declinational.
The luni-solar semi-diurnal declinational.
The smaller elliptic semi-diurnal.
The solar diurnal declinational.
The lunar quarter-diurnal, or first shallow-water tide of mean lunar semi-diurnal.
The luni-solar quarter-diurnal, the shallow-water tide.20
The next step is to find the amplitudes and phases using the data in the tidal records. Sir William Thomson proposed using the method of harmonic analysis, which allows a general function to be decomposed into a number of periodic components as explained above. (This is a technical matter, and the interested reader is referred to Cartwright (chapter 8); Thomson's 1882 lecture; Thomson and Tait (articles 57 to 77 and appendix B, part 7); the expository article by Tony Phillips; and the discussion of Fourier methods that we will come to later in chapter 11. A very instructive example of the tides at San Diego is given on the website overseen by the Center for Operational Oceanographic Products and Services.)
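One modern way to carry out this step, given the constituent frequencies, is an ordinary least-squares fit: writing each term An sin(ωnt + θn) as an sin(ωnt) + bn cos(ωnt) makes the problem linear in the unknowns. The Python sketch below is only illustrative; it fits a synthetic record generated from two made-up constituents so that the recovered amplitudes and phases can be checked against the inputs.

```python
import numpy as np

# Synthetic hourly "tide gauge" record built from two made-up constituents plus noise.
rng = np.random.default_rng(0)
omegas = np.array([0.5059, 0.5236])            # assumed constituent frequencies (radians per hour)
true_A, true_theta = [2.0, 0.7], [0.3, 1.1]    # amplitudes (m) and phases (radians) to recover
t = np.arange(0.0, 24 * 30, 1.0)               # one month of hourly samples
H = 3.0 + 0.05 * rng.standard_normal(t.size)   # mean level plus measurement noise
for A, w, th in zip(true_A, omegas, true_theta):
    H += A * np.sin(w * t + th)

# Least-squares fit: design-matrix columns are 1, sin(w1 t), cos(w1 t), sin(w2 t), cos(w2 t).
design = np.column_stack([np.ones_like(t)] +
                         [f(w * t) for w in omegas for f in (np.sin, np.cos)])
coef, *_ = np.linalg.lstsq(design, H, rcond=None)

print(f"mean level ~ {coef[0]:.2f} m")
for k, w in enumerate(omegas):
    a, b = coef[1 + 2 * k], coef[2 + 2 * k]
    A, theta = np.hypot(a, b), np.arctan2(b, a)   # back to amplitude and phase
    print(f"omega = {w:.4f} rad/h: A ~ {A:.2f} m, theta ~ {theta:.2f} rad")
```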
Thus a method is established in which the astronomical data and the tidal records are used to give a formula for the tidal heights, as in equation (4.3). We must now recall that the 1870s are well before our electronic computer era and appreciate what a difficult and tiresome business it was to calculate H(t) by hand using that equation.
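By way of contrast, here is roughly what that calculation amounts to on a present-day computer: a minimal Python sketch of equation (4.3) using hypothetical harmonic constants (for a real port the values would come from a fit like the one sketched above).

```python
import math

# Hypothetical harmonic constants for one site:
# (amplitude A_n in metres, angular frequency omega_n in radians per hour, phase theta_n in radians).
constituents = [
    (2.0, 0.5059, 0.3),   # an M2-like (lunar semi-diurnal) term; values illustrative only
    (0.7, 0.5236, 1.1),   # an S2-like (solar semi-diurnal) term
    (0.3, 0.2625, 2.0),   # a diurnal term
]

def tide_height(t_hours, mean_level=3.0):
    """Water level H(t): a mean level plus the sum of harmonic components."""
    return mean_level + sum(A * math.sin(w * t_hours + theta) for A, w, theta in constituents)

for t in range(0, 25, 6):
    print(f"t = {t:2d} h -> H = {tide_height(t):5.2f} m")
```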
4.6.2 Tide-Predicting Machines
Sir William Thomson lived at a time when machines were being developed for carrying out calculations, and he drew together various ideas to design a tide-predicting machine. Thomson's machine is shown in figure 4.7. Essentially, a rope moves up and down, controlling a pen whose trace indicates the required height H(t). The rope threads around a series of pulleys, each of which mechanically represents one of the components in the tide-predicting formula (so ten pulleys for equation (4.3)). The pulleys move up and down in harmonic motion at the defined frequencies, and their motions are set according to the amplitudes and phases in the formula. Finally, in the complete machine, the harmonic-motion-generating devices are all linked by a series of gear wheels to a shaft, and turning the shaft is equivalent to time evolution in the tide-predicting formula. (For sketches of the complete mechanism see figure 11.1 in the Smith and Wise biography of Kelvin; Thomson's 1882 lecture; and the cited Wikipedia article. Photographs of actual machines are in the Wikipedia article and in the Parker paper.) Today we call Thomson's machine an analogue computer.
Figure 4.7. Thomson's tide-predicting machine. From Wikipedia, user Terry0051.
Tide-predicting machines stand as a gleaming brass tribute to Thomson's (and others') ingenuity. The ten-component machine was built with the help of Edward Roberts and Alexander Légé in 1872. Thomson's fifteen-component machine could run off a year's worth of data in about twenty-five minutes. Other machines followed; for example, William Ferrel in the United States constructed a nineteen-component predictor in 1882. Later machines built in the twentieth century used up to forty components. The tide-predicting machines were widely used in several countries and, once local conditions were matched, they produced accurate data.
4.6.3 A Wartime Challenge
Knowledge of tides played a crucial part in military planning in the Second World War. A turning point came in 1944 when the Allies planned to invade France. Hitler ordered Field Marshal Rommel to prepare defenses against such an invasion, and Rommel responded by placing thousands of obstacles of all types on the beaches likely to be used. (The paper by Bruce Parker is a wonderful account of this part of history.) Those planning the invasion thus needed detailed information about all the conditions facing an invading force. The magnitude of the task is explained by Parker:
The Allies would certainly have liked to land at high tide, as Rommel expected, so their troops would have less beach to cross under fire. But the underwater obstacles changed that. The Allied planners now decided that initial landings must be soon after low tide so that demolition teams could blow up enough obstacles to open a corridor through which the following landing craft could navigate to the beach. The tide also had to be rising, because the landing craft had to unload troops and then depart without danger of being stranded by the receding tide.
There were also nontidal constraints. For secrecy, Allied forces had to cross the English Channel in darkness. But naval artillery needed about an hour of daylight to bombard the coast before landings. Therefore, low tide had to coincide with first light, with the landings to begin one hour after. Airborne drops had to take place the night before, because the paratroopers had to land in darkness. But they also needed to see their targets, so there had to be a late-rising Moon.21
These constraints had to be built into one of the most important calculations ever made. The range of data made available for one of the Normandy beaches is shown in figure 4.8.
Figure 4.8. Data on tides and light conditions for Omaha beach, June 5–21, 1944. Parker's caption: “Tidal and illumination diagram for Omaha beach, 5–21 June, 1944, shows one of the formats in which Doodson's predictions were provided to military commanders. The diagram gives not only tides but also moonlight and degrees of twilight. Times are given in Greenwich Mean Time.” Reprinted with permission, from “The Tide Predictions for D-Day,” by Bruce Parker, Physics Today (September 2011). © 2011, American Institute of Physics.
The tight requirements meant that D-Day could only be on the 5th, 6th, or 7th of June, 1944. (In fact, weather conditions led to the 6th being chosen.) The crucial tide calculations were in the hands of Arthur Doodson, one of the great figures in this area of research. He used Thomson's 1872 machine (overhauled in 1942 to handle twenty-six components) and a Roberts-designed machine built in 1906, which incorporated forty components. It is hard to imagine that critical decisions made in World War II were linked directly to the initiatives of Lord Kelvin some seventy years earlier. Surely no one will quibble with my choice of calculation 12, tide predictions.
4.7 OTHER CANDIDATES
Some readers may call for other calculations affecting life on Earth to be included. An obvious example is weather forecasting, which has gradually become more reliable as fast electronic computers have become available. Like most other phenomena on Earth, the weather is extremely complex, and forecasting it depends heavily on the available input data.
The book by Wainwright and Mulligan gives a careful introduction to environmental modeling, and a set of contributed papers provides many examples. The theme of the book is “finding simplicity in complexity.” This is an area where great progress is being made, but the approximations used to give viable models will always be the subject of debate. Surely Perry and Lord Kelvin would approve. At the forefront of such activities are the question of climate change and the prediction of future temperature rises and such things as the extent of the polar ice cover. There will continue to be intense scrutiny of the results of various climate models, but one day I am sure they will make the lists of great calculations.
4.8 STYLES OF CALCULATION
This has been a long chapter, but before moving on, it is useful to look back and note the very different ways that calculations are made. We have seen calculations:
that involved only simple arithmetic;
that required the solution of a differential equation and the use of assumptions and observed data to fit the solution to the problem being investigated;
that needed geometrical ray tracing, evaluation of times along ray paths, and the matching to experimental data;
that used diagrams to represent physical processes and required the use of geometry to analyze them;
and that made use of known data to suggest a formula which could be evaluated using an analogue computer.
In which we see the roles played by mathematical models in describing and understanding the solar system, and meet some of the great scientists involved.
In this chapter, I turn to questions about our home, our planet Earth, in the larger framework of its position in the solar system. Many people now live in cities and regions where light pollution hides the dramatic nature of the night sky. But anyone camping out in a remote location experiences the breathtaking beauty and grandeur of the night sky, which must have equally enthralled, and perhaps overawed, our ancient ancestors. It is also the case that today we do not rely so much on the sky for navigation and for ideas about what is happening to us and why that might be so (although astrology columns still appear in many newspapers and magazines). Nevertheless, we are still fascinated by the stars and the planets, and things like the missions to Mars still make big news.
Thus, in ancient times, the elements of the night sky were more familiar and a cause of wonderment and curiosity. In particular, people were familiar with the objects in the solar system: the sun, the moon and the five visible planets and their daily, monthly, yearly, or longer-time motions viewed with the fixed stars as a background. It is no surprise then that astronomy, both observational and theoretical, played a major part in ancient science. The calculations in this chapter trace the evolution of that ancient astronomy into our modern picture of the solar system.
5.1 AN EARLY PINNACLE
All ancient civilizations were interested in astronomy and astrology. Over the centuries, a large number of observations were collected, and astronomical ideas and calculations were accumulated. (See Thurston or Pedersen for a concise summary.) This culminated with the publication around 150 CE of Ptolemy's Almagest, which reviews and builds on early work. Ptolemy's Almagest may be thought of as the astronomical equivalent of the mathematical compilation forming Euclid's Elements. Strangely, we know little of the personal lives of either of those great writers. Claudius Ptolemy almost certainly lived around the years 100 to 175 CE in Alexandria in Greco-Roman Egypt. He was a brilliant and highly productive man with published works ranging over astronomy, optics, musical theory, astrology, geography, and cartography.
The original Greek title of Ptolemy's book was Mathematical Compilation, and later it became known as The Great (or Greatest) Compilation. Like so much early science and mathematics, it was preserved in the Arab world with the title Al-majisti, and later in the medieval translation into Latin it became Almagestum, and hence today we use Almagest. To see why it is simply “the greatest,” we need only look at G. J. Toomer's magnificent translation, a book running to over six hundred pages. (Toomer's biographical article is also a standard reference.) In Toomer's words, the Almagest