Great Calculations: A Surprising Look Behind 50 Scientific Inquiries


by Colin Pask


  Another natural extension of dynamical problems was from the one-dimensional vibrating string to the two-dimensional vibrating membrane, or drum. Solutions for this problem were obtained by Leonhard Euler in 1764. For a circular membrane and using polar coordinates r and θ as in figure 12.4, Euler found a solution that may be written as

  z(r, θ, t) = A Jn(ωr/c) cos(nθ) cos(ωt),

  where A is a constant amplitude and c is the wave speed on the membrane.

  Bessel functions of higher order have now been introduced. The displacement must be zero at the rim of the membrane, and that condition gives the frequencies of the membrane vibrations in terms of the zeros of the Bessel functions. Euler made some very accurate calculations of a few Bessel function zeros. As usual, superposition may be used to cover all possible vibrations of a drum.
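
  If you would like to see how the zeros of the Bessel functions turn into drum frequencies, here is a minimal modern sketch in Python (my illustration, not Euler's calculation). The wave speed c and membrane radius a are assumed values chosen purely for the example, and SciPy supplies the zeros that Euler had to compute by hand.

```python
# A minimal sketch (illustrative, not Euler's method): the frequencies of a
# circular drum come from the zeros of the Bessel functions J_n.
import math
from scipy.special import jn_zeros

c = 100.0  # wave speed on the membrane (m/s) -- assumed value
a = 0.25   # membrane radius (m) -- assumed value

# The (n, m) mode vibrates at f = c * alpha_nm / (2 * pi * a),
# where alpha_nm is the m-th positive zero of J_n.
for n in range(3):              # order of the Bessel function
    for m, alpha in enumerate(jn_zeros(n, 3), start=1):
        f = c * alpha / (2 * math.pi * a)
        print(f"mode (n={n}, m={m}): zero of J_{n} = {alpha:.6f}, f = {f:.1f} Hz")
```

  The first zero of J0, about 2.405, fixes the fundamental; unlike a string, the higher drum frequencies are not whole-number multiples of it.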

  It soon became clear that any problem involving vibrations or waves in systems with circular or cylindrical symmetry would involve Bessel functions. Fourier in 1822 and Poisson in 1823 found Bessel functions appearing in heat-conduction problems with those same symmetries. We already saw in chapter 9 that Airy's calculation of the diffraction pattern formed by light waves leaving a circular aperture involved the J1 Bessel function. Personally, I spent a long time playing with Bessel functions while working on the way light propagates along optical fibers. (See the book by McLachlan for the viewpoint in 1934.)

  Bessel functions have a habit of popping up in all sorts of places. An example is given by a problem of obvious interest to Bessel himself. When describing the motion of a planet in its elliptical orbit, it is necessary to use the eccentric anomaly E and the mean anomaly M. For an orbit with eccentricity ε the two are linked through Kepler's equation:

  M = E − ε sin(E).

  In the planetary problem, M is given and it is required to find E, something that cannot be done exactly. (See Pask chapter 12 for an introduction to this problem and its analysis by Newton and others.) In 1770, Lagrange showed that a solution to Kepler's equation could be expressed as an infinite series involving Bessel functions:

  E = M + 2 ∑ (1/n) Jn(nε) sin(nM), where the sum runs over n = 1, 2, 3, ….

  This solution has been much studied, including by Bessel himself (see Watson and Dutka).
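
  The series is easy to test numerically today. Here is a small sketch (mine, and only a sketch): it sums the first few terms of the Bessel-function series and checks the answer against Newton's method applied directly to Kepler's equation; the values of M and ε below are arbitrary choices.

```python
# A sketch (not from the book): summing Lagrange's Bessel-function series
# for Kepler's equation and checking it against Newton's method.
import math
from scipy.special import jv  # Bessel function J_n of the first kind

def kepler_series(M, eps, terms=20):
    """E = M + 2 * sum over n >= 1 of J_n(n*eps) * sin(n*M) / n."""
    return M + 2.0 * sum(jv(n, n * eps) * math.sin(n * M) / n
                         for n in range(1, terms + 1))

def kepler_newton(M, eps, tol=1e-14):
    """Newton's method on f(E) = E - eps*sin(E) - M."""
    E = M
    for _ in range(50):
        dE = (E - eps * math.sin(E) - M) / (1.0 - eps * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

M, eps = 1.0, 0.2  # mean anomaly (radians) and eccentricity -- arbitrary choices
print(kepler_series(M, eps))  # the two results should agree
print(kepler_newton(M, eps))  # to many decimal places
```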

  The message is: if you want to use mathematics in science, be prepared to confront Bessel functions!

  12.2.3 Tables

  Sometimes an analytical result can be used to make further progress in science, but very often it is numerical results that are needed. (The books by Watson and McLachlan give an incredible assortment of analytical results involving Bessel functions—sums, products, derivatives and differential equations, recurrence relations, integrals and special forms, and limiting cases.) In chapter 4, we saw Lord Kelvin calculating details of terrestrial phenomena, and he firmly and explicitly set out his views in this famous statement:

  In the physical sciences a first essential step in the direction of learning any subject is to find principles of numerical reckoning and practical methods for measuring some quality connected with it. I often say that when you can measure what you are speaking about and express it in numbers you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.4

  Thinking of this sort led to the development of mathematical tables. The obvious idea is to tabulate results that may be simply read off rather than recalculated every time they are needed. For many readers who grew up in the computer age, the idea of using tables may seem curious, but as labor-saving devices, they were essential. There were even the common “ready reckoners,” which allowed shop assistants and business people to quickly work out quantities required and prices for various goods—a world away from the automatic tills at shop checkouts today.

  We have already seen tables constructed by the Babylonians (chapter 2), tables in the great astronomical works by Ptolemy and Kepler (chapter 5), tables of logarithms (chapter 3), and population and annuity tables (chapter 8). (The book edited by Horsburgh takes 15 pages to list tables available in 1915; a more recent history is given by Campbell-Kelly and collaborators.)

  You would expect early tables to cover things like logarithms and trigonometric functions, but there is also a wealth of information about numbers themselves, some of it quite strange to our modern eyes. For example, there are tables giving one-quarter of the squares of numbers. So if the number is six, say, the table gives ¼(6²) = 9. In 1887, Joseph Blater published a quarter-squares table for all numbers from 1 to 200,000. (See the articles by McFarland and Roegel if you would like to read more about that.) Before you dismiss that as totally weird, let me remind you that we are talking about an era when calculations were done largely without the assistance of a machine. All sorts of clever techniques were invented for making arithmetic easier; one of those techniques involved the use of quarter-square tables. The identity

  xy = ¼(x + y)² − ¼(x − y)²

  tells us how to work out a product of two numbers x and y using the quarter-squares table. Suppose you want 234 times 568 (a tedious task!); we form 234 + 568 = 802 and 568 − 234 = 334, and then consult the quarter-squares table to find that 234 times 568 is 160,801 − 27,889 = 132,912. It takes one simple addition and two easy subtractions along with two table look-ups to get the answer. The same simple set of operations would deal with 119,234 times 78,999 if you had Blater's table on hand. A great amount of ingenuity and mathematical insight was required for the efficient production of mathematical tables, not to mention methods for checking them for errors.
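
  For readers who want to replay the trick, here is a toy version in Python (my reconstruction of the method, not Blater's actual table). Using integer quarter-squares ⌊n²/4⌋ is safe because x + y and x − y always have the same parity, so the discarded fractional quarters cancel.

```python
# A toy reconstruction of multiplying with a quarter-squares table:
# x*y = qs(x + y) - qs(x - y), where qs(n) = floor(n^2 / 4).
def quarter_square(n):
    return (n * n) // 4  # the integer quarter-square, as tabulated

def multiply_by_table(x, y):
    # one addition, one subtraction, two table look-ups, one final subtraction
    return quarter_square(x + y) - quarter_square(abs(x - y))

print(multiply_by_table(234, 568))       # 132912, matching the text
print(multiply_by_table(119234, 78999))  # within the range of Blater's table
```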

  Tables were constructed at a time when the word computer meant a person who did calculations. Teams of these “computers” were organized for big projects, and there was a structure of chiefs and supervisors who set out what was to be calculated, decided which techniques to use, and checked the methods to be applied. (The fascinating history of this work is described in articles in the Campbell-Kelly book and in Grier's When Computers Were Human.)

  Gradually, tables of Bessel functions began to appear, often as part of a larger publication. Bessel himself included tables of J0(x) and J1(x) in his memoir on planetary perturbations. Airy gave tables of Bessel functions in his papers, including J1(x) (actually 2J1(x)/x) in his optics paper referred to in section 9.3.1. Lord Rayleigh included a table of J0(x) and J1(x) values for x between zero and 13.4 in his 1877 Theory of Sound. One of the first extensive tables was published by Ernst Meissel in 1888; he gave J0(x) and J1(x) values to an accuracy of twelve decimal places for values of x ranging from 0 to 15.5 in steps of 0.01. A table with an accuracy of twenty decimal places was published in the Proceedings of the Royal Society (1900) by William Aldis for x increasing in steps of 0.1 from 0 to 6. Just imagine the work required to produce such tables! (For details of other early calculations, see Watson chapter 20.)
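
  Tables like Meissel's can now be regenerated in moments. A sketch of my own using the arbitrary-precision mpmath library prints the first few rows of a J0/J1 table in his format of steps of 0.01; the choice of thirteen significant figures is simply meant to echo his twelve-decimal accuracy.

```python
# A sketch regenerating the opening rows of a Meissel-style table of
# J0(x) and J1(x) with the mpmath arbitrary-precision library.
from mpmath import mp, besselj, nstr

mp.dps = 30  # carry plenty of digits internally

for k in range(6):          # just the first rows; Meissel ran to x = 15.5
    x = mp.mpf(k) / 100     # steps of 0.01
    print(f"{float(x):4.2f}", nstr(besselj(0, x), 13), nstr(besselj(1, x), 13))
```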

  Systematic table preparation and publication was given a boost in 1871 when the British Association for the Advancement of Science established its Mathematical Tables Committee. It will be no surprise to learn that Lord Kelvin was involved. From 1889 onward, a variety of Bessel-function tables were produced, although the type of functions tabulated tended to depend on the particular membership of the committee. (The detailed history of the BA Tables Committee is described in the articles by Mary Croarken.) Eventually, the Bessel Function Subcommittee was formed, and the wonderful Bessel-function tables were published in the British Association series of volumes in 1937, 1952, 1960, and 1964.

  For many scientists (me included), the “bible” in the area was “Abramowitz and Stegun.” This is the Handbook of Mathematical Functions, edited by Milton Abramowitz and Irene Stegun and published in 1964. Digging out my copy, I immediately noticed how worn the pages of F. W. J. Olver's section on Bessel functions are. Extensive tables are given, including those for the values of J0(x) (to fifteen decimal places) and J1(x) (to ten decimal places), and for their first twenty zeros (to ten decimal places). Much of this material was taken from British Association publications.

  Naturally, the arrival of electronic computers meant that it was far simpler to produce mathematical tables. Of course, they soon also made such tables redundant; computers can now easily produce values of Bessel functions and other special functions on demand. It might be hard to appreciate the role played by Bessel functions and their tables in the development of science, but it was crucial and involved a level of expertise and dedication that today we can only marvel at. Calculation 48, tabulating Bessel functions, must find a place in my list of important calculations.

  12.3 ABOUT LINEARITY AND BEYOND

  The previous two topics are intimately connected with the concept of linearity, which is vitally important when describing a great many physical systems. The next two topics relate to what happens when we look at systems taking us beyond linearity, so a little about the concept of linearity and its implications might be in order. (Those readers familiar with the details of linear systems should skip on to section 12.4.)

  Think about a system which has an input p and gives an output q. If the system is linear, we know that

  if the input is multiplied by some number α, then the output will be multiplied by α;

  if an input p1 gives an output q1 and an input p2 gives an output q2, then an input (p1 + p2) gives an output (q1 + q2).

  We can write this symbolically by using L to represent the linear system, and then

  L(p) = q means that L(αp) = αq; and if L(p1) = q1 and L(p2) = q2, then L(p1 + p2) = q1 + q2.

  For example, if the system is simply “multiply by 3” and the input is numbers, we clearly have a linear system because 3(n + m) = 3n + 3m for any numbers n and m. But taking a square root is not a linear process; √9 = 3 and √16 = 4 but √(9 + 16) = √25 = 5 which is not 3 + 4 as it would be if √ was a linear operation.
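
  Those two checks can even be automated. A tiny sketch of my own, purely illustrative: feed a candidate system L two inputs and a multiplier, and test both linearity properties numerically.

```python
# A tiny sketch: numerically testing the two linearity properties
# (additivity and scaling) for a candidate system L.
import math

def is_linear(L, p1, p2, alpha, tol=1e-9):
    additive = abs(L(p1 + p2) - (L(p1) + L(p2))) < tol
    scaling = abs(L(alpha * p1) - alpha * L(p1)) < tol
    return additive and scaling

print(is_linear(lambda p: 3 * p, 9, 16, 2.5))  # True: multiply-by-3 is linear
print(is_linear(math.sqrt, 9, 16, 2.5))        # False: sqrt(25) = 5, not 3 + 4
```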

  Very importantly, the calculus operation of differentiation is a linear operation:

  d(αf + βg)/dx = α(df/dx) + β(dg/dx) for any constants α and β.

  A similar result holds for higher-order derivatives. Thus we are led to the idea of linear differential equations like equations (12.1) and (12.7). In linear differential equations, each term involves only the unknown function or a derivative; there are no terms involving products like y² or y(dy/dx). The linearity property allows us to add solutions (as we have just seen in section 12.1), and this underpins the mathematics discussed in the previous two sections.

  The mathematics of linear systems is highly developed, and the linearity properties are enormously helpful in finding general solutions. We saw an example of that in section 12.1. Essentially, we look for a set of basis solutions and then use the linearity property to write any possible solution in terms of them. The only thing required for a particular solution is to find the coefficients or multiplying constants like the a_n in equations (12.2) and (12.3).
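
  As a concrete sketch of finding such coefficients (mine; equations (12.2) and (12.3) are not reproduced on this page, so this uses the standard Fourier sine-series normalization), take a string plucked into a triangle and project that initial shape onto the sine modes. The length, pluck point, and height are assumed values.

```python
# A sketch of computing modal amplitudes a_n for a plucked string by
# projecting the initial triangular shape onto the sine modes.
import numpy as np

length = 1.0    # string length -- assumed
pluck = 0.25    # pluck position -- assumed
height = 0.01   # pluck height -- assumed

x = np.linspace(0.0, length, 4001)
dx = x[1] - x[0]
# triangular initial shape, raised to `height` at the pluck point
f = np.where(x < pluck, height * x / pluck,
             height * (length - x) / (length - pluck))

for n in range(1, 6):
    mode = np.sin(n * np.pi * x / length)
    a_n = (2.0 / length) * np.sum(f * mode) * dx  # Fourier sine coefficient
    print(f"a_{n} = {a_n:+.6f}")
```

  The amplitudes fall off roughly as 1/n², which is why a plucked string sounds dominated by its lowest modes.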

  Physically, we say that a linear system is defined in terms of its modes, and we will find that the strength or amplitude a_n of a given mode n depends on the particular excitation conditions. (The plucked string is a good example.) There are two properties of such a linear system that should be emphasized.

  First, to describe the system in any particular case we need only specify those amplitudes a_n. They will allow us to calculate anything else we want to know. Those amplitudes fully characterize the behavior of the system.

  Second, the modes of a linear system behave independently, and the a_n do not change with time; each mode maintains its amplitude or strength and thus contains the same energy at all times. The total energy is the sum of the individual modal energies. For example, a plucked string will have several modes vibrating at the same time (each with its own frequency), and each mode carries a certain amount of energy which remains constant for all time. (Of course, there are energy dissipation effects, but they are not considered here.) Light propagating along an optical fiber is carried in the optical modes of the fiber; each mode carries a fixed amount of energy as it propagates, with that modal energy determined by the light directed into the fiber.

  In dynamical systems, particles may be in an equilibrium state and then oscillate around that state if they are displaced. If the forces pulling the particles back toward the equilibrium state depend simply on their displacements, then the system is linear and the equations of motion are linear differential equations. This is the case when Hooke's law holds for elastic systems and springs. The simplest example is a pendulum. If the bob is displaced so the string makes a small angle θ with the vertical, the force pulling the bob back toward the θ = 0 position is proportional to θ, and we get the well-known linear equation for the periodic motion of the pendulum.

  A large number of physical systems behave in a linear manner, and that is why science has made great progress in many areas. However, there are physical systems that are not linear, and, in fact, if most physical systems are pushed to extremes, they start to lose those beautiful linear properties. (If a pendulum is displaced through very large angles, the exact linearity property is lost.) The question becomes: What happens when a physical system behaves in a nonlinear manner? The exact mathematics of nonlinear systems tends to be very difficult, and the next two sections look at other ways of exploring this area of science.
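
  A short numerical sketch (mine) makes the pendulum point concrete: integrate both the full equation θ″ = −(g/l) sin θ and its linearized version θ″ = −(g/l) θ, once for a small starting angle and once for a large one. Here g/l = 1 and the other numbers are arbitrary choices.

```python
# A sketch comparing the nonlinear pendulum with its small-angle
# linearization; for large swings the two solutions drift apart.
import numpy as np
from scipy.integrate import solve_ivp

g_over_l = 1.0  # g/l -- assumed to be 1 for illustration

def nonlinear(t, y):
    theta, omega = y
    return [omega, -g_over_l * np.sin(theta)]

def linear(t, y):
    theta, omega = y
    return [omega, -g_over_l * theta]

t_eval = np.linspace(0, 20, 2001)
for theta0 in (0.1, 2.0):  # small and large initial angles (radians)
    non = solve_ivp(nonlinear, (0, 20), [theta0, 0.0], t_eval=t_eval, rtol=1e-9)
    lin = solve_ivp(linear, (0, 20), [theta0, 0.0], t_eval=t_eval, rtol=1e-9)
    drift = np.max(np.abs(non.y[0] - lin.y[0]))
    print(f"theta0 = {theta0}: max difference over 20 time units = {drift:.4f}")
```

  For the small angle the two curves stay together; for the large one they separate, because the true period lengthens as the swing grows.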

  12.4 A NEW KIND OF EXPERIMENT

  Enrico Fermi was one of that rare breed: a first-rate theorist and a superb experimentalist. We met him in section 11.3 in connection with the theory for weak decay processes and in section 11.8, which noted his pioneering experiment on nuclear chain reactions. Fermi had a wide range of interests, and from the very start of his career, he worked on the properties and statistics of systems of many particles. Fermi was intrigued by questions like: Why do we find irreversibility in nature when so many of the fundamental laws are time reversible? Do systems cycle through all possible states (as suggested in the ergodic hypothesis) and tend to equilibrium states with energy equally distributed over all possible degrees of freedom? (That is called the equipartition of energy.) These are difficult questions to answer, and, in 1955, Fermi reported on a new approach for tackling them. It was an approach that led to surprising results that have been the subject of investigation and extension ever since.

  Fermi worked with John Pasta and Stanislaw (Stan) Ulam at the Los Alamos laboratories, and their first investigations were published in the now-famous “Studies of Nonlinear Problems.” (Perhaps there should have been another author listed—a point I return to in section 12.4.3.)

  The particular problem studied in this report has become known as the FPU problem (after the names of the investigators: Fermi, Pasta, and Ulam). The work can best be introduced by quoting from the beginning of the report itself:

  This report is intended to be the first in a series dealing with the behavior of certain nonlinear physical systems where the non-linearity is introduced as a perturbation to a primarily linear problem. The behavior of the system is to be studied for times which are long compared to the characteristic periods of the corresponding linear problem.5

  The linear problem has characteristic oscillations, and the intention was to find out what happens when the linear force terms have nonlinear components added to them. The report immediately notes a difficulty and the way to get around it:

  The problems in question do not seem to admit of analytic solutions in closed form, and heuristic work was performed numerically on a fast electronic computing machine (MANIAC I at Los Alamos). [There is a footnote accompanying this sentence which I return to in section 12.4.3.]

  Here was the major step: forget trying to find nice analytic solutions to the problem; approximate the equations so that a computer can trace out the solution as a series of numbers. Time as a continuous variable is replaced by a number of discrete intervals. The only way to do this calculation is to use a computer, and some of the first digital computers had been assembled at Los Alamos to support the nuclear bomb program. One of those was the MANIAC—Mathematical Analyzer, Numerical Integrator, and Computer. Fermi had access to the MANIAC, and he recognized the possibilities it could bring to science. One of these was to search for an understanding of the fundamental properties of many-particle nonlinear systems, as the report continues:

  The ergodic behavior of such systems was studied with the primary aim of establishing, experimentally, the rate of approach to the equipartition of energy among the various degrees of freedom of the system.

  Note that word experimentally. Fermi recognized that what they were doing was actually a type of experiment. But instead of watching a real physical system as it moves and changes, they are watching how the output from a set of equations evolves as time goes by in very many discrete steps. Here is a whole new problem for theory: how to suitably approximate the equations to write them in terms of time steps; and then how to choose those steps and make sure that the numbers obtained are a good representation of the exact solution. This is the equivalent of apparatus design for physical experiments, and the literature on numerical analysis and the ideas of computer simulation is now vast.
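
  The core of the trick can be shown in a few lines (a deliberately stripped-down sketch of my own, not the FPU scheme itself): advance even the simplest oscillator, x″ = −x, by small discrete time steps and compare with the exact solution.

```python
# A stripped-down sketch of time-stepping: simple harmonic motion
# x'' = -x advanced in discrete steps, compared with the exact cos(t).
import math

dt, steps = 0.001, 10000   # step size and number of steps -- assumed values
x, v = 1.0, 0.0            # released from rest at displacement 1

for _ in range(steps):
    v -= x * dt            # semi-implicit Euler: update velocity first,
    x += v * dt            # then position; this keeps the orbit stable

t = dt * steps
print(f"numerical x = {x:.5f}, exact cos({t:.0f}) = {math.cos(t):.5f}")
```

  Choosing the step size and checking the numbers against known cases is exactly the kind of "apparatus design" referred to above.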

  12.4.1 The FPU Problem: Definition and Results

  The FPU problem takes a chain or string of up to 64 particles interacting pairwise through a mixture of linear and nonlinear forces. It is usually thought of in terms of particles joined by springs as in figure 12.5 (a). The particles are all identical, as are the springs. Stretching or compressing a spring by an amount s gives a Hooke's law linear force proportional to s when no nonlinearities are present. (The FPU report talks about a “string” rather than a chain of particles as we do here. The book by Kibble gives a good, simple introduction to the model of a vibrating string in terms of discrete masses.)

  Figure 12.5 also shows the case when there are just two particles. In the linear case, there will be two modes for this system, and they are shown in figure 12.5 (b). If the system is set oscillating in one of those forms, it will continue that way for all time. A general displacement of the system will excite a mixture of those two modes, and both will persist, each with its energy fixed at its initial value, for all time. If the spring forces have a nonlinear component, the linear modes can still be used to describe the system, but now they are coupled together and can exchange energy. The modes can be thought of as analogues of the modes in figure 12.1.
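
  To make the setup concrete, here is a bare-bones modern sketch of an FPU-style run (my reconstruction in Python of the quadratic, α-type force law described above, not the original MANIAC code). The chain length, nonlinearity strength, time step, and run length are all assumed values.

```python
# A bare-bones FPU-style experiment: a chain of identical particles with
# linear springs plus a small quadratic nonlinearity, started in the
# lowest mode, with the first few modal amplitudes tracked over time.
import numpy as np

N = 32        # number of moving particles (FPU used up to 64)
alpha = 0.25  # strength of the quadratic nonlinearity -- assumed value
dt = 0.05     # time step for the integrator -- assumed value

def accel(x):
    xp = np.concatenate(([0.0], x, [0.0]))  # fixed walls at both ends
    d = np.diff(xp)                         # spring extensions
    f = d + alpha * d**2                    # linear + quadratic force law
    return f[1:] - f[:-1]                   # net force on each particle

j = np.arange(1, N + 1)
x = np.sin(np.pi * j / (N + 1))             # start in the lowest linear mode
v = np.zeros(N)

modes = np.sin(np.pi * np.outer(j, np.arange(1, 4)) / (N + 1))  # modes 1-3

for step in range(20001):
    if step % 5000 == 0:
        a = 2.0 / (N + 1) * modes.T @ x     # amplitudes of the first modes
        print(f"t = {step * dt:7.1f}, mode amplitudes:", np.round(a, 4))
    a_old = accel(x)                        # velocity-Verlet time step
    x = x + v * dt + 0.5 * a_old * dt**2
    v = v + 0.5 * (a_old + accel(x)) * dt
```

  With α = 0 the chain keeps all its energy in the starting mode; with the nonlinear term switched on, the printed amplitudes show energy leaking into the neighboring modes, which is exactly the situation FPU set out to study.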

 
