Men of Mathematics

by E. T. Bell


  The simplest instances of rates in physics are velocity and acceleration, two of the fundamental notions of dynamics. Velocity is rate of change of distance (or “position,” or “space”) with respect to time; acceleration is rate of change of velocity with respect to time.

  If s denotes the distance traversed in the time t by a moving particle (it being assumed that the distance is a function of the time), the velocity at the time t is ds/dt. Denoting this velocity by v, we have the corresponding acceleration, dv/dt.

  This introduces the idea of a rate of a rate, or of a second derivative. For in accelerated motion the velocity is not constant but variable, and hence it has a rate of change: the acceleration is the rate of change of the rate of change of distance (both rates with respect to time); and to indicate this second rate, or “rate of a rate,” we write d²s/dt² for the acceleration. This itself may have a rate of change with respect to the time; this third rate is written d³s/dt³. And so on for fourth, fifth, . . . rates, namely for fourth, fifth, . . . derivatives. The most important derivatives in the applications of the calculus to science are the first and second.
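
  The idea can be illustrated numerically: a rate of change is approximated by a difference quotient over a small interval of time. The sketch below (in Python, with the path s(t) = t³ chosen arbitrarily for the example) estimates the velocity as a first difference quotient and the acceleration, the rate of a rate, as a second one.

```python
# Numerical illustration of a rate and a rate of a rate, using the
# arbitrary example path s(t) = t**3.

def s(t):
    return t ** 3              # distance traversed up to time t

h = 1e-4                       # a small interval of time

def velocity(t):
    # ds/dt approximated by a central difference quotient
    return (s(t + h) - s(t - h)) / (2 * h)

def acceleration(t):
    # d2s/dt2: the rate of change of the rate of change of distance
    return (s(t + h) - 2 * s(t) + s(t - h)) / h ** 2

t = 2.0
print(velocity(t))       # close to the exact value 3*t**2 = 12
print(acceleration(t))   # close to the exact value 6*t = 12
```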

  * * *

  If now we look back at what was said concerning Newton’s second law of motion and compare it with the like for acceleration, we see that “forces” are proportional to the accelerations they produce. With this much we can “set up” the differential equation for a problem which is by no means trivial—that of “central forces”: a particle is attracted toward a fixed point by a force whose direction always passes through the fixed point. Given that the force varies as some function of the distance s, say as F(s), where s is the distance of the particle at the time t from the fixed point O,

  it is required to describe the motion of the particle. A little consideration will show that

  d²s/dt² = −F(s),

  the minus sign being taken because the attraction diminishes the velocity. This is the differential equation of the problem, so called because it involves a rate (the acceleration), and rates (or derivatives) are the object of investigation in the differential calculus.
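
  To make the differential equation concrete, it can be stepped forward in time numerically: starting from a given distance and velocity, the acceleration prescribed by the equation is used to update the velocity, and the velocity to update the distance. The sketch below takes F(s) = k/s² purely as an example of an attracting force, with k, the starting values, and the time step all chosen arbitrarily.

```python
# Stepping the equation d2s/dt2 = -F(s) forward in time, with the
# inverse-square example F(s) = k/s**2 (all constants arbitrary).

k = 1.0

def F(s):
    return k / s ** 2          # attraction at distance s from O

s, v = 1.0, 0.0                # initial distance and velocity
dt = 1e-4                      # time step
for _ in range(5000):          # advance to t = 0.5
    a = -F(s)                  # acceleration given by the equation
    v += a * dt                # velocity changes at the rate a
    s += v * dt                # distance changes at the rate v

print(s, v)    # the particle has moved toward O and gained speed
```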

  Having translated the problem into a differential equation we are now required to solve this equation, that is, to find the relation between s and t, or, in mathematical language, to solve the differential equation by expressing s as a function of t. This is where the difficulties begin. It may be quite easy to translate a given physical situation into a set of differential equations which no mathematician can solve. In general every essentially new problem in physics leads to types of differential equations which demand the creation of new branches of mathematics for their solution. The particular equation above can however be solved quite simply in terms of elementary functions if F(s) is proportional to 1/s², as in Newton’s law of gravitational attraction. Instead of bothering with this particular equation, we shall consider a much simpler one which will suffice to bring out the point of importance:

  dy/dx = x.

  We are given that y is a function of x whose derivative is equal to x; it is required to express y as a function of x. More generally, consider in the same way

  dy/dx = f(x).

  This asks, what is the function y (of x) whose derivative (rate of change) with respect to x is equal to f(x)? Provided we can find the function required (or provided such a function exists), we call it the anti-derivative of f(x) and denote it by ∫ f(x)dx—for a reason that will appear presently. For the moment we need note only that ∫f(x)dx symbolizes a function (if it exists) whose derivative is equal to f(x).

  By inspection we see that the first of the above equations has the solution ½x² + c, where c is a constant (number not depending on the variable x); thus ∫x dx = ½x² + c.
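
  The same check can be run mechanically with a computer algebra system; the short sketch below uses the sympy library to find the anti-derivative of x and to confirm that differentiating it gives back x (sympy leaves out the additive constant c).

```python
# Verifying that the anti-derivative of x is x**2/2, up to a constant.
import sympy as sp

x = sp.symbols('x')
antiderivative = sp.integrate(x, x)   # sympy omits the constant c
print(antiderivative)                 # x**2/2
print(sp.diff(antiderivative, x))     # differentiating gives back x
```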

  Even this simple example may indicate that the problem of evaluating ∫f(x)dx for comparatively innocent-looking functions f(x) may be beyond our powers. It does not follow that an “answer” exists at all in terms of known functions when an f(x) is chosen at random—the odds against such a chance are an infinity of the worst sort (“non-denumerable”) to one. When a physical problem leads to one of these nightmares, approximate methods are applied which give the result within the desired accuracy.
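
  As an illustration of such approximate methods, the sketch below estimates a definite integral whose integrand, e^(−x²), has no elementary anti-derivative, by adding up the areas of a large number of thin rectangles (the midpoint rule, with the interval and the number of strips chosen arbitrarily).

```python
# Approximating the integral of exp(-x**2) from 0 to 1, an integrand
# with no elementary anti-derivative, by the midpoint rule.
import math

def f(x):
    return math.exp(-x ** 2)

a, b = 0.0, 1.0
n = 100_000                    # number of thin strips
width = (b - a) / n
area = sum(f(a + (i + 0.5) * width) for i in range(n)) * width
print(area)                    # about 0.7468
```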

  With the two basic notions, dy/dx and ∫f(x)dx, of the calculus we can now describe the fundamental theorem of the calculus connecting them. For simplicity we shall use a diagram, although this is not necessary and is undesirable in an exact account.

  Consider a continuous, unlooped curve whose equation is y = f(x) in Cartesian coordinates. It is required to find the area included between the curve, the x-axis and the two perpendiculars AA′, BB′ drawn to the x-axis from any two points A, B on the curve. The distances OA′, OB′ are a, b respectively—namely, the coordinates of A′, B′ are (a, 0), (b, 0). We proceed as Archimedes did, cutting the required area into parallel strips of equal breadth, treating these strips as rectangles by disregarding the top triangular bits (one of which is shaded in the figure), adding the areas of all these rectangles, and finally evaluating the limit of this sum as the number of rectangles is increased indefinitely. This is all very well, but how are we to calculate the limit? The answer is surely one of the most astonishing things a mathematician ever discovered.

  First, find ∫f(x)dx. Say the result is F(x). In this substitute a and b, getting F(a) and F(b). Then subtract the first from the second, F(b) − F(a). This is the required area.

  Notice the connection between y = f(x), the equation of the given curve; dy/dx, which (as seen in the chapter on Fermat) gives the slope of the tangent line to the curve at the point (x, y); and ∫f(x)dx, or F(x), which is the function whose rate of change with respect to x is equal to f(x). We have just stated that the area required, which is a limiting sum of the kind described in connection with Archimedes, is given by F(b) − F(a). Thus we have connected slopes, or derivatives, with limiting sums, or, as they are called, definite integrals. The symbol ∫ is an old-fashioned S, the first letter of the word Summa.

  Summing all this up in symbols, we write for the area in question

  ∫ₐᵇ f(x)dx = F(b) − F(a);

here a is the lower limit of the sum, b the upper limit; and F(b), F(a) are calculated by evaluating the “indefinite integral” ∫f(x)dx, namely, by finding that function F(x) such that its derivative with respect to x is equal to f(x). This is the fundamental theorem of the calculus as it presented itself (in its geometrical form) to Newton and independently also to Leibniz. As a caution we repeat that numerous refinements demanded in a modern statement have been ignored.
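
  The theorem can be checked numerically for a particular case. In the sketch below f(x) = x², a = 1, b = 3, and F(x) = x³/3 (an anti-derivative of f, found by inspection); the Archimedes-style sum of thin strips and the difference F(b) − F(a) come out the same, to within the accuracy of the strips.

```python
# The fundamental theorem checked for f(x) = x**2 between 1 and 3:
# the limiting sum of strip areas agrees with F(b) - F(a).

def f(x):
    return x ** 2

def F(x):
    return x ** 3 / 3          # an anti-derivative of f

a, b = 1.0, 3.0
n = 100_000                    # number of thin strips
width = (b - a) / n
strips = sum(f(a + (i + 0.5) * width) for i in range(n)) * width

print(strips)         # approximately 8.6666...
print(F(b) - F(a))    # exactly 26/3 = 8.6666...
```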

  * * *

  Two simple but important matters may conclude this sketch of the leading notions of the calculus as they appeared to the pioneers. So far only functions of a single variable have been considered. But nature presents us with functions of several variables and even of an infinity of variables.

  To take a very simple example, the volume, V, of a gas is a function of its temperature, T, and the pressure, P, on it; say V = F(T, P)—the actual form of the function F need not be specified here. As T, P vary, V varies. But suppose only one of T, P varies while the other is held constant. We are then back essentially with a function of one variable, and the derivative of F(T, P) can be calculated with respect to this variable. If T varies while P is held constant, the derivative of F(T, P) with respect to T is called the partial derivative (with respect to T), and to show that the variable P is being held constant, a different symbol, ∂, is used for this partial derivative, ∂V/∂T. Similarly, if P varies while T is held constant, we get ∂V/∂P. Precisely as in the case of ordinary second, third, . . . derivatives, we have the like for partial derivatives; thus ∂²V/∂T² signifies the partial derivative of ∂V/∂T with respect to T.
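
  A numerical sketch makes the “one variable held constant” point concrete. Below, the unspecified function F(T, P) is replaced, purely for the sake of an example, by the ideal-gas form V = R·T/P; each partial derivative is then an ordinary difference quotient in which the other variable is not allowed to move.

```python
# Partial derivatives by difference quotients, taking V = R*T/P as a
# stand-in for the unspecified function F(T, P).

R = 8.314                      # an assumed constant

def F(T, P):
    return R * T / P           # volume as a function of T and P

h = 1e-3
T, P = 300.0, 101325.0         # an arbitrary temperature and pressure

dV_dT = (F(T + h, P) - F(T - h, P)) / (2 * h)   # P held constant
dV_dP = (F(T, P + h) - F(T, P - h)) / (2 * h)   # T held constant

print(dV_dT)    # close to R/P for this choice of F
print(dV_dP)    # close to -R*T/P**2 for this choice of F
```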

  The great majority of the important equations of mathematical physics are partial differential equations. A famous example is Laplace’s equation, or the “equation of continuity,” which appears in the theory of Newtonian gravitation, electricity and magnetism, fluid motion, and elsewhere:

  ∂²u/∂x² + ∂²u/∂y² + ∂²u/∂z² = 0.

  In fluid motion this is the mathematical expression of the fact that a “perfect” fluid, in which there are no vortices, is indestructible. A derivation of this equation would be out of place here, but a statement of what it signifies may make it seem less mysterious. If there are no vortices in the fluid, the three component velocities parallel to the axes of x, y, z of any particle in the fluid are calculable as the partial derivatives

  ∂u/∂x, ∂u/∂y, ∂u/∂z

  of the same function u—which will be determined by the particular type of motion. Combining this fact with the obvious remark that if the fluid is incompressible and indestructible, as much fluid must flow out of any small volume in one second as flows into it; and noting that the amount of flow in one second across any small area is equal to the rate of flow multiplied by the area; we see (on combining these remarks and calculating the total inflow and total outflow) that Laplace’s equation is more or less of a platitude.
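
  For a reader who wishes to see the equation at work, the sketch below checks numerically that one particular function, u = 1/r (r being the distance from the origin), satisfies Laplace’s equation at a point away from the origin: the three second partial derivatives, each of appreciable size, add up to very nearly zero.

```python
# Checking that u = 1/r satisfies Laplace's equation away from the origin.
import math

def u(x, y, z):
    return 1.0 / math.sqrt(x * x + y * y + z * z)

h = 1e-3
x, y, z = 1.0, 2.0, 2.0        # any point other than the origin

# second partial derivatives by central differences
d2u_dx2 = (u(x + h, y, z) - 2 * u(x, y, z) + u(x - h, y, z)) / h ** 2
d2u_dy2 = (u(x, y + h, z) - 2 * u(x, y, z) + u(x, y - h, z)) / h ** 2
d2u_dz2 = (u(x, y, z + h) - 2 * u(x, y, z) + u(x, y, z - h)) / h ** 2

print(d2u_dx2, d2u_dy2, d2u_dz2)      # individually of order 0.01
print(d2u_dx2 + d2u_dy2 + d2u_dz2)    # very nearly zero
```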

  The really astonishing thing about this and some other equations of mathematical physics is that a physical platitude, when subjected to mathematical reasoning, should furnish unforeseen information which is anything but platitudinous. The “anticipations” of physical phenomena mentioned in later chapters arose from such commonplaces treated mathematically.

  Two very real difficulties, however, arise in this type of problem. The first concerns the physicist, who must have a feeling for what complications can be lopped off his problem, without mutilating it beyond all recognition, so that he can state it mathematically at all. The second concerns the mathematician, and this brings us to a matter of great importance—the last we shall mention in this sketch of the calculus—that of what are called boundary-value problems.

  Science does not fling an equation like Laplace’s at a mathematician’s head and ask him to find the general solution. What it wants is something (usually) much more difficult to obtain, a particular solution which will not only satisfy the equation but which in addition will satisfy certain auxiliary conditions depending on the particular problem to be solved.

  The point may be simply illustrated by a problem in the conduction of heat. There is a general equation (Fourier’s) for the “motion” of heat in a conductor similar to Laplace’s for fluid motion. Suppose it is required to find the final distribution of temperature in a cylindrical rod whose ends are kept at one constant temperature and whose curved surface is kept at another; “final” here means that there is a “steady state”—no further change in temperature—at all points of the rod. The solution must not only satisfy the general equation, it must also fit the surface-temperatures, or the initial boundary conditions.

  The second is the harder part. For a cylindrical rod the problem is quite different from the corresponding problem for a bar of rectangular cross section. The theory of boundary-value problems deals with the fitting of solutions of differential equations to prescribed initial conditions. It is largely a creation of the past eighty years. In a sense mathematical physics is co-extensive with the theory of boundary-value problems.
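
  The flavour of a boundary-value problem can be conveyed by a drastically simplified case: a thin rod whose two ends are held at fixed temperatures, the steady state along its length being governed by d²T/dx² = 0 (far simpler than the cylindrical rod above, and ignoring any loss of heat from the sides). The differential equation alone has infinitely many solutions; the prescribed end temperatures pick out the one that fits, as the relaxation sketch below shows.

```python
# Steady-state temperature in a thin rod with ends held at 100 and 0:
# the interior satisfies d2T/dx2 = 0, enforced here by repeatedly
# replacing each interior value with the average of its neighbours.

n = 11                         # sample points along the rod
T = [0.0] * n
T[0], T[-1] = 100.0, 0.0       # the prescribed boundary conditions

for _ in range(5000):          # relax until the values settle down
    for i in range(1, n - 1):
        T[i] = 0.5 * (T[i - 1] + T[i + 1])

print([round(t, 1) for t in T])    # a straight-line fall from 100 to 0
```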

  * * *

  The second of Newton’s great inspirations which came to him as a youth of twenty-two or twenty-three in 1666 at Woolsthorpe was his law of universal gravitation (already stated). In this connection we shall not repeat the story of the falling apple. To vary the monotony of the classical account we shall give Gauss’ version of the legend when we come to him.

  Most authorities agree that Newton did make some rough calculations in 1666 (he was then twenty-three) to see whether his law of universal gravitation would account for Kepler’s laws. Many years later (in 1684) when Halley asked him what law of attraction would account for the elliptical orbits of the planets Newton replied at once: the inverse square.

  “How do you know?” Halley asked—he had been prompted by Sir Christopher Wren and others to put the question, as a great argument over the problem had been going on for some time in London.

  “Why, I have calculated it,” Newton replied. On attempting to restore his calculation (which he had mislaid) Newton made a slip, and believed he was in error. But presently he found his mistake and verified his original conclusion.

  Much has been made of Newton’s twenty years’ delay in the publication of the law of universal gravitation as an undeserved setback due to inaccurate data. Of three explanations a less romantic but more mathematical one than either of the others is to be preferred here.

  Newton’s delay was rooted in his inability to solve a certain problem in the integral calculus which was crucial for the whole theory of universal gravitation as expressed in the Newtonian law. Before he could account for the motion of both the apple and the Moon Newton had to find the total attraction of a solid homogeneous sphere on any mass particle outside the sphere. For every particle of the sphere attracts the mass particle outside the sphere with a force varying directly as the product of the masses of the two particles and inversely as the square of the distance between them: how are all these separate attractions, infinite in number, to be compounded or added into one resultant attraction?

  This evidently is a problem in the integral calculus. Today it is given in the textbooks as an example which young students dispose of in twenty minutes or less. Yet it held Newton up for twenty years. He finally solved it, of course: the attraction is the same as if the entire mass of the sphere were concentrated in a single point at its centre. The problem is thus reduced to finding the attraction between two mass particles at a given distance apart, and the immediate solution of this is as stated in Newton’s law. If this is the correct explanation for the twenty years’ delay, it may give us some idea of the enormous amount of labor which generations of mathematicians since Newton’s day have expended on developing and simplifying the calculus to the point where very ordinary boys of sixteen can use it effectively.
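
  The result can be checked by brute force: chop the sphere into a great number of small volume elements, add up the inverse-square pulls of all of them on the outside particle (only the component along the line of centres survives, by symmetry), and compare with the pull of the whole mass placed at the centre. In the sketch below the constant of gravitation and the density are set to 1, the sphere has radius 1, and the particle sits at distance 2 from the centre; all of these are arbitrary choices.

```python
# Attraction of a solid homogeneous sphere (radius 1, density 1) on a
# particle at distance 2, summed element by element and compared with
# the same mass concentrated at the centre (G = 1 throughout).
import math

R, d = 1.0, 2.0
n = 40                         # grid cells along each axis
h = 2 * R / n                  # edge of each small volume element
steps = [-R + (i + 0.5) * h for i in range(n)]

total = 0.0
for x in steps:
    for y in steps:
        for z in steps:
            if x * x + y * y + z * z <= R * R:       # inside the sphere
                r2 = x * x + y * y + (d - z) ** 2
                # component of this element's pull along the line of centres
                total += h ** 3 * (d - z) / r2 ** 1.5

mass = 4.0 / 3.0 * math.pi * R ** 3
print(total)            # summed attraction of all the elements
print(mass / d ** 2)    # point mass at the centre; the two agree closely
```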

  * * *

  Although our principal interest in Newton centers about his greatness as a mathematician we cannot leave him with his undeveloped masterpiece of 1666. To do so would be to give no idea of his magnitude, so we shall go on to a brief outline of his other activities without entering into detail (for lack of space) on any of them.

  On his return to Cambridge Newton was elected a Fellow of Trinity in 1667 and in 1669, at the age of twenty-six, succeeded Barrow as Lucasian Professor of Mathematics. His first lectures were on optics. In these he expounded his own discoveries and sketched his corpuscular theory of light, according to which light consists in an emission of corpuscles and is not a wave phenomenon as Huygens and Hooke asserted. Although the two theories appear to be contradictory both are useful today in correlating the phenomena of light and are, in a purely mathematical sense, reconciled in the modern quantum theory. Thus it is not now correct to say, as it may have been a few years ago, that Newton was entirely wrong in his corpuscular theory.

  The following year, 1668, Newton constructed a reflecting telescope with his own hands and used it to observe the satellites of Jupiter. His object doubtless was to see whether universal gravitation really was universal by observations on Jupiter’s satellites. This year is also memorable in the history of the calculus. Mercator’s calculation by means of infinite series of an area connected with a hyperbola was brought to Newton’s attention. The method was practically identical with Newton’s own, which he had not published, but which he now wrote out, gave to Dr. Barrow, and permitted to circulate among a few of the better mathematicians.

  On his election to the Royal Society in 1672 Newton communicated his work on telescopes and his corpuscular theory of light. A commission of three, including the cantankerous Hooke, was appointed to report on the work on optics. Exceeding his authority as a referee Hooke seized the opportunity to propagandize for the undulatory theory and himself at Newton’s expense. At first Newton was cool and scientific under criticism, but when the mathematician Lucas and the physician Linus, both of Liège, joined Hooke in adding suggestions and objections which quickly changed from the legitimate to the carping and the merely stupid, Newton gradually began to lose patience.

  A reading of his correspondence in this first of his irritating controversies should convince anyone that Newton was not by nature secretive and jealous of his discoveries. The tone of his letters gradually changes from one of eager willingness to clear up the difficulties which others found, to one of bewilderment that scientific men should regard science as a battleground for personal quarrels. From bewilderment he quickly passes to cold anger and a hurt, somewhat childish resolution to play by himself in future. He simply could not suffer malicious fools gladly.

  At last, in a letter of November 18, 1676, he says, “I see I have made myself a slave to philosophy, but if I get free of Mr. Lucas’s business, I will resolutely bid adieu to it eternally, excepting what I do for my private satisfaction, or leave to come out after me; for I see a man must either resolve to put out nothing new, or become a slave to defend it.” Almost identical sentiments were expressed by Gauss in connection with non-Euclidean geometry.

  Newton’s petulance under criticism and his exasperation at futile controversies broke out again after the publication of the Principia. Writing to Halley on June 20, 1688, he says, “Philosophy [science] is such an impertinently litigious Lady, that a man had as good be engaged to lawsuits, as to have to do with her. I found it so formerly, and now I am no sooner come near her again, but she gives me warning.” Mathematics, dynamics, and celestial mechanics were in fact—we may as well admit it—secondary interests with Newton. His heart was in his alchemy, his researches in chronology, and his theological studies.

  It was only because an inner compulsion drove him that he turned as a recreation to mathematics. As early as 1679, when he was thirty-seven (but when also he had his major discoveries and inventions securely locked up in his head or in his desk), he writes to the pestiferous Hooke: “I had for some years last been endeavoring to bend myself from philosophy to other studies in so much that I have long grutched the time spent in that study unless it be perhaps at idle hours sometimes for diversion.” These “diversions” occasionally cost him more incessant thought than his professed labors, as when he made himself seriously ill by thinking day and night about the motion of the Moon, the only problem, he says, that ever made his head ache.

 
