by Peter Watson
At first Newton was interested in chemistry, rather than mathematics or physics.23 But, at Trinity College, Cambridge, he started reading Euclid and attended the lectures of Isaac Barrow, the (first) Lucasian professor, and became acquainted with the work of Galileo and others. The early seventeenth century was a time when mathematics became modern, taking a form that resembles what it has now.24 In addition to Newton (1642–1727), Gottfried Leibniz (1646–1716) and Nicholas Mercator (1620–1687) were near-contemporaries and René Descartes (1596–1650), Pierre de Fermat (1601–1665) and Blaise Pascal (1623–1662) not long dead by the time he graduated.25 Among the new mathematical techniques were symbolic expression, the use of letters, the working out of mathematical series, and a number of new ideas in geometry. But most of all, there was the introduction of logarithms, and the calculus.
Some form of decimals had been used by both the Chinese and the Arabs and, in 1585, the French mathematician François Viète had urged their introduction in the West. But it was Simon Stevin, of Bruges, who, in the same year, published in Flemish De thiende (‘The Tenth’; French title La disme), which explained decimals in a way that more or less everyone could understand. Stevin did not use the decimal point, however: he set out the value of pi, for instance, by writing each digit followed by an encircled numeral indicating its place (units, tenths, hundredths and so on).
Instead of the words ‘tenth’, ‘hundredth’ and so on, he used ‘prime,’ ‘second’ etc. It wasn’t until 1617 that John Napier, referring to Stevin’s method, proposed a point or comma as the decimal separatrix.26 The decimal point became standard in Britain but the comma was (and is) widely used elsewhere.
Napier (or Neper) was not a professional mathematician but an anti-Catholic Scottish landowner, the laird of Merchiston, who wrote on many topics. He was interested in mathematics, particularly trigonometry, and he conceived logarithms some twenty years before he published anything. Logarithm takes its name from two Greek words, logos (ratio) and arithmos (number). Napier had been thinking about sequences of numbers since 1594, and while he was ruminating on the problem he was visited by a Dr John Craig, physician to James VI of Scotland (the future James I of England), who told him of the use of prosthaphaeresis in Denmark. Craig, almost certainly, had been with James when he crossed the North Sea to meet his bride-to-be, Anne of Denmark. A storm had forced the party ashore not far from Tycho Brahe’s observatory and, while awaiting an improvement in the weather, they had been entertained by the astronomer and the device of prosthaphaeresis had been mentioned.27 This term, derived from a Greek word meaning ‘addition and subtraction’, was a set of rules for converting the product (i.e., multiplication) of functions into a sum or difference. This is essentially what logarithms are: numbers, viewed geometrically, are converted to ratios, and in this way multiplication becomes a matter of simple addition or subtraction, making calculation much, much easier.* The tables Napier started were completed and refined by Henry Briggs, the first Savilian professor of mathematics at Oxford. Briggs eventually produced logarithms for all numbers up to 100,000.28
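Napier's central trick can be sketched in a few lines of modern code: to multiply two numbers, add their logarithms, then take the antilogarithm of the sum. The numbers below are arbitrary illustrations, and the computer looks up the logarithms that a seventeenth-century calculator would have found in Briggs' tables.

```python
import math

# Multiplying two numbers the way a user of Napier's and Briggs' tables would:
# look up each logarithm, add them, then look the sum back up (the antilog).
a, b = 4567.0, 89.12

log_sum = math.log10(a) + math.log10(b)   # addition replaces multiplication
product = 10 ** log_sum                    # the antilogarithm recovers the product

print(product)   # agrees with a * b to rounding error
print(a * b)
```

With printed tables in place of `math.log10`, this reduces a long multiplication to two look-ups and one addition, which is exactly why the technique transformed astronomy and navigation.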
It is no criticism of Newton’s genius to say, therefore, that he was fortunate to be the intellectual heir of so many illustrious predecessors. The air had, so to speak, been primed. Of his many sparkling achievements we may begin with pure mathematics, where his greatest innovation was the binomial theorem, which led to his idea of the infinitesimal calculus.29 The calculus is essentially an algebraic method for understanding (i.e., calculating and measuring) the variation in properties (such as velocities) which may be altered in infinitesimal differences, that is, in properties that are continuous. In our study at home we may have 200 books or 2,000, or 2,001, but we don’t have 200¾ books, or 2,001½. However, when travelling on a train its speed can vary continuously, infinitesimally, from 0 mph to 186 mph (if it is Eurostar). The calculus concerns infinitesimal differences and is important because it helps explain the way so much of our universe varies.
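The core idea can be illustrated numerically. Taking a hypothetical motion law, position equals 3t² (an invented example, not one of Newton's), the average speed over ever smaller time intervals settles towards a single value: the instantaneous speed that the calculus computes exactly.

```python
# The calculus treats quantities that vary continuously. A minimal sketch:
# estimate an instantaneous speed by shrinking the time interval towards zero.

def s(t):
    return 3 * t ** 2   # position after time t (an illustrative motion law)

t = 2.0
for dt in (1.0, 0.1, 0.001, 0.000001):
    speed = (s(t + dt) - s(t)) / dt   # average speed over a shrinking interval
    print(dt, speed)

# The estimates are 15.0, 12.3, 12.003, 12.000003... closing in on 12.0,
# the exact derivative ds/dt = 6t at t = 2 that differentiation yields directly.
```

The limiting value is what Newton's "fluxions" and Leibniz's differentials deliver in one algebraic step, with no shrinking intervals required.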
The measure of Newton’s advance may be seen from the fact that, for a time, he was the only person who could ‘differentiate’ (calculate the rate at which a quantity changes from one instant to the next). The technique was so difficult that when he wrote his greatest book, Principia Mathematica, he did not use differential notation as he thought no one would understand it. Published in 1687, Philosophiae naturalis principia mathematica, to give the book its full title, has been described as ‘the most admired scientific treatise of all times’.30
But Newton’s main achievement was his theory of gravitation. As J. D. Bernal points out, although Copernicus’ theory was accepted widely by this time, ‘it was not in any way explained’. One problem had been pointed up by Galileo: if the earth really was spinning, as Copernicus had argued, ‘why was there not a terrific wind blowing all round, blowing in the opposite direction to that in which the earth was rotating, from west to east?’31 At the speed the earth was alleged to be rotating, the wind generated should destroy everything. There was at that stage no conception of the atmosphere, so Galileo’s objection seemed reasonable.32 Then there was the problem of inertia. If the planet was spinning, what was pushing it? Some people proposed that it was pushed by angels but that didn’t satisfy Newton. Aware of Galileo’s work on pendulums, he introduced the notion of the centrifugal force.33 Galileo had begun with the swinging pendulum before moving on to circular pendulums. And it was this, the circular pendulum, which led to the concept of the centrifugal force which, in turn, led Newton to his idea that it was gravity which held the planets in, while they swing around perfectly freely. (In the case of the circular pendulum, gravity is represented by the weight of the bob and its tendency towards the centre.)
The beauty of Newton’s solution to the problem of gravity is astounding to modern mathematicians, but we should not overlook the fact that the theory was itself part of the changing attitudes in the wider society. Although no serious thinker any longer believed in astrology, the central problem in astronomy had been to understand the workings of the divine mind. By Newton’s day, however, the aim was much less theological and rather more practical: the calculation of longitude. Galileo had already used the satellites of Jupiter as a form of clock, but Newton wanted to understand the more fundamental laws of motion. Though his main interest was in these fundamentals, he was not blind to the fact that a set of tables–based on them–would be very practical.
The genesis of the idea has been recreated by historians of science. To begin with, G. A. Borelli, an Italian, introduced the notion of something he called gravity, as a balancing force against the centrifugal force–otherwise, he said, the planets would just fly off at a tangent. Newton had grasped this too, but he went further, arguing that, to account for an elliptical orbit, where a planet moves faster the closer it gets to the sun, the force of gravity ‘must increase to balance the increased centrifugal force’. It follows that gravity is a function of the distance. But what function? Robert Hooke, the talented son of a clergyman from the Isle of Wight, who was in charge of the plans to rebuild the City of London after the Great Fire of 1666, had gone so far as to measure the weight of different objects deep in a mine shaft, and at the very summit of a church steeple. But his instruments were nowhere near accurate enough to confirm what he was looking for. From France, Descartes, who had sought his own copy of Galileo’s Two Systems, came up with the idea of the solar system as a form of whirlpool or vortex: as objects approach the centre of the whirlpool, so they are sucked in, unless they have enough spin to keep them out.34 These ideas were all close to the truth but not the real thing. The breakthrough came with Edmund Halley. A passionate astronomer, he had sailed as far south as St Helena to observe the heavens of the southern hemisphere. Halley, who was to help pay for the printing of the Principia, urged several scientists, among them Hooke, Wren and Newton, to work on the proof of the inverse square law. Beginning with Kepler, several scientists had suspected a precise relation between the period of an elliptical orbit and its size (Kepler had found that the square of the period is proportional to the cube of the mean distance from the sun), but no one had done the work to show that such a relation follows from an inverse square law of attraction. At least, no one had published anything. In fact, Newton, sitting in Cambridge, hard at work on what he considered the much more important problems of prisms, had already solved the inverse square law but, not sharing the modern scientist’s urge to publish, had kept the results to himself. Goaded by Halley, however, he finally divulged his findings. He sat down and wrote the Principia, ‘the bible of science as a whole and in particular the bible of physics’.35
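For the simplified case of a circular orbit, the relationship Halley's circle was chasing can be checked in a few lines. This is a modern sketch, not Newton's method: balancing an inverse-square pull against the centrifugal tendency gives an orbital period whose square grows as the cube of the radius, and the ratio T²/r³ comes out the same for every orbit. (GM below is the sun's standard gravitational parameter, a modern measured value.)

```python
import math

GM = 1.327e20  # m^3/s^2, the sun's gravitational parameter (modern value)

def period(r):
    """Period (seconds) of a circular orbit of radius r (metres) under
    inverse-square gravity: GM/r**2 = v**2/r fixes the orbital speed."""
    v = math.sqrt(GM / r)
    return 2 * math.pi * r / v

r_earth = 1.496e11                 # roughly one astronomical unit, in metres
print(period(r_earth) / 86400)     # about 365 days, as it should be

# The inverse-square signature: T**2 / r**3 is identical at every radius.
for r in (r_earth, 2 * r_earth, 5 * r_earth):
    print(period(r) ** 2 / r ** 3)
```

The constancy of that final ratio is Kepler's third law falling straight out of an inverse-square force; extending the argument from circles to ellipses was precisely the hard work Newton had done and not published.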
Like Copernicus’ major work, the Principia is not an easy book to read but there is a clarity of understanding that underlies the more complex prose. In explaining ‘the system of the world’, by which he meant the solar system, Newton identified mass, density of matter–an intrinsic property–and an ‘innate force’, what we now call inertia. In Principia the universe is, intellectually speaking, systematised, stabilised and demystified. The heavens had been tamed and had become part of nature. The music of the spheres had been described in all its beauty. But it had told man nothing of God. Sacred history had become natural history.
It is now accepted by most historians of science that Leibniz discovered the calculus entirely unaware that Newton had discovered it too, nine years earlier. The German (he was born in Leipzig) was no less versatile than his English counterpart–he discovered/invented binary arithmetic (representing numbers as combinations of 0s and 1s), an early form of relativity, the notion that matter and energy are fundamentally the same, and entropy (the idea that the universe will one day run out of usable energy), not to mention his concept of ‘monads’, from the Greek monas, meaning ‘unit’: constituent parts of matter, not just atoms, but incorporating a primitive idea of cells too, the notion that organisms are also made up of parts. In the case of both Leibniz and Newton, however, it is the calculus that represents their highest achievement. ‘Any development of physics beyond the point reached by Newton would have been virtually impossible without the calculus.’36
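Leibniz's binary arithmetic is simple enough to sketch in modern code: any whole number can be written with 0s and 1s alone, and ordinary arithmetic carries over unchanged. The helper below is, of course, a modern illustration rather than Leibniz's own procedure.

```python
def to_binary(n):
    """Binary digits of a non-negative integer, most significant digit first:
    repeatedly split off the remainder on division by two."""
    if n == 0:
        return "0"
    digits = ""
    while n:
        digits = str(n % 2) + digits
        n //= 2
    return digits

print(to_binary(6))        # "110"
print(to_binary(7))        # "111"
# Addition works column by column with carries, exactly as in decimal:
print(to_binary(6 + 7))    # "1101"
```

Two symbols suffice for all of arithmetic, which is why Leibniz's system, a curiosity in his own century, became the foundation of digital computing in ours.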
Beautiful and complete as they were, in their way, the Principia and the calculus represented but two of Newton’s achievements. His other great body of work was in optics. Optics, for the Greeks, involved the study of shadows and mirrors, in particular the concave mirror, which formed an image but could also be used as a burning glass.37 In the late Middle Ages lenses and spectacles had been invented and later still, in the Renaissance, the Dutch had developed the telescope, from which the microscope derived.
Newton had combined two of these inventions, the mirror and the telescope, into the reflecting telescope. He had noticed that images in mirrors never showed the coloured fringes that stars usually had when seen directly through telescopes and he wondered why the fringes occurred in the first place. It was this which led him to experiment with the telescope, which in turn led on to his exploration of the properties of the prism. Prisms were originally objects of fascination because of their link to the rainbow which, in medieval times, had a religious significance. However, anyone with a scientific bent could observe that the colours of the rainbow were produced by the sun’s light passing through water drops in the sky.38 Subsequently it had been observed that the make-up of the rainbow was related to the elevation of the sun, with red rays being bent less than purple ones. In other words, refraction had been identified as a phenomenon but was imperfectly understood.39
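The observation that red rays are bent less than purple ones can be put in modern terms: glass refracts short wavelengths more strongly than long ones. The sketch below uses Snell's law with a wavelength-dependent refractive index; the Cauchy coefficients are typical crown-glass values assumed purely for illustration, not figures from Newton.

```python
import math

def refractive_index(wavelength_nm):
    """Cauchy's approximation n = A + B / wavelength^2; the coefficients
    are typical crown-glass values, assumed for illustration."""
    A, B = 1.5046, 4200.0
    return A + B / wavelength_nm ** 2

def refracted_angle(incident_deg, wavelength_nm):
    """Snell's law: sin(refracted) = sin(incident) / n, angles in degrees."""
    n = refractive_index(wavelength_nm)
    return math.degrees(math.asin(math.sin(math.radians(incident_deg)) / n))

print(refracted_angle(45, 656))   # red light: bent less (larger refracted angle)
print(refracted_angle(45, 400))   # violet light: bent more (smaller angle)
```

That small difference in bending, accumulated across a prism's two faces, is what spreads white light into the spectrum Newton saw on his wall.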
Newton’s first experiments with light involved him making a small hole in the wooden shutter to his rooms in Trinity College, Cambridge. This let in a narrow shaft of light, which he so arranged that it struck a prism and was then refracted on to the wall opposite. Newton observed two things. One, the image was upside down, and two, the light was broken up into its constituent colours. To him it was clear from this that light consisted of rays, and that the different colours were affected by the prism to a different extent. The ancients had had their own concept of light rays but it had been the opposite of Newton’s idea. Previously, light was believed to travel from the observer’s eye to the object being observed. But for Newton light was itself a kind of projectile, shot this way and that from the object looked at: he had in effect identified what we now call photons. In his next experiment, he arranged for the light to come in from the window and pass through a prism, which cast a rainbow of light on to a lens which, in turn, focused the coloured rays on to a second prism which cancelled the effect of the first.40 In other words, given the right equipment, white light could be broken up and put back together again at will. As with his work on the calculus, Newton didn’t rush into print but once his findings were published (by the Royal Society) their wider importance was soon realised. For example, it had been observed since antiquity (in Egypt especially) that stars near the horizon take longer to set and rise sooner than expected. This could be explained if it were assumed that, near Earth, there was some substance that caused light to bend. At that stage there was no understanding of the concept of the atmosphere but it is to Newton’s credit that his observations kick-started this notion. In the same way, he noticed that both diamond and oils refracted light, which made him think that diamond ‘must contain oily material’. 
He was right, of course, in that diamond consists largely of carbon. This too was a forerunner of modern ideas–the twentieth-century discoveries of spectrography and X-ray crystallography.41
Tycho Brahe’s laboratory, on the Danish island of Hveen, has already featured in this story. In 1671 it featured again, when the French astronomer Jean Picard arrived there, to find that the whole place had been destroyed by ignorant locals. As he wandered around, however, traipsing through the ruins, he met a young man who seemed different from the others. Olaus Römer appeared very interested in–and knowledgeable about–astronomy. Touched that the man had worked so hard to better his knowledge, Picard invited Römer back to France. There, under Picard’s guidance, the young man initiated his own observations of the heavens and, very early on, and to his considerable amazement, he discovered that Galileo’s famous theory, based on the orbits of the ‘moons’ of Jupiter, was wrong. The speed of the ‘moons’ was not constant as Galileo had said, but appeared to vary systematically according to the time of the year. When Römer sat back and considered his data quietly, he realised that the speed of the ‘moons’ seemed to be related to how far Jupiter was from the earth. It was this observation which led to Römer’s fantastic insight–that light had a speed. A lot of people took some convincing but the idea did have a precedent of sorts. By watching cannonballs fired on battlefields, soldiers knew all too well that sound had a speed: they saw the smoke from the gun well before they heard the sound of the shot. If sound had speed, was it so far-fetched that light could too?42
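Römer's insight can be turned into a back-of-the-envelope calculation. The eclipses of Jupiter's innermost moon run late by roughly twenty-two minutes (the figure associated with Römer's data) as the earth moves from the near side of its orbit to the far side, because the light must cross one extra orbital diameter. Both numbers below are assumed round values for illustration.

```python
# Estimating the speed of light from the timing of Jupiter's moons.
ORBIT_DIAMETER_M = 2 * 1.496e11    # diameter of the earth's orbit, metres
DELAY_S = 22 * 60                  # accumulated eclipse delay, ~22 minutes

speed_of_light = ORBIT_DIAMETER_M / DELAY_S
print(speed_of_light)   # roughly 2.3e8 m/s, about a quarter below the modern value
```

The answer is about twenty-five per cent too low by modern standards, chiefly because the delay was overestimated, but the essential point stood: light takes time to travel, and its speed can be measured.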
These were enormous advances in physics, reflecting a continuous period of innovation and creative thought. Newton himself, in a famous quote, comparing himself to Descartes, said in a letter to Robert Hooke, ‘If I have seen farther than Descartes, it is because I have stood on the shoulders of giants.’43 But on one question, Newton was wrong, and wrong in an important way. He thought that matter was made up of atoms and set out his view as follows: ‘All these things being consider’d, it seems probable to me, that God in the Beginning form’d Matter in solid, massy, hard, impenetrable, movable Particles, of such Sizes and such Figures, and with such other Properties, and in such Proportion to Space, as most conduced to the End for which he form’d them; and that the primitive Particles being Solids are incomparably harder than any porous Bodies compounded of them; even so very hard, as never to wear or break in pieces…But…compound Bodies being apt to break, not in the midst of solid Particles, but where those Particles are laid together, and only touch in a few points.’44
As we have seen, Democritus had proposed that matter consisted of atoms two thousand years before Newton. His ideas had been elaborated on and introduced into western Europe by Pierre Gassendi, a Provençal priest. Newton had built on this but despite all the innovations he had made, his view of the universe and the atoms within it did not include the concept of change or evolution. As much as he had improved our understanding of the solar system, the idea that it might have a history was beyond him.
In 1543, the year in which Copernicus finally published De revolutionibus orbium coelestium, Andreas Vesalius presented to the world in printed form his book on the structure of the human body. Arguably, this was even more important. Copernicus’ theory never had much direct influence on the thought of the sixteenth century–its theological ramifications would spark controversy only much later. For biology, on the other hand, 1543 is a natural end-point and the beginning of a new epoch, for Vesalius’ observations had an immediate influence.45 Everyone was curious about their own make-up (Vesalius’ students begged him to make charts of the veins and arteries) and it was by no means unusual in the sixteenth century to see anatomical plates of skeletons displayed in barber shops and public baths. Vesalius’ extremely meticulous study of anatomy also raised philosophical speculation about man’s purpose.46
His advances have to be placed in context. Until he published his book, the dominant intellectual force in human biology was still Galen (131–201). It will be recalled from Chapter 9 that Galen was one of the monumental figures in the history of medicine, the last of the great anatomists of antiquity, but one who worked under unfavourable conditions. Ever since Herophilus (born c. 320 BC) and Erasistratus (born c. 304 BC), dissection of the human body had been proscribed and Galen had been forced to make deductions based on his observations of dogs, swine, oxen and the Barbary ape.47 For more than a millennium, almost no advances had been made beyond him. Change had begun only in the time of Frederick II (1194–1250), king of Sicily and Holy Roman Emperor. A general concern for his subjects, combined with a genuine interest in knowledge, led Frederick to decree in 1231 ‘that no surgeon be admitted to practise unless he be learned in the anatomy of the human body’. The emperor backed this with a law that provided for the public dissection of the human body ‘at least once in five years’, at Salerno. This, the initial legislation for dissection, was followed by other states in due course. Early in the following century, the college of medicine for Venice, which was located at Padua, was authorised to dissect a human body once every year. In the early decades of the sixteenth century, Vesalius travelled to Padua for his training.48