Maxwell also established a research programme at the Cavendish, designed to produce an accurate standard of electrical measurement, in particular the unit of electrical resistance, the ohm. Because of the huge expansion of telegraphy in the 1850s and 1860s, this was a matter of international importance, and Maxwell’s initiative boosted Britain to the head of this field while establishing the Cavendish as pre-eminent in dealing with practical problems and devising new forms of instrumentation. It was this latter fact, as much as anything, that helped the laboratory play such a crucial role in the golden age of physics, between 1897 and 1933. Cavendish scientists were said to have ‘their brains in their fingertips’.6
Maxwell died in 1879 and was succeeded by Lord Rayleigh, who built on his work, but retired after five years to his estates in Essex. The directorship then passed, somewhat unexpectedly, to a twenty-eight-year-old, Joseph John Thomson, who had, despite his youth, already made a reputation in Cambridge as a mathematical physicist. Universally known as ‘J. J.’, Thomson, it can be said, kick-started the second scientific revolution, which created the world we have now. The first scientific revolution, it will be recalled from Chapter 23, occurred–roughly speaking–between the astronomical discoveries of Copernicus, published in 1543, and those of Isaac Newton, centring on gravity and published in 1687 as Principia Mathematica. The second scientific revolution would revolve around new findings in physics, biology, and psychology.
But physics led the way. It had been in flux for some time, due mainly to a discrepancy in the understanding of the atom. As an idea, the atom–an elemental, invisible and indivisible substance–went back to ancient Greece, as we have seen. It was built on in the seventeenth century, when Newton conceived it as rather like a minuscule billiard ball, ‘hard and impenetrable’. In the early decades of the nineteenth century, chemists such as John Dalton had been forced to accept the theory of atoms as the smallest units of elements, in order to explain chemical reactions–how, for example, two colourless liquids, when mixed together, immediately formed a white solid or precipitate. Similarly, it was these chemical properties, and the systematic way they varied, combined with their atomic weights, that suggested to the Russian Dimitri Mendeleyev, playing ‘chemical patience’ with sixty-three cards at Tver, his estate 200 miles from Moscow, the layout of the periodic table of elements. This has been called ‘the alphabet out of which the language of the universe is composed’ and suggested, among other things, that there were elements still to be discovered. Mendeleyev’s table of elements would dovetail neatly with the discoveries of the particle physicists, linking physics and chemistry in a rational way and providing the first step in the unification of the sciences that would be such a feature of the twentieth century.
Newton’s idea of the atom was further refined by Maxwell, when he took over at the Cavendish. In 1873 Maxwell introduced into Newton’s mechanical world of colliding miniature billiard balls the idea of an electro-magnetic field. This field, Maxwell argued, ‘permeated the void’–electric and magnetic energy ‘propagated through it’ at the speed of light.7 Despite these advances, Maxwell still thought of atoms as solid and hard and essentially mechanical.
The problem was that atoms, if they existed, were too small to observe with the technology then available. Things only began to change with Max Planck, the German physicist. As part of the research for his PhD, Planck had studied heat conductors and the second law of thermodynamics. This law was initially identified by Rudolf Clausius, a German physicist who had been born in Poland, though Lord Kelvin had also had some input. Clausius first presented his law in 1850, and it stipulates what anyone can observe: that energy dissipates as heat when work is done and, moreover, that the heat cannot be reorganised into a useful form. This otherwise common-sense observation has very important consequences. One is that since the heat produced–energy–can never be collected up again, can never be useful or organised, the universe must gradually run down into complete randomness: a decayed house never puts itself back together, a broken bottle never reassembles of its own accord. Clausius’ word for this irreversible, increasing disorder was ‘entropy’, and he concluded that the universe would eventually die. In his PhD, Planck grasped the significance of this. The second law shows in effect that time is a fundamental part of the universe, of physics. This book began, in the Prologue, with the discovery of deep time, and Planck brings us full circle. Whatever else it may be, time is a basic element of the world about us, related to matter in ways we do not yet fully understand. Time means that the universe is one-way only, and that therefore the Newtonian, mechanical, billiard-ball picture must be wrong, or at best incomplete, for it allows the universe to operate equally in either direction, backwards and forwards.8
But if atoms were not billiard balls, what were they?
The new physics came into view one step at a time, and emerged from an old problem and a new instrument. The old problem was electricity–what, exactly, was it?* Benjamin Franklin had been close to the mark when he had likened it to a ‘subtile fluid’ but it was hard to go further because the main naturally-occurring form of electricity, lightning, was not exactly easy to bring into the laboratory. An advance was made when it was noticed that flashes of ‘light’ sometimes occurred in the partial vacuums that existed in barometers. This brought about the invention of a new–and as it turned out all-important–instrument: glass vessels with metal electrodes at either end. Air was pumped out of these vessels, creating a vacuum, before gases were introduced, and an electrical current passed through the electrodes (a bit like lightning) to see what happened, how the gases might be affected. In the course of these experiments, it was noticed that if an electric current were passed through a vacuum, a strange glow could be observed. The exact nature of this glow was not understood at first, but because the rays emanated from the cathode end of the electrical circuit, and were absorbed into the anode, Eugen Goldstein called them Cathodenstrahlen, or cathode rays. It was not until the 1890s that three experiments stemming from cathode-ray tubes finally made everything clear and set modern physics on its triumphant course.
In the first place, in November 1895, Wilhelm Röntgen, at Würzburg, observed that when the cathode rays hit the glass wall of a cathode-ray tube, highly penetrating rays were emitted, which he called X-rays (because x, for a mathematician, signified the unknown). The X-rays caused various metals to fluoresce and, most amazingly, were found to pass through the soft tissue of his hand, to reveal the bones within. A year later, Henri Becquerel, intrigued by the fluorescing that Röntgen had observed, decided to see whether naturally-fluorescing elements had the same effect. In a famous but accidental experiment, he put some uranium salt on a number of photographic plates, and left them in a closed (light-tight) drawer. Four days later, he found images on the plates, given off by what we now know was a radio-active source. Becquerel had discovered that ‘fluorescing’ was naturally-occurring radio-activity.9
But it was Thomson’s 1897 discovery which capped everything, produced the first of the Cavendish’s great successes and gave modern physics its lift-off, into arguably the most exciting and important intellectual adventure of the modern world. In a series of experiments J. J. pumped different gases into the glass tubes, passed an electric current through them, and then surrounded them either with electrical fields or with magnets. As a result of this systematic manipulation of conditions, Thomson convincingly demonstrated that cathode ‘rays’ were in fact infinitesimally minute particles erupting from the cathode and drawn to the anode. Thomson further found that the particles’ trajectory could be altered by an electric field and that a magnetic field bent them into a curve.10 More important still, he found that the particles were lighter than hydrogen atoms, the smallest known unit of matter, and were exactly the same whatever the gas through which the discharge passed. Thomson had clearly identified something fundamental–this was in fact the first experimental establishment of the particulate theory of matter.
The ‘corpuscles’, as Thomson called these particles at first, are today known as electrons. It was the discovery of the electron, and Thomson’s systematic examination of its properties, that led directly to Ernest Rutherford’s further breakthrough, a decade later, in conceiving the configuration of the atom as a miniature ‘solar system’, with the tiny electrons orbiting the massive nucleus like planets around the sun. In doing this, Rutherford demonstrated experimentally what Einstein discovered inside his head and revealed in his famous equation, E = mc² (1905): that matter and energy are essentially the same.11 The consequences of these insights and experimental results–which included thermonuclear weapons, and the ensuing political stand-off known as the Cold War–fall outside the time-frame of this book.* But Thomson’s work is important for another reason that does concern us here.
He achieved the advances that he did by systematic experimentation. At the beginning of this book, in the Introduction, it was asserted that the three most influential ideas in history have been the soul, the idea of Europe, and the experiment. It is now time to support this claim. It is most convincingly done by taking these ideas in reverse order.
It is surely beyond reasonable doubt that, at the present time, and for some considerable time in the past, the countries that make up what we call the West–traditionally western Europe and northern America in particular, but with outposts such as Australia–have been the most successful and prosperous societies on earth, in terms of both the material advantages enjoyed by their citizens and the political and therefore moral freedoms they have. (This situation is changing now but these sentiments are true as far as they go.) These advantages are linked, intertwined, in so far as many material advances–medical innovations, printing and other media, travel technology, industrial processes–bring with them social and political freedoms in a general process of democratisation. And these are the fruit, almost without exception, of scientific innovations based on observation, experimentation, and deduction. Experimentation is all-important here as an independent, rational (and therefore democratic) form of authority. And it is this, the authority of the experiment, the authority of the scientific method, independent of the status of the individual scientist, his proximity to God or to his king, and as revealed and reinforced via myriad technologies, which we can all share, that underlies the modern world. The cumulative nature of science also makes it a far less fragile form of knowledge. This is what makes the experiment such an important idea. The scientific method, apart from its other attractions, is probably the purest form of democracy there is.
But the question immediately arises: why did the experiment occur first and most productively in what we call the West? The answer to this shows why the idea of Europe, the set of changes that came about between, roughly speaking, AD 1050 and 1250, was so important. These changes were covered in detail in Chapter 15, but to recap the main points here: Europe was fortunate in not being devastated to the same extent as Asia was by the plague; it was the first landmass that was ‘full’ of people, bringing about the idea of efficiency as a value, because resources were limited; and individuality emerged out of this, and out of developments in the Christian religion, which created a unified culture, which in turn helped germinate the universities where independent thought could flourish and amid which the ideas of the secular and of the experiment were conceived. One of the most poignant moments in the history of ideas surely came in the middle of the eleventh century. In 1065 or 1067 the Nizamiyah was founded in Baghdad (see above, page 274). This was a theological seminary and its establishment brought to an end the great intellectual openness in Arabic/Islamic scholarship, which had flourished for two to three hundred years. Barely twenty years later, in 1087, Irnerius began teaching law at Bologna and the great European scholarship movement was begun. As one culture ran down, another began to find its feet. The fashioning of Europe was the greatest turning-point in the history of ideas.
It may seem odd to some readers that the ‘soul’ should be a candidate as the third of the most influential ideas in history. Surely the idea of God is more powerful, more universal, and in any case isn’t there a heavy overlap? Certainly, God has been a very powerful idea throughout history, and indeed continues to be so across many parts of the globe. At the same time, there are two good reasons why the soul has been–and still is–a more influential and fecund idea than the Deity itself.
One is that, with the invention of the afterlife (which not all religions have embraced), and without which any entity such as the soul would have far less meaning, the way was open for organised religions the better to control men’s minds. During late antiquity and the Middle Ages, the technology of the soul, its relation with the afterlife, with the Deity, and most importantly with the clergy, enabled the religious authorities to exercise extraordinary power. It is surely the idea of the soul which, though it enriched men’s minds immeasurably over many centuries, nevertheless kept thought and freedom back during those same centuries, hindering and delaying progress, keeping the (largely) ignorant laity in thrall to an educated clerisy. Think of Friar Tetzel’s assurance that one could buy indulgences for souls in purgatory, that they would fly to heaven as soon as the coin dropped in the plate. The abuses of what we might call ‘soul technology’ were one of the main factors leading to the Reformation which, despite John Calvin in Geneva, took faith overall away from the control of the clergy, and hastened doubt and non-belief (as was discussed in Chapter 22). The various transformations of the soul (from being contained in semen, in Aristotle’s Greece, to the tripartite soul of the Timaeus, the medieval and Renaissance conception of Homo duplex, the soul as a woman, a form of bird, Marvell’s dialogue between the soul and the body, and Leibniz’s ‘monads’) may strike us as quaint now, but they were serious issues at the time and important stages on the way to the modern idea of the self. The seventeenth-century transformation–from the humours, to the belly and bowels, to the brain as the locus of the essential self–together with Hobbes’ argument that no ‘spirit’ or soul existed, were other important steps, as was Descartes’ reconfiguration of the soul as a philosophical as opposed to a religious notion.12 The transition from the world of the soul (including the afterlife) to the world of the experiment (here and now), which occurred first and most thoroughly in Europe, describes the fundamental difference between the ancient world and the modern world, and still represents the most important change in intellectual authority in history.
But there is another–quite different–reason why, in the West at least, the soul is important, and arguably more important and more fertile than the idea of God. To put it plainly, the idea of the soul has outlived the idea of God; one might even say it has evolved beyond God, beyond religion, in that even people without faith–perhaps especially people without faith–are concerned with the inner life.
We can see the enduring power of the soul, and at the same time its evolving nature, at various critical junctures throughout history. It has revealed this power through one particular pattern that has repeated itself every so often, albeit each time in a somewhat different form. This may be characterised as a repeated ‘turning inwards’ on the part of mankind, a continual and recurrent effort to seek the truth by looking ‘deep’ within oneself, what Dror Wahrman calls our ‘interiority complex’. The first time this ‘turning in’ took place (that we know about) was in the so-called Axial Age (see Chapter 5), very roughly speaking around the seventh to fourth centuries BC. At that time, more or less simultaneously in Palestine, in India, in China, in Greece and very possibly in Persia, something similar was occurring. In each case, established religion had become showy and highly ritualistic. In particular a priesthood had everywhere arisen and had arrogated to itself a highly privileged position: the clerisy had become an inherited caste which governed access to God or the gods, and which profited–in both a material and sacred sense–from its exalted position. In all of the above countries, however, prophets (in Israel) or wise men (the Buddha and the writers of the Upanishads in India, Confucius in China) arose, denounced the priesthood and advocated a turning inward, arguing that the way to genuine holiness was by some form of self-denial and private study. Plato famously thought that mind was superior to matter.13
These men led the way by personal example. Much the same message was preached by Jesus and by St Augustine. Jesus, for example, emphasised God’s mercy, and insisted on an inner conviction on the part of believers rather than the outward observance of ritual (Chapter 7). St Augustine (354–430) was very concerned with free will and said that humans have within themselves the capacity to evaluate the moral order of events or people and can exercise judgement, to decide our priorities. According to St Augustine, to look deep inside ourselves and to choose God was to know God (Chapter 10). In the twelfth century, as was discussed in Chapter 16, there was another great turning inward in the universal Roman Catholic church. There was a growing awareness that inner repentance was what God wanted, not external penance. This was when confession was ordered to be made regularly by the Fourth Lateran Council. The Black Death, in the fourteenth century, had a similar impact. The very great number of deaths made people pessimistic and drove them inwards towards a more private faith (many more private chapels and charities were founded in the wake of the plague, and there was a rise in mysticism). The rise of autobiography in the Renaissance, what Jacob Burckhardt called the ‘abundance of pictures of the inmost soul’, was yet another turning in. In Florence, at the end of the fifteenth century, Fra Girolamo Savonarola, convinced that he had been sent by God ‘to aid the inward reform of the Italian people’, sought the regeneration of the church in a series of jeremiads, terrible warnings of the evil to come unless this inward reform was immediate and total. And of course the Protestant Reformation of the sixteenth century (Chapter 22) was conceivably the greatest ‘turning in’ of all time. In response to the Pope’s claim that the faithful could buy relief for their relatives’ souls ‘suffering in purgatory’, Martin Luther finally exploded and advocated that men did not need the intervention of the clergy to receive the grace of God, that the great pomp of the Catholic church, and its theological stance as ‘intercessor’ between man and his maker, was a nonsense and nowhere supported by the scriptures. He urged a return to ‘true inward penitence’ and said that above all inner contrition was needed for the proper remission of sins: an individual’s inner conscience was what mattered most. In the seventeenth century, Descartes famously turned in, arguing that the only thing man could be certain of was his inner life, in particular his doubt. Late-eighteenth-century/early-nineteenth-century romanticism was likewise a turning-in, a reaction against the Enlightenment, the eighteenth-century attitude/idea that the world could best be understood by science. On the contrary, said the romantics, the one unassailable fact of human experience is inward human experience itself. Following Vico, both Rousseau (1712–1778) and Kant (1724–1804) argued that, in order to discover what we ought to do, we should listen to an inner voice.14 The romantics built on this, to say that everything we value in life, morality above all, comes from within. The growth of the novel and the other arts reflected this view.