
The Ascent of Gravity


by Marcus Chown


  There was no escaping the inconvenient truth. All along, Einstein’s theory of gravity had contained the seeds of its own destruction. Although it correctly predicts light bending, the precession of the perihelion of Mercury and the slowing of time in strong gravity, it also predicts singularities, which are nonsensical. It breaks down in the heart of black holes and at the beginning of time. ‘If we can’t understand what happened at the singularity we came out of, then we don’t seem to have any understanding of the laws of particle physics,’ says Neil Turok of the Perimeter Institute in Waterloo, Canada.

  The breakdown of Einstein’s theory of gravity at singularities can only mean that it is an approximation of a better, deeper theory.

  Quantum gravity

  The two towering achievements of twentieth-century physics are Einstein’s theory of gravity – the general theory of relativity – and quantum theory.27 Each has passed every experimental and observational test with flying colours and each, in its own domain, reigns supreme. General relativity is a theory of big things like stars and the Universe while quantum theory is a theory of small things like atoms and their constituents.28 But near a singularity – either in a black hole or in the big bang – something big is squeezed into a volume smaller than an atom. So, in order to understand what happens at the heart of black holes and, most importantly, shed light on the origin of the Universe, it is necessary to unite general relativity with quantum theory to create a ‘quantum theory of gravity’. This is the name given to the deeper theory, desperately being sought by physicists.

  As early as 1916, Einstein recognised that quantum theory, if it was nature’s last word – and he did not believe that it was – would require a modification of general relativity. ‘Due to the inner-atomic movement of electrons, atoms would have to radiate not only electromagnetic but also gravitational energy, if only in tiny amounts,’ he wrote. ‘As this is hardly true in Nature, it appears that quantum theory would have to modify not only Maxwellian electrodynamics, but also the new theory of gravitation.’29 Some idea of the extreme difficulty in finding a quantum theory of gravity can be appreciated by understanding the bizarreness of quantum theory and how fundamentally different it is from general relativity . . .

  Further reading

  Fölsing, Albrecht, Albert Einstein, Penguin, London, 1998.

  Levenson, Thomas, Einstein in Berlin, Bantam Books, New York, 2003.

  Levenson, Thomas, The Hunt for Vulcan . . . And how Albert Einstein destroyed a planet, discovered relativity and deciphered the Universe, Head of Zeus, London, 2015.

  Miller, Arthur, Empire of the Stars: Friendship, betrayal and obsession in the quest for black holes, Little, Brown, London, 2005.

  PART THREE

  Beyond Einstein

  8

  A quantum of space-time

  How quantum theory implies that space and time are doomed and must somehow emerge from something more fundamental

  On Mondays, Wednesdays and Fridays, we teach the wave theory and on Tuesdays, Thursdays and Saturdays the particle theory.

  William Bragg1

  Your theory is crazy, but is it crazy enough to be true?

  Niels Bohr2

  Quantum theory is fantastically successful. It has given us lasers and computers and nuclear reactors, and explains why the ground beneath our feet is solid and the Sun shines. But, in addition to being a recipe for understanding things and building things, quantum theory provides a unique window on a counterintuitive, Alice in Wonderland world just beneath the skin of reality. It is a place where a single atom can be in two locations at once – the equivalent of you being in New York and London at the same time; a place where things happen for absolutely no reason at all; and a place where two atoms can influence each other instantaneously even if on opposite sides of the Universe.

  The need for quantum theory arose out of Maxwell’s theory of electromagnetism, which describes all electrical and magnetic phenomena in one elegant and seamless framework. The theory actually contains not one but two paradoxes, both of which involve light. The resolution of the first – how there can be a unique speed of light in a vacuum, independent of the speed of any observer – led to one of the great revolutions in twentieth-century physics: Einstein’s special theory of relativity. The resolution of the second paradox led to the other great revolution: quantum theory.

  The second paradox arises because Maxwell’s theory permits electromagnetic waves of any size. So, in addition to visible light, which has a ‘wavelength’ of just under a thousandth of a millimetre, it is possible to have waves with a longer wavelength such as radio waves – discovered by Heinrich Hertz in 1888 – and waves of shorter wavelength such as X-rays – discovered by Wilhelm Röntgen in 1895. The size of the wave is related to the energy it carries: sluggish radio waves are far less energetic than waves of visible light which in turn are far less energetic than rapidly oscillating X-rays.
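
  To put rough numbers on this: wavelength and frequency are tied together by the wave speed, a standard relation not spelled out in the text, and the example values below are round figures.

```latex
c = f\lambda, \qquad c \approx 3\times10^{8}\ \mathrm{m\,s^{-1}}
% radio:    \lambda \approx 1\ \mathrm{m}              \Rightarrow f \approx 3\times10^{8}\ \mathrm{Hz}
% visible:  \lambda \approx 5\times10^{-7}\ \mathrm{m} \Rightarrow f \approx 6\times10^{14}\ \mathrm{Hz}
% X-rays:   \lambda \approx 10^{-10}\ \mathrm{m}       \Rightarrow f \approx 3\times10^{18}\ \mathrm{Hz}
```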

  In a hot gas of atoms, light waves are repeatedly emitted and absorbed, and the result, if enough time passes, is the creation of all possible light waves. In such a state of ‘thermal equilibrium’, the energy is shared out equally among all the wavelengths. But herein lies a problem. Although there is a limit on how long the wavelength of light can be – set by the size of any container – there is no corresponding limit on how short the wavelength can be. This means that if we pick a wavelength – any wavelength whatsoever – there will always be a finite number of waves with longer wavelength but, crucially, an infinite number with shorter wavelength.

  As pointed out before, the energy must be shared out equally among all the waves. Since there are hugely more waves with shorter wavelengths than longer wavelengths, this means that most of the energy will always be carried by the shorter waves. Inevitably, then, all the energy in a hot gas will end up in the highest-energy X-rays. Before the discovery of X-rays in 1895, the highest-energy light known was ultraviolet, which was why this result became known as the ‘ultraviolet catastrophe’.3
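
  A toy calculation makes the divergence concrete. The sketch below is a minimal illustration, assuming a cubical box and the rough estimate that the number of standing-wave modes with wavelength at least λ scales as (2L/λ)³; the box size and temperature are arbitrary choices, not figures from the text.

```python
# A toy version of the counting argument behind the ultraviolet catastrophe.
K_B = 1.380649e-23   # Boltzmann's constant, joules per kelvin
L = 0.1              # side of the container in metres (assumed)
T = 300.0            # temperature in kelvin (assumed)

def modes_longer_than(wavelength):
    """Approximate number of standing-wave modes with wavelength at
    least `wavelength`: roughly (2L/wavelength)^3, always finite."""
    return (2 * L / wavelength) ** 3

# Classical equipartition hands every mode the same share of energy, kT.
for lam in (1e-2, 1e-5, 1e-8, 1e-11):   # cut-offs from 1 cm down past X-ray scales
    n = modes_longer_than(lam)
    print(f"cut-off {lam:.0e} m: ~{n:.1e} modes, classical energy ~{n * K_B * T:.1e} J")

# The mode count, and hence the classical energy, grows without limit as
# the cut-off shrinks: the shorter waves always swamp the longer ones.
```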

  The paradox is put in its most dramatic form by considering our Sun. Theory predicts that our local star should instantly radiate away all of its heat in a blinding flash of X-rays. So how is it that it is still shining? ‘There is hardly any paradox without utility,’ wrote the German mathematician Gottfried Leibniz. And by finding the revolutionary answer in 1900, the German physicist Max Planck proved him right.

  Quanta

  The ‘killer app’ for electricity in the late nineteenth century was the light bulb. A question of key technical and economic importance was therefore: How do you maximise the visible light given out by the heated filament of a light bulb? There was obviously no chance of answering such a question while the best theory of light predicted that a hot filament – just like the hot gas of the Sun – should instantly radiate all its light in a flash of X-rays.

  What was needed was a way to tame light and so avoid the nonsensical scenario of the ultraviolet catastrophe. And Planck, after a great deal of head-scratching and mental torment, finally found one.

  According to Maxwell’s theory, an oscillating electric charge such as an electron radiates light at its oscillation ‘frequency’. Actually, the theory says that an accelerated charge broadcasts electromagnetic radiation, but an oscillating charge is simply one that is repeatedly accelerating. So, Planck imagined a container whose walls are made of electrons attached like weights to springs. Nowadays, of course, we know that Planck’s oscillating electrons exist inside atoms but, at the end of the nineteenth century, many physicists still doubted the existence of atoms. Planck’s picture of electrons on springs is good enough, however.

  If the container is heated, the heat energy makes the springs oscillate, and the oscillating springs produce oscillating light waves at exactly the same frequency. These waves cross the container and are absorbed by other oscillating springs, which in turn produce oscillating light waves at their own frequencies. And the result of all the countless interactions is that the heat energy is shared out equally between all the springs and light waves. This is the situation in which the highest-frequency light waves get most of the energy because they are overwhelmingly more common.

  Planck realised that this catastrophe can be tamed if the oscillating springs are not free to give out or absorb any amount of energy whatsoever but are instead restricted to giving out or absorbing energy at only multiples of a basic amount. The amount, he proposed, was h times their frequency, f, where h was a very small number (frequency is defined as the number of oscillations per second).
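
  Put in modern notation, Planck’s proposal restricts an oscillator of frequency f to a ladder of energies, with h now measured to be about 6.6 × 10⁻³⁴ joule-seconds:

```latex
E_n = n\,hf, \qquad n = 0, 1, 2, \ldots, \qquad h \approx 6.6\times10^{-34}\ \mathrm{J\,s}
```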

  Think how ridiculous this is. It is like a high-jumper being able only to jump heights that are multiples of, say, 0.5 metres. In other words, they can jump 0.5 metres, or 1.0 metre, or 1.5 metres. But they cannot in any circumstances jump a height of 0.75 metres or 1.2 metres or 1.81 metres.

  There was no plausible reason why Planck’s atom-springs should only give out energy in multiples of hf. His scheme was utterly mad. He came up with it for one reason and one reason only: it worked. It correctly predicted the way in which the amount, or intensity, of light from a gas of hot atoms varied with frequency, or equivalently energy.

  According to Planck, an oscillator cannot simply absorb light and then emit light at a slightly higher energy. It can emit light at only the next highest energy permissible. It is an all-or-nothing thing. If the oscillator does not have enough energy to make the light, the light simply does not get made. So, when the energy gets shared out between the light waves, crucially, the highest-frequency waves do not get the lion’s share of that energy, or even any energy at all. They are simply too energy-expensive. With the highest-energy light tamed in this way, there is no ultraviolet catastrophe.
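
  The taming shows up in the standard textbook formula for the average energy of such an oscillator in thermal equilibrium, ⟨E⟩ = hf/(e^(hf/kT) − 1). Here is a minimal sketch of it; the temperature of 6,000 kelvin, roughly the Sun’s surface, is an assumed value.

```python
# A minimal sketch of how Planck's quanta tame the high frequencies.
import math

H = 6.62607015e-34    # Planck's constant, joule-seconds
K_B = 1.380649e-23    # Boltzmann's constant, joules per kelvin
T = 6000.0            # roughly the Sun's surface temperature (assumed)

def planck_average_energy(f):
    """Average energy of an oscillator of frequency f when energy comes
    only in whole quanta of hf:  hf / (exp(hf/kT) - 1)."""
    return H * f / math.expm1(H * f / (K_B * T))

for f in (1e12, 1e14, 1e15, 1e16):   # infrared up to X-ray frequencies
    print(f"f = {f:.0e} Hz: classical {K_B * T:.1e} J, Planck {planck_average_energy(f):.1e} J")

# At low frequencies the two agree (each mode gets about kT); at high
# frequencies the quanta are too energy-expensive and the average
# plunges towards zero, so there is no catastrophe.
```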

  The paradox of travelling alongside a light beam and seeing something impossible arose because Newton’s theory imposed no limit on the speed of a body. The paradox of the ultraviolet catastrophe arose because Maxwell’s theory imposed no limit on the smallness of the wavelength of light. Just as Einstein’s finite speed-of-light limit tamed the infinite, Planck’s quantum tamed the infinitesimal.

  To Planck, his scheme was nothing more than a mathematical fudge. Though he claimed that energy was absorbed by atoms in discrete chunks, or ‘quanta’, with energy always a multiple of hf, he did not for a moment think that light actually flew through space in this way. That claim was left to Einstein, who spawned two revolutions: relativity and quantum theory. In his ‘miraculous year’ of 1905, he wondered about the striking similarity between Planck’s formula for the spread of energy among the different wavelengths of light in a container and Maxwell’s formula for the spread of energy among the particles of a gas.

  Maxwell was a genius who, despite dying at the tragically early age of forty-eight, made key contributions not only to electromagnetism but also to astronomy and the microscopic theory of gases. To obtain his formula for the distribution of energy among particles of a gas, he imagined atoms flying about like tiny bullets and worked out how countless collisions between them, each of which transferred energy from fast-moving to slow-moving particles, shared out the total energy. The striking similarity between Maxwell’s formula and Planck’s formula, Einstein reasoned, could mean only one thing: light too consists of bullet-like particles. What Planck had considered to be no more than a mathematical sleight of hand was reality. Light really is emitted and absorbed in particle-like chunks, later christened ‘photons’.

  We now know that everything comes in indivisible chunks, or quanta: energy, matter, electric charge, and so on. On the smallest scales nature is not continuous, as classical physics imagined, but grainy, like a newspaper photograph inspected close-up.

  The ‘physical constant’ h became known as Planck’s constant. Because it is extremely small, the energy carried by a single photon is minuscule and so we never notice that the light from a light bulb is in fact a torrent of tiny bullets. There are simply too many of them.
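
  An order-of-magnitude sketch shows just how torrential the torrent is. The 60-watt output and 500-nanometre wavelength below are assumed round numbers, and all the output is treated as visible light.

```python
# Order-of-magnitude sketch of why the graininess of light goes unnoticed.
H = 6.62607015e-34     # Planck's constant, joule-seconds
C = 2.99792458e8       # speed of light, metres per second

power = 60.0                        # watts radiated (assumed)
wavelength = 5e-7                   # metres, roughly green light (assumed)

photon_energy = H * C / wavelength  # E = hf = hc / wavelength
print(f"energy per photon:  {photon_energy:.1e} J")
print(f"photons per second: {power / photon_energy:.1e}")
# Around 10^20 photons every second: far too many to notice one at a time.
```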

  To visualise what h does to the microscopic world, imagine it is possible to make it bigger and bigger until its consequences become apparent in the everyday world. Eventually, individual photons carry so much energy that the filament of the light bulb can create only small numbers of them. So it starts flickering. One moment it makes 10 photons, the next 7, the next 15, and so on. If h is made bigger still, photons become so energy-expensive that the filament cannot make even a single photon, and the bulb stops stuttering and goes dark.
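
  Here is a toy rendering of that flickering-bulb thought experiment. Modelling the photon counts as Poisson-distributed is my assumption for illustration, not something the text specifies.

```python
# A toy simulation of the flickering bulb: when each photon is expensive,
# only a handful are made per flash, and the count jitters at random.
import numpy as np

rng = np.random.default_rng(0)
for mean_photons in (10, 1, 0.01):           # bigger h means pricier, rarer photons
    print(mean_photons, rng.poisson(mean_photons, size=8))
# With a high mean the counts flicker around it (10, 7, 15, and so on);
# as the mean falls towards zero, most intervals contain no photon at
# all and the bulb goes dark.
```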

  Einstein used the idea that light consists of photons to explain a puzzling phenomenon – the ejection of electrons from the surface of certain metals.4 His explanation of the ‘photoelectric effect’ not only earned him the 1921 Nobel Prize for Physics but was the only work Einstein himself considered to be ‘revolutionary’.5 The reason can be appreciated from a remarkably ordinary, everyday observation . . .

  Random reality

  Look out of a window. You will see the scene outside and, if you look closely enough, a faint reflection of your face as well. This is because glass is not perfectly transmitting. Although most of the light that strikes it goes through, a small amount bounces back.

  What happens at a window is easy to explain if light is a wave. Think of a wave spreading across a lake and encountering an obstacle – say, a submerged log. Most of the wave continues on while part of it is turned back. But what happens at a window is not easy to explain if light is a stream of photons, all of which are the same. After all, if they are all identical, surely they should be affected identically by the window (we are assuming perfect, flawless glass, by the way!). Either all should go through or all should be reflected. There is no way that most can go through while some bounce back.

  To explain why you can see your face in a window, physicists had no choice but to water down their definition of ‘identical’. For photons, identical must mean only that they have the same ‘chance’ of being transmitted by the glass – say 95 per cent – and the same ‘chance’ of being reflected – say 5 per cent. But the introduction of the word chance into physics, as Einstein realised, is catastrophic.
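
  A minimal simulation makes the point, using the 95/5 odds quoted above; the sample size is an arbitrary choice.

```python
# A minimal simulation of 'identical' photons meeting a window pane.
import random

random.seed(1)
N = 100_000
transmitted = sum(1 for _ in range(N) if random.random() < 0.95)
print(f"transmitted: {transmitted}, reflected: {N - transmitted}")
# The proportions come out close to 95:5, but nothing says in advance
# which individual photon will bounce back.
```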

  Physics is a recipe for predicting the future with 100 per cent certainty. If the Moon is at a particular location today, its location tomorrow can be predicted with 100 per cent certainty by using Newton’s law of gravity. But seeing your face in a window tells us that it is impossible to predict with certainty what an individual photon will do on encountering a window pane. It is possible to predict only its ‘probability’ of being transmitted or reflected.

  Think for a moment what this means. If you roll a dice, you may think the outcome is unpredictable. But, actually, if you knew the exact velocity with which the dice was rolled, the motion of the air currents in its vicinity, and so on, it would be possible, with the aid of a big enough computer, to predict the number that comes up. Everything we think of in the everyday world as random is not really random. It is unpredictable only in practice. In marked contrast, what a photon does when it strikes a window pane is unpredictable in principle. No matter how much information was available, or how big a computer was used, the photon’s course of action would never be 100 per cent predictable. For the quantum dice, every throw is always the first throw.

  And what is true for photons is also true for all the other microscopic building blocks of the world – from electrons to quarks. Every last one of them behaves in a manner which is fundamentally unpredictable.

  How, then, is the everyday world predictable? The Sun will come up tomorrow morning and a ball thrown through the air will follow a trajectory predictable enough that it is possible to catch it. The answer is that what nature takes with one hand it grudgingly gives back with the other. Although the world is fundamentally unpredictable, its unpredictability is predictable. And the recipe for predicting the unpredictability is ‘quantum theory’.

  The revelation that the Universe is ultimately founded on random chance is arguably the single most shocking discovery in the history of science. And the remarkable thing is that it stares you in the face every time you look through a window. Einstein so hated the idea that he famously said: ‘God does not play dice with the Universe.’ Niels Bohr, the quantum pioneer, retorted: ‘Stop telling God what to do with his dice.’

  Einstein was not only wrong but spectacularly wrong. Not only does God play dice but, if He did not, there would not be a Universe – or at least a Universe of the complexity necessary for us to be here.6

  Wave-particle duality

  Seeing your face reflected in a window can be understood if light is a wave and it can also be understood if light is a stream of particles. In fact, this wave-particle duality is a key feature of the microscopic world of atoms and their constituents.7

  Particles, which are localised, and waves, which are spread-out, would appear to be fundamentally incompatible. Certainly, that was the view of the physicists of the 1920s, who picked up the ideas of Planck and Einstein and ran with them. ‘I remember discussions which went through many hours until very late at night and ended almost in despair,’ wrote German physicist Werner Heisenberg. ‘When, at the end of the discussion, I went alone for a walk in the neighbouring park I repeated to myself again and again the question: Can nature possibly be so absurd as it seemed to us in these atomic experiments?’8

  The answer is yes. The microscopic world of atoms and their constituents is utterly unlike the everyday world (since it is a billion times smaller, perhaps we should never have expected it to be the same). Photons and their microscopic compatriots are neither particles nor waves but something else for which we have no word in our vocabulary and nothing in the everyday world around us to compare them with. Like shadows of an object we cannot see, we are limited to seeing particle-like shadows and wave-like shadows but never the thing itself. ‘It has been possible to invent a mathematical scheme [quantum theory] . . . which seems entirely adequate for the treatment of atomic processes,’ said Heisenberg. ‘For visualisation, however, we must content ourselves with two incomplete analogies – the wave picture and the corpuscular picture.’

 
