by Tim James
1618
Descartes proposes light to be waves in the plenum.
1672
Newton proposes light to be made of corpuscles.
1801
Young does the double-slit experiment showing light is made of waves.
1846
Faraday speculates light is an electromagnetic wave.
1861
Maxwell proves him right.
1897
J. J. Thomson discovers the electron.
1899
Rutherford discovers that radioactivity is made of particles.
1900
Planck invents light quanta.
1905
Einstein proves all matter is made from atoms and that light is made of photons and publishes the theory of special relativity. Buys cake on birthday.
1911
Rutherford discovers the nucleus.
1912
Einstein discovers general relativity. Tells no one.
1913
Bohr discovers electron energy is quantised in shells.
1915
Noether comes up with her theorem. Girl power prevails.
1916
Einstein rediscovers general relativity. More open about it.
1917
Rutherford discovers the proton.
1922
The Stern–Gerlach experiment is conducted. Makes no sense.
1924
De Broglie suggests wave–particle duality.
1926
Schrödinger writes his wave equation.
1926
Born interprets the wavefunction as the square root of a probability distribution describing a particle's behaviour and properties.
1927
Pauli adapts the Schrödinger equation to include ‘spin’.
1927
Heisenberg discovers the uncertainty principle.
1927
George Thomson shows electrons can be diffracted like waves.
1927
De Broglie presents the pilot-wave interpretation.
1928
Dirac comes up with quantum field theory.
1930
Heisenberg outlines the Copenhagen interpretation. Einstein is not happy.
1930
Pauli proposes the existence of neutrinos.
1932
Chadwick discovers the neutron.
1932
Von Neumann tries to find the source of wavefunction collapse. Finds nothing.
1932
Anderson discovers the positron.
1933
Fermi proposes the weak field.
1935
Schrödinger suggests we kill/do not kill a cat.
1935
Yukawa proposes the strong force to explain nuclear stability.
1935
Einstein, Podolsky and Rosen publish a paradox.
1936
The muon is discovered.
1939
Batman is born.
1947
The pion is discovered.
1947
The kaon is discovered, acting strangely.
1949
Feynman, Schwinger and Tomonaga create a successful form of QED.
1952
Bohm expands on the pilot-wave interpretation.
1956
Electron neutrinos are finally discovered.
1956
Wu discovers the weak field is asymmetric with respect to chirality (and therefore weak hypercharge).
1957
Everett proposes the many worlds interpretation.
1961
Wigner suggests consciousness could trigger wavefunction collapse.
1962
The muon neutrino is discovered.
1964
Bell proposes a way to test the EPR paradox.
1964
Gell-Mann proposes the quark model with up, down and strange quarks, the foundation of quantum chromodynamics.
1964
Glashow proposes the charm quark… because obviously.
1964
Brout, Englert and Higgs propose a new field to explain mass.
1968
The up, down and strange quarks are discovered.
1968
Weinberg, Salam and Glashow complete the electroweak theory.
1971
Hafele, Keating and Mr Clock verify relativity.
1973
Kobayashi and Maskawa propose the top and bottom quarks.
1973
Weak neutral currents are discovered: the first evidence for the Z boson (detected directly in 1983).
1974
The charm quark is discovered.
1975
The tauon is discovered.
1977
The bottom quark is discovered.
1982
Aspect successfully carries out a Bell experiment, proving classical physics cannot explain entanglement.
1983
The W+ and W− bosons are discovered.
1986
Cramer proposes the transactional interpretation.
1989
Tonomura carries out the single-electron double-slit experiment, proving unambiguously that particles self-interfere.
1993
Peres, Wootters and Bennett propose quantum teleportation.
1995
The top quark is discovered.
1998
Construction begins on the Large Hadron Collider.
1999
Kim builds the first delayed choice quantum eraser, showing apparent backward-time quantum entanglement. Presumably sends message to Cramer in 1986.
2000
The tauon neutrino is finally discovered.
2005
Couder gives some evidence that might validate the de Broglie–Bohm interpretation.
2008
Large Hadron Collider switches on for the first time.
2010
O'Connell puts first classical object in quantum superposition.
2012
Large Hadron Collider discovers the Higgs boson.
2015
Bohr potentially rules out the de Broglie–Bohm explanation.
2017
Jian-Wei Pan achieves record quantum teleportation to a satellite.
2017
Lidzey accidentally entangles some bacteria with a laser beam.
2018
Vanner creates a quantum drum.
APPENDIX I A Closer Look at Spin
Spin comes in quantised multiples of a number called Planck's constant, which is the number you get if you divide the energy of a particle by its associated frequency; it always comes to the same value: 6.6 × 10⁻³⁴ joule seconds. (Strictly, spin is counted in units of the reduced Planck constant, ħ = h ÷ 2π, but the idea is the same.) Particles can have spin values that are either half or whole multiples of this number, i.e. a particle can have a spin value of ½ Planck's constant, 1 Planck's constant, 1½ Planck's constants, 2 Planck's constants, 2½ Planck's constants and so on, as well as being positive or negative.
Particles with half-value spins are called ‘fermions’ while particles with full-value spins are called ‘bosons’ and they behave very differently (examples of this are in Chapter Fourteen). But not all particles are magnetic, despite them all having spin.
There is a term for a particle's magnetic character: its 'magnetic moment'. Magnetic moment determines how strong a particle's magnetic field is and it arises from the following relationship:

μ = g × (e ÷ 2Mc) × S
The symbol μ is the magnetic moment of the particle, which we can think of as 'magnetic charge'. The g is called the g-factor (often loosely referred to as the gyromagnetic ratio) and is a number unique to every particle, relating the other properties together.
The e represents electric charge, the M represents mass, c represents the universal speed limit (see Chapter Eight) and S represents something called a spin matrix, which is a 2 × 2 number grid keeping track of the different ways a particle can be spinning.
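To make the relationship concrete, here is a rough numerical check in Python (my own illustration, not from the text): plugging the electron's measured charge, mass and g-factor into μ = g × (e ÷ 2M) × S reproduces its known magnetic moment. The version of the formula in the text is written in Gaussian units, where c appears; in the SI units used below the c drops out, and S = ħ ÷ 2 for a spin-½ particle.

```python
# Estimating the electron's magnetic moment from mu = g * (e / 2M) * S
# (SI units; the numbers are standard CODATA constants)
e = 1.602176634e-19      # electron charge, coulombs
m_e = 9.1093837015e-31   # electron mass, kilograms
hbar = 1.054571817e-34   # reduced Planck constant, joule seconds
g = 2.00231930436        # electron g-factor (magnitude)

S = hbar / 2                    # spin angular momentum of a spin-1/2 particle
mu = g * (e / (2 * m_e)) * S    # magnetic moment, in joules per tesla
print(mu)                       # about 9.28e-24 J/T
```

The answer, about 9.28 × 10⁻²⁴ joules per tesla, matches the measured electron magnetic moment, which is why magnetism makes such a good proxy for spin in Stern–Gerlach-style experiments.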
For an everyday object we can define spin using something called 'angular momentum', which measures how heavy the object is, how fast it is spinning and which way round the rotations are happening (clockwise or anticlockwise). For quantum mechanical spin these numbers are not enough and we have to describe it as having four possible ways of pointing (we call them the four vectors of spin). Sometimes spin is referred to as 'intrinsic angular momentum' because it is a property that resembles angular momentum but it is nestled within the particle's identity even when it is stationary.
What this equation reveals is that the magnetic moment of a particle is a product of all its properties together. In the Stern–Gerlach experiment, what they were actually measuring was the magnetic moment of silver atoms but since the mass, charge and g were identical for each particle, the two directions in which the atoms flew had to be a result of S – the spin property. So although it was not quite measuring the spin of each particle (we have no way of doing that directly since we do not know what a spin experiment would even look like) for all intents and purposes, magnetism lets us measure spin differences.
What is also important to note is that in order for a particle to be magnetic it must have both spin and electric charge. A neutrino (a particle discussed in Chapter Fourteen), for example, has a spin of ½ but no electric charge, so in the equation above we would write zero for the e term and the overall answer would be zero as well. Spin alone is not enough: without charge there is no magnetism.
APPENDIX II Solving Schrödinger
Solving the Schrödinger equation for a single electron around a single proton (a hydrogen atom) is doable. But when you include more particles it becomes very tricky, very fast.
A helium atom has two protons and two electrons so you need to include interactions from both electrons to one proton, both electrons to the other proton, both electrons to each other, both protons to each other, and then combine them all. In three dimensions.
The bigger our atoms or molecules get, the more interactions we have to handle and it reaches the point where even powerful computers struggle to take everything into account. It therefore makes sense to use a few approximations in your sum, which saves time while still giving answers close to the full-on Schrödinger version.
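To give a feel for why 'very tricky, very fast' is no exaggeration, here is a minimal sketch (my own toy example, not one of the approximations named below) that solves the Schrödinger equation numerically for the simplest possible case: one particle trapped in a one-dimensional box of width 1, in units where ħ = m = 1. Even this baby version needs an iterative search for the right energy.

```python
def psi_at_wall(E, steps=2000):
    """Integrate psi'' = -2*E*psi across the box from x = 0,
    starting with psi(0) = 0, and return psi at the far wall x = 1."""
    h = 1.0 / steps
    psi, dpsi = 0.0, 1.0
    for _ in range(steps):
        ddpsi = -2.0 * E * psi   # Schrodinger equation with V = 0, hbar = m = 1
        psi += h * dpsi
        dpsi += h * ddpsi
    return psi

def ground_state_energy(lo=1.0, hi=10.0):
    """Bisect for the lowest energy E where psi also vanishes at the far wall."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if psi_at_wall(lo) * psi_at_wall(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

E0 = ground_state_energy()
print(E0)  # close to pi^2 / 2, about 4.93, the exact textbook answer
```

Adding a second particle would turn ψ into a function of two coordinates and the single loop into a coupled, nested search, which is exactly the blow-up in difficulty described above.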
One of the approximations we often make is called the orbital approximation. You imagine your atom has only one electron and you boost it to a higher energy, forcing it into all the high-energy orbitals.
When we try to work out the shape of an atom with twenty-six electrons, for example, we often do it by imagining a hydrogen atom and boosting its electron twenty-six times until it looks about right.
The result is like a child doing an impression of an adult by standing on stilts and wearing bigger clothes. It is not quite an accurate picture but it is a decent way of getting a feel for ‘this is what a bigger version might look like’.
Another technique we can use is called the Born–Oppenheimer approximation, where we make the assumption that the energy and vibrations of the nucleus are so slow compared to those of the electron that they can be ignored. We imagine the electrons are orbiting/waving around a single positive point, which does not have an interesting life of its own. This allows us to focus on electrons and their interactions exclusively, without worrying about how the nucleus is going to interfere.
Hands down though, the best way of fudging the Schrödinger equation is a method called density functional theory, developed by Walter Kohn, who shared the 1998 Nobel Prize in Chemistry with John Pople for their work on computational quantum chemistry.
Density functional theory, or DFT to those dwelling inside the circle of nerds, is a beautiful way of solving a molecular wavefunction with loads of particles in it. Rather than modelling each particle as an individual point and calculating every interaction one at a time, DFT replaces it all with an ‘electron cloud’ representing all the electrons in one big smush.
Once you have blurred every electron together and calculated the ‘thickness’ of the electron density you can talk about how the atom or molecule behaves over time. Where the cloud is thickest corresponds to where the electrons are most likely to be and where it is thinnest is where the electrons are rarely observed.
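The 'thickest where most likely' idea can be illustrated with a hydrogen atom's 1s electron (a standard textbook result, used here as an illustration rather than a DFT calculation): the radial density goes as r²e^(−2r/a₀), and its peak sits exactly one Bohr radius from the nucleus.

```python
import math

a0 = 1.0  # Bohr radius, working in units where a0 = 1

def radial_density(r):
    """Relative likelihood of finding the 1s electron at radius r."""
    return r * r * math.exp(-2.0 * r / a0)

# Scan outward and find the radius where the cloud is 'thickest'
radii = [i * 0.001 for i in range(1, 10001)]
peak = max(radii, key=radial_density)
print(round(peak, 3))  # 1.0: the electron is most likely found one Bohr radius out
```

A DFT cloud for a real molecule is the many-electron version of this picture: one smooth density whose thick regions tell you where the electrons live.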
A DFT calculation on a small molecule can be processed in a few hours, giving you an answer that is usually over 90 per cent accurate. Compared with solving the Schrödinger equation (which would take years for a large molecule) it has become the industry standard for quantum calculations.
APPENDIX III Einstein’s Bicycle
This simple exercise, which illustrates the constancy of light speed, comes from the science author and television presenter Carl Sagan. He imagined a scenario where a cyclist is pedalling towards you down a road, when suddenly a large truck cuts across the cyclist's path and they swerve to avoid it.
The truck is not advancing towards you, so the light coming off its flank is approaching you at regular light speed, which physicists represent using the letter c for constant. The cyclist is pedalling towards you, however, so you might expect the light coming from them to approach you at c + their cycling speed. If that were true, any light from the bicycle would reach your eyes before any light from the truck.
When the truck cuts in front of the cyclist, the cyclist swerves to one side and the light coming from their new position (telling you that they moved) will reach you first, followed soon after by light from the truck as it cuts across the road.
What you should see is the cyclist swerving for no apparent reason (the light from the truck has not reached you yet) and then a few seconds later the truck pulling out behind it. You would, in this scenario, wonder why the cyclist swerved a few seconds too early. But of course that is not what happens.
The light coming from the truck and the swerving bicycle hits your eyes at the same time, telling a more sensible story. But if speeds added in the ordinary way, the light beam the bicycle sent out should have arrived first. The only way of accounting for this is that the speed of light coming from the bicycle was not c + cycling speed, but c itself, the same as the truck. Light speed must therefore be the same, no matter how fast everyone is moving.
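Just to put numbers on the scenario (the distance and speed below are invented for illustration): if light speeds really did add, here is the head start the bicycle's light would get.

```python
c = 299_792_458.0   # speed of light, metres per second
v = 10.0            # cyclist's speed, m/s (roughly 36 km/h)
d = 3_000.0         # your distance from the swerve, metres

t_truck = d / c         # truck light travelling at c
t_bike = d / (c + v)    # bicycle light at c + v, if speeds added
gap = t_truck - t_bike
print(gap)              # about 3.3e-13 seconds
```

In a real street scene the classical gap would be a fraction of a picosecond (the story stretches it to seconds for effect); relativity's claim is stronger still, namely that the gap is exactly zero.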
APPENDIX IV Taming Infinity
A lot of the problems in theoretical physics come from infinity. Take the double-slit experiment, for instance. We can pierce two holes in a wall and send a photon towards it, calculating where it is likely to arrive by combining the two possible paths as probable outcomes.
If you cut a third hole in the wall, the story is much the same. You calculate three routes for the photon instead of two and compute the probable outcomes of all three. Same thing with four holes, forty holes or four hundred. But eventually you get to a point where the wall has so many holes it is no longer a wall but a great big empty space.
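The bookkeeping described above, one path per hole combined into an outcome, can be sketched directly (the geometry and wavelength below are invented for illustration): each hole contributes a complex amplitude whose phase depends on the path length, and the probability of arrival is the squared size of their sum.

```python
import cmath, math

wavelength = 0.5
k = 2 * math.pi / wavelength     # how fast the phase turns with distance
hole_ys = [-0.5, 0.5]            # two holes in the wall; add entries for more holes
source = (-10.0, 0.0)            # photon source, to the left of the wall at x = 0
screen_x = 10.0                  # detector screen, to the right

def probability(y_screen):
    """Sum one amplitude per hole, then square the total."""
    total = 0j
    for y_hole in hole_ys:
        r1 = math.hypot(0.0 - source[0], y_hole - source[1])  # source to hole
        r2 = math.hypot(screen_x, y_screen - y_hole)          # hole to screen
        total += cmath.exp(1j * k * (r1 + r2))
    return abs(total) ** 2

print(probability(0.0))  # ~4.0: both paths arrive in phase, a bright fringe
```

Scanning y_screen reveals alternating bright and dark fringes, and growing hole_ys towards 'no wall at all' is exactly where the infinite-path headache begins.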
This means we have to calculate an infinite number of paths for the photon because there is an infinite number of holes (no wall = infinite holes). Yet, if we shine a photon at a detector screen with no wall in the way, the photon obviously goes in a straight line. It would appear that the photon ‘sniffs out’ (Feynman’s phrase) the infinite possible paths it can take and then picks the classical route as if the infinities somehow cancel out.
Another can of worms in QED is the issue of particle self-interaction. An electron has a negative charge, which means it interacts with other negatively charged particles. Technically, an electron should therefore interact with itself seeing as it is charged, but because an electron is infinitely close to itself, the self-interactions give infinite answers when we try to compute them.
These two examples are a real pain because infinity is not a real thing in physics. It exists in the world of abstract mathematics but in the actual universe there are no infinities (the universe could not fit them), so when a theory predicts an infinite answer that is a sure sign something is wrong with the theory.
When an equation starts heading toward infinity, scientists say it is 'blowing up', and a lot of theoretical physics is spent trying to defuse these numerical explosions, usually by altering the equations, deriving new ones, or changing the input values to get more sensible answers.
One of the clunkier tricks is to simply chop the numbers off at the point where they get too big (a method called regularisation), but this is hopelessly crude; not much better than crying about the equation and ignoring it.
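A toy version of the chopping trick (using the harmonic series as a stand-in for a genuinely divergent QED quantity, which it is not): the full sum blows up, but stopping at a cutoff gives a finite answer that unfortunately depends on where you stopped.

```python
def regularised_sum(cutoff):
    """The harmonic series 1 + 1/2 + 1/3 + ... diverges,
    but chopped off at a cutoff it stays finite."""
    return sum(1.0 / n for n in range(1, cutoff + 1))

print(regularised_sum(1_000))      # about 7.49
print(regularised_sum(1_000_000))  # about 14.39: finite, but creeps up with the cutoff
```

That cutoff-dependence is exactly what makes the method crude: the answer changes with an arbitrary choice, which is the itch renormalisation is designed to scratch.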
A more sophisticated approach is to do something called ‘renormalisation’. The idea this time is to pick properties for the fields (mostly from educated guesswork) and solve a bunch of different equations with these values until you get matching answers. The more details you include, the closer you get to experimental results.
It is the mathematical equivalent of building a composite sketch of a criminal from a number of eyewitness accounts. You start by making a few assumptions, e.g. the facial structure of the criminal, and get different witnesses to extrapolate from there. Different sketches are created from the same starting point and then you see if they match.
If they are reasonably close, you check with a photograph of a known criminal (a real world value) and see how close you are. If it matches then your starting assumptions and method of drawing were good. If not, you go back and start afresh with different assumptions and drawing techniques, over and over until something finally succeeds. It is a form of trial and error, if we are honest, but it does the trick.