This explains how particles in the same history can be fungible too, as in an atom laser. Two ‘ink-blot’ particles, each of which is a multiversal object, can coincide perfectly in space, and their entanglement information can be such that no two of their instances are ever at the same point in the same history.
Now, put a proton into the middle of that gradually spreading cloud of instances of a single electron. The proton has a positive charge, which attracts the negatively charged electron. As a result, the cloud stops spreading when its size is such that its tendency to spread outwards due to its uncertainty-principle diversity is exactly balanced by its attraction to the proton. The resulting structure is called an atom of hydrogen.
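A standard back-of-the-envelope estimate (textbook physics, not spelled out in the text) shows how that balance fixes the atom’s size: confining the electron to a radius r costs kinetic energy, by the uncertainty principle, while the proton’s attraction rewards it, and the atom settles at the radius where the total is lowest.

```latex
% A minimal sketch, assuming the usual hydrogen parameters.
% Kinetic cost of confinement to radius r, plus Coulomb attraction:
E(r) \approx \frac{\hbar^2}{2 m_e r^2} - \frac{e^2}{4\pi\epsilon_0 r}
% Setting dE/dr = 0 gives the equilibrium size, the Bohr radius:
r_0 = \frac{4\pi\epsilon_0 \hbar^2}{m_e e^2} \approx 5.3 \times 10^{-11}\,\mathrm{m}
```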
Historically, this explanation of what atoms are was one of the first triumphs of quantum theory, for atoms could not exist at all according to classical physics. An atom consists of a positively charged nucleus surrounded by negatively charged electrons. But positive and negative charges attract each other and, if unrestrained, accelerate towards each other, emitting energy in the form of electromagnetic radiation as they go. So it used to be a mystery why the electrons do not ‘fall’ on to the nucleus in a flash of radiation. Neither the nucleus nor the electrons individually have more than one ten-thousandth of the diameter of the atom, so what keeps them so far apart? And what makes atoms stable at that size? In non-technical accounts, the structure of atoms is sometimes explained by analogy with the solar system: one imagines electrons in orbit around the nucleus like planets around the sun. But that does not match the reality. For one thing, gravitationally bound objects do slowly spiral in, emitting gravitational radiation (the process has been observed for binary neutron stars), and the corresponding electromagnetic process in an atom would be over in a fraction of a second. For another, the existence of solid matter, which consists of atoms packed closely together, is evidence that atoms cannot easily penetrate each other, yet solar systems certainly could. Furthermore, it turns out that, in the hydrogen atom, the electron in its lowest-energy state is not orbiting at all but, as I said, just sitting there like an ink blot – its uncertainty-principle tendency to spread exactly balanced by the electrostatic force. In this way, the phenomena of interference and diversity within fungibility are integral to the structure and stability of all static objects, including all solid bodies, just as they are integral to all motion.
The term ‘uncertainty principle’ is misleading. Let me stress that it has nothing to do with uncertainty or any other distressing psychological sensations that the pioneers of quantum physics might have felt. When an electron has more than one speed or more than one position, that has nothing to do with anyone being uncertain what the speed is, any more than anyone is ‘uncertain’ which dollar in their bank account belongs to the tax authority. The diversity of attributes in both cases is a physical fact, independent of what anyone knows or feels.
Nor, by the way, is the uncertainty principle a ‘principle’, for that suggests an independent postulate that could logically be dropped or replaced to obtain a different theory. In fact one could no more drop it from quantum theory than one could omit eclipses from astronomy. There is no ‘principle of eclipses’: their existence can be deduced from theories of much greater generality, such as those of the solar system’s geometry and dynamics. Similarly, the uncertainty principle is deduced from the principles of quantum theory.
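For readers who want the shape of that deduction (standard quantum mechanics, not specific to this book): the spreads of any two physical variables obey a relation fixed by their commutator, and the position–momentum case follows immediately.

```latex
% Robertson relation, valid for any state and any observables A, B:
\Delta A \,\Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [\hat{A},\hat{B}] \rangle\bigr|
% With the canonical commutator [\hat{x},\hat{p}] = i\hbar, this yields
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
```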
Thanks to the strong internal interference that it is continuously undergoing, a typical electron is an irreducibly multiversal object, and not a collection of parallel-universe or parallel-histories objects. That is to say, it has multiple positions and multiple speeds without being divisible into autonomous sub-entities each of which has one speed and one position. Even different electrons do not have completely separate identities. So the reality is an electron field throughout the whole of space, and disturbances spread through this field as waves, at the speed of light or below. This is what gave rise to the often-quoted misconception among the pioneers of quantum theory that electrons (and likewise all other particles) are ‘particles and waves at the same time’. There is a field (or ‘waves’) in the multiverse for every individual particle that we observe in a particular universe.
Although quantum theory is expressed in mathematical language, I have now given an account in English of the main features of the reality that it describes. So at this point the fictional multiverse that I have been describing is more or less the real one. But there is one thing left to tidy up. My ‘succession of speculations’ was based on universes, and on instances of objects, and then on corrections to those ideas in order to describe the multiverse. But the real multiverse is not ‘based on’ anything, nor is it a correction to anything. Universes, histories, particles and their instances are not referred to by quantum theory at all – any more than are planets, and human beings and their lives and loves. Those are all approximate, emergent phenomena in the multiverse.
A history is part of the multiverse in the same sense that a geological stratum is part of the Earth’s crust. One history is distinguished from the others by the values of physical variables, just as a stratum is distinguished from others by its chemical composition and by the types of fossils found in it and so on. A stratum and a history are both channels of information flow. They preserve information because, although their contents change over time, they are approximately autonomous – that is to say, the changes in a particular stratum or history depend almost entirely on conditions inside it and not elsewhere. It is because of that autonomy that a fossil found today can be used as evidence of what was present when that stratum was formed. Similarly, it is why, within a history, using classical physics, one can successfully predict some aspects of the future of that history from its past.
A stratum, like a history, has no separate existence over and above the objects in it: it consists of them. Nor does a stratum have well-defined edges. Also, there are regions of the Earth – for instance, near volcanoes – where strata have merged (though I think there are no geological processes that split and remerge strata in the way that histories split and remerge). There are regions of the Earth – such as the core – where there have never been strata. And there are regions – such as the atmosphere – where strata do form but their contents interact and mix on much shorter timescales than in the crust. Similarly, there are regions of the multiverse that contain short-lived histories, and others that do not even approximately contain histories.
However, there is one big difference between the ways in which strata and histories emerge from their respective underlying phenomena. Although not every atom in the Earth’s crust can be unambiguously assigned to a particular stratum, most of the atoms that form a stratum can. In contrast, every atom in an everyday object is a multiversal object, not partitioned into nearly autonomous instances and nearly autonomous histories, yet everyday objects such as starships and betrothed couples, which are made of such particles, are partitioned very accurately into nearly autonomous histories with exactly one instance, one position, one speed of each object in each history.
That is because of the suppression of interference by entanglement. As I explained, interference almost always happens either very soon after splitting or not at all. That is why the larger and more complex an object or process is, the less its gross behaviour is affected by interference. At that ‘coarse-grained’ level of emergence, events in the multiverse consist of autonomous histories, with each coarse-grained history consisting of a swathe of many histories differing only in microscopic details but affecting each other through interference. Spheres of differentiation tend to grow at nearly the speed of light, so, on the scale of everyday life and above, those coarse-grained histories can justly be called ‘universes’ in the ordinary sense of the word. Each of them somewhat resembles the universe of classical physics. And they can usefully be called ‘parallel’ because they are nearly autonomous. To the inhabitants, each looks very like a single-universe world.
Microscopic events which are accidentally amplified to that coarse-grained level (like the voltage surge in our story) are rare in any one coarse-grained history, but common in the multiverse as a whole. For example, consider a single cosmic-ray particle travelling in the direction of Earth from deep space. That particle must be travelling in a range of slightly different directions, because the uncertainty principle implies that in the multiverse it must spread sideways like an ink blot as it travels. By the time it arrives, this ink blot may well be wider than the whole Earth – so most of it misses and the rest strikes everywhere on the exposed surface. Remember, this is still just a single particle, consisting of fungible instances. The next thing that happens is that they cease to be fungible, splitting through their interaction with atoms at their points of arrival into a finite but huge number of instances, each of which is the origin of a separate history.
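Here is a rough, illustrative estimate of that sideways spread. All the input values below are assumed for the sake of the arithmetic, not taken from the text:

```python
# Rough, illustrative estimate (assumed values): how far the
# uncertainty-principle 'ink blot' of a single cosmic-ray proton
# spreads sideways on its way to Earth.
HBAR = 1.055e-34          # reduced Planck constant, J*s
P_LONG = 5.3e-19          # assumed longitudinal momentum ~ 1 GeV/c, kg*m/s
DELTA_X = 1e-10           # assumed initial transverse localization, m
DISTANCE = 1e20           # assumed travel distance (~10,000 light-years), m
EARTH_DIAMETER = 1.27e7   # m

delta_p_perp = HBAR / (2 * DELTA_X)        # minimum transverse momentum spread
spread_angle = delta_p_perp / P_LONG       # small-angle approximation
lateral_spread = spread_angle * DISTANCE   # width of the ink blot on arrival

print(f"lateral spread ~ {lateral_spread:.1e} m "
      f"({lateral_spread / EARTH_DIAMETER:.0e} Earth diameters)")
```

With these numbers the blot arrives some ten million Earth diameters wide, so almost all of it misses, as the text says.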
In each such history, there is an autonomous instance of the cosmic-ray particle, which will dissipate its energy in creating a ‘cosmic-ray shower’ of electrically charged particles. Thus, in different histories, such a shower will occur at different locations. In some, that shower will provide a conducting path down which a lightning bolt will travel. Every atom on the surface of the Earth will be struck by such lightning in some history. In other histories, one of those cosmic-ray particles will strike a human cell, damaging some already damaged DNA in such a way as to make the cell cancerous. Some non-negligible proportion of all cancers are caused in this way. As a result, there exist histories in which any given person, alive in our history at any time, is killed soon afterwards by cancer. There exist other histories in which the course of a battle, or a war, is changed by such an event, or by a lightning bolt at exactly the right place and time, or by any of countless other unlikely, ‘random’ events. This makes it highly plausible that there exist histories in which events have played out more or less as in alternative-history stories such as Fatherland and Roma Eterna – or in which events in your own life played out very differently, for better or worse.
A great deal of fiction is therefore close to a fact somewhere in the multiverse. But not all fiction. For instance, there are no histories in which my stories of the transporter malfunction are true, because they require different laws of physics. Nor are there histories in which the fundamental constants of nature such as the speed of light or the charge on an electron are different. There is, however, a sense in which different laws of physics appear to be true for a period in some histories, because of a sequence of ‘unlikely accidents’. (There may also be universes in which there are different laws of physics, as required in anthropic explanations of fine-tuning. But as yet there is no viable theory of such a multiverse.)
Imagine a single photon from a starship’s communication laser, heading towards Earth. Like the cosmic ray, it arrives all over the surface, in different histories. In each history, only one atom will absorb the photon and the rest will initially be completely unaffected. A receiver for such communications would then detect the relatively large, discrete change undergone by such an atom. An important consequence for the construction of measuring devices (including eyes) is that no matter how far away the source is, the kick given to an atom by an arriving photon is always the same: it is just that the weaker the signal is, the fewer kicks there are. If this were not so – for instance, if classical physics were true – weak signals would be much more easily swamped by random local noise. This is the same as the advantage of digital over analogue information processing that I discussed in Chapter 6.
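A sketch of that arithmetic, with assumed numbers, and treating the source for simplicity as radiating uniformly in all directions (a real communication laser is beamed, which changes the rates but not the point): the energy of each kick is fixed by the light’s frequency alone, and distance only thins out how often kicks arrive.

```python
# Illustrative sketch (all numbers assumed): the energy per detected photon
# depends only on frequency; distance reduces only the arrival rate.
from math import pi

H = 6.626e-34            # Planck constant, J*s
FREQ = 4.7e14            # assumed laser frequency (red light), Hz
POWER = 1.0              # assumed transmitter power, W
APERTURE_AREA = 1.0      # assumed receiver collecting area, m^2

energy_per_kick = H * FREQ                     # the same at any distance
for distance in (1e9, 1e12, 1e15):             # metres
    flux = POWER / (4 * pi * distance**2)      # W/m^2, isotropic simplification
    kicks_per_second = flux * APERTURE_AREA / energy_per_kick
    print(f"d = {distance:.0e} m: {kicks_per_second:.2e} kicks/s, "
          f"each of {energy_per_kick:.2e} J")
```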
Some of my own research in physics has been concerned with the theory of quantum computers. These are computers in which the information-carrying variables have been protected by a variety of means from becoming entangled with their surroundings. This allows a new mode of computation in which the flow of information is not confined to a single history. In one type of quantum computation, enormous numbers of different computations, taking place simultaneously, can affect each other and hence contribute to the output of a computation. This is known as quantum parallelism.
In a typical quantum computation, individual bits of information are represented in physical objects known as ‘qubits’ – quantum bits – of which there is a large variety of physical implementations but always with two essential features. First, each qubit has a variable that can take one of two discrete values, and, second, special measures are taken to protect the qubits from entanglement – such as cooling them to temperatures close to absolute zero. A typical algorithm using quantum parallelism begins by causing the information-carrying variables in some of the qubits to acquire both their values simultaneously. Consequently, regarding those qubits as a register representing (say) a number, the number of separate instances of the register as a whole is exponentially large: two to the power of the number of qubits. Then, for a period, classical computations are performed, during which waves of differentiation spread to some of the other qubits – but no further, because of the special measures that prevent this. Hence, information is processed separately in each of that vast number of autonomous histories. Finally, an interference process involving all the affected qubits combines the information in those histories into a single history. Because of the intervening computation, which has processed the information, the final state is not the same as the initial one, as in the simple interference experiment I discussed above (which in effect maps an input X back to the same X), but is some function of it, X → f(X), like this:
[Figure: A typical quantum computation. Y1, …, Ymany are intermediate results that depend on the input X. All of them are needed to compute the output f(X) efficiently.]
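A minimal simulation of that pattern – my illustration rather than the book’s, with the function names my own – is Deutsch’s algorithm: it decides whether a one-bit function is constant or balanced with a single evaluation, because both values are computed in superposed histories and then interfered into one answer.

```python
# A minimal sketch of quantum parallelism: Deutsch's algorithm, simulated
# classically with explicit state vectors. One oracle call computes f on
# both inputs at once; interference then extracts the global property.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def deutsch(f):
    # Two qubits, |input>|ancilla>, starting in |0>|1>.
    state = np.zeros(4)
    state[1] = 1.0                              # basis order |00>,|01>,|10>,|11>
    state = np.kron(H, H) @ state               # superpose both input values
    # Oracle U_f: |x, y> -> |x, y XOR f(x)> -- one call, all histories.
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    state = U @ state
    state = np.kron(H, np.eye(2)) @ state       # interfere the histories
    prob_input_1 = state[2]**2 + state[3]**2    # probability first qubit reads 1
    return "balanced" if prob_input_1 > 0.5 else "constant"

print(deutsch(lambda x: 0))   # constant function -> "constant"
print(deutsch(lambda x: x))   # balanced function -> "balanced"
```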
Just as the starship crew members could achieve the effect of large amounts of computation by sharing information with their doppelgängers computing the same function on different inputs, so an algorithm that makes use of quantum parallelism does the same. But, while the fictional effect is limited only by starship regulations that we may invent to suit the plot, quantum computers are limited by the laws of physics that govern quantum interference. Only certain types of parallel computation can be performed with the help of the multiverse in this way. They are the ones for which the mathematics of quantum interference happens to be just right for combining into a single history the information that is needed for the final result.
In such computations, a quantum computer with only a few hundred qubits could perform far more computations in parallel than there are atoms in the visible universe. At the time of writing, quantum computers with about ten qubits have been constructed. ‘Scaling’ the technology to larger numbers is a tremendous challenge for quantum technology, but it is gradually being met.
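The arithmetic behind that claim, using the commonly quoted order of magnitude of 10^80 atoms in the visible universe (an assumption here, not a figure from the text):

```python
# 300 qubits give 2**300 simultaneous instances of the register.
n_qubits = 300
parallel_instances = 2 ** n_qubits
atoms_in_visible_universe = 10 ** 80      # commonly quoted order of magnitude

print(parallel_instances > atoms_in_visible_universe)    # True
print(f"2^300 ~ 10^{len(str(parallel_instances)) - 1}")  # ~ 10^90
```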
I mentioned above that, when a large object is affected by a small influence, the usual outcome is that the large object is strictly unaffected. I can now explain why. For example, in the Mach–Zehnder interferometer, shown earlier, two instances of a single photon travel on two different paths. On the way, they strike two different mirrors. Interference will happen only if the photon does not become entangled with the mirrors – but it will become entangled if either mirror retains the slightest record that it has been struck (for that would be a differential effect of the instances on the two different paths). Even a single quantum of change in the amplitude of the mirror’s vibration on its supports, for instance, would be enough to prevent the interference (the subsequent merging of the photon’s two instances).
When one of the instances of the photon bounces off either mirror, its momentum changes, and hence by the principle of the conservation of momentum (which holds universally in quantum physics, just as in classical physics), the mirror’s momentum must change by an equal and opposite amount. Hence it seems that, in each history, one mirror but not the other must be left vibrating with slightly more or less energy after the photon has struck it. That energy change would be a record of which path the photon took, and hence the mirrors would be entangled with the photon.
Fortunately, that is not what happens. Remember that, at a sufficiently fine level of detail, what we crudely see as a single history of the mirror, resting passively or vibrating gently on its supports, is actually a vast number of histories with instances of all its atoms continually splitting and rejoining. In particular, the total energy of the mirror takes a vast number of possible values around the average, ‘classical’ one. Now, what happens when a photon strikes the mirror, changing that total energy by one quantum?
Oversimplifying for a moment, imagine just five of those countless instances of the mirror, with each instance having a different vibrational energy ranging from two quanta below the average to two quanta above it. Each instance of the photon strikes one instance of the mirror and imparts one additional quantum of energy to it. So, after that impact, the average energy of the instances of the mirror will have increased by one quantum, and there will now be instances with energies ranging from one quantum below the old average to three above. But since, at this fine level of detail, there is no autonomous history associated with any of those values of the energy, it is not meaningful to ask whether an instance of the mirror with a particular energy after the impact is the same one that previously had that energy. The objective physical fact is only that, of the five instances of the mirror, four have energies that were present before, and one does not. Hence, only that one – whose energy is three quanta higher than the previous average – carries any record of the impact of the photon. And that means that in only one-fifth of the universes in which the photon struck has the wave of differentiation spread to the mirror, and only in those will subsequent interference between instances of that photon that have or have not hit the mirror be suppressed.
With realistic numbers, that is more like one in a trillion trillion – which means that there is only a probability of one in a trillion trillion that interference will be suppressed. This is considerably lower than the probability that the experiment will give inaccurate results due to imperfect measuring instruments, or that it will be spoiled by a lightning strike.
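The five-instance toy model, made explicit (a sketch of the counting only, not of the physics):

```python
# Five instances of the mirror, with vibrational energies spread around the
# old average; the photon impact adds one quantum to each. Only energy values
# absent before the impact carry any record of it.
before = {-2, -1, 0, 1, 2}            # energies relative to the old average
after = {e + 1 for e in before}       # every instance gains one quantum
new_values = after - before           # energies that did not exist before

print(new_values)                     # {3}
print(len(new_values) / len(after))   # 0.2 -> one-fifth record the impact
# With realistic numbers the spread covers ~a trillion trillion values,
# so the recording fraction is ~1e-24 rather than one-fifth.
```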