Seeing Further

by Bill Bryson


  Imagine that you are located inside a spaceship through whose windows you can see the far distant stars. Put the spaceship in a spin. Through the windows you will see the distant stars accelerating past in the opposite sense to the spin, even though they are not acted upon by any forces. Newton’s first law is not true for a spinning observer – a much more complicated law holds. This undemocratic situation signalled that there was something incomplete and unsatisfactory about Newton’s formulation of the laws of motion. One of Einstein’s great achievements was to create a new theory of gravity in which all observers, no matter how they move, do find the laws of gravity and motion to take the same form.7 By incorporating this principle of ‘general covariance’, Einstein’s theory of general relativity completed the extension of the Copernican principle from outcomes to laws.

  OUTCOMES ARE DIFFERENT

  The simplicity and economy of the laws and symmetries that govern Nature’s fundamental forces are not the end of the story. When we look around us we do not observe the laws of Nature; rather, we see the outcomes of those laws. The distinction is crucial. Outcomes are much more complicated than the laws that govern them because they do not have to respect the symmetries displayed by the laws. By this subtle interplay, it is possible to have a world which displays an unlimited number of complicated asymmetrical structures yet is governed by a few, very simple, symmetrical laws. This is one of the secrets of the universe.

  Suppose we balance a ball at the apex of a cone. If we were to release the ball, the law of gravitation would determine its subsequent motion. Gravity has no preference for any particular direction in the universe; it is entirely democratic in that respect. Yet, when we release the ball, it will always fall in some particular direction, either because it was given a little push in one direction, or as a result of quantum fluctuations which do not permit an unstable equilibrium state to persist. So here, in the outcome of the falling ball, the directional symmetry of the law of gravity is broken. This teaches us why science is often so difficult. As observers, we see only the broken symmetries manifested as the outcomes of the laws of Nature; from them, we must work backwards to unmask the hidden symmetries behind the appearances.

  We can now understand the answers that we obtained from the different scientists we originally polled about the simplicity of the world. The particle physicist works closest to the laws of Nature themselves, and so is especially impressed by their unity, simplicity and symmetry. But the biologist, the economist, or the meteorologist is occupied with the study of the complex outcomes of the laws, rather than with the laws themselves. As a result, it is the complexities of Nature, rather than her laws, that impress them most.

  AMBIGUITIES BETWEEN LAWS AND OUTCOMES

  One of the most important developments in fundamental physics and cosmology over the past twenty years has been the steady dissolution of the divide between laws and outcomes. When the early quest for a theory of everything began, many thought that such a theory would uniquely and completely specify all the constants of physics and the structural features of the universe. There would be no room left for wondering about ‘other’ universes, or hypothetical changes to the structure of our observed universe. Remarkably, things did not turn out like that. Candidate theories of everything revealed that many of the features of physics and the universe which we had become accustomed to think of as programmed into the universe from the start in some unalterable way were nothing of the sort. The number of forces of Nature, their laws of interaction, the populations of elementary particles, the values of the so-called constants of Nature, the number of dimensions of space, and even whole universes, can all arise in quasi-random fashion in these theories. They are elaborate outcomes of processes that can have many different physically self-consistent results. There are fewer unalterable laws than we might think.

  This means that we have to take seriously the possibility that some features of the universe which we call fundamental may not have explanations in the sense that had always been expected. A good example is the value of the infamous cosmological constant which appears to drive the acceleration of the universe today. Its numerical value is very strange. It cannot so far be explained by known theories of physics. Some physicists hope that there will ultimately be a single theory of everything which will predict the exact numerical value of the cosmological constant that the astronomers need to explain their observations. Others recognise that there may not be any explanation of that sort to be found. If the value of the cosmological constant is a random outcome of some exotic symmetry-breaking process near the beginning of the universe’s expansion then all we can say is that it falls within the range of values that permit life to evolve and persist. This is a depressing situation to those who hoped to explain its value. However, it would be a strange (non-Copernican) universe that allowed us to determine everything that we want about it. We may just have to get used to the fact that there are some things we can predict and others that we can only measure. Here is a little piece of science faction to illustrate the point.

  Imagine someone in 1600 trying to convince Johannes Kepler that a theory of the solar system would not be able to predict the number of planets it contains. Kepler would have had none of it. He would have been outraged. This would have constituted an admission of complete failure. He believed that the beautiful Platonic symmetries of mathematics required the solar system to have a particular number of planets. For Kepler this would have been the key feature of such a theory. He would have rejected the idea that the number of planets had no part to play in the ultimate theory.

  Today, no planetary astronomer would expect any theory of the origin of the solar system to predict the number of planets. It would make no sense. This number is something that falls out at random as a result of a chaotic sequence of formation events and subsequent mergers between embryonic planetesimals. It is simply not a predictable outcome. We concentrate instead on predicting other features of the solar system so as to test the theory of its origin. Perhaps those who are resolutely opposed to the idea that quantities like the cosmological constant might be randomly determined, and hence unpredictable by the theory of everything, might consider how strange Kepler’s views about the importance of the number of planets now seem.

  DISORGANISED COMPLEXITIES

  Complexity, like crime, comes in organised and disorganised forms. The disorganised form goes by the name of chaos and has proven to be ubiquitous in Nature. The standard folklore about chaotic systems is that they are unpredictable. They lead to out-of-control dinosaur parks and frustrated meteorologists. However, it is important to appreciate the nature of chaotic systems more fully than the Hollywood headlines allow.

  Classical (that is, non-quantum mechanical) chaotic systems are not in any sense intrinsically random or unpredictable. They merely possess extreme sensitivity to ignorance. As Maxwell was again the first to recognise in 1873, any initial uncertainty in our knowledge of a chaotic system’s state is rapidly amplified in time. This feature might make you think it hopeless even to try to use mathematics to describe a chaotic situation. We are never going to get the mathematical equations for weather prediction 100 per cent correct – there is too much going on – so we will always end up being inaccurate to some extent in our predictions. But although that type of inaccuracy can contribute to unpredictability, it is not in itself a fatal blow to predicting the future adequately. After all, small errors in the weather equations could turn out to have an increasingly insignificant effect on the forecast as time goes on. In practice, it is our inability to determine the weather everywhere at any given time with perfect accuracy that is the major problem. Our inevitable uncertainties about what is going on in between weather stations leave scope for slightly different interpolations of the temperature and the wind motions in between their locations. Chaos means that those slight differences can produce very different forecasts about tomorrow’s weather.
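
  To see how quickly a small uncertainty can swamp a prediction, here is a minimal numerical sketch. It does not use the weather equations; it uses a toy chaotic map purely as a stand-in for a chaotic system. Two copies of the same system are started a hair’s breadth apart and iterated forward, and the gap between them is printed as it grows.

# Toy illustration (not a weather model): two trajectories of a simple
# chaotic map, x -> 4x(1 - x), started a tiny distance apart.  The gap
# between them grows roughly exponentially until it is as large as the
# variables themselves -- the "sensitivity to ignorance" described above.

def chaotic_map(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.400000000, 0.400000001   # initial states differing by one part in a billion
for step in range(1, 41):
    x, y = chaotic_map(x), chaotic_map(y)
    if step % 5 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")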

  An important feature of chaotic systems is that, although they become unpredictable when you try to determine the future from a particular uncertain starting value, there may be a particular stable statistical spread of outcomes after a long time, regardless of how you started out. The most important thing to appreciate about these stable statistical distributions of events is that they often have very stable and predictable average behaviours. As a simple example, take a gas of moving molecules (their average speed of motion determines what we call the gas ‘temperature’) and think of the individual molecules as little balls. The motion of any single molecule is chaotic because each time it bounces off another molecule any uncertainty in its direction is amplified exponentially. This is something you can check for yourself by observing the collisions of marbles or snooker balls. In fact, the amplification in the angle of recoil, θ, in the successive (the (n+1)st and nth) collisions of two identical balls is well described by a rule:

  θn+1 ≈ (d/r) × θn

  where d is the average distance between collisions and r is the radius of the balls. Even the minimal initial uncertainty in θ0 allowed by Heisenberg’s uncertainty principle is increased to exceed θ = 360 degrees after only about 14 collisions. So you can then predict nothing about its trajectory.
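
  Here is a quick back-of-envelope check of that claim, under stated assumptions: the rule above multiplies the angular uncertainty by d/r at every collision, and the next paragraph quotes d/r as roughly 200 for gas molecules. The starting uncertainty used below is only an illustrative stand-in for the Heisenberg-limited value, not a figure from the text.

# Iterate theta_{n+1} ~ (d/r) * theta_n and count collisions until the
# uncertainty exceeds a full turn.  Both numbers are assumptions made for
# illustration: d/r = 200 (the value quoted below for gas molecules) and an
# initial angular uncertainty of 3e-30 degrees as a stand-in for the
# Heisenberg-limited value.

d_over_r = 200.0
theta = 3e-30            # assumed initial uncertainty, in degrees

collisions = 0
while theta < 360.0:
    theta *= d_over_r
    collisions += 1

print(f"uncertainty exceeds a full turn after {collisions} collisions")
# With these illustrative numbers the answer is 14, consistent with the text.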

  The motions of gas molecules behave like a huge number of snooker balls bouncing off each other and the denser walls of their container. One knows from bitter experience that snooker exhibits sensitive dependence on initial conditions: a slight miscue of the cue-ball produces a big miss! Unlike the snooker balls, the molecules won’t slow down and stop. Their typical distance between collisions is about 200 times their radius. With this value of d/r the unpredictability grows 200-fold at each close molecular encounter. All the molecular motions are individually chaotic, just like the snooker balls, but we still have simple rules like Boyle’s Law governing the pressure P, volume V, and temperature T – the averaged properties8 – of a confined gas of molecules:

  PV/T = constant

  The lesson of this simple example is that chaotic systems can have stable, predictable, long-term, average behaviours. However, it can be difficult to predict when they will do so. The mathematical conditions that are sufficient to ensure it are often very difficult to prove. You usually just have to explore numerically to discover whether the computation of time averages converges in a nice way or not.9
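
  Here is the sort of numerical exploration that last sentence has in mind, as a minimal sketch: the iterates of a simple chaotic map (the logistic map at full strength, used here purely as a stand-in for ‘a chaotic system’, not a model discussed in the text) jump around unpredictably, yet their running time average settles down to a stable value.

# Individual iterates of x -> 4x(1 - x) are chaotic, but the running time
# average converges nicely (to a value close to 0.5).

x = 0.2345
running_sum = 0.0
for n in range(1, 1_000_001):
    x = 4.0 * x * (1.0 - x)
    running_sum += x
    if n in (100, 10_000, 1_000_000):
        print(f"time average after {n:>9,} steps: {running_sum / n:.5f}")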

  Considerable impetus was imparted to the study and understanding of this type of chaotic unpredictability and its influence on natural phenomena by theoretical biologists like Robert May (later to become the fifty-eighth President of the Royal Society in 2000) and George Oster, together with the mathematician James Yorke. They identified simple features displayed by wide classes of difference equation relating the (n+1)st to the nth state of a system as it made the transition from order to chaos.10
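
  The best-known example of such a difference equation is the logistic map, xn+1 = r xn(1 − xn), which May used to model fluctuating animal populations. The sketch below is my illustration rather than anything taken from the text; it shows the long-run behaviour shifting from a steady state, through periodic cycles, to chaos as the growth parameter r is raised.

# The logistic difference equation x_{n+1} = r * x_n * (1 - x_n):
# raise the growth parameter r and watch order give way to chaos.

def long_run(r, x=0.5, warmup=1000, keep=8):
    """Iterate the map past its transient, then return the next few values."""
    for _ in range(warmup):
        x = r * x * (1.0 - x)
    values = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        values.append(round(x, 4))
    return values

for r in (2.8, 3.2, 3.5, 3.9):
    print(f"r = {r}: {long_run(r)}")
# r = 2.8 settles to a single value, 3.2 to a 2-cycle, 3.5 to a 4-cycle,
# and 3.9 wanders chaotically.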

  ORGANISED COMPLEXITIES

  Among complex outcomes of the laws of Nature, the most interesting are those that display forms of organised complexity. A selection of these is displayed in the diagram on the next page in terms of their size, gauged by their information storage capacity – which is just how many binary digits are needed to specify them – versus their ability to process information, which is simply how quickly they can change one list of numbers into another list.

  As we proceed up the diagonal, increasing information storage capability grows hand in hand with the ability to transform that information into new forms. Organised complexity grows. Structures are typified by the presence of feedback, self-organisation and non-equilibrium behaviour. Mathematical scientists in many fields are searching for new types of ‘by-law’ or ‘principle’ which govern the existence and evolution of different varieties of complexity. These rules will be quite different from the ‘laws’ of the particle physicist. They will not be based upon symmetry and invariance, but upon principles of probability and information processing. Perhaps the second law of thermodynamics is as close as we have got to discovering one of this collection of general rules that govern the development of order and disorder.

  The defining characteristic of the structures in the diagram below is that they are more than the sum of their parts. They are what they are, and display the behaviour that they do, not because they are made of atoms or molecules (which they all are), but because of the way in which their constituents are organised. It is the circuit diagram of the neural network that is the root of its complex behaviour. The laws of electromagnetism alone are insufficient to explain the working of a brain. We need to know how it is wired up and its circuits inter-connected. No theory of everything that the particle physicists supply us with is likely to shed any light upon the complex workings of the human brain or a turbulent waterfall.

  ON THE EDGE OF CHAOS

  The advent of small, inexpensive, powerful computers with good interactive graphics has enabled large, complex, and disordered situations to be studied observationally – by looking at a computer monitor. Experimental mathematics is a new tool. A computer can be programmed to simulate the evolution of complicated systems, and their long-term behaviour observed, studied, modified and replayed. By these means, the study of chaos and complexity has become a multidisciplinary subculture within science. The study of the traditional, exactly soluble problems of science has been augmented by a growing appreciation of the vast complexity expected in situations where many competing influences are at work. Prime candidates are provided by systems that evolve in their environment by natural selection, and, in so doing, modify those environments in complicated ways.

  As our intuition about the nuances of chaotic behaviour has matured by exposure to natural examples, novelties have emerged that give important hints about how disorder often develops from regularity. Chaos and order have been found to coexist in a curious symbiosis. Imagine a very large egg-timer in which sand is falling, grain by grain, to create a growing sand pile. The pile evolves under the force of gravity in an erratic manner. Sandfalls of all sizes occur, and their effect is to maintain the overall gradient of the sand pile in a temporary equilibrium, always just on the verge of collapse. The pile steadily steepens until it reaches a particular slope and then gets no steeper. This self-sustaining process was dubbed ‘self-organising criticality’ by its discoverers, Per Bak, Chao Tang and Kurt Wiesenfeld, in 1987. The adjective ‘self-organising’ captures the way in which the chaotically falling grains seem to arrange themselves into an orderly pile. The title ‘criticality’ reflects the precarious state of the pile at any time. It is always about to experience an avalanche of some size or another. The sequence of events that maintains its state of large-scale order is a slow local build-up of sand somewhere on the slope, then a sudden avalanche, followed by another slow build-up, a sudden avalanche, and so on. At first, the infalling grains affect a small area of the pile, but gradually their avalanching effects increase to span the dimension of the entire pile, as they must if they are to organise it.

  At a microscopic level, the fall of sand is chaotic, yet the result in the presence of a force like gravity is large-scale organisation. If there is nothing peculiar about the sand11 that renders avalanches of one size more probable than all others, then the frequency with which avalanches occur is proportional to some mathematical power of their size (the avalanches are said to be ‘scale-free’ processes). There are many natural systems – like earthquakes – and man-made ones – like some stock market crashes – where a concatenation of local processes combines to maintain a semblance of equilibrium in this way. Order develops on a large scale through the combination of many independent chaotic small-scale events that hover on the brink of instability. Complex adaptive systems thrive in the hinterland between the inflexibilities of determinism and the vagaries of chaos. There, they get the best of both worlds: out of chaos springs a wealth of alternatives for natural selection to sift; while the rudder of determinism sets a clear average course towards islands of stability.
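
  A minimal simulation conveys the idea. This is my sketch of the standard Bak–Tang–Wiesenfeld model, with an arbitrary grid size and grain count, not code from the text: grains are dropped one at a time, any site holding four grains topples one grain onto each of its neighbours, and the number of topplings set off by a single dropped grain is the avalanche size.

import random
from collections import Counter

N = 25                                    # grid is N x N; the size is an arbitrary choice
grid = [[0] * N for _ in range(N)]

def drop_grain():
    """Drop one grain at a random site, relax the pile, return the avalanche size."""
    i, j = random.randrange(N), random.randrange(N)
    grid[i][j] += 1
    size = 0
    unstable = [(i, j)]
    while unstable:
        i, j = unstable.pop()
        if grid[i][j] < 4:
            continue                      # already relaxed by an earlier toppling
        grid[i][j] -= 4                   # topple: shed one grain to each neighbour
        size += 1
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < N and 0 <= nj < N:
                grid[ni][nj] += 1
                unstable.append((ni, nj))
            # grains pushed off the edge are lost, as in the egg-timer picture
    return size

for _ in range(30_000):                   # let the pile build up to its critical slope
    drop_grain()

tally = Counter(drop_grain() for _ in range(60_000))
for s in (1, 2, 4, 8, 16, 32, 64):
    print(f"avalanches of size {s:>3}: {tally[s]}")
# Small avalanches vastly outnumber big ones, but no single size dominates:
# the counts fall off roughly as a power of the size -- scale-free avalanches.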

  Originally, its discoverers hoped that the way in which the sandpile organised itself might be a paradigm for the development of all types of organised complexity. This was too optimistic. But it does provide clues as to how many types of complexity organise themselves. The avalanches of sand can represent extinctions of species in an ecological balance, traffic flow on a motorway, the bankruptcies of businesses in an economic system, earthquakes or volcanic eruptions in a model of the pressure equilibrium of the Earth’s crust, and even the formation of ox-bow lakes by a meandering river. Bends in the river make the flow faster there, which erodes the bank, leading to an ox-bow lake forming. After the lake forms, the river is left a little straighter. This process of gradual build-up of curvature followed by sudden ox-bow formation and straightening is how a river on a flat plain ‘organises’ its meandering shape.

  It seems rather remarkable that all these completely different problems should behave like a tumbling pile of sand. A picture of Richard Solé’s, showing a dog being taken for a bumpy walk, reveals the connection.12 If we have a situation where a force is acting – for the sand pile it is gravity, for the dog it is the elasticity of its leash – and there are many possible equilibrium states (valleys for the dog, stable local hills for the sand), then we can see what happens as the leash is pulled. The dog moves slowly uphill and then is pulled swiftly across the peak to the next valley, begins slowly climbing again, and then jumps across. This staccato movement of slow build-up and sudden jump, time and again, is what characterises the sandpile with its gradual buildup of sand followed by an avalanche. We can see from the picture that it will be the general pattern of behaviour in any system with very simple ingredients.

 
