
The Future of Everything: The Science of Prediction


by David Orrell


  Then again, while nature seems loath to repeat herself, we may do the job for her. The resurrected virus now exists at the U.S. Centers for Disease Control and, in a theoretically weaker version, at a number of other laboratories, and the genetic sequence has been published. This virus is the biological equivalent of an atomic bomb. Of course, it will never escape. Not like SARS, or anthrax, or plague, or Ebola—all of which have recently found their way out of secure laboratories. . . .

  Our impression of whether a being has free will is often based on the degree to which we find its behaviour predictable. If a system is completely predictable or completely random, then we tend to assume that it is the passive subject of external forces. But if it operates somewhere in between, so that its behaviour has a kind of discernible pattern and order but is still hard to predict, then we believe that it is acting independently. In other words, the autonomy that we assume to be present is a measure of the system’s complexity.

  COMPLEXITY

  Complexity is a branch of science that presents a much stronger challenge to prediction than chaos does. Like most of chaos theory, it developed only in the latter part of the twentieth century with the arrival of fast computers. It began when the Polish mathematician Stanislaw Ulam, following up a 1948 lecture by John von Neumann on the mechanistic modelling of living organisms, proposed the idea of cellular automata. Imagine that you have a two-dimensional grid, divided equally into squares (the cells) that are either black or white. The evolution of the system takes place in discrete steps. In each step, a rule is applied, and that rule determines the new colour of each cell. The rules are typically based on the state of adjacent cells. For example, in a system proposed by the mathematician John Conway in 1970, a black cell survives (remains black) in the next step only if it has exactly two or three black neighbours (out of a maximum of eight), and a white cell turns black (a new cell is born) only if it has exactly three black neighbours. In every other case, the cell is white in the next step.
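
  Conway’s rule is simple enough to state in a few lines of code. What follows is a minimal sketch in Python, assuming a small square grid in which everything beyond the edge is treated as white; the function name and the 0/1 grid representation are illustrative choices, not anything from the original Life program.

```python
def life_step(grid):
    """Apply Conway's rule once to a square grid of 0s (white) and 1s (black)."""
    n = len(grid)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Count the black cells among the (up to) eight neighbours,
            # treating anything beyond the edge of the grid as white.
            neighbours = sum(
                grid[i + di][j + dj]
                for di in (-1, 0, 1)
                for dj in (-1, 0, 1)
                if (di, dj) != (0, 0) and 0 <= i + di < n and 0 <= j + dj < n
            )
            if grid[i][j] == 1:
                # A black cell survives only with exactly two or three black neighbours.
                new[i][j] = 1 if neighbours in (2, 3) else 0
            else:
                # A white cell turns black only with exactly three black neighbours.
                new[i][j] = 1 if neighbours == 3 else 0
    return new
```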

  This system is known as Life, for reasons that become evident when you see a simulation (there are many available on the Internet).22 From a given initial configuration of white and black cells, the screen seems to come alive with strange, shape-shifting forms that are constantly being born, dying, or interacting with one another. Life was invented as a kind of mathematical amusement (it makes an entertaining screen saver), but it has since been the subject of much concerted study. Similar automata have been used to describe natural phenomena that also depend on local effects, such as forest fires or earthquakes, as well as more traditional systems, like fluid flow.

  The mathematician Stephen Wolfram, who has done much to advance and popularize this area of research, classified the behaviour of such cellular automata into four basic categories:

  Class I: All activity dies out

  Class II: Stable or periodic patterns

  Class III: Apparently random

  Class IV: Situated on the border between order and chaos, with no enduring pattern that repeats in a predictable way

  The Life system is in the last, and most interesting, category. For these systems, there appears to be a huge discrepancy between the simplicity of the rules and the complexity of the resulting system. In particular, they have a property known as computational irreducibility. Cellular automata can be viewed as performing a complicated calculation: given an initial condition, they will describe the evolution of the system, just as an ODE describes the falling of an idealized stone. However, with Class IV automata, the calculation cannot be speeded up in any way. It’s not possible to plug the initial conditions into an equation and crank out the answer. The only way to determine the evolution of the system is to run it and see what happens.

  This means that Class IV automata are inherently unpredictable. Again, there is a direct correspondence between a system being interesting—that is, appearing to have a life of its own—and our inability to predict its evolution. Furthermore, according to Wolfram’s principle of computational equivalence, once a system has a certain level of complexity—isn’t obviously static or periodic or in any other way boring—then it is computationally irreducible.23

  The rules that define cellular automata can best be described as local or social in nature. Typically, the state of a cell in the next iteration is determined by the current state of neighbouring cells. In Class IV automata, these local rules produce systems that are inherently unpredictable, in the sense that one cannot know the future state of a given cell, but nevertheless have a kind of character, which distinguishes them from automata with other rules. The Life game, for example, has a recognizable zoology of figures, like “gliders” that traverse the screen as if with a mind of their own. These distinctive features are referred to as “emergent properties,” which is another way of saying that they just happen. They cannot be predicted or computed from a knowledge of the rules alone—you have to run the system to see them develop. The systems therefore have an intrinsically dualistic nature: they are based on simple, logical rules, but the resulting behaviour is neither simple nor very logical.
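
  To make the idea of an emergent figure concrete, here is a short continuation of the earlier sketch (it reuses the life_step function defined above, and the five-cell starting pattern is the standard glider). After four applications of the rule, the same shape reappears shifted one cell diagonally, behaviour that is nowhere written into the rule itself.

```python
# Place the standard five-cell glider near one corner of an otherwise
# empty 10-by-10 grid; coordinates are (row, column).
N = 10
grid = [[0] * N for _ in range(N)]
for i, j in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[i][j] = 1

for _ in range(4):          # four steps of the rule
    grid = life_step(grid)  # life_step is the sketch given earlier

live = [(i, j) for i in range(N) for j in range(N) if grid[i][j] == 1]
print(live)  # the same glider shape, moved one cell down and one cell to the right
```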

  Complex systems are like highly decentralized democracies: their decisions are the end result of a large number of local choices. ODEs, on the other hand, are like dictatorships: the physical system passively obeys the laws from central command. In one, the information flows from the bottom up, while in the other, it flows from the top down. Complexity isn’t rocket science—it’s the opposite of rocket science.

  This poses a challenge to numerical prediction because even if a system is deterministic and governed by simple rules, there may exist no mathematical model or set of equations to tell you how it’s going to behave—no Apollo’s arrow to fly magically into its future. Just as the square root of two cannot be expressed as a ratio of two integers, so a complex system cannot be expressed or predicted by any set of equations. If you start the Life game at a certain initial condition, it will always evolve the same way, but there is no way to tell what a particular cell will be doing after a thousand steps, except by running the game. This is like the weather forecaster telling you that he will have next week’s forecast ready by next week. Furthermore, because this unpredictability is not a result of sensitivity to initial condition, it cannot be fixed by improving observations. Often, such systems exhibit only moderate sensitivity to small changes in the initial condition, and even cellular automata that model fluid flow are typically not very chaotic.24 Rather, the unpredictability is a result of the inherent complexity of the system itself.
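
  The practical meaning of this can be spelled out in the same toy setting (again reusing the life_step sketch, with an arbitrary random starting grid): to learn the colour of one particular cell a thousand steps from now, there is nothing for it but to apply the rule a thousand times.

```python
import random

random.seed(1)  # an arbitrary fixed seed, used only so the run is repeatable
N = 20
grid = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]

for _ in range(1000):       # no shortcut: the rule is simply applied 1,000 times
    grid = life_step(grid)  # life_step is the sketch given earlier

print(grid[10][10])  # the colour of one chosen cell, known only by running the game
```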

  This would not be a concern if it were limited to games on computer screens, but cellular automata can emulate a broader range of physical phenomena than ODEs. Since many physical and biological systems are governed by local interactions, there is no a priori reason to assume they can be predicted by the use of equations. As Wolfram writes, “It has normally been assumed that with our powers of mathematics and general thinking the computations we use to make predictions must be almost infinitely more sophisticated than those that occur in most systems in nature. . . . This assumption is not correct. . . . For many systems no systematic prediction can be done.”25 Water is one example of a substance in which simple interactions at a microscopic level lead to complex macroscopic properties. In high school science, a water molecule is typically represented by a ball-and-stick model, with an oxygen atom connected to two hydrogen atoms. The molecule’s electrical polarization puts it in constant communication with its neighbours, though, resulting in complex behaviour that could never be deduced from a knowledge of the molecular properties alone. To quote the writer Douglas Coupland, “You can’t look at H2O and predict snow or glaciers or hail or tsunamis, fog, or just about anything else.”26

  While cellular automata can be used to model physical systems, they can usually do so only in a generic sense. Their bottom-up approach relies on a detailed knowledge of local rules, which can never be perfectly known except for idealized, abstract systems. The advantage of equations is that they can approximate the behaviour of a complicated system with simple top-down laws, omitting the details. For this reason, the models used to predict atmospheric, biological, or economic systems are usually based on equations.

  Chaos and complexity are often lumped together or confused, but the two are quite different. As mentioned above, complex systems may not be sensitive to initial condition, and conversely, chaotic systems need not be complex. Proof of the latter is the dough-folding shift map, shown in Appendix I.
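
  For readers who prefer code to appendices, here is a minimal sketch of that contrast, assuming the usual textbook form of the shift map, in which a number is doubled and the integer part discarded (Appendix I presents the author’s dough-folding version). The system is a single number, about as far from complex as one can get, yet two nearby starting points separate exponentially fast.

```python
# The shift map x -> 2x mod 1, applied to two starting points a billionth apart.
x, y = 0.3, 0.3 + 1e-9
for _ in range(25):
    x, y = (2 * x) % 1, (2 * y) % 1

# The gap has grown by a factor of roughly 2**25: from a billionth of the
# unit interval to a few percent of it.
print(abs(x - y))
```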

  Of course, if one were really trying to model the making of bread, randomness in the initial condition would be overtaken by other effects: minor variations in the kneading procedure, the texture of the dough, the type of yeast, the fact that the whole thing is put in the oven, and so on. In a word, complications. As the publisher Elbert Hubbard said, life is just one damned thing after another. In a model of the weather, these complications might be referred to as stochastic effects; in a model of the cell, external noise; in economics, random shocks. They arise not from the internal dynamics of the model but from the rest of the world that the model has left out. It is important that models not be confused with this far richer reality.

  I will use the term “uncomputable” to describe systems that cannot be accurately modelled using equations without practically recreating the entire system. It may or may not come as a surprise to the reader that physical or biological systems are generally uncomputable, but it certainly comes as a surprise to a fair number of scientists, who have been trained by their institutions to see natural phenomena, from a cloud to a thought, as mathematically tractable. Newton’s amazing success at modelling gravity with equations may have inadvertently blinkered subsequent generations. As the cosmologist Hermann Bondi wrote, “His solution of the problem of motion in the solar system was so complete, so total, so precise, so stunning, that it was taken for generations as the model of what any decent theory should be like, not just in physics, but in all fields of human endeavour. It took a long time before one began to understand—and the understanding is not yet universal—that his genius selected an area where such perfection of solution was possible.”27

  The predictive sciences have developed around the use of equations such as ODEs not because they are an excellent match for the systems studied, but because they can be solved using mathematics. As we’ll see in the next few chapters, we run into severe problems when we use them to model complex, real-world systems. The inherent uncomputability of such systems means that it is extremely hard to do better than the simplest naïve approaches. Models become more and more refined, but as more detail is added, the degree of uncertainty in the parameters explodes. Because the system cannot be reduced to first principles, we must make subjective choices about parameter values and the structure of the equations. The models often get better at fitting what has happened in the past, but they don’t get much better at making predictions.
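
  A toy illustration of this last point (mine, not the author’s): fit polynomials of increasing degree to a short, noisy “history” and then extrapolate a little way beyond it. Extra parameters reliably improve the fit to the past; they typically do little for the forecast, and often make it worse.

```python
import numpy as np

rng = np.random.default_rng(0)             # arbitrary seed for a repeatable example
t_past = np.arange(10.0)                   # ten past observations
history = np.sin(0.5 * t_past) + 0.1 * rng.standard_normal(10)
t_future, truth = 12.0, np.sin(0.5 * 12.0) # the value we would like to predict

for degree in (1, 3, 6):
    coeffs = np.polyfit(t_past, history, degree)  # a "more refined" model each time
    fit_rms = np.sqrt(np.mean((np.polyval(coeffs, t_past) - history) ** 2))
    forecast_error = abs(np.polyval(coeffs, t_future) - truth)
    print(f"degree {degree}: past fit {fit_rms:.3f}, forecast error {forecast_error:.3f}")
```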

  COMPLICATIONS

  One winter weekend, when I was an undergraduate student in physics, I was making bread at home. I’d learned the techniques at a summer job in a bakery. Also, my great-grandfather was a baker, so, as they say, it’s in my blood. I was kneading the dough and wondering whether the temperature was high enough to allow the bread to rise in a reasonable time. And since my mind was full of physics courses, it occurred to me that in principle, I could build a detailed model that took into account the temperature, the biological state of the growing yeast, the exact state of the dough, and all the other factors, and predict the outcome of the bread experiment using just the laws of physics.

  As I pondered this possibility, I realized that an accurate model would have to be as complicated as the system itself. And as I continued to knead, another part of my brain took over—a part that said, Don’t be stupid. The dough feels good. (Left brains build detailed mathematical models of systems that involve the interplay of atmospheric and biological components; right brains just sniff the air or give a squeeze and come to their own conclusions.)

  Rolling the moist dough over the floured kitchen surface, I realized that while building models was an impressive and fascinating task, it could also have negative consequences, because it interferes with a person’s direct experience of an event. Instead of enjoying the smell of the bread and the sensation of forming the dough, I would become obsessed with controlling the timing and the temperature and the measurements of the ingredients. It seemed that my physics education was acting like a kind of indoctrination, forcing me to see the world in this very controlling way. Partly as a result of this dramatic flash of insight, I dropped the physics portion of the program and switched to mathematics, which leaves applications up to the user—and also allowed me to take film studies as an option.

  When he wasn’t laying down the foundations of Western European thought, Aristotle also developed the theory of how to write screenplays. His Poetics, standard reading in film courses, describes the basic structure of a three-act Hollywood movie. Every story has a beginning, a middle, and an end, but, as Jean-Luc Godard quipped, not necessarily in that order. The first act sets up the characters and the situation—the initial conditions. In the second act, the arc of the story is diverted by friends, enemies, accidents—the complications. There is a fluid buildup of tension and conflict, as forces grow and align against one another. In the third act, the drama is resolved and peace is restored, but—if the story works—in a not altogether predictable way.

  Life without a little conflict, complexity, or complication is dull. A common criticism of Hollywood movies is that they are too formulaic; we’ve seen it all before, and we know exactly what is going to happen. Part of the enduring appeal of Shakespeare’s plays is that they elude tidy formulation. In the original story on which Hamlet is based, the action begins when Hamlet is only an adolescent and it becomes publicly known that his uncle murdered his father. Hamlet plays the fool for several years, pretending to be mad so that his uncle does not see him as a threat, while secretly waiting until the time is right for him to avenge his father and claim the throne. In Shakespeare’s version, the action is compressed into days, and now Hamlet’s feigned madness only serves to draw attention to him. The external, transparent motivation is replaced by inner compulsions; the characters seem like real people, not just cogs in a machine.28

  This points to the fundamental danger of deterministic, objective science: like a corny, over-formulaic film, it imagines and presents the world as a predictable object. It has no sense of the mystery, magic, or surprise of life. A key difference between a living thing and an object is predictability: kick a stone, and you know what will happen; swat at a bee, and things get more complicated. By objectifying nature, we kill it and demystify it, as surely as Apollo’s arrows ran through Python. Galileo’s argument against the sterility and “immutability” of the Aristotelian world-view can be turned around. As the psychiatrist R. D. Laing wrote, “Galileo’s program offers us a dead world: Out go sight, sound, taste, touch, and smell, and along with them have since gone esthetic and ethical sensibility, values, quality, soul, consciousness, spirit.”29 A consequence is that scientists have come to believe that our weather, our economy, and even our own bodies should be as predictable as a falling stone.

  Indeed, the attentive (or merely skeptical) reader may have noticed that in this book about prediction, the only thing we have so far shown how to predict is the motion of objects like planets, stones, and apples (even bread dough is beyond us). For simple systems, it is possible to write down equations that accurately describe their motion. So what techniques are available when we want to predict a real system we care about, like the weather? Pretty much the same thing. We write out sets of equations, code them in a computer, measure the initial conditions, and launch. Does it work? Not as well as with falling objects. One could say that Newton’s apple is the low-hanging fruit of predictability. Real-world systems, even ones that most people wouldn’t consider to be alive (such as the weather), appear to be uncomputable. As we’ll see in the next three chapters:

  Predictive models are based on sets of equations. These attempt to simulate atmospheric, biological, or economic systems using a top-down approach that omits details, is amenable to mathematical analysis, and tells an understandable story.

  However, the underlying systems cannot be reduced to equations. They are based on local rules, and their global, “emergent” properties cannot be computed.

  Models of these systems tend to be sensitive to changes in parameters. The models can be adjusted to fit past data, but this does not mean they can predict the future.

  More data and bigger computers do not necessarily help.

  Adding more detail often gives diminishing returns, because the number of unknown parameters explodes.

  Statistical methods can sometimes be of use. However, these are often based on vague correlations, rely on the future’s resembling the past, and do not provide a cause-and-effect explanation.

  Even simple models can sometimes be used to make predictions. These usually take the form of subjective remarks or warnings, rather than precise forecasts.

  The history of prediction presented so far is a low-resolution model that picks up only the coarse features. It addresses our impulse to explain the world in terms of linear cause-and-effect relationships. Apollo (if the stories were true) begat Pythagoras begat Plato begat Aristotle begat Kepler begat Galileo begat Newton begat Einstein. Of course, it wasn’t really like that. Context matters. None of those people, demi-gods, or gods existed in a vacuum. They all had colleagues, societies, families, wives (but no husbands—the oracle’s sex change, as Ralph Abraham called it, was not reversed).30 However, telling the story in these simple terms helps us to organize and understand what happened, to see where we are coming from. The same linear, cause-and-effect method doesn’t always work when we go in the other direction and try to predict the future.

 
