The Future of Everything: The Science of Prediction

by David Orrell


  THE GLOBAL CAPITAL MODEL

  Chartists and analysts both focus on particular assets or asset classes. On a grander scale, major private and governmental banks, some economic-forecasting firms, and institutions such as the Organization for Economic Co-operation and Development (OECD) have developed large econometric models that attempt, in the style of Jevons, to simulate the entire economy by aggregating over individuals. Their aim is to make macro-economic forecasts of quantities such as gross domestic product (GDP), which is a measure of total economic output, and to predict recessions and other turning points in the economy, which are of vital interest to companies or governments. The models are similar in principle to those used in weather forecasting or biology, but they involve hundreds or sometimes thousands of economic variables, including tax rates, employment, spending, measures of consumer confidence, and so on.

  In these models, like the others, the variables interact in complex ways with multiple feedback loops. An increase in immigration may cause a temporary rise in unemployment, but over time, immigration will grow the economy and create new jobs, so unemployment actually falls. This may attract more immigrants, in a positive feedback loop, or heighten social resistance to immigration from those already there—negative feedback. The net effect depends on a myriad of local details, such as what each immigrant actually does when he arrives. The model equations therefore represent parameterizations of the underlying complex processes, and they attempt to capture correlations between variables, either measured or inferred from theory. Again, the combination of positive and negative feedback loops tends to make the equations sensitive to changes in parameterization. The model is checked by running it against historical data from the past couple of decades and comparing its predictions with actual results. The parameters are then adjusted to improve the performance, and the process is repeated until the model is reasonably consistent with the historical data.

  While the model can be adjusted to predict the past quite well—there is no shortage of knobs to adjust—this doesn’t mean that it can predict the future. As an example, the black circles in figure 6.2 show annual growth in GDP for the G7 countries (the United States, Japan, Germany, France, Italy, the United Kingdom, and Canada). The white circles are the OECD forecasts, made a year in advance, and represent a combination of model output and the subjective judgment of the OECD secretariat. The forecast errors, which have standard deviation 0.95, are comparable in magnitude to the fluctuations in what is being forecast, with standard deviation 1.0. The situation is analogous to the naïve “climatology” forecast in weather prediction, where the forecast error is exactly equal to the natural fluctuations of the weather.

  FIGURE 6.2. GDP growth for the G7 countries, plotted against the OECD one-year predictions, for the period 1986 to 1998. Standard deviation of errors is 0.95, that of the GDP growth is 1.0.31
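
  To make the comparison concrete, the sketch below computes the two numbers quoted in the caption—the spread of the forecast errors and the spread of the growth figures themselves—using invented values that stand in for the OECD series. When the two spreads are of similar size, the forecast adds little beyond simply guessing the long-run average every year.

```python
# A minimal sketch of the comparison in figure 6.2, using made-up numbers in
# place of the OECD data: if the standard deviation of the forecast errors is
# about as large as the standard deviation of the series itself, the forecast
# has little skill over a naive "climatology" guess.

import statistics

actual   = [2.1, 3.4, 3.0, 1.1, -0.3, 1.8, 2.5, 2.9, 2.2, 3.1]  # GDP growth (%)
forecast = [2.8, 2.6, 3.3, 2.4,  1.5, 0.9, 2.0, 2.3, 2.7, 2.4]  # year-ahead forecasts

errors = [f - a for f, a in zip(forecast, actual)]

sd_errors = statistics.pstdev(errors)   # spread of the forecast errors
sd_actual = statistics.pstdev(actual)   # natural spread of the series itself

print(f"std dev of forecast errors: {sd_errors:.2f}")
print(f"std dev of GDP growth:      {sd_actual:.2f}")
# When these are comparable (0.95 vs. 1.0 in the figure), the model forecast
# performs no better than predicting the historical average each year.
```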

  These results are not unique to the OECD, but are typical of the performance of such forecasts, which routinely fail to anticipate turning points in the economy.32 As The Economist noted in 1991, “The failure of virtually every forecaster to predict the recent recessions in America has generated yet more skepticism about the value of economic forecasts.”33 And again in 2005, “Despite containing hundreds of equations, models are notoriously bad at predicting recessions.”34 In fact, if investors used econometric models to predict the prices of assets and gamble on the stock or currency markets, they would actually lose money.35 Consensus among an ensemble of different models is no guarantor of accuracy: economic models agree with one another far more often than they do with the real economy.36 Nor does increasing the size and complexity of the model make results any better: large models do no better than small ones.37 The reason is that the more parameters a model uses, the harder it is to find the right value for each. As the physicist Joe McCauley put it, “The models are too complicated and based on too few good ideas and too many unknown parameters to be very useful.”38 Of course, the parameters can be adjusted and epicycles added (just as the ancients did with the Greek Circle Model) until the model agrees with historical data. But the economy’s past is no guide to its future.

  Similar models are used to estimate the impact of policy changes such as interest-rate hikes or tax changes—but again, the results are sensitive to the choices of the modeller and are prone to error. In one 1998 study, economist Ross McKitrick ran two simulations of how Canada’s economy would respond to an average tax cut of 2 percent, with subtle differences in parameterization. One implied that the government would have to cut spending by 27.7 percent, while the other implied a cut of only 5.6 percent—a difference of almost a factor of five. In the 1990s, models were widely used to assess the economic effects of the North American Free Trade Agreement (NAFTA). A 2005 study by Timothy Kehoe, however, showed that “the models drastically underestimated the impact of NAFTA on North American trade, which has exploded over the past decade.”

  In a way, the poor success rate of economic forecasting again seems to confirm the hypothesis that markets are efficient. As Burton Malkiel argued, “The fact that no one, or no technique, can consistently predict the future represents . . . a resounding confirmation of the random-walk approach.”39 It also raises the question of why legions of highly paid professionals—including a large proportion of mathematics graduates—are employed to chart the future course of the economy. And why governments and businesses would follow their advice.

  RATIONAL ECONOMISTS

  Now, a non-economist might read the above and ask, in an objective, rational way, “Can modern economic theory be based on the idea that I, and the people in my immediate family, are rational investors? Ha! What about that piece of land I inherited in a Florida swamp?” Indeed, the EMH sounds like a theory concocted by extremely sober economists whose idea of “irrational behaviour” would be to order an extra scoop of ice cream on their pie at the MIT cafeteria. Much of its appeal, however, lay in the fact that it provided useful tools to assess risk. As Bachelier pointed out, the fact that the markets are unpredictable does not mean that we cannot calculate risks or make wise investments. A roll of the dice is random, but a good gambler can still know the odds. The EMH made possible a whole range of sophisticated probabilistic financial techniques that are still taught and used today. These include the capital asset pricing model, modern portfolio theory, and the Black-Scholes formula for pricing options.

  The capital asset pricing model was introduced by the American economist William F. Sharpe in the 1960s as a way to value a financial asset by taking into account factors such as the asset’s risk, as measured by the standard deviation of past price fluctuations. It provides a kind of gold standard for value investors. The aim of modern portfolio theory, developed by Harry Markowitz, was to engineer a portfolio that would control the total amount of risk. It showed that portfolio volatility can be reduced by diversifying into holdings that have little correlation. (In other words, don’t put all your eggs in the same basket, or even similar baskets.) Each security is assigned a number β, which describes how it moves with the market as a whole. A β of 1 implies that the asset fluctuates with the rest of the market, a β of 2 means its swings tend to be twice as large, and a β of 0.5 means it is half as volatile. A portfolio with securities that tend to react in different ways to a given event will result in less overall volatility.
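
  As a rough illustration of how β is estimated in practice, one can regress an asset’s returns on the market’s returns: β is the covariance of the two series divided by the variance of the market. The return series in the sketch below are invented, chosen so that the asset swings about twice as hard as the market.

```python
# Estimating beta from return series (the numbers are illustrative, not data):
# beta = covariance(asset, market) / variance(market).

import statistics

market_returns = [0.01, -0.02, 0.015, 0.030, -0.01, 0.005, 0.020, -0.015]
asset_returns  = [0.02, -0.05, 0.030, 0.055, -0.02, 0.010, 0.045, -0.030]

mean_m = statistics.mean(market_returns)
mean_a = statistics.mean(asset_returns)

# sample covariance of the two series
cov = sum((m - mean_m) * (a - mean_a)
          for m, a in zip(market_returns, asset_returns)) / (len(market_returns) - 1)
var_m = statistics.variance(market_returns)   # sample variance of the market

beta = cov / var_m
print(f"beta ≈ {beta:.2f}")   # about 2: the asset's swings are roughly twice the market's
```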

  The Black-Scholes method is a clever technique for pricing options (which are financial instruments that allow investors to buy or sell a security for a fixed price at some time in the future). Aristotle’s Politics describes how the philosopher Thales predicted, on the basis of astrology, that the coming harvest would produce a bumper olive crop. He took out an option with the local olive pressers to guarantee the use of their presses at the usual rate. “Then the time of the olive-harvest came, and as there was a sudden and simultaneous demand for oil-presses he hired them out at any price he liked to ask. He made a lot of money, and so demonstrated that it is easy for philosophers to become rich, if they want to; but that is not their object in life.”40 Today, there are a wide variety of financial derivatives that businesses and investors use to reduce risk or make a profit. Despite the fact that options have been around a long time, it seems that no one, even philosophers, really knew how to price them until Black-Scholes. So that was a good thing.
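
  The formula itself is compact enough to write down. The sketch below implements the standard Black-Scholes price of a European call (an option to buy); the spot price, strike, interest rate, and volatility plugged in at the end are hypothetical.

```python
# The Black-Scholes price of a European call option. The inputs at the bottom
# are hypothetical, chosen only to show the calculation.

from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Call value for spot S, strike K, time to expiry T (in years),
    risk-free rate r, and volatility sigma (both annualized)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Example: the right to buy at 100 in six months, with the asset at 100,
# a 5 percent interest rate, and 20 percent volatility.
print(f"call value ≈ {black_scholes_call(100, 100, 0.5, 0.05, 0.2):.2f}")
```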

  All of these methods were built on the foundations of the EMH, so they treated investors as inert and rational “profit-maximizers,” modelled price fluctuations with the bell curve, and reduced the measurement of risk to simple parameters like volatility. There were some objections to this rather sterile vision of the economy. Volatility of assets seemed to be larger than expected from the EMH.41 Some psychologists even made the point that not all investors are rational, and they are often influenced by what other investors are doing. As Keynes had argued in the 1930s, events such as the Great Depression or the South Sea Bubble could be attributed to alternating waves of elation or depression on the part of investors. The homme moyen, it was rumoured, was subject to wild mood swings.

  On the whole, though, the EMH seemed to put economic theory on some kind of logical footing, and it enabled economists to price options and quantify risk in a way that previously hadn’t been possible. The “model” for predicting an asset’s correct value was just the price as set by the market, and it was always perfect. Even if all investors were not 100 percent rational, the new computer systems that had been set up to manage large portfolios had none of their psychological issues. Perhaps for the first time, market movements could be understood and risk contained. To many economists, the assumptions behind the orthodox theory seemed reasonable, at least until October 19, 1987.

  COMPLICATIONS

  According to random walk theory, market fluctuations are like a toss of the dice in a casino. On Black Monday, the homme moyen, Mr. Average, sat down at a craps table. The only shooter, he tossed two sixes, a loss. Then two more, and two more. Beginning to enjoy himself in a perverse kind of way—nothing so out of the normal had happened to him in his life—he tried several more rolls, each one a pair of sixes. People started to gather around and bet that his shooting streak would not continue. Surveillance cameras in the casino swivelled around to monitor the table. The sixes kept coming. Soon, the average man was a star, a shooting star flaming out in a steady stream of sixes and taking everyone with him. At the end of the evening, when security guards pried the dice out of his fingers, he had rolled thirty twelves in a row and was ready for more. The net worth of everyone in the room who bet against his streak had decreased, on average, by 29.2 percent. Someone in the house did the math and figured out the odds of that happening were one in about ten followed by forty-five zeros—math-speak for impossible.
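
  The odds quoted in the parable are easy to check: a pair of sixes comes up with probability 1/36, so thirty in a row has probability (1/36) raised to the power of thirty.

```python
# Checking the arithmetic behind the parable: the chance of thirty consecutive
# double sixes with fair dice.

p_double_six = 1 / 36
p_streak = p_double_six ** 30

print(f"odds against: about 1 in {1 / p_streak:.1e}")
# Roughly 5e46, i.e. on the order of "ten followed by forty-five zeros".
```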

  Black Monday, when the Dow Jones index fell by just that amount, was an equally unlikely event—and a huge wakeup call to the economics establishment. According to the EMH, which assumes that market events follow a normal distribution, it simply shouldn’t have happened. Some have theorized that it was triggered by automatic computer orders, which created a cascade of selling. Yet this didn’t explain why world markets that did not have automatic sell orders also fell sharply. Unlike the South Sea Bubble, which was at least partly rooted in fraud, Black Monday came out of nowhere and spread around the world like a contagious disease. It was as if the stock market suddenly just broke. But it has been followed by a string of similar crashes, including one in 1998 that reduced the value of East Asian stock markets by $2 trillion, and the collapse of the Internet bubble. Perhaps markets aren’t so efficient or rational after all.

  A strong critic of efficient-market theory has been Warren Buffett, who in 1988 observed that despite events such as Black Monday, most economists seemed set on defending the EMH at all costs: “Apparently, a reluctance to recant, and thereby to demystify the priesthood, is not limited to theologians.” Another critic was an early supporter (and Eugene Fama’s ex-supervisor), the mathematician Benoit Mandelbrot. He is best known for his work in fractals (a name he derived from the Latin fractus, for “broken”). Fractal geometry is a geometry of crooked lines that twist and weave in unpredictable ways. Mandelbrot turned the tools of fractal analysis to economic time series. Rather than being a random walk, with each change following a normal distribution, they turned out to have some intriguing features. They had the property, common to fractal systems, of being self-similar over different scales: a plot of the market movements had a similar appearance whether viewed over time periods of days or years (see boxed text below). They also had a kind of memory. A large change one day increased the chance of a large change the next, and long stretches where little happened would be followed by bursts of intense volatility. The markets were not like a calm sea, with a constant succession of “normal” waves, but like an unpredictable ocean with many violent storms lurking over the horizon.

  BORDERLINE NORMAL

  Lewis Fry Richardson, the inventor of numerical weather forecasting, once did an experiment in which he compared the lengths of borders between countries, as measured by each country. For example, Portugal believed its border with Spain was 1,214 kilometres long, but Spain thought it was only 987 kilometres. The problem was that the border was not a straight line, so the length would depend on the scale of the map used to measure it. A large scale includes all the zigs and zags, while a small scale misses these and gives a shorter result. If the measured length is plotted as a function of the scale, it turns out to follow a simple pattern known as a power law. In a 1967 paper, Benoit Mandelbrot showed how this could be used to define the border’s fractal dimension, which was a measure of its roughness.

  If the border were a one-dimensional straight line, its length on the map would vary linearly with the scale—a map with twice the scale would show everything twice as long, including the border. The area of a two-dimensional object such as a circle would vary with the scale to the power of two—double the scale and the area increases by a factor of four. If the length of a border increases with scale to the power D, then D plays the role of dimension. Mandelbrot showed that the British coastline has a fractal dimension D of about 1.25.
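
  A minimal sketch of the calculation, assuming the measurement is done the way Richardson did it—by stepping a ruler of length ε along the border—is shown below. The measured length then follows L ≈ F·ε^(1−D), so the slope of a straight-line fit of log L against log ε is 1 − D. The figures are invented, chosen to mimic a coastline with D near 1.25.

```python
# Estimating a fractal dimension from a Richardson-style plot. The ruler sizes
# and measured lengths below are invented so that length ~ ruler**(-0.25),
# i.e. a dimension D of about 1.25.

import math

rulers  = [100.0, 50.0, 25.0, 12.5, 6.25]            # ruler (step) length, km
lengths = [1000.0, 1189.2, 1414.2, 1681.8, 2000.0]   # measured border length, km

log_e = [math.log(e) for e in rulers]
log_l = [math.log(l) for l in lengths]

# least-squares slope of log(length) against log(ruler)
n = len(rulers)
mean_x, mean_y = sum(log_e) / n, sum(log_l) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(log_e, log_l))
         / sum((x - mean_x) ** 2 for x in log_e))

D = 1 - slope   # length ~ ruler**(1 - D), so the fitted slope is 1 - D
print(f"estimated fractal dimension D ≈ {D:.2f}")   # ≈ 1.25 for these figures
```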

  Like clouds, fractal systems reveal a similar amount of detail over a large range of scales. There is no unique “normal,” or correct, scale by which to measure them. Similarly, the fluctuations of an asset or market show fractal-like structure over different time scales, which makes analysis using orthodox techniques difficult.

  In the 1990s, researchers tested the orthodox theory by poring over scads of financial data from around the world. The theory assumes, for example, that a security has a certain volatility, and that it varies in a fixed way with other assets and the rest of the market. The volatility can in principle be found by plotting the asset’s price changes and calculating the standard deviation. In reality, though, the actual distributions have so-called fat tails, which means that extreme events—those in the tails of the distribution—occur much more frequently than they should. One consequence is that the volatility changes with time.42 An asset’s correlation with the rest of the market is also not well defined. It is always possible to plot two data sets against each other and draw a straight line through the resulting cloud of points, as Galton did for his height measurements. But economic data is often so noisy that the slope of the line says little about any underlying connection between sets.43
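
  A simple way to see the problem with a single, fixed volatility is to estimate it over successive windows of a return series. The sketch below does this for a synthetic series with a calm stretch followed by a turbulent one; real market data, as the studies cited above found, drifts in much the same way.

```python
# Rolling volatility of a synthetic return series: a calm period followed by a
# turbulent one. The point is only that a single standard deviation cannot
# describe such a series.

import random
import statistics

random.seed(1)
calm      = [random.gauss(0, 0.5) for _ in range(250)]   # low-volatility stretch
turbulent = [random.gauss(0, 2.0) for _ in range(250)]   # high-volatility stretch
returns = calm + turbulent

window = 50
for start in range(0, len(returns), window):
    chunk = returns[start:start + window]
    print(f"days {start:3d}-{start + len(chunk) - 1:3d}: "
          f"volatility ≈ {statistics.pstdev(chunk):.2f}")
```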

  Perhaps the biggest problem with the orthodox economic theory, though, is its use of the bell curve to describe variation in financial quantities. When scientists try to model a complex system, they begin by looking for symmetric, invariant principles: the circles and squares of classical geometry; Newton’s law of gravity. Einstein developed his theory of relativity by arguing that the laws of physics should remain invariant under a change of reference frame. The normal distribution has the same kind of properties. It is symmetric around the mean, and is invariant both to basic mathematical operations and to small changes to the sample. If the heights of men and women each follow a bell curve, then the mid-heights of couples will be another bell curve, as in figure 5.1 (see page 178). Adding a few more couples to the sample should not drastically change the average or standard deviation.
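
  This invariance is easy to check numerically. In the sketch below, men’s and women’s heights are drawn from two bell curves with round, illustrative means and spreads, couples are paired at random, and the mid-heights again come out as a bell curve centred between the two means.

```python
# Averaging two normally distributed quantities gives another normal
# distribution. The means and standard deviations are illustrative round
# numbers, and couples are paired at random.

import random
import statistics

random.seed(0)
men   = [random.gauss(178, 7) for _ in range(100_000)]   # heights in cm
women = [random.gauss(165, 6) for _ in range(100_000)]

mid_heights = [(m + w) / 2 for m, w in zip(men, women)]

print(f"mean    ≈ {statistics.mean(mid_heights):.1f} cm")    # ≈ (178 + 165) / 2
print(f"std dev ≈ {statistics.pstdev(mid_heights):.1f} cm")  # ≈ sqrt(7**2 + 6**2) / 2
```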

  The normal distribution means that volatility, risk, or variation of any kind can be expressed as a single number (the standard deviation), just as a circle can be described by its radius or a square by the length of one side. However, the empirical fact that asset volatility changes with time indicates that this is an oversimplification. The bell curve can be mathematically justified only if each event that contributes to fluctuation is independent and identically distributed. But fluctuations in the marketplace are caused by the decisions of individual investors, who are part of a social network. Investors are not independent or identical, and therefore they cannot be assumed to be “normal.” And as a result, neither can the market. Indeed, it turns out that much of the data of interest is better represented by a rather different distribution, known as a power-law distribution.

  POWER TO THE PEOPLE

  Suppose there existed a country in which the sizes of its cities were normally distributed, with an average size of half a million. Most people would live in a city that was close to the average. The chances of any city being either smaller or larger than average would be roughly the same, and none would be extremely large or extremely small. The expressions “small town” and “big city” would refer only to subtle variations. The pattern in real countries is quite different. One 1997 study tabulated the sizes of the 2,400 largest cities in the United States. The study’s authors found that the number of cities of a particular size varies inversely with the size squared (to the power of two). This so-called power-law pattern continues from the largest city, New York, right down to towns of only 10,000 residents.44 This distribution is highly asymmetrical. For each city with a certain population, there are, on average, four cities of half the size, but only a quarter the number of cities twice the size—so there are many more small towns than would be predicted from a normal distribution. The distribution is fat-tailed—the largest metropolis, New York, is far bigger than the mean. Even the home of Wall Street, it seems, is not normal. The same pattern was found for the 2,700 largest cities in the world.
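
  The pattern is easy to reproduce. In the sketch below, city sizes are drawn from a distribution whose density falls off as one over the size squared (synthetic samples, not census data); counting the samples in equal-width bins then shows the four-to-one ratio described above, with each doubling of city size cutting the count to roughly a quarter.

```python
# A synthetic check of the power-law pattern: sample "city sizes" with density
# proportional to 1 / size**2 above a minimum size, then count how many fall
# into equal-width bins at different sizes.

import random

random.seed(42)

# Inverse-transform sampling: if U is uniform on (0, 1], then s_min / U has
# density proportional to 1 / s**2 for s >= s_min.
s_min = 10_000
sizes = [s_min / (1 - random.random()) for _ in range(200_000)]

def count_in_bin(centre, width=5_000):
    """Number of sampled cities whose size lies within a bin around `centre`."""
    return sum(1 for s in sizes if centre - width / 2 <= s < centre + width / 2)

for centre in (40_000, 80_000, 160_000):
    print(f"cities of size near {centre:>7,}: {count_in_bin(centre)}")
# Each doubling of the size roughly quarters the count, as a 1/size**2 law implies.
```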

 
