The Road to Ruin

by James Rickards

  NO KEY UNLOCKS THE MYSTERIES OF CAPITAL MARKETS WITH MORE EASE THAN COMPLEXITY THEORY. That theory formally dates from the 1960s, but observation of complex dynamics is as old as humanity. An ancient astronomer seeing a supernova in the night sky was watching complexity in action. Never was complexity put to more urgent use than in Los Alamos, New Mexico, in the mid-1940s.

  Los Alamos

  The drive from downtown Santa Fe to Los Alamos National Laboratory is desolate and beautiful. The road winds through the desert at a slight incline due to the difference in elevation between the lab and the city. Today the divided highway is improved, a far cry from dangerous dirt roads traversed by the first scientists at Los Alamos who worked on the Manhattan Project in late 1942. The surrounding country is fractured into mesas and canyons, pink desert on top, dark corners below.

  What is odd about the drive is the scarcity of day-trippers, RVs, boat trailers, and typical fellow travelers on America’s roads. At a certain juncture, the road goes to only one place—to the laboratory itself—and there is no reason to be on that road unless you are cleared to enter one of the most secure places on the planet.

  Los Alamos National Laboratory, LANL, is one among seventeen specially designated national laboratories that perform the most advanced research and development in nanotechnology, materials, supercomputing, magnetics, renewable energy, and pure science. Los Alamos is one of only three national laboratories that specialize in nuclear weapons, along with Sandia, in Albuquerque, New Mexico, and Lawrence Livermore, in Livermore, California.

  The work of the national laboratories is complemented by a network of private laboratories, mostly associated with elite universities, that perform classified research under government contracts and operate under the same strict security protocols. These protocols include secure perimeters, restricted access, and top-secret security clearances for those with access to the most sensitive information. The best known of these private laboratories is the Applied Physics Laboratory at The Johns Hopkins University. The Jet Propulsion Laboratory near Los Angeles is a hybrid public-private model funded by NASA and operated by the California Institute of Technology.

  Together these private and public labs comprise a research archipelago stretching from coast to coast that keeps the United States ahead of the Russians, Chinese, and other rivals in the systems essential to defense, space, and national security. They give America its edge in world affairs.

  LANL is the crown jewel in this constellation. It is not the oldest, yet in the decades since its creation it has performed the most crucial tasks.

  Beginning in 1942, the lab was one of several Manhattan Project locales that developed and built the atomic bomb, which brought an early end to the Second World War and saved perhaps a million lives on both the Allied and Japanese sides.

  In the years following the first atomic bombs, Los Alamos was a crucial component in the ensuing arms race against Russian, then later Chinese, nuclear weapons programs.

  Nuclear bomb-making technology advanced rapidly from the relatively crude fission weapons of 1945 to thermonuclear weapons designed in the 1950s and 1960s. These newer bombs used a fission explosion to trigger a secondary fusion reaction, releasing far greater energy and achieving a new order of destruction.

  These advances in technology and destructive force were not ends in themselves. They were guided by new nuclear war fighting doctrines developed first at RAND Corporation, and later expanded at Harvard University and other elite schools. The doctrine, called Mutual Assured Destruction, or MAD, was the product of game theory in which participants based their actions on expected reactions of other participants who, in turn, acted based on expected reactions of the initial actor, and so on recursively until a behavioral equilibrium was reached.

  What RAND Corporation discovered was that winning a nuclear arms race was destabilizing and likely to result in nuclear war. If either the United States or Russia built enough nuclear weapons to destroy the other in a first strike, with no chance of a retaliatory second strike by the victim, the superior power’s incentive was to launch the first strike and win the war. Striking first seemed more attractive than waiting until an inferior adversary achieved decisive first-strike capability of its own.

  One solution was for each side to build more weapons. If an opponent attacked, a substantial number of the victim’s weapons would survive the first strike. This provided a sufficient second-strike capability, enough to destroy the attacker. Cold warriors referred to this model as “two scorpions in a bottle.” Either scorpion could deliver a fatal sting to the other. The victim would have just enough strength left to reflexively strike back at the attacker before dying. Both would die. The hope was that national leaders would act more rationally than scorpions and avoid striking in the first place. A rough equilibrium or “balance of terror” worked out in those early theoretical efforts prevails to this day.

  While the worst days of the nuclear arms race may be past, the threat of nuclear war has not disappeared. LANL remains at the center of nuclear weapons technology and testing.

  The laboratory is one of the most secure sites on earth. It is perched atop a mesa with surrounding five-hundred-foot cliffs enveloped by multiple security perimeters. The airspace is restricted, although there is a landing strip nearby for approved flights. Those arriving by vehicle must pass through military checkpoints and show the appropriate badges indicating security clearance or prescreened resident or worker status. An intruder attempting to arrive on foot would have to cross miles of desert, descend canyons around the mesa, climb the mesa walls, and penetrate a secure perimeter. Motion, noise, and infrared sensors and a heavily armed security force ensure that no uninvited visitors make it that far.

  On April 8, 2009, I was in a U.S. government jitney with physicists and national security experts invited to attend classified briefings on new initiatives at LANL. The laboratory and the government city around it were visible in the distance as we approached on the access road from Santa Fe. Desert heat gave them a glimmering look. The city stood out in its isolation. My companions and I were not visiting Los Alamos that day to study nuclear weapons. Instead, we sought solutions to the quandary of systemic financial collapse.

  Capital and Complexity

  The systems dynamics of an atomic chain reaction and a stock market meltdown are similar. Each exemplifies complexity in action. There is a straight path from Los Alamos to Wall Street. Few have walked that path, as evidenced by the continued dominance of obsolete equilibrium models in central bank policymaking and private risk management.

  Modern complexity theory began in 1960 with the work of Edward Lorenz, an MIT mathematician and meteorologist. Lorenz was modeling atmospheric flows and discovered that minute changes in initial conditions resulted in wildly different outcomes in flow. In a seminal 1963 paper, Lorenz described his results:

  Two states differing by imperceptible amounts may eventually evolve into two considerably different states. If, then, there is any error whatever in observing the present state—and in any real system such errors seem inevitable—an acceptable prediction of an instantaneous state in the distant future may well be impossible. . . . [P]rediction of the sufficiently distant future is impossible by any [known] method, unless the present conditions are known exactly. In view of the inevitable inaccuracy and incompleteness of . . . observations, precise very-long-range forecasting would seem to be nonexistent.

  Lorenz was writing about the atmosphere, yet his conclusions apply broadly to complex systems. Lorenz’s research is the source of the famous butterfly effect in which a hurricane is caused by a butterfly’s wings flapping thousands of miles away. The butterfly effect is good science. The difficulty is that not every butterfly causes a hurricane, and not every hurricane is caused by butterflies. Still, it’s useful to know that hurricanes emerge unexpectedly for unforeseen reasons. The same is true of market meltdowns.
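
  Lorenz’s result is easy to reproduce. What follows is a minimal sketch in Python of his 1963 convection model, integrated from two starting points that differ by one part in a million. The equations and parameter values (sigma = 10, rho = 28, beta = 8/3) are Lorenz’s own; the Euler integration, step size, and starting points are illustrative choices only.

```python
# Lorenz's 1963 convection model, run from two nearly identical starting
# points; the separation between the trajectories eventually explodes.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz system by one Euler step."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

a = (1.0, 1.0, 1.0)        # one initial condition
b = (1.000001, 1.0, 1.0)   # an imperceptibly different initial condition

for step in range(1, 5001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.01:5.1f}  separation = {gap:.6f}")
```

  The separation begins at one millionth and, within a few thousand steps, grows as large as the system itself. A measurement error in the sixth decimal place destroys the long-range forecast, exactly as Lorenz warned.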

  Merely because the precise origin of a particular hurricane is not forecast far in advance does not mean the likelihood of hurricanes hitting Miami can safely be ignored. Hurricanes in Miami are a near certainty; precautions are always in order. Likewise, the fact that particular market panics cannot be predicted to the day does not mean robust insights into the magnitude and frequency of panics cannot be derived. They can. Regulators who dismiss these insights ignore hurricane warnings while residing in soon-to-be-inundated low-lying bungalows.

  Complexity and the related field of chaos theory are two branches of the broader sciences of nonlinear mathematics and critical state systems analysis. Los Alamos has been on the cutting edge of these fields from its start. Significant breakthroughs in the 1970s were computational and built on earlier theoretical work from the 1940s and 1950s by iconic figures such as John von Neumann and Stanislaw Ulam.

  Theoretical constructs were harnessed to massive computing power to simulate phenomena such as hydrodynamic turbulence. Seeing a fast-flowing stream at sunset is an aesthetic experience; poets try to capture its noetic beauty. Still, an effort to write equations that precisely model the ebb and flow, twist and turn, of every molecule of H2O in the stream, not just at a point in time but dynamically through time, presents a formidable challenge. Describing a turbulent flow of water mathematically is one of the most daunting dynamical systems problems known. Los Alamos set out to solve precisely these types of challenges.

  The number of complex systems best comprehended using nonlinear and critical state models is vast. Climate, biology, solar flares, forest fires, traffic jams, and other natural and man-made behaviors can all be described using complexity theory. Lorenz’s observation that long-run forecasting in nonlinear systems is impossible given minute differences in initial conditions does not mean that no valuable information can be derived from the models.

  Applied complexity theory is interdisciplinary. Complex systems all have behaviors in common, yet have dynamics unique to each domain. A team out to crack the code in applied complexity theory would include physicists, mathematicians, computer modelers, and subject matter experts from the field being addressed. Biologists, climatologists, hydrologists, psychologists, and other domain experts work together with complexity theorists to model particular systems.

  Financial experts are new kids on the block when it comes to this kind of team science. My visit to Los Alamos was part of an effort to bridge the gap between complexity science and capital markets. LANL developed a mathematical methods toolkit that could be applied to various problem sets with modifications specific to each problem. These tools were devised as part of LANL’s core mission in nuclear weaponry. My role was to learn how to use these tools on Wall Street.

  One of the most important problems addressed at the lab is the readiness and capability of the U.S. nuclear arsenal. Nuclear weapons are designed and engineered to exacting specifications. Yet even the most careful engineering requires testing to identify flaws and suggest improvements.

  Conventional weapons frequently fail to detonate. Yet they can easily be replaced as needed, and there are few practical constraints on testing. But belief by an adversary that U.S. nuclear weapons are duds has far more serious consequences. If an enemy thought the U.S. nuclear arsenal was unreliable, it might be tempted to attempt a first strike. Such a belief is highly destabilizing. The United States and the world require a high degree of assurance that U.S. nuclear weapons will work as expected in order to maintain the balance of terror and deter nuclear war. The last time the United States tested a nuclear weapon by detonation was September 23, 1992, almost a quarter century ago. How does the United States test its nuclear weapons, especially new smaller designs, without detonations?

  The solution used by LANL is to detonate conventional explosives arrayed to simulate some of the implosion dynamics of nuclear weapons, while testing fission dynamics at subcritical levels. So-called hydronuclear tests, with yields below 0.1 tons of TNT equivalent, are used. Designs are also tested in computer simulations combining data from past explosions with new data from recent experimental and theoretical advances. These simulations are run on the fastest and most powerful supercomputers in the world. In effect, nuclear weapons are being detonated in supercomputers.

  Models used to conduct these tests are among the most complex ever devised. My mission was to see how this modeling and computing power could be applied to another kind of explosion—stock market crashes.

  A starting place for this work is to use Bayesian statistics, based on Bayes’ theorem, also referred to as causal inference. Bayes’ theorem is most useful when data are scarce or a problem is fuzzy and not amenable to conventional data-rich statistical methods involving regressions and covariance. Bayesian methods are used at the CIA and other intelligence agencies to solve problems when information is limited.

  After 9/11, the CIA was faced with the problem of predicting the next spectacular terrorist attack. There had been only one such attack on U.S. soil in history. Intelligence analysts did not have the luxury of waiting for ten attacks and thirty thousand dead to look for a robust statistical pattern. We went to war with the data we had.

  Bayes’ theorem allows you to devise a hypothesis (or several) as a starting place and then fill in the blanks as you go along. Bayes’ theorem was formerly called inverse probability because it works backward with new data to update a preexisting conclusion. Bayesian methods are not perfect, but they can allow an analyst to make strong inferences while conventional statisticians are still waiting for more data.

  Bayes’ theorem in a simplified modern mathematical form states:

  P(A|B) = [P(B|A) × P(A)] / P(B)

  where P(A) is the probability of observing event A, without regard to event B

  P(B) is the probability of observing event B, without regard to event A

  P(A|B) is the conditional probability of observing event A given that event B is true

  P(B|A) is the conditional probability of observing event B given that event A is true

  In plain English, the formula says that by updating an initial understanding with unbiased new information, you improve your understanding.

  In mathematical form, Bayes is used to forecast the likelihood of event A occurring. Event A could be anything from a critical state nuclear chain reaction to an interest rate increase by a central bank. The prior, P(A), is an initial estimate of the probability of the event occurring on its own, without regard to other events, based on a mix of data, history, intuition, and inference. New information enters as event B. The likelihood of that information appearing if the initial estimate is true, and if it is false, is computed separately. The left side, P(A|B), is then the updated probability of the initial estimate once the new information is taken into account. This process is repeated as often as new data arrive. Over time, the initial estimate gets stronger or weaker. Finally, a robust estimate can be used as a basis for decision making in the absence of better information.
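
  A toy version of that updating loop fits in a few lines of Python. The hypothesis (“a market panic is building”), the items of news, and every probability below are invented for illustration; only the arithmetic is Bayes’ theorem itself.

```python
# Sequential Bayesian updating: a prior estimate revised as news arrives.

def bayes_update(prior, p_data_if_true, p_data_if_false):
    """Return P(H | data) from the prior P(H) and the two likelihoods."""
    numerator = p_data_if_true * prior
    evidence = numerator + p_data_if_false * (1.0 - prior)  # P(data)
    return numerator / evidence

p_panic = 0.10  # initial estimate from data, history, intuition, inference

# Each observation: (description, P(seen | panic), P(seen | no panic)).
news = [
    ("credit spreads widen sharply", 0.70, 0.20),
    ("interbank lending contracts",  0.60, 0.25),
    ("equity volatility spikes",     0.80, 0.30),
]

for label, p_if_true, p_if_false in news:
    p_panic = bayes_update(p_panic, p_if_true, p_if_false)
    print(f"after '{label}': P(panic) = {p_panic:.2f}")
```

  Each item of news that is more likely in a panic than in calm markets pushes the estimate up; evidence cutting the other way would pull it down. These three invented updates lift the prior from 10 percent to roughly 70 percent.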

  The essence of Bayes’ theorem is that a chain of events has memory. A new event is not disconnected from prior events like a roll of the dice; it is conditional upon the prior event. Wall Street and central bank models rely on events’ being independent. Each coin toss or roll of the dice has a probability free from the prior toss or roll. That is how coin tosses work, yet it’s not how the real world works. A nuclear explosion is not unrelated to an earlier neutron release. A market meltdown is not unrelated to earlier excess credit creation. This is why central bank forecasting is abysmal, and why bankers never see panics in advance. Banks are using obsolete non-Bayesian models.
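
  The distinction shows up clearly in a short simulation. In the sketch below, one invented world draws crashes as independent tosses at a fixed rate, while the other lets the crash probability grow with accumulated credit excess, so each period is conditional on the path that preceded it. The numbers are arbitrary; the point is the structure, not the calibration.

```python
# Memoryless draws versus a path-dependent chain; illustrative only.
import random

random.seed(7)

# World 1: each period carries a fixed, independent 2% crash probability.
iid_crashes = sum(random.random() < 0.02 for _ in range(1000))

# World 2: excess credit builds in calm periods and raises the odds of a
# crash; a crash resets the excess. Here, events have memory.
excess, conditional_crashes = 0.0, 0
for _ in range(1000):
    p_crash = min(0.02 + 0.01 * excess, 0.90)  # conditional on history
    if random.random() < p_crash:
        conditional_crashes += 1
        excess = 0.0
    else:
        excess += random.random()  # credit creation this period

print("independent-world crashes:", iid_crashes)
print("path-dependent crashes:   ", conditional_crashes)
```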

  The Bayesian models we discussed at LANL were the most advanced anywhere. Still, they were not conceptually different from basic Bayes. The main advance was construction of a cascade of separate hypotheses, each with its own Bayes equation inside. The cascade was structured from top to bottom like a waterfall. Each hypothesis was contained in its own cell. The cellular array looked like a mosaic when presented graphically.

  The top tier of hypothesis cells included those first in a sequence, and usually those with the highest initial probabilities. Below were other cells, later in the sequence, with lower initial probabilities. In a simulation, the top-tier output trickled down as input to middle and lower tiers. Based on that input, lower tiers were updated with new probabilities. Some downstream paths were truncated as their updated odds fell. Other paths were highlighted as their updated odds rose.

  The mosaic might contain millions of cells. As cells were abandoned or highlighted, an image emerged from the mosaic not visible at the start. This emergence had a mystical quality to it, the way a hurricane emerges in mid-ocean on a sunny day for no apparent reason. Still, it was hard science. The supercomputer was detonating a nuclear weapon in digital space, yet the earth did not tremble.

  The key to a robust Bayesian model mosaic is proper conception of the upstream cells that start the chain reaction. If a top cell is wrongly conceived, remaining output is largely worthless. The art is to get a postulate right and let probable paths progress from there.
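
  A drastically reduced sketch conveys the flavor of such a cascade. Two top-tier cells are updated on new evidence, and their posteriors trickle down to set the prior of a single downstream cell, which is then highlighted or truncated as its updated odds rise or fall. The hypotheses, probabilities, and wiring below are invented, and the simple mixture used to pass output downstream stands in for the full conditional structure a real model would carry.

```python
# A two-tier cascade of Bayes cells; illustrative numbers throughout.

def bayes_update(prior, p_data_if_true, p_data_if_false):
    """One Bayes cell: update a prior on a single piece of evidence."""
    numerator = p_data_if_true * prior
    return numerator / (numerator + p_data_if_false * (1.0 - prior))

# Top tier: hypotheses first in sequence, with the highest initial odds.
p_rates_rise = bayes_update(0.60, 0.75, 0.30)       # evidence: hawkish minutes
p_credit_tightens = bayes_update(0.50, 0.65, 0.35)  # evidence: loan survey

# Lower tier: its prior is conditioned on top-tier output trickling down,
# here a crude mixture of a base rate and the joint top-tier posterior.
panic_prior = 0.05 + 0.25 * p_rates_rise * p_credit_tightens
p_panic = bayes_update(panic_prior, 0.80, 0.20)     # evidence: funding stress

# Downstream paths are truncated as updated odds fall, highlighted as they rise.
status = "highlighted" if p_panic > 0.20 else "truncated"
print(f"downstream cell: P(panic) = {p_panic:.2f} -> {status}")
```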

  As I sat there watching physicists demonstrate Bayesian technique for nuclear weapons testing, my mind turned to applications in capital markets. In fact, there are many.

  Complexity theory is a branch of physics. Bayes’ theorem is applied mathematics. Complexity and Bayes fit together hand in glove for solving capital markets problems. Capital markets are complex systems nonpareil. Market participants must forecast continually to optimize trading strategies and asset allocations. Forecasting capital markets is treacherous because they do not behave according to the Markovian stochastics widely used on Wall Street. A Markov chain has no memory; capital markets do. Capital markets produce surprises, no different from the butterfly effect identified by Lorenz in 1960. Since 2009 I have achieved superior results using complexity and Bayes to navigate the uncharted waters of systemic risk.

 
