
Present Shock: When Everything Happens Now


by Douglas Rushkoff


  This model of humanity as some sort of complex system proved irresistible for a new generation of social scientists and economists alike, who had been looking for a way to apply nonlinear math and computing brawn to the problem of seemingly chaotic human activity—and particularly economics. The Cold War had grown from a purely military game to an ideological one. From the ivory towers of America’s universities to the hearing rooms of the House Committee on Un-American Activities, capitalism was being defended and promoted as a way of life. During this period, the American intelligentsia as well as its funders—from the government to the Ford and Rockefeller foundations—were busily looking for scientific and mathematical proof of capitalism’s supremacy. They wanted to prove that a free market could spontaneously deliver the order and fairness for which centrally planned communist bureaucracies could only strive in vain.

  Adam Smith had already observed back in the 1700s how free markets did a great job of coordinating people’s actions. They seemed to self-regulate even though no one was aware of anyone else’s intentions. This “invisible hand” of the marketplace ensured that as individuals try to maximize their own gains in a free market, prices find the appropriate levels, supply meets demand, and products improve. But Smith only got so far as to argue that, through the market, competition channels individual ambitions toward socially beneficial results. The mechanisms through which that happened were still poorly understood and expressed.

  In 1945 famed Austrian economist Friedrich Hayek came up with a new way of talking about the “invisible hand” of the marketplace. Instead of seeing some magical, inexplicable method through which markets solved their problems fairly, Hayek saw markets working through a more systemic process. In his view, pricing is established collectively by thousands or even millions of individual actors, each acting on his own little bit of knowledge. The marketplace serves to resolve all these little pieces of data in a process Hayek called “catallaxy”—a self-organizing system of voluntary cooperation. Feedback and iteration. Price mechanisms were not human inventions so much as the result of collective activity on a lower order. Humans were part of a bigger system that could achieve “spontaneous order.”
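  Hayek’s feedback-and-iteration idea can be made concrete with a toy model. The numbers and names below are purely illustrative, not anything Hayek specified: a thousand buyers and sellers, each knowing only their own private valuation or cost, and a price that gets nudged up whenever demand exceeds supply and down otherwise.

```python
# Toy model (an illustration, not Hayek's own formalism): each agent acts
# only on one private piece of knowledge, a valuation or a cost, while the
# price iterates toward the level where supply meets demand.

def demand(price, valuations):
    """Buyers willing to buy: those whose private valuation exceeds the price."""
    return sum(1 for v in valuations if v > price)

def supply(price, costs):
    """Sellers willing to sell: those whose private cost is below the price."""
    return sum(1 for c in costs if c < price)

def discover_price(valuations, costs, steps=1000, rate=0.01):
    """Nudge the price up under excess demand, down under excess supply."""
    price = 50.0                        # arbitrary starting guess
    for _ in range(steps):
        gap = demand(price, valuations) - supply(price, costs)
        price += rate * gap             # feedback and iteration
    return price

buyers = [i * 0.1 for i in range(1000)]    # private valuations, 0 to 99.9
sellers = [i * 0.1 for i in range(1000)]   # private costs, 0 to 99.9
p = discover_price(buyers, sellers)        # settles near 50
```

  No agent in this sketch knows the market-clearing price, or even that one exists; the price simply stops moving once the willing buyers and willing sellers balance.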

  Between chaos theory, cybernetic systems, and computing brawn, economists finally had the tools to approach the marketplace as a working catallaxy—as a product of nature as complex and stable as human life itself. Economists at the Santa Fe Institute, in particular, churned out chaotic models for the collective economic activity to which humans contributed unconsciously as members of this bigger fractal reality. Founded in 1984, the institute encourages the application of complex adaptive systems to work in the physical, biological, computational, and social sciences. Their research projects span subjects from “the cost of money” and “war-time sexual violence” to “biological systems” and “the dynamics of civilizations,” applying the same sorts of computational models for nonlinear dynamical systems to all these different levels of reality.

  The key to any of this working was what became known as emergence—the spontaneous achievement of order and intelligence through the interaction of a myriad of freely acting individuals. Birds do it, bees do it . . . free market economies do it. And now we have the fractals with which to catch them all in the act.
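  Emergence of this kind is simple enough to sketch. In the toy example below (the grid size and rule are arbitrary choices of illustration), each cell on a ring follows one local rule, copy the majority of your immediate neighborhood, and large uniform blocks appear that no cell planned:

```python
import random

# Minimal sketch of emergence (an illustration, not a model from the book):
# cells on a ring each follow a single local rule, adopt the majority value
# of your three-cell neighborhood, and global order appears without any
# central planner.

random.seed(1)
N = 60
cells = [random.choice([0, 1]) for _ in range(N)]

def boundaries(cells):
    """Count adjacent pairs that disagree: a rough measure of disorder."""
    n = len(cells)
    return sum(cells[i] != cells[(i + 1) % n] for i in range(n))

def step(cells):
    """Synchronous majority-of-three update on a ring."""
    n = len(cells)
    return [1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)]

before = boundaries(cells)   # random start: roughly half the pairs disagree
for _ in range(50):
    cells = step(cells)
after = boundaries(cells)    # after iteration: far fewer, the ring has settled
```

  Disorder never increases under this rule; the interesting point is that the resulting blocks are a property of the whole ring, visible at no single cell.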

  Scientists from across the spectrum leaped on the systems bandwagon, applying what began as a mathematical proof of market equilibrium to, well, pretty much everything. Linguist Steven Pinker saw in Hayek and systems theory a new justification for his advancement of evolutionary psychology and his computational theory of mind:

  “Hayek was among the first to call attention to the emergence of large-scale order from individual choices. The phenomenon is ubiquitous, and not just in economic markets: What makes everyone suddenly drive SUVs, name their daughters Madison rather than Ethel or Linda, wear their baseball caps backwards, raise their pitch at the end of a sentence? The process is still poorly understood by social science, with its search for external causes of behavior, but is essential to bridging the largest chasm in intellectual life: that between individual psychology and collective culture.”13

  As above, so below. The effort to make things on different levels conform to the same rules had become explicit. Economics writer and The Wisdom of Crowds author James Surowiecki explained to libertarian Reason magazine that Hayek’s notion of catallaxy—amplified by Santa Fe’s computers—could well become a universal approach to understanding humans: “In the 20th Century, this insight helped change the way people thought about markets. In the next century, it should change the way people think about organizations, networks, and the social order more generally.”14

  And so scientists, economists, cultural theorists, and even military strategists15 end up adopting fractalnoia as the new approach to describing and predicting the behavior of both individual actors and the greater systems in which they live. Weather, plankton, anthills, cities, love, sex, profit, society, and culture are all subject to the same laws. Everything is everything, as Bateson’s theory of Mind finds itself realized in the computer-generated fractal.

  Where all these scientists and social programmers must tread carefully, however, is in their readiness to draw congruencies and equivalencies between things that may resemble one another in some ways but not others. Remember, the fractal is self-similar on all its levels, but not necessarily identical. The interactions between plankton in the coral reef may be very similar to those between members of Facebook—but they are not the same. Drawing too many connections between what’s happening on the molecular level and what’s happening on a social or intellectual level can be hazardous to everyone involved.

  Not even the economists who came up with these models are particularly immune from their often-imprecise predictions and recommendations. Many of the “quant” teams at hedge funds and the risk-management groups within brokerage houses use fractals to find technical patterns in stock market movements. They believe that, unlike traditional measurement and prediction, these nonlinear, systems approaches transcend the human inability to imagine the unthinkable. Even Black Swan author Nassim Taleb, who made a career of warning economists and investors against trying to see the future, believes in the power of fractals to predict the sudden shifts and wild outcomes of real markets. He dedicated the book to Benoit Mandelbrot.

  While fractal geometry can certainly help us find strong, repeating patterns within the market activity of the 1930s Depression, it did not predict the crash of 2007. Nor did the economists using fractals manage to protect their banks and brokerages from the systemic effects of bad mortgage packages, overleveraged European banks, or the impact of algorithmic trading on moment-to-moment volatility.

  More recently, in early 2010, the world’s leading forecaster applying fractals to markets, Robert Prechter, called for the market to enter a decline of such staggering proportions that it would dwarf anything that has happened in the past three hundred years.16 Prechter bases his methodology on the insights of a 1930s economist, Ralph Nelson Elliott, who isolated a number of the patterns that seem to recur in market price data. They didn’t always occur over the same timescale or amplitude, but they did have the same shape. And they combined to form larger and larger versions of themselves at higher levels, in a highly structured progression.
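  The self-similar construction can be sketched directly. The particular five-leg motif below is an arbitrary illustration, not Elliott’s actual wave counts: take a simple shape and replace each of its legs with a scaled copy of the whole motif, so the same pattern recurs at every level.

```python
# Sketch of self-similarity (an illustrative construction, not Elliott's
# published wave counts): a five-leg motif, three advances and two
# corrections, where each leg expands into a scaled copy of the whole motif.

MOTIF = [1, -0.5, 1, -0.5, 1]

def elliott_like(depth):
    """Return price increments; each leg at one level becomes the
    whole motif, scaled, at the level below."""
    if depth == 0:
        return [1.0]
    legs = []
    for leg in MOTIF:
        legs.extend(leg * x for x in elliott_like(depth - 1))
    return legs

path = elliott_like(3)   # 5 ** 3 = 125 increments
# Summing the fine-grained path in blocks of 25 recovers the motif
# shape again, scaled up: the same pattern at a higher level.
blocks = [sum(path[i * 25:(i + 1) * 25]) for i in range(5)]
```

  Whether you read the 125 increments one at a time or in blocks of 25, the same shape appears, which is all that “self-similar on every level” means here.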

  Prechter calls this progression the Elliott Wave. We may as well call it fractalnoia. For not only is the pattern supposed to repeat on different scales and in progressively larger time frames; it’s also supposed to be repeating horizontally across different industries and aspects of human activity. Prechter titled his report on fractals and the stock market “The Human Social Experience Forms a Fractal.” And, at least as of this writing, the biggest market crash since the South Sea Bubble of 1720 has yet to occur.

  Fractalnoia of this sort is less dangerous for the individually incorrect predictions it suggests to its practitioners and their customers than for the reductionist outlook on humanity that it requires. Machines, math equations, molecules, bacteria, and humans are often similar but hardly equivalent. Yet the overriding urge to connect everything to everything pushes those who should know better to make such leaps of logic. To ignore the special peculiarities, idiosyncrasies, and paradoxes of activity occurring on the human and cultural level is to ignore one’s own experience of the moment in order to connect with a computer simulation.

  TO BE OR TO BE

  Ironically perhaps, the way past the problem of human unpredictability may be to work with it and through it rather than to ignore it. Fractals may be generated by computers, but the patterns within them are best recognized and applied by people. In other words, pattern recognition may be less a science or mathematic than it is a liberal art. The arts may be touchy-feely and intuitive, but in eras of rapid change such as our own, they often bring more discipline to the table than do the sciences.

  When I was exposed to computers for the very first time, back in the mid-1970s, I remember feeling like quite the artist among geeks. Everyone else at my high school’s computer lab was already a devoted mathematician. I was a theater enthusiast. The programs they wrote on our IBM terminals had to do with proving the Pythagorean Theorem or utilizing Planck’s Constant. I was more interested in making pictures out of letters, or getting the whole machine to seize up with what I considered to be “performance art” programs such as:

  10: Print “Eat Me”

  20: Escape = “off”

  30: Go to 10

  Understandably, I was not appreciated, and so I abandoned technology for the arts.

  After I graduated from college with a theater degree in the mid-1980s, I learned that the most artsy and psychedelic folks with whom I went to school had gone off to Silicon Valley to become programmers. It just did not compute. So I flew out to California to find out what had compelled these Grateful Dead–listening hashish eaters to get so straitlaced and serious. Of course, it turns out they weren’t straight at all. They were building cyberspace—a task that required people who were not afraid to see their wildest dreams manifest before their own eyes. They programmed all day (and scraped the buds off peyote cactuses at night).

  I interviewed dozens of executives at companies such as Northrop, Intel, and Sun and learned that they were depending upon these legions of psychedelics-using “long hairs” to develop the interfaces through which humans would one day be interacting with machines and one another. “They already know what it’s like to hallucinate,” one executive explained. “We just have to warn them before the urine tests, is all.” Silicon Valley needed artistic visionaries to bring the code to life, and so they tapped the fringes of arts culture for them.

  Now that more than twenty years have passed, it is the artists who are trying to keep up with the technologists. At digital arts organizations such as Rhizome and Eyebeam, as well as digital arts schools from NYU’s Interactive Telecommunications Program to Parsons’s Design and Technology program, it is the technologists who are leading the way. Self-taught, long-haired programmers create new routines and interfaces, while well-dressed and well-read MFAs struggle to put this work into historical context. The artists are the geeks, and the programmers are the performers. The programmers conceive and create with abandon. The artists try to imagine what humans may want to do with these creations, as well as what these creations may want to do with us.

  Indeed, the more technologized and interconnected we become, the more dependent we are on the artist for orientation and pattern recognition. While I strongly advocate the teaching of computer programming to kids in grade school, I am just as much a believer in teaching kids how to think critically about the programmed environments in which they will be spending so much of their time. The former is engineering; the latter is liberal arts. The engineers write and launch the equations; the liberal artists must judge their usefulness, recognize the patterns they create, and—oh so very carefully—generalize from there. For the artist—the human, if you will—this care comes less from the accumulation of more and more specific data than the fine-tuning of the perceptual apparatus. In a fractal, it’s not how much you see, but how well you see it.

  In evaluating the intelligence failures surrounding the US overthrow of Iraq, University of Pennsylvania psychologist and decision theorist Philip Tetlock concluded that people with more wide-ranging and less specific interests are better at predicting the future than experts in the very fields being studied.17 Using Isaiah Berlin’s prototypes of the fox and the hedgehog, Tetlock studied hundreds of different decisions and determined that foxes—people with wider and less specific interests—saw patterns more accurately than hedgehogs, who used their knowledge of history and their prior experiences to predict future outcomes.

  Hedgehogs, Tetlock learned, were too eager for “closure” to be able to sit with the uncertainty of not knowing. They attempted to use their fixed ideas to determine where things were going. Foxes had less at stake in conventional wisdom, so they were much more likely to use probability in making their assessments. While hedgehogs were locked into their own disciplines, foxes could think laterally, applying the insights from one field to another. Knowledge had handicapped the hedgehogs, while the wide-ranging curiosity of the foxes gave them the edge.

  Now, on the surface this sounds like the failing of the fractalnoids—those economists who want to equate the properties of plankton with the personalities of Parisians. But it’s fundamentally different in that it’s human beings applying patterns intuitively to different systems, not the frantic confusion of apples and oranges or, more likely, apples with planets. Yes, it is still fraught with peril, but it’s also a rich competency to develop in an era of present shock.

  For instance, I still don’t know whether to be delighted or horrified by the student who told me he “got the gist” of Hamlet by skimming it in a couple of minutes and reading a paragraph of critique on Wikipedia. The student already saw the world in fractal terms and assumed that being able to fully grasp one moment of Hamlet would mean he had successfully “grokked” the whole. For him, the obvious moment upon which to seize was that “superfamous one, ‘to be or not to be.’” The student went on to explain how everything about the play extended from that one point: Hamlet can’t decide whether to take action or whether to kill himself. But he’s not even really contemplating suicide at all—he’s only pretending to do this because he’s being watched by Polonius and the man who murdered his father. “It’s all action and inaction, plays within plays, and the way performance and identity or thought and deed are confused,” the student explained to me. “The readiness is all. The play’s the thing. I get it. Trust me.”

  Trust him? It’s hard not to—particularly when I take into account that he’s living in a world where he’s expected to accomplish more tasks each minute than I did in an hour or a whole day at his age. Functioning in such a world does require getting the “gist” of things and moving on, recognizing patterns, and then inferring the rest. It’s the sort of intellectual fakery my college friend Walter Kirn describes in the novelized account of his education, Lost in the Meritocracy, in which the protagonist succeeds brilliantly at Princeton by faking it. He explains with a mix of pride and self-contempt:

  I came to suspect that certain professors were on to us, and I wondered if they, too, were actors. In classroom discussions, and even when grading essays, they seemed to favor us over the hard workers, whose patient, sedimentary study habits were ill adapted, I concluded, to the new world of antic postmodernism that I had mastered almost without effort.18

  What may have begun as fakery, though, became a skill in itself—a way of recognizing patterns and then surfing through a conversation in an increasingly convoluted, overlapping, and interdisciplinary academic world. Kirn graduated summa cum laude with a scholarship to Oxford and became a respected author and critic. This wide-angle approach may not be the only skill one needs to meet intellectual challenges, but it’s as crucial to understanding and performance as is focused study. The truly accomplished musician can do more than play his repertoire; he can also pick up a “fake book” and instantly play one of thousands of tunes by scanning a melody and chord chart. He knows enough of the underlying patterns to fill in the rest.

  This more generalist and intuitive perspective—the big view, if you will—has been studied extensively by University of Michigan psychologist Richard Nisbett. Like Tetlock, Nisbett found that inductive logic was being undervalued in decision making and that people’s reasoning would be vastly improved by weighing the odds and broadening focus, rather than just relying on highly defined expertise and prior experiences.

  After noticing how differently his Chinese graduate students approached certain kinds of problems, Nisbett decided to compare the ways East Asians and Westerners perceive and think. He showed students pictures of animals in their natural environments and tracked their eye movements as they scanned the images. While his American students invariably looked at the animal first, and only then took time to look at the background, his Asian students looked at the forest or field first. When they did finally look at the tiger or elephant, they spent much less time on it than their American counterparts. When asked about the pictures later, the American students had much better recall of the specific objects they had seen, but the Chinese could recount the background in great detail. Nisbett could even fool his Asian students into believing they hadn’t seen an image before, simply by changing the background. He did the same thing to the Americans: when he completely changed the environment but left the subject animal alone, they had no idea the picture had changed. They were fixated on the focal point and blind to the greater environment.

 
