by Kabir Sehgal
Common depictions in Paleolithic art are large animals, portly women, and triangular vulvas; they likely represent food and sex. The conspicuous absence of men in the depictions and the prevalence of women and their genitalia highlight the importance of reproduction to early humans. Surely not all women were portly, yet 95 percent were depicted thus, leading archaeologists to surmise that large women could be symbols of fertility, health, and an ability to rear children. In a study of more than fifty images of Paleolithic women, the waist-to-hip ratio was found to average 0.655, almost identical to the ratio that men in contemporary studies found most appealing.67 Paleolithic depictions of women were a symbolic ideal like today’s swimsuit models.
The archaeological record is filled with our ancestors’ attempts at externally manifesting thought—such as through decorative hand axes—as early as 1.7 million years ago. Paleolithic art revealed the state of the early human capacity for external manifestation. Humans were highly creative, and that would alter everything.68 Humans could see a bison, remember it, and then re-create it artistically at a later point. This ability to render thoughts in the real world, to create art, is unique to humans.
Civilization has been defined as the “storage of symbols outside the brain.”69 Whether it’s a hand axe or a coin, these items were first constructed with mental scaffolding. These mental representations were passed on as symbols and ideas to others, who could hone them and pass them down to future generations. Human society was becoming a decentralized neural network, and tools enabled humans to work cohesively as one unit—like an ant colony. The evolutionary force of exchange that began in the cell had gone social to create a “super-brain,” or collective mind of society. Hoffecker observes:
The mind is the super-brain: the integration of brains facilitated by language and other forms of symbolism created the mind. Although the super-brain is composed of a group of individual brains that are the product of evolutionary biology, it exhibits properties that are unknown or had not been evident in organic evolution.70
One of these properties was the human ability to externalize thoughts writ large, throughout society. By the time the Neolithic era began, the advanced faculties of humans would lead to new ways to cooperate and new tools to facilitate it. It had taken millions of years, but two factors necessary for the creation of money were in place: cooperation and symbolic thought.
Humans were aware that cooperation increased their chances of survival. They could now create symbolic and social tools to abet their biological goals. It was this capacity that turned the currency of energy into commodity money (C-C), which we eventually transformed into coins and paper (C-M-C). These symbols of value represented what they could obtain. The creativity that the brain unleashed would even lead to putting artistic symbols on monetary ones—the signs and insignias found on ancient and modern currency alike. Just as Paleolithic artists drew on caves, today we illustrate on money.
My expedition to the Galapagos helped me glimpse the biological origin of a relatively modern instrument. It was these islands that helped Charles Darwin recognize that natural selection was a common link among all species. It’s on these same islands that Rachel opened my eyes to the ubiquity of exchange in nature, and how all organisms rely on each other to survive.
On a tour of the Galapagos Science Center, a joint facility between the University of North Carolina at Chapel Hill and Ecuador’s Universidad San Francisco de Quito, on San Cristobal Island, I met more biologists who shared additional examples of exchange. The laboratory technical director and I sat on wooden stools in the marine biology lab while I peppered him with questions. He boiled it down simply for me: “If exchange helps organisms survive, then it must be part of the evolutionary algorithm.”
Humans became aware that cooperation was evolutionarily advantageous, so they created tools like language and hand axes to foster it. Humans have continuously created new tools to make it easier to exchange, to cooperate, and to increase our chances of survival. The changing forms of money over thousands of years are improvements to make exchange more convenient and efficient.
At first these tools helped us trade energy. Nourishment is the catalyst for symbiotic exchange for most organisms. As humans produced more than they could consume, not living hand-to-mouth, they traded food items like salt and barley. This commodity money was nonsymbolic, since it directly filled the ultimate need of survival. As our brains gained the capacity for symbolic thought, humans didn’t just exchange food, but also other tools, like hand axes, spears, or agricultural devices. These items were an abstraction from the original evolutionary need—to eat and survive. Yet they were symbols of what they supplanted.
Over time, the symbolic mind and the “super-brain” of humanity would see value in more than just these tools. They would see the value in creating a universal tool that would be accepted by all mankind. What started as commodity money became bullion, and eventually coins, paper, and digital currencies. This monetary tool would become a further abstraction from its evolutionary purpose. It would be limited only by our imagination, and the organ that makes that possible.
CHAPTER TWO
A Piece of My Mind
The psychology of money
Because to be truly rational, you can’t hold a conviction without significant empirical evidence.
—Alan Greenspan1
He weighs losses about twice as much as gains, which is normal.
—Daniel Kahneman2
Brain imaging gives us the hope of opening up the black box.
—Brian Knutson3
Neuroeconomists use brain scans to understand how the brain makes financial decisions.
In ancient Egypt, the brain was considered insignificant. To prepare for mummification, it was sucked out via the nose and discarded. Instead, the heart was preserved, because it was believed to be the seat of the soul and the carrier of consciousness.4
By now we know, of course, that the brain facilitates thought and catalyzes action. It’s where symbols like money are created and interpreted. Money is a manifestation of the mind. It makes sense, then, to explore how the brain processes money, interprets its value, and understands the idea of it. A telling method to determine how the brain registers the idea of money is to discern why and how we make financial decisions, which ultimately shape our lives.
Only recently have researchers used brain scans to understand how the mind processes money. Yet, despite not having this technology, economists have long made assumptions about human behavior, which indeed is directed by the brain. In Economics 101, most students learn about Homo economicus, or the economic man who makes self-interested decisions, which makes sense from an evolutionary standpoint, since acting in this manner should maximize one’s chances for survival. This supposition that humans are rational, logical actors is also the basis of many economic forecasts.
All too often, these forecasts are wrong. For example, the economists at the Federal Reserve, International Monetary Fund, and several Wall Street investment banks didn’t foresee the great financial crisis of 2008 or its macroeconomic consequences.
Even as late as September 2008, months after the collapse of Bear Stearns, the median forecast for fourth-quarter US economic growth, as reported by Blue Chip Economic Indicators, was 0.2 percent; the economy actually contracted at an annualized rate of 6.2 percent, the largest drop since the 1982 recession.5 Their forecasts failed in part because they were based on the questionable assumption that the market is self-correcting and composed of rational actors.
It raises this question: If economists better understood the mind, could they make more accurate forecasts? To answer this question, consider three economic thinkers: Alan Greenspan, Daniel Kahneman, and Brian Knutson. Greenspan spent many years at the helm of the Fed and long believed that markets were self-correcting and composed of rational actors. But in the wake of the 2008 financial crisis, this line of thought was discredited, and Greenspan has since modified his views. Kahneman is a behavioral economist who studies the psychology of decisions and has documented how humans make many irrational financial decisions. Knutson is a neuroeconomist who uses brain imaging to visualize how humans make financial decisions.
Neuroeconomics presents great potential for the future of the economics discipline as a whole. By improving our grasp of how the brain processes the idea of money, perhaps economists will make more accurate forecasts. But the development of this field is a slow process, and, in the short run, it will complement rather than supplant the assumptions and forecasts of more traditional economists. Nevertheless, it’s worthwhile to examine these three perspectives to see how our thinking about money continues to evolve.
Let’s Get Rational
Alan Greenspan wasn’t expecting this type of phone call. The date was March 16, 2008, and an official from the Fed was on the line. Greenspan was informed that the Fed would lend $29 billion for J. P. Morgan to purchase Bear Stearns. It was a jarring moment because he hadn’t seen the crisis coming. To be sure, some investors recognized the problems with subprime assets and foresaw the market decline—and even made lots of money by betting against the conventional wisdom. Yet Greenspan describes the crisis as “almost universally unanticipated” in his 2013 book, The Map and the Territory.6 He raises the questions, “What went wrong? Why was virtually every economist and policy maker of note so off about so large an issue?”7 He answers his questions, in part, by blaming economic forecasts.
But let’s consider another question: Why do we forecast in the first place? Even though Greenspan acknowledges that forecasting is fraught with uncertainty, he asserts that “our nature demands it.”8 This economist assumes that there is a biological reason for predictions.
He’s right. In one study that demonstrates our drive to forecast, researchers scanned the brains of people who were given squirts of juice and water in predictable and unpredictable patterns. There was heightened activity in the nucleus accumbens, a region deep within the brain involved in processing rewards, when subjects encountered a predictable pattern.9 In brief, an accurate forecast resulted in a jolt of pleasure. The evolutionary logic seems to be that when we identify predictable patterns, uncertainty is reduced, which increases our chances of survival.
Even though money has become an abstraction of its original evolutionary purpose, we humans register that it’s critical to our survival. “Money represents the means of maintaining life and sustaining us as organisms in our world,” states neuroscientist Antonio Damasio.10 We want to know our financial future because it can aid in our survival.
Forecasting has been integral to the modern financial services industry since the early twentieth century. It was a period of tumult, as the United States economy fluctuated between robust growth and financial panics. Businesspeople grew concerned with the instability because they had to plan for the future, minimize the chances of an idle workforce, and maximize capacity in times of strong demand. In his illuminating book Fortune Tellers, Walter Friedman explains:
Economic forecasting arose when it did because while the effort to introduce rationality—in the form of the scientific method—was emerging, the insatiable human longing for predictability persisted in the industrializing economy. Indeed, the early twentieth century saw a curious enlistment of science in a range of efforts to temper the uncertainty of the future.11
Early forecasters were influenced by meteorology and even used weather jargon when making economic predictions. They also had access to growing amounts of economic data, as government agencies tracked and published metrics on, for example, commodity prices.12 After the Panic of 1907, in which the stock market fell nearly 50 percent and many banks failed, retail and institutional investors were desperate for ways to mitigate uncertainty. They found solace in forecasters who leveraged and analyzed rich sets of data. As Friedman puts it:
Forecasts… were more than predictions of the future. They were assumptions about what the economy was and how the economy worked. By pointing out trends in data and creating charts and models, forecasters made capitalism seem natural, logical, and most of all, predictable.13
Over time, these forecasts were improved and refined, and many businesspeople came to rely on them for making important decisions—even treating these predictions as facts. However, unlike many hard sciences, economic forecasting is less reliable because it’s based on something that’s continuously changing: human behavior.
Nevertheless, the supposition that people are rational and able to weigh trade-offs between choices is the foundational model of human behavior on which modern economics is built. After all, economics is typically the study of how we make decisions to allocate limited resources to satisfy limitless desires. Conventional economics incorporates a widely studied model of human behavior: (1) Consider how each option can increase your happiness; (2) mull your constraints, such as time and money; and (3) choose what gives you the most happiness.
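In standard textbook notation (a generic sketch, not the book’s own formulation), those three steps amount to a constrained maximization problem:

\[
\max_{x} \; U(x) \quad \text{subject to} \quad p \cdot x \le m
\]

where U is a utility (“happiness”) function, x is a bundle of possible choices, p is the vector of prices or time costs, and m is the available budget. The symbols are illustrative placeholders; the point is simply that Homo economicus is assumed to pick the affordable option that yields the greatest happiness.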
According to many economists, rational people form a rational market. And rational markets lead to rational or “right” prices. The price of a stock reflects the best collective intelligence of the market. Whether a stock goes up or down, the market dictates the “right” price. This belief formed the basis of the efficient market hypothesis, advanced by economist Eugene Fama in his 1970 paper “Efficient Capital Markets: A Review of Theory and Empirical Work,” which contends that stock prices reflect all available public information.14 Because the price is a manifestation of the wisdom of crowds, one can’t outperform the market—because it’s already “right.” A more straightforward name for the hypothesis is rational markets theory. This view shaped much of twentieth-century economic thought, from university campuses to Wall Street. It influenced the deregulation of the financial services industry and the growth of financial instruments like index funds and derivatives.15 In The Myth of the Rational Market, Justin Fox writes:
The belief in the so-called rational market… was about more than just stocks. It held that as more stocks, bonds, options, futures, and other financial instruments were created and traded, they would inevitably bring more rationality to economic activity. Financial markets possessed a wisdom that individuals, companies, and governments did not… [From this] flowed the conviction that… prices were in some fundamental sense right.16
In other words, the market knows best. While many economists advanced parts of this theory, one stands out in packaging it for Wall Street. Harry Markowitz, a graduate student at the University of Chicago in the 1950s, trained his mathematical abilities on the financial markets. He read the works of Benjamin Graham, the father of value investing, and gleaned that investors diversify their holdings to diminish their risks but rarely consider the risk of the entire portfolio. He developed a formula for calculating the risk on a stock portfolio, taking into account the expected return of each stock, the uncertainty of that outcome (risk), and the degree to which the returns on the stocks would move in the same direction (correlation).17 He originated the modern portfolio theory that investors still use to optimize portfolios for risk and return: Portfolio managers could weigh their options and constraints, and choose the securities that would provide the greatest expected returns with the least risk. He and Fama were later awarded Nobel Prizes in Economics.
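To make the idea concrete (a generic modern rendering, not Markowitz’s original notation), a portfolio’s expected return and risk can be written as

\[
\mathbb{E}[r_p] = \sum_i w_i \, \mathbb{E}[r_i],
\qquad
\sigma_p^2 = \sum_i \sum_j w_i \, w_j \, \sigma_i \, \sigma_j \, \rho_{ij}
\]

where the w_i are the portfolio weights, E[r_i] is the expected return of each stock, σ_i is its volatility, and ρ_ij is the correlation between stocks i and j. Mean-variance optimization then selects the weights that offer the highest expected return for a given level of portfolio risk σ_p. This is a sketch of the framework, not a reproduction of Markowitz’s own derivation.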
Modern portfolio theory sounds reasonable. But surprisingly, even Markowitz found it hard to follow when he considered how to invest for his own retirement:
I should have computed the historical co-variances of the asset classes and drawn an efficient frontier. Instead, I visualized my grief if the stock market went way up and I wasn’t in it—or if it went way down and I was completely in it. My intention was to minimize my future regret. So I split my contributions 50/50 between bonds and equities.18
The inventor of modern portfolio theory didn’t use it. Even though he was aware of the supposedly rational thing to do, he chose a seemingly irrational one.19 That a Nobel laureate economist acted in such a way demonstrates the problem with assuming that humans are completely rational actors. In one Japanese study, researchers evaluated 446 residents of affluent Tokyo communities, people you might expect to act in a completely rational and self-interested manner, and concluded that only 31, or 7 percent, acted in line with the Homo economicus model of human behavior.20 “Economists’ models are just awful. They completely forget how important the human element is,” says Paul Wilmott, an expert and educator in quantitative finance.21
The economic forecasters who didn’t anticipate the 2008 crisis made the same mistake. In 2009, eight accomplished economists wrote a paper titled “The Financial Crisis and the Systemic Failure of Academic Economics,” in which they lambaste the use of mathematical models that fail to account for differences in the ways that people make decisions. They assert that many forecasters assumed that all market actors, including individuals and institutions, would behave in a rational manner. Wharton School professor Sidney Winter, who agrees with the authors, reasons that “rational behavior is not that dependable, or else people would not do self-destructive things like taking out mortgages they could not afford, a key factor in the financial crisis. Nor would completely rational executives at financial firms invest in securities backed by those risky mortgages.”22 The eight economists make the case that forecasters should account for variations in human psychology and behavior:
The major problem [with forecasting] is that despite its many refinements, this is not at all an approach based on, and confirmed by, empirical research. In fact, it stands in stark contrast to a broad set of regularities in human behavior discovered both in psychology and what is called behavioral and experimental economics… Economic modeling has to be compatible with insights from other branches of science on human behavior. It is highly problematic to insist on a specific view of humans in economic settings that is irreconcilable with evidence.23