However you look at the Internet—locally or globally, on short time scales or long—it looks exactly the same. The discovery of this fractal structure, around 1995, was an unwelcome surprise, because the standard traffic-control algorithms used by routers had been designed on the assumption that all properties of the network dynamics would be random; yet fractality is also broadly characteristic of biological networks. Without a master blueprint, the evolution of an Internet is subject to the same underlying statistical laws that govern biological evolution, and structure emerges spontaneously, without the need for a controlling entity. Moreover, the resultant network can come to life in strange and unpredictable ways, obeying new laws whose origin cannot be traced to any one part of the network. The network behaves as a collective, not just the sum of its parts, and to talk about causality is meaningless, because the behavior is distributed in space and in time.
Between 2:42 P.M. and 2:50 P.M. on May 6, 2010, the Dow Jones Industrial Average experienced a rapid decline and subsequent rebound of nearly six hundred points, an event of unprecedented magnitude and brevity. This disruption was part of a tumultuous event on that day now known as the Flash Crash, which affected numerous market indices and individual stocks, even causing some stocks to be priced at unbelievable levels (Accenture, for example, was at one point priced at $0.01).
With tick-by-tick data available for every trade, we can watch the crash unfold in slow motion, a film of a financial calamity. But the cause of the crash itself remains a mystery. The U.S. Securities and Exchange Commission report on the Flash Crash was able to identify the trigger event (a $4 billion sale by a mutual fund) but could provide no detailed understanding of why this event caused the crash. The conditions that precipitated the crash were already embedded in the market’s web of causation, a self-organized, rapidly evolving structure created by the interplay of high-frequency trading algorithms. The Flash Crash was the birth cry of a network coming to life, eerily reminiscent of Arthur C. Clarke’s science fiction story “Dial F for Frankenstein,” which begins: “At 0150 GMT on December 1, 1975, every telephone in the world started to ring.” I’m excited by the scientific challenge of understanding all this in detail, because . . . well, never mind. I guess I don’t really know.
The Name Game
Stuart Firestein
Neuroscientist, chair of the Department of Biological Sciences, Columbia University
Too often in science we operate under the principle that “to name it is to tame it,” or so we think. One of the easiest mistakes, even among working scientists, is to believe that labeling something has somehow or other added to an explanation or understanding of it. Worse than that, we use it all the time when we’re teaching, leading students to believe that a phenomenon named is a phenomenon known, and that to know the name is to know the phenomenon. It’s what I and others have called the nominal fallacy. In biology especially, we have labels for everything—molecules, anatomical parts, physiological functions, organisms, ideas, hypotheses. The nominal fallacy is the error of believing that the label carries explanatory information.
An instance of the nominal fallacy is most easily seen when the meaning or importance of a term or concept shrinks with knowledge. One example of this would be the word “instinct.” “Instinct” refers to a set of behaviors whose actual cause we don’t know, or simply don’t understand or have access to, and therefore we call them instinctual, inborn, innate. Often this is the end of the exploration of these behaviors. They are the “nature” part of the nature-nurture argument (a term itself likely a product of the nominal fallacy) and therefore can’t be broken down or reduced any further. But experience has shown that this is rarely the truth.
One of the great examples: It was for quite some time thought that when chickens hatched and immediately began pecking the ground for food, this behavior must have been instinctive. In the 1920s, a Chinese researcher named Zing-Yang Kuo made a remarkable set of observations on the developing chick egg that overturned this idea—and many similar ones. Using a technique of elegant simplicity, he found that rubbing heated Vaseline on a chicken egg caused it to become transparent enough so that he could see the embryo inside without disturbing it. In this way, he was able to make detailed observations of the chick’s development, from fertilization to hatching. One of his observations was that in order for the growing embryo to fit properly in the egg, the neck is bent over the chest in such a way that the head rests on the chest just where the developing heart is encased. As the heart begins beating, the head of the chicken is moved up and down in a manner that precisely mimics the movement that will be used later for pecking the ground. Thus the “innate” pecking behavior that the chicken appears to know miraculously upon birth has, in fact, been practiced for more than a week within the egg.
In medicine, as well, physicians often use technical terms that lead patients to believe that more is known about pathology than may actually be the case. In Parkinson’s patients, we note an altered gait and in general slower movements. Physicians call this bradykinesia, but it doesn’t really tell you any more than if they simply said, “They move slower.” Why do they move slower? What is the pathology and what is the mechanism for this slowed movement? These are the deeper questions hidden by the simple statement that “a cardinal symptom of Parkinson’s is bradykinesia,” satisfying though it may be to say the word to a patient’s family.
In science, the one critical issue is to be able to distinguish between what we know and what we don’t know. This is often difficult enough, as things that seem known sometimes become unknown—or at least more ambiguous. When is it time to quit doing an experiment because we now know something? When is it time to stop spending money and resources on a particular line of investigation because the facts are known? This line between the known and the unknown is already difficult enough to define, but the nominal fallacy often needlessly obscures it. Even words that, like “gravity,” seem well settled may lend more of an aura to an idea than it deserves. After all, the apparently well-settled ideas of Newtonian gravity were almost completely undone, after more than two centuries, by Einstein’s general relativity. And still, today, physicists do not have a clear understanding of what gravity is or where it comes from, even though its effects can be described quite accurately.
Another facet of the nominal fallacy is the danger of using common words and giving them a scientific meaning. This has the often disastrous effect of leading an unwary public down a path of misunderstanding. Words like “theory,” “law,” and “force” do not mean in common discourse what they mean to a scientist. “Success” in Darwinian evolution is not the same “success” as taught by Dale Carnegie. “Force” to a physicist has a meaning quite different from that used in political discourse. The worst of these, though, may be “theory” and “law,” which are nearly opposites in their usage: A theory is a strong idea in science but a vague one in common discourse, while a law carries far more muscle as a social concept than as a scientific one. These differences lead to sometimes serious misunderstandings between scientists and the public that supports their work.
Of course language is critical, and we must have names for things to talk about them. But the power of language to direct thought should never be taken lightly, and the dangers of the name game deserve our respect.
Living Is Fatal
Seth Lloyd
Quantum mechanical engineer, MIT; author, Programming the Universe
The ability to reason clearly in the face of uncertainty.
If everybody could learn to deal better with the unknown, this would improve not only their individual cognitive toolkit (to be placed in a slot right next to the ability to operate a remote control, perhaps) but the chances for humanity as a whole.
A well-developed scientific method for dealing with the unknown has existed for many years—the mathematical theory of probability. Probabilities are numbers whose values reflect how likely different events are to take place. People are bad at assessing probabilities. They are bad at it not just because they are bad at addition and multiplication. Rather, people are bad at probability on a deep, intuitive level: They overestimate the probability of rare but shocking events—a burglar breaking into your bedroom while you’re asleep, say. Conversely, they underestimate the probability of common but quiet and insidious events—the slow accretion of globules of fat on the walls of an artery, or another ton of carbon dioxide pumped into the atmosphere.
I can’t say I’m optimistic about the odds that people will learn to understand the science of odds. When it comes to understanding probability, people basically suck. Consider the following example, based on a true story and reported by Joel Cohen of Rockefeller University. A group of graduate students note that women have a significantly lower chance of admission than men to the graduate programs at a major university. The data are unambiguous: Women applicants are only two-thirds as likely as male applicants to be admitted. The graduate students file suit against the university, alleging discrimination on the basis of gender. When admissions data are examined on a department-by-department basis, however, a strange fact emerges: Within each department, women are more likely to be admitted than men. How can this possibly be?
The answer turns out to be simple, if counterintuitive. More women are applying to departments that have few positions. These departments admit only a small percentage of applicants, men or women. Men, by contrast, are applying to departments that have more positions and so admit a higher percentage of applicants. Within each department, women have a better chance of admission than men—it’s just that few women apply to the departments that are easy to get into.
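To make the arithmetic concrete, here is a minimal sketch with invented admission figures (not the actual data from Cohen’s example), showing how each department can favor women while the pooled numbers favor men:

```python
# Hypothetical admissions figures illustrating Simpson's paradox.
# The numbers are invented for illustration, not the actual data.
departments = {
    # department: (women applied, women admitted, men applied, men admitted)
    "selective":      (800,  80, 200,  10),  # women 10%, men 5%
    "less selective": (200, 180, 800, 600),  # women 90%, men 75%
}

women_applied = women_admitted = men_applied = men_admitted = 0
for name, (wa, wd, ma, md) in departments.items():
    print(f"{name}: women {wd / wa:.0%}, men {md / ma:.0%}")
    women_applied += wa
    women_admitted += wd
    men_applied += ma
    men_admitted += md

# Pooled across departments, the comparison reverses.
print(f"overall: women {women_admitted / women_applied:.0%}, "
      f"men {men_admitted / men_applied:.0%}")
# overall: women 26%, men 61%
```

Within each invented department women are admitted at the higher rate, yet because most women apply to the selective department, their overall admission rate comes out far lower.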
This counterintuitive result indicates that the admissions committees in the different departments are not discriminating against women. That doesn’t mean that bias is absent. The number of graduate fellowships available in a particular field is determined largely by the federal government, which chooses how to allocate research funds to different fields. It is not the university that is guilty of sexual discrimination but the society as a whole, which chose to devote more resources—and so more graduate fellowships—to the fields preferred by men.
Of course, some people are good at probability. A car-insurance company that can’t accurately determine the probabilities of accidents will go broke. In effect, when we pay premiums to insure ourselves against a rare event, we are buying into the insurance company’s estimate of just how likely that event is. Driving a car, however, is one of those common but dangerous processes where human beings habitually underestimate the odds of something bad happening. Accordingly, some are disinclined to obtain car insurance (perhaps not surprising, given that the considerable majority of people rate themselves as better-than-average drivers). When a state government requires its citizens to buy car insurance, it does so because it figures, rightly, that people are underestimating the odds of an accident.
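As a rough back-of-the-envelope sketch (with invented numbers, not any real insurer’s figures), an actuarially fair premium is simply the probability of a claim times the average payout, so a quoted premium implicitly reveals the insurer’s probability estimate:

```python
# Back-of-the-envelope sketch: an actuarially fair premium is roughly the
# probability of a claim times the average payout (all numbers invented).
p_claim = 0.03          # assumed annual probability of filing a claim
avg_payout = 12_000.0   # assumed average claim size, in dollars

fair_premium = p_claim * avg_payout
print(f"actuarially fair annual premium: ${fair_premium:,.0f}")  # $360

# Read the other way, a quoted premium implies the insurer's own estimate
# of how likely the insured-against event is.
quoted_premium = 540.0
implied_p = quoted_premium / avg_payout
print(f"implied claim probability: {implied_p:.1%}")  # 4.5%
```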
Let’s consider the debate over whether health insurance should be required by law. Living, like driving, is a common but dangerous process where people habitually underestimate risk, despite the fact that, with probability equal to 1, living is fatal.
Uncalculated Risk
Garrett Lisi
Independent theoretical physicist
We humans are terrible at dealing with probability. We are not merely bad at it but seem hardwired to be incompetent, in spite of the fact that we encounter innumerable circumstances every day which depend on accurate probabilistic calculations for our well-being. This incompetence is reflected in our language, in which the common words used to convey likelihood are “probably” and “usually”—vaguely implying a 50 to 100 percent chance. Going beyond the crude expression requires awkwardly geeky phrasing, such as “with 70 percent certainty,” likely only to raise the eyebrow of a casual listener bemused by the unexpected precision. This blind spot in our collective consciousness—the inability to deal with probability—may seem insignificant, but it has dire practical consequences. We are afraid of the wrong things, and we are making bad decisions.
Imagine the typical emotional reaction to seeing a spider: fear, ranging from minor trepidation to terror. But what is the likelihood of dying from a spider bite? Fewer than four people a year (on average) die from spider bites, establishing the expected risk of death by spider at lower than 1 in 100 million. This risk is so minuscule that it is actually counterproductive to worry about it: Millions of people die each year from stress-related illnesses. The startling implication is that the risk of being bitten and killed by a spider is less than the risk that being afraid of spiders will kill you because of the increased stress.
Our irrational fears and inclinations are costly. The typical reaction to seeing a sugary doughnut is the desire to consume it. But given the potential negative impact of that doughnut, including the increased risk of heart disease and reduction in overall health, our reaction should rationally be one of fear and revulsion. It may seem absurd to fear a doughnut—or, even more dangerous, a cigarette—but this reaction rationally reflects the potential negative effect on our lives.
We are especially ill-equipped to manage risk when dealing with small likelihoods of major events. This is evidenced by the success of lotteries and casinos at taking people’s money, but there are many other examples. The likelihood of being killed by terrorism is extremely low, yet we have instituted actions to counter terrorism that significantly reduce our quality of life. As a recent example, X-ray body scanners could increase the risk of cancer to a degree greater than the risk from terrorism—the same sort of counterproductive overreaction as the one to spiders. This does not imply that we should let spiders, or terrorists, crawl all over us—but the risks need to be managed rationally.
Socially, the act of expressing uncertainty is a display of weakness. But our lives are awash in uncertainty, and rational consideration of contingencies and likelihoods is the only sound basis for good decisions. As another example, a federal judge recently issued an injunction blocking stem-cell research funding. The probability that stem-cell research will quickly lead to life-saving medicine is low, but if it succeeds, the positive effects could be huge. If one considers the outcomes and approximates their probabilities, the conclusion is that, in probabilistic expectation, the judge’s decision destroyed the lives of thousands of people.
How do we make rational decisions based on contingencies? That judge didn’t actually cause thousands of people to die . . . or did he? If we follow the “many worlds” interpretation of quantum physics—the most direct interpretation of its mathematical description—then our universe is continually branching into all possible contingencies: There is a world in which stem-cell research saves millions of lives and another world in which people die because of the judge’s decision. Using the “frequentist” method of calculating probability, we have to add the probabilities of the worlds in which an event occurs to obtain the probability of that event.
Quantum mechanics dictates that the world we experience will happen according to this probability—the likelihood of the event. In this bizarre way, quantum mechanics reconciles the frequentist and “Bayesian” points of view, equating the frequency of an event over many possible worlds with its likelihood. An “expectation value,” such as the expected number of people killed by the judge’s decision, is the number of people killed in the various contingencies, weighted by their probabilities. This expected value is not necessarily likely to happen but is the weighted average of the expected outcomes—useful information when making decisions. In order to make good decisions about risk, we need to become better at these mental gymnastics, improve our language, and retrain our intuition.
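As a toy illustration of that weighted average (the contingencies and probabilities below are invented, not estimates of the actual stem-cell case), an expectation value is just each outcome multiplied by its probability, summed:

```python
# Toy expectation value: each outcome weighted by its probability.
# Contingencies and probabilities are invented purely for illustration.
contingencies = [
    # (probability of this branch, lives saved by the research in it)
    (0.05, 100_000),   # research succeeds spectacularly
    (0.20,  10_000),   # modest therapeutic success
    (0.75,       0),   # no therapeutic payoff
]

expected_lives_saved = sum(p * lives for p, lives in contingencies)
print(f"expected lives saved: {expected_lives_saved:,.0f}")  # 7,000
```

The expected value of 7,000 lives is not the outcome of any single branch; it is the probability-weighted average across all of them, which is exactly the quantity a rational decision should weigh.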
Perhaps the best arena for honing our skills and making precise probabilistic assessments would be a betting market—an open site for betting on the outcomes of many quantifiable and socially significant events. In making good bets, all the tools and shorthand abstractions of Bayesian inference come into play—translating directly to the ability to make good decisions. With these skills, the risks we face in everyday life would become clearer and we would develop more rational intuitive responses to uncalculated risks, based on collective rational assessment and social conditioning.
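For a flavor of the bookkeeping such a bettor would do, here is a minimal Bayesian update under invented numbers (the prior and likelihoods are assumptions chosen for illustration, not data from any real betting market):

```python
# Minimal Bayesian update, the kind of bookkeeping a careful bettor does.
# The prior and likelihoods below are assumptions chosen for illustration.
prior = 0.30              # initial belief that the event will occur
p_signal_if_true = 0.80   # chance of seeing a favorable signal if it will
p_signal_if_false = 0.20  # chance of the same signal if it will not

# Bayes' rule: P(event | signal) = P(signal | event) * P(event) / P(signal)
p_signal = prior * p_signal_if_true + (1 - prior) * p_signal_if_false
posterior = prior * p_signal_if_true / p_signal
print(f"updated probability after the signal: {posterior:.0%}")  # ~63%
```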
We might get over our excessive fear of spiders and develop a healthy aversion to doughnuts, cigarettes, television, and stressful full-time employment. We would become more aware of the low cost, compared to probable rewards, of research, including research into improving the quality and duration of human life. And more subtly, as we became more aware and apprehensive of ubiquitous vague language such as “probably” and “usually,” our standards of probabilistic description would improve.
Making good decisions requires concentrated mental effort, and if we overdo it, we run the risk of being counterproductive through increased stress and wasted time. So it’s best to balance, and play, and take healthy risks—as the greatest risk is that we’ll get to the end of our lives having never risked them on anything.
Truth Is a Model
Neil Gershenfeld
Physicist; director, MIT’s Center for Bits and Atoms; author, Fab: The Coming Revolution on Your Desktop—From Personal Computers to Personal Fabrication