General works on memory: In psychology, Schacter (2001) is a review of memory biases, with links to hindsight effects. In neurobiology, see Rose (2003) and Squire and Kandel (2000). A general textbook on memory (in empirical psychology) is Baddeley (1997).
Intellectual colonies and social life: See the account in Collins (1998) of the “lineages” of philosophers (although I don’t think he was aware enough of the Casanova problem to take into account the bias making the works of solo philosophers less likely to survive). For an illustration of the aggressiveness of groups, see Uglow (2003).
Hyman Minsky’s work: Minsky (1982).
Asymmetry: Prospect theory (Kahneman and Tversky [1979] and Tversky and Kahneman [1992]) accounts for the asymmetry between bad and good random events, but it also shows that the negative domain is convex while the positive domain is concave, meaning that a loss of 100 is less painful than 100 losses of 1 but that a gain of 100 is also far less pleasurable than 100 times a gain of 1.
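To see the asymmetry numerically, here is a minimal worked example, assuming the value function and the median parameter estimates reported in Tversky and Kahneman (1992) (exponents of 0.88, loss-aversion coefficient 2.25); the specific numbers are mine, purely for illustration:

```latex
% Prospect-theory value function, with the Tversky-Kahneman (1992)
% median estimates: exponent 0.88, loss-aversion coefficient 2.25.
\[
v(x) =
\begin{cases}
x^{0.88}, & x \ge 0 \quad \text{(concave: diminishing pleasure of gains)}\\[2pt]
-2.25\,(-x)^{0.88}, & x < 0 \quad \text{(convex: diminishing pain of losses)}
\end{cases}
\]
% One gain of 100 is less pleasurable than a hundred gains of 1:
\[
v(100) = 100^{0.88} \approx 57.5 \;<\; 100\,v(1) = 100.
\]
% One loss of 100 is less painful than a hundred losses of 1:
\[
v(-100) = -2.25 \cdot 100^{0.88} \approx -129 \;>\; 100\,v(-1) = -225.
\]
```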
Neural correlates of the asymmetry: See Davidson’s work in Goleman (2003), Lane et al. (1997), and Gehring and Willoughby (2002). Csikszentmihalyi (1993, 1998) further explains the attractiveness of steady payoffs with his theory of “flow.”
Deferred rewards and their neural correlates: McClure et al. (2004) show the brain activation in the cortex upon making a decision to defer, providing insight on the limbic impulse behind immediacy and the cortical activity in delaying. See also Loewenstein et al. (1992), Elster (1998), Berridge (2005). For the neurology of preferences in capuchin monkeys, Chen et al. (2005).
Bleed or blowup: Gladwell (2002) and Taleb (2004c). Why bleed is painful can be explained by dull stress; Sapolsky et al. (2003) and Sapolsky (1998). For how companies like steady returns, Degeorge and Zeckhauser (1999). Poetics of hope: Mihailescu (2006).
Discontinuities and jumps: Classified by René Thom as constituting seven classes; Thom (1980).
Evolution and small probabilities: Consider also the naïve evolutionary thinking positing the “optimality” of selection. The founder of sociobiology, the great E. O. Wilson, does not agree with such optimality when it comes to rare events. In Wilson (2002), he writes:
The human brain evidently evolved to commit itself emotionally only to a small piece of geography, a limited band of kinsmen, and two or three generations into the future. To look neither far ahead nor far afield is elemental in a Darwinian sense. We are innately inclined to ignore any distant possibility not yet requiring examination. It is, people say, just good common sense. Why do they think in this shortsighted way?
The reason is simple: it is a hardwired part of our Paleolithic heritage. For hundreds of millennia, those who worked for short-term gain within a small circle of relatives and friends lived longer and left more offspring—even when their collective striving caused their chiefdoms and empires to crumble around them. The long view that might have saved their distant descendants required a vision and extended altruism instinctively difficult to marshal.
See also Miller (2000): “Evolution has no foresight. It lacks the long-term vision of drug company management. A species can’t raise venture capital to pay its bills while its research team … This makes it hard to explain innovations.”
Note that neither author considered my age argument.
CHAPTER 8
Silent evidence bears the name wrong reference class in the nasty field of philosophy of probability, anthropic bias in physics, and survivorship bias in statistics (economists present the interesting attribute of having rediscovered it a few times while being severely fooled by it).
Confirmation: Bacon says in Of Truth, “No pleasure is comparable to the standing upon the vantage ground of truth (a hill not to be commanded and where the air is always clear and serene), and to see the errors, and wanderings, and mists, and tempests, in the vale below.” This easily shows how great intentions can lead to the confirmation fallacy.
Bacon did not understand the empiricists: He was looking for the golden mean. This time from the Novum Organum:
There are three sources of error and three species of false philosophy; the sophistic, the empiric and the superstitious. … Aristotle affords the most eminent instance of the first; for he corrupted natural philosophy by logic—thus he formed the world of categories. … Nor is much stress to be laid on his frequent recourse to experiment in his books on animals, his problems and other treatises, for he had already decided, without having properly consulted experience as the basis of his decisions and axioms. … The empiric school produces dogmas of a more deformed and monstrous nature than the sophistic or theoretic school; not being founded in the light of common notions (which however poor and superstitious, is yet in a manner universal and of general tendency), but in the confined obscurity of a few experiments.
Bacon’s misconception may be the reason it took us a while to understand that the empirics treated history (and experiments) as mere and vague “guidance,” i.e., epilogy.
Publishing: Allen (2005), Klebanoff (2002), Epstein (2001), de Bellaigue (2004), and Blake (1999). For a funny list of rejections, see Bernard (2002) and White (1982). Michael Korda’s memoir, Korda (2000), adds some color to the business. These books are anecdotal, but we will see later that books follow steep scale-invariant structures with the implication of a severe role for randomness.
Anthropic bias: See the wonderful and comprehensive discussion in Bostrom (2002). In physics, see Barrow and Tipler (1986) and Rees (2004). Sornette (2004) has Gott’s derivation of survival as a power law. In finance, Sullivan et al. (1999) discuss survivorship bias. See also Taleb (2004a). Studies that ignore the bias and state inappropriate conclusions: Stanley and Danko (1996) and the more foolish Stanley (2000).
Manuscripts and the Phoenicians: For survival and science, see Cisne (2005). Note that the article takes into account physical survival (like fossils), not cultural survival, which implies a selection bias. Courtesy of Peter Bevelin.
Stigler’s law of eponymy: Stigler (2002).
French book statistics: Lire, April 2005.
Why dispersion matters: More technically, the distribution of the extremum (i.e., the maximum or minimum) of a random variable depends more on the variance of the process than on its mean. Someone whose weight tends to fluctuate a lot is more likely to show you a picture of himself very thin than someone else whose weight is on average lower but remains constant. The mean (read skills) sometimes plays a very, very small role.
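A minimal simulation of the point, in Python; the distributions and parameters are my own assumptions, chosen only for illustration:

```python
import random

random.seed(42)

def thinnest_moment(mean, sd, n=1000):
    """Lightest weight observed across n independent Gaussian fluctuations."""
    return min(random.gauss(mean, sd) for _ in range(n))

# Person A: lower average weight, nearly constant.
# Person B: higher average weight, large fluctuations.
a = thinnest_moment(mean=60, sd=1)
b = thinnest_moment(mean=70, sd=10)

print(f"A's thinnest moment: {a:.1f} kg")  # around 57 kg
print(f"B's thinnest moment: {b:.1f} kg")  # around 38 kg
# Despite the higher mean, B has the more flattering picture:
# the extremum is governed by the variance, not the mean.
```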
Fossil record: I thank the reader Frederick Colbourne for his comments on this subject. The literature calls it the “pull of the recent,” but has difficulty estimating the effects, owing to disagreements. See Jablonski et al. (2003).
Undiscovered public knowledge: Here is another manifestation of silent evidence: you can actually do lab work sitting in an armchair, just by linking bits and pieces of research by people who labor apart from one another and miss the connections. Using bibliographic analysis, it is possible to find links between published information that had not previously been known to researchers. I “discovered” the vindication of the armchair in Fuller (2005). For other interesting discoveries, see Spasser (1997) and Swanson (1986a, 1986b, 1987).
Crime: The definition of economic “crime” is something that comes in hindsight. Regulations, once enacted, do not run retrospectively, so many activities causing excess are never sanctioned (e.g., bribery).
Bastiat: See Bastiat (1862–1864).
Casanova: I thank the reader Milo Jones for pointing out to me the exact number of volumes. See Masters (1969).
Reference point problem: Taking into account background information requires a form of thinking in conditional terms that, oddly, many scientists (especially the better ones) are incapable of handling. The difference between the two odds is called, simply, conditional probability. We are computing the probability of surviving conditional on our being in the sample itself. Simply put, you cannot compute probabilities if your survival is part of the condition of the realization of the process.
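A toy simulation of the Casanova point, under assumptions of my own (a fifty-fifty survival game played over ten rounds): the unconditional odds can be computed in advance, but anyone available to be polled afterward has, by construction, survived.

```python
import random

random.seed(7)

P_ROUND = 0.5        # assumed per-round survival probability
ROUNDS = 10
POPULATION = 100_000

# Each member of the population either survives all rounds or does not.
survivors = sum(
    all(random.random() < P_ROUND for _ in range(ROUNDS))
    for _ in range(POPULATION)
)

print(f"Unconditional survival probability: {P_ROUND ** ROUNDS:.5f}")
print(f"Observed frequency of survivors:    {survivors / POPULATION:.5f}")
# Among those still around to be asked, the observed survival rate is
# trivially 100%: survival is part of the condition of observation,
# so a survivor's own sample says nothing about the true odds.
```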
Plagues: See McNeill (1976).
CHAPTER 9
Intelligence and Nobel: Simonton (1999). IQ scores, if they correlate with subsequent success at all, do so only very weakly.
“Uncertainty”: Knight (1921). My definition of such risk (Taleb, 2007c) is that it is a normative situation, where we can be certain about probabilities, i.e., no metaprobabilities. Whereas, if randomness and risk result from epistemic opacity, the difficulty in seeing causes, then necessarily the distinction is bunk. Any reader of Cicero would recognize it as his probability; see epistemic opacity in his De Divinatione, Liber primus, LVI, 127:
Qui enim teneat causas rerum futurarum, idem necesse est omnia teneat quae futura sint. Quod cum nemo facere nisi deus possit, relinquendum est homini, ut signis quibusdam consequentia declarantibus futura praesentiat.
“He who knows the causes will understand the future, except that, given that nobody outside God possesses such faculty …”
Philosophy and epistemology of probability: Laplace, Treatise; Keynes (1920), de Finetti (1931), Kyburg (1983), Levi (1970), Ayer, Hacking (1990, 2001), Gillies (2000), von Mises (1928), von Plato (1994), Carnap (1950), Cohen (1989), Popper (1971), Eatwell et al. (1987), and Gigerenzer et al. (1989).
History of statistical knowledge and methods: I found no intelligent work in the history of statistics, i.e., one that does not fall prey to the ludic fallacy or Gaussianism. For a conventional account, see Bernstein (1996) and David (1962).
General books on probability and information theory: Cover and Thomas (1991); less technical but excellent, von Baeyer (2003). For a probabilistic view of information theory: the posthumous Jaynes (2003) is the only mathematical book other than de Finetti’s work that I can recommend to the general reader, owing to his Bayesian approach and his allergy to the formalism of the idiot savant.
Poker: It escapes the ludic fallacy; see Taleb (2006a).
Plato’s normative approach to left and right hands: See McManus (2002).
Nietzsche’s bildungsphilister: See van Tongeren (2002) and Hicks and Rosenberg (2003). Note that because of the confirmation bias academics will tell you that intellectuals “lack rigor,” and will bring examples of those who do, not those who don’t.
Economics books that deal with uncertainty: Carter et al. (1962), Shackle (1961, 1973), Hayek (1994). Hirshleifer and Riley (1992) fits uncertainty into neoclassical economics.
Incomputability: For earthquakes, see Freedman and Stark (2003) (courtesy of Gur Huberman).
Academia and philistinism: There is a round-trip fallacy: if academia means rigor (which I doubt, since what I saw called “peer reviewing” is too often a masquerade), nonacademic does not imply nonrigorous. Why do I doubt the “rigor”? By the confirmation bias, they show you their contributions; yet in spite of the high number of laboring academics, a relatively minute fraction of our results comes from them. A disproportionately high number of contributions come from freelance researchers and those dissingly called amateurs: Darwin, Freud, Marx, Mandelbrot, even the early Einstein. Influence on the part of an academic is usually accidental. This held even in the Middle Ages and the Renaissance; see Le Goff (1985). Also, the Enlightenment figures (Voltaire, Rousseau, d’Holbach, Diderot, Montesquieu) were all nonacademics at a time when academia was large.
CHAPTER 10
Overconfidence: Alpert and Raiffa (1982) (though apparently the paper languished for a decade before formal publication). Lichtenstein and Fischhoff (1977) showed that overconfidence can be influenced by item difficulty; it typically diminishes and turns into underconfidence in easy items (compare with Armelius [1979]). Plenty of papers since have tried to pin down the conditions of calibration failures or robustness (be they task training, ecological aspects of the domain, level of education, or nationality): Dawes (1980), Koriat, Lichtenstein, and Fischhoff (1980), Mayseless and Kruglanski (1987), Dunning et al. (1990), Ayton and McClelland (1997), Gervais and Odean (1999), Griffin and Varey (1996), Juslin (1991, 1993, 1994), Juslin and Olsson (1997), Kadane and Lichtenstein (1982), May (1986), McClelland and Bolger (1994), Pfeifer (1994), Russo and Schoemaker (1992), Klayman et al. (1999). Note the (unexpected) decrease in overconfidence under group decisions: see Sniezek and Henry (1989); solutions in Plous (1995). I am suspicious here of the Mediocristan/Extremistan distinction and the unevenness of the variables. Alas, I found no paper making this distinction. There are also solutions in Stoll (1996), Arkes et al. (1987). For overconfidence in finance, see Thorley (1999) and Barber and Odean (1999). For cross-boundaries effects, Yates et al. (1996, 1998), Angele et al. (1982). For simultaneous overconfidence and underconfidence, see Erev, Wallsten, and Budescu (1994).
Frequency vs. probability—the ecological problem: Hoffrage and Gigerenzer (1998) think that overconfidence is less significant when the problem is expressed in frequencies as opposed to probabilities. In fact, there has been a debate about the difference between “ecology” and laboratory; see Gigerenzer et al. (2000), Gigerenzer and Richter (1990), and Gigerenzer (1991). We are “fast and frugal” (Gigerenzer and Goldstein [1996]). As far as the Black Swan is concerned, these problems of ecology do not arise: we do not live in an environment in which we are supplied with frequencies or, more generally, for which we are fit. Also in ecology, Spariosu (2004) for the ludic aspect, Cosmides and Tooby (1990). Leary (1987) for Brunswikian ideas, as well as Brunswik (1952).
Lack of awareness of ignorance: “In short, the same knowledge that underlies the ability to produce correct judgment is also the knowledge that underlies the ability to recognize correct judgment. To lack the former is to be deficient in the latter.” From Kruger and Dunning (1999).
Expert problem in isolation: I see the expert problem as indistinguishable from Matthew effects and Extremistan fat tails (more later), yet I found no such link in the literatures of sociology and psychology.
Clinical knowledge and its problems: See Meehl (1954) and Dawes, Faust, and Meehl (1989). Most entertaining is the essay “Why I Do Not Attend Case Conferences” in Meehl (1973). See also Wagenaar and Keren (1985, 1986).
Financial analysts, herding, and forecasting: See Guedj and Bouchaud (2006), Abarbanell and Bernard (1992), Chen et al. (2002), De Bondt and Thaler (1990), Easterwood and Nutt (1999), Friesen and Weller (2002), Foster (1977), Hong and Kubik (2003), Jacob et al. (1999), Lim (2001), Liu (1998), Maines and Hand (1996), Mendenhall (1991), Mikhail et al. (1997, 1999), Zitzewitz (2001), and El-Galfy and Forbes (2005). For a comparison with weather forecasters (unfavorable): Tyszka and Zielonka (2002).
Economists and forecasting: Tetlock (2005), Makridakis and Hibon (2000), Makridakis et al. (1982), Makridakis et al. (1993), Gripaios (1994), Armstrong (1978, 1981); and rebuttals by McNees (1978), Tashman (2000), Blake et al. (1986), Onkal et al. (2003), Gillespie (1979), Baron (2004), Batchelor (1990, 2001), Dominitz and Grether (1999). Lamont (2002) looks for reputational factors: established forecasters get worse as they produce more radical forecasts to get attention, consistent with Tetlock’s hedgehog effect. Ashiya and Doi (2001) look for herd behavior in Japan. See McNees (1995), Remus et al. (1997), O’Neill and Desai (2005), Bewley and Fiebig (2002), Angner (2006), Bénassy-Quéré (2002); Brender and Pisani (2001) look at the Bloomberg consensus; De Bondt and Kappler (2004) claim evidence of weak persistence in fifty-two years of data, but I saw the slides in a presentation, never the paper, which after two years might never materialize. On overconfidence, Braun and Yaniv (1992). See Hahn (1993) for a general intellectual discussion. More general, Clemen (1986, 1989). For game theory, Green (2005).
Many operators, such as James Montier, and many newspapers and magazines (such as The Economist), run casual tests of prediction. Cumulatively, they must be taken seriously since they cover more variables.
Popular culture: In 1931, Edward Angly exposed forecasts made by President Hoover in a book titled Oh Yeah? Another hilarious book is Cerf and Navasky (1998), where, incidentally, I got the pre-1973 oil-estimation story.
Effects of information: The major paper is Bruner and Potter (1964). I thank Danny Kahneman for discussions and for pointing out this paper to me. See also Montier (2007), Oskamp (1965), and Benartzi (2001). These biases become worse with ambiguous information (Griffin and Tversky [1992]). For how they fail to disappear with expertise and training, see Kahneman and Tversky (1982) and Tversky and Kahneman (1982). See Kunda (1990) for how preference-consistent information is taken at face value, while preference-inconsistent information is processed critically.
Planning fallacy: Kahneman and Tversky (1979) and Buehler, Griffin, and Ross (2002). The planning fallacy shows a consistent bias in people’s planning ability, even with matters of a repeatable nature—though it is more exaggerated with nonrepeatable events.
Wars: Trivers (2002).
Are there incentives to delay?: Flyvbjerg et al. (2002).
Oskamp: Oskamp (1965) and Montier (2007).
Task characteristics and effect on decision making: Shanteau (1992).
Epistēmē vs. Technē: This distinction harks back to Aristotle; it recurs and then dies down, most recently resurfacing in accounts of tacit knowledge as “know-how.” See Ryle (1949), Polanyi (1958/1974), and Mokyr (2002).
Catherine the Great: The number of lovers comes from Rounding (2006).
Life expectancy: www.annuityadvantage.com/lifeexpectancy.htm. For projects, I have used a probability of exceeding with a power-law exponent of 3/2, $P(X > x) = K x^{-3/2}$. Thus the conditional expectation of x, knowing that x exceeds a, is 3a.
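Completing the algebra (a reconstruction from the stated exponent; the scale constant K drops out):

```latex
% The exceedance probability P(X > x) = K x^{-3/2} implies the density
% f(x) = (3/2) K x^{-5/2}. Hence, for the conditional expectation:
\[
\mathbb{E}[X \mid X > a]
= \frac{\int_a^{\infty} x f(x)\,dx}{P(X > a)}
= \frac{\tfrac{3}{2} K \int_a^{\infty} x^{-3/2}\,dx}{K a^{-3/2}}
= \frac{3 K a^{-1/2}}{K a^{-3/2}}
= 3a.
\]
% A project that has already run past a has an expected total length of 3a:
% the expected remaining wait grows with the wait already endured.
```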