
Rationality: From AI to Zombies


by Eliezer Yudkowsky


  1. I think 0.92 is the highest correlation I’ve ever seen in any evolutionary psychology experiment, and indeed, one of the highest correlations I’ve seen in any psychology experiment. (Although I’ve seen e.g. a correlation of 0.98 reported for asking one group of subjects “How similar is A to B?” and another group “What is the probability of A given B?” on questions like “How likely are you to draw 60 red balls and 40 white balls from this barrel of 800 red balls and 200 white balls?”—in other words, these are simply processed as the same question.)

  Since we are all Bayesians here, we may take our priors into account and ask if at least some of this unexpectedly high correlation is due to luck. The evolutionary fine-tuning we can probably take for granted; this is a huge selection pressure we’re talking about. The remaining sources of suspiciously low variance are (a) whether a large group of adults could correctly envision, on average, relative degrees of parental grief (apparently they can), and (b) whether the surviving !Kung are typical ancestral hunter-gatherers in this dimension, or whether variance between hunter-gatherer tribal types should have been too high to allow a correlation of 0.92.

  But even after taking into account any skeptical priors, correlation 0.92 and N = 221 is pretty strong evidence, and our posteriors should be less skeptical on all these counts.
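  The claim that r = 0.92 with N = 221 is strong evidence can be made concrete with a standard Fisher z-transform confidence interval. This is a sketch under the usual large-sample approximation; the essay reports only r and N, so this is not a reanalysis of the original data:

```python
import math

def correlation_ci(r, n, z_crit=1.96):
    """Approximate 95% confidence interval for a Pearson correlation,
    using the Fisher z-transform (a standard large-sample method)."""
    z = math.atanh(r)                # Fisher transform of r
    se = 1.0 / math.sqrt(n - 3)     # approximate standard error in z-space
    lo, hi = z - z_crit * se, z + z_crit * se
    return math.tanh(lo), math.tanh(hi)

lo, hi = correlation_ci(0.92, 221)
print(f"95% CI for r: ({lo:.3f}, {hi:.3f})")  # roughly (0.897, 0.938)
```

  Even the lower end of the interval stays near 0.9, which is the sense in which this sample size leaves little room for the correlation to be a fluke.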

  2. You might think it an inelegance of the experiment that it was performed prospectively on imagined grief, rather than retrospectively on real grief. But it is prospectively imagined grief that will actually operate to steer parental behavior away from losing the child! From an evolutionary standpoint, an actual dead child is a sunk cost; evolution “wants” the parent to learn from the pain, not do it again, adjust back to their hedonic set point, and go on raising other children.

  3. Similarly, the graph that correlates to parental grief is for the future reproductive potential of a child that has survived to a given age, and not the sunk cost of raising the child which has survived to that age. (Might we get an even higher correlation if we tried to take into account the reproductive opportunity cost of raising a child of age X to independent maturity, while discarding all sunk costs to raise a child to age X?)

  Humans usually do notice sunk costs—this is presumably either an adaptation to prevent us from switching strategies too often (compensating for an overeager opportunity-noticer?) or an unfortunate spandrel of pain felt on wasting resources.

  Evolution, on the other hand—it’s not that evolution “doesn’t care about sunk costs,” but that evolution doesn’t even remotely “think” that way; “evolution” is just a macrofact about the real historical reproductive consequences.

  So—of course—the parental grief adaptation is fine-tuned in a way that has nothing to do with past investment in a child, and everything to do with the future reproductive consequences of losing that child. Natural selection isn’t crazy about sunk costs the way we are.

  But—of course—the parental grief adaptation goes on functioning as if the parent were living in a !Kung tribe rather than Canada. Most humans would notice the difference.

  Humans and natural selection are insane in different stable complicated ways.

  *

  1. Robert Wright, The Moral Animal: Why We Are the Way We Are: The New Science of Evolutionary Psychology (Pantheon Books, 1994); Charles B. Crawford, Brenda E. Salter, and Kerry L. Jang, “Human Grief: Is Its Intensity Related to the Reproductive Value of the Deceased?,” Ethology and Sociobiology 10, no. 4 (1989): 297–307.

  141

  Superstimuli and the Collapse of Western Civilization

  At least three people have died playing online games for days without rest. People have lost their spouses, jobs, and children to World of Warcraft. If people have the right to play video games—and it’s hard to imagine a more fundamental right—then the market is going to respond by supplying the most engaging video games that can be sold, to the point that exceptionally engaged consumers are removed from the gene pool.

  How does a consumer product become so involving that, after 57 hours of using the product, the consumer would rather use the product for one more hour than eat or sleep? (I suppose one could argue that the consumer makes a rational decision that they’d rather play Starcraft for the next hour than live out the rest of their life, but let’s just not go there. Please.)

  A candy bar is a superstimulus: it contains more concentrated sugar, salt, and fat than anything that exists in the ancestral environment. A candy bar matches taste buds that evolved in a hunter-gatherer environment, but it matches those taste buds much more strongly than anything that actually existed in the hunter-gatherer environment. The signal that once reliably correlated to healthy food has been hijacked, blotted out with a point in tastespace that wasn’t in the training dataset—an impossibly distant outlier on the old ancestral graphs. Tastiness, formerly representing the evolutionarily identified correlates of healthiness, has been reverse-engineered and perfectly matched with an artificial substance. Unfortunately there’s no equally powerful market incentive to make the resulting food item as healthy as it is tasty. We can’t taste healthfulness, after all.

  The now-famous Dove Evolution video shows the painstaking construction of another superstimulus: an ordinary woman transformed by makeup, careful photography, and finally extensive Photoshopping, into a billboard model—a beauty impossible, unmatchable by human women in the unretouched real world. Actual women are killing themselves (e.g., supermodels using cocaine to keep their weight down) to keep up with competitors that literally don’t exist.

  And likewise, a video game can be so much more engaging than mere reality, even through a simple computer monitor, that someone will play it without food or sleep until they literally die. I don’t know all the tricks used in video games, but I can guess some of them—challenges poised at the critical point between ease and impossibility, intermittent reinforcement, feedback showing an ever-increasing score, social involvement in massively multiplayer games.

  Is there a limit to the market incentive to make video games more engaging? You might hope there’d be no incentive past the point where the players lose their jobs; after all, they must be able to pay their subscription fee. This would imply a “sweet spot” for the addictiveness of games, where the mode of the bell curve is having fun, and only a few unfortunate souls on the tail become addicted to the point of losing their jobs. As of 2007, playing World of Warcraft for 58 hours straight until you literally die is still the exception rather than the rule. But video game manufacturers compete against each other, and if you can make your game 5% more addictive, you may be able to steal 50% of your competitor’s customers. You can see how this problem could get a lot worse.

  If people have the right to be tempted—and that’s what free will is all about—the market is going to respond by supplying as much temptation as can be sold. The incentive is to make your stimuli 5% more tempting than those of your current leading competitors. This continues well beyond the point where the stimuli become ancestrally anomalous superstimuli. Consider how our standards of product-selling feminine beauty have changed since the advertisements of the 1950s. And as candy bars demonstrate, the market incentive also continues well beyond the point where the superstimulus begins wreaking collateral damage on the consumer.

  So why don’t we just say no? A key assumption of free-market economics is that, in the absence of force and fraud, people can always refuse to engage in a harmful transaction. (To the extent this is true, a free market would be, not merely the best policy on the whole, but a policy with few or no downsides.)

  An organism that regularly passes up food will die, as some video game players found out the hard way. But, on some occasions in the ancestral environment, a typically beneficial (and therefore tempting) act may in fact be harmful. Humans, as organisms, have an unusually strong ability to perceive these special cases using abstract thought. On the other hand, we also tend to imagine lots of special-case consequences that don’t exist, like ancestor spirits commanding us not to eat perfectly good rabbits.

  Evolution seems to have struck a compromise, or perhaps just aggregated new systems on top of old. Homo sapiens are still tempted by food, but our oversized prefrontal cortices give us a limited ability to resist temptation. Not unlimited ability—our ancestors with too much willpower probably starved themselves to sacrifice to the gods, or failed to commit adultery one too many times. The video game players who died must have exercised willpower (in some sense) to keep playing for so long without food or sleep; the evolutionary hazard of self-control.

  Resisting any temptation takes conscious expenditure of an exhaustible supply of mental energy. It is not in fact true that we can “just say no”—not just say no, without cost to ourselves. Even humans who won the birth lottery for willpower or foresightfulness still pay a price to resist temptation. The price is just more easily paid.

  Our limited willpower evolved to deal with ancestral temptations; it may not operate well against enticements beyond anything known to hunter-gatherers. Even where we successfully resist a superstimulus, it seems plausible that the effort required would deplete willpower much faster than resisting ancestral temptations.

  Is public display of superstimuli a negative externality, even to the people who say no? Should we ban chocolate cookie ads, or storefronts that openly say “Ice Cream”?

  Just because a problem exists doesn’t show (without further justification and a substantial burden of proof) that the government can fix it. The regulator’s career incentive does not focus on products that combine low-grade consumer harm with addictive superstimuli; it focuses on products with failure modes spectacular enough to get into the newspaper. Conversely, just because the government may not be able to fix something, doesn’t mean it isn’t going wrong.

  I leave you with a final argument from fictional evidence: Simon Funk’s online novel After Life depicts (among other plot points) the planned extermination of biological Homo sapiens—not by marching robot armies, but by artificial children that are much cuter and sweeter and more fun to raise than real children. Perhaps the demographic collapse of advanced societies happens because the market supplies ever-more-tempting alternatives to having children, while the attractiveness of changing diapers remains constant over time. Where are the advertising billboards that say “BREED”? Who will pay professional image consultants to make arguing with sullen teenagers seem more alluring than a vacation in Tahiti?

  “In the end,” Simon Funk wrote, “the human species was simply marketed out of existence.”

  *

  142

  Thou Art Godshatter

  Before the twentieth century, not a single human being had an explicit concept of “inclusive genetic fitness,” the sole and absolute obsession of the blind idiot god. We have no instinctive revulsion toward condoms or oral sex. Our brains, those supreme reproductive organs, don’t perform a check for reproductive efficacy before granting us sexual pleasure.

  Why not? Why aren’t we consciously obsessed with inclusive genetic fitness? Why did the Evolution-of-Humans Fairy create brains that would invent condoms? “It would have been so easy,” thinks the human, who can design new complex systems in an afternoon.

  The Evolution Fairy, as we all know, is obsessed with inclusive genetic fitness. When she decides which genes to promote to universality, she doesn’t seem to take into account anything except the number of copies a gene produces. (How strange!)

  But since the maker of intelligence is thus obsessed, why not create intelligent agents—you can’t call them humans—who would likewise care purely about inclusive genetic fitness? Such agents would have sex only as a means of reproduction, and wouldn’t bother with sex that involved birth control. They could eat food out of an explicitly reasoned belief that food was necessary to reproduce, not because they liked the taste, and so they wouldn’t eat candy if it became detrimental to survival or reproduction. Post-menopausal women would babysit grandchildren until they became sick enough to be a net drain on resources, and would then commit suicide.

  It seems like such an obvious design improvement—from the Evolution Fairy’s perspective.

  Now it’s clear that it’s hard to build a powerful enough consequentialist. Natural selection sort-of reasons consequentially, but only by depending on the actual consequences. Human evolutionary theorists have to do really high-falutin’ abstract reasoning in order to imagine the links between adaptations and reproductive success.

  But human brains clearly can imagine these links in protein. So when the Evolution Fairy made humans, why did It bother with any motivation except inclusive genetic fitness?

  It’s been less than two centuries since a protein brain first represented the concept of natural selection. The modern notion of “inclusive genetic fitness” is even more subtle, a highly abstract concept. What matters is not the number of shared genes. Chimpanzees share 95% of your genes. What matters is shared genetic variance, within a reproducing population—your sister is one-half related to you, because any variations in your genome, within the human species, are 50% likely to be shared by your sister.
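  The one-half figure for siblings can be checked with a toy Mendelian simulation (hypothetical code, not from the text): label each parent's two alleles distinctly, draw two children who each inherit one allele from each parent, and count how many of one child's alleles the other also carries. The average fraction comes out to one-half:

```python
import random

def sibling_relatedness(trials=100_000, seed=0):
    """Monte Carlo estimate of the coefficient of relatedness between
    full siblings, counting only segregating variation: each parent
    carries two distinguishable alleles, and each child independently
    inherits one allele from each parent."""
    rng = random.Random(seed)
    shared = 0.0
    for _ in range(trials):
        # Label the four parental alleles: mother (0, 1), father (2, 3).
        child_a = (rng.choice((0, 1)), rng.choice((2, 3)))
        child_b = (rng.choice((0, 1)), rng.choice((2, 3)))
        # Fraction of child_a's two alleles that child_b also carries.
        shared += sum(a in child_b for a in child_a) / 2
    return shared / trials

print(sibling_relatedness())  # ≈ 0.5
```

  Note that the simulation tracks variants, not whole genomes, which is exactly the distinction the paragraph draws against the chimpanzee comparison.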

  Only in the last century—arguably only in the last fifty years—have evolutionary biologists really begun to understand the full range of causes of reproductive success, things like reciprocal altruism and costly signaling. Without all this highly detailed knowledge, an intelligent agent that set out to “maximize inclusive fitness” would fall flat on its face.

  So why not preprogram protein brains with the knowledge? Why wasn’t a concept of “inclusive genetic fitness” programmed into us, along with a library of explicit strategies? Then you could dispense with all the reinforcers. The organism would be born knowing that, with high probability, fatty foods would lead to fitness. If the organism later learned that this was no longer the case, it would stop eating fatty foods. You could refactor the whole system. And it wouldn’t invent condoms or cookies.

  This looks like it should be quite possible in principle. I occasionally run into people who don’t quite understand consequentialism, who say, “But if the organism doesn’t have a separate drive to eat, it will starve, and so fail to reproduce.” So long as the organism knows this very fact, and has a utility function that values reproduction, it will automatically eat. In fact, this is exactly the consequentialist reasoning that natural selection itself used to build automatic eaters.
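  The reasoning in this paragraph can be rendered as a toy consequentialist agent (entirely hypothetical names and probabilities): give the agent nothing but a world model and a utility function over reproduction, and "eat" falls out as the chosen action with no food-specific drive anywhere in the code:

```python
def p_reproduce(ate: bool) -> float:
    """Assumed toy world model: eating raises survival odds,
    and only survivors get a chance to reproduce."""
    p_survive = 0.95 if ate else 0.05
    return p_survive * 0.5          # P(survive) * P(reproduce | survive)

def choose_action() -> bool:
    # The agent's only terminal value is reproduction; eating wins
    # purely because the world model says starving lowers that value.
    return max((True, False), key=p_reproduce)

print("eat" if choose_action() else "starve")  # → eat
```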

  What about curiosity? Wouldn’t a consequentialist only be curious when it saw some specific reason to be curious? And wouldn’t this cause it to miss out on lots of important knowledge that came with no specific reason for investigation attached? Again, a consequentialist will investigate given only the knowledge of this very same fact. If you consider the curiosity drive of a human—which is not undiscriminating, but responds to particular features of problems—then this complex adaptation is purely the result of consequentialist reasoning by DNA, an implicit representation of knowledge: Ancestors who engaged in this kind of inquiry left more descendants.

  So in principle, the pure reproductive consequentialist is possible. In principle, all the ancestral history implicitly represented in cognitive adaptations can be converted to explicitly represented knowledge, running on a core consequentialist.

  But the blind idiot god isn’t that smart. Evolution is not a human programmer who can simultaneously refactor whole code architectures. Evolution is not a human programmer who can sit down and type out instructions at sixty words per minute.

  For millions of years before hominid consequentialism, there was reinforcement learning. The reward signals were events that correlated reliably to reproduction. You can’t ask a nonhominid brain to foresee that a child eating fatty foods now will live through the winter. So the DNA builds a protein brain that generates a reward signal for eating fatty food. Then it’s up to the organism to learn which prey animals are tastiest.

  DNA constructs protein brains with reward signals that have a long-distance correlation to reproductive fitness, but a short-distance correlation to organism behavior. You don’t have to figure out that eating sugary food in the fall will lead to digesting calories that can be stored as fat to help you survive the winter so that you mate in spring to produce offspring in summer. An apple simply tastes good, and your brain just has to plot out how to get more apples off the tree.
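  The division of labor described here (a hardwired proxy reward, plus learned behavior) is essentially reinforcement learning. A minimal epsilon-greedy sketch, with hypothetical foods and reward values standing in for "taste":

```python
import random

def learn_tastiest(steps=200, seed=1):
    """Epsilon-greedy learner in a toy world: the agent only ever sees
    the proxy reward 'taste', yet converges on the option that (by
    assumption) correlates with long-run fitness."""
    rng = random.Random(seed)
    taste = {"apple": 0.9, "bark": 0.1}           # assumed proxy rewards
    estimates = {food: 0.0 for food in taste}
    counts = {food: 0 for food in taste}
    for _ in range(steps):
        if rng.random() < 0.1:                    # occasional exploration
            food = rng.choice(list(taste))
        else:                                     # exploit best estimate
            food = max(estimates, key=estimates.get)
        reward = taste[food] + rng.gauss(0, 0.05) # noisy taste signal
        counts[food] += 1
        # Incremental running-mean update of the value estimate.
        estimates[food] += (reward - estimates[food]) / counts[food]
    return max(estimates, key=estimates.get)

print(learn_tastiest())  # → apple
```

  Nothing in the learner mentions winters, calories, or offspring; the correlation to fitness lives entirely in how the reward table was (hypothetically) chosen, which is the essay's point about DNA's role.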

  And so organisms evolve rewards for eating, and building nests, and scaring off competitors, and helping siblings, and discovering important truths, and forming strong alliances, and arguing persuasively, and of course having sex . . .

  When hominid brains capable of cross-domain consequential reasoning began to show up, they reasoned consequentially about how to get the existing reinforcers. It was a relatively simple hack, vastly simpler than rebuilding an “inclusive fitness maximizer” from scratch. The protein brains plotted how to acquire calories and sex, without any explicit cognitive representation of “inclusive fitness.”

  A human engineer would have said, “Whoa, I’ve just invented a consequentialist! Now I can take all my previous hard-won knowledge about which behaviors improve fitness, and declare it explicitly! I can convert all this complicated reinforcement learning machinery into a simple declarative knowledge statement that ‘fatty foods and sex usually improve your inclusive fitness.’ Consequential reasoning will automatically take care of the rest. Plus, it won’t have the obvious failure mode where it invents condoms!”

  But then a human engineer wouldn’t have built the retina backward, either.

  The blind idiot god is not a unitary purpose, but a many-splintered attention. Foxes evolve to catch rabbits, rabbits evolve to evade foxes; there are as many evolutions as species. But within each species, the blind idiot god is purely obsessed with inclusive genetic fitness. No quality is valued, not even survival, except insofar as it increases reproductive fitness. There’s no point in an organism with steel skin if it ends up having 1% less reproductive capacity.

 
