
Rationality: From AI to Zombies


by Eliezer Yudkowsky


  *

  71

  What Evidence Filtered Evidence?

  I discussed the dilemma of the clever arguer, hired to sell you a box that may or may not contain a diamond. The clever arguer points out to you that the box has a blue stamp, and it is a valid known fact that diamond-containing boxes are more likely than empty boxes to bear a blue stamp. What happens at this point, from a Bayesian perspective? Must you helplessly update your probabilities, as the clever arguer wishes?

  If you can look at the box yourself, you can add up all the signs yourself. What if you can’t look? What if the only evidence you have is the word of the clever arguer, who is legally constrained to make only true statements, but does not tell you everything they know? Each statement that the clever arguer makes is valid evidence—how could you not update your probabilities? Has it ceased to be true that, in such-and-such a proportion of Everett branches or Tegmark duplicates in which box B has a blue stamp, box B contains a diamond? According to Jaynes, a Bayesian must always condition on all known evidence, on pain of paradox. But then the clever arguer can make you believe anything they choose, if there is a sufficient variety of signs to selectively report. That doesn’t sound right.

  Consider a simpler case, a biased coin, which may be biased to come up 2/3 heads and 1/3 tails, or 1/3 heads and 2/3 tails, both cases being equally likely a priori. Each H observed is 1 bit of evidence for an H-biased coin; each T observed is 1 bit of evidence for a T-biased coin. I flip the coin ten times, and then I tell you, “The 4th flip, 6th flip, and 9th flip came up heads.” What is your posterior probability that the coin is H-biased?

  And the answer is that it could be almost anything, depending on what chain of cause and effect lay behind my utterance of those words—my selection of which flips to report.

  I might be following the algorithm of reporting the result of the 4th, 6th, and 9th flips, regardless of the result of those and all other flips. If you know that I used this algorithm, the posterior odds are 8:1 in favor of an H-biased coin.

  I could be reporting on all flips, and only flips, that came up heads. In this case, you know that all 7 other flips came up tails, and the posterior odds are 16:1 against the coin being H-biased.

  I could have decided in advance to say the result of the 4th, 6th, and 9th flips only if the probability of the coin being H-biased exceeds 98%. And so on.
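
  To make the dependence on the reporting algorithm concrete, here is a minimal Python sketch (not part of the original text; the hypothesis labels and the ten-flip setup are simply the example above) that computes the posterior odds under the first two reporting algorithms:

    from fractions import Fraction

    # Two hypotheses with equal priors: the coin comes up heads with
    # probability 2/3 (H-biased) or with probability 1/3 (T-biased).
    p_heads = {"H-biased": Fraction(2, 3), "T-biased": Fraction(1, 3)}

    def posterior_odds(heads_seen, tails_seen):
        """Odds of H-biased : T-biased, conditioning on the flips we learn about."""
        likelihood = {h: p**heads_seen * (1 - p)**tails_seen
                      for h, p in p_heads.items()}
        return likelihood["H-biased"] / likelihood["T-biased"]

    # Algorithm 1: the 4th, 6th, and 9th flips are reported however they land.
    # We learn three heads and nothing at all about the other seven flips.
    print(posterior_odds(heads_seen=3, tails_seen=0))   # 8 -> 8:1 in favor of H-biased

    # Algorithm 2: all heads, and only heads, are reported.
    # Three reported heads imply the remaining seven flips came up tails.
    print(posterior_odds(heads_seen=3, tails_seen=7))   # 1/16 -> 16:1 against H-biased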

  Or consider the Monty Hall problem:

  On a game show, you are given the choice of three doors leading to three rooms. You know that in one room is $100,000, and the other two are empty. The host asks you to pick a door, and you pick door #1. Then the host opens door #2, revealing an empty room. Do you want to switch to door #3, or stick with door #1?

  The answer depends on the host’s algorithm. If the host always opens a door and always picks a door leading to an empty room, then you should switch to door #3. If the host always opens door #2 regardless of what is behind it, #1 and #3 both have 50% probabilities of containing the money. If the host only opens a door, at all, if you initially pick the door with the money, then you should definitely stick with #1.

  You shouldn’t condition just on #2 being empty, but on this fact plus the fact of the host choosing to open door #2. Many people are confused by the standard Monty Hall problem because they update only on #2 being empty, in which case #1 and #3 have equal probabilities of containing the money. This is why Bayesians are commanded to condition on all of their knowledge, on pain of paradox.
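
  The same point can be checked numerically. Below is a small Monte Carlo sketch (illustrative only; it assumes, as in the standard problem, that the host never opens your own door, and the three host algorithms are the ones listed above) estimating the probability that door #1 holds the money, given the shared observation that the host opened door #2 and it was empty:

    import random

    def trial(host_algorithm):
        """One game; you always pick door 1. Returns (saw the observation, door 1 wins)."""
        money = random.choice([1, 2, 3])
        if host_algorithm == "always opens an empty door among #2 and #3":
            opened = random.choice([d for d in (2, 3) if d != money])
        elif host_algorithm == "always opens door #2":
            opened = 2
        else:  # "opens a door only if you picked the money"
            opened = random.choice([2, 3]) if money == 1 else None
        observed = (opened == 2 and money != 2)   # host opened #2 and it was empty
        return observed, money == 1

    algorithms = ("always opens an empty door among #2 and #3",
                  "always opens door #2",
                  "opens a door only if you picked the money")
    for algo in algorithms:
        results = [trial(algo) for _ in range(200_000)]
        wins = [stick for seen, stick in results if seen]
        print(algo, "-> P(money behind #1 | observation) ~", round(sum(wins) / len(wins), 2))
    # Prints roughly 0.33, 0.50, and 1.00: the same observation, three different posteriors.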

  When someone says, “The 4th coinflip came up heads,” we are not conditioning on the 4th coinflip having come up heads—we are not taking the subset of all possible worlds where the 4th coinflip came up heads—rather we are conditioning on the subset of all possible worlds where a speaker following some particular algorithm said “The 4th coinflip came up heads.” The spoken sentence is not the fact itself; don’t be led astray by the mere meanings of words.

  Most legal processes work on the theory that every case has exactly two opposed sides and that it is easier to find two biased humans than one unbiased one. Between the prosecution and the defense, someone has a motive to present any given piece of evidence, so the court will see all the evidence; that is the theory. If there are two clever arguers in the box dilemma, it is not quite as good as one curious inquirer, but it is almost as good. But that is with two boxes. Reality often has many-sided problems, and deep problems, and nonobvious answers, which are not readily found by Blues and Greens screaming at each other.

  Beware lest you abuse the notion of evidence-filtering as a Fully General Counterargument to exclude all evidence you don’t like: “That argument was filtered, therefore I can ignore it.” If you’re ticked off by a contrary argument, then you are familiar with the case, and care enough to take sides. You probably already know your own side’s strongest arguments. You have no reason to infer, from a contrary argument, the existence of new favorable signs and portents which you have not yet seen. So you are left with the uncomfortable facts themselves; a blue stamp on box B is still evidence.

  But if you are hearing an argument for the first time, and you are only hearing one side of the argument, then indeed you should beware! In a way, no one can really trust the theory of natural selection until after they have listened to creationists for five minutes; and then they know it’s solid.

  *

  72

  Rationalization

  In The Bottom Line, I presented the dilemma of two boxes, only one of which contains a diamond, with various signs and portents as evidence. I dichotomized the curious inquirer and the clever arguer. The curious inquirer writes down all the signs and portents, and processes them, and finally writes down “Therefore, I estimate an 85% probability that box B contains the diamond.” The clever arguer works for the highest bidder, and begins by writing, “Therefore, box B contains the diamond,” and then selects favorable signs and portents to list on the lines above.

  The first procedure is rationality. The second procedure is generally known as “rationalization.”

  “Rationalization.” What a curious term. I would call it a wrong word. You cannot “rationalize” what is not already rational. It is as if “lying” were called “truthization.”

  On a purely computational level, there is a rather large difference between:

  Starting from evidence, and then crunching probability flows, in order to output a probable conclusion. (Writing down all the signs and portents, and then flowing forward to a probability on the bottom line which depends on those signs and portents.)

  Starting from a conclusion, and then crunching probability flows, in order to output evidence apparently favoring that conclusion. (Writing down the bottom line, and then flowing backward to select signs and portents for presentation on the lines above.)

  What fool devised such confusingly similar words, “rationality” and “rationalization,” to describe such extraordinarily different mental processes? I would prefer terms that made the algorithmic difference obvious, like “rationality” versus “giant sucking cognitive black hole.”

  Not every change is an improvement, but every improvement is necessarily a change. You cannot obtain more truth for a fixed proposition by arguing it; you can make more people believe it, but you cannot make it more true. To improve our beliefs, we must necessarily change our beliefs. Rationality is the operation that we use to obtain more accuracy for our beliefs by changing them. Rationalization operates to fix beliefs in place; it would be better named “anti-rationality,” both for its pragmatic results and for its reversed algorithm.

  “Rationality” is the forward flow that gathers evidence, weighs it, and outputs a conclusion. The curious inquirer used a forward-flow algorithm: first gathering the evidence, writing down a list of all visible signs and portents, which they then processed forward to obtain a previously unknown probability for the box containing the diamond. During the entire time that the rationality-process was running forward, the curious inquirer did not yet know their destination, which was why they were curious. In the Way of Bayes, the prior probability equals the expected posterior probability: If you know your destination, you are already there.
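
  That identity can be checked directly on the earlier coin example. A minimal sketch (the numbers come from the biased-coin setup above; the variable names are just for illustration): before one honest, unfiltered flip is reported, the probability-weighted average of the two possible posteriors already equals the prior.

    from fractions import Fraction

    prior_H = Fraction(1, 2)                      # P(coin is H-biased) before the flip
    heads_prob = {"H-biased": Fraction(2, 3), "T-biased": Fraction(1, 3)}

    # Probability that the next (honestly reported) flip comes up heads.
    p_heads = prior_H * heads_prob["H-biased"] + (1 - prior_H) * heads_prob["T-biased"]

    # Posterior P(H-biased) in each branch, by Bayes's Rule.
    post_if_heads = prior_H * heads_prob["H-biased"] / p_heads
    post_if_tails = prior_H * (1 - heads_prob["H-biased"]) / (1 - p_heads)

    # The expected posterior equals the prior: you cannot expect evidence to move you.
    expected_posterior = p_heads * post_if_heads + (1 - p_heads) * post_if_tails
    assert expected_posterior == prior_H          # both are 1/2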

  “Rationalization” is a backward flow from conclusion to selected evidence. First you write down the bottom line, which is known and fixed; the purpose of your processing is to find out which arguments you should write down on the lines above. This, not the bottom line, is the variable unknown to the running process.

  I fear that Traditional Rationality does not properly sensitize its users to the difference between forward flow and backward flow. In Traditional Rationality, there is nothing wrong with the scientist who arrives at a pet hypothesis and then sets out to find an experiment that proves it. A Traditional Rationalist would look at this approvingly, and say, “This pride is the engine that drives Science forward.” Well, it is the engine that drives Science forward. It is easier to find a prosecutor and defender biased in opposite directions, than to find a single unbiased human.

  But just because everyone does something, doesn’t make it okay. It would be better yet if the scientist, arriving at a pet hypothesis, set out to test that hypothesis for the sake of curiosity—creating experiments that would drive their own beliefs in an unknown direction.

  If you genuinely don’t know where you are going, you will probably feel quite curious about it. Curiosity is the first virtue, without which your questioning will be purposeless and your skills without direction.

  Feel the flow of the Force, and make sure it isn’t flowing backwards.

  *

  73

  A Rational Argument

  You are, by occupation, a campaign manager, and you’ve just been hired by Mortimer Q. Snodgrass, the Green candidate for Mayor of Hadleyburg. As a campaign manager reading a book on rationality, one question lies foremost on your mind: “How can I construct an impeccable rational argument that Mortimer Q. Snodgrass is the best candidate for Mayor of Hadleyburg?”

  Sorry. It can’t be done.

  “What?” you cry. “But what if I use only valid support to construct my structure of reason? What if every fact I cite is true to the best of my knowledge, and relevant evidence under Bayes’s Rule?”

  Sorry. It still can’t be done. You defeated yourself the instant you specified your argument’s conclusion in advance.

  This year, the Hadleyburg Trumpet sent out a 16-item questionnaire to all mayoral candidates, with questions like “Can you paint with all the colors of the wind?” and “Did you inhale?” Alas, the Trumpet’s offices were destroyed by a meteorite before publication. It’s a pity, since your own candidate, Mortimer Q. Snodgrass, compares well to his opponents on 15 out of 16 questions. The only sticking point was Question 11, “Are you now, or have you ever been, a supervillain?”

  So you are tempted to publish the questionnaire as part of your own campaign literature . . . with the 11th question omitted, of course.

  Which crosses the line between rationality and rationalization. It is no longer possible for the voters to condition on the facts alone; they must condition on the additional fact of their presentation, and infer the existence of hidden evidence.

  Indeed, you crossed the line at the point where you considered whether the questionnaire was favorable or unfavorable to your candidate, before deciding whether to publish it. “What!” you cry. “A campaign should publish facts unfavorable to their candidate?” But put yourself in the shoes of a voter, still trying to select a candidate—why would you censor useful information? You wouldn’t, if you were genuinely curious. If you were flowing forward from the evidence to an unknown choice of candidate, rather than flowing backward from a fixed candidate to determine the arguments.

  A “logical” argument is one that follows from its premises. Thus the following argument is illogical:

  All rectangles are quadrilaterals.

  All squares are quadrilaterals.

  Therefore, all squares are rectangles.

  This syllogism is not rescued from illogic by the truth of its premises or even the truth of its conclusion. It is worth distinguishing logical deductions from illogical ones, and to refuse to excuse them even if their conclusions happen to be true. For one thing, the distinction may affect how we revise our beliefs in light of future evidence. For another, sloppiness is habit-forming.

  Above all, the syllogism fails to state the real explanation. Maybe all squares are rectangles, but, if so, it’s not because they are both quadrilaterals. You might call it a hypocritical syllogism—one with a disconnect between its stated reasons and real reasons.

  If you really want to present an honest, rational argument for your candidate, in a political campaign, there is only one way to do it:

  Before anyone hires you, gather up all the evidence you can about the different candidates.

  Make a checklist which you, yourself, will use to decide which candidate seems best.

  Process the checklist.

  Go to the winning candidate.

  Offer to become their campaign manager.

  When they ask for campaign literature, print out your checklist.

  Only in this way can you offer a rational chain of argument, one whose bottom line was written flowing forward from the lines above it. Whatever actually decides your bottom line, is the only thing you can honestly write on the lines above.

  *

  74

  Avoiding Your Belief’s Real Weak Points

  A few years back, my great-grandmother died, in her nineties, after a long, slow, and cruel disintegration. I never knew her as a person, but in my distant childhood, she cooked for her family; I remember her gefilte fish, and her face, and that she was kind to me. At her funeral, my grand-uncle, who had taken care of her for years, spoke. He said, choking back tears, that God had called back his mother piece by piece: her memory, and her speech, and then finally her smile; and that when God finally took her smile, he knew it wouldn’t be long before she died, because it meant that she was almost entirely gone.

  I heard this and was puzzled, because it was an unthinkably horrible thing to happen to anyone, and therefore I would not have expected my grand-uncle to attribute it to God. Usually, a Jew would somehow just-not-think-about the logical implication that God had permitted a tragedy. According to Jewish theology, God continually sustains the universe and chooses every event in it; but ordinarily, drawing logical implications from this belief is reserved for happier occasions. By saying “God did it!” only when you’ve been blessed with a baby girl, and just-not-thinking “God did it!” for miscarriages and stillbirths and crib deaths, you can build up quite a lopsided picture of your God’s benevolent personality.

  Hence I was surprised to hear my grand-uncle attributing the slow disintegration of his mother to a deliberate, strategically planned act of God. It violated the rules of religious self-deception as I understood them.

  If I had noticed my own confusion, I could have made a successful surprising prediction. Not long afterward, my grand-uncle left the Jewish religion. (The only member of my extended family besides myself to do so, as far as I know.)

  Modern Orthodox Judaism is like no other religion I have ever heard of, and I don’t know how to describe it to anyone who hasn’t been forced to study Mishna and Gemara. There is a tradition of questioning, but the kind of questioning . . . It would not be at all surprising to hear a rabbi, in his weekly sermon, point out the conflict between the seven days of creation and the 13.7 billion years since the Big Bang—because he thought he had a really clever explanation for it, involving three other Biblical references, a Midrash, and a half-understood article in Scientific American. In Orthodox Judaism you’re allowed to notice inconsistencies and contradictions, but only for purposes of explaining them away, and whoever comes up with the most complicated explanation gets a prize.

  There is a tradition of inquiry. But you only attack targets for purposes of defending them. You only attack targets you know you can defend.

  In Modern Orthodox Judaism I have not heard much emphasis of the virtues of blind faith. You’re allowed to doubt. You’re just not allowed to successfully doubt.

  I expect that the vast majority of educated Orthodox Jews have questioned their faith at some point in their lives. But the questioning probably went something like this: “According to the skeptics, the Torah says that the universe was created in seven days, which is not scientifically accurate. But would the original tribespeople of Israel, gathered at Mount Sinai, have been able to understand the scientific truth, even if it had been presented to them? Did they even have a word for ‘billion’? It’s easier to see the seven-days story as a metaphor—first God created light, which represents the Big Bang . . .”

  Is this the weakest point at which to attack one’s own Judaism? Read a bit further on in the Torah, and you can find God killing the first-born male children of Egypt to convince an unelected Pharaoh to release slaves who logically could have been teleported out of the country. An Orthodox Jew is most certainly familiar with this episode, because they are supposed to read through the entire Torah in synagogue once per year, and this event has an associated major holiday. The name “Passover” (“Pesach”) comes from God passing over the Jewish households while killing every male firstborn in Egypt.

  Modern Orthodox Jews are, by and large, kind and civilized people; far more civilized than the several editors of the Old Testament. Even the old rabbis were more civilized. There’s a ritual in the Seder where you take ten drops of wine from your cup, one drop for each of the Ten Plagues, to emphasize the suffering of the Egyptians. (Of course, you’re supposed to be sympathetic to the suffering of the Egyptians, but not so sympathetic that you stand up and say, “This is not right! It is wrong to do such a thing!”) It shows an interesting contrast—the rabbis were sufficiently kinder than the compilers of the Old Testament that they saw the harshness of the Plagues. But Science was weaker in those days, and so rabbis could ponder the more unpleasant aspects of Scripture without fearing that it would break their faith entirely.

 
