—Recovering_irrationalist
Recovering_irrationalist, you’ve got no idea how glad I was to see you post that comment.
Of course I had more than just one reason for spending all that time writing about quantum physics. I like having lots of hidden motives. It’s the closest I can ethically get to being a supervillain.
But to give an example of a purpose I could only accomplish by discussing quantum physics . . .
In physics, you can get absolutely clear-cut issues. Not in the sense that the issues are trivial to explain. But if you try to apply Bayes to healthcare, or economics, you may not be able to formally lay out what is the simplest hypothesis, or what the evidence supports. But when I say “macroscopic decoherence is simpler than collapse” it is actually strict simplicity; you could write the two hypotheses out as computer programs and count the lines of code. Nor is the evidence itself in dispute.
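(As a minimal sketch of what "count the lines of code" means here, consider the two hypotheses as toy programs. This is my illustration, not a real simulator; the collapse trigger below is a deliberately arbitrary stand-in, since actual collapse postulates never specify one. The point is only that any collapse program must contain the unitary-evolution program, plus extra code.)

```python
import numpy as np
from scipy.linalg import expm

def unitary_step(state, hamiltonian, dt):
    # One step of Schrodinger evolution: psi -> exp(-i H dt) psi.
    return expm(-1j * hamiltonian * dt) @ state

def many_worlds(state, hamiltonian, dt, steps):
    # Hypothesis 1: the wavefunction always evolves unitarily. That's all.
    for _ in range(steps):
        state = unitary_step(state, hamiltonian, dt)
    return state

def collapse(state, hamiltonian, dt, steps, threshold=0.99):
    # Hypothesis 2: the same program, plus a collapse rule. The trigger
    # and threshold are toy stand-ins; no real postulate supplies them.
    for _ in range(steps):
        state = unitary_step(state, hamiltonian, dt)
        probs = np.abs(state) ** 2
        if probs.max() > threshold:        # extra code: trigger condition
            state = np.zeros_like(state)   # extra code: discard branches
            state[probs.argmax()] = 1.0    # extra code: renormalize
    return state
```

Whatever rule you substitute for the toy trigger, the second program contains the first as a subset, plus extra lines. That is the sense in which the comparison is strict.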
I wanted a very clear example—Bayes says “zig,” this is a zag—when it came time to break your allegiance to Science.
“Oh, sure,” you say, “the physicists messed up the many-worlds thing, but give them a break, Eliezer! No one ever claimed that the social process of science was perfect. People are human; they make mistakes.”
But the physicists who refuse to adopt many-worlds aren’t disobeying the rules of Science. They’re obeying the rules of Science.
The tradition handed down through the generations says that a new physics theory comes up with new experimental predictions that distinguish it from the old theory. You perform the test, and the new theory is confirmed or falsified. If it’s confirmed, you hold a huge celebration, call the newspapers, and hand out Nobel Prizes for everyone; any doddering old emeritus professors who refuse to convert are quietly humored. If the theory is disconfirmed, the lead proponent publicly recants, and gains a reputation for honesty.
This is not how things do work in science; rather it is how things are supposed to work in Science. It’s the ideal to which all good scientists aspire.
Now many-worlds comes along, and it doesn’t seem to make any new predictions relative to the old theory. That’s suspicious. And there’s all these other worlds, but you can’t see them. That’s really suspicious. It just doesn’t seem scientific.
If you got as far as Recovering_irrationalist—so that many-worlds now seems perfectly logical, obvious and normal—and you also started out as a Traditional Rationalist, then you should be able to switch back and forth between the Scientific view and the Bayesian view, like a Necker Cube.
So now put on your Science Goggles—you’ve still got them around somewhere, right? Forget everything you know about Kolmogorov complexity, Solomonoff induction or Minimum Message Lengths. That’s not part of the traditional training. You just eyeball something to see how “simple” it looks. The word “testable” doesn’t conjure up a mental image of Bayes’s Theorem governing probability flows; it conjures up a mental image of being in a lab, performing an experiment, and having the celebration (or public recantation) afterward.
Science-Goggles on: The current quantum theory has passed all experimental tests so far. Many-worlds doesn’t make any new testable predictions—the amazing new phenomena it predicts are all hidden away where we can’t see them. You can get along fine without supposing the other worlds, and that’s just what you should do. The whole thing smacks of science fiction. But it must be admitted that quantum physics is a very deep and very confusing issue, and who knows what discoveries might be in store? Call me when many-worlds makes a testable prediction.
Science-Goggles off.
Bayes-Goggles on: The simplest quantum equations that cover all known evidence don’t have a special exception for human-sized masses. There isn’t even any reason to ask that particular question. Next!
Okay, so is this a problem we can fix in five minutes with some duct tape and superglue?
No.
Huh? Why not just teach new graduating classes of scientists about Solomonoff induction and Bayes’s Rule?
Centuries ago, there was a widespread idea that the Wise could unravel the secrets of the universe just by thinking about them, while to go out and look at things was lesser, inferior, naive, and would just delude you in the end. You couldn’t trust the way things looked—only thought could be your guide.
Science began as a rebellion against this Deep Wisdom. At the core is the pragmatic belief that human beings, sitting around in their armchairs trying to be Deeply Wise, just drift off into never-never land. You couldn’t trust your thoughts. You had to make advance experimental predictions—predictions that no one else had made before—run the test, and confirm the result. That was evidence. Sitting in your armchair, thinking about what seemed reasonable . . . would not be taken to prejudice your theory, because Science wasn’t an idealistic belief about pragmatism, or getting your hands dirty. It was, rather, the dictum that experiment alone would decide. Only experiments could judge your theory—not your nationality, or your religious professions, or the fact that you’d invented the theory in your armchair. Only experiments! If you sat in your armchair and came up with a theory that made a novel prediction, and experiment confirmed the prediction, then we would care about the result of the experiment, not where your hypothesis came from.
That’s Science. And if you say that many-worlds should replace the immensely successful Copenhagen Interpretation, adding on all these twin Earths that can’t be observed, just because it sounds more reasonable and elegant—not because it crushed the old theory with a superior experimental prediction—then you’re undoing the core scientific rule that prevents people from running out and putting angels into all the theories, because angels are more reasonable and elegant.
You think teaching a few people about Solomonoff induction is going to solve that problem? Nobel laureate Robert Aumann—who first proved that Bayesian agents with common priors cannot agree to disagree—is a believing Orthodox Jew. Aumann helped a project to test the Torah for “Bible codes,” hidden prophecies from God—and concluded that the project had failed to confirm the codes’ existence. Do you want Aumann thinking that once you’ve got Solomonoff induction, you can forget about the experimental method? Do you think that’s going to help him? And most scientists out there will not rise to the level of Robert Aumann.
Okay, Bayes-Goggles back on. Are you really going to believe that large parts of the wavefunction disappear when you can no longer see them? As a result of the only non-linear non-unitary non-differentiable non-CPT-symmetric acausal faster-than-light informally-specified phenomenon in all of physics? Just because, by sheer historical contingency, the stupid version of the theory was proposed first?
Are you going to make a major modification to a scientific model, and believe in zillions of other worlds you can’t see, without a defining moment of experimental triumph over the old model?
Or are you going to reject probability theory?
Will you give your allegiance to Science, or to Bayes?
Michael Vassar once observed (tongue-in-cheek) that it was a good thing that a majority of the human species believed in God, because otherwise, he would have a very hard time rejecting majoritarianism. But since the majority opinion that God exists is simply unbelievable, we have no choice but to reject the extremely strong philosophical arguments for majoritarianism.
You can see (one of the reasons) why I went to such lengths to explain quantum theory. Those who are good at math should now be able to visualize both macroscopic decoherence, and the probability theory of simplicity and testability—get the insanity of a single global world on a gut level.
I wanted to present you with a nice, sharp dilemma between rejecting the scientific method, or embracing insanity.
Why? I’ll give you a hint: It’s not just because I’m evil. If you would guess my motives here, think beyond the first obvious answer.
PS: If you try to come up with clever ways to wriggle out of the dilemma, you’re just going to get shot down in future essays. You have been warned.
*
245
Science Doesn’t Trust Your Rationality
Scott Aaronson suggests that many-worlds and libertarianism are similar in that they are both cases of bullet-swallowing, rather than bullet-dodging:
Libertarianism and MWI are both grand philosophical theories that start from premises that almost all educated people accept (quantum mechanics in the one case, Econ 101 in the other), and claim to reach conclusions that most educated people reject, or are at least puzzled by (the existence of parallel universes / the desirability of eliminating fire departments).
Now there’s an analogy that would never have occurred to me.
I’ve previously argued that Science rejects Many-Worlds but Bayes accepts it. (Here, “Science” is capitalized because we are talking about the idealized form of Science, not just the actual social process of science.)
It furthermore seems to me that there is a deep analogy between (small-“l”) libertarianism and Science:
Both are based on a pragmatic distrust of reasonable-sounding arguments.
Both try to build systems that are more trustworthy than the people in them.
Both accept that people are flawed, and try to harness their flaws to power the system.
The core argument for libertarianism is historically motivated distrust of lovely theories of “How much better society would be, if we just made a rule that said XYZ.” If that sort of trick actually worked, then more regulations would correlate with higher economic growth as society moved from local to global optima. But when some person or interest group gets enough power to start doing everything they think is a good idea, history says that what actually happens is Revolutionary France or Soviet Russia.
The plans that in lovely theory should have made everyone happy ever after, don’t have the results predicted by reasonable-sounding arguments. And power corrupts, and attracts the corrupt.
So you regulate as little as possible, because you can’t trust the lovely theories and you can’t trust the people who implement them.
You don’t shake your finger at people for being selfish. You try to build an efficient system of production out of selfish participants, by requiring transactions to be voluntary. So people are forced to play positive-sum games, because that’s how they get the other party to sign the contract. With violence restrained and contracts enforced, individual selfishness can power a globally productive system.
Of course none of this works quite so well in practice as in theory, and I’m not going to go into market failures, commons problems, etc. The core argument for libertarianism is not that libertarianism would work in a perfect world, but that it degrades gracefully into real life. Or rather, degrades less awkwardly than any other known economic principle. (People who see libertarianism as the perfect solution for perfect people strike me as kinda missing the point of the “pragmatic distrust” thing.)
Science first came to know itself as a rebellion against trusting the word of Aristotle. If the people of that revolution had merely said, “Let us trust ourselves, not Aristotle!” they would have flashed and faded like the French Revolution.
But the Scientific Revolution lasted because—like the American Revolution—the architects propounded a stranger philosophy: “Let us trust no one! Not even ourselves!”
In the beginning came the idea that we can’t just toss out Aristotle’s armchair reasoning and replace it with different armchair reasoning. We need to talk to Nature, and actually listen to what It says in reply. This, itself, was a stroke of genius.
But then came the challenge of implementation. People are stubborn, and may not want to accept the verdict of experiment. Shall we shake a disapproving finger at them, and say “Naughty”?
No; we assume and accept that each individual scientist may be crazily attached to their personal theories. Nor do we assume that anyone can be trained out of this tendency—we don’t try to choose Eminent Judges who are supposed to be impartial.
Instead, we try to harness the individual scientist’s stubborn desire to prove their personal theory, by saying: “Make a new experimental prediction, and do the experiment. If you’re right, and the experiment is replicated, you win.” So long as scientists believe this is true, they have a motive to do experiments that can falsify their own theories. Only by accepting the possibility of defeat is it possible to win. And any great claim will require replication; this gives scientists a motive to be honest, on pain of great embarrassment.
And so the stubbornness of individual scientists is harnessed to produce a steady stream of knowledge at the group level. The System is somewhat more trustworthy than its parts.
Libertarianism secretly relies on most individuals being prosocial enough to tip at a restaurant they won’t ever visit again. An economy of genuinely selfish human-level agents would implode. Similarly, Science relies on most scientists not committing sins so egregious that they can’t rationalize them away.
To the extent that scientists believe they can promote their theories by playing academic politics—or game the statistical methods to potentially win without a chance of losing—or to the extent that nobody bothers to replicate claims—science degrades in effectiveness. But it degrades gracefully, as such things go.
The part where the successful predictions belong to the theory and theorists who originally made them, and cannot just be stolen by a theory that comes along later—without a novel experimental prediction—is an important feature of this social process.
The final upshot is that Science is not easily reconciled with probability theory. If you do a probability-theoretic calculation correctly, you’re going to get the rational answer. Science doesn’t trust your rationality, and it doesn’t rely on your ability to use probability theory as the arbiter of truth. It wants you to set up a definitive experiment.
Regarding Science as a mere approximation to some probability-theoretic ideal of rationality . . . would certainly seem to be rational. There seems to be an extremely reasonable-sounding argument that Bayes’s Theorem is the hidden structure that explains why Science works. But to subordinate Science to the grand schema of Bayesianism, and let Bayesianism come in and override Science’s verdict when that seems appropriate, is not a trivial step!
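(Spelled out, with H_new and H_old standing for the competing theories and E for the experimental evidence, the reasonable-sounding argument is just the odds form of Bayes’s Theorem:

$$\frac{P(H_{\text{new}} \mid E)}{P(H_{\text{old}} \mid E)} = \frac{P(H_{\text{new}})}{P(H_{\text{old}})} \times \frac{P(E \mid H_{\text{new}})}{P(E \mid H_{\text{old}})}.$$

A confirmed novel prediction is the special case where the likelihood ratio on the right is enormous. But the equation does not care whether the prediction was made in advance, or whether any ritual of confirmation was performed; trusting it in those cases is exactly the override being contemplated.)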
Science is built around the assumption that you’re too stupid and self-deceiving to just use Solomonoff induction. After all, if it was that simple, we wouldn’t need a social process of science . . . right?
So, are you going to believe in faster-than-light quantum “collapse” fairies after all? Or do you think you’re smarter than that?
*
246
When Science Can’t Help
Once upon a time, a younger Eliezer had a stupid theory. Let’s say that Eliezer18’s stupid theory was that consciousness was caused by closed timelike curves hiding in quantum gravity. This isn’t the whole story, not even close, but it will do for a start.
And there came a point where I looked back, and realized:
I had carefully followed everything I’d been told was Traditionally Rational, in the course of going astray. For example, I’d been careful to only believe in stupid theories that made novel experimental predictions, e.g., that neuronal microtubules would be found to support coherent quantum states.
Science would have been perfectly fine with my spending ten years trying to test my stupid theory, only to get a negative experimental result, so long as I then said, “Oh, well, I guess my theory was wrong.”
From Science’s perspective, that is how things are supposed to work—happy fun for everyone. You admitted your error! Good for you! Isn’t that what Science is all about?
But what if I didn’t want to waste ten years?
Well . . . Science didn’t have much to say about that. How could Science say which theory was right, in advance of the experimental test? Science doesn’t care where your theory comes from—it just says, “Go test it.”
This is the great strength of Science, and also its great weakness.
Gray Area asked:
Eliezer, why are you concerned with untestable questions?
Because questions that can be easily and immediately tested are hard for Science to get wrong.
I mean, sure, when there’s already definite unmistakable experimental evidence available, go with it. Why on Earth wouldn’t you?
But sometimes a question will have very large, very definite experimental consequences in your future—but you can’t easily test it experimentally right now—and yet there is a strong rational argument.
Macroscopic quantum superpositions are readily testable: It would just take nanotechnologic precision, very low temperatures, and a nice clear area of interstellar space. Oh, sure, you can’t do it right now, because it’s too expensive or impossible for today’s technology or something like that—but in theory, sure! Why, maybe someday they’ll run whole civilizations on macroscopically superposed quantum computers, way out in a well-swept volume of a Great Void. (Asking what quantum non-realism says about the status of any observers inside these computers helps to reveal the underspecification of quantum non-realism.)
This doesn’t seem immediately pragmatically relevant to your life, I’m guessing, but it establishes the pattern: Not everything with future consequences is cheap to test now.
Evolutionary psychology is another example of a case where rationality has to take over from science. While theories of evolutionary psychology form a connected whole, only some of those theories are readily testable experimentally. But you still need the other parts of the theory, because they form a connected web that helps you to form the hypotheses that are actually testable—and then the helper hypotheses are supported in a Bayesian sense, but not supported experimentally. Science would render a verdict of “not proven” on individual parts of a connected theoretical mesh that is experimentally productive as a whole. We’d need a new kind of verdict for that, something like “indirectly supported.”
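(In Bayesian terms, the indirect support is straightforward. Write H for a hard-to-test helper hypothesis and E for an observed result that H helped predict. Then

$$P(H \mid E) = P(H)\,\frac{P(E \mid H)}{P(E)} > P(H) \quad \text{whenever} \quad P(E \mid H) > P(E),$$

so observing E raises the probability of H whenever H made E more likely than it would otherwise have been, even though no experiment tested H directly.)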