Rationality: From AI to Zombies
So if snow is white, my belief “70%: ‘snow is white’” will score -0.51 bits: log2(0.7) = -0.51.
But what if snow is not white, as I have conceded a 30% probability is the case? If “snow is white” is false, my belief “30% probability: ‘snow is not white’” will score -1.73 bits. Note that -1.73 < -0.51, so I have done worse.
About how accurate do I think my own beliefs are? Well, my expectation over the score is 70% × -0.51 + 30% × -1.73 = -0.88 bits. If snow is white, then my beliefs will be more accurate than I expected; and if snow is not white, my beliefs will be less accurate than I expected; but in neither case will my belief be exactly as accurate as I expected on average.
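(To make the arithmetic concrete, here is a minimal Python sketch; the `log_score` helper and the variable names are my own invention, used purely for illustration.)

```python
from math import log2

def log_score(p_assigned_to_what_happened: float) -> float:
    """Bits of accuracy: the log of the probability assigned to the actual outcome."""
    return log2(p_assigned_to_what_happened)

p = 0.7  # credence assigned to "snow is white"

score_if_white     = log_score(p)      # log2(0.7) ~ -0.51 bits
score_if_not_white = log_score(1 - p)  # log2(0.3) ~ -1.73 bits

# My own expectation of my score, before going outside to check the snow:
expected_score = p * score_if_white + (1 - p) * score_if_not_white  # ~ -0.88 bits
print(score_if_white, score_if_not_white, expected_score)
```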
All this should not be confused with the statement “I assign 70% credence that ‘snow is white.’” I may well believe that proposition with probability ~1—be quite certain that this is in fact my belief. If so I’ll expect my meta-belief “~1: ‘I assign 70% credence that “snow is white” ’” to score ~0 bits of accuracy, which is as good as it gets.
Just because I am uncertain about snow, does not mean I am uncertain about my quoted probabilistic beliefs. Snow is out there, my beliefs are inside me. I may be a great deal less uncertain about how uncertain I am about snow, than I am uncertain about snow. (Though beliefs about beliefs are not always accurate.)
Contrast this probabilistic situation to the qualitative reasoning where I just believe that snow is white, and believe that I believe that snow is white, and believe “‘snow is white’ is true,” and believe “my belief ‘“snow is white” is true’ is correct,” etc. Since all the quantities involved are 1, it’s easy to mix them up.
Yet the nice distinctions of quantitative reasoning will be short-circuited if you start thinking “‘“snow is white” with 70% probability’ is true,” which is a type error. It is a true fact about you, that you believe “70% probability: ‘snow is white’”; but that does not mean the probability assignment itself can possibly be “true.” The belief scores either -0.51 bits or -1.73 bits of accuracy, depending on the actual state of reality.
The cognoscenti will recognize “‘“snow is white” with 70% probability’ is true” as the mistake of thinking that probabilities are inherent properties of things.
From the inside, our beliefs about the world look like the world, and our beliefs about our beliefs look like beliefs. When you see the world, you are experiencing a belief from the inside. When you notice yourself believing something, you are experiencing a belief about belief from the inside. So if your internal representations of belief, and belief about belief, are dissimilar, then you are less likely to mix them up and commit the Mind Projection Fallacy—I hope.
When you think in probabilities, your beliefs, and your beliefs about your beliefs, will hopefully not be represented similarly enough that you mix up belief and accuracy, or mix up accuracy and reality. When you think in probabilities about the world, your beliefs will be represented with probabilities in the range (0,1). Unlike the truth-values of propositions, which are in the set {true, false}. As for the accuracy of your probabilistic belief, you can represent that in the range (-∞, 0). Your probabilities about your beliefs will typically be extreme. And things themselves—why, they’re just red, or blue, or weighing 20 pounds, or whatever.
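(If it helps, here is a toy sketch of that type discipline in Python; the type names `Credence` and `Fact` are my own invention, not standard terminology.)

```python
from dataclasses import dataclass
from math import log2

@dataclass(frozen=True)
class Credence:
    """A belief about the world: a probability strictly between 0 and 1."""
    p: float

@dataclass(frozen=True)
class Fact:
    """A state of the territory: simply true or false, with no probability attached."""
    holds: bool

def accuracy(belief: Credence, fact: Fact) -> float:
    """Bits of accuracy: a number in (-inf, 0], neither a probability nor a truth-value."""
    return log2(belief.p if fact.holds else 1 - belief.p)

snow_belief = Credence(0.7)
snow_fact   = Fact(True)                  # the territory, as it actually is
print(accuracy(snow_belief, snow_fact))   # about -0.51 bits

# Asking whether snow_belief is "true" is a type error here:
# a Credence is not a Fact, and nothing above lets you compare them directly.
```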
Thus we will be less likely, perhaps, to mix up the map with the territory.
This type distinction may also help us remember that uncertainty is a state of mind. A coin is not inherently 50% uncertain of which way it will land. The coin is not a belief processor, and does not have partial information about itself. In qualitative reasoning you can create a belief that corresponds very straightforwardly to the coin, like “The coin will land heads.” This belief will be true or false depending on the coin, and there will be a transparent implication from the truth or falsity of the belief, to the facing side of the coin.
But even under qualitative reasoning, to say that the coin itself is “true” or “false” would be a severe type error. The coin is not a belief. It is a coin. The territory is not the map.
If a coin cannot be true or false, how much less can it assign a 50% probability to itself?
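(A toy sketch of the same point, with invented details: the 50% lives in an observer's state of information, not in the coin.)

```python
import random

random.seed(0)
coin = random.choice(["heads", "tails"])  # the territory: the coin lands one particular way

# Two observers of the very same coin, with different information:
credence_before_looking = 0.5                                # ignorance about the coin
credence_after_peeking  = 0.99 if coin == "heads" else 0.01  # near-certainty, either way

# Nothing about `coin` differs between these two lines; only the observers'
# information differs. The probabilities are facts about the observers, not the coin.
```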
*
196. Think Like Reality
Whenever I hear someone describe quantum physics as “weird”—whenever I hear someone bewailing the mysterious effects of observation on the observed, or the bizarre existence of nonlocal correlations, or the incredible impossibility of knowing position and momentum at the same time—then I think to myself: This person will never understand physics no matter how many books they read.
Reality has been around since long before you showed up. Don’t go calling it nasty names like “bizarre” or “incredible.” The universe was propagating complex amplitudes through configuration space for ten billion years before life ever emerged on Earth. Quantum physics is not “weird.” You are weird. You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is your problem, not reality’s, and you are the one who needs to change.
Human intuitions were produced by evolution and evolution is a hack. The same optimization process that built your retina backward and then routed the optic cable through your field of vision, also designed your visual system to process persistent objects bouncing around in three spatial dimensions because that’s what it took to chase down tigers. But “tigers” are leaky surface generalizations—tigers came into existence gradually over evolutionary time, and they are not all absolutely similar to each other. When you go down to the fundamental level, the level on which the laws are stable, global, and exception-free, there aren’t any tigers. In fact there aren’t any persistent objects bouncing around in three spatial dimensions. Deal with it.
Calling reality “weird” keeps you inside a viewpoint already proven erroneous. Probability theory tells us that surprise is the measure of a poor hypothesis; if a model is consistently stupid—consistently hits on events to which the model assigns tiny probability—then it’s time to discard that model. A good model makes reality look normal, not weird; a good model assigns high probability to that which is actually the case. Intuition is only a model by another name: poor intuitions are shocked by reality, good intuitions make reality feel natural. You want to reshape your intuitions so that the universe looks normal. You want to think like reality.
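(A small invented illustration of that last point: the model that keeps racking up bits of surprise on what actually happened is the one that should be discarded.)

```python
from math import log2

def surprisal_bits(p_assigned: float) -> float:
    """Surprise at an observed outcome: -log2 of the probability the model gave it."""
    return -log2(p_assigned)

# Probabilities each model assigned to a run of outcomes that actually occurred:
calm_model    = [0.8, 0.7, 0.9, 0.6]    # finds reality unsurprising
shocked_model = [0.1, 0.05, 0.2, 0.02]  # keeps calling reality "bizarre"

for name, probs in [("calm model", calm_model), ("shocked model", shocked_model)]:
    print(name, round(sum(surprisal_bits(p) for p in probs), 1), "bits of surprise")
```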
This end state cannot be forced. It is pointless to pretend that quantum physics feels natural to you when in fact it feels strange. This is merely denying your confusion, not becoming less confused. But it will also hold you back to keep thinking How bizarre! Spending emotional energy on incredulity wastes time you could be using to update. It repeatedly throws you back into the frame of the old, wrong viewpoint. It feeds your sense of righteous indignation at reality daring to contradict you.
The principle extends beyond physics. Have you ever caught yourself saying something like, “I just don’t understand how a PhD physicist can believe in astrology?” Well, if you literally don’t understand, this indicates a problem with your model of human psychology. Perhaps you are indignant—you wish to express strong moral disapproval. But if you literally don’t understand, then your indignation is stopping you from coming to terms with reality. It shouldn’t be hard to imagine how a PhD physicist ends up believing in astrology. People compartmentalize, enough said.
I now try to avoid using the English idiom “I just don’t understand how . . .” to express indignation. If I genuinely don’t understand how, then my model is being surprised by the facts, and I should discard it and find a better model.
Surprise exists in the map, not in the territory. There are no surprising facts, only models that are surprised by facts. Likewise for facts called such nasty names as “bizarre,” “incredible,” “unbelievable,” “unexpected,” “strange,” “anomalous,” or “weird.” When you find yourself tempted by such labels, it may be wise to check if the alleged fact is really factual. But if the fact checks out, then the problem isn’t the fact—it’s you.
*
197. Chaotic Inversion
I was recently having a conversation with some friends on the topic of hour-by-hour productivity and willpower maintenance—something I’ve struggled with my whole life.
I can avoid running away from a hard problem the first time I see it (perseverance on a timescale of seconds), and I can stick to the same problem for years; but to keep working on a timescale of hours is a constant battle for me. It goes without saying that I’ve already read reams and reams of advice; and the most help I got from it was realizing that a sizable fraction of other creative professionals had the same problem, and couldn’t beat it either, no matter how reasonable all the advice sounds.
“What do you do when you can’t work?” my friends asked me. (Conversation probably not accurate, this is a very loose gist.)
And I replied that I usually browse random websites, or watch a short video.
“Well,” they said, “if you know you can’t work for a while, you should watch a movie or something.”
“Unfortunately,” I replied, “I have to do something whose time comes in short units, like browsing the Web or watching short videos, because I might become able to work again at any time, and I can’t predict when—”
And then I stopped, because I’d just had a revelation.
I’d always thought of my workcycle as something chaotic, something unpredictable. I never used those words, but that was the way I treated it.
But here my friends seemed to be implying—what a strange thought—that other people could predict when they would become able to work again, and structure their time accordingly.
And it occurred to me for the first time that I might have been committing that damned old chestnut the Mind Projection Fallacy, right out there in my ordinary everyday life instead of high abstraction.
Maybe it wasn’t that my productivity was unusually chaotic; maybe I was just unusually stupid with respect to predicting it.
That’s what inverted stupidity looks like—chaos. Something hard to handle, hard to grasp, hard to guess, something you can’t do anything with. It’s not just an idiom for highly abstract things like Artificial Intelligence. It can apply in ordinary life too.
And the reason we don’t think of the alternative explanation “I’m stupid,” is not—I suspect—that we think so highly of ourselves. It’s just that we don’t think of ourselves at all. We just see a chaotic feature of the environment.
So now it’s occurred to me that my productivity problem may not be chaos, but my own stupidity.
And that may or may not help anything. It certainly doesn’t fix the problem right away. Saying “I’m ignorant” doesn’t make you knowledgeable.
But it is, at least, a different path than saying “it’s too chaotic.”
*
198. Reductionism
Almost one year ago, in April 2007, Matthew C. submitted the following suggestion for an Overcoming Bias topic:
How and why the current reigning philosophical hegemon (reductionistic materialism) is obviously correct [ . . . ], while the reigning philosophical viewpoints of all past societies and civilizations are obviously suspect—
I remember this, because I looked at the request and deemed it legitimate, but I knew I couldn’t do that topic until I’d started on the Mind Projection Fallacy sequence, which wouldn’t be for a while . . .
But now it’s time to begin addressing this question. And while I haven’t yet come to the “materialism” issue, we can now start on “reductionism.”
First, let it be said that I do indeed hold that “reductionism,” according to the meaning I will give for that word, is obviously correct; and to perdition with any past civilizations that disagreed.
This seems like a strong statement, at least the first part of it. General Relativity seems well-supported, yet who knows but that some future physicist may overturn it?
On the other hand, we are never going back to Newtonian mechanics. The ratchet of science turns, but it does not turn in reverse. There are cases in scientific history where a theory suffered a wound or two, and then bounced back; but when a theory takes as many arrows through the chest as Newtonian mechanics, it stays dead.
“To hell with what past civilizations thought” seems safe enough, when past civilizations believed in something that has been falsified to the trash heap of history.
And reductionism is not so much a positive hypothesis, as the absence of belief—in particular, disbelief in a form of the Mind Projection Fallacy.
I once met a fellow who claimed that he had experience as a Navy gunner, and he said, “When you fire artillery shells, you’ve got to compute the trajectories using Newtonian mechanics. If you compute the trajectories using relativity, you’ll get the wrong answer.”
And I, and another person who was present, said flatly, “No.” I added, “You might not be able to compute the trajectories fast enough to get the answers in time—maybe that’s what you mean? But the relativistic answer will always be more accurate than the Newtonian one.”
“No,” he said, “I mean that relativity will give you the wrong answer, because things moving at the speed of artillery shells are governed by Newtonian mechanics, not relativity.”
“If that were really true,” I replied, “you could publish it in a physics journal and collect your Nobel Prize.”
Standard physics uses the same fundamental theory to describe the flight of a Boeing 747 airplane, and collisions in the Relativistic Heavy Ion Collider. Nuclei and airplanes alike, according to our understanding, are obeying Special Relativity, quantum mechanics, and chromodynamics.
But we use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the RHIC. A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark.
So is the 747 made of something other than quarks? No, you’re just modeling it with representational elements that do not have a one-to-one correspondence with the quarks of the 747. The map is not the territory.
Why not model the 747 with a chromodynamic representation? Because then it would take a gazillion years to get any answers out of the model. Also we could not store the model on all the memory on all the computers in the world, as of 2008.
As the saying goes, “The map is not the territory, but you can’t fold up the territory and put it in your glove compartment.” Sometimes you need a smaller map to fit in a more cramped glove compartment—but this does not change the territory. The scale of a map is not a fact about the territory, it’s a fact about the map.
If it were possible to build and run a chromodynamic model of the 747, it would yield accurate predictions. Better predictions than the aerodynamic model, in fact.
To build a fully accurate model of the 747, it is not necessary, in principle, for the model to contain explicit descriptions of things like airflow and lift. There does not have to be a single token, a single bit of RAM, that corresponds to the position of the wings. It is possible, in principle, to build an accurate model of the 747 that makes no mention of anything except elementary particle fields and fundamental forces.
“What?” cries the antireductionist. “Are you telling me the 747 doesn’t really have wings? I can see the wings right there!”
The notion here is a subtle one. It’s not just the notion that an object can have different descriptions at different levels.
It’s the notion that “having different descriptions at different levels” is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory.
It’s not that the airplane itself, the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought. Rather we, for our convenience, use different simplified models at different levels.
If you looked at the ultimate chromodynamic model, the one that contained only elementary particle fields and fundamental forces, that model would contain all the facts about airflow and lift and wing positions—but these facts would be implicit, rather than explicit.
You, looking at the model, and thinking about the model, would be able to figure out where the wings were. Having figured it out, there would be an explicit representation in your mind of the wing position—an explicit computational object, there in your neural RAM. In your mind.
You might, indeed, deduce all sorts of explicit descriptions of the airplane, at various levels, and even explicit rules for how your models at different levels interacted with each other to produce combined predictions—
And the way that algorithm feels from inside is that the airplane would seem to be made up of many levels at once, interacting with each other.
The way a belief feels from inside is that you seem to be looking straight at reality. When it actually seems that you’re looking at a belief, as such, you are really experiencing a belief about belief.
So when your mind simultaneously believes explicit descriptions of many different levels, and believes explicit rules for transiting between levels, as part of an efficient combined model, it feels like you are seeing a system that is made of different level descriptions and their rules for interaction.
But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can’t handle the truth.
But the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces. You can’t handle the raw truth, but reality can handle it without the slightest simplification. (I wish I knew where Reality got its computing power.)