So the group selectionists, imagining this beautiful picture of predators restraining their breeding, instinctively rationalized why natural selection ought to do things their way, even according to natural selection’s own purposes. The foxes will be fitter if they restrain their breeding! No, really! They’ll even outbreed other foxes who don’t restrain their breeding! Honestly!
The problem with trying to argue natural selection into doing things your way is that evolution does not contain that which could be moved by your arguments. Evolution does not work like you do—not even to the extent of having any element that could listen to or care about your painstaking explanation of why evolution ought to do things your way. Human arguments are not even commensurate with the internal structure of natural selection as an optimization process—human arguments aren’t used in promoting alleles, the way human arguments play a causal role in human politics.
So instead of successfully persuading natural selection to do things their way, the group selectionists were simply embarrassed when reality came out differently.
There’s a fairly heavy subtext here about Unfriendly AI.
But the point generalizes: this is the problem with optimistic reasoning in general. What is optimism? It is ranking the possibilities by your own preference ordering, and selecting an outcome high in that preference ordering, and somehow that outcome ends up as your prediction. What kind of elaborate rationalizations were generated along the way is probably not so relevant as one might fondly believe; look at the cognitive history and it’s optimism in, optimism out. But Nature, or whatever other process is under discussion, is not actually, causally choosing between outcomes by ranking them in your preference ordering and picking a high one. So the brain fails to synchronize with the environment, and the prediction fails to match reality.
152
Lost Purposes
It was in either kindergarten or first grade that I was first asked to pray, given a transliteration of a Hebrew prayer. I asked what the words meant. I was told that so long as I prayed in Hebrew, I didn’t need to know what the words meant, it would work anyway.
That was the beginning of my break with Judaism.
As you read this, some young man or woman is sitting at a desk in a university, earnestly studying material they have no intention of ever using, and no interest in knowing for its own sake. They want a high-paying job, and the high-paying job requires a piece of paper, and the piece of paper requires a previous master’s degree, and the master’s degree requires a bachelor’s degree, and the university that grants the bachelor’s degree requires you to take a class in twelfth-century knitting patterns to graduate. So they diligently study, intending to forget it all the moment the final exam is administered, but still seriously working away, because they want that piece of paper.
Maybe you realized it was all madness, but I bet you did it anyway. You didn’t have a choice, right? A recent study here in the Bay Area showed that 80% of teachers in K-5 reported spending less than one hour per week on science, and 16% said they spend no time on science. Why? I’m given to understand the proximate cause is the No Child Left Behind Act and similar legislation. Virtually all classroom time is now spent on preparing for tests mandated at the state or federal level. I seem to recall (though I can’t find the source) that just taking mandatory tests was 40% of classroom time in one school.
The old Soviet bureaucracy was famous for being more interested in appearances than reality. One shoe factory overfulfilled its quota by producing lots of tiny shoes. Another shoe factory reported cut but unassembled leather as a “shoe.” The superior bureaucrats weren’t interested in looking too hard, because they also wanted to report quota overfulfillments. All this was a great help to the comrades freezing their feet off.
It is now being suggested in several sources that an actual majority of published findings in medicine, though “statistically significant with p < 0.05,” are untrue. But so long as p < 0.05 remains the threshold for publication, why should anyone hold themselves to higher standards, when that requires bigger research grants for larger experimental groups, and decreases the likelihood of getting a publication? Everyone knows that the whole point of science is to publish lots of papers, just as the whole point of a university is to print certain pieces of parchment, and the whole point of a school is to pass the mandatory tests that guarantee the annual budget. You don’t get to set the rules of the game, and if you try to play by different rules, you’ll just lose.
(Though for some reason, physics journals require a threshold of p < 0.0001. It’s as if they conceive of some other purpose to their existence than publishing physics papers.)
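A rough illustration of how a field can end up there while every paper clears the bar, using made-up numbers (say a 10% base rate of true hypotheses and 40% statistical power; neither figure comes from this essay): out of 1,000 hypotheses tested,

$$
\text{true positives} = 1000 \times 0.10 \times 0.40 = 40,
\qquad
\text{false positives} = 1000 \times 0.90 \times 0.05 = 45,
$$

so fewer than half of the findings that reach “statistical significance” are real effects—and that is before any selective reporting is counted in.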
There’s chocolate at the supermarket, and you can get to the supermarket by driving, and driving requires that you be in the car, which means opening your car door, which needs keys. If you find there’s no chocolate at the supermarket, you won’t stand around opening and slamming your car door because the car door still needs opening. I rarely notice people losing track of plans they devised themselves.
It’s another matter when incentives must flow through large organizations—or worse, many different organizations and interest groups, some of them governmental. Then you see behaviors that would mark literal insanity, if they were born from a single mind. Someone gets paid every time they open a car door, because that’s what’s measurable; and this person doesn’t care whether the driver ever gets paid for arriving at the supermarket, let alone whether the buyer purchases the chocolate, or whether the eater is happy or starving.
From a Bayesian perspective, subgoals are epiphenomena of conditional probability functions. There is no expected utility without utility. How silly would it be to think that instrumental value could take on a mathematical life of its own, leaving terminal value in the dust? It’s not sane by decision-theoretical criteria of sanity.
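To make that concrete in standard decision-theoretic notation (a generic textbook formula, not anything specific to this essay), the expected utility of an action is

$$
\mathbb{E}[U \mid a] \;=\; \sum_{o} P(o \mid a)\, U(o),
$$

where every bit of instrumental value on the left is computed from the terminal utility function U over outcomes on the right. Delete U and the left-hand side is not merely degraded; it is undefined.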
But consider the No Child Left Behind Act. The politicians want to look like they’re doing something about educational difficulties; the politicians have to look busy to voters this year, not fifteen years later when the kids are looking for jobs. The politicians are not the consumers of education. The bureaucrats have to show progress, which means that they’re only interested in progress that can be measured this year. They aren’t the ones who’ll end up ignorant of science. The publishers who commission textbooks, and the committees that purchase textbooks, don’t sit in the classrooms bored out of their skulls.
The actual consumers of knowledge are the children—who can’t pay, can’t vote, can’t sit on the committees. Their parents care for them, but don’t sit in the classes themselves; they can only hold politicians responsible according to surface images of “tough on education.” Politicians are too busy being re-elected to study all the data themselves; they have to rely on surface images of bureaucrats being busy and commissioning studies—it may not work to help any children, but it works to let politicians appear caring. Bureaucrats don’t expect to use textbooks themselves, so they don’t care if the textbooks are hideous to read, so long as the process by which they are purchased looks good on the surface. The textbook publishers have no motive to produce bad textbooks, but they know that the textbook purchasing committee will be comparing textbooks based on how many different subjects they cover, and that the fourth-grade purchasing committee isn’t coordinated with the third-grade purchasing committee, so they cram as many subjects into one textbook as possible. Teachers won’t get through a fourth of the textbook before the end of the year, and then the next year’s teacher will start over. Teachers might complain, but they aren’t the decision-makers, and ultimately, it’s not their future on the line, which puts sharp bounds on how much effort they’ll spend on unpaid altruism . . .
It’s amazing, when you look at it that way—consider all the lost information and lost incentives—that anything at all remains of the original purpose, gaining knowledge. Though many educational systems seem to be currently in the process of collapsing into a state not much better than nothing.
Want to see the problem really solved? Make the politicians go to school.
A single human mind can track a probabilistic expectation of utility as it flows through the conditional chances of a dozen intermediate events—including nonlocal dependencies, places where the expected utility of opening the car door depends on whether there’s chocolate in the supermarket. But organizations can only reward today what is measurable today, what can be written into legal contract today, and this means measuring intermediate events rather than their distant consequences. These intermediate measures, in turn, are leaky generalizations—often very leaky. Bureaucrats are untrustworthy genies, for they do not share the values of the wisher.
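Here is a minimal sketch of that nonlocal dependency, with made-up numbers (the probabilities and the utility of chocolate are assumptions for the example, not anything from the essay): the value of opening the car door is not stored in the door; it is recomputed from the whole chain, and it collapses the moment you learn the supermarket is out of chocolate.

```python
# Minimal sketch: expected utility flows backward from the terminal goal.
# All numbers are illustrative assumptions.

U_CHOCOLATE = 10.0  # terminal utility of actually getting the chocolate


def value_of_opening_car_door(p_chocolate_in_stock, p_drive_succeeds=0.99):
    """Expected utility of the subgoal 'open the car door', derived from the
    chain door -> drive -> supermarket -> chocolate. There is no separate
    reward for door-opening itself."""
    return p_drive_succeeds * p_chocolate_in_stock * U_CHOCOLATE


print(value_of_opening_car_door(0.9))  # ~8.9: worth opening the door
print(value_of_opening_car_door(0.0))  # 0.0: no point slamming the door all day
```

An organization that paid by the door-opening would keep paying in the second case; the single mind simply stops.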
Miyamoto Musashi said:1
The primary thing when you take a sword in your hands is your intention to cut the enemy, whatever the means. Whenever you parry, hit, spring, strike or touch the enemy’s cutting sword, you must cut the enemy in the same movement. It is essential to attain this. If you think only of hitting, springing, striking or touching the enemy, you will not be able actually to cut him. More than anything, you must be thinking of carrying your movement through to cutting him. You must thoroughly research this.
(I wish I lived in an era where I could just tell my readers they have to thoroughly research something, without giving insult.)
Why would any individual lose track of their purposes in a swordfight? If someone else had taught them to fight, if they had not generated the entire art from within themselves, they might not understand the reason for parrying at one moment, or springing at another moment; they might not realize when the rules had exceptions, fail to see the times when the usual method won’t cut through.
The essential thing in the art of epistemic rationality is to understand how every rule is cutting through to the truth in the same movement. The corresponding essential of pragmatic rationality—decision theory, versus probability theory—is to always see how every expected utility cuts through to utility. You must thoroughly research this.
C. J. Cherryh said:2
Your sword has no blade. It has only your intention. When that goes astray you have no weapon.
I have seen many people go astray when they wish to the genie of an imagined AI, dreaming up wish after wish that seems good to them, sometimes with many patches and sometimes without even that pretense of caution. And they don’t jump to the meta-level. They don’t instinctively look-to-purpose, the instinct that started me down the track to atheism at the age of five. They do not ask, as I reflexively ask, “Why do I think this wish is a good idea? Will the genie judge likewise?” They don’t see the source of their judgment, hovering behind the judgment as its generator. They lose track of the ball; they know the ball bounced, but they don’t instinctively look back to see where it bounced from—the criterion that generated their judgments.
Likewise with people not automatically noticing when supposedly selfish people give altruistic arguments in favor of selfishness, or when supposedly altruistic people give selfish arguments in favor of altruism.
People can handle goal-tracking for driving to the supermarket just fine, when it’s all inside their own heads, and no genies or bureaucracies or philosophies are involved. The trouble is that real civilization is immensely more complicated than this. Dozens of organizations, and dozens of years, intervene between the child suffering in the classroom, and the new-minted college graduate not being very good at their job. (But will the interviewer or manager notice, if the college graduate is good at looking busy?) With every new link that intervenes between the action and its consequence, intention has one more chance to go astray. With every intervening link, information is lost, incentive is lost. And this bothers most people a lot less than it bothers me, or why were all my classmates willing to say prayers without knowing what they meant? They didn’t feel the same instinct to look-to-the-generator.
Can people learn to keep their eye on the ball? To keep their intention from going astray? To never spring or strike or touch, without knowing the higher goal they will complete in the same movement? People do often want to do their jobs, all else being equal. Can there be such a thing as a sane corporation? A sane civilization, even? That’s only a distant dream, but it’s what I’ve been getting at with all of these essays on the flow of intentions (a.k.a. expected utility, a.k.a. instrumental value) without losing purpose (a.k.a. utility, a.k.a. terminal value). Can people learn to feel the flow of parent goals and child goals? To know consciously, as well as implicitly, the distinction between expected utility and utility?
Do you care about threats to your civilization? The worst metathreat to complex civilization is its own complexity, for that complication leads to the loss of many purposes.
I look back, and I see that more than anything, my life has been driven by an exceptionally strong abhorrence to lost purposes. I hope it can be transformed to a learnable skill.
*
1. Miyamoto Musashi, Book of Five Rings (New Line Publishing, 2003).
2. Carolyn J. Cherryh, The Paladin (Baen, 2002).
Part N
A Human’s Guide to Words
153
The Parable of the Dagger
(Adapted from Raymond Smullyan.1)
Once upon a time, there was a court jester who dabbled in logic.
The jester presented the king with two boxes. Upon the first box was inscribed:
Either this box contains an angry frog, or the box with a false inscription contains an angry frog, but not both.
On the second box was inscribed:
Either this box contains gold and the box with a false inscription contains an angry frog, or this box contains an angry frog and the box with a true inscription contains gold.
And the jester said to the king: “One box contains an angry frog, the other box gold; and one, and only one, of the inscriptions is true.”
The king opened the wrong box, and was savaged by an angry frog.
“You see,” the jester said, “let us hypothesize that the first inscription is the true one. Then suppose the first box contains gold. Then the other box would have an angry frog, while the box with a true inscription would contain gold, which would make the second statement true as well. Now hypothesize that the first inscription is false, and that the first box contains gold. Then the second inscription would be—”
The king ordered the jester thrown in the dungeons.
A day later, the jester was brought before the king in chains and shown two boxes.
“One box contains a key,” said the king, “to unlock your chains; and if you find the key you are free. But the other box contains a dagger for your heart if you fail.”
And the first box was inscribed:
Either both inscriptions are true, or both inscriptions are false.
And the second box was inscribed:
This box contains the key.
The jester reasoned thusly: “Suppose the first inscription is true. Then the second inscription must also be true. Now suppose the first inscription is false. Then again the second inscription must be true. So the second box must contain the key, if the first inscription is true, and also if the first inscription is false. Therefore, the second box must logically contain the key.”
The jester opened the second box, and found a dagger.
“How?!” cried the jester in horror, as he was dragged away. “It’s logically impossible!”
“It is entirely possible,” replied the king. “I merely wrote those inscriptions on two boxes, and then I put the dagger in the second one.”
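For the curious, here is a small enumeration of the puzzle (a sketch in Python, not part of the parable). It shows that the jester’s deduction was internally valid: the only assignments in which each inscription’s truth value matches what it claims do put the key in the second box.

```python
# Enumerate the dagger puzzle.
# Inscription 1: "Either both inscriptions are true, or both inscriptions are false."
# Inscription 2: "This box contains the key."
from itertools import product

for key_in_box_2, insc1_true, insc2_true in product([True, False], repeat=3):
    claim1 = (insc1_true and insc2_true) or (not insc1_true and not insc2_true)
    claim2 = key_in_box_2
    # Consistent worlds: each inscription's assigned truth value matches its claim.
    if insc1_true == claim1 and insc2_true == claim2:
        print(f"key in box 2: {key_in_box_2}, "
              f"inscription 1: {insc1_true}, inscription 2: {insc2_true}")

# Only worlds with the key in box 2 survive—yet the dagger was there anyway.
# Nothing obliged the king to make the inscriptions consistently evaluable at all.
```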
*
1. Raymond M. Smullyan, What Is the Name of This Book?: The Riddle of Dracula and Other Logical Puzzles (Penguin Books, 1990).
154
The Parable of Hemlock
All men are mortal. Socrates is a man. Therefore Socrates is mortal.
—Standard Medieval syllogism
Socrates raised the glass of hemlock to his lips . . .
“Do you suppose,” asked one of the onlookers, “that even hemlock will not be enough to kill so wise and good a man?”
“No,” replied another bystander, a student of philosophy; “all men are mortal, and Socrates is a man; and if a mortal drinks hemlock, surely he dies.”
“Well,” said the onlooker, “what if it happens that Socrates isn’t mortal?”
“Nonsense,” replied the student, a little sharply; “all men are mortal by definition; it is part of what we mean by the word ‘man.’ All men are mortal, Socrates is a man, therefore Socrates is mortal. It is not merely a guess, but a logical certainty.”
“I suppose that’s right . . .” said the onlooker. “Oh, look, Socrates already drank the hemlock while we were talking.”
“Yes, he should be keeling over any minute now,” said the student.
And they waited, and they waited, and they waited . . .
“Socrates appears not to be mortal,” said the onlooker.
“Then Socrates must not be a man,” replied the student. “All men are mortal, Socrates is not mortal, therefore Socrates is not a man. And that is not merely a guess, but a logical certainty.”
The fundamental problem with arguing that things are true “by definition” is that you can’t make reality go a different way by choosing a different definition.
You could reason, perhaps, as follows: “All things I have observed which wear clothing, speak language, and use tools, have also shared certain other properties as well, such as breathing air and pumping red blood. The last thirty ‘humans’ belonging to this cluster whom I observed to drink hemlock soon fell over and stopped moving. Socrates wears a toga, speaks fluent ancient Greek, and drank hemlock from a cup. So I predict that Socrates will keel over in the next five minutes.”