Rationality: From AI to Zombies
So I swallowed my trained-in revulsion of Luddism and theodicy, and at least tried to contemplate the argument:
A world in which nothing ever goes wrong, or no one ever experiences any pain or sorrow, is a world containing no stories worth reading about.
A world that you wouldn’t want to read about is a world where you wouldn’t want to live.
Into each eudaimonic life a little pain must fall. QED.
In one sense, it’s clear that we do not want to live the sort of lives that are depicted in most stories that human authors have written so far. Think of the truly great stories, the ones that have become legendary for being the very best of the best of their genre: the Iliad, Romeo and Juliet, The Godfather, Watchmen, Planescape: Torment, the second season of Buffy the Vampire Slayer, or that ending in Tsukihime. Is there a single story on the list that isn’t tragic?
Ordinarily, we prefer pleasure to pain, joy to sadness, and life to death. Yet it seems we prefer to empathize with hurting, sad, dead characters. Or stories about happier people aren’t serious, aren’t artistically great enough to be worthy of praise—but then why selectively praise stories containing unhappy people? Is there some hidden benefit to us in it? It’s a puzzle either way you look at it.
When I was a child I couldn’t write fiction because I wrote things to go well for my characters—just like I wanted things to go well in real life. Which I was cured of by Orson Scott Card: Oh, I said to myself, that’s what I’ve been doing wrong, my characters aren’t hurting. Even then, I didn’t realize that the microstructure of a plot works the same way—until Jack Bickham said that every scene must end in disaster. Here I’d been trying to set up problems and resolve them, instead of making them worse . . .
You simply don’t optimize a story the way you optimize a real life. The best story and the best life will be produced by different criteria.
In the real world, people can go on living for quite a while without any major disasters, and still seem to do pretty okay. When was the last time you were shot at by assassins? Quite a while, right? Does your life seem emptier for it?
But on the other hand . . .
For some odd reason, when authors get too old or too successful, they revert to my childhood. Their stories start going right. They stop doing horrible things to their characters, with the result that they start doing horrible things to their readers. It seems to be a regular part of Elder Author Syndrome. Mercedes Lackey, Laurell K. Hamilton, Robert Heinlein, even Orson Scott bloody Card—they all went that way. They forgot how to hurt their characters. I don’t know why.
And when you read a story by an Elder Author or a pure novice—a story where things just relentlessly go right one after another—where the main character defeats the supervillain with a snap of the fingers, or even worse, before the final battle, the supervillain gives up and apologizes and then they’re friends again—
It’s like a fingernail scraping on a blackboard at the base of your spine. If you’ve never actually read a story like that (or worse, written one) then count yourself lucky.
That fingernail-scraping quality—would it transfer over from the story to real life, if you tried living real life without a single drop of rain?
One answer might be that what a story really needs is not “disaster,” or “pain,” or even “conflict,” but simply striving. That the problem with Mary Sue stories is that there’s not enough striving in them, but they wouldn’t actually need pain. This might, perhaps, be tested.
An alternative answer might be that this is the transhumanist version of Fun Theory we’re talking about. So we can reply, “Modify brains to eliminate that fingernail-scraping feeling,” unless there’s some justification for keeping it. If the fingernail-scraping feeling is a pointless random bug getting in the way of Utopia, delete it.
Maybe we should. Maybe all the Great Stories are tragedies because . . . well . . .
I once read that in the BDSM community, “intense sensation” is a euphemism for pain. Upon reading this, it occurred to me that, the way humans are constructed now, it is just easier to produce pain than pleasure. Though I speak here somewhat outside my experience, I expect that it takes a highly talented and experienced sexual artist working for hours to produce a good feeling as intense as the pain of one strong kick in the testicles—which is doable in seconds by a novice.
Investigating the life of the priest and proto-rationalist Friedrich Spee von Langenfeld, who heard the confessions of accused witches, I looked up some of the instruments that had been used to produce confessions. There is no ordinary way to make a human being feel as good as those instruments would make you hurt. I’m not sure even drugs would do it, though my experience of drugs is as nonexistent as my experience of torture.
There’s something imbalanced about that.
Yes, human beings are too optimistic in their planning. If losses weren’t more aversive than gains, we’d go broke, the way we’re constructed now. The experimental rule is that losing a desideratum—$50, a coffee mug, whatever—hurts between 2 and 2.5 times as much as the equivalent gain.
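The 2-to-2.5x asymmetry cited above can be sketched numerically. The snippet below is an illustrative toy, not anything from the text: it assumes a piecewise-linear, prospect-theory-style value function with the loss-aversion coefficient set to 2.25, the midpoint of the cited range. The name `subjective_value` is my own.

```python
def subjective_value(outcome, loss_aversion=2.25):
    """Toy value function: gains count at face value, losses are
    amplified by the loss-aversion coefficient (here 2.25, the
    midpoint of the 2-to-2.5 range cited in the text)."""
    if outcome >= 0:
        return outcome
    return loss_aversion * outcome

print(subjective_value(50))    # gaining $50 feels like +50
print(subjective_value(-50))   # losing $50 feels like -112.5
```

On this toy scale, a $50 loss outweighs a $50 gain by the 2.25 factor, which is why a symmetric gamble at even odds feels like a bad deal.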
But this is a deeper imbalance than that. The effort-in/intensity-out difference between sex and torture is not a mere factor of 2.
If someone goes in search of sensation—in this world, the way human beings are constructed now—it’s not surprising that they should arrive at pains to be mixed into their pleasures as a source of intensity in the combined experience.
If only people were constructed differently, so that you could produce pleasure as intense and in as many different flavors as pain! If only you could, with the same ingenuity and effort as a torturer of the Inquisition, make someone feel as good as the Inquisition’s victims felt bad—
But then, what is the analogous pleasure that feels that good? A victim of skillful torture will do anything to stop the pain and anything to prevent it from being repeated. Is the equivalent pleasure one that overrides everything with the demand to continue and repeat it? If people are stronger-willed to bear the pleasure, is it really the same pleasure?
There is another rule of writing which states that stories have to shout. A human brain is a long way off those printed letters. Every event and feeling needs to take place at ten times natural volume in order to have any impact at all. You must not try to make your characters behave or feel realistically—especially, you must not faithfully reproduce your own past experiences—because without exaggeration, they’ll be too quiet to rise from the page.
Maybe all the Great Stories are tragedies because happiness can’t shout loud enough—to a human reader.
Maybe that’s what needs fixing.
And if it were fixed . . . would there be any use left for pain or sorrow? For even the memory of sadness, if all things were already as good as they could be, and every remediable ill already remedied?
Can you just delete pain outright? Or does removing the old floor of the utility function just create a new floor? Will any pleasure less than 10,000,000 hedons be the new unbearable pain?
Humans, built the way we are now, do seem to have hedonic scaling tendencies. Someone who can remember starving will appreciate a loaf of bread more than someone who’s never known anything but cake. This was George Orwell’s hypothesis for why Utopia is impossible in literature and reality:1
It would seem that human beings are not able to describe, nor perhaps to imagine, happiness except in terms of contrast . . . The inability of mankind to imagine happiness except in the form of relief, either from effort or pain, presents Socialists with a serious problem. Dickens can describe a poverty-stricken family tucking into a roast goose, and can make them appear happy; on the other hand, the inhabitants of perfect universes seem to have no spontaneous gaiety and are usually somewhat repulsive into the bargain.
For an expected utility maximizer, rescaling the utility function to add a trillion to all outcomes is meaningless—it’s literally the same utility function, as a mathematical object. A utility function describes the relative intervals between outcomes; that’s what it is, mathematically speaking.
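That invariance can be checked mechanically. The sketch below is my own illustration (the function names and the example lotteries are assumptions, not from the text): an expected utility maximizer picks the same option whether or not a trillion is added to every outcome’s utility, because a constant shift leaves all the relative intervals, and hence the argmax, untouched.

```python
def best_option(options, utility):
    """Pick the option with the highest expected utility.
    options maps an option name to a list of (probability, outcome) pairs."""
    def expected_utility(lottery):
        return sum(p * utility(x) for p, x in lottery)
    return max(options, key=lambda name: expected_utility(options[name]))

# A toy choice between a sure thing and a gamble.
options = {
    "safe":  [(1.0, 10)],
    "risky": [(0.5, 0), (0.5, 25)],
}

u = lambda x: x                                # some utility function
u_shifted = lambda x: x + 1_000_000_000_000    # add a trillion to every outcome

# Same choice either way: the shift cancels out of the comparison.
assert best_option(options, u) == best_option(options, u_shifted)
```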
But the human brain has distinct neural circuits for positive feedback and negative feedback, and different varieties of positive and negative feedback. There are people today who “suffer” from congenital analgesia—a total absence of pain. I never heard that insufficient pleasure becomes intolerable to them.
People with congenital analgesia do have to inspect themselves carefully and frequently to see if they’ve cut themselves or burned a finger. Pain serves a purpose in the human mind design . . .
But that does not show there’s no alternative which could serve the same purpose. Could you delete pain and replace it with an urge not to do certain things that lacked the intolerable subjective quality of pain? I do not know all the Law that governs here, but I’d have to guess that yes, you could; you could replace that side of yourself with something more akin to an expected utility maximizer.
Could you delete the human tendency to scale pleasures—delete the accommodation, so that each new roast goose is as delightful as the last? I would guess that you could. This verges perilously close to deleting Boredom, which is right up there with Sympathy as an absolute indispensable . . . but to say that an old solution remains as pleasurable is not to say that you will lose the urge to seek new and better solutions.
Can you make every roast goose as pleasurable as it would be in contrast to starvation, without ever having starved?
Can you prevent the pain of a dust speck irritating your eye from being the new torture, if you’ve literally never experienced anything worse than a dust speck irritating your eye?
Such questions begin to exceed my grasp of the Law, but I would guess that the answer is: yes, it can be done. It is my experience in such matters that once you do learn the Law, you can usually see how to do weird-seeming things.
So far as I know or can guess, David Pearce (The Hedonistic Imperative) is very probably right about the feasibility part, when he says:2
Nanotechnology and genetic engineering will abolish suffering in all sentient life. The abolitionist project is hugely ambitious but technically feasible. It is also instrumentally rational and morally urgent. The metabolic pathways of pain and malaise evolved because they served the fitness of our genes in the ancestral environment. They will be replaced by a different sort of neural architecture—a motivational system based on heritable gradients of bliss. States of sublime well-being are destined to become the genetically pre-programmed norm of mental health. It is predicted that the world’s last unpleasant experience will be a precisely dateable event.
Is that . . . what we want?
To just wipe away the last tear, and be done?
Is there any good reason not to, except status quo bias and a handful of worn rationalizations?
What would be the alternative? Or alternatives?
To leave things as they are? Of course not. No God designed this world; we have no reason to think it exactly optimal on any dimension. If this world does not contain too much pain, then it must not contain enough, and the latter seems unlikely.
But perhaps . . .
You could cut out just the intolerable parts of pain?
Get rid of the Inquisition. Keep the sort of pain that tells you not to stick your finger in the fire, or the pain that tells you that you shouldn’t have put your friend’s finger in the fire, or even the pain of breaking up with a lover.
Try to get rid of the sort of pain that grinds down and destroys a mind. Or configure minds to be harder to damage.
You could have a world where there were broken legs, or even broken hearts, but no broken people. No child sexual abuse that turns out more abusers. No people ground down by weariness and drudging minor inconvenience to the point where they contemplate suicide. No random meaningless endless sorrows like starvation or AIDS.
And if even a broken leg still seems too scary—
Would we be less frightened of pain, if we were stronger, if our daily lives did not already exhaust so much of our reserves?
So that would be one alternative to Pearce’s world—if there are yet other alternatives, I haven’t thought them through in any detail.
The path of courage, you might call it—the idea being that if you eliminate the destroying kind of pain and strengthen the people, then what’s left shouldn’t be that scary.
A world where there is sorrow, but not massive systematic pointless sorrow, like we see on the evening news. A world where pain, if it is not eliminated, at least does not overbalance pleasure. You could write stories about that world, and they could read our stories.
I do tend to be rather conservative around the notion of deleting large parts of human nature. I’m not sure how many major chunks you can delete until that balanced, conflicting, dynamic structure collapses into something simpler, like an expected pleasure maximizer.
And so I do admit that it is the path of courage that appeals to me.
Then again, I haven’t lived it both ways.
Maybe I’m just afraid of a world so different as Analgesia—wouldn’t that be an ironic reason to walk “the path of courage”?
Maybe the path of courage just seems like the smaller change—maybe I just have trouble empathizing over a larger gap.
But “change” is a moving target.
If a human child grew up in a less painful world—if they had never lived in a world of AIDS or cancer or slavery, and so did not know these things as evils that had been triumphantly eliminated—and so did not feel that they were “already done” or that the world was “already changed enough” . . .
Would they take the next step, and try to eliminate the unbearable pain of broken hearts, when someone’s lover stops loving them?
And then what? Is there a point where Romeo and Juliet just seems less and less relevant, more and more a relic of some distant forgotten world? Does there come some point in the transhuman journey where the whole business of the negative reinforcement circuitry can’t possibly seem like anything except a pointless hangover to wake up from?
And if so, is there any point in delaying that last step? Or should we just throw away our fears and . . . throw away our fears?
I don’t know.
*
1. George Orwell, “Why Socialists Don’t Believe in Fun,” Tribune (December 1943).
2. David Pearce, The Hedonistic Imperative, http://www.hedweb.com/, 1995.
Value is Fragile
If I had to pick a single statement that relies on more Overcoming Bias content I’ve written than any other, that statement would be:
Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals will contain almost nothing of worth.
“Well,” says the one, “maybe according to your provincial human values, you wouldn’t like it. But I can easily imagine a galactic civilization full of agents who are nothing like you, yet find great value and interest in their own goals. And that’s fine by me. I’m not so bigoted as you are. Let the Future go its own way, without trying to bind it forever to the laughably primitive prejudices of a pack of four-limbed Squishy Things—”
My friend, I have no problem with the thought of a galactic civilization vastly unlike our own . . . full of strange beings who look nothing like me even in their own imaginations . . . pursuing pleasures and experiences I can’t begin to empathize with . . . trading in a marketplace of unimaginable goods . . . allying to pursue incomprehensible objectives . . . people whose life-stories I could never understand.
That’s what the Future looks like if things go right.
If the chain of inheritance from human (meta)morals is broken, the Future does not look like this. It does not end up magically, delightfully incomprehensible.
With very high probability, it ends up looking dull. Pointless. Something whose loss you wouldn’t mourn.
Seeing this as obvious is what requires that immense amount of background explanation.
And I’m not going to iterate through all the points and winding pathways of argument here, because that would take us back through 75% of my Overcoming Bias posts. Except to remark on how many different things must be known to constrain the final answer.
Consider the incredibly important human value of “boredom”—our desire not to do “the same thing” over and over and over again. You can imagine a mind that contained almost the whole specification of human value, almost all the morals and metamorals, but left out just this one thing—
—and so it spent until the end of time, and until the farthest reaches of its light cone, replaying a single highly optimized experience, over and over and over again.
Or imagine a mind that contained almost the whole specification of which sort of feelings humans most enjoy—but not the idea that those feelings had important external referents. So that the mind just went around feeling like it had made an important discovery, feeling it had found the perfect lover, feeling it had helped a friend, but not actually doing any of those things—having become its own experience machine. And if the mind pursued those feelings and their referents, it would be a good future and true; but because this one dimension of value was left out, the future became something dull. Boring and repetitive, because although this mind felt that it was encountering experiences of incredible novelty, this feeling was in no wise true.
Or the converse problem—an agent that contains all the aspects of human value, except the valuation of subjective experience. So that the result is a nonsentient optimizer that goes around making genuine discoveries, but the discoveries are not savored and enjoyed, because there is no one there to do so. This, I admit, I don’t quite know to be possible. Consciousness does still confuse me to some extent. But a universe with no one to bear witness to it might as well not be.