-----------------------------------
Analog SFF, July-August 2010
by Dell Magazine Authors
-----------------------------------
Science Fiction
* * *
Dell Magazines
www.analogsf.com
Copyright ©2010 Dell Magazines
NOTICE: This work is copyrighted. It is licensed only for use by the original purchaser. Making copies of this work or distributing it to any unauthorized person by any means, including without limit email, floppy disk, file transfer, paper print out, or any other method constitutes a violation of International copyright law and subjects the violator to severe fines or imprisonment.
* * *
Cover art by Bob Eggleton
Cover design by Victoria Green
* * *
CONTENTS
Reader's Department: EDITORIAL: ANALOG, DIGITAL, AND US by Stanley Schmidt
Poetry: RONDEL FOR APOLLO 11: HERE MEN FROM THE PLANET EARTH by Geoffrey A. Landis
Reader's Department: THE ANALYTICAL LABORATORY
Novella: DOCTOR ALIEN'S FIVE EMPTY BOXES by Rajnar Vajra
Science Fact: ARTIFICIAL VOLCANOES: CAN WE COOL THE EARTH BY IMITATING MT. PINATUBO? by Richard A. Lovett
Short Story: THE LONG WAY AROUND by Carl Frederick
Short Story: QUESTIONING THE TREE by Brad Aiken
Novelette: FLY ME TO THE MOON by Marianne J. Dyson
Reader's Department: THE ALTERNATE VIEW: BUBBLES OF BROKEN SYMMETRY by John G. Cramer
Novella: BUG TRAP by Stephen L. Burns
Short Story: THE SINGLE LARRY TI, OR FEAR OF BLACK HOLES AND KEN by Brenda Cooper
Special Feature: THE SERIOUS BUSINESS OF WRITING HUMOR by Richard A. Lovett
Novelette: THE ANDROID WHO BECAME A HUMAN WHO BECAME AN ANDROID by Scott William Carter
Novella: PROJECT HADES by Stephen Baxter
Reader's Department: THE REFERENCE LIBRARY by Don Sakers
Reader's Department: BRASS TACKS
Reader's Department: IN TIMES TO COME
Reader's Department: UPCOMING EVENTS by Anthony Lewis
* * * *
Vol. CXXX No. 7 & 8 July-August 2010
Stanley Schmidt, Editor
Trevor Quachri, Managing Editor
* * *
Peter Kanter: Publisher
Christine Begley: Vice President for Editorial and Product Development
Susan Kendrioski: Vice President for Design and Production
Stanley Schmidt: Editor
Trevor Quachri: Managing Editor
Mary Grant: Editorial Assistant
Victoria Green: Senior Art Director
Cindy Tiberi: Production Artist
Laura Tulley: Senior Production Manager
Jennifer Cone: Production Associate
Abigail Browning: Manager, Subsidiary Rights and Marketing
Julia McEvoy: Manager, Advertising Sales
Bruce W. Sherbow: VP, Sales, Marketing, and IT
Sandy Marlowe: Circulation Services
Advertising Representative: Robin DiMeglio, Advertising Sales Manager, Tel: (203) 866-6688 • Fax: (203) 854-5962 (Display and Classified Advertising)
Editorial Correspondence Only: [email protected]
Published since 1930
First issue of Astounding January 1930 ©
* * *
Reader's Department: EDITORIAL: ANALOG, DIGITAL, AND US
by Stanley Schmidt
One of the first things I had to get used to when I became editor of Analog was people asking me, “When are you going to change the name of the magazine to Digital, now that everything is?” Usually they were so obviously pleased with their cleverness and originality that it would have seemed boorish not to smile and pretend that I was, too; and I still have occasion to practice that ritual from time to time. But the fact is that, while a great many things are done digitally these days (and for very good reasons), not everything is (also for very good reasons). “Analog” is still a good description of the philosophy behind much of our fiction (even though most of our production work is now done digitally), and analog and digital are not the only games in town. We humans are ourselves a fine working example of a way of handling information that's fundamentally different from either of them. And might there be still other ways that we haven't even thought of yet?
The approaches differ at such a fundamental level that even engineers who have made a career of working with analog equipment may have trouble fully comprehending just how different digital is. With analog recordings of sound, for example, any time you make a new copy, you will lose quality—guaranteed. Maybe not enough to be obvious, but there will always be some distortion or deterioration of the signal. With digital recordings, on the other hand, a copy is exactly as good as the original (unless something goes wrong, but usually it doesn't). If you tell that to an engineer used to analog work and only analog work, his answer is likely to be a wary, “Well, maybe almost . . .”
But the answer is: No, not almost. Exactly.
To make that statement credible, for old-school engineers or anybody else, let's review exactly what characterizes and distinguishes analog and digital methods. Each has its own strengths and weaknesses, not shared by the other.
In analog processing, the variation of one quantity is determined by imitating the variation of another. For an example that was more familiar a few years ago, consider what happens when a phonograph record is made and later played. A sound is a periodic variation of air pressure; when that impinges on a microphone, it produces an electrical voltage that varies with time in the same way as the pressure. That voltage in turn causes a needle to move with the same sort of time variation, cutting a groove in a disk. That disk is the recording, or record. When it's played back, another needle rides the groove and is wiggled back and forth and up and down by the undulations of the track. The wiggles of the needle are converted by a piezoelectric or magnetic cartridge into an electrical signal that varies in the same way, and is finally used to drive a speaker which produces similarly varying air pressure near the listener—i.e., sound.
If all goes well, the signal produced at each stage of conversion will vary with time in the same way as the pressure in the original sound, so the sound coming out of the speaker will be indistinguishable from the original captured by the microphone. However, there are several stages of conversion from one signal form to another, and none of them is perfect. A signal contains many frequencies of sine waves, and no mechanical or electrical system responds in exactly the same way to all of them (which is why the specifications for an amplifier, microphone, or speaker include “frequency response”). And there is always noise, sometimes large and unpredictable like transients in the power supply, but always, at the very least, quantum noise at the level of atoms and electrons. So for a variety of reasons, every part of every “copy” made when the signal is translated into a different form is a little different from the one before it, and if you go through several translations, the distortions pile up.
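If you'd like to see that generational loss in the abstract, here is a minimal sketch in Python (my illustration, with made-up noise figures, not a model of any particular equipment): every analog conversion adds a little random noise, while a digital copy duplicates the bits exactly.

import random

# A "signal" as a list of sample values between -1 and 1.
original = [random.uniform(-1, 1) for _ in range(1000)]

def analog_copy(samples, noise_level=0.01):
    # Each analog conversion adds a little random noise (level assumed).
    return [s + random.gauss(0, noise_level) for s in samples]

def digital_copy(bits):
    # A digital copy just duplicates the bits, exactly.
    return list(bits)

# Ten analog generations: the distortions pile up.
copy = original
for _ in range(10):
    copy = analog_copy(copy)
print("worst drift after 10 analog copies:",
      max(abs(a - b) for a, b in zip(original, copy)))

# Ten digital generations: bit-for-bit identical.
bits = [1, 0, 1, 1, 0, 0, 1]
for _ in range(10):
    bits = digital_copy(bits)
print("still the original bits:", bits == [1, 0, 1, 1, 0, 0, 1])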
Nevertheless, when analog processing was the only kind available, people put up with that because, with care to minimize the distortion at each step, it was possible to get eminently usable results—and, as engineers became better and better at it, very good (but never perfect) results.
The process also lends itself to some kinds of computing. The simplest kind of analog computer is the slide rule that used to be an everyday tool of engineers and physicists: two sticks mounted side by side and calibrated with the logarithms of numbers, so that sliding them past each other to add or subtract lengths provided a quick and easy way to multiply or divide. But there were electronic versions too. A great many important systems in the physical world are oscillators, and the equations describing a mechanical oscillator are identical to those for an electromagnetic one; only the names of the variables are different, with an exact one-to-one correspondence. So if you want to know under what conditions a huge wrecking ball might start swinging out of control, without actually trying it, you can set up an analogous electrical circuit with inductors, capacitors, and resistors, and make measurements on that which will correspond accurately to the behavior of the big mechanical system. More complicated systems, from machines to ecosystems, could also be simulated by more complicated circuits put together with combinations of operational amplifiers, a special kind of amplifier with two inputs and one output, typically proportional to the difference between the inputs.
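The slide rule's trick is easy to state in code. A minimal sketch of my own (purely illustrative): adding logarithms multiplies the underlying numbers, which is exactly what sliding the scales does with lengths.

import math

def slide_rule_multiply(a, b):
    # Sliding one log-calibrated scale along the other adds lengths;
    # adding logarithms multiplies the underlying numbers.
    return math.exp(math.log(a) + math.log(b))

def slide_rule_divide(a, b):
    # Subtracting lengths divides.
    return math.exp(math.log(a) - math.log(b))

print(slide_rule_multiply(6, 7))   # 42.0 (to floating-point precision)
print(slide_rule_divide(42, 6))    # 7.0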
(And how about the name of this magazine? Each story can be thought of as an analog simulation of a possible future, in which the author imagines a set of conditions and characters and watches how they interact—which, if he does a good job, may be a good simulation of how the same scenario might play out in reality.)
In digital processing, information is expressed not as a physical quantity like distance (as in the groove of a phonograph record) or voltage (as in an audio amplifier), but as a number describing the instantaneous value of that measurement. In the output from a microphone, sound is coded as a voltage that varies in the same way as the pressure in the sound wave. In the output from a digital player (like a CD player or an iPod), it's expressed instead as a series of numbers telling the value of that voltage at a large number of closely spaced instants in time.
The important advantage of that lies in the fact that numbers can be coded in a way much less susceptible to change by small extraneous signals, or noise—so much less susceptible, in fact, that the changes that do occur have no effect on the signal being processed and used. Yes, noise will still change the exact values of voltages, just as in analog equipment; but in analog equipment, those changes always mean that the output is at least slightly different from the input. In digital equipment, each number is instead expressed in binary form by a series of circuit elements, each of which is either on or off (which can be interpreted as 1 or 0, or yes or no). There are no intermediate states.
Certain types of circuits (one of which, by a curious almost-coincidence, is called a Schmitt trigger) have to be in one or the other of two stable states, and can be flipped between them by an incoming signal. If the input is above a certain “trigger level,” the state flips; if the input is below the trigger level, it doesn't. Say, for instance, that the trigger level is 100 mV. Even if there's enough uncertainty that tripping can occur anywhere between 90 and 110 mV, anything from 0 to 90 mV won't flip the circuit, while anything above 110 will. If the overall circuit is set up so that a “no” input is always in the rather broad range of 40 ± 40 mV and a “yes” input is 150 ± 30 mV, you can tolerate quite a lot of noise without tripping the circuit into the wrong state. Since the signal that matters is just a series of ones and zeros, those won't be changed by noise, and one copy is as good as another. As long as a “yes” remains a “yes,” that's all that matters. (Occasionally, of course, something will go wrong enough to mis-set a bit, but with well-designed equipment that's extremely rare. How often have you hit the “e” key on your computer and gotten a different letter instead?)
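Here's a rough simulation of that noise immunity, using the same made-up numbers as above (a 100 mV trigger level, “no” at 40 ± 40 mV, “yes” at 150 ± 30 mV). It's a sketch of the principle, not of any real circuit.

import random

TRIGGER_MV = 100  # the nominal trigger level from the example above

def read_bit(voltage_mv):
    # The circuit has exactly two stable states: above the trigger
    # level it settles to 1, below it settles to 0.
    return 1 if voltage_mv > TRIGGER_MV else 0

def noisy(nominal_mv, spread_mv):
    # An input somewhere in nominal +/- spread, i.e. with noise added.
    return nominal_mv + random.uniform(-spread_mv, spread_mv)

# Ten thousand noisy "no" (40 +/- 40 mV) and "yes" (150 +/- 30 mV) inputs:
ok = all(read_bit(noisy(40, 40)) == 0 and read_bit(noisy(150, 30)) == 1
         for _ in range(10_000))
print("every bit read correctly despite the noise:", ok)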
Analog and digital processing aren't mutually exclusive, and sometimes both are used in different parts of a single process. In digital sound recording, for example, the first stage, done by the microphone, is all analog: the production of an electrical signal analogous to the air pressure variation at the microphone. That signal is then run through an analog-to-digital converter, which measures the voltage at many closely spaced points along the voltage graph and converts each into a number, expressed, stored, and processed digitally, as a series of yes-no bits in a medium such as a CD or a flash drive. Finally, that digital signal must be converted back to an analog one—a time-varying voltage—to drive the speaker in a way that will deliver the desired sound to a listener's ear.
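The digital middle of that chain can be sketched in a few lines (again my illustration, with an arbitrary sample rate and bit depth standing in for real converter hardware):

import math

SAMPLE_RATE = 8000   # samples per second (chosen for illustration)
LEVELS = 2 ** 16     # 16-bit quantization: 65,536 possible values

def tone(t):
    # The "analog" input: a 440 Hz sine wave, values between -1 and 1.
    return math.sin(2 * math.pi * 440 * t)

def adc(voltage):
    # Analog-to-digital: round the voltage to the nearest integer level.
    return round((voltage + 1) / 2 * (LEVELS - 1))

def dac(sample):
    # Digital-to-analog: map the stored integer back to a voltage.
    return sample / (LEVELS - 1) * 2 - 1

# Measure the voltage at many closely spaced instants, store the numbers,
# then convert them back and see how far the rebuilt signal strays.
samples = [adc(tone(n / SAMPLE_RATE)) for n in range(SAMPLE_RATE)]
worst = max(abs(dac(s) - tone(n / SAMPLE_RATE)) for n, s in enumerate(samples))
print("worst quantization error:", worst)   # tiny: about 1.5e-5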
Now, how about us? We often hear (usually, I suspect, from people with a deep-seated psychological need to think of themselves as something utterly different from, and superior to, any other entities such as other animals and machines) that “the human brain is not a computer.” But of course it is—a computer that is extraordinarily powerful in some ways and quite limited in others. It takes information from sense organs and its own stored memories and processes it to generate outputs in the form of new information or instructions for actions. Those are exactly the sorts of things that we expect other computers to do, so our brains are computers, and there's no need to deny that or to apologize, rationalize, or feel defensive or ashamed about it.
But brains do all these things very differently from most of the analog or digital machines we build and use (which should provide at least a little consolation for those people who need to feel uniquely superior). We have some impressive strengths that our manufactured computers conspicuously lack. We can, for example, instantly recognize faces viewed from any angle and with a wide range of expressions, and written and printed symbols in a wide range of fonts and sizes, even if they're versions we've never seen before or they're badly executed or damaged. We can notice connections between seemingly unrelated pieces of information, and use them to generate something new and unexpected, ranging from an on-the-spot decision to take an alternate route home because we've heard there's an accident ahead, to composing Der Ring des Nibelungen or inventing the theory of relativity. We can do things that seem to us so simple that we take them for granted, like walking across a room without bumping into things.
Programming a robot controlled by a digital computer to do any of these things is exceedingly difficult. It's not too hard to program a digital computer to recognize the letter “Q” in 14-point Times New Roman type, provided it's positioned and oriented just so and doesn't have any flyspecks on it. It's vastly harder to program one to recognize every variation of “Q” that a four-year-old, a calligrapher, or an art director might come up with. Faces are even harder. Digital computers (as usually programmed) are not equipped to make correlations that they have not been instructed to make, much less come up with original ideas that resonate with people, like evolution or sonata form. Programming one to walk across a room of unknown size and shape, and filled with unknown furnishings, is beyond formidably difficult.
On the other hand, we have our own weaknesses and digital computers have their own strengths. They are far more efficient at crunching large numbers of numbers—provided they have detailed instructions on exactly how they should crunch them—than we are.
When we want to learn something, it typically involves a period, sometimes lengthy, of trial and error and practice. We need to keep practicing even after we've learned something, or the knowledge and skills slip away. Sometimes we make mistakes or forget things, even things that we know well—and sometimes we spontaneously remember them again later. Knowing that this happens, we often make up mnemonics to help us remember things by artificially associating them with something else (like “Oh Be A Fine Girl Kiss Me” for the spectral classes of stars).
Computers (again, of the usual sort) don't need practice to learn something, they don't need mnemonics, and they don't forget. If you want one to remember something, you tell it what, and it immediately stores it, exactly as you entered it, in a precise place where it can be retrieved on demand at any later time, as long as it hasn't been erased.
These differences reflect very fundamental differences in how human brains and most of our computers store and process information. In a digital computer, information is stored as a sequence of ones and zeros in a single, precise, well-defined location. The downside of that is that if that location is damaged—which may not take much—the memory is lost (which is why it's so important to make and keep frequent backups). When a computer solves a problem, it does it by using an algorithm, a predefined sequence of steps it goes through to process the data it's fed. The human who programs the computer to solve the problem must already know how to do it so he or she can tell the computer exactly what steps to take. The only advantage of having the computer do it is that it can do it much, much faster and is much less likely to make mistakes. (“Making mistakes,” in this context, means “Failing to follow directions exactly.” As we all know, a very good computer can give wildly wrong results if it's given faulty data or faulty instructions.)
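To make “algorithm” concrete, here is one (a toy of my own choosing, not anything from the magazine): Newton's method for square roots, a predefined sequence of steps the programmer must already understand before the computer can execute them.

def square_root(x, tolerance=1e-10):
    # The programmer already knows the steps: start with a guess,
    # then repeatedly average the guess with x / guess until the
    # square of the guess is close enough to x.
    assert x > 0, "this illustration only handles positive inputs"
    guess = x
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2
    return guess

print(square_root(2))   # 1.41421356..., the same way, every time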
The human way of storing memories and solving problems is completely different. Our memories are not stored in simple digital or analog form in a precise location, but distributed through a large network of neurons. The output of each neuron is determined by a weighted combination of the inputs it gets from other neurons, both “before” and “after” it in the network. Either learning or retrieving a memory involves conditioning all the neurons in the net so that a particular kind of input (such as any of the many versions of “Q”) yields a particular kind of output (recognition that the thing being seen is a “Q”). The weighting of outputs from all the neurons is adjusted depending on whether the network's output is or is not what it ought to be. That's why human learning is so dependent on repetition, testing of results against an external standard, and positive or negative reinforcement. The payoff is that we can become exceedingly good at tasks like pattern recognition and real-time operations involving complicated coordination like playing a cello or driving a car. A bonus is that since memories are distributed, with many neurons involved in a single memory and a single neuron involved in many memories, memories are less vulnerable to localized damage.*
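A single artificial “neuron” of that kind takes only a few lines to sketch. What follows is a textbook perceptron of my own devising, not a model of real neurons or of any specific software: its output is a weighted combination of its inputs, and each weight is nudged up or down according to whether the output was what it ought to be.

import random

def fire(weights, inputs):
    # The neuron's output: 1 if the weighted sum of its inputs is positive.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) > 0 else 0

def train(examples, passes=50, rate=0.1):
    # Repetition plus reinforcement: after each guess, nudge every
    # weight toward what would have produced the desired answer.
    weights = [random.uniform(-1, 1) for _ in range(len(examples[0][0]))]
    for _ in range(passes):
        for inputs, desired in examples:
            error = desired - fire(weights, inputs)
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]
    return weights

# Teach the neuron logical OR; the constant first input acts as a bias.
examples = [([1, 0, 0], 0), ([1, 0, 1], 1), ([1, 1, 0], 1), ([1, 1, 1], 1)]
weights = train(examples)
print([fire(weights, inputs) for inputs, _ in examples])   # [0, 1, 1, 1]

Notice that nothing in the code tells the neuron what OR means; it is taught by example and correction, which is exactly the difference from the square-root algorithm above.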
The difference between digital and human-style data processing is not between biological and electronic, or between divinely created and humanly manufactured. In recent decades some researchers have been experimenting with types of circuits called neural nets that mimic the operation of human or animal nervous systems, using very basic electronic computers (which may, perhaps ironically, be digital or analog in their inner workings) as analogs of neurons. You don't program them, as we do what we normally call computers; you teach them, much as we teach human students. Rick Cook had an article about them here twenty years ago (August 1989); since then they've acquired considerable practical importance as the basis of tools like robotic vacuum cleaners and optical character recognition (OCR) software.