Clarkesworld Magazine Issue 80

by Craig DeLancey


  The talk was sparsely attended, and most of the audience had no idea what was happening before them. For the young Gödel demonstrated the most audacious trick of logic ever performed: he showed how to turn mathematical formulas into numbers, on which one can then perform further mathematics. The result was a technique for using mathematics to describe itself.
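
  To make the trick concrete, here is a toy Gödel numbering sketched in Python. The alphabet and the packing scheme below are illustrative choices of mine, not Gödel’s exact 1931 construction: give each symbol a code number, then store a formula’s codes in the exponents of the first few primes. Unique prime factorization guarantees that the original formula can always be recovered from the number.

```python
# A toy Gödel numbering: turn a formula (a string of symbols) into a
# single integer, and decode it back. The details are illustrative;
# Gödel's own 1931 scheme differed, but the idea is the same.

SYMBOLS = "0S=+*()v~"                             # a tiny formal alphabet
CODE = {s: i + 1 for i, s in enumerate(SYMBOLS)}  # '0' -> 1, 'S' -> 2, ...

def primes():
    """Yield 2, 3, 5, 7, ... (trial division is fine for a toy)."""
    found = []
    n = 2
    while True:
        if all(n % p for p in found):
            found.append(n)
            yield n
        n += 1

def godel_number(formula):
    """Encode symbol codes c1, c2, c3, ... as 2**c1 * 3**c2 * 5**c3 * ..."""
    g = 1
    for p, sym in zip(primes(), formula):
        g *= p ** CODE[sym]
    return g

def decode(g):
    """Read the formula back off the prime exponents of g."""
    out = []
    for p in primes():
        e = 0
        while g % p == 0:
            g //= p
            e += 1
        if e == 0:                 # primes are used consecutively, so a
            break                  # zero exponent means we are done
        out.append(SYMBOLS[e - 1])
    return "".join(out)

n = godel_number("0=0")            # the humble truth "zero equals zero"
print(n)                           # 270  (= 2**1 * 3**3 * 5**1)
print(decode(n))                   # 0=0
```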

  Gödel then revealed what is today called a Gödel Sentence. In arithmetic it is possible to construct a sentence that means, “This sentence is not provable.” The consequences are stunning: if this sentence can be proved, arithmetic is inconsistent (because it would have proved a falsehood). If this sentence cannot be proved, then it is true, and thus there are truths of arithmetic that cannot be proved—a property we call incompleteness.
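
  In the compressed notation of a modern logic textbook (a standard rendering, not Gödel’s original 1931 formalism), the construction and the two-horned argument look like this:

```latex
% For a consistent theory T strong enough to encode arithmetic, the
% diagonal lemma yields a sentence G that asserts its own unprovability:
\[ T \vdash \; G \leftrightarrow \neg\,\mathrm{Prov}_T(\ulcorner G \urcorner) \]
% If T proves G, then T also proves Prov_T(G), hence both G and its
% negation: inconsistency. If T cannot prove G, then what G says is
% true, and T is incomplete.
```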

  Most of us hope that arithmetic is consistent. After all, we’ve been doing arithmetic for a long time, eons, and no one ever proved that 2+2=5. So, we conclude optimistically that Gödel has proved that there are truths of arithmetic that cannot be proved. This is why the result is now called Gödel’s First Incompleteness Theorem.

  The theorem generalizes to all of what the rest of us consider mathematics. Furthermore, it has been shown that not just the Gödel Sentence but other problems of mathematics are unprovable. This is a shocking result: for every consistent mathematical system of significant power, there are truths of that system that we cannot reach.

  Gödel later extended his result, in a way that gives an answer to Hilbert’s Second Problem. Gödel proved that we cannot prove the consistency of a mathematical system from within that system (provided the system is consistent and powerful enough to describe arithmetic). This is now known as Gödel’s Second Incompleteness Theorem.
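
  Schematically, again in modern textbook notation, with Con(T) abbreviating the arithmetized claim that the theory T proves no contradiction:

```latex
% Gödel's Second Incompleteness Theorem: a consistent theory T strong
% enough to encode arithmetic cannot prove its own consistency.
\[ T \nvdash \mathrm{Con}(T) \]
```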

  An answer to the decidability question came shortly thereafter. In a paper written in 1936 and published in 1937, a young English mathematician named Alan Turing gave us the first formal definition of an algorithm, and in so doing gave us the first formal description of computation.

  This is a marvelous accomplishment when you consider that Turing’s definition has proved definitive: his description of computation captures exactly what any and every computer can do. And in this 1936 paper, Turing also proved a result as stunning as Gödel’s: there is no effective procedure to determine whether another, arbitrary procedure is effective. What this means is that there can be no computer program to tell you whether some arbitrary computer program is going to work.

  Think of your own experience with software. Sometimes you turn on your computer, open a piece of software, set it to work on some problem, and promptly get the wait symbol—be it a spinning daisy wheel or turning hourglass. You wait, at first patiently, but then with growing annoyance, as the symbol spins and spins. You are confronted with a conundrum: is this program stuck in an infinite loop, or is it working correctly but just taking a long time? Turing proved that you cannot have an effective procedure to answer this question for any arbitrary program. You simply can’t reliably know which situation you are in. Your program may be stuck. Or, it may be working correctly, but working on a hard problem that will take a long while. (My advice, however, is to reboot.)
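
  Turing’s argument itself fits in a few lines. Suppose, for the sake of contradiction, that someone hands us a perfect halts(prog, arg) oracle; the function names here are illustrative, and the punch line is precisely that no such oracle can be written:

```python
# A sketch of Turing's diagonal argument (the halting problem).

def halts(prog, arg):
    """Assumed perfect oracle: True if prog(arg) eventually stops,
    False if it runs forever. Turing proved that no real implementation
    can exist; this stub is a stand-in for the impossible."""
    raise NotImplementedError("no such oracle can exist")

def paradox(prog):
    """Do the opposite of whatever halts() predicts prog does to itself."""
    if halts(prog, prog):    # predicted to stop? ...
        while True:          # ... then loop forever instead
            pass
    return "done"            # predicted to loop? ... then stop at once

# Now ask: does paradox(paradox) halt?
#   If halts(paradox, paradox) returns True, paradox(paradox) loops forever.
#   If it returns False, paradox(paradox) halts immediately.
# Either way the oracle answers wrongly about at least one program,
# so a correct halts() cannot exist. Not merely hard: impossible.
```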

  Some jokingly call Turing’s discovery the Computer Scientist Employment Act, since it means that we cannot replace human programmers with a computer that generates and tests programs. But the result is more general than this. It means that there is no algorithm to find the good algorithms, no one test to test all our reasoning.

  Together, these three results topple Hilbert’s Dream. The consequence of Gödel’s First Incompleteness Theorem is that we cannot know whether the facts we need to solve some problem, even if they are true, are provable. We might find that we simply have to assume some answer to the problem before us, if we are going to reason about it. But how do we know whether such an assumption is correct? The consequence of Gödel’s Second Incompleteness Theorem is that we cannot know beforehand. We must simply develop our theories and work with them, and if they fail spectacularly, back up and start over; but until they do fail, we won’t be able to predict whether our reasoning is consistent. And the consequence of Turing’s Undecidability result is that we have no brute-force method to find the answers to our problems; furthermore, when we cannot find the answer to some problem, we cannot know whether this is because of our own failure of imagination or because our theory is too weak. And there is no escape route; no matter how smart you are, these theorems still stand in the way.

  To return to the Singularity: consider now what these results mean for our lonely supercomputer, as it sits in its humming server farm, planning world domination. If it is going to make faster and smarter versions of itself, what must it do? It must reason out the next steps in scientific development, draw the correct inferences from these, and use these results to develop its ever-more-intelligent children. And how shall it do this? Turing’s result guarantees that there is no effective procedure to just hammer out the conclusion. Gödel’s results guarantee that the machine cannot be sure its reasoning is sound or sufficient for the problem before it. And so the conniving machine must make some hypotheses, test those hypotheses through use, and go back to make more hypotheses when it fails. It must, in other words, go about applying the scientific method and scientific experimentation. And this takes time. Snail time. It takes laboratories, and other resources, down here in the crawling now.

  The dream of the Singularity is really an extension of the most persistent, but most unrealistic, of science fiction tropes. This is the trope of the lonely super-productive scientist.

  There are endless stories of the sole scientific genius, developing a host of brilliant advances, working alone in his laboratory. We can find examples from every era. H. G. Wells’ Time Traveler builds a time machine alone in his Victorian shop. The aptly named Lone of Theodore Sturgeon’s More Than Human orders some parts from the local electronics shop and builds an anti-gravity device in his barn. Ayn Rand’s John Galt has a room in his New York apartment that is filled with world-transforming inventions that he tinkered together when not speechifying. Ted Chiang’s Leon Greco, former computer graphics artist, gets an intelligence boost and spontaneously discerns how to hack into FDA databases and understand the scientific papers stored there, write viruses that can penetrate government databases and wipe and rewrite selected information, discover new pattern-matching algorithms, quickly comprehend all the physics he reads and identify easy extensions to it, and so on.

  There is a reason that nothing like this ever happens in real life. Yes, sometimes an inventor hits on an innovation, maybe even two, on her own. But our reasoning capabilities are constrained in many ways that ultimately require endless hours of hard slogging through the scientific method, something that can be effectively done only by armies of scientists and engineers openly sharing results. In contrast, these fantasies of the lone scientist producing wild new inventions are just as realistic as would be a fantasy in which Wells’ Time Traveler singlehandedly builds the Eiffel Tower, or John Galt singlehandedly erects the Brooklyn Bridge. Work—be it physical or mental—has important costs and constraints. There just are no free lunches. Even for supercomputers.

  Scientific progress appears to have accelerated, and perhaps it will continue to do so. But we are the ones driving this progress, working together in large numbers, and using our computers as valuable time-saving tools. If computers ever become our equals, they will be no better off than we are, laboring with us on the hard work of scientific and mathematical discovery.

  I don’t mean to scold. Or, well, maybe I do. I confess that I’m not very fond of the Singularity as SF trope; I fear it counsels passivity, asking us to await the technoreligious rapture. Why exercise or compost, when soon this is all going to be software? But, then, it’s easy to criticize any of our tropes. I should know: I’m one of those writers who throws in faster-than-light travel because I want to have aliens in my space operas; and I’m one of those writers who has a common galactic language so my aliens can make witty remarks to each other. There are readers who groan and pull their hair when they encounter this kind of thing.

  Sometimes a writer gives up realism in one domain in order to realistically explore certain consequences. If a writer wants to explore and exaggerate a radical technology’s effect on a single human being, she might adopt the trope in which that single human being is a genius who, working alone, develops that technology. And if she wants to explore the vertiginous effects of scientific progress, she might introduce the idea of runaway scientific progress, driven by superintelligences, to allow for many encounters with such progress. So let the Singularity thrive as a narrative tool.

  But we should all be wary of those selling the message that the Singularity is near, here in the actual world. They are hawking a false dream, one long ago denied us. Artificial intelligences are not alone going to make us live longer or organize our economy or restore our environment or build us spaceships. We’ll have to do all that ourselves.

  Let’s get to work.

  About the Author

  Craig DeLancey is a philosopher and writer. He has published more than twenty short stories, in magazines like Analog, Cosmos, Shimmer, The Mississippi Review Online, and Nature Physics. His work has also appeared, in translation, in Russia and China. His short story “Julie is Three” won the AnLab readers’ choice award last year. He also writes plays, and his plays have had performances and staged readings in New York, Sydney, Melbourne, and in other cities. He has been a finalist for the Heideman Award. He teaches philosophy at the State University of New York at Oswego.

  Editor’s Desk:

  Day of the Wineberry

  Neil Clarke

  The Triffids, Biollante, and other infamous plant monsters have a new colleague. Like Biollante, it’s a fast-growing invader from Japan, but this one has taken up residence in my backyard. It has whip-like thorny appendages and can be very territorial.

  Last month, I made the mistake of stepping too close to its habitat and was summarily punished. Before I could blink, my head was grabbed by the beast and a row of radioactive thorns thrust into my face, right across my still-open eye! [Note: Claims of radioactivity may be exaggerated.]

  Yes, your editor, the damage-magnet, was once again thrust into a situation requiring a week-long adventure complete with medical professionals. Five thorn fragments had to be tweezed from my eye and my vision has been slowly recovering since. I’m no longer photo-sensitive, which is great, but I did lose a lot of work time this month. The planned editorial, “Slush Reading as an Educational Opportunity for Children,” was simply too much to tackle and has had to be rescheduled for next month.

  Until then, feel free to come up with various solutions that would prevent me from attracting further damage. If anyone has a working force field, please contact me.

  Award News

  Awards excitement continued in April with the announcements of the Ditmar Awards in Australia and the Aurora Awards finalists in Canada. There are always interesting works on their ballots, but this year we took special interest as two of our authors made the list. Congratulations to Thoraiya Dyer, winner of the Ditmar Award for Best Short Story (“The Wisdom of Ants”) and Suzanne Church, nominee for the Aurora Award for Best Short Fiction: English (“Synch Me, Kiss Me, Drop”). This is fantastic news and we are so happy for them.

  Later this month, the winners of the Nebula Award will be announced at a ceremony in San Jose, California. I would love to be in attendance and cheer on our four nominees (Helena Bell, Tom Crosshill, Aliette de Bodard and Catherynne M. Valente), but unfortunately, it isn’t possible. In past years, SFWA has broadcast the ceremony, so I hope to tune in and be there in spirit.

  Congratulations to all and best of luck!

  About the Author

  Neil Clarke is the editor of Clarkesworld Magazine, owner of Wyrm Publishing and a 2013 Hugo Nominee for Best Editor (short form). He currently lives in NJ with his wife and two children.
