The journalist Charles Seife included in his book Proofiness a very funny and mildly depressing chronicle of the similarly close contest between Democrat Al Franken and Republican Norm Coleman to represent Minnesota in the U.S. Senate. It would be great to say that Franken took office because a cold analytical procedure showed exactly 312 more Minnesotans wanted to see him seated in the chamber. In reality, though, that number reflects the result of an extended legal tussle over questions like whether a ballot with a mark for Franken and a write-in for “Lizard People” was legally cast. Once you get down to this kind of issue, the question of who “really” got more votes doesn’t even make sense. The signal is lost in the noise. And I tend to side with Seife, who argues that elections this close should be decided by coin flip.* Some will balk at the idea of choosing our leaders by chance. But that’s actually the coin flip’s most important benefit! Close elections are already determined by chance. Bad weather in the big city, a busted voting machine in an outlying town, a poorly designed ballot leading elderly Jews to vote for Pat Buchanan—any of these chance events can make the difference when the electorate is stuck at 50–50. Choosing by coin flip helps keep us from pretending that the people have spoken for the winning candidate in a closely divided race. Sometimes the people speak and they say, “I dunno.”
You might think I’d be really into decimal places. The conjoined twin of the stereotype that mathematicians are always certain is the stereotype that we are always precise, determined to compute everything to as many decimal places as possible. It isn’t so. We want to compute everything to as many decimal places as necessary. There is a young man in China named Lu Chao who learned and recited 67,890 digits of pi. That’s an impressive feat of memory. But is it interesting? No, because the digits of pi are not interesting. As far as anyone knows, they’re as good as random. Pi itself is interesting, to be sure. But pi is not its digits; it is merely specified by its digits, in the same way the Eiffel Tower is specified by the latitude and longitude 48.8586° N, 2.2942° E. Add as many decimal places to those numbers as you want, and they still won’t tell you what makes the Eiffel Tower the Eiffel Tower.
Precision isn’t just about digits. Benjamin Franklin wrote cuttingly of a member of his Philadelphia set, Thomas Godfrey: “He knew little out of his way, and was not a pleasing companion; as, like most great mathematicians I have met with, he expected universal precision in everything said, or was for ever denying or distinguishing upon trifles, to the disturbance of all conversation.”
This stings because it’s only partially unfair. Mathematicians can be persnickety about logical niceties. We’re the kind of people who think it’s funny, when asked, “Do you want soup or salad with that?” to reply, “Yes.”
THAT DOES NOT COMPUTE
And yet even mathematicians don’t, except when cracking wise, try to make themselves beings of pure logic. It can be dangerous to do so! For example: If you’re a purely deductive thinker, once you believe two contradictory facts you are logically obliged to believe that every statement is false. Here’s how that goes. Suppose I believe both that Paris is the capital of France and that it’s not. This seems to have nothing to do with whether the Portland Trail Blazers were NBA champions in 1982. But now watch this trick. Is it the case that Paris is the capital of France and the Trail Blazers won the NBA championship? It is not, because I know that Paris is not the capital of France.
If it’s not true that Paris is the capital of France and the Trail Blazers were the champs, then either Paris isn’t the capital of France or the Trail Blazers weren’t NBA champs. But I know that Paris is the capital of France, which rules out the first possibility. So the Trail Blazers did not win the 1982 NBA championship.
It is not hard to check that an argument of exactly the same form, but standing on its head, proves that every statement is also true.
This sounds weird, but as a logical deduction it’s irrefutable; drop one tiny contradiction anywhere into a formal system and the whole thing goes to hell. Philosophers of a mathematical bent call this brittleness in formal logic ex falso quodlibet, or, among friends, “the principle of explosion.” (Remember what I said about how much math people love violent terminology?)
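For readers who like to see such things pinned down, the explosion argument can be written out in a proof assistant. Here is a sketch in Lean 4 (my own illustration, not from the text): once you hold both P and not-P, any proposition Q whatsoever follows, and so does its negation.

```lean
-- Ex falso quodlibet, the "principle of explosion":
-- from P and ¬P together, any Q follows.
theorem explosion (P Q : Prop) (hp : P) (hnp : ¬P) : Q :=
  absurd hp hnp

-- The mirror-image argument proves ¬Q from the same contradiction,
-- so every statement comes out both true and false at once.
theorem explosion' (P Q : Prop) (hp : P) (hnp : ¬P) : ¬Q :=
  absurd hp hnp
```

The built-in `absurd` packages exactly the Paris/Trail Blazers maneuver: it takes a proof of P and a proof of not-P and hands back a proof of anything you like.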
Ex falso quodlibet is how Captain James T. Kirk used to disable dictatorial AIs—feed them a paradox and their reasoning modules frazzle and halt. That (they plaintively remark, just before the power light goes out) does not compute. Bertrand Russell did to Gottlob Frege’s set theory what Kirk did to uppity robots. His one sneaky paradox brought the whole edifice down.
But Kirk’s trick doesn’t work on human beings. We don’t reason this way, not even those of us who do math for a living. We are tolerant of contradiction, to a point. As F. Scott Fitzgerald said, “The test of a first-rate intelligence is the ability to hold two opposed ideas in the mind at the same time, and still retain the ability to function.”
Mathematicians use this ability as a basic tool of thought. It’s essential for the reductio ad absurdum, which requires you to hold in your mind a proposition you believe to be false and reason as if you think it’s true: suppose the square root of 2 is a rational number, even though I’m trying to prove it’s not. . . . It is lucid dreaming of a very systematic kind. And we can do it without short-circuiting ourselves.
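The suppose-the-opposite argument alluded to here is the classic one, and it is short enough to write out in full:

```latex
Suppose, for contradiction, that $\sqrt{2} = p/q$ for whole numbers
$p$ and $q$ with no common factor. Squaring both sides gives
\[
  2q^2 = p^2,
\]
so $p^2$ is even, which forces $p$ to be even: write $p = 2r$. Then
\[
  2q^2 = 4r^2, \qquad \text{so} \qquad q^2 = 2r^2,
\]
and $q$ must be even as well. But now $p$ and $q$ share the factor
$2$, contradicting our starting assumption. So no such fraction
exists: $\sqrt{2}$ is irrational.
```

The whole proof is spent reasoning carefully inside a world the prover believes to be fictional.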
In fact, it’s a common piece of folk advice—I know I heard it from my Ph.D. advisor, and presumably he from his, etc.—that when you’re working hard on a theorem you should try to prove it by day and disprove it by night. (The precise frequency of the toggle isn’t critical; it’s said of the topologist R. H. Bing that his habit was to split each month between two weeks trying to prove the Poincaré Conjecture and two weeks trying to find a counterexample.*)
Why work at such cross-purposes? There are two good reasons. The first is that you might, after all, be wrong; if the statement you think is true is really false, all your effort to prove it is doomed to be useless. Disproving by night is a kind of hedge against that gigantic waste.
But there’s a deeper reason. If something is true and you try to disprove it, you will fail. We are trained to think of failure as bad, but it’s not all bad. You can learn from failure. You try to disprove the statement one way, and you hit a wall. You try another way, and you hit another wall. Each night you try, each night you fail, each night a new wall, and if you are lucky, those walls start to come together into a structure, and that structure is the structure of the proof of the theorem. For if you have really understood what’s keeping you from disproving the theorem, you very likely understand, in a way inaccessible to you before, why the theorem is true. This is what happened to Bolyai, who bucked his father’s well-meaning advice and tried, like so many before him, to prove that the parallel postulate followed from Euclid’s other axioms. Like all the others, he failed. But unlike the others, he was able to understand the shape of his failure. What was blocking all his attempts to prove that there was no geometry without the parallel postulate was the existence of just such a geometry! And with each failed attempt he learned more about the features of the thing he didn’t think existed, getting to know it more and more intimately, until the moment when he realized it was really there.
Proving by day and disproving by night is not just for mathematics. I find it’s a good habit to put pressure on all your beliefs, social, political, scientific, and philosophical. Believe whatever you believe by day; but at night, argue against the propositions you hold most dear. Don’t cheat! To the greatest extent possible you have to think as though you believe what you don’t believe. And if you can’t talk yourself out of your existing beliefs, you’ll know a lot more about why you believe what you believe. You’ll have come a little closer to a proof.
This salutary mental exercise is not at all what F. Scott Fitzgerald was talking about, by the way. His endorsement of holding contradictory beliefs comes from “The Crack-Up,” his 1936 essay about his own irreparable brokenness. The opposing ideas he has in mind there are “the sense of futility of effort and the sense of the necessity to struggle.” Samuel Beckett later put it more succinctly: “I can’t go on, I’ll go on.” Fitzgerald’s characterization of a “first-rate intelligence” is meant to deny his own intelligence that designation; as he saw it, the pressure of the contradiction had made him effectively cease to exist, like Frege’s set theory or a computer downed by Kirkian paradox. (The Housemartins, elsewhere in “Sitting on a Fence,” more or less summarize “The Crack-Up”: “I lied to myself right from the start / and I just worked out that I’m falling apart.”) Unmanned and undone by self-doubt, drowned in books and introspection, he had become exactly the kind of sad young literary man who made Theodore Roosevelt puke.
David Foster Wallace was interested in paradox too. In his characteristically mathematical style, he put a somewhat tamed version of Russell’s paradox at the center of his first novel, The Broom of the System. It isn’t too strong to say his writing was driven by his struggle with contradictions. He was in love with the technical and analytic, but he saw that the simple dicta of religion and self-help offered better weapons against drugs, despair, and killing solipsism. He knew it was supposed to be the writer’s job to get inside other people’s heads, but his chief subject was the predicament of being stuck fast inside one’s own. Determined to record and neutralize the influence of his own preoccupations and prejudices, he knew this determination was itself among those preoccupations and subject to those prejudices. This is Phil 101 stuff, to be sure, but as any math student knows, the old problems you meet freshman year are some of the deepest you ever see. Wallace wrestled with the paradoxes just the way mathematicians do. You believe two things that seem in opposition. And so you go to work—step by step, clearing the brush, separating what you know from what you believe, holding the opposing hypotheses side by side in your mind and viewing each in the adversarial light of the other until the truth, or the nearest you can get to it, comes clear.
As for Beckett, he had a richer and more sympathetic view of contradiction, which is so ever-present in his work that it takes on every possible emotional color somewhere or other in the corpus. “I can’t go on, I’ll go on” is bleak; but Beckett also draws on the Pythagoreans’ proof of the irrationality of the square root of 2, turning it into a joke between drunks:
“But betray me,” said Neary, “and you go the way of Hippasos.”
“The Akousmatic, I presume,” said Wylie. “His retribution slips my mind.”
“Drowned in a puddle,” said Neary, “for having divulged the incommensurability of side and diagonal.”
“So perish all babblers,” said Wylie.
It’s not clear how much higher math Beckett knew, but in his late prose piece Worstward Ho, he sums up the value of failure in mathematical creation more succinctly than any professor ever has:
Ever tried. Ever failed. No matter. Try again. Fail again. Fail better.
WHEN AM I GOING TO USE THIS?
The mathematicians we’ve encountered in this book are not just puncturers of unjustified certainties, not just critics who count. They found things and they built things. Galton uncovered the idea of regression to the mean; Condorcet built a new paradigm for social decision making; Bolyai created an entirely novel geometry, “a strange new universe”; Shannon and Hamming made a geometry of their own, a space where digital signals lived instead of circles and triangles; Wald got the armor on the right part of the plane.
Every mathematician creates new things, some big, some small. All mathematical writing is creative writing. And the entities we can create mathematically are subject to no physical limits; they can be finite or infinite, they can be realizable in our observable universe or not. This sometimes leads outsiders to think of mathematicians as voyagers in a psychedelic realm of dangerous mental fire, staring straight at visions that would drive lesser beings mad, sometimes indeed being driven mad themselves.
It’s not like that, as we’ve seen. Mathematicians aren’t crazy, and we aren’t aliens, and we aren’t mystics.
What’s true is that the sensation of mathematical understanding—of suddenly knowing what’s going on, with total certainty, all the way to the bottom—is a special thing, attainable in few if any other places in life. You feel you’ve reached into the universe’s guts and put your hand on the wire. It’s hard to describe to people who haven’t experienced it.
We are not free to say whatever we like about the wild entities we make up. They require definition, and having been defined, they are no more psychedelic than trees and fish; they are what they are. To do mathematics is to be, at once, touched by fire and bound by reason. This is no contradiction. Logic forms a narrow channel through which intuition flows with vastly augmented force.
The lessons of mathematics are simple ones and there are no numbers in them: that there is structure in the world; that we can hope to understand some of it and not just gape at what our senses present to us; that our intuition is stronger with a formal exoskeleton than without one. And that mathematical certainty is one thing, the softer convictions we find attached to us in everyday life another, and we should keep track of the difference if we can.
Every time you observe that more of a good thing is not always better; or you remember that improbable things happen a lot, given enough chances, and resist the lure of the Baltimore stockbroker; or you make a decision based not just on the most likely future, but on the cloud of all possible futures, with attention to which ones are likely and which ones are not; or you let go of the idea that the beliefs of groups should be subject to the same rules as beliefs of individuals; or, simply, you find that cognitive sweet spot where you can let your intuition run wild on the network of tracks formal reasoning makes for it; without writing down an equation or drawing a graph, you are doing mathematics, the extension of common sense by other means. When are you going to use it? You’ve been using mathematics since you were born and you’ll probably never stop. Use it well.
ACKNOWLEDGMENTS
It has been about eight years since I first had the idea of writing this book. That How Not to Be Wrong is now in your hands, and not just an idea, is testament to the wise guidance of my agent, Jay Mandel, who patiently asked me every year whether I was ready to take a try at writing something and, when I finally said “yes,” helped me refine the concept from “I want to yell at people, at length, about how great math is” to something more like an actual book.
I’m very fortunate to have placed the book with The Penguin Press, which has a long tradition of helping academics speak to a wide audience while still allowing them to totally nerd out. I benefited tremendously from the insights of Colin Dickerman, who acquired the book and helped see it through to near-finished form, and Scott Moyers, who took over for the final push. Both of them were very understanding with a novice author as the project transformed itself into something quite different from the book I had originally proposed. I have also benefited greatly from the advice and assistance of Mally Anderson, Akif Saifi, Sarah Hutson, and Liz Calamari at The Penguin Press and Laura Stickney at Penguin UK.
I also owe thanks to the editors of Slate, especially Josh Levin, Jack Shafer, and David Plotz, who decided in 2001 that what Slate needed was a math column. They’ve been running my stuff ever since, helping me learn how to talk about math in a way that non-mathematicians can understand. Some parts of this book are adapted from my Slate pieces and have benefited from their editing. I’m also very grateful to my editors at other publications: at the New York Times, the Washington Post, the Boston Globe, and the Wall Street Journal. (The book also contains some repurposed bits and pieces from my articles in the Post and the Globe.) I’m especially thankful for Heidi Julavits at the Believer and Nicholas Thompson at Wired, who were the first to assign me long pieces and taught me critical lessons about how to keep a mathematical narrative moving for thousands of words at a stretch.
Elise Craig did an excellent job fact-checking portions of this book; if you find a mistake, it’s in the other portions. Greg Villepique copyedited the book, removing many errors of usage and fact. He is a tireless foe of unnecessary hyphens.
Barry Mazur, my PhD advisor, taught me much of what I know about number theory; what’s more, he serves as a model for the deep connections between mathematics and other modes of thinking, expressing, and feeling.
For the Russell quote that opens the book I’m indebted to David Foster Wallace, who marked the quote as a potential epigraph in his working notes for Everything and More, his book about set theory, but didn’t end up using it.
Much of How Not to Be Wrong was written while I was on sabbatical from my position at the University of Wisconsin–Madison; I thank the Wisconsin Alumni Research Foundation for enabling me to extend that leave to a full year with a Romnes Faculty Fellowship and my colleagues at Madison for supporting this idiosyncratic and not-exactly-academic project.
I also want to thank Barriques Coffee on Monroe Street in Madison, Wisconsin, where much of this book was produced.
The book itself has benefited from suggestions and close readings from many friends, colleagues, and strangers who answered my e-mail, including: Laura Balzano, Meredith Broussard, Tim Carmody, Tim Chow, Jenny Davidson, Jon Eckhardt, Steve Fienberg, Peli Grietzer, the Hieratic Conglomerate, Gil Kalai, Emmanuel Kowalski, David Krakauer, Lauren Kroiz, Tanya Latty, Marc Mangel, Arika Okrent, John Quiggin, Ben Recht, Michel Regenwetter, Ian Roulstone, Nissim Schlam-Salman, Gerald Selbee, Cosma Shalizi, Michelle Shih, Barry Simon, Brad Snyder, Elliott Sober, Miranda Spieler, Jason Steinberg, Hal Stern, Stephanie Tai, Bob Temple, Ravi Vakil, Robert Wardrop, Eric Wepsic, Leland Wilkinson, and Janet Wittes. Inevitably there are others; I apologize to anyone I have missed. I want to single out several readers who gave especially important feedback: Tom Scocca, who read the whole thing with a keen eye and an unsparing stance; Andrew Gelman and Stephen Stigler, who kept me honest about the history of statistics; Stephen Burt, who kept me honest about poetry; Henry Cohn, who carried out an amazing close reading on a big chunk of the book and fed me the quote about Winston Churchill and the projective plane; Lynda Barry, who told me it was okay to draw the pictures myself; and my parents, both applied statisticians, who read everything and told me when it was getting too abstract.