How Not to Be Wrong : The Power of Mathematical Thinking (9780698163843)


by Jordan Ellenberg


  But Ramanujan is an outlier, whose story is so often told precisely because it’s so uncharacteristic. Hilbert started out a very good but not exceptional student, by no means the brightest young mathematician in Königsberg; that was Hermann Minkowski, two years younger. Minkowski went on to a distinguished mathematical career himself, but he was no Hilbert.

  One of the most painful parts of teaching mathematics is seeing students damaged by the cult of the genius. The genius cult tells students it’s not worth doing mathematics unless you’re the best at mathematics, because those special few are the only ones whose contributions matter. We don’t treat any other subject that way! I’ve never heard a student say, “I like Hamlet, but I don’t really belong in AP English—that kid who sits in the front row knows all the plays, and he started reading Shakespeare when he was nine!” Athletes don’t quit their sport just because one of their teammates outshines them. And yet I see promising young mathematicians quit every year, even though they love mathematics, because someone in their range of vision was “ahead” of them.

  We lose a lot of math majors this way. Thus, we lose a lot of future mathematicians; but that’s not the whole of the problem. I think we need more math majors who don’t become mathematicians. More math major doctors, more math major high school teachers, more math major CEOs, more math major senators. But we won’t get there until we dump the stereotype that math is only worthwhile for kid geniuses.

  The cult of the genius also tends to undervalue hard work. When I was starting out, I thought “hardworking” was a kind of veiled insult—something to say about a student when you can’t honestly say they’re smart. But the ability to work hard—to keep one’s whole attention and energy focused on a problem, systematically turning it over and over and pushing at everything that looks like a crack, despite the lack of outward signs of progress—is not a skill everybody has. Psychologists nowadays call it “grit,” and it’s impossible to do math without it. It’s easy to lose sight of the importance of work, because mathematical inspiration, when it finally does come, can feel effortless and instant. I remember the first theorem I ever proved; I was in college, working on my senior thesis, and I was completely stuck. One night I was at an editorial meeting of the campus literary magazine, drinking red wine and participating fitfully in the discussion of a somewhat boring short story, when all at once something turned over in my mind and I understood how to get past the block. No details, but it didn’t matter; there was no doubt in my mind that the thing was done.

  That’s the way mathematical creation often presents itself. Here’s the French mathematician Henri Poincaré’s famous account of a geometric breakthrough he made in 1881:

  Having reached Coutances, we entered an omnibus to go some place or other. At the moment when I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’s sake I verified the result at my leisure.*

  But it didn’t really happen in the space of a footstep, Poincaré explains. That moment of inspiration is the product of weeks of work, both conscious and unconscious, which somehow prepare the mind to make the necessary connection of ideas. Sitting around waiting for inspiration leads to failure, no matter how much of a whiz kid you are.

  It can be hard for me to make this case, because I was one of those prodigious kids myself. I knew I was going to be a mathematician when I was six years old. I took courses way above my grade level and won a neckful of medals in math contests. And I was pretty sure, when I went off to college, that the competitors I knew from Math Olympiad were the great mathematicians of my generation. It didn’t exactly turn out that way. That group of young stars produced many excellent mathematicians, like Terry Tao, the Fields Medal−winning harmonic analyst. But most of the mathematicians I work with now weren’t ace mathletes at thirteen; they developed their abilities and talents on a different timescale. Should they have given up in middle school?

  What you learn after a long time in math—and I think the lesson applies much more broadly—is that there’s always somebody ahead of you, whether they’re right there in class with you or not. People just starting out look to people with good theorems, people with some good theorems look to people with lots of good theorems, people with lots of good theorems look to people with Fields Medals, people with Fields Medals look to the “inner circle” Medalists, and those people can always look toward the dead. Nobody ever looks in the mirror and says, “Let’s face it, I’m smarter than Gauss.” And yet, in the last hundred years, the joined effort of all these dummies-compared-to-Gauss has produced the greatest flowering of mathematical knowledge the world has ever seen.

  Mathematics, mostly, is a communal enterprise, each advance the product of a huge network of minds working toward a common purpose, even if we accord special honor to the person who places the last stone in the arch. Mark Twain is good on this: “It takes a thousand men to invent a telegraph, or a steam engine, or a phonograph, or a telephone or any other important thing—and the last man gets the credit and we forget the others.”

  It’s something like football. There are moments, of course, when one player seizes control of the game totally, and these are moments we remember and honor and recount for a long time afterward. But they’re not the normal mode of football, and they’re not the way most games are won. When the quarterback completes a dazzling touchdown pass to a streaking wide receiver, you are seeing the work of many people in concert: not only the quarterback and the receiver, but the offensive linemen who prevented the defense from breaking through just long enough to allow the quarterback to set and throw, that prevention in turn enabled by the running back who pretended to take a handoff in order to distract the attention of the defenders for a critical moment; and then, too, there’s the offensive coordinator who called the play, and his many clipboarded assistants, and the training staff who keep the players in condition to run and throw . . . One doesn’t call all those people geniuses. But they create the conditions under which genius can take place.

  Terry Tao writes:

  The popular image of the lone (and possibly slightly mad) genius—who ignores the literature and other conventional wisdom and manages by some inexplicable inspiration (enhanced, perhaps, with a liberal dash of suffering) to come up with a breathtakingly original solution to a problem that confounded all the experts—is a charming and romantic image, but also a wildly inaccurate one, at least in the world of modern mathematics. We do have spectacular, deep and remarkable results and insights in this subject, of course, but they are the hard-won and cumulative achievement of years, decades, or even centuries of steady work and progress of many good and great mathematicians; the advance from one stage of understanding to the next can be highly non-trivial, and sometimes rather unexpected, but still builds upon the foundation of earlier work rather than starting totally anew. . . . Actually, I find the reality of mathematical research today—in which progress is obtained naturally and cumulatively as a consequence of hard work, directed by intuition, literature, and a bit of luck—to be far more satisfying than the romantic image that I had as a student of mathematics being advanced primarily by the mystic inspirations of some rare breed of “geniuses.”

  It’s not wrong to say Hilbert was a genius. But it’s more right to say that what Hilbert accomplished was genius. Genius is a thing that happens, not a kind of person.

  POLITICAL LOGIC

  Political logic is not a formal system in the sense that Hilbert and the mathematical logicians meant, but mathematicians with a formalist outlook couldn’t help but approach politics with the same kind of methodological sympathies. They were encouraged in this by Hilbert himself, who in his 1918 lecture “Axiomatic Thought” advocated that the other sciences adopt the axiomatic approach that had been so successful in mathematics.

  For example, Gödel, whose theorem ruled out the possibility of definitively banishing contradiction from arithmetic, was also worried about the Constitution, which he was studying in preparation for his 1948 U.S. citizenship test. In his view, the document contained a contradiction that could allow a Fascist dictatorship to take over the country in a perfectly constitutional manner. Gödel’s friends Albert Einstein and Oskar Morgenstern begged him to avoid this matter in his exam, but, as Morgenstern recalls it, the conversation ended up going like this:

  The examiner: Now, Mr. Gödel, where do you come from?

  Gödel: Where I come from? Austria.

  The examiner: What kind of government did you have in Austria?

  Gödel: It was a republic, but the constitution was such that it finally was changed into a dictatorship.

  The examiner: Oh! This is very bad. This could not happen in this country.

  Gödel: Oh, yes, I can prove it.

  Fortunately, the examiner hurriedly changed the subject and Gödel’s citizenship was duly granted. As to the nature of the contradiction Gödel found in the Constitution, it seems to have been lost to mathematical history. Perhaps for the best!

  —

  Hilbert’s commitment to logical principle and deduction often led him, like Condorcet, to adopt a surprisingly modern outlook in non-mathematical matters.* At some political cost to himself, he refused to sign the 1914 Declaration to the Cultural World, which defended the kaiser’s war in Europe with a long list of denials, each one starting “It is not true”: “It is not true that Germany violated the neutrality of Belgium,” and so on. Many of the greatest German scientists, like Felix Klein, Wilhelm Roentgen, and Max Planck, signed the declaration. Hilbert said, quite simply, that he was unable to verify to his exacting standards that the assertions in question were not true.

  A year later, when the faculty at Göttingen balked at offering a position to the great algebraist Emmy Noether, arguing that students could not possibly be asked to learn mathematics from a woman, Hilbert responded: “I do not see how the sex of the candidate is an argument against her admission. We are a university, not a bathhouse.”

  But reasoned analysis of politics has its limits. As an old man in the 1930s, Hilbert seemed quite unable to grasp what was happening to his home country as the Nazis consolidated their power. His first PhD student, Otto Blumenthal, visited Hilbert in Göttingen in 1938 to celebrate his seventy-sixth birthday. Blumenthal was a Christian but came from a Jewish family, and for that reason had been removed from his academic position at Aachen. (It was the same year that Abraham Wald, in German-occupied Austria, left for the United States.)

  Constance Reid, in her biography of Hilbert, recounts the conversation at the birthday party:

  “What subjects are you lecturing on this semester?” Hilbert asked.

  “I do not lecture anymore,” Blumenthal gently reminded him.

  “What do you mean, you do not lecture?”

  “I am not allowed to lecture anymore.”

  “But that is completely impossible! This cannot be done. Nobody has the right to dismiss a professor unless he has committed a crime. Why do you not apply for justice?”

  THE PROGRESS OF THE HUMAN MIND

  Condorcet, too, held fast to his formalist ideas about politics even when they didn’t conform well to reality. The existence of Condorcet cycles meant that any voting system that obeyed his basic, seemingly inarguable axiom—when the majority prefers A to B, B cannot be the winner—can fall prey to self-contradiction. Condorcet spent much of the last decade of his life grappling with the problem of the cycles, developing more and more intricate voting systems intended to evade the problem of collective inconsistency. He never succeeded. In 1785 he wrote, rather forlornly, “We cannot usually avoid being presented with decisions of this kind, which we might call equivocal, except by requiring a large plurality or allowing only very enlightened men to vote. . . . If we cannot find voters who are sufficiently enlightened, we must avoid making a bad choice by accepting as candidates only those men in whose competence we can trust.”
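The self-contradiction Condorcet ran into can be seen in miniature. Here is a small sketch (my own illustration, not from the book) of a Condorcet cycle: three voters, three candidates, and pairwise majority rule.

```python
# Three voters, each ranking candidates from most to least preferred.
ballots = [
    ("A", "B", "C"),  # voter 1: A > B > C
    ("B", "C", "A"),  # voter 2: B > C > A
    ("C", "A", "B"),  # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a majority of voters rank candidate x above candidate y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Check each pairwise contest: the majority's preference is cyclic.
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} to {y}: {majority_prefers(x, y)}")
# Prints True for all three pairs: A beats B, B beats C, and C beats A.
```

Since the majority prefers A to B, B to C, and C to A, Condorcet’s axiom rules out every candidate, and no winner can be named at all.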

  But the problem wasn’t the voters; it was the math. Condorcet, we now understand, was doomed to failure from the start. Kenneth Arrow, in his 1951 PhD thesis, proved that even a much weaker set of axioms than Condorcet’s, a set of requirements that seem as hard to doubt as Peano’s rules of arithmetic, leads to paradoxes.* It was a work of great elegance, which helped earn Arrow a Nobel Prize in economics in 1972, but it surely would have disappointed Condorcet, just as Gödel’s Theorem had disappointed Hilbert.

  Or maybe not—Condorcet was a tough man to disappoint. When the Revolution gathered speed, his mild-mannered brand of republicanism was quickly crowded out by the more radical Jacobins; Condorcet was first politically marginalized, then forced into hiding to avoid the guillotine. And yet Condorcet’s belief in the inexorability of progress guided by reason and math didn’t desert him. Sequestered in a Paris safe house, knowing he might not have much time left, he wrote his Sketch for a Historical Picture of the Progress of the Human Mind, laying out his vision of the future. It is an astonishingly optimistic document, describing a world from which the errors of royalism, sex prejudice, hunger, and old age would be eliminated in turn by the force of science. This passage is typical:

  May it not be expected that the human race will be meliorated by new discoveries in the sciences and the arts, and, as an unavoidable consequence, in the means of individual and general prosperity; by farther progress in the principles of conduct, and in moral practice; and lastly, by the real improvement of our faculties, moral, intellectual and physical, which may be the result either of the improvement of the instruments which increase the power and direct the exercise of those faculties, or of the improvement of our natural organization itself?

  Nowadays, the Sketch is best known indirectly; it inspired Thomas Malthus, who considered Condorcet’s predictions hopelessly sunny, to write his much more famous, and much bleaker, account of humanity’s future.

  Shortly after the above passage was written, in March 1794 (or, in the rationalized revolutionary calendar, in Germinal of Year 2) Condorcet was captured and arrested. Two days later he was found dead—some say it was suicide, others that he was murdered.

  Just as Hilbert’s style of mathematics persisted despite the destruction of his formal program by Gödel, Condorcet’s approach to politics survived his demise. We no longer hope to find voting systems that satisfy his axiom. But we have committed ourselves to Condorcet’s more fundamental belief, that a quantitative “social mathematics”—what we now call “social science”—ought to have a part in determining the proper conduct of government. These were “the instruments which increase the power and direct the exercise of [our] faculties” that Condorcet wrote about with such vigor in the Sketch.

  Condorcet’s idea is so thoroughly intertwined with the modern way of doing political business that we hardly see it as a choice. But it is a choice. I think it’s the right one.

  HOW TO BE RIGHT

  Between my sophomore and junior years of college, I had a summer job working for a researcher in public health. The researcher—it will be clear in a minute why I don’t use his name—wanted to hire a math major because he wanted to know how many people were going to have tuberculosis in the year 2050. That was my summer job, to figure that out. The researcher gave me a big folder of papers about tuberculosis: how transmissible it was under various circumstances, the typical course of infection and the length of the maximally contagious period, survival curves and medication compliance rates and breakdowns of all of the above by age, race, sex, and HIV status. Big folder. Lots of papers. And I got to work, doing what math majors do: I made a model of TB prevalence, using the data the researcher had given me to estimate how the TB infection rates in different population groups would change and interact over time, decade by decade, until 2050, when the simulation terminated.

  And what I learned was this: I did not have a clue how many people were going to have tuberculosis in the year 2050. Each one of those empirical studies had some uncertainty built in; they thought the transmission rate was 20%, but maybe it was 13%, or maybe it was 25%, though they were pretty sure it wasn’t 60% or 0%. Each one of these little local uncertainties percolated through the simulation, and the uncertainties about different parameters of the model fed back into each other, and by 2050, the noise had engulfed the signal. I could make the simulation come out any which way. Maybe there was going to be no such thing as tuberculosis in 2050, or maybe most of the world’s population would be infected. I had no principled way to choose.
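The way those local uncertainties engulf the signal can be sketched in a few lines. This is an illustrative toy, not the author’s actual model: the prevalence equation, the starting prevalence, and the parameter ranges are all assumptions of mine, chosen only to show how a modest spread in inputs compounds over decades.

```python
import random

random.seed(0)

def simulate(transmission, recovery, decades=6):
    """Toy prevalence model: infected fraction updated decade by decade."""
    infected = 0.01  # assumed starting prevalence of 1%
    for _ in range(decades):
        new_cases = transmission * infected * (1 - infected)
        infected = infected + new_cases - recovery * infected
        infected = max(0.0, min(1.0, infected))  # keep it a valid fraction
    return infected

# Each parameter is known only to within a range, as in the studies
# described above: transmission "about 20%, but maybe 13%, maybe 25%."
outcomes = []
for _ in range(10_000):
    t = random.uniform(0.13, 0.25)  # plausible transmission rates
    r = random.uniform(0.05, 0.20)  # plausible recovery/removal rates
    outcomes.append(simulate(t, r))

print(f"lowest projection:  {min(outcomes):.1%}")
print(f"highest projection: {max(outcomes):.1%}")
```

Run the sampling many times and the spread between the lowest and highest projections dwarfs any single “best guess”: depending on which values inside the plausible ranges you pick, the epidemic shrinks or grows, and the model gives you no principled way to choose between those futures.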

  This was not what the researcher wanted to hear. It was not what he was paying me for. He was paying me for a number, and he patiently repeated his request for one. I know there’s uncertainty, he said, that’s how medical research is, I get that, just give me your best guess. It didn’t matter how much I protested that any single guess would be worse than no guess at all. He insisted. And he was my boss, and eventually I gave in. I have no doubt he told many people, afterward, that X million people were going to have tuberculosis in the year 2050. And I’ll bet if anyone asked him how he knew this, he would say, I hired a guy who did the math.
