How Not to Be Wrong: The Power of Mathematical Thinking

by Jordan Ellenberg


  At this point I’m sometimes asked, “Why is the product of no primes 1, and not 0?” Here’s one slightly convoluted explanation: If you take the product of some set of primes, like 2 and 3, but then divide away the very primes you multiplied, you ought to be left with the product of nothing at all; and 6 divided by 6 is 1, not 0. (The sum of no numbers, on the other hand, is indeed 0.)
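
  Python happens to agree with this convention, which makes for a tiny sanity check. (A sketch of mine, not the book's; math.prod is the standard-library product function.)

    import math

    primes = [2, 3]
    product = math.prod(primes)   # 6
    print(product // (2 * 3))     # dividing away the primes we multiplied: 1
    print(math.prod([]))          # the product of no numbers at all: 1
    print(sum([]))                # the sum of no numbers, by contrast: 0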

  The primes are the atoms of number theory, the basic indivisible entities of which all numbers are made. As such, they’ve been the object of intense study ever since number theory started. One of the first theorems ever proved in number theory is that of Euclid, which tells us that the primes are infinite in number; we will never run out, no matter how far along the number line we let our minds range.

  But mathematicians are greedy types, not inclined to be satisfied with a mere assertion of infinitude. After all, there’s infinite and then there’s infinite. There are infinitely many powers of 2, but they’re very rare. Among the first one thousand numbers, there are only ten of them:

  1, 2, 4, 8, 16, 32, 64, 128, 256, and 512.

  There are infinitely many even numbers, too, but they’re much more common: exactly 500 out of the first 1,000 numbers. In fact, it’s pretty apparent that out of the first N numbers, just about (1/2)N will be even.
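
  Both tallies are easy to confirm by brute force; a minimal sketch (mine, not the book's):

    # Tally powers of 2 and even numbers among the first 1,000 numbers.
    powers_of_two = [n for n in range(1, 1001) if n & (n - 1) == 0]
    evens = [n for n in range(1, 1001) if n % 2 == 0]
    print(len(powers_of_two), powers_of_two)  # 10 of them: 1, 2, 4, ..., 512
    print(len(evens))                         # exactly 500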

  Primes, it turns out, are intermediate—more common than the powers of 2 but rarer than even numbers. Among the first N numbers, about N/log N are prime; this is the Prime Number Theorem, proven at the end of the nineteenth century by the number theorists Jacques Hadamard and Charles-Jean de la Vallée Poussin.
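
  You can watch the theorem take hold numerically; a sketch of mine (trial division is slow but honest, and log here means the natural logarithm):

    import math

    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    for N in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
        actual = sum(1 for n in range(2, N + 1) if is_prime(n))
        print(N, actual, round(N / math.log(N)))

  The approximation undershoots at these modest sizes; the ratio creeps toward 1 only very slowly, but the trend is unmistakable.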

  A NOTE ON THE LOGARITHM, AND THE FLOGARITHM

  It has come to my attention that hardly anybody knows what the logarithm is. Let me take a step toward fixing this. The logarithm of a positive number N, called log N, is the number of digits it has.

  Wait, really? That’s it?

  No. That’s not really it. We can call the number of digits the “fake logarithm,” or flogarithm. It’s close enough to the real thing to give the general idea of what the logarithm means in a context like this one. The flogarithm (whence also the logarithm) is a very slowly growing function indeed: the flogarithm of a thousand is 4, the flogarithm of a million, a thousand times greater, is 7, and the flogarithm of a billion is still only 10.*
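
  In code, the flogarithm is one line, and comparing it with the honest base-10 logarithm shows how close the fake is to the real thing (a sketch of mine):

    import math

    def flog(n):
        # The "fake logarithm": the number of decimal digits of n.
        return len(str(n))

    for n in (1_000, 1_000_000, 1_000_000_000):
        print(n, flog(n), math.log10(n) + 1)  # 4 vs 4.0, 7 vs 7.0, 10 vs 10.0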

  NOW BACK TO PRIME CLUSTERS

  The Prime Number Theorem says that, among the first N integers, a proportion of about 1/log N of them are prime. In particular, prime numbers get less and less common as the numbers get bigger, though the decrease is very slow; a random number with twenty digits is half as likely to be prime as a random number with ten digits.

  Naturally, one imagines that the more common a certain type of number, the smaller the gaps between instances of that type of number. If you’re looking at an even number, you never have to travel farther than two numbers forward to encounter the next even; in fact, the gaps between the even numbers are always exactly of size 2. For the powers of 2, it’s a different story. The gaps between successive powers of 2 grow exponentially, getting bigger and bigger with no retreats as you traverse the sequence; once you get past 16, for instance, you will never again see two powers of 2 separated by a gap of size 15 or less.

  Those two problems are easy, but the question of gaps between consecutive primes is harder. It’s so hard that, even after Zhang’s breakthrough, it remains a mystery in many respects.

  And yet we think we know what to expect, thanks to a remarkably fruitful point of view: we think of primes as random numbers. The reason the fruitfulness of this viewpoint is so remarkable is that the viewpoint is so very, very false. Primes are not random! Nothing about them is arbitrary or subject to chance. Quite the opposite: we take them as immutable features of the universe, and carve them on the golden records we shoot out into interstellar space to prove to the ETs that we’re no dopes.

  The primes are not random, but it turns out that in many ways they act as if they were. For example, when you divide a random whole number by 3, the remainder is either 0, 1, or 2, and each case arises equally often. When you divide a big prime number by 3, the division can’t come out even; otherwise, the so-called prime would be divisible by 3, which would mean it wasn’t really a prime at all. But an old theorem of Dirichlet tells us that remainder 1 shows up about as often as remainder 2, just as is the case for random numbers. So as far as “remainder when divided by 3” goes, prime numbers, apart from not being multiples of 3, look random.
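
  Dirichlet’s theorem is about the infinite limit, but the even-handedness is already visible in a small sample; a sketch of mine, again with naive trial division:

    from collections import Counter

    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    counts = Counter(p % 3 for p in range(2, 100_000) if is_prime(p))
    print(counts)  # remainders 1 and 2 each claim roughly half the primes;
                   # only the prime 3 itself leaves remainder 0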

  What about the gaps between consecutive primes? You might think that, because prime numbers get rarer and rarer as numbers get bigger, they also get farther and farther apart. On average, that’s indeed the case. But what Zhang proved is that there are infinitely many pairs of primes that differ by at most 70 million. In other words, the gap between one prime and the next is bounded by 70 million infinitely often—thus, the “bounded gaps” conjecture.

  Why 70 million? Just because that’s what Zhang was able to prove. In fact, the release of his paper set off an explosion of activity, with mathematicians from around the world working together in a “Polymath,” a sort of frenzied online math kibbutz, to narrow the gap still more using variations on Zhang’s method. By July 2013, the collective had shown that there were infinitely many gaps of size at most 5,414. In November, a just-fledged PhD in Montreal, James Maynard, knocked the bound down to 600, and Polymath scrambled into action to combine his insights with those of the hive. By the time you read this, the bound will no doubt be smaller still.

  At first glance, the bounded gaps might seem a miraculous phenomenon. If the primes are tending to be farther and farther apart, what’s causing there to be so many pairs that are close together? Is it some kind of prime gravity?

  Nothing of the kind. If you strew numbers at random, it’s very likely that some pairs will, by chance, land very close together, just as points dropped randomly in a plane form visible clusters.

  It’s not hard to compute that, if prime numbers behaved like random numbers, you’d see precisely the behavior that Zhang demonstrated. Even more: you’d expect to see infinitely many pairs of primes that are separated by only 2, like 3-5 and 11-13. These are the so-called twin primes, whose infinitude remains conjectural.

  (A short computation follows. If you’re not on board, avert your eyes and rejoin the text where it says “And a lot of twin primes . . .”)

  Remember: among the first N numbers, the Prime Number Theorem tells us that about N/log N of them are primes. If these were distributed randomly, each number n would have a 1/log N chance of being prime. The chance that n and n + 2 are both prime should thus be about (1/log N) × (1/log N) = (1/log N)^2. So how many pairs of primes separated by 2 should we expect to see? There are about N pairs (n, n + 2) in the range of interest, and each one has a (1/log N)^2 chance of being a twin prime, so one should expect to find about N/(log N)^2 twin primes in the interval.
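
  Here is that back-of-the-envelope estimate run against reality, in a sketch of mine (N = 100,000, log natural):

    import math

    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))

    N = 100_000
    twins = sum(1 for n in range(2, N - 1) if is_prime(n) and is_prime(n + 2))
    print(twins)                        # the actual count of twin pairs below N
    print(round(N / math.log(N) ** 2))  # the crude random-primes estimate

  At this size the crude estimate lands in the right ballpark but noticeably low; the refinement in the next paragraph accounts for much of the shortfall.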

  There are some deviations from pure randomness whose small effects number theorists know how to handle. The main point is that n being prime and n + 2 being prime are not independent events; n being prime makes it somewhat more likely that n + 2 is prime, which means our use of the product (1/log N) × (1/log N) isn’t quite right. (One issue: if n is prime and bigger than 2, it’s odd, which means n + 2 is odd as well, which makes n + 2 more likely to be prime.) G. H. Hardy, of the “unnecessary perplexities,” together with his lifelong collaborator J. E. Littlewood, worked out a more refined prediction taking these dependencies into account, predicting that the number of twin primes should in fact be about 32% greater than N/(log N)^2. This better approximation gives a prediction that the number of twin primes less than a quadrillion should be about 1.1 trillion, a pretty good match for the actual figure of 1,177,209,242,304. That’s a lot of twin primes.
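
  The quadrillion-scale prediction itself is a one-liner; a sketch of mine, using 1.32 for the 32% correction:

    import math

    N = 10 ** 15  # a quadrillion
    print(f"{1.32 * N / math.log(N) ** 2:.3g}")  # about 1.11e+12, i.e., 1.1 trillion
    print(1_177_209_242_304)                     # the actual count, for comparison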

  And a lot of twin primes is exactly what number theorists expect to find, no matter how big the numbers get—not because we think there’s a deep, miraculous structure hidden in the primes, but precisely because we don’t think so. We expect the primes to be tossed around at random like dirt. If the twin primes conjecture were false, that would be a miracle, requiring that some hitherto unknown force was pushing the primes apart.

  Not to pull back the curtain too much, but a lot of famous conjectures in number theory work this way. The Goldbach conjecture, that every even number greater than 2 is the sum of two primes, is another one that would have to be true if primes behaved like random numbers. So is the conjecture that the primes contain arithmetic progressions of any desired length, whose resolution by Ben Green and Terry Tao in 2004 helped win Tao a Fields Medal.

  The most famous of all is the conjecture made by Pierre de Fermat in 1637, which asserted that the equation

  A^n + B^n = C^n

  has no solutions with A, B, C, and n positive whole numbers with n greater than 2. (When n is equal to 2, there are lots of solutions, like 3^2 + 4^2 = 5^2.)
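
  A brute-force search makes the contrast vivid; a quick sketch of mine, checking every A and B up to 50:

    # Count solutions of A^n + B^n = C^n with 1 <= A <= B <= 50.
    def solutions(n, limit=50):
        nth_powers = {c ** n for c in range(1, 2 * limit)}
        return [(a, b) for a in range(1, limit + 1)
                       for b in range(a, limit + 1)
                       if a ** n + b ** n in nth_powers]

    print(len(solutions(2)))  # plenty: (3, 4), (5, 12), (6, 8), and so on
    print(len(solutions(3)))  # 0, just as Fermat asserted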

  Everybody strongly believed the Fermat conjecture was true, just as we believe the twin primes conjecture now; but no one knew how to prove it* until the breakthrough of Princeton mathematician Andrew Wiles in the 1990s. We believed it because perfect nth powers are very rare, and the chance of finding two numbers that summed to a third in a random set of such extreme scarcity is next to nil. Even more: most people believe that there are no solutions to the generalized Fermat equation

  A^p + B^q = C^r

  when the exponents p, q, and r are big enough. A banker in Dallas named Andrew Beal will give you a million dollars if you can prove that the equation has no solutions for which p, q, and r are all greater than 3 and A, B, and C share no prime factor.* I fully believe that the statement is true, because it would be true if perfect powers were random; but I think we’ll have to understand something truly new about numbers before we can make our way to a proof. I spent a couple of years, along with a bunch of collaborators, proving that the generalized Fermat equation has no solution with p = 4, q = 2, and r bigger than 4. Just for that one case, we had to develop some novel techniques, and it’s clear they won’t be enough to cover the full million-dollar problem.

  Despite the apparent simplicity of the bounded gaps conjecture, Zhang’s proof requires some of the deepest theorems of modern mathematics.* Building on the work of many predecessors, Zhang is able to prove that the prime numbers look random in the first way we mentioned, concerning the remainders obtained after division by many different integers. From there,* he can show that the prime numbers look random in a totally different sense, having to do with the sizes of the gaps between them. Random is random!

  Zhang’s success, along with related work of other contemporary big shots like Ben Green and Terry Tao, points to a prospect even more exciting than any individual result about primes: that we might, in the end, be on our way to developing a richer theory of randomness. Say, a way of specifying precisely what we mean when we say that numbers act as if randomly scattered with no governing structure, despite arising from completely deterministic processes. How wonderfully paradoxical: what helps us break down the final mysteries about prime numbers may be new mathematical ideas that structure the concept of structurelessness itself.

  NINE

  THE INTERNATIONAL JOURNAL OF HARUSPICY

  Here’s a parable I learned from the statistician Cosma Shalizi.

  Imagine yourself a haruspex; that is, your profession is to make predictions about future events by sacrificing sheep and then examining the features of their entrails, especially their livers. You do not, of course, consider your predictions to be reliable merely because you follow the practices commanded by the Etruscan deities. That would be ridiculous. You require evidence. And so you and your colleagues submit all your work to the peer-reviewed International Journal of Haruspicy, which demands without exception that all published results clear the bar of statistical significance.

  Haruspicy, especially rigorous evidence-based haruspicy, is not an easy gig. For one thing, you spend a lot of your time spattered with blood and bile. For another, a lot of your experiments don’t work. You try to use sheep guts to predict the price of Apple stock, and you fail; you try to model Democratic vote share among Hispanics, and you fail; you try to estimate global oil supply, and you fail again. The gods are very picky and it’s not always clear precisely which arrangement of the internal organs and which precise incantations will reliably unlock the future. Sometimes different haruspices run the same experiment and it works for one but not the other—who knows why? It’s frustrating. Some days you feel like chucking it all and going to law school.

  But it’s all worth it for those moments of discovery, where everything works, and you find that the texture and protrusions of the liver really do predict the severity of the following year’s flu season, and, with a silent thank-you to the gods, you publish.

  You might find this happens about one time in twenty.

  That’s what I’d expect, anyway. Because I, unlike you, don’t believe in haruspicy. I think the sheep’s guts don’t know anything about the flu data, and when they match up, it’s just luck. In other words, in every matter concerning divination from entrails, I’m a proponent of the null hypothesis. So in my world, it’s pretty unlikely that any given haruspectic experiment will succeed.

  How unlikely? The standard threshold for statistical significance, and thus for publication in IJoH, is fixed by convention to be a p-value of .05, or 1 in 20. Remember the definition of the p-value; this says precisely that if the null hypothesis is true for some particular experiment, then the chance that that experiment will nonetheless return a statistically significant result is only 1 in 20. If the null hypothesis is always true—that is, if haruspicy is undiluted hocus-pocus—then only one in twenty experiments will be publishable.
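
  Simulation makes the point concrete. In the sketch below (mine, not the journal’s), every experiment compares two samples drawn from the same distribution, so the null hypothesis is true by construction; about one run in twenty nonetheless clears the significance bar:

    import random, statistics

    def experiment(n=100):
        # Two groups from the SAME distribution: any "effect" is pure chance.
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        # z statistic for the difference of means (variance known to be 1);
        # |z| > 1.96 corresponds to a two-sided p-value below .05.
        z = (statistics.mean(a) - statistics.mean(b)) / (2 / n) ** 0.5
        return abs(z) > 1.96

    trials = 10_000
    print(sum(experiment() for _ in range(trials)) / trials)  # about 0.05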

  And yet there are hundreds of haruspices, and thousands of ripped-open sheep, and even one in twenty divinations provides plenty of material to fill each issue of the journal with novel results, demonstrating the efficacy of the methods and the wisdom of the gods. A protocol that worked in one case and gets published usually fails when another haruspex tries it; but experiments without statistically significant results don’t get published, so no one ever finds out about the failure to replicate. And even if word starts getting around, there are always small differences the experts can point to that explain why the follow-up study didn’t succeed; after all, we know the protocol works, because we tested it and it had a statistically significant effect!

  Modern medicine and social science are not haruspicy. But a steadily louder drum circle of dissident scientists has been pounding out an uncomfortable message in recent years: there’s probably a lot more entrail reading in the sciences than we’d like to admit.

  The loudest drummer is John Ioannidis, a Greek high school math star turned biomedical researcher whose 2005 paper “Why Most Published Research Findings Are False” touched off a fierce bout of self-criticism (and a second wave of self-defense) in the clinical sciences. Some papers plead for attention with a title more dramatic than the claims made in the body, but not this one. Ioannidis takes seriously the idea that entire specialties of medical research are “null fields,” like haruspicy, in which there are simply no actual effects to be found. “It can be proven,” he writes, “that most claimed research findings are false.”

  “Proven” is a little more than this mathematician is willing to swallow, but Ioannidis certainly makes a strong case that his radical claim is not implausible. The story goes like this. In medicine, most interventions we try won’t work and most associations we test for are going to be absent. Think about tests of genetic association with diseases: there are lots of genes on the genome, and most of them don’t give you cancer or depression or make you fat or have any recognizable direct effect at all. Ioannidis asks us to consider the case of genetic influence on schizophrenia. Such an influence is almost certain, given what we know about the heritability of the disorder. But where is it on the genome? Researchers might cast their net wide—it’s the Big Data era, after all—looking at a hundred thousand genes (more precisely: genetic polymorphisms) to see which ones are associated with schizophrenia. Ioannidis suggests that around ten of these actually have some clinically relevant effect.

  And the other 99,990? They’ve got nothing to do with schizophrenia. But one in twenty of them, or just about five thousand, are going to pass the p-value test of statistical significance. In other words, among the “OMG I found the schizophrenia gene” results that might get published, there are five hundred times as many bogus ones as real ones.

  And that’s assuming that all the genes that really do have an effect on schizophrenia pass the test! As we saw with Shakespeare and basketball, it’s very possible for a real effect to be rejected as statistically insignificant if the study isn’t high powered enough to find it. If the studies are underpowered, the genes that truly do make a difference might pass the significance test only half the time; but that means that of the genes certified by p-value to cause schizophrenia, only five really do so, as against the five thousand pretenders that passed the test by luck alone.
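
  The arithmetic behind those numbers fits in a few lines; a sketch of mine, with Ioannidis’s round figures and 50% power standing in for the underpowered studies:

    genes = 100_000
    real = 10      # genes with a genuine effect on schizophrenia
    alpha = 0.05   # the significance threshold: 1 in 20
    power = 0.5    # the chance an underpowered study detects a real effect

    false_positives = (genes - real) * alpha  # about 5,000 lucky pretenders
    true_positives = real * power             # only 5 genuine hits
    print(false_positives, true_positives)
    # Among the genes certified "significant," the genuine fraction is tiny:
    print(true_positives / (true_positives + false_positives))  # about 0.001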

  A good way to keep track of the relevant quantities is by drawing circles in a box:

  The size of each circle represents the number of genes in each category. On the left half of the box we have the negatives, the genes that don’t pass the significance test, and on the right half we have the positives. The two top squares represent the tiny population of genes that actually do affect schizophrenia, so the genes in the top right are the true positives (genes that matter, and the test says they matter) while the top left represents the false negatives (genes that matter, but the test says they don’t). In the bottom row, you have the genes that don’t matter; the true negatives are the big circle on the bottom left, the false positives the circle on the bottom right.

  [Figure: circles in a box showing the four quadrants of test outcomes.]
