Nine Algorithms That Changed the Future: The Ingenious Ideas That Drive Today's Computers


by John MacCormick


  your PPN = 2^8 = 3 (clock size 11)

  Arnold's PPN = 2^9 = 6 (clock size 11)

  You can see the situation after this step in the figure on the following page. These public-private numbers are precisely analogous to the “public-private mixtures” that you made in the third step of the paint-mixing trick. There, you mixed one pot of the public color with one part of your private color to make your public-private mixture. Here, you have mixed your private number with the public numbers using power notation and clock arithmetic.

  Step 4. You and Arnold each separately take the other's public-private number and mix it in with your own private number.

  This is done according to the formula

  shared secret = (other person's PPN)^(private number) (clock size)

  Again this looks a little weird written out in words, but by consulting the table on the previous page, it works out simply in numbers:

  your shared secret = 6^8 = 4 (clock size 11)

  Arnold's shared secret = 3^9 = 4 (clock size 11)

  The final situation is shown in the figure on page 57.

  Naturally, your shared secret and Arnold's shared secret end up being the same number (in this case, 4). The fact that they agree depends on some sophisticated mathematics, but the basic idea is the same as before: although you mixed your ingredients in a different order, both you and Arnold used the same ingredients and therefore produced the same shared secret.
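
  To make the clock arithmetic concrete, here is a minimal Python sketch of the toy exchange above, assuming the private numbers 8 (yours) and 9 (Arnold's) implied by the formulas 2^8 and 2^9; Python's built-in pow(base, exponent, modulus) performs exactly this power-then-clock computation.

```python
def mix(base, private, clock_size):
    """Raise base to the private number, in clock arithmetic (modulo clock_size)."""
    return pow(base, private, clock_size)

# Public numbers, known to everyone (including Eve).
base, clock_size = 2, 11

# Private numbers, never transmitted (8 and 9, as implied by 2^8 and 2^9 above).
your_private, arnolds_private = 8, 9

# Step 3: each person publishes a public-private number (PPN).
your_ppn = mix(base, your_private, clock_size)        # 2^8 = 3 (clock size 11)
arnolds_ppn = mix(base, arnolds_private, clock_size)  # 2^9 = 6 (clock size 11)

# Step 4: each person mixes the other's PPN with their own private number.
your_secret = mix(arnolds_ppn, your_private, clock_size)     # 6^8 = 4
arnolds_secret = mix(your_ppn, arnolds_private, clock_size)  # 3^9 = 4

print(your_ppn, arnolds_ppn)        # 3 6
print(your_secret, arnolds_secret)  # 4 4
```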

  Real-life number-mixing, step 3: The public-private numbers (3 and 6), computed using powers and clock arithmetic, are available to anyone who wants them. The “2^8” shown below the 3 reminds us how the 3 was computed, but the fact that 3 = 2^8 in clock size 11 is not made public. Similarly, the “2^9” shown below the 6 remains private.

  And as with the earlier versions of this trick, Eve is left out in the cold. She knows the two public numbers (2 and 11), and she also knows the two public-private numbers (3 and 6). But she can't use any of this knowledge to compute the shared secret number, because she can't access either of the secret ingredients (the private numbers) held by you and Arnold.

  PUBLIC KEY CRYPTOGRAPHY IN PRACTICE

  The final version of the paint-mixing trick, mixing numbers using powers and clock arithmetic, is one of the ways that computers actually establish shared secrets on the internet. The particular method described here is called the Diffie-Hellman key exchange protocol, named for Whitfield Diffie and Martin Hellman, who first published the algorithm in 1976. Whenever you go to a secure website (one that starts with “https:” rather than “http:”), your own computer and the web server it's communicating with create a shared secret, using the Diffie-Hellman protocol or one of several alternatives that work in a similar way. And once this shared secret is established, the two computers can encrypt all their communication using a variant of the addition trick described earlier.

  Real-life number-mixing, step 4: Only you and Arnold can make the shared secret number, by combining together the elements shown with arrows, using powers and clock arithmetic.

  It's important to realize that when the Diffie-Hellman protocol is used in practice, the actual numbers involved are far larger than the examples we worked through here. We used a very small clock size (11), so that the calculations would be easy. But if you choose a small public clock size, then the number of possible private numbers is also small (since you can only use private numbers that are smaller than the clock size). And that means someone could use a computer to try out all the possible private numbers until they find one that produces your public-private number. In the example above, there were only 11 possible private numbers, so this system would be ludicrously easy to crack. In contrast, real implementations of the Diffie-Hellman protocol typically use a clock size that is a few hundred digits long, which creates an unimaginably large number of possible private numbers (much more than a trillion trillion). And even then, the public numbers must be chosen with some care, to make sure they have the correct mathematical properties—check out the box on the next page if you're interested in this.
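
  The brute-force attack described in this paragraph is easy to sketch for the toy clock size of 11: Eve simply tries every possible private number until one reproduces the published public-private number. (This is an illustrative sketch, not one of the book's own examples.)

```python
def crack_private_number(base, clock_size, public_private_number):
    """Eve's brute-force attack: try every candidate private number.

    Feasible only because the clock size (and hence the number of
    candidates) is tiny; with a clock size hundreds of digits long,
    this loop could never finish.
    """
    for candidate in range(1, clock_size):
        if pow(base, candidate, clock_size) == public_private_number:
            return candidate
    return None

# With the toy public numbers (base 2, clock size 11), Eve recovers the
# private number behind the public-private number 3 almost instantly.
print(crack_private_number(2, 11, 3))  # 8
```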

  The most important property for Diffie-Hellman public numbers is that the clock size must be a prime number—so it has no divisors other than 1 and itself. Another intriguing requirement is that the base must be a primitive root of the clock size. This means that the powers of the base eventually cycle through every possible value on the clock. If you look at the table on page 54, you'll notice that 2 and 6 are both primitive roots of 11, but 3 is not—the powers of 3 cycle through the values 3, 9, 5, 4, 1 and miss 2, 6, 7, 8, and 10.

  When choosing a clock size and base for the Diffie-Hellman protocol, certain mathematical properties must be satisfied.
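
  The primitive-root condition in the box above can be checked directly, by computing all the powers of a candidate base and seeing whether they visit every nonzero value on the clock. A small illustrative Python check, using the toy clock size 11:

```python
def is_primitive_root(base, clock_size):
    """Return True if the powers of base cycle through every nonzero
    value on a clock of the given (prime) size."""
    values = {pow(base, exponent, clock_size)
              for exponent in range(1, clock_size)}
    return len(values) == clock_size - 1

print(is_primitive_root(2, 11))  # True
print(is_primitive_root(6, 11))  # True
print(is_primitive_root(3, 11))  # False: powers of 3 only reach 3, 9, 5, 4, 1
```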

  The Diffie-Hellman approach described here is just one of many cunning techniques for communicating via (electronic) postcards. Computer scientists call Diffie-Hellman a key exchange algorithm. Other public key algorithms work differently and allow you to directly encrypt a message for your intended recipient, using public information announced by that recipient. In contrast, a key exchange algorithm allows you to establish a shared secret using the public information from the recipient, but the encryption itself is done via the addition trick. For most communication on the internet, this latter option—the one we have learned about in this chapter—is preferable, as it requires much less computational power.

  But there are some applications in which fully fledged public key cryptography is required. Perhaps the most interesting of these applications is digital signatures, which will be explained in chapter 9. As you will discover when you read that chapter, the flavor of the ideas in the fully fledged type of public key cryptography is similar to what we have already seen: secret information is “mixed” with public information in a mathematically irreversible way, just as paint colors can be mixed irreversibly. The most famous public key cryptosystem is the one known as RSA, after the three inventors who first published it: Ronald Rivest, Adi Shamir, and Leonard Adleman. Chapter 9 uses RSA as the main example of how digital signatures work.

  There is a fascinating and complex story behind the invention of these early public key algorithms. Diffie and Hellman were indeed the first people to publish Diffie-Hellman, in 1976. Rivest, Shamir, and Adleman were indeed the first to publish RSA, in 1978. But that is not the whole story! It was later discovered that the British government had already known of similar systems for several years. Unfortunately for the inventors of these precursors to Diffie-Hellman and RSA, they were mathematicians working in the British government's communications laboratory, GCHQ. The results of their work were recorded in secret internal documents and were not declassified until 1997.

  RSA, Diffie-Hellman, and other public key cryptosystems are not just ingenious ideas. They have evolved into commercial technologies and internet standards with great importance for businesses and individuals alike. The vast majority of the online transactions we perform every day could not be completed securely without public key cryptography. The RSA inventors patented their system in the 1970s, and their patent did not expire until late 2000. A celebratory party was held at the Great American Music Hall in San Francisco on the night the patent expired—a celebration, perhaps, of the fact that public key cryptography is here to stay.

  * * *

  1. For those who know about computer number systems, I'm referring here to decimal digits, not binary digits (bits). For those who know about logarithms, the conversion factor of 30% for transforming from bits to decimal digits comes from the fact that log₁₀ 2 ≈ 0.3.

  5

  Error-Correcting Codes: Mistakes That Fix Themselves

  It is one thing to show a man that he is in an error, and another to put him in possession of truth.

  —JOHN LOCKE, Essay Concerning Human Understanding (1690)

  These days, we're used to accessing computers whenever we need them. Richard Hamming, a researcher working at the Bell Telephone Company labs in the 1940s, was not so lucky: the company computer he needed was used by other departments and available to him only on weekends. You can imagine his frustration, therefore, at the crashes that kept recurring due to the computer's errors in reading its own data. Here is what Hamming himself had to say on the matter:

  Two weekends in a row I came in and found that all my stuff had been dumped and nothing was done. I was really aroused and annoyed because I wanted those answers and two weekends had been lost. And so I said, “Dammit, if the machine can detect an error, why can't it locate the position of the error and correct it?”

  There can be few more clear-cut cases of necessity being the mother of invention. Hamming had soon created the first ever error-correcting code: a seemingly magical algorithm that detects and corrects errors in computer data. Without these codes, our computers and communication systems would be drastically slower, less powerful, and less reliable than they are today.

  THE NEED FOR ERROR DETECTION AND CORRECTION

  Computers have three fundamental jobs. The most important job is to perform computations. That is, given some input data, the computer must transform the data in some way to produce a useful answer. But the ability to compute answers would be essentially useless without the other two very important jobs that computers perform: storing data and transmitting data. (Computers mostly store data in their memory and on disk drives. And they typically transmit data over the internet.) To emphasize this point, imagine a computer that could neither store nor transmit information. It would, of course, be almost useless. Yes, you could do some complex computations (for example, preparing an intricate financial spreadsheet detailing the budget for a company), but then you would be unable to send the results to a colleague or even to save the results so you could come back and work on them later. Therefore, transmission and storage of data are truly essential for modern computers.

  But there is a huge challenge associated with transmitting and storing data: the data must be exactly right—because in many cases, even one tiny mistake can render the data useless. As humans, we are also familiar with the need to store and transmit information without any errors. For example, if you write down someone's phone number, it is essential that every digit is recorded correctly and in the right order. If there is even one mistake in one of the digits, the phone number is probably useless to you or anyone else. And in some cases, errors in data can actually be worse than useless. For example, an error in the file that stores a computer program can make that program crash or do things it was not intended to. (It might even delete some important files or crash before you get a chance to save your work.) And an error in some computerized financial records could result in actual monetary loss (if, say, you thought you were buying a stock priced at $5.34 per share but the actual cost was $8.34).

  But, as humans, the amount of error-free information we need to store is relatively small, and it's not too hard to avoid mistakes just by checking carefully whenever you know some information is important—things like bank account numbers, passwords, e-mail addresses, and the like. On the other hand, the amount of information that computers need to store and transmit without making any errors is absolutely immense. To get some idea of the scale, consider this. Suppose you have some kind of computing device with a storage capacity of 100 gigabytes. (At the time of writing, this is the typical capacity of a low-cost laptop.) This 100 gigabytes is equivalent to about 15 million pages of text. So even if this computer's storage system makes just one error per million pages, there would still be (on average) 15 mistakes on the device when filled to capacity. And the same lesson applies to transmission of data too: if you download a 20-megabyte software program, and your computer misinterprets just one in every million characters it receives, there will probably still be over 20 errors in your downloaded program—every one of which could cause a potentially costly crash when you least expect it.
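
  For readers who like to see the arithmetic, here is the back-of-the-envelope calculation behind those two examples, using the same rough conversions as the text (these are order-of-magnitude figures, not exact ones):

```python
# Rough arithmetic behind the storage and download examples in the text.
pages_per_100_gb = 15_000_000          # the text's estimate for 100 GB of text

errors_per_page = 1 / 1_000_000        # one error per million pages
print(pages_per_100_gb * errors_per_page)   # 15.0 mistakes on a full device

download_chars = 20_000_000            # 20-megabyte download, ~one character per byte
char_error_rate = 1 / 1_000_000        # one misread character per million
print(download_chars * char_error_rate)     # 20.0 expected errors in the download
```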

  The moral of the story is that, for a computer, being accurate 99.9999% of the time is not even close to good enough. Computers must be able to store and transmit literally billions of pieces of information without making a single mistake. But computers have to deal with communication problems just like other devices. Telephones are a good example here: it's obvious that they don't transmit information perfectly, because phone conversations often suffer from distortions, static, or other types of noise. But telephones are not alone in their suffering: electrical wires are subject to all sorts of fluctuations; wireless communications suffer interference all the time; and physical media such as hard disks, CDs, and DVDs can be scratched, damaged, or simply misread because of dust or other physical interference. How on earth can we hope to achieve an error rate of less than one in many billions, in the face of such obvious communication errors? This chapter will reveal the ideas behind the ingenious computer science that makes this magic happen. It turns out that if you use the right tricks, even extremely unreliable communication channels can be used to transmit data with incredibly low error rates—so low that in practice, errors can be completely eliminated.

  THE REPETITION TRICK

  The most fundamental trick for communicating reliably over an unreliable channel is one that we are all familiar with: to make sure that some information has been communicated correctly, you just need to repeat it a few times. If someone dictates a phone number or bank account number to you over a bad telephone connection, you will probably ask them to repeat it at least once to make sure there were no mistakes.

  Computers can do exactly the same thing. Let's suppose a computer at your bank is trying to transmit your account balance to you over the internet. Your account balance is actually $5213.75, but unfortunately the network is not very reliable and every single digit has a 20% chance of being changed to something else. So the first time your balance is transmitted, it might arrive as $5293.75. Obviously, you have no way of knowing whether or not this is correct. All of the digits might be right, but one or more of them might be wrong and you have no way of telling. But by using the repetition trick, you can make a very good guess as to the true balance. Imagine that you ask for your balance to be transmitted five times, and receive the following responses:

  Notice that some of the transmissions have more than one digit wrong, and there's even one transmission (number 2) with no errors at all. The crucial point is that you have no way of knowing where the errors are, so there is no way you can pick out transmission 2 as being the correct one. Instead, what you can do is examine each digit separately, looking at all transmissions of that one digit, and pick the value that occurs most often. Here are the results again, with the most common digits listed at the end:

  Let's look at some examples to make the idea absolutely clear. Examining the first digit in the transmission, we see that in transmissions 1-4, the first digit was a 5, whereas in transmission 5, the first digit was a 7. In other words, four of the transmissions said “5” and only one said “7.” So although you can't be absolutely sure, the most likely value for the first digit of your bank balance is 5. Moving on to the second digit, we see that 2 occurred four times, and 4 only once, so 2 is the most likely second digit. The third digit is a bit more interesting, because there are three possibilities: 1 occurs three times, 9 occurs once, and 4 occurs once. But the same principle applies, and 1 is the most likely true value. By doing this for all the digits, you can arrive at a final guess for your complete bank balance: $5213.75, which in this case is indeed correct.
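
  The printed tables of the five transmissions are not reproduced in this excerpt, so the following Python sketch uses made-up received values that merely match the description above (transmission 2 error-free, a 7 as the first digit of one transmission, and so on) to show the digit-by-digit majority vote in action:

```python
from collections import Counter

def majority_vote(transmissions):
    """Recover a message by taking, at each position, the character
    that occurs most often across all the received copies."""
    return "".join(Counter(chars).most_common(1)[0][0]
                   for chars in zip(*transmissions))

# Hypothetical received copies of the true balance $5213.75; each copy may
# have zero, one, or several corrupted digits.
received = [
    "5293.75",
    "5213.75",
    "5243.45",
    "5413.75",
    "7219.73",
]

print(majority_vote(received))  # 5213.75
```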

  Well, that was easy. Have we solved the problem already? In some ways, the answer is yes. But you might be a little dissatisfied because of two things. First, the error rate for this communication channel was only 20%, and in some cases computers might need to communicate over channels that are much worse than that. Second, and perhaps more seriously, the final answer happened to be correct in the above example, but there is no guarantee that the answer will always be right: it is just a guess, based on what we think is most likely to be the true bank balance. Luckily, both of these objections can be addressed very easily: we just increase the number of retransmissions until the reliability is as high as we want.

  For example, suppose the error rate was 50% instead of the 20% in the last example. Well, you could ask the bank to transmit your balance 1000 times instead of just 5. Let's concentrate on just the first digit, since the others work out the same way. Since the error rate is 50%, about half of them will be transmitted correctly, as a 5, and the other half will be changed to some other random values. So there will be about 500 occurrences of 5, and only about 50 each of the other digits (0-4 and 6-9). Mathematicians can calculate the chances of one of the other digits coming up more often than the 5: it turns out that even if we transmitted a new bank balance every second using this method, we would have to wait many trillions of years before we expect to make a wrong guess for the bank balance. The moral of the story is that by repeating an unreliable message often enough, you can make it as reliable as you want. (In these examples, we assumed the errors occur at random. If, on the other hand, a malicious entity is deliberately interfering with the transmission and choosing which errors to create, the repetition trick is much more vulnerable. Some of the codes introduced later work well even against this type of malicious attack.)
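
  The exact probability calculation is not shown in the text, but a quick Monte Carlo simulation illustrates the trend: even at a 50% error rate, the chance that the majority vote on a single digit comes out wrong falls rapidly as the number of repetitions grows. (The text's case of 1000 repetitions is far too reliable for a failure ever to show up in a simulation like this.)

```python
import random

def simulate_failure_rate(repetitions, error_rate=0.5, trials=10_000):
    """Estimate how often majority voting on one digit gives the wrong answer,
    when each copy is corrupted (to a random other digit) with the given
    probability. Purely illustrative, not the exact calculation in the text."""
    true_digit = "5"
    other_digits = [d for d in "0123456789" if d != true_digit]
    failures = 0
    for _ in range(trials):
        votes = {}
        for _ in range(repetitions):
            received = (random.choice(other_digits)
                        if random.random() < error_rate else true_digit)
            votes[received] = votes.get(received, 0) + 1
        if max(votes, key=votes.get) != true_digit:
            failures += 1
    return failures / trials

for repetitions in (5, 25, 101):
    print(repetitions, simulate_failure_rate(repetitions))
```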

  So, by using the repetition trick, the problem of unreliable communication can be solved, and the chance of a mistake essentially eliminated. Unfortunately, the repetition trick is not good enough for modern computer systems. When transmitting a small piece of data like a bank balance, it is not too costly to retransmit 1000 times, but it would obviously be completely impractical to transmit 1000 copies of a large (say, 200-megabyte) software download. Clearly, computers need to use something more sophisticated than the repetition trick.

  THE REDUNDANCY TRICK

  Even though computers don't use the repetition trick as it was described above, we covered it first so that we could see the most basic principle of reliable communication in action. This basic principle is that you can't just send the original message; you need to send something extra to increase the reliability. In the case of the repetition trick, the extra thing you send is just more copies of the original message. But it turns out there are many other types of extra stuff you can send to improve the reliability. Computer scientists call the extra stuff “redundancy.” Sometimes, the redundancy is added on to the original message. We'll see this “adding on” technique when we look at the next trick (the checksum trick). But first, we will look at another way of adding redundancy, which actually transforms the original message into a longer “redundant” one—the original message is deleted and replaced by a different, longer one. When you receive the longer message, you can then transform it back into the original, even if it has been corrupted by a poor communication channel. We'll call this simply the redundancy trick.
