by DAVID KAHN
But after the June keys expired, Hinsley foresaw, the problems of slow solutions would recur. He concluded that the job would have to be done again, and he so persuaded the Admiralty. On Saturday, June 28, 1941, another task force of a cruiser and three destroyers seized the Lauenburg, an immaculate, three-year-old trawler—and with her the July Enigma keys. They reached B.P. on July 2, and the solution time fell from about 40 hours, to which it had risen when the June keys expired, to under three.
Yet it cannot be said that this speedier intelligence immediately affected the Battle of the Atlantic. As many ships were sunk in the slow-solution months of May and August as in the fast-solution months of June and July. Intelligence was outweighed by too many other factors. Yet gradually it gained in importance. It did so as the new knowledge of German naval terms, reports, forms of orders, stereotypical phraseology, and the like gave B.P. cribs that the growing number of bombes could masticate to spit out the daily U-boat keys. These keys let B.P. read U-boat messages, telling the Admiralty where the U-boats were patrolling and where they had been ordered to go. The Admiralty could then detour its convoys around the wolfpacks.
Convoy HX155 typified the technique. Its 54 ships sailed from Halifax, Nova Scotia, on October 16, 1941, carrying grain, sugar, fuel oil, aviation gasoline, steel, copper, and tobacco, among other goods. But its original route, decided a week before it sailed, would have run it close to a couple of U-boat concentrations whose presence had only since then become known. As a consequence, the Admiralty ordered it to swing more to the west and then, as more information about U-boat positions became known, still farther to the west. When it had steamed far enough north to avoid the concentrations, the Admiralty ordered HX155 to turn eastward again. And with these directions, the convoy sailed safely past the U-boats. As the Admiralty later said: “All ships now arrived.” The supplies so badly needed for Britain to continue her fight against the Axis had reached the island kingdom, thanks in part to the backroom boys of B.P.
The struggle against the Enigma continued throughout the war, as the Germans improved the device—not because they thought it had been compromised but because they feared that the growth in communications might produce a leak. A long blackout throughout most of 1942, due in part to the addition of a fourth rotor to the mechanism and in part to the change of a weather cipher that had permitted useful cribs, ended when a capture restored the cribs. From then on, naval Enigma was read with relative regularity. It permitted not only the defensive diversion of convoys but an offensive against the U-boats by aircraft from small escort carriers. These seriously disrupted the submarine dispositions and greatly blunted their attacks. Thus, B.P. helped win the Battle of the Atlantic. It did not win it alone. The merchant mariners, the shipwrights who built more vessels than U-boats could ever sink, the airplanes that gave convoys cover and forced U-boats to submerge to ineffectualness, the sailors on the warships escorting convoys—all were the chief winners of the war at sea. But codebreaking substantially shortened the struggle. And by doing so, it saved lives. And what contribution could be greater than that?
The great story of the solution of the Enigma machine and its effects on World War II remained a tightly held secret for almost 30 years. Only a few tiny shards of light about it escaped, and they revealed nothing about the vast scope of the work and its vast influence on the war. The tens of thousands of people involved in the work remained utterly silent about it for decades—probably the best example of general security in history. The British government insisted upon this silence because it had given the thousands of Enigma machines that it had gathered up after the end of the war to its former colonies as they gained independence and needed secure systems of communication. (Their officials were not stupid: probably they surmised that, if the mother country was giving them these cipher machines, she could read them. But they were concerned less with Britain than with their neighbors—India with Pakistan, for example—and they were almost certainly right in that those neighbors could not break the Enigma.)
Then, by the early 1970s, the last Enigmas in service wore out physically. There was no longer any need to keep the story secret. There was, on the other hand, the possibility of showing the world Britain’s remarkable feat with communications and protocomputers. Among these machines was Colossus, an electronic codebreaking device built for a non-Enigma cipher machine, which can be seen as a precursor of the information age. A Royal Air Force officer who had played a role in the distribution of the codebreaking results during the war, Group Captain F. W. Winterbotham, had been pestering Her Majesty’s Government for permission to tell the entire Enigma story; the permission was eventually granted. The Ultra Secret appeared in 1974 in Britain, at first excerpted in newspaper series, and in 1975 in the United States, where a review in The New York Times Book Review beginning “This book reveals the greatest secret of World War II after the atom bomb” helped set it on the road to best-sellerdom. Since then, dozens of books, sustained by an outpouring of documents into the American and British archives, have amplified the story. Together, they awakened a wide public to the existence of codebreaking. More important, they taught that public, which previously thought of intelligence only in terms of spies and cameras, that codebreaking was the most important source of all.
Coincident with this growing public sensitivity to cryptology, though not driven by it, but spurred instead by the growth in computers and communications, the U.S. government moved to put into service a publicly available cryptosystem that would protect such items as bank messages or health information in data banks while allowing broad intercommunication among its users. It would do this by utilizing a cryptosystem that would be known to all but would have users establish private keys between themselves to keep their messages secret. In 1973 and 1974, the National Bureau of Standards, as the National Institute of Standards and Technology was then named, solicited candidate cryptosystems in the Federal Register. A handful of proposals was sent in. One of these was based on a system devised by Horst Feistel of the International Business Machines Corporation. Feistel, who had immigrated from Germany in 1932, had worked on I.F.F., or identification-friend-or-foe, systems during and just after World War II. This interested him in the problem of authentication and coded texts and, when he joined I.B.M. in 1967, he began trying to see how the monoalphabetic substitution, the most general substitution of all, could be used in a good system. He was accustomed to working with computer technology and in binary digital form. Computer memory enabled him to incorporate transposition into his system, something that had not been practicable with devices using letters. Binary operations facilitated his changing numerical bases from 2 to 8 (though always expressed in binary form) to devise a critical contraction that made it hard for a would-be cryptanalyst to track back through the cipher system. Computer technology thus let him devise this cipher, which, because of its complexity, could never be implemented by hand. Feistel wanted to name it “Dataseal,” but I.B.M. 
just shortened the term “demonstration cipher” to “Demon.” Later, the name was changed to “Lucifer,” which, in addition to maintaining what Feistel called “the evil atmosphere” of “Demon,” contained the word “cipher.”
The system was not an elegant one. It operated on blocks of 64 plaintext bits, which it transposed, divided, replaced, and combined in complicated ways, repeating some of the steps 16 times. Because the individual operations were simple, the system could run fast enough to keep up with the demands of computer communications.
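The split-and-recombine structure described above, named for Feistel, can be sketched in a few lines. This is a minimal illustration, not Lucifer or the D.E.S. itself: the round function below is an arbitrary stand-in for the real substitution boxes and permutations, and the round keys are invented for the example. What it does show is the essential trick, which is that the same network run with the round keys reversed undoes the encipherment.

```python
# Toy Feistel cipher: a 64-bit block is split into two 32-bit halves,
# and each of 16 rounds mixes one half with a round key into the other.
# The round function is an arbitrary stand-in, NOT the real DES S-boxes.

def round_fn(half: int, key: int) -> int:
    """Stand-in for the substitution/permutation round function."""
    return ((half * 0x9E3779B1) ^ key) & 0xFFFFFFFF

def feistel_encrypt(block: int, round_keys: list) -> int:
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in round_keys:                       # 16 rounds, like the D.E.S.
        left, right = right, left ^ round_fn(right, k)
    return (right << 32) | left                # final swap of the halves

def feistel_decrypt(block: int, round_keys: list) -> int:
    # Decryption is the identical network run with the keys reversed.
    return feistel_encrypt(block, list(reversed(round_keys)))

keys = [0x0F1E2D3C + i for i in range(16)]     # invented round keys
ct = feistel_encrypt(0x0123456789ABCDEF, keys)
assert feistel_decrypt(ct, keys) == 0x0123456789ABCDEF
```

Note that the individual operations are cheap (shifts, XORs, table-style lookups), which is why, as the text says, such a cipher can keep up with computer communications despite its 16 repeated rounds.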
Before I.B.M. submitted it as a candidate for the proposed Data Encryption Standard, it conferred with the National Security Agency to strengthen it. The meetings resulted in an improvement of a key component called the S (for substitution) boxes and in a reduction of the length of the key that Feistel had designed, down to 56 bits (plus eight extra as parity, or checking, bits). This was the modified version of Lucifer that I.B.M. submitted to the bureau.
Of all the candidate cryptosystems, it proved to be the only one the bureau believed met even the minimal demands of computer security and communications requirements. The bureau published it in the Federal Register of August 1, 1975, as a proposed federal information processing standard—the data encryption standard, or D.E.S.
Very quickly a storm blew up. The growing community of nongovernmental cryptologists, who worked in academe and for businesses such as banks, international oil conglomerates, communication equipment manufacturers, and communications companies, suspected something fishy. The involvement of N.S.A. led them to believe either that a “trap door” had been built into the data encryption standard that enabled N.S.A. to solve it easily, or that N.S.A. had deliberately weakened the system by reducing the length of its key to where the system was just strong enough to keep business firms from solving competitors’ messages but just weak enough to let the government read messages in it. Articles appeared in the technical, trade, and general press about it; panels argued over the issue at professional conferences; the standards bureau called meetings to discuss the matter and, it hoped, to convert attendees to its point of view. In the end, most people seemed to stick with the positions they had had at the beginning of the controversy—and the standards bureau issued the standard as it had originally proposed it, with the 56-bit key, on January 15, 1977, as the 18-page Federal Information Processing Standards Publication 46. The publication stated that the D.E.S. was to be used by federal departments and agencies for any of their non-national-security data that an authorized official decided needed “cryptographic protection.” Perhaps more important, the publication said that the D.E.S.’s use by “commercial and private organizations” was to be “encouraged.”
With a market thus assured, manufacturers such as Motorola began producing D.E.S. chips for incorporation into computers. Rival firms selling cipher machines the world over warned that the D.E.S.’s approval by the U.S. government meant that that government could read D.E.S.-encrypted messages. But D.E.S. manufacturers maintained instead that the U.S. government had in effect certified the system as solid and that, if a firm feared that the N.S.A. could read D.E.S., it could doubly or triply encrypt its messages, thus assuring secrecy. Businesses either accepted this point or didn’t care about government spying, for more and more began encrypting messages using D.E.S. It has become the de facto standard for much of business around the world. And despite occasional rumors that this researcher or that has solved the D.E.S., despite successful attacks on parts of the system or on it up to its 15th round, despite the growth in computer power since the D.E.S. was promulgated, there is no authenticated case of anyone’s breaking it. Can the N.S.A. do so? No one outside the agency knows, but one standards institute official has said that, from the standpoint of national security, the D.E.S. was the worst mistake the N.S.A. ever made.
The controversy over the D.E.S. brought the government into conflict with at least parts of the broad new cryptologic public. This conflict expanded into other areas with some ham-handed attempts to control that public. Several persons who had been invited to speak at a conference at Cornell University were warned in a letter from an individual, later determined to be an N.S.A. employee, that discussing cryptology would violate the federal International Traffic in Arms Regulations. These forbid the export of cryptologic equipment or information on cryptology without government approval, and speaking before an audience that included non-U.S. citizens constitutes an export. The speakers, fearful of violating U.S. law, refrained from talking. At about the same time, the government slapped secrecy orders on a few applications for patents for cryptosystems. This raised protests. Then a new director of the N.S.A., Vice Admiral Bobby R. Inman, a tall, slender Texan, brought the agency out of the dark, and into the non-national-security world. He spoke to the media! He visited the offices of Science magazine and explained N.S.A.’s point of view. One upshot was the establishment of a committee under the American Council on Education to look into the problem of publishing material on cryptology that might harm the national security. The committee, consisting mainly of professors of mathematics who worked in cryptology, proposed a system of voluntary censorship. Authors of works on cryptology would be asked to submit their material to N.S.A., which, without any enforcement power, would urge the writers to delete or blur sensitive matter. The system was put into operation and has been working satisfactorily.
Meanwhile, revelations in the press about excesses of the intelligence community led the U.S. Senate and the House of Representatives to investigate it. Included was the N.S.A. The agency, the investigations showed, had monitored the domestic conversations of Americans without the proper court warrants. It was chastised and forbidden to overhear such communications, and Congress established a special court to grant national-security wiretaps.
Public interest in cryptology was further—and greatly—stimulated by the invention of a new form of cryptography that prompted more work in the field than anything else in its history. This was public-key cryptography. For the first time, a form of secret communication used different keys for encryption and decryption. The idea was first proposed by Dr. Martin Hellman of the Stanford University Department of Electrical Engineering and Whitfield Diffie, a graduate student. It was a dramatic breakthrough, for it had not occurred to anyone else in the long history of cryptology that the deciphering key could be anything other than the inverse of the enciphering key. The asymmetry permitted, for the first time in cryptology, the possibility of authenticating a message sent electrically. The two discussed the possibility in a pathbreaking article entitled “New Directions in Cryptography.” In it, however, they offered only partial implementations of their idea.
The theory put forth in the article came to the attention of three mathematicians at the Massachusetts Institute of Technology. Ronald Rivest, Adi Shamir, and Len Adleman were intrigued by the possibility and sought to realize it. After some failed attempts, they devised a system based upon the mathematical phenomenon that it is easy to determine whether a number is prime but, if it is not, hard to find its factors. Under the system, anyone may send a secret message to a particular person, but only that person can read it. The system works like this: the person wishing to receive secret messages selects two large prime numbers, p and q, which must be kept secret, and another large number, e, which must have no factor in common with the product of p-1 and q-1. He or she multiplies the primes together to produce n. The numbers e and n constitute the public key. They are published, as a telephone number is in a phone directory. The person then calculates another number, d, the multiplicative inverse of e modulo the product of p-1 and q-1; that is, d is the number such that e times d leaves a remainder of 1 when divided by that product. He keeps d secret. Someone wanting to send a message first converts it into numerical form (as a = 10, b = 11, etc.), raises that number to the power e, divides the result by n, discards the quotient, and takes the remainder as the cryptogram. When the recipient gets this, he raises it to the power d, divides it by n, and takes the remainder as the numerical plaintext. The system resists attack to the degree that n is hard to factor into p and q.
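The arithmetic can be traced with deliberately tiny primes. This is only a sketch to make the steps visible; the standard construction computes d as the multiplicative inverse of e modulo (p-1)(q-1), and real keys use primes hundreds of digits long, precisely so that factoring n is out of reach.

```python
# Toy RSA key generation, encryption, and decryption with small primes.

p, q = 61, 53                        # the two secret primes
n = p * q                            # public modulus: 3233
e = 17                               # public exponent, no common factor with (p-1)*(q-1)
d = pow(e, -1, (p - 1) * (q - 1))    # secret exponent: e*d leaves remainder 1 mod (p-1)(q-1)

message = 65                         # the plaintext, already in numerical form
cipher = pow(message, e, n)          # raise to e, keep remainder mod n -> 2790
recovered = pow(cipher, d, n)        # raise to d, keep remainder mod n
assert recovered == message          # the original number comes back
```

The three-argument `pow` performs modular exponentiation efficiently, and `pow(e, -1, m)` (Python 3.8+) computes the modular inverse, which is exactly the calculation of d described above.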
It offers a number of fascinating possibilities. If a person encrypts a message with her decrypting key, she cannot deny that the message came from her, because no one else knows her key. By the same token, the recipient knows it came from her: the message is thus authenticated. She can secure her message by encrypting it with the recipient’s public key. Now no one can read it except the legitimate recipient, who can decrypt the message with his secret key and then encrypt the result with the sender’s public key to obtain the plaintext.
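The authentication trick described above follows from the same arithmetic run in the other direction: applying the secret exponent d produces a value that anyone can check with the public exponent e, but that no one without d could have produced. A sketch, reusing the same toy parameters (far too small for real use):

```python
# Toy RSA signing: the sender applies her secret exponent d; anyone
# holding the public key (n, e) can verify the result came from her.

p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))    # the sender's secret exponent

message = 42
signature = pow(message, d, n)       # only the holder of d can compute this
verified = pow(signature, e, n)      # anyone with the public key can check
assert verified == message           # matches, so the message is authentic
```

Because the two operations commute, the sign-then-encrypt sequence in the text works as stated: the sender applies her own secret key and then the recipient’s public key, and the recipient peels the layers off in the opposite order.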
When Martin Gardner mentioned the system in his column on Mathematical Games in Scientific American, explaining how it could both authenticate messages (assure the recipient that they came from whom they said they came) and make them undeniable by the sender, an avalanche of 5,000 requests for the inventors’ paper poured in to M.I.T. The interest stemmed from the apparent impossibility of doing what the system claimed to do: the feat was counterintuitive. But the system did do what it said it would do, and it did it in so elegant a mathematical fashion that it attracted hundreds of researchers to the field. The system came to be known as the RSA, from the initials of its inventors.
Dozens of applications of public-key cryptography presented themselves, such as digital cash. But the system runs much more slowly than, say, the D.E.S., because it requires heavy computation. So it has come to be used mainly to exchange keys among correspondents in cryptographic networks with so many participants that keys cannot practicably be distributed before secret communication needs to take place.
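That division of labor, with the slow public-key system carrying only a short symmetric key and a fast cipher carrying the bulk of the traffic, can be sketched as follows. The repeating-XOR routine below is a toy stand-in for the fast symmetric cipher (a real system would use the D.E.S. or a modern successor), and the per-byte RSA wrapping is likewise illustration only.

```python
# Hybrid pattern: public-key cryptography transports a session key;
# the bulk message is enciphered under that key with a fast cipher.

import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher; enciphering and deciphering are the same."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Recipient's toy RSA key (same small parameters as before, toy only).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))

# Sender: pick a random session key and send it under the public key.
session_key = secrets.token_bytes(8)
wrapped = [pow(b, e, n) for b in session_key]   # one RSA op per byte (toy only)
ciphertext = xor_cipher(b"Attack at dawn", session_key)

# Recipient: unwrap the session key with d, then decipher the traffic.
recovered_key = bytes(pow(c, d, n) for c in wrapped)
assert xor_cipher(ciphertext, recovered_key) == b"Attack at dawn"
```

Only the short key pays the cost of the slow public-key arithmetic; however long the message, it is handled by the fast cipher.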
Public-key cryptology, the D.E.S., cryptosystems using shift registers, cryptosystems based on elliptic curves and other mathematical techniques—all are implemented today not on the alphabet of 26 letters, as the Enigma machine and the hand cipher systems of yesteryear were, but on the binary digital alphabet of 0s and 1s. The reason is that this is the international alphabet of computers and communications, and therefore of the Internet.
The openness of the Internet makes it easy for unauthorized persons to approach the gates of computers and computer networks and, if those gates are not properly guarded, to hack through them and gain entry. Once inside, they can read personal and business files out of morbid curiosity or, if they are more vicious, change or even destroy them. When film or TV viewers see young hackers tapping away at their computers, they are watching the characters attempting to gain entry to a system by trying likely passwords. Once inside, the characters are writing instructions in a computer language, such as C or Intel assembly language, to open files or to alter them.