
Life's Greatest Secret


by Matthew Cobb

3. DNA plays some unidentified role in one kind of viral infection.

  Hershey then told his audience that this evidence was not enough to support the conclusion that DNA was the hereditary material. He remained convinced that proteins must play a role:

  None of these, nor all together, forms a sufficient basis for scientific judgement concerning the genetic function of DNA. The evidence for this statement is that biologists (all of whom, being human, have an opinion) are about equally divided pro and con. My own guess is that DNA will not prove to be a unique determiner of genetic specificity, but that contributions to the question will be made in the future only by persons willing to entertain the contrary view.56

  Hershey’s caution shows us the rigorous nature of his scientific thinking – strictly speaking, his interpretation was absolutely correct; he would go no further than the data allowed. It also shows the continued uncertainty about the possibility of contamination – this was an important problem in the Hershey and Chase experiment, although nobody pointed it out at the time.

  Hershey later argued that the complex route from Avery’s 1944 discovery to the widespread acceptance that genes were made of DNA ‘shows that some redundancy of evidence was needed to be convincing and that diversity of experimental materials was often crucial to discovery’.57 Although that is undoubtedly true, it is also the case that while some people immediately embraced Avery’s discovery, others – including the phage group – were reluctant to recognise its significance. For a decade, scientists spent their time arguing over something that now seems blindingly obvious. There are many such moments in the history of science, and they can only be understood in terms of the evidence and attitudes of the time. In this case, the predominant problem was the power of the old ideas about the dominant role of protein, and the difficulty of imagining how in reality – not in theory – DNA could produce specificity.

  *

  While the chemists and the microbiologists battled it out over the chemical nature of the gene, there were several bold attempts to look at gene function – how genes do what they do. In 1947, Kurt Stern, a 43-year-old German biochemist who had left Nazi Germany for the US, published a highly speculative article on what he called the gene code – one of the first uses of the word code since the publication of Schrödinger’s book three years earlier. In a prescient guess, Stern argued that the chemical basis of genes might take the form of helical coils – he assumed that genes were made of nucleoproteins, although he recognised that Avery could be right and DNA alone might be the genetic material.

  Like Chargaff, Stern suggested that variations in the sequence of the bases in the DNA molecule could lie at the heart of gene specificity. According to Stern’s theory, genes were physical modulations, much like a groove on a vinyl record. The role of the nucleoprotein, Stern argued, was to fix the DNA molecule in a particular shape; removal of the protein would return the nucleic acid to its unmodulated form. To prove his point, Stern provided photographs of physical models that he had made, showing DNA and an associated protein spiralling around each other in a double helix, although he did not use the term. Despite the rich biochemical and structural data that were at the heart of Stern’s work, his model was too hypothetical to generate experiments, and his ingenious views had no influence.58 Although Stern used the term code, he did not embrace the idea that the code might abstractly represent protein structure; his vision was that of a template – genes were the physical form on which proteins were synthesised.

  At the same time, André Boivin and Roger Vendrely came up with a hypothesis about the relation between the two kinds of nucleic acid found in a cell and the enzymes that were thought to be the product of genes, building on Torbjörn Caspersson and Jean Brachet’s investigations into the role of RNA in protein synthesis.59 Boivin and Vendrely’s idea was pithily expressed by the editor of the journal, Experientia, in an English-language summary:

  the macromolecular desoxyribonucleic acids govern the building of macro-molecular ribonucleic acids, and, in turn the production of cytoplasmic enzymes.60

  In other words, DNA led to the production of RNA, which then led to protein synthesis and the production of enzymes. This hypothesis, which was correct, was another example of Boivin’s perception.

  In 1950, two Oxford chemists, P. C. Caldwell and Sir Cyril Hinshelwood, proposed a physical model to explain how protein synthesis takes place on a nucleic acid.61 If the synthesis of an amino acid depended on the presence of pairs of the five components in a DNA molecule (the four bases, plus the phosphate backbone), they argued, ‘twenty-five different internucleotide arrangements are possible’. This was enough to produce the twenty different kinds of amino acid that are found in organisms. Caldwell and Hinshelwood accepted that DNA was ‘the principal seat of the unchanging hereditary characters’, but this apparent enthusiasm hid doubts about the importance of genetics: Hinshelwood was a believer in the inheritance of acquired characteristics, a mode of heredity and evolution proposed by the French naturalist Lamarck in the early decades of the nineteenth century and since abandoned. Above all, like Stern, Caldwell and Hinshelwood were thinking in terms of a physical relation between the protein and the nucleic acid upon which it was synthesised. The gene acted as a template, they thought.
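  A rough way to see the arithmetic behind their claim: if an amino acid is specified by an ordered pair drawn from the five components they listed, the number of possible pairs is

\[ 5 \times 5 = 5^{2} = 25, \]

which is just enough to account for the twenty amino acids found in proteins.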

  In 1952, Alexander Dounce, a biochemistry professor at Rochester Medical School, had a similar starting point: there must be some structural relation between the bases in the nucleic acid of the gene and a corresponding amino acid in a protein. Dounce argued that the nucleic acid was a physical template for protein synthesis. His model proposed that each amino acid was determined by a set of three bases; this is indeed the case, but not for the reasons that Dounce suggested.62 Like Boivin, Dounce took the evidence from Brachet and Caspersson about the role of RNA and argued that protein synthesis occurred on an RNA molecule, not on DNA. Dounce described this idea in a way that is now taught to students all over the world every day: ‘deoxyribonucleic acid (DNA) → ribonucleic acid (RNA) → protein’.63 Dounce was the first person to spell out this series of links, following the work of Boivin and Vendrely five years earlier. Their names are now forgotten by all except a handful of historians and scientists.
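  By way of comparison, Dounce's triplet scheme is far less tightly constrained than Caldwell and Hinshelwood's pairs: with four bases, an ordered set of three gives

\[ 4 \times 4 \times 4 = 4^{3} = 64 \]

possible combinations, comfortably more than the twenty amino acids that need to be specified.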

  In retrospect, what is most remarkable about these models of gene function is that they were strictly physical in nature. They were all based on three-dimensional template structures rather than the kind of abstract one-dimensional ‘code-script’ that Schrödinger had suggested. They were all analogue, rather than digital. For this to change, a new set of ideas, different ways of thinking, had to enter biology. That was just around the corner. The developments in mathematical thinking about information, codes and control that had occurred during the war were being popularised at the very same time as the biological community was coming to terms with the implications of Avery’s discovery.

  –FIVE–

  THE AGE OF CONTROL

  In his 1988 best-seller A Brief History of Time, the physicist Stephen Hawking recounts that his editor told him that every equation he used would halve the potential readership. Hawking obligingly included just one equation (E = mc²) and the book went on to sell more than 10 million copies. Things were clearly different back in the 1940s – Norbert Wiener’s 1948 popular science book Cybernetics or Control and Communication in the Animal and the Machine was stuffed full of hundreds of complicated equations, and yet it became a publishing sensation around the world.

  With its weird title and its promise of a new theory of nearly everything, Cybernetics took the bookstores by storm. The first edition was sold out in six weeks and the book went through three printings in six months, jostling for top place in the best-seller lists.1 According to a critic in the New York Times, Cybernetics was one of the ‘seminal books … comparable in ultimate importance to … Galileo or Malthus or Rousseau or Mill.’2 Wiener’s book popularised the research on control systems and negative feedback that had taken place during the war, spreading this new approach throughout the scientific community – and especially into biology. It helped invent a new vocabulary of information that transformed the postwar world and shaped a radical new view of genetics. We are still living in that world.

  *

  Cybernetics was born in Paris. In 1947, Wiener was invited to a conference in Nancy, in eastern France. On his visit to Europe he met thinkers in England, such as Alan Turing, J. B. S. Haldane and the University of Manchester computer pioneers, ‘Freddie’ Williams and Tom Kilburn. During a visit to Paris, Wiener went to ‘a drab little bookshop opposite the Sorbonne’, where he met Enriques Freymann, the Mexican-born head of a French publishing firm.3 Fascinated by Wiener’s ideas, Freymann suggested that the American should write a book to explain them to the general public. Wiener was taken with the proposal and the deal was sealed over a chocolat chaud. Wiener’s first task was to find an overall description of the new vision of control and information that he had been working on. He eventually came up with a new word – ‘cybernetics’, from the Greek word for ‘steersman’.4 All of the ‘cyber’ words we now use come from Wiener’s coinage.

  Cybernetics was published simultaneously on both sides of the Atlantic at the end of October 1948. It was full of equations, which were riddled with mistakes due to a mixture of Wiener’s haste and the hazards of transatlantic proof-reading. But because the maths was so complicated, nobody really noticed – few readers were able to follow what Science magazine called the ‘troublesome mathematical portions’, far less spot the flaws. The algebra was in fact pretty irrelevant. As a writer in the New York Times explained, the success of Cybernetics was down to the fact that between the equations there were ‘pages of sparkling, literate and provocative prose.’5* Wiener put forward a theory of the role of information and control in behaviour, underlining the similarities between animals and machines, and sketched out a vision of a cybernetic future in which automation would dominate: ‘the present time is the age of communication and control’, he wrote.

  In a postwar world marked by anxieties about the destructive power of technology, Wiener’s vision was simultaneously apocalyptic and humane. He described the ‘unbounded possibilities for good and for evil’ contained in the increased automation of production. This was bold thinking, given that factory automation was in its infancy, and computers were only just able to store a brief program in their valve-based memories. Nevertheless, Wiener foresaw the overall tendency that would characterise production in the second half of the century. He predicted a new industrial revolution that would devalue the ‘simpler and more routine’ aspects of the human brain, as machines began to perform menial mental tasks. The only way to limit the destructive aspects of this development, he argued, was to create ‘a society based on human values other than buying and selling.’ Ultimately, Wiener was gloomy about the future. After describing the creation of a new science, he felt powerless and pessimistic: ‘We can only hand it over into the world that exists about us, and this is the world of Belsen and Hiroshima.’6

  At the heart of Cybernetics was Wiener’s view that all control systems, and the negative feedback they embodied, were based on information flows. Information, he argued, was at the heart of all systems – mechanical, electronic or organic – and this was closely related to the physicists’ concept of entropy. Five years earlier, Schrödinger had argued that life was ‘negative entropy’, because of its ability to temporarily resist the second law of thermodynamics. Now Wiener was extending that concept to information as a whole:

  The quantity we here define as amount of information is the negative of the quantity usually defined as entropy in similar situations.

  As he explained: ‘Just as the amount of information in a system is a measure of its degree of organization, so the entropy of a system is a measure of its degree of disorganization; and one is simply the negative of the other.’ Information, argued Wiener, was ‘negative entropy’. According to this view, life and information were intimately connected. There was therefore a continuum between the most ordered state of matter – living beings – and inanimate forms of organised matter, a continuum that could be viewed in terms of a new quality: information. As Wiener stated emphatically: ‘Information is information, not matter or energy. No materialism which does not admit this can survive at the present day.’7 While this is true, information requires a material substrate and, as Szilárd pointed out in his solution to Maxwell’s Demon, energy has to be expended in order to obtain or create information.

  Cybernetics was not just about information, it was fundamentally based on the concept of negative feedback that Wiener had first explored in his anti-aircraft work during the war.8 Negative feedback had been known in antiquity and during the golden age of Islam, in the form of liquid-level regulators (like a cistern in a toilet), and in the eighteenth century the phenomenon had been used by the steam engineer James Watt, who developed a ‘governor’ to prevent his machines from running out of control. Although Watt’s invention led to new forms of everyday language – it was the origin of terms such as ‘self-regulation’ and ‘checks and balances’ – it was not generalised into something that operated in all systems. Wiener’s vision of the importance of negative feedback, and his ability to spin together threads from behaviour, physiology, sociology, electronics and automation, showed the general public how an extraordinary variety of natural and mechanical phenomena could be interpreted using the same framework. It made for a heady mixture.

  The influence of Cybernetics extended right across the academic spectrum, just as Wiener had hoped. There were summaries in the New York Times and Scientific American and positive reviews in a wide range of academic journals, from American Anthropologist to Psychiatric Quarterly.9 According to the Annals of the American Academy of Political and Social Sciences it was ‘a fascinating book that is at present being read with more than just novelty or academic interest by scientists of all disciplines and philosophical camps.’10 The US journal Science hailed the birth of ‘a new discipline’, while in American Scientist the French physicist Léon Brillouin described the link between information and ‘negative entropy’ as ‘an entirely new field for investigation and a most revolutionary idea’, before eventually writing his own book on the topic.11 Cybernetics was read widely and was enthusiastically discussed – at the 1949 Summer School held at the world-famous Woods Hole Marine Biological Laboratory, it was a topic of debate among the students and young scientists.12

  Cybernetics had a similar impact in France, where it attracted wide attention despite having been published in English. In December 1948 a long article appeared in Le Monde by the French Dominican friar and professor of philosophy Dominique Dubarle. Dubarle praised Wiener’s ‘extraordinary’ book, which he claimed announced the birth of ‘a new science’.13 The world was being reshaped, but Wiener remained bemused by the success:

  Freymann had not rated the commercial prospects of Cybernetics very highly – nor, as a matter of fact, had anybody on either side of the ocean. When it became a scientific best-seller we were all astonished, not least myself.14

  *

  Wiener was not the sole creator of this revolution in thinking, nor did he claim to be. In the Introduction to Cybernetics, he described how this new vision of control had been developed through years of discussions with his intellectual partners, including Claude Shannon. Wiener generously explained that something like his ‘statistical theory of the amount of information’ had been simultaneously arrived at by Shannon in the US, by R. A. Fisher in the UK and by Andrei Kolmogoroff in the USSR. Fisher’s approach to information was in fact quite different, and the works of Kolmogoroff were unobtainable, but Shannon’s views were virtually identical to Wiener’s, as readers with the necessary mathematical ability were soon able to appreciate.

  In 1948, Shannon published two dense theoretical articles in the pages of the Bell System Technical Journal. These were then used as the basis of a 1949 book, The Mathematical Theory of Communication, which also contained an explanatory essay by the Rockefeller Foundation administrator and one-time head of Section D-2, Warren Weaver. Although The Mathematical Theory of Communication soon became well known in the scientific community, it did not have anywhere near the public success of Cybernetics. Shannon’s arid stretches of equations were not punctuated by anything like Wiener’s stylish and thought-provoking prose, and all except the most committed reader would probably have tripped up over the opening sentence: ‘The recent development of various methods of modulation such as PCM and PPM which exchange bandwidth for signal-to-noise ratio has intensified the interest in a general theory of communication.’15 In contrast, Wiener’s Cybernetics opened with a German folk song.

  Shannon was interested primarily in communication, not control. His model was highly abstract, dealing with information without reference to the content of that information. As he stated at the outset:

  The word information, in this theory, is used in a special sense that must not be confused with its ordinary usage. In particular, information must not be confused with meaning.16

  Unlike Wiener’s vision, Shannon’s approach had no place for feedback – this was a linear transmission system with no connection between the reception of the message and the source. It was not a model that could explain the flow of control in a behavioural or mechanical system; it did no more than what it claimed – it explained how a single message can pass from a transmitter to a receiver despite the presence of noise in a system. For Shannon, the critical process was encoding and decoding: ‘the function of the transmitter is to encode, and that of the receiver to decode, the message’.17

  Shannon showed that information could be conceptualised mathematically, as a measure of freedom of choice between all possible messages. The simplest such choice occurs when there are two options, as in a binary system. ‘This unit of information,’ wrote Shannon, ‘is called a “bit,” this word, first suggested by John W. Tukey, being a condensation of “binary digit.”’ Shannon’s formal mathematical description of information was essentially identical to that of Wiener, except that the two mathematicians approached the question from different sides: whereas Wiener saw information as negative entropy, for Shannon information was the same as entropy.18
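  In modern notation – not the exact form used in either book – Shannon's measure for a source that produces messages with probabilities p1, …, pn is usually written

\[ H = -\sum_{i=1}^{n} p_i \log_2 p_i , \]

measured in bits; a single equally likely binary choice gives H = 1 bit, Shannon's basic unit. Up to that sign convention, it is the same quantity Wiener had written down.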

 
