
The Rise and Fall of Modern Medicine


by James Le Fanu


  The gynaecologists were the first to appreciate the potential of the new approach. In Germany, Kurt Semm at Kiel saw the virtue of the new procedure for women who, having completed their families, sought sterilisation, passing an electric cauterising device down the laparoscope to close off the fallopian tubes. Over the next twenty years he performed the full range of gynaecological operations down the laparoscope, treating ectopic pregnancies, ruptured ovarian cysts and injured fallopian tubes.18 In Britain, Bob Edwards’s collaborator Patrick Steptoe used the laparoscope, as already described, to obtain maturing eggs from the ovary, which, once removed and fertilised, made possible the birth of the first test-tube baby.19

  The influence of the laparoscope on surgery of the gut and liver came about more slowly. In 1983 the first gall bladder was removed through the laparoscope, transforming an operation that previously involved a major abdominal incision and ten days of convalescence into a ‘day surgery’ procedure. Three years later a computer chip television camera was attached to the end of a laparoscope, inaugurating the era of ‘keyhole’ or ‘minimally invasive’ surgery. As with gynaecology, a wide range of operations that previously required open incisions in the abdomen – hernia repairs and the removal of malignant growths and parts of the spleen, stomach and colon – could now be performed so expeditiously and with so little trauma that the patient could often return home the same day.20

  And so it went on. Orthopaedic surgeons used the endoscope to look inside and repair traumatic injuries, especially to the knee and shoulder.21 ENT surgeons found that chronic sinusitis could be cured by improving the circulation of air with an operation through an endoscope at the back of the nose.22 Even the removal of a kidney, which previously left a vast scar in the flank, could now be performed endoscopically.23

  Harold Hopkins was a genius. The impact of his modern endoscopes – along with the operating microscope – is clearly important in its own right, but it also illustrates the cardinal feature of the technological contribution to post-war medicine. Surgeons had, it is true, been practising various forms of endoscopy since the turn of the century, but it remained the province of a few enthusiasts and the results were unreliable. Hopkins’s two optical innovations meant that now anyone could become an endoscopist, so the number of patients who could benefit increased vastly. The contribution of technological innovation has thus been not only to enlarge the scope of medical intervention but also, by simplifying the complex, to widen enormously the range of its application. This, as will be seen, can be something of a two-edged sword.24

  5

  THE MYSTERIES OF BIOLOGY

  Momentous events have multiple causes. The excavation of the origins of the rise of modern medicine reveals explanations at many different levels. All those considered so far – the war, clinical science, the cornucopia of new drugs and technology’s triumphs – were clearly essential, but there are two further layers, readily overlooked, that can rightly be described as the foundations. The first are the human, moral qualities necessary for scientific innovation. There is always a difficulty in describing the process of scientific discovery because in retrospect it so often appears to have been quite obvious. But those who chip away at the boundaries of the unknown have a very different perspective, because they can never know in advance whether their research will be successful or a futile dead end. And when, as with Donald Pinkel’s search for a cure for childhood cancer, or Bob Edwards’s research on in vitro fertilisation, that research stretches over decades, it requires great strength of character to persist, often, as has been noted, in the face of repeated failure and the open hostility of colleagues.

  They also, of course, had to be intelligent and clear-sighted, but with a few exceptions, such as Harold Hopkins, they were not geniuses. This brings us to the second of the neglected foundations of the rise of modern medicine – the ‘gifts’ from nature. No amount of moral fibre, scientific creativity or natural intelligence could have elaborated, from first principles, antibiotics, or steroids, or indeed virtually any of the cornucopia of discoveries of medicinal chemistry. They were rather ‘gifts from nature’, profounder and more complex than science at the time (or even now) could comprehend.

  We turn first to that mystery of mysteries, antibiotics and the bacteria and fungi that produce them. The common perception of antibiotics (as described in ‘1941: Penicillin’) is that of chemical-warfare agents, produced by one microbiological species to maximise its chance of survival by destroying others and which, quite fortuitously, were found to be effective against a full range of infectious disease in humans. This was certainly the principle that inspired Selman Waksman to investigate the actinomycetes species of bacteria in the soil, from which so many of the antibiotics that are commonly in use today were derived. But, as has also been noted, within a few years of receiving the Nobel Prize for his great discovery of streptomycin Waksman realised his chemical-warfare theory must be wrong, and for several reasons. Antibiotics, he pointed out, could not play a central role in the microbes’ struggle for survival, because only a handful of species were capable of producing them. More specifically, he had been unable to demonstrate the presence of antibiotics in the soil in sufficient quantities that would allow them to destroy other bacteria. Even if they were to do so, the other competing bacteria in the soil had the capacity to become rapidly resistant, as indeed has been found in the treatment of human infections. Further, he noted: ‘Specific nutrients characteristic for each organism are a sine qua non requirement for the production of antibiotics, but such nutrients are never found in proper combination or in sufficient concentrations to enable the antibiotic-producing organisms to dominate their environment.’ For these and other equally cogent reasons, Waksman concluded that antibiotics are a ‘purely fortuitous phenomenon . . . there is no purposefulness behind them’.1

  Such a view is so heretical, so contrary to the prevailing scientific assumption that there is a reason and necessity for everything, that it must be presumed Waksman was wrong. But this apparent purposelessness of antibiotics is not exceptional, being only one example of a generalised phenomenon in biology – ‘secondary metabolism’ – that is hardly ever alluded to, precisely because it strikes at the heart of the claims of scientists to fully understand the natural world. This observation clearly requires some clarification.

  All living organisms on the face of the Earth, from bacteria to humans, share certain chemical features. Their cells are made of the same types of large molecule – proteins, fats and carbohydrates – and the ‘energy’ that drives them to fulfil their functions and reproduce is based on the same sort of chemical reactions. The chemical necessities of life produced by these reactions are known as primary metabolites. But, in addition, bacteria and plants in particular also produce an enormous range of other chemicals known as secondary metabolites (including antibiotics), which are not essential for sustaining life but are rather the distinguishing features of the organisms. Thus the cells of a potato are made up of primary metabolites, water and cellulose, but what makes a potato a potato is a cocktail of over 150 secondary metabolites, including arsenic, alkaloids, nitrates, tannins and oxalic acid. Precisely the same applies to every grass, fruit, vegetable, flower, fungus and micro-organism – they are all vast chemical factories, manufacturing secondary metabolites in abundance. Indeed, there are over 20,000 known secondary metabolites, but with so many species still uninvestigated, the true number is probably several times greater.

  These secondary metabolites have always played an important role in human affairs, forming the basis of the hundreds of natural dyes such as English woad and Tyrian purple and the fragrances jasmine, rose and sandalwood. They provide the mind-bending chemicals such as cocaine, cannabis and morphine, and therapeutic drugs such as aspirin (from the bark of the willow tree) and digoxin (from the foxglove) as well as the anti-cancer drugs like actinomycin and vincristine, and many others, including antibiotics. More important than all this, they account for the diversity of the natural world, the colours and fragrances of flowers, the taste and texture of fruit and vegetables.

  It is possible, in some instances, to infer the role of some of these secondary metabolites in the survival and propagation of the plant or organism that produces them, either in discouraging predators or, as with the scent of flowers, encouraging bees to pollinate. But for the most part, they appear to be, just like antibiotics, a ‘purely fortuitous phenomenon . . . there is no purposefulness behind them’. In this context antibiotics are not ‘the mystery of mysteries’ that initiated the therapeutic revolution, but only one example of a much greater mystery that lies beyond the comprehension of contemporary science. Why do living organisms produce in such abundance so many complex chemicals that are not necessary to sustain life?2

  The second pillar of the therapeutic revolution, cortisone, was also a ‘gift from nature’, but of a very different sort. The small adrenal glands that balance on top of the kidneys secrete many hormones that control the amount of water in the body and the metabolism of sugar that provides the energy driving the body’s chemical reactions, as well as being the essential precursors for the all-important sex hormones oestrogen and testosterone. But their crucial role in the control of inflammation – mediated by cortisone – was not appreciated until Philip Hench treated his first patient, Mrs Gardner, so miserably afflicted with rheumatoid arthritis: ‘The most unusual thing about this historical discovery was its unexpectedness,’ observes one commentator, citing another expert on the adrenal gland who, when asked whether an extract from the adrenal gland might be useful in the treatment of inflammation, had replied, ‘I cannot imagine anything more unlikely.’3

  The discovery of the value of cortisone in both rheumatoid arthritis and upwards of two hundred other illnesses was thus just as much an unexpected revelation as antibiotics. But this is only to scratch the surface of the extraordinary nature of the discovery, for one then has to go on to ask why, or rather how, cortisone exerts its influence on the cells involved in the inflammatory reaction. This requires a closer look at how cells work, which is best conveyed by imagining a single cell that has been magnified several million times – as lucidly described here by biologist Michael Denton:

  On the surface of the cell we would see millions of openings, like the portholes of a vast space ship, opening and closing to allow a continuing stream of materials to flow in and out. If we were to enter one of these openings we would find ourselves in a world of supreme technology and bewildering complexity. We would see endless corridors branching in every direction away from the perimeter of the cell, some leading to the central memory bank in the nucleus and others to assembly plant units. The nucleus itself would be a vast spherical dome inside of which we would see, all neatly stacked together, the miles of coiled chains of the DNA molecules. A huge range of products and raw materials would shuffle along the corridors in a highly ordered fashion to and from all the various assembly plants in the outer regions of the cell.

  We would wonder at the level of control implicit in the movement of so many objects, all in perfect unison. We would see that nearly every feature of our own advanced machines has its analogue in the cell: artificial languages and their decoding systems, memory banks for information storage and retrieval, elegant control systems regulating the automated assembly of parts and components, proof-reading devices utilised for quality control, assembly processes involving the principle of prefabrication and modular construction. What we would be witnessing would be an object resembling an immense automated factory carrying out almost as many unique functions as all the manufacturing activities of man on Earth. However, it would be a factory which would have one capacity not equalled in any of our most advanced machines, for it would be capable of replicating its entire structure within a matter of a few hours.

  It is astonishing to think that this remarkable piece of machinery, which possesses the ultimate capacity to construct every living thing that ever existed on Earth, from a giant redwood to the human brain, can construct all its own components in a matter of minutes and is of the order of several thousand million million times smaller than the smallest piece of functional machinery ever constructed by man.4

  Put like this, it is only natural to wonder how this ‘remarkable piece of machinery’ came into existence in the first place. Of more immediate concern is to work out how a single molecule of cortisone alters the function of the cell in a way that will dampen down the inflammatory response – which is best appreciated by reference to the illustration of the internal workings of the cell on page 314.

  First, the cortisone molecule must pass through one of the millions of portholes on the external surface of the cell, where it finds and docks with another molecule, its receptor. Together they pass down one of the avenues or conduits leading to the nucleus, into which they pass and somehow, inexplicably, find among the closely packed coils of DNA the gene that codes for one or other of the many proteins involved in the control of inflammation. The cortisone molecule, with its receptor, somehow stimulates the relevant section of DNA to produce a replica of itself called messenger RNA (mRNA), which then passes back out of the nucleus into the main part of the cell. The mRNA finds a protein factory called a ribosome, into which it feeds itself like a tickertape, providing the instructions to construct the relevant anti-inflammatory protein, which is then conveyed to the outer wall of the cell and expelled to enter the general circulation.
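
  For readers who find the chain of events easier to follow as a series of explicit steps, the short sketch below restates the pathway just described – hormone in, receptor binding, gene switched on, mRNA out to a ribosome, protein assembled and exported – as a minimal Python program. It is purely a schematic of the narrative above, not a biochemical model: every class and function name (Gene, bind_receptor and so on) is an invented placeholder, and the ‘sequences’ are stand-ins rather than real data.

```python
# A schematic restatement of the cortisone pathway described in the text.
# Purely illustrative: every name here is an invented placeholder, not real
# biochemistry or any library's API.

from dataclasses import dataclass


@dataclass
class Gene:
    name: str        # e.g. a gene coding for an anti-inflammatory protein
    sequence: str    # stand-in for the relevant stretch of DNA


def enter_cell(hormone: str) -> str:
    """The hormone passes through a 'porthole' in the cell's outer surface."""
    return hormone


def bind_receptor(hormone: str) -> tuple[str, str]:
    """Inside the cell, the hormone docks with its receptor to form a complex."""
    return (hormone, "receptor")


def transcribe(hormone_receptor: tuple[str, str], gene: Gene) -> str:
    """In the nucleus the complex switches the gene on, and the relevant
    stretch of DNA is copied into messenger RNA (mRNA)."""
    return f"mRNA[{gene.sequence}]"


def translate(mrna: str) -> str:
    """A ribosome reads the mRNA like a tickertape and assembles the
    corresponding protein."""
    return f"protein built from {mrna}"


def cortisone_response(gene: Gene) -> str:
    """Run the whole pipeline: cell entry, receptor binding, transcription,
    translation, and finally export of the anti-inflammatory protein."""
    hormone = enter_cell("cortisone")
    complex_ = bind_receptor(hormone)
    mrna = transcribe(complex_, gene)      # happens in the nucleus
    protein = translate(mrna)              # happens at a ribosome
    return protein                         # expelled into the general circulation


if __name__ == "__main__":
    gene = Gene(name="anti-inflammatory protein", sequence="...")
    print(cortisone_response(gene))
```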

  Simultaneously, other cortisone molecules will be acting on other parts of the DNA in other types of cells to produce other anti-inflammatory proteins which are also involved in dampening down the inflammatory response. In all, cortisone either increases or decreases the production of up to twenty different proteins, the overall effect of which is far too complex to begin to describe (and indeed has never been properly worked out), but which has the effect of relieving the red, painful swollen joints of patients with rheumatoid arthritis, or reducing the life-threatening narrowing of the airways that occurs in an acute attack of asthma, or alleviating a multitude of other grievous conditions.

  Back in 1948, when Philip Hench gave the first cortisone injection to Mrs Gardner, there was absolutely no conception of the way in which the cell worked or how it could stimulate the production of these anti-inflammatory proteins. Self-evidently, then, cortisone could never have been synthesised from first principles, because those principles in 1948 were not known. It could only have been ‘a gift from nature’.

  And in this, cortisone was quite unexceptional, because precisely the same applies to virtually each and every one of the cornucopia of new drugs, each of which exerts its effect by entering the cell, latching on to a receptor, travelling to the nucleus and influencing the manner in which DNA codes for certain proteins. The therapeutic revolution is thus best conceived of as a massive game of roulette in which research chemists synthesised chemicals in their tens of thousands and then blindly tested them in the hope of a lucky break, when one or other would just happen to initiate the process already described. It was in this way that virtually all the drugs for the treatment of psychiatric illness, rheumatological disorders, heart disease and leukaemia were discovered.
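
  The ‘roulette’ image can be made concrete with a toy calculation. The sketch below blindly ‘screens’ a large batch of compounds against an assumed one-in-ten-thousand chance that any given compound happens to work; both numbers are arbitrary assumptions chosen only to illustrate how a handful of lucky breaks can emerge from tens of thousands of failures, not figures taken from the book.

```python
# A toy model of 'roulette' drug discovery: synthesise compounds in bulk and
# test each one blindly, hoping for a rare hit. The hit probability and batch
# size below are assumed values for illustration only.

import random

HIT_PROBABILITY = 1 / 10_000      # assumed chance that any one compound works
COMPOUNDS_SCREENED = 50_000       # assumed size of one screening campaign


def screen(n_compounds: int, p_hit: float, seed: int = 0) -> int:
    """Return how many of n_compounds blindly tested compounds turn out active."""
    rng = random.Random(seed)
    return sum(rng.random() < p_hit for _ in range(n_compounds))


if __name__ == "__main__":
    hits = screen(COMPOUNDS_SCREENED, HIT_PROBABILITY)
    print(f"{hits} lucky break(s) from {COMPOUNDS_SCREENED:,} compounds screened")
```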

  It is now possible to see how Howard Florey and Philip Hench, and so many others like them, were able to achieve so much without needing to be scientific geniuses. They just happened to be around at the crucial moment when it became possible to exploit the therapeutic potential of these complex and potent chemicals without having to create them in the first place or even know how they worked. The chance discovery and exploitation of these ‘mysteries of biology’ is the bedrock that underpins the rise of modern medicine and, as will be seen, also accounts for its fall. There is, after all, likely to be a limit to the number of nature’s gifts that can have a major impact on disease, and so a ceiling to the ‘roulette’ approach to drug discovery. Sooner or later the rate of innovation must slow down, with the transition from a ‘cornucopia’ to a ‘dearth’ of new drugs.

  But that is not all. It is perhaps predictable that doctors and scientists should assume the credit for the ascendancy of modern medicine without acknowledging, or indeed recognising, the mysteries of nature that have played so important a part. Not surprisingly, they came to believe their intellectual contribution to be greater than it really was, and that they understood more than they really did. They failed to acknowledge the overwhelmingly empirical nature of technological and drug innovation, which made possible spectacular breakthroughs in the treatment of disease without requiring any profound understanding of its causation or natural history. And, as will be seen in the following chapters, when the expectation that medicine can solve any problem comes into conflict with a decline in therapeutic innovation, then false ideas, and claims to knowledge not possessed, are likely to flourish.

  PART II

  The End of the Age of Optimism

  1

  THE REVOLUTION FALTERS

  ‘I know, from life and from history, something you have not thought of: often, the outward, visible, material signs and symbols of happiness and success only show themselves when the process of decline has already set in. The outer manifestations take time – like the light of that star up there – which may in reality be already quenched when it looks to us to be shining its brightest.’

  Thomas Mann, Buddenbrooks

  By the close of the 1960s medicine’s astonishing progress over the previous quarter-century was building to a climax: the travail of incremental progress towards a cure for childhood cancer was finally coming to fruition, while the experience gained from open-heart surgery and kidney transplantation had culminated in that supreme technical achievement, the heart transplant. It takes time, of course, for important new developments to ‘feed through’ and become a part of everyday practice – a generation of doctors must acquire the appropriate skills and further refine and improve on them. Predictably, then, it was in the decade that followed – the 1970s – that the full potential of the post-war therapeutic revolution would be realised, as shown by the rising numbers of hospital specialists in Britain.

 
