The Rise and Fall of Modern Medicine


by James Le Fanu


  But the chasm-like discrepancy between the promise and the reality of The New Genetics poses a whole series of questions as to why it might have failed to ‘deliver’. The first and obvious constraint is quite simply that genetics is not a very significant factor in human disease. This is scarcely surprising, as man would not be as successful a species as he is (many would argue too successful) were it not that natural selection had over millions of years weeded out the unfit. Consequently there is only a handful of common gene disorders, and even they are not very common. Further, the contribution of genetics to adult diseases such as cancer is limited to a minority of cases, and for everybody else it is almost invariably only one of several factors, of which the most important is ageing, an everyday fact of life about which there is not much that can be done.37

  The second reason why The New Genetics might have failed to live up to expectations is that the genes themselves turn out to be infinitely more complex and elusive than could ever have been imagined. There was a charming and elegant simplicity revealed by the unravelling of the genetic basis for sickle cell anaemia – a defect in one triplet of nucleotides caused the insertion of the ‘wrong’ amino acid in the haemoglobin protein, thus altering the red cells’ physicochemical properties so that they ‘sickle’. It seemed, in the early 1980s, that genetics could be understood in terms of such well-defined rules and certainly, if all diseases had been similar to sickle cell anaemia, then everything would have been sorted out in no time. But now we know better. Sickle cell anaemia turns out to be virtually unique in the simple nature of its genetic defect. The behaviour of the genes turns out not to be determined by hard and fast rules, but rather is ambiguous, elusive, contradictory and unpredictable. The central concept that the gene, in the form of triplets of nucleotides, codes for an arrangement of amino acids that makes up a protein has turned out to be deficient in several ways. The first is ‘linguistic’: any triplet of nucleotides turns out to mean different things in different circumstances. Richard Lewontin, biologist at Harvard University, explains:

  The difficulty in deriving causal information from DNA messages is that the same ‘words’ [nucleotides], as in any complex language, have different meanings in different contexts, and multiple functions in a given context. No word in English has more powerful implications of action than ‘do’. ‘Do it now!’ Yet in most of its contexts ‘do’ as in ‘I do not know’ [has no meaning at all]. While this ‘do’ has no meaning it undoubtedly has a linguistic function as a spacing element in the arrangement of a sentence. The code sequence GTA AGT is sometimes read by the cell as an instruction to insert the amino acids valine and serine in a protein, but sometimes it signals the place to cut up and edit the genetic message; and sometimes it may be only a spacer, like ‘do’ in ‘I do not know’, that keeps other parts of the messages an appropriate distance from each other. Unfortunately we do not know how the cell decides among the possible interpretations.38

  And just as one can never be sure what any triplet of nucleotides might mean, one can never be sure what the significance of a mutation in the nucleotides might be. Thus in sickle cell anaemia one defect in a sequence of nucleotides – GTG instead of GAG – leads to the insertion of the ‘wrong’ amino acid (valine instead of glutamic acid) in the haemoglobin protein and thus causes the red blood cell to sickle. But in cystic fibrosis, 200 or more such mutations of nucleotides have been identified that can cause the disease, and a further 200 that make no difference. Nor can one be confident that the same mutation causes the same disease, as illustrated by two sisters who both had the same mutation in the gene for the ‘light-sensitive’ protein rhodopsin in the retina, a mutation that results in blindness from retinitis pigmentosa (a gradual destruction of the retinal cells at the back of the eye). The younger sister was indeed blind, but the visual acuity of her older sibling – whose rhodopsin gene contained exactly the same mutation – was excellent and did not prevent her from working as a night-time truck driver. So when, after prodigious efforts, the ‘ultimate genetic cause’ of retinitis pigmentosa was finally pinned down to a specific defect in a specific gene, it then emerged that the ‘ultimate cause’ was apparently quite compatible with not having the disease at all. Such perverse complexities, inexplicable in the conventional understanding of the mechanism of gene action, abound. They lead to a situation of incomprehensible complexity, where precisely the same genetic disease can be caused by different mutations in several genes, while several different diseases can stem from mutations in a single gene.39
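  (To make the single-base swap concrete, the sketch below – an illustration of the standard genetic code, not anything from the book – uses a toy codon table in Python, restricted to the triplets mentioned above, to translate the normal and sickle-cell codons.)

```python
# Minimal sketch (illustrative only): a toy codon table covering just the
# DNA triplets discussed in the text. It shows how the single-base change
# GAG -> GTG swaps glutamic acid for valine in the haemoglobin protein,
# and that GTA and AGT read, in one of their possible roles, as valine
# and serine.
CODON_TABLE = {
    "GAG": "glutamic acid",  # normal beta-globin codon
    "GTG": "valine",         # sickle-cell variant
    "GTA": "valine",
    "AGT": "serine",
}

def translate(codon: str) -> str:
    """Look up the amino acid a DNA triplet codes for (toy table only)."""
    return CODON_TABLE.get(codon.upper(), "unknown")

print("normal codon GAG ->", translate("GAG"))  # glutamic acid
print("sickle codon GTG ->", translate("GTG"))  # valine
```

  (As the Lewontin passage stresses, a real cell does not apply such a lookup table mechanically: the same triplet may be read as an amino acid, a splicing signal or a spacer depending on context, which is precisely the point of the preceding quotation.)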

  It could, of course, be argued that The New Genetics is currently in the same situation as The Old Genetics was back in 1970 when, following the elucidation of the mechanism of gene action, most molecular biologists felt they had reached the limits of scientific understanding. Might further technical innovations make genetic screening and gene therapy perfectly feasible in the future? Perhaps, but the practical applications of The New Genetics rest on a concept of the nature of the gene – a unidirectional flow of information, ‘DNA makes RNA makes protein’ – that is far too simplistic. Certainly the imagery of DNA as the ‘master molecule, the blueprint from which everything flows’ is vivid enough, but genes by themselves can do nothing without interacting with other genes operating within the context of the whole cell within which they are located. In the words of Philip Gell FRS, Emeritus Professor of Genetics at the University of Birmingham: ‘The heart of the problem lies in the fact that we are dealing not with a chain of causation but with a network that is a system like a spider’s web, in which a perturbation at any point of the web changes the tension of every fibre right back to its anchorage in the blackberry bush. The gap in our knowledge is not merely unbridged, but in principle unbridgeable, and our ignorance will remain ineluctable.’40

  2

  SEDUCED BY THE SOCIAL THEORY

  (i) THE BEGINNING

  There is, as has been suggested, a seductive familiarity in the symmetrical manner with which The New Genetics and The Social Theory sought to explain the causes of disease, evoking the separate contributions of nature (the gene) and nurture (upbringing) in human development.

  The great appeal of The Social Theory is that it both provides an explanation for disease and opens the way to preventing it. The major significance of Bradford Hill’s demonstration in 1950 of the causative role of smoking in lung cancer is that it held out the possibility of a dramatically different and potent alternative approach to the problem of illness: ‘mass prevention’. Self-evidently, public health campaigns to discourage people from smoking were likely to have a practical benefit greater by orders of magnitude, by preventing lung cancer, than the not very successful attempts to treat it with surgery or drugs. Prevention, as everyone knows, is better than cure. Indeed, the many great achievements of the therapeutic revolution, the ‘cornucopia of new drugs’ and the ‘triumphs of technology’, could simply be dispensed with were it possible to identify modifiable causes of common diseases in people’s everyday lives. The problem was that up until the mid-1970s the precise causes of these diseases remained quite unknown. And then suddenly it seemed as if this ignorance was being swept away as, with increasing certainty, it was asserted that, in precisely the same way as abjuring smoking prevents lung cancer, so most cancers together with heart disease and strokes were similarly preventable were people to change their social habits. The rise in the scope and ambition of The Social Theory is really quite extraordinary. Thus a booklet published by the British Medical Association in the late 1960s with the old-fashioned title ‘Doctor’s Orders’ advised readers of the dangers of smoking and the merits of a ‘sensible balanced diet’, and particularly warned against becoming overweight. It cautioned that drinking more than a bottle of wine a day (or its equivalent) could damage the liver. But that was all. By the 1990s, this sensible – if rather obvious – advice had escalated to encompass every aspect of people’s lives. The advice on a ‘sensible balanced diet’ had metamorphosed into the claim that the serious diseases are quite simply the outcome of the specific foods people consume: salt overloads the circulation, pushing up the blood pressure to cause paralysis or death from stroke; saturated fats in dairy foods and meat fur up the arteries to cause untimely death from a heart attack, as well as being ‘implicated’ in many common cancers, including those of the breast and bowel.

  Meanwhile scientific investigations revealed that numerous other unsuspected hazards in people’s lives, including the minuscule quantities of chemicals and pollutants in air and water, were similarly implicated in a whole range of serious illnesses such as leukaemia, stomach cancer, infertility and much else besides. This was medicine on the grand scale of the great sanitary reforms of the nineteenth century, when civil engineering, by providing a clean water supply, eradicated water-borne infectious diseases such as cholera. Now social engineering, by encouraging people to adopt healthy lifestyles, together with a serious assault on the environmental causes of disease, would have a comparably beneficial effect.

  While it is very difficult to evaluate all the relevant evidence for such assertions, their origin undoubtedly can be traced to a powerful and persuasive critique by Professor Thomas McKeown in 1976 of the prevailing view that the progress of medical science could take the credit for the prodigious improvements in health over the preceding 100 years. On the contrary, he argued, doctors might pride themselves on the modern drugs and technology they deployed in their shiny new palaces of disease, but in reality they had played only a minor role in the precipitous fall in infant and maternal mortality and the substantial increase in life expectancy. These achievements could more readily be attributed to social changes: ‘Medical science and its services are misdirected,’ he said, ‘because they rest on an indifference to the external influences and personal behaviour which are the predominant determinants of health.’

  The essence of McKeown’s argument is encompassed in a single graph (see page 354) showing the decline in mortality from tuberculosis of the lungs in England and Wales, from a peak of 4,000 per million of the population in 1838, down to 350 per million in 1945 when the drugs streptomycin and PAS were introduced, and then almost to zero by 1960. Thus 92 per cent of the decline in tuberculosis could be attributed to ‘social factors’ and only 8 per cent to the great miracle of twentieth-century medicine, antibiotics. From this McKeown concluded that ‘medical intervention can be expected to make a relatively small contribution to the prevention of sickness and death’. He conceded that there was ‘no direct evidence’ that social factors were primarily responsible; nonetheless it seemed plausible enough that better nutrition, improved hygiene and housing (and particularly the decline in overcrowding) could account for this massive decline of tuberculosis.1
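  (As a rough check on that arithmetic – a back-of-envelope calculation from the rounded figures quoted above, not one that appears in the book – the share of the total decline, from 4,000 per million down to effectively zero, that had already occurred before streptomycin and PAS arrived in 1945 is

\[
\frac{4000 - 350}{4000 - 0} \approx 0.91,
\]

which tallies with the 92 per cent McKeown attributed to ‘social factors’; the small difference presumably reflects the exact figures behind the graph.)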

  Similar sentiments had been expressed before. If McKeown had merely limited his observations to the past, they would have had little impact, but he used this example of the apparently limited contribution of antibiotics (such as streptomycin) to the decline of tuberculosis to infer that the same principles applied to contemporary medical problems in the 1970s. And certainly the parallel was compelling enough. There were, he claimed, two broad categories of preventable illness: the ‘Diseases of Poverty’, which obviously included infectious diseases like tuberculosis, but also the ‘Diseases of Affluence’, which had become more prevalent with growing prosperity – cancer, strokes and heart disease. So, just as the Diseases of Poverty had declined as society became wealthier, the Diseases of Affluence would diminish were people to adopt a more rigorous and ascetic lifestyle: ‘The diseases associated with affluence are determined by personal behaviour, for example refined foods became widely available from the early nineteenth century . . . sedentary living dates from the introduction of mechanised transport, cigarette smoking on a significant scale has occurred only in recent decades.’

  [Graph: the decline in mortality from tuberculosis of the lungs in England and Wales, 1838–1960. Adapted from Thomas McKeown, The Role of Medicine, Oxford: Blackwell, 1979.]

  The characterisation of these preventable Diseases of Affluence struck a most resonant chord. Thus the following year, in 1977, the Assistant Secretary of Health to the US Government told a Congressional Subcommittee: ‘There is general agreement that the kinds and amounts of food we consume may be the major factor associated with the causes of cancer, circulatory disorders (heart disease and strokes) and other chronic disorders.’ Soon afterwards Sir Richard Doll, former colleague of Sir Austin Bradford Hill and now Professor of Medicine at Oxford, provided his authoritative support. Following an extensive review of the relevant evidence, he had discovered that, leaving aside smoking, patterns of food consumption accounted for 70 per cent of all cancers in the Western world.2 And there was more. Professor Samuel Epstein of the University of Illinois, writing in the journal Nature in 1980, argued that a further 20 per cent of cancers were caused by minute quantities of chemical pollutants in the air and water and were thus also theoretically preventable.3 So, within four years of McKeown propounding his Social Theory, it seemed that he had been well and truly vindicated, and if attention were paid to modifying these ‘social factors’ then more deaths would be prevented every year than there were people dying.4

  It cannot be sufficiently stressed what a radical departure this Social Theory of disease was from the preceding thirty years. The achievements of the previous three decades had been hard-won; the pursuit of the cure for leukaemia had taken the best part of twenty-five years, drawing on specialist scientific expertise from many disciplines and requiring the accidental discovery of no less than four different types of anti-cancer drugs. But now here were distinguished doctors and scientists arguing that the future of medicine lay in a completely different direction: get people to change their diets, control pollution, and many diseases would evaporate like snow on a sunny day. Could it be that simple? Why had no one conceived of the problems of disease in this way before? Certainly, had they done so, much time and energy would have been saved trying to discover treatments for common diseases that now could so easily be prevented.

  It might sound almost too good to be true, but The Social Theory was enthusiastically taken up by many. Politicians and policy-makers, alarmed at the escalating costs of modern medicine, were impressed by McKeown’s arguments that the emphasis on expensive hospital-oriented medical services was misdirected and that, were the emphasis to be shifted towards ‘prevention’, the health services would be not only much more effective but also a lot cheaper into the bargain. Such sentiments were echoed by intelligent observers, as reflected in the BBC’s prestigious Reith Lectures for 1980, given by a young lawyer, Ian Kennedy, committed to the ‘unmasking of medicine’. ‘The elimination of the major infections has served as a star witness for the triumphs of modern medicine over illness,’ he observed, ‘but this has had the unfortunate consequence of creating a “mythology” where the doctor is portrayed as a crusader engaging in holy wars against the enemy of disease . . . The promise of more and more money to wage this war will not improve the quality of health care.’ Rather ‘the whole project’ had to be reoriented towards ‘prevention and health promotion’ – and who could argue with that?5 Since the war, ‘the public health’ had been very much the poor relation of medicine, marginalised by the glamorous successes of open-heart surgery and transplanting kidneys. Here now was the opportunity to change all this and reassert the priority of preventive measures in the finest tradition of the nineteenth-century sanitary reformers. This ‘new’ public health movement, as it styled itself, was to move forward relentlessly from the early 1980s, warning people of the dangers lurking in their food supply and in the air and water. And it was a dynamic process that every year brought evidence of yet further unanticipated hazards of everyday life, while those responsible for health policy felt it necessary to proffer ever more precise advice on how the public should lead their lives.

  And how much of it was true? It is necessary to bear a few general points in mind. First, the radicalism of The Social Theory is that, as suggested, it goes far beyond the self-evident truism that those who pursue a ‘sober lifestyle’ – drinking sensibly, abjuring tobacco, avoiding obesity and staying fit – are likely to live longer and healthier lives. Rather, contentiously, it makes specific claims about the causative role of commonly consumed foods and of environmental pollution as major factors in common illnesses. Next, we are citizens of a society in which, uniquely in history, most people now live out their natural lifespan to die from diseases strongly determined by ageing. Thus the putative gains from ‘prevention’ (if real) are likely to be quite small. Further, the human organism could not survive if its physiological functions such as blood pressure (implicated in stroke) or the level of cholesterol (implicated in heart disease) varied widely in response to changes in the amount and type of food consumed. Rather, these functions are protected by the ‘milieu intérieur’, a multiplicity of different feedback mechanisms that combine to ensure a ‘steady state’. Hence truly substantial changes in the pattern of food consumption are required to alter them and so to influence the types of disease in which they have been implicated.

  Finally, man, as the end product of hundreds of millions of years of evolution, is highly successful as a species by virtue of his phenomenal adaptability. Humans can and do live and prosper in a bewildering variety of different habitats, from the plains of Africa to the Arctic wastes. No other species has the same facility, so it might seem improbable that, for some reason, right at the end of the twentieth century, subtle changes in the pattern of food consumption should cause lethal diseases.

 
