These findings have a special historical significance as, prior to the 1960s, the absence of effective treatment for hypertension had a profound effect not just on individuals but on the fate of whole nations. Both the President of the United States, Franklin D. Roosevelt, and the Russian leader, Josef Stalin, had raised blood pressure, with devastating consequences for world politics in the post-war era. On 12 April 1945 Franklin Roosevelt died of a cerebral haemorrhage which his physician, Admiral Ross McIntire, said had ‘come out of the clear sky’, as only a few days earlier the President had apparently been ‘given a thorough examination by eight physicians including some of the most eminent in the country and pronounced physically sound in every way’.2 The admiral was lying; Roosevelt had been diagnosed as having hypertension almost ten years earlier, which by the time of the Yalta conference with Churchill and Stalin in February 1945 (just eight weeks before his death) had caused so much damage to his heart and kidneys that he was ‘a dying man’. At this crucial moment in world politics, Roosevelt’s ailing health so impaired his political judgement as to produce ‘a deadly hiatus’ in the leadership of the United States that would lead to ‘the betrayal of the Poles, the imposition of communist governments in Eastern Europe, the Czechoslovakian coup and – on the other side of the world – the loss of China and the invasion of South Korea’.3
Eight years later, in 1953, Josef Stalin also fell victim to a stroke, at the age of seventy-three. Again, the history of the post-war world would have been very different if his hypertension had been treatable with appropriate medication. He could well have lived on for another decade, up to and including the Cuban missile crisis, which might then have had a very different outcome, culminating in the Soviet Union, under his demented leadership, launching a full-scale nuclear war against the United States. One way or another, hypertension has had a crucial impact on the fate of nations and the survival of the human race. So how did hypertension become treatable?
The blood pressure is the pressure generated by the contraction of the heart muscle to pump blood into the arteries and around the circulation. It is determined by two factors. The first is the volume of blood in the arteries (the higher the volume, the greater the pressure needed to pump it round the circulation), and the second is the diameter of the vessels through which the blood travels (the narrower the arteries, the greater the pumping pressure that is required). Hence, prior to the discovery of effective drugs, the two ways of lowering the blood pressure were either to reduce the volume of fluid in the circulatory system or to dilate the blood vessels.
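In physiological terms this relationship is often summarised by the standard textbook approximation (a simplification added here for clarity, not a formulation from this chapter):

$$\text{mean arterial pressure} \;\approx\; \text{cardiac output} \times \text{peripheral resistance}$$

so the pressure falls either when there is less fluid to be pumped round or when the vessels widen and the resistance drops.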
In 1944 a German-born physician at Duke University, Dr Walter Kempner, reported that the blood pressure could be returned to ‘normal or almost normal’ by reducing the volume of fluid in the circulatory system with a rice/fruit/sugar diet: ‘The rice is boiled or steamed in plain water without salt, milk or fat. All fruit juices and fruit are allowed with the exception of nuts, dates, avocados and any kind of dried or tinned fruit. No water is allowed and the fluid intake is restricted to one litre of fruit juice per day.’4 The problem, as can be imagined, was that the diet itself was so unpalatable that patients could not stick with it. ‘It is insipid, unappetising and monotonous and demands great care in its preparation . . . it is quite impracticable for a member of a large household with minimal domestic help . . . its deadly monotony tends to make it intolerable unless the physician can infuse into the patient some of the asceticism of the religious zealot.’5 Nor indeed, as it subsequently emerged, was Kempner’s diet as effective as he claimed, for when other doctors tried to replicate his results they were less successful. ‘The change of blood pressure does not exceed the random spontaneous variation to be anticipated,’ observed Dr Herbert Chasis of University College Hospital in 1950.6
The second approach to treating hypertension – widening the arteries so that less pressure is required to push the blood around the body – involved an operation to cut the nerves that control the diameter of the arteries in the legs (a bilateral lumbar sympathectomy). This was a major procedure and so was limited to those who were still fairly fit and young.7
The limitations of these treatments are self-evident, so in the post-war years research chemists in the burgeoning pharmaceutical industry started to look for chemical compounds that might work in similar ways. The first, pentaquine – a drug originally used in the treatment of malaria – was introduced in 1947, to be followed by several others – hydrallazine, reserpine, guanethidine and methyldopa. All these drugs were, to a greater or lesser extent, effective, but their widespread use was constrained by their side-effects. Most people with raised blood pressure feel completely well, so the prospect of taking drugs that caused variously a dry mouth, constipation, blurring of vision and impotence was unacceptable, even if they might prevent a potentially catastrophic stroke in the future. Thus for hypertension to become treatable, the drugs would have to interfere so little with people’s lives that they would be prepared to take them indefinitely. The two that eventually fulfilled these criteria were the diuretic (or water pill) chlorothiazide, which lowers the blood pressure by reducing the volume of blood in the circulation, and the ‘beta blocker’ propranolol, which theoretically should have raised the blood pressure by narrowing the diameter of the arteries, but turned out to lower it instead.
The story of chlorothiazide’s discovery is as follows: soon after the development in the 1930s of the sulphonamides for the treatment of bacterial infection, some patients reported the unusual side-effect of passing large amounts of urine. In 1949, Dr William Schwartz put this side-effect to practical use, giving sulphonamides to three patients with heart failure, a condition in which fluid accumulates in the lungs and causes shortness of breath. The daily volume of their urine soared, the fluid in their lungs dispersed and their breathlessness and other symptoms improved. Regrettably, Dr Schwartz observed the drug was ‘too toxic for prolonged or routine use’.8 However, a research chemist, Karl H. Beyer, realised that if he could find a related compound that had the same properties but was non-toxic, it could, by reducing the volume of blood in the circulation, be the long-awaited ‘magic cure’ for hypertension. The chemistry involved was sophisticated but essentially routine: take the sulphonamide compound, modify it in some way, give it to dogs and see whether it increases the amount of urine they produce. ‘It seemed only a matter of time and effort until we found what we were looking for,’ which they did in the form of chlorothiazide, ‘the best-behaved compound we had ever worked on from the standpoint of safety and efficacy’.9 When the drug was given to ten hypertensive patients, their blood pressure fell back to normal levels within a couple of days. ‘Side-effects were mild and infrequent.’10
The second drug, propranolol, is almost unique in the annals of drug discovery in being purposefully designed rather than being discovered by accident. Its origin and the name of the group of drugs to which it belongs – the ‘beta blockers’ – lay in the phenomenon whereby the hormone adrenaline has different effects on different tissues. Its action on the beta receptors in the blood vessels causes them to dilate, while its action on those in the heart increases the rate and forcefulness of its contractions.11 In the mid-1950s, the British research chemist (and subsequent Nobel Prize winner) James Black perceived the enormous therapeutic potential of antagonising this effect for those with the heart condition angina, though with the theoretical drawback that this would also cause the arteries to constrict, which, as has been noted, would necessarily raise the blood pressure.12 James Black eventually came up with propranolol which, as he had predicted, markedly reduced the symptoms of angina but which, astonishingly, had the reverse effect on blood pressure to that predicted. Rather than rising, the blood pressure fell.13 It is not clear precisely who saw that this paradoxical – and quite unexpected – effect could be used in the treatment of hypertension, but as it turned out propranolol worked very well.
These two drugs, chlorothiazide and propranolol, transformed the treatment of hypertension. Patients with raised blood pressure no longer had to go on the unpalatable rice and fruit diet, or undergo a bilateral sympathectomy to cut the nerves to their legs, or take drugs with unpleasant side-effects. Instead they needed to take, singly or in combination, one or other of these drugs every day. Subsequently further well-tolerated types of drug became available, but the crucial point was that by the mid-1960s hypertension had become a treatable disease.14
The very ease with which hypertension could now be treated posed another problem. Drug treatment markedly reduces the incidence of strokes in those with severely raised blood pressure, but the situation is much less clear-cut in those whose blood pressure is only marginally elevated, so-called ‘mild’ hypertension. The question of what level of blood pressure merited treatment fuelled a decade-long, public and often acrimonious exchange between two of British medicine’s leading figures: Robert (Lord) Platt, Professor of Medicine at Manchester University, and Sir George Pickering, Regius Professor of Medicine at Oxford. In essence, Platt maintained that hypertension was a specific illness caused by one or several genes, and that it was possible and indeed necessary to distinguish between those with a ‘normal’ blood pressure and those with an ‘abnormal’ one, and to treat only the latter group. Not so, responded Sir George Pickering: hypertension was not an ‘illness’ in the commonly accepted sense of the term; rather, there was a continuous gradient of risk relating blood pressure to the chance of a stroke. Clearly the higher the blood pressure the greater that risk became, but any cut-off point between those who needed treatment and those who did not, between the ‘normal’ and the ‘abnormal’, was arbitrary. Hypertension was thus not an illness but a matter of opinion.15
This argument might seem esoteric but its implications are not. If Pickering were right, then logically anyone whose blood pressure was higher than ‘average’ should benefit from having it lowered, which led, following one important study, to the claim that 24 million United States citizens had ‘hypertension’ that was either ‘undetected, untreated, or inadequately treated’. This was clearly good news for the pharmaceutical industry, which had sponsored the study, for the prospect of finding 24 million as yet undiscovered patients and treating them with regular medication for life was nothing other than a gold mine. The catch was that the evidence of benefit from treating the millions with ‘mild’ hypertension was less than compelling.16
There are two predictable adverse consequences of telling someone his raised blood pressure needs treatment. The first is to make him worry about his health and be more aware of his mortality. Such fears are likely to be hidden and so difficult to measure, but a study of 5,000 steelworkers in 1978 found ‘dramatically increased rates of absenteeism where steelworkers are labelled hypertensive’. Those ‘labelled’ as having raised blood pressure tended to see themselves as being vulnerable to having a stroke, which naturally encouraged their adoption of a ‘sick role’.17 The second is that, no matter how relatively free of side-effects chlorothiazide and propranolol (and similar drugs) might be, they still prove unacceptable to a small percentage of those to whom they are prescribed. Both drugs cause lethargy, dizziness and headache in 5 per cent of those taking them and – in men – impotence in 20 per cent and 6 per cent respectively.18 When these drugs are being taken by millions of people the cumulative burden of these adverse effects is ‘not inconsiderable’. Is it worth it?
Back in 1967 a study of US military veterans had readily demonstrated that treating markedly raised blood pressure for only a year could dramatically reduce the risk of stroke. But the results of treating those with ‘mild’ hypertension, as it turned out, were much more equivocal: 850 people would have to be treated for a year to prevent just one stroke, so in any one year 849 out of the 850 would derive no benefit from taking medication.19
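In modern terminology this is the ‘number needed to treat’. As a rough worked illustration, using only the figure of 850 quoted above (the underlying trial data are not given here):

$$\text{NNT} = \frac{1}{\text{absolute risk reduction}} \quad\Rightarrow\quad \text{absolute risk reduction} = \frac{1}{850} \approx 0.12\% \text{ per year}$$

In other words, treating ‘mild’ hypertension lowered an individual’s yearly chance of a stroke by barely one-tenth of one per cent.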
Nonetheless, the ‘Pickering paradigm’ of ‘the lower the blood pressure the better’ prevailed and, when hypertension is defined as any level higher than the average, then the obvious implication is that surreal numbers of people need to take blood-pressure-lowering medication. By 1996 more than one in three Americans between the ages of thirty-five and seventy-four were taking medication to lower their blood pressure, generating an annual revenue for the pharmaceutical industry of $6 billion.20
In the 1990s, the same argument was to be repeated, but this time with cholesterol, where again the benefits of treating those with high levels were extrapolated downwards. The notion of ‘the lower the cholesterol the better’ prevailed and millions were started on cholesterol-lowering drugs. And so it is that the great – and very desirable – project of preventing strokes by treating hypertension has enormously expanded the scope of medicine from treating the sick to finding, in the majority who are well, ‘illnesses’ they do not necessarily have, and treating them at enormous cost.
10
1971: CURING CHILDHOOD CANCER
In medicine – as in life – some problems are more complex than others and, science being the art of the soluble, it is only sensible to leave the apparently intractable aside, hoping perhaps that at some time in the future something will happen to open the doors to their resolution. It is, nonetheless, a distinctive feature of post-war medicine that many doctors and scientists attempted, against all the odds, to take on ‘the insoluble’. Here the long march towards a cure for childhood cancer, and Acute Lymphoblastic Leukaemia (ALL) in particular, stands in a league of its own. Whereas the effectiveness of the other discoveries of the post-war years – such as antibiotics and steroids – was immediately apparent, the anti-cancer drugs were different. They worked, but not very well, prolonging the life of a child by, at the most, a few months. So the cure of ALL, as will be seen, required not just one drug discovery but four quite separate ones used in combination. Further, it was not sufficient merely to dispense these drugs and observe what happened; rather, a vast intellectual machine had to be created to assess the outcome of different treatment combinations and reveal the small incremental gains that eventually made ALL a treatable disease. Finally, the patients involved were children and the drugs very toxic. It needed an extraordinary sense of purpose to persist when most doctors believed that it was immoral to inflict nasty drugs on children merely to prolong a lethal illness by a few months. For all these reasons the cure of ALL ranks as the most impressive achievement of the post-war years.
Acute Lymphoblastic Leukaemia is a malignant proliferation of lymphoblasts (precursors of the white blood cells in the bone marrow). Those afflicted – usually children around the age of five or six – died within three months from a combination of symptoms caused by this lymphoblastic proliferation packing out the bone marrow, thus preventing the formation of the other components of the blood: the reduction of red blood cells resulted in anaemia; the paucity of platelets caused haemorrhage; and the absence of normal white blood cells created a predisposition to infection. The children were pale and weak and short of breath because of the anaemia, they bruised easily because of the low platelets, and the slightest injury could precipitate a major haemorrhage into a joint or even the brain. It was, however, their vulnerability to infection that posed the greatest risk, as they were defenceless against the bacteria that cause meningitis or septicaemia. The ‘inevitable’ could be postponed for a month or two, with blood transfusions to correct the anaemia and antibiotics to treat these infections. But so dismal was the prognosis that some doctors even disputed whether these supportive treatments should be given. Professor David Galton of London’s Hammersmith Hospital summarises the prevailing pessimistic view at that time: ‘Children were sent home as soon as they were discovered to have the disease. Even blood transfusions might be withheld on the grounds that it only kept the child alive to suffer more in the last few weeks.’1
From the first attempts to treat ALL in 1945 it took more than twenty-five years before a truly awesome combination of chemotherapy (or ‘chemo’) with cytotoxic (cell-killing) drugs and radiotherapy was shown to be capable of curing the disease.2 The origins and rationale of this treatment will be considered in detail later, but in broad outline it took the following form. The treatment started with a massive assault on the leukaemic cells in the bone marrow, with high doses of steroids and the cytotoxic drug vincristine, lasting six weeks. This was followed by a further week of daily injections of a cocktail of three more cytotoxic drugs: 6-mercaptopurine (6-MP), methotrexate (MTX) and cyclophosphamide. Next came two weeks of radiation treatment directly to the brain, and five doses of MTX injected directly into the spinal fluid. This regimen, which eliminated the leukaemic cells from the bloodstream in 90 per cent of the children, was called ‘remission induction’ (i.e. it induced a ‘remission’ of the disease) and was followed by ‘maintenance therapy’: two years of continuing treatment to keep the bone marrow free of leukaemic cells – weekly injections, at lower doses, of the cocktail of three cytotoxic drugs already mentioned, interspersed every ten weeks with fourteen-day ‘pulses’ of the ‘induction’ regime (steroids and vincristine).
It is impossible to convey the physical and psychological trauma this regime imposed on the young patients and their parents. Each dose of treatment was followed by nausea and vomiting of such severity that many children were unable to eat, became malnourished, stopped growing and ceased to put on weight. Then there were the side-effects of the drugs themselves, which poisoned not only the leukaemic cells but also the healthy tissues of the body: the children’s hair fell out, their mouths were filled with painful ulcers, and they developed chronic diarrhoea and cystitis. It is not for nothing that chemo has been described as ‘bottled death’.3
This terrible burden of physical suffering would have been just acceptable had it resulted in a cure, but there was absolutely no certainty that this would be the case. Prior to the introduction of this particular regime of treatment in 1967, a survey of nearly 1,000 children treated over the previous two decades found that only two could be described as having been cured – having survived for more than five years – and one of these subsequently relapsed and died.4 Looking back now, it seems astonishing that those responsible for devising this highly toxic regime, Dr Donald Pinkel and his colleagues at St Jude’s Hospital in Memphis, should have imposed it on these desperately ill children, not least because of the profound scepticism of his professional colleagues that ‘success’ – a major improvement in the prospects of survival – was achievable. This ambivalence is well caught by the contribution of a fellow specialist, Dr Wolf Zuelzer, a paediatrician at the children’s hospital in Michigan, to an international conference on leukaemia: ‘The side-effects of treatment outweigh those directly attributable to the disease,’ he observed, and after reviewing recent progress he could only express the hope that ‘others might find grounds for greater optimism than I have been able to distil from the facts now at hand’.5