Taking the Medicine: A Short History of Medicine’s Beautiful Idea, and our Difficulty Swallowing It
In order to protect patients from untested and experimental treatments, like thalidomide, governments around the world have set up rules. Their net effect is to make it exceedingly difficult, as well as expensive, to do any proper trials. The result is that we are not as protected as we need to be from medical ignorance.
A good example of the difficulties comes from the use of corticosteroids to treat head injuries. Corticosteroids are released by the adrenal glands in times of stress, including illness and injury. One of their actions in the body is to subdue inflammation – to keep the body’s inflammatory response from becoming so overwhelming that it is self-destructive. After a traumatic brain injury the affected area swells, exactly as your ankle does following a sprain or your knee after knocking it against a table edge. The swelling is an inflammatory process, the effect of your body directing cells towards the injured brain, cells capable of fighting off infection and helping the tissue rebuild.
All of this is a generally good thing.1 Room in the skull, however, is limited. A swollen brain compresses itself. For a long time doctors have worried that the inflammatory response to a bruised brain might make the injury worse. From the early 1970s they began giving corticosteroids to reduce this response, and subdue swelling within the skull. The treatment was supported by theory and, later, by animal experiments. A study from 1985 showed that corticosteroids reduced damage to neuronal cells in deliberately injured mice. Later, studies were done in humans, belatedly attempting to see if this effect on cells, a soft end point, translated into a meaningful effect on human recovery and survival, the ultimate hard end points. By the end of the twentieth century, a number of randomised placebo-controlled trials had been carried out on the effects of steroids on head injuries. They were small but, taken together, they suggested a benefit. A systematic review of the literature in 1997 found that altogether there had been thirteen trials, covering 2,073 patients. Collating the statistics, steroids reduced the risk of death by 1.8 per cent. It sounds like a small amount, and it is, but head injuries are common. An editorial in the British Medical Journal in 2000 put the figure into perspective. A million people die each year from head injuries, it pointed out, and since most of those come from car crashes, and car usage is going up, the number is likely to rise. A treatment that reduced mortality by 2 per cent would save 20,000 lives a year.
The use of steroids for head injuries was not consistent. In 1995 an American study showed that they were used in two thirds of critical care centres; a British survey in 1996 found a lower rate, with only half of the UK’s neurosurgical intensive care units giving the drug. Clearly doubts remained among some doctors, and the combined evidence from the thirteen separate trials suggested there might be good reason for them. Although the benefit averaged out at a 1.8 per cent reduction in deaths, that average hid a wide range of possible impacts, and it was impossible to be sufficiently confident about what the drug did. Allowing for the play of chance, the data appeared to fit either with a treatment that was almost three times as good as that average (reducing deaths by 6 per cent), or with a treatment that had no effect whatsoever, or even with one that actually increased deaths (by up to about 3 per cent).
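The width of that range of possibilities follows directly from the size of the trials. As a rough sketch in Python – with the per-arm patient and death counts invented purely to match the pooled summary of 2,073 patients and a 1.8 per cent average reduction, since the review’s actual counts are not given here – a 95 per cent confidence interval for the difference in death rates spans both benefit and harm:

```python
from math import sqrt

# Illustrative numbers only: the 1997 review pooled 13 trials and 2,073
# patients, with an average 1.8% absolute reduction in deaths. The exact
# per-arm counts below are assumptions chosen to match that summary.
n_steroid, deaths_steroid = 1036, 188   # ~18.1% mortality (assumed)
n_placebo, deaths_placebo = 1037, 207   # ~20.0% mortality (assumed)

p1 = deaths_steroid / n_steroid
p2 = deaths_placebo / n_placebo
risk_reduction = p2 - p1                # absolute risk reduction

# Standard error of a difference in proportions, and a 95% interval
se = sqrt(p1 * (1 - p1) / n_steroid + p2 * (1 - p2) / n_placebo)
low, high = risk_reduction - 1.96 * se, risk_reduction + 1.96 * se

print(f"risk reduction: {risk_reduction:.1%} (95% CI {low:.1%} to {high:1.1%})")
```

With only a couple of thousand patients, the interval runs from a worthwhile benefit through to outright harm – which is exactly the uncertainty that left doctors divided.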
The subsequent trial proved a vast amount of work. Some people felt it should not go ahead since the data already showed evidence of likely benefit – randomising sick people to a placebo seemed unethical. Others, convinced for less good reason that the effects of steroids were likely to be bad, felt it was unethical to give anything other than a placebo. The trial, known as CRASH, eventually enrolled 10,008 patients at 238 different hospitals across the world. Informed consent was not possible, since the patients concerned were too sick to give it. That posed problems in getting the trial protocol accepted by the national, regional and hospital-specific ethical review boards. The organisers argued that informed consent was not relevant – all that was needed was for the treating doctor to be ‘substantially uncertain’ about whether steroids were likely to help the patient. If this were so, then ‘the doctor in charge should take responsibility for entering such patients [into the trial], just as they would take responsibility for choosing other treatments’.
The trial looked at hard end points, both two weeks and six months after a patient’s injuries. Data from a fortnight after the treatment had been given showed that steroids had a marked effect, and one that was not likely to have occurred just from chance. Death rates were increased by 3.2 per cent among the patients who received the steroids, and the large number of patients in the trial meant that there was less than a one in 10,000 chance that the result could be due to a run of bad luck in those given the drug, and good luck in those getting the placebo. Six months after their injuries, the absolute risk of death in the steroid-treated group was 3.4 per cent higher than in those who received the placebos. The CRASH trial was an overdue evaluation of an existing treatment, and one whose conclusions are now saving lives by protecting patients from a drug that was killing them.2
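That ‘one in 10,000’ figure can be reproduced with a standard two-proportion z-test. A minimal sketch, with per-arm death counts assumed from the reported rates (21.1 versus 17.9 per cent across roughly 10,008 patients) rather than taken from the trial report itself:

```python
from math import sqrt, erfc

# Counts are assumptions scaled to CRASH's reported 2-week mortality
# (21.1% on steroids vs 17.9% on placebo), ~5,004 patients per arm.
n = 5004
deaths_steroid, deaths_placebo = 1056, 896

p1, p2 = deaths_steroid / n, deaths_placebo / n
diff = p1 - p2                                  # absolute excess mortality

# Two-proportion z-test: difference divided by its standard error,
# then a two-sided p-value from the normal approximation.
se = sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
z = diff / se
p_value = erfc(z / sqrt(2))

print(f"excess deaths: {diff:.1%}, z = {z:.2f}, p = {p_value:.1e}")
```

The p-value comes out at a few in 100,000 – comfortably below the one-in-10,000 threshold the trialists quoted.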
Properly testing a therapy is phenomenally difficult, and the difficulties are increased by the tortuous regulatory processes set up to protect patients from experimental therapies. At the same time it remains remarkably easy for doctors to prescribe treatments that have never been properly tested at all, just as they did with steroids in head injuries for over thirty years, killing tens of thousands as a result. Regulation currently makes it far easier for a doctor to give an untested drug to all of their patients – without formal consent, monitoring or ethics approvals – than to give it to only half of them as part of an organised and methodical trial. By so rigorously protecting ourselves from experimental treatments, we are opting instead to take many of our medicines on the basis of guesswork alone. Some of them will be killing us.
* * *
1 The advice to put an ice-pack on a bruise or a sprain is aimed at reducing the pain and the swelling. No one has ever done the tests necessary to determine whether it slows healing as a result.
2 A letter to the Lancet in 2005 showed the caution with which some in the medical profession were able to treat even the most persuasive of findings. Combining the results from CRASH with the earlier studies gave a one in a thousand chance that this apparent harm might be an illusion based on a run of luck. ‘The apparent excess of deaths in these studies could be largely or wholly due to an extreme play of chance . . . the accompanying Comment [editorial] should not have described the apparent increase in mortality as indisputable.’
20 Aspirin and the Heart
THE FIRST WORLD War affected the fortunes of drugs as well as nations. In Britain, dachshunds were stoned in the street because of their German heritage – and Aspirin became briefly unpopular for the same reason. There was no British patent for Bayer to lose, since the drug was first made by the Frenchman Charles Gerhardt, but there was still a trademark. The British government took the opportunity to seize it as a spoil of war and hand it out freely. When people later realised that they missed their Aspirin, they found they could now buy it without the capital letter – and from British companies, too. Across the Atlantic, supplies of Aspirin from Germany were halted by the Royal Navy. The British naval blockade did not go down well in America; at least until the German sinking of the Lusitania, with consequent loss of American lives, in May 1915.
Being cut off from German industry pushed America into developing her own. The phenol needed to manufacture aspirin, being a key ingredient of battlefield explosives, was hard to come by and Bayer’s American subsidiary, desperate to keep production going during the war’s shortages, struggled to get enough. Their solution was to employ Hugo Schweitzer, a coal tar chemist, an advocate for German interests, and a fully paid-up spy. Schweitzer contacted Thomas Edison, who was also failing to get hold of the phenol he needed to make phonograph records. Blessed with more powers of invention than most men, and partly funded by Schweitzer, Edison gave up on buying the phenol directly and set out to manufacture it himself. He used coal-tar benzene as his starting point.
 
Schweitzer’s deal with Edison was good for almost everyone. Using money from the German secret services, Schweitzer secured a promise from Edison of 3 tons of phenol a day (roughly a quarter of his total production). The phenol would go to Bayer’s US arm, who would use it for aspirin. The German government’s satisfaction was in ensuring that the phenol was thereby not free to be bought up by the British for weapons.
The month after Schweitzer agreed his deal with Edison, it was destroyed. In a plot definitely unworthy of James Bond, a German spy carrying sensitive documents managed to forget his briefcase on a train. He returned to look for it, but an American secret service agent following him had picked it up. The briefcase contained details of the phenol scheme as well as lists of German sympathisers and sabotage plans. Many of the details were made public. Thomas Edison cancelled his deal with Schweitzer and sold the phenol to the American military instead, and Bayer’s popularity in the country plummeted. By the end of the war their American branch had been seized and sold off.
From 1918 to 1919, when the world could have been recovering from the war, there was influenza. It infected a third of the global population. Abnormally severe, the outbreak was also peculiar in another way. Influenza, like other infections, tends to pick off the physically vulnerable – the very young and very old. This time was an exception. Of the millions who died, half were at the physical peak of their lives.
Histories of aspirin record the effectiveness of the drug in dealing with influenza. Aspirin, wrote Diarmuid Jeffreys in 2004, ‘helped millions of people in their battle with the virus and undoubtedly saved many lives as a result’. Bayer today offers the same information, stating that aspirin saved ‘countless lives during major flu epidemics in Europe’. As evidence, Bayer cite a German newspaper: ‘As soon as you feel ill, take to your bed with a hot water bottle at your feet, drink hot camomile tea, take three Aspirin tablets a day. If you follow these rules, you’ll be fit and well again in no more than a few days.’
The majority of deaths from influenza came from pneumonia, a bacterial infection of the lungs. That was not the influenza virus itself, but another germ, leaping at opportunity. Those already struck down by one illness are less capable of fighting off a second. Other sufferers died from lung damage directly caused by the virus, the air sacs where gas exchange normally occurs swelling with inflammation and filling with blood.
Against this background, it is worth remembering that aspirin makes people feel better by lowering their fevers, but that fevers are part of the body’s mechanism for fighting off infection, whether viral or bacterial. The assumption that aspirin helped people with influenza is not based on any trial evidence. People felt ill and wanted to take a medicine. Doctors saw that they were feeling ill and wanted to do something to make them better. Aspirin was the mutual solution.
The estimates of the 1918 influenza pandemic suggest it infected half a billion people. It was one of the most lethal disease outbreaks in history, yet those infected were far more likely to survive than to die. It probably killed fifty to a hundred million, meaning the chance of dying if you fell ill with it was 10 to 20 per cent. Imagine for a moment a doctor seeing a vast number of patients. Suppose that they managed to treat 2,000 different people suffering from influenza, and gave aspirin to half. Altogether the doctor could expect to see between 200 and 400 deaths out of those 2,000 people. Now imagine that aspirin changed a person’s chance of survival by 20 per cent – for better or for worse. Take the higher death rate: from 200 deaths in each group of 1,000, a difference appears. Those who get the aspirin might benefit – rather than 200 dying, perhaps 160 do. A difference of forty deaths between two groups of 1,000 people each.
It would be almost impossible for any doctor to personally treat and keep track of two such large groups, one treated and one untreated. Even if they had done, a failure to randomise their patients to the different options would have betrayed them. The smallest tendency to give aspirin to certain people could have meant that the two groups became profoundly different. (What if the doctor reserved the ‘best’ therapy for the sickest? A treatment with a genuinely good effect might appear to be lethal.) The use of aspirin for influenza was adopted without any sort of reliable trial. Doctors and patients alike were content to believe that it worked. Bayer and historians have been content to follow them.
A drug’s effects, even if they are moderately large, can almost never be reliably figured out on the basis of personal experience. If that seems like a repeated point, then it is, but it was also one that history continued to show people just could not get their heads round. The assumption is that aspirin saved lives during 1918 and 1919. It might just as easily have cost them. We have no idea.
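The point about chance can be made concrete with a small simulation. Taking nothing but the numbers above – two groups of 1,000 patients, 20 per cent mortality, and no treatment effect at all – pure luck routinely opens up gaps comparable to the forty-death difference that a genuinely useful drug would produce:

```python
import random

random.seed(1)

def deaths(n, mortality):
    """Simulated deaths among n patients, each dying with probability `mortality`."""
    return sum(random.random() < mortality for _ in range(n))

# Two groups of 1,000 flu patients with identical 20 per cent mortality:
# no treatment effect at all, only the play of chance.
diffs = [deaths(1000, 0.20) - deaths(1000, 0.20) for _ in range(2000)]

# How often does luck alone open up a gap half the size (20 deaths),
# or the full size (40 deaths), of the imagined real effect?
gap20 = sum(abs(d) >= 20 for d in diffs) / len(diffs)
gap40 = sum(abs(d) >= 40 for d in diffs) / len(diffs)
print(f"gap of 20+ deaths by chance alone: {gap20:.0%}; 40+ deaths: {gap40:.0%}")
```

Chance alone produces a twenty-death gap roughly a quarter of the time, and occasionally the full forty. No doctor’s unaided tally of outcomes could distinguish a helpful drug from a useless or harmful one against noise of that size.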
Increased global competition after the First World War prompted the German pharmaceutical industry to pay even more attention to acting as a co-ordinated conglomerate. I. G. Farben was reincarnated in a bigger and more inclusive form than ever before. It went on to subsidise the Nazis (bureaucratically as well as financially), to make eager use of slave labour, and to produce the Zyklon B that was used in the gas chambers. I. G. Farben built a plant at Auschwitz, with the intention of using the labour of those whom its gases were not killing. At the end of the war, twenty-three I. G. Farben directors were tried for war crimes. Eleven were found guilty. Fritz ter Meer, head of the plant associated with Auschwitz, defended himself on the basis that ‘concentration camp victims of scientific experiments were not subject to unacceptable suffering since they were going to die anyway’.
Bayer’s own account tells how, after it became a part of I.G. Farben, ‘the Bayer tradition lived on . . . and the Bayer cross was used as the trademark for all of the I.G.’s pharmaceutical products.’ In 1946, while still part of I.G. Farben, it began to re-establish its international presence as Bayer: ‘It was clearly vital to rebuild Bayer’s foreign business,’ says the company. When I.G. Farben was broken up in 1951, Bayer’s separate legal existence began again, based around the same four factory sites. Despite this clear perception of historical continuities much is missing from the company’s summary. There is no acknowledgement on Bayer’s website that the ‘four year plans’ designed to get the German economy ready for war were written by Farben managers for the Nazi government. It does not mention the war crime trials, nor I.G. Farben’s work camp at Auschwitz and funding of Josef Mengele’s experiments on imprisoned Jews. ‘I feel like I am in paradise,’ said an I.G. Farben employee, Dr Helmuth Vetter, speaking of the opportunities that Farben’s sponsorship of Auschwitz gave him. He spent 1942 to 1944 injecting bacteria and experimental drugs into concentration camp prisoners, actions for which he was executed as a war criminal.
Eva Mozes was one of the victims of Dr Mengele’s trials. Unlike the majority, she survived. ‘Emotionally I have forgiven the Nazis,’ she said, ‘but forgiveness does not absolve any perpetrator from taking responsibility for their actions . . . I know that the ones who ran Bayer fifty years ago are all dead now. But the company today should have the courage and decency to admit their past.’ On the grounds that during the war years Bayer was a part of I.G. Farben, Bayer today denies responsibility for what happened during the war. It did not technically exist. The payments and acknowledgements that it has made have simply been for ‘goodwill’, and not out of any obligation.
Throughout the war years, I.G. Farben continued to use the Bayer name. After his release from prison in 1956, Fritz ter Meer was appointed head of Bayer’s supervisory board. The company honours his memory by administering a scholarship fund in his name and laying wreaths at his grave.
Heart disease in the 1960s was on the increase. Richard Doll and Austin Bradford Hill showed that smoking was partly to blame. Blood pressure seemed to be a factor, although no one was quite sure why it was so high in so many people. Fats in the blood were also implicated. A large randomised controlled trial of over 6,000 men looked at whether a range of drugs, hormones or vitamins could prevent their having second heart attacks. To their regret, the trialists found that their interventions did more harm than good. They had to step away from recommending what had seemed, when they set out, to be perfectly reasonable treatments.
This willingness to test and reject theories, rather than adopting them because of their intuitive appeal, was proof of medicine’s step forward. But with regard to heart disease, while this more humble approach was protecting people from new drugs that actually made them worse, it was not yet managing to make them better. There were only two seemingly effective drugs for heart disease.
William Withering, in late eighteenth-century Birmingham, decided that foxglove extracts were the active ingredient of a complicated local remedy. Digitalis – after the Latin name of the plant – appeared to treat the ‘dropsy’. That was the contemporary name for the swelling of the feet, ankles and legs that came on when the heart beat too weakly and the circulation backed up. Digoxin, the modern chemical derived from digitalis, does the same as its herbal predecessor. In response to it, people urinate out the extra fluid that otherwise accumulates in whichever bits of their bodies are most subject to the effects of gravity.
Digitalis and digoxin were regarded as life-savers. Not until 1997 was a reliable trial set up to assess the effects of the active drug. Before then doctors based their ideas on their clinical experience and intuition, and knew that digoxin saved lives. They could see it working. The 1997 trial showed that doctors had been wrong on this point for two centuries. People who took digoxin lived no longer than those who swallowed a placebo. That was not because it was less ‘natural’ than chewing on a foxglove, but simply because the effect of the active compound on the human body was not as miraculous as intuition suggested. Digoxin did have some benefits – the trial showed that some of the people who took it avoided periods of hospitalisation as a result, perhaps because it helped their hearts contract more forcefully – but it brought harms as well, making patients feel weak and sick and nauseated. The trained judgement of generations of physicians, when compared with the evidence from a randomised controlled trial, was found wanting.