Taking the Medicine: A Short History of Medicine’s Beautiful Idea, and our Difficulty Swallowing It


by Druin Burch


  So not only were doctors incapable of asking for hard evidence, they were also unaware of the degree to which their opinions were formed for them by drug company marketing. Regulatory bodies were not making sure that drugs were reliably and accurately tested, and doctors were not noticing. Given the failure of the government and the medical profession to do any better, it seems slightly unreasonable to blame the drug companies for taking advantage of the situation.

  This held proper research back for years. Eventually though, starting in 1987, a large-scale, double-blind, randomised controlled trial of these heart drugs began. As with the earlier, smaller trials, the goal was not to test the drugs but to convince the unbelievers and persuade them to prescribe more. Those involved in the trial already knew that the drugs worked. They even pushed for the study – a $40 million affair, spread over a hundred hospitals – to be designed in such a way as to look only for evidence of benefits. Anything else, they thought, was a waste of time and money. The drug companies seemed genuinely pleased. They were not cynically selling a product that they secretly knew to be poisonous. They simply shared the untested optimism of the majority of cardiologists. Given the fearsome restrictions in place since thalidomide, the trial was difficult and expensive and laborious – but a few enthusiasts got it running all the same.

  The trial almost failed. It required doctors to enter their patients into a study without knowing whether they would get a placebo or one of the active drugs. There were three drugs in the trial, flecainide, encainide and moricizine, all from the class that suppressed extra heartbeats. In the views of many working doctors, the trial was unethical: the drugs were clearly good. Many cardiologists refused to let their patients near it. A shortage of willing participants almost made the whole thing impossible. Of those suitable for the trial, two thirds were ruled out because their doctors advised them that the drugs definitely worked, and that to end up on a placebo might kill them.

  The trial was due to run for five years but, in April 1989, after only two, it was stopped early. All the drugs successfully stopped the extra heartbeats. They also stopped the heart. Two of the three drugs – encainide and flecainide – were shown to be killing people. The idea that the beats caused people to die turned out, on testing, to be wrong.

  Details of the trial results were revealed to those involved on a Monday morning, but not immediately made public. That same Friday one of the lead investigators was attacked at a meeting about the trial. ‘You are immoral!’ cried out one of the cardiologists in the audience. They were not angry about the trial results; they did not yet know about them. They were angry about the trial. The drugs so plainly worked, the cardiologist was arguing, that testing them against a placebo was murderous and unethical.

  Together, the two drugs that the trial showed to be harmful are thought to have actively killed around 50,000 people in America alone. A tiny number compared to those whose lives were ended by leeches, by bleeding and by the treatments that doctors practised through most of history, but the result of the same mode of thinking, the same mental habit of doctors believing their own intuitions.

  As the findings of the trial were publicised, many doctors ignored them. They continued to believe their own opinions, their anecdotal experience that the drugs helped people. All of them had given the drugs to some people who then did well. They objected that if they stopped the drugs, some of their patients would die. ‘Yes,’ replied the aghast FDA, ‘but fewer.’

  Other doctors simply switched their patients to different drugs in the same class, a choice that competing pharmaceutical companies were happy to encourage. These other drugs had no proven harms since they had undergone none of the rigorous tests that might have uncovered them. They too supposedly provided benefits by suppressing extra heartbeats. Many doctors continued to believe that they simply had to work. After all, it made sense. A second trial was undertaken of the agent that was not shown to kill in the first, moricizine. That second trial was also stopped early when an excess of people on moricizine died.

  The greatest failure was not that doctors were shown to be killing so many of their patients. It was that learning that they were doing so did so very little to shift their beliefs. Some of the other new drugs in the same class were subsequently also proven to kill. The fashion for using them slipped, but did not disappear. ‘How much evidence was enough to persuade doctors to abandon a theory’, asked Moore, ‘that had been accepted without proof in the first place?’ Some doctors just thought these drugs should work. On that basis they were willing to carry on using them.

  ‘Doctors’, notes Moore at the end of his book, ‘are still free to exercise their medical judgment and may prescribe [these drugs] for patients with premature beats.’

  23 The Risks of Opinion

  THERE ARE TOO many possibilities in the world to test them all. We pick the ones that seem most likely on the basis of our theories or our previous experience. Science, when it comes to generating testable hypotheses, is an art.

  Our prejudgement might be that a new molecular drug will treat a disease, or that a traditional herb will save a life. Those are both decent reasons for setting out to see if they will, particularly if similar molecules or herbs have turned out to be helpful in the past. Prejudgements are the best possible reasons for doing tests; they are the worst possible replacements for them. And tests need to be designed so that they can prove us wrong, no matter how strongly we believe that they won’t.

  There is a widespread prejudice in favour of traditional treatments. People find it difficult to believe that therapies used for hundreds or thousands of years should actually be useless. Another prejudice is contradictory – as well as liking to believe that age-old treatments must have something to them, we are also fond of favouring whatever seems most modern.

  Doctors are just as subject to these two prejudices as any other people. And when it comes to testing new therapies, it is the second of the two that really worries them. From the days when control groups began to become routine in medical tests, doctors have convulsed themselves with anxiety over the unfairness. Their presumption that the new treatment will be better than the old is strong. They worry that patients in a control group are being unethically treated, denied the best opportunity of a cure or comfort.

  If this is generally true, as it was in ISIS-2, then there is a real problem with clinical trials. They might be good for society, good for the majority of human beings, but they will be operating at the expense of the people within them, the people who get a placebo or the oldest of the possible treatment options.

  Since proper trials began, doctors and interested observers have fretted over the extent to which trials asked participants to make unreasonable sacrifices. If you could be confident that all the options being tried out were equally likely to succeed, then you could enter the trial with a glad heart. From a selfish perspective, if new treatments are likely to be better than old ones, then patients should avoid trials at all costs. They should instead try to get hold of whatever doctors think most likely to work. And doctors should encourage them to do this. You trust that a doctor will act in your best interests, not those of society at large.

  All of these anxieties have been particularly disturbing to doctors who look after children with cancer. Forty years ago, about three in ten children with the disease were cured. Today, that has risen to more than seven in ten. Over the same four decades, cure rates for adult cancer have barely shifted. That is despite President Nixon’s 1971 declaration that he was directing America to declare war on the disease.

  The effort that has been put into working out how best to treat childhood cancers is unmatched. Such cancers are rare, and their treatment gets concentrated in a small number of specialist centres. These are exactly the sort of academic institutions where clinical trials are most often carried out. There have been other advantages, too. Here is a 2003 judgement on the situation by Robert Wittes in an editorial in the New England Journal of Medicine:

  Finally, for reasons that are still obscure, many childhood cancers are very responsive to treatment, and cure has long been both a feasible objective for treatment and a powerful motivator of physicians’ behavior. As a consequence of this alignment of favorable tumor biology with a culture oriented toward cooperative clinical research, the majority of children with cancer in the United States receive definitive treatment for cancer while enrolled in clinical trials. The benefits have been monumental; the curability of most cancer in childhood stands as one of the great success stories of modern medicine.

  The editorial points out that adult cancers are common by comparison. Treatments for them are less successful, so doctors have not got into a virtuous circle of being so encouraged by the innovations of the year before that they plunge into fresh ones. As a result of all of this, the vast majority of adult cancer patients are not treated within clinical trials. Although their numbers provide plentiful opportunities for research, there has never been the same degree of interest. Robert Wittes, writing the New England Journal editorial, was clearly as angry about this failure for adults as he was delighted with the success for children:

  Of the many things that physicians do, participating in cooperative clinical trials is among the strangest. Relatively undervalued in the typical academic promotion-and-tenure process, often inadequately reimbursed by government funding agencies, faced with informed-consent regulations that vastly exceed in degree of disclosure what is required for routine care, and confronting progressively greater degrees of regulation with each passing year, the clinical trialist may be forgiven for occasionally wondering whether society really wants this kind of work to go forward.

  Leaving aside his complaints about the difficulties of performing clinical trials, what about their ethics? Are children within trials, who get allocated to the older treatments, sacrificing their lives for the benefits of medical progress? Have the wonderful advances in treatment for childhood cancer been bought at the expense of children who entered the trials and did not get the latest therapies?

  New treatments do not get tried on children (or adults) without a great deal of testing beforehand. Trial treatments are ones that researchers think should work. The theory supporting them is excellent. If they are drugs, then in laboratory studies, in test tubes and on cell cultures, they will have shown benefit. They will have been tried on animals, to test both their safety and their effectiveness. An initial small trial will have been done on humans, to check the drug’s immediate impact and toxicity. If the results are acceptable, a second trial – phase II – will be carried out, to see if the safety and effectiveness from animal studies appear to carry through into human children. Only if the drug still looks good at this stage will a phase III trial be carried out. This is usually a full-blown, randomised, controlled, double-blinded effort to find out exactly what the drug’s effects in people are. After all the testing that goes on before it starts, it is almost impossible to believe that those allocated to the latest treatment will not be better off than those who volunteer to enter the trial but get used as controls. Why then should any sick child submit to being part of such a trial, and risk being used as a control?

  The common perception of the situation was summed up by Henry Waxman, an American member of Congress talking to CNN in 1995. ‘I think that both with regard to AIDS and cancer and any other life-threatening disease,’ he said, ‘we ought to make available to people as quickly as possible drugs and other therapies that may extend their lives and not wait until we know with certainty that something is going to be effective.’

  Congressman Waxman’s comment expresses the age-old urge to do something in the face of illness. It is backed up by a belief in the effectiveness of modern medicine, and a confidence that doctors are now able to come up with powerful new therapies. People in the past may not have known what worked without performing proper tests, but these days our understanding of science is so much more advanced. Maybe today these painstaking tests are not needed so much, and the Congressman was right that withholding new treatments while they undergo rigorous examination is cruel. Some people, after all, will die before the trials are finished.

  In response to these concerns, a group of researchers led by Ambuj Kumar took a thorough look at the history of new childhood cancer treatments, publishing their results in December 2005. They collated a group of 126 different trials performed between 1955 and 1997. Every new treatment they looked at was approved only after strict review procedures. None was the result of individual enthusiasms; they came from the thoughtful opinions of large groups of scientists and doctors. All of the treatments were being tested in high-quality, phase III, randomised, controlled trials.

  What the researchers were worried about was whether modern doctors were so good at predicting improved treatments that the kids who got them were likely to be better off. They did not imagine that all the new treatments would turn out to work – there are always some surprises – their concern was that more than half of them might work. If they did, the ethical grounds for doing randomised trials were shaky. If more than half of the new therapies worked, then individual children would always be better off refusing to enter a trial and insisting on whatever the doctors thought was probably going to be best.

  Altogether the trials included almost 37,000 children. Some of the new treatments turned out to be breakthroughs. Others, of course, were disappointments, being no better than what came before them. There were even some that, despite all the promising signs that they were going to be helpful, actually caused harm. On average, taking all the trials and all the children, new treatments turned out to be as likely to harm as to help, as likely to be worse as better in comparison to what came before.

  That is, with the most advanced molecular underpinnings, the best laboratory scientists, with superb and highly motivated doctors and researchers, extensive trials in cancer models, then in animals, then on a small scale in actual children – with all of this, the greatest cancer experts in the world were unable to predict what worked and what did not without actually doing a trial.

  What got the authors of this research most excited was not the fact that this meant that asking children to enter trials posed no ethical problems. What they found inspirational was the way in which childhood cancer researchers had turned their uncertainties into therapeutic victories. ‘The success has not come from a series of continuous, steady improvements, as selective reporting of treatment accomplishments may lead us to believe. On the contrary, our data show that outcomes of new treatments are as likely to be inferior as they are to be superior to standard treatments.’ The ‘successful evolution of treatment resulted from empirical testing by investigators who acknowledged their uncertainty and chose to randomise between treatments, the relative effect of which they could not predict’. When some of the same authors repeated the exercise with a different medical speciality, radiation therapy, collecting together fifty-seven trials on almost 13,000 people, done between 1968 and 2002, the results were the same. Innovative treatments were just as likely to be worse than what came before as they were to be better.

  Congressman Waxman was wrong. The more serious the disease, the more important it becomes to actually test out what works – to ‘wait until we know with certainty that something is going to be effective’, the very thing the Congressman said there was no need to do. Other studies of doctors’ ability to predict the outcomes of trials, in surgery, in adult cancer and in anaesthetics, have all shown similar results.

  The researchers who surveyed the trials were confident about the implications of their work. As Ambuj Kumar and his colleagues wrote:

  Our findings should underpin the continuing need to resolve uncertainty through the randomised comparison of new and standard treatments. Over the past few decades the use of this principle of randomising when uncertain has served children with cancer well . . . The scientific community and the public should be made more aware of how this mechanism underlies advances in clinical medicine.


  AIDS activists in America were successful in pushing for ‘compassionate modifications’ to the trial process. As a result the American trials in the early 1990s for zidovudine – AZT, the first effective anti-AIDS drug – were altered. The drug interferes with an enzyme called reverse transcriptase, used by retroviruses like HIV to insert their genetic material into that of their hosts. At the time it was unclear if AZT helped people who were infected with HIV but were otherwise healthy, people whose immune systems had not yet been destroyed by the virus and did not yet have AIDS. Rather than taking a ‘hard’ end point – death, or progression to full-blown AIDS – the modified trials took a short cut. They looked at levels of CD4 cells, the essential component of the immune system that HIV gradually attacks. The intention was to get an answer as quickly as possible, so that as few as possible died while waiting for it. Over a short period, AZT increased the numbers of these CD4 cells. That was enough; Americans, influenced by the organised campaigns of AIDS activists, were convinced. The drug was widely adopted for all HIV patients.

  A large European trial was continued all the same. This one did look at hard end points, those of death or progression to AIDS. American activists attacked the trial as unethical. Their own demonstration of AZT’s effect on CD4 counts, they argued, established that everyone with HIV should be on the drug regardless of how far their disease had progressed. These were sincere, intelligent and educated people making an argument that it was emotionally very difficult to refute. They were arguing that AZT was life-saving and that those unwilling to see where the evidence pointed were killing HIV sufferers by withholding the drug. This was in the days before there was any other drug besides AZT to slow HIV’s progression.

  In Britain, Ireland and France, the trial of early AZT carried on despite objections. Enough people felt that there was often a difference between where evidence points and what it eventually shows; a difference between soft outcomes that give some sign of what a disease is doing, and the hard outcomes that show it for certain.

 
