Bad Science

by Ben Goldacre


  Now that their interest had been piqued, the Cochrane researchers also began to notice that there were odd discrepancies between the frequency of adverse events in different databases. Roche’s global safety database held 2,466 neuropsychiatric adverse events, of which 562 were classified as ‘serious’. But the FDA database for the same period held only 1,805 adverse events in total. The rules vary on what needs to be notified to whom, and where, but even allowing for that, this was odd.

  In any case, since Roche was denying them access to the information needed to conduct a proper review, the Cochrane team concluded that they would have to exclude all the unpublished Kaiser data from their analysis, because the details could not be verified in the normal way. People cannot make treatment and purchasing decisions on the basis of trials if the full methods and results aren’t clear: the devil is often in the detail, as we shall see in Chapter 4, on ‘bad trials’, so we cannot blindly trust that every study is a fair test of the treatment.

  This is particularly important with Tamiflu, because there are good reasons to think that these trials were not ideal, and that published accounts were incomplete, to say the least. On closer examination, for example, the patients participating were clearly unusual, to the extent that the results may not be very relevant to normal everyday flu patients. In the published accounts, patients in the trials are described as typical flu patients, suffering from normal flu symptoms like cough, fatigue, and so on. We don’t do blood tests on people with flu in routine practice, but when these tests are done – for surveillance purposes – then even during peak flu season only about one in three people with ‘flu’ will actually be infected with the influenza virus, and most of the year only one in eight will really have it. (The rest are sick from something else, maybe just a common cold virus.)

  Two thirds of the trial participants summarised in the Kaiser paper tested positive for flu. This is bizarrely high, and means that the benefits of the drug will be overstated, because it is being tested on perfect patients, the very ones most likely to get better from a drug that selectively attacks the flu virus. In normal practice, which is where the results of these trials will be applied, doctors will be giving the drug to real patients who are diagnosed with ‘flu-like illness’, which is all you can realistically do in a clinic. Among these real patients, many will not actually have the influenza virus. This means that in the real world, the benefits of Tamiflu on flu will be diluted, and many more people will be exposed to the drug who don’t actually have flu virus in their systems. This, in turn, means that the side effects are likely to creep up in significance, in comparison with any benefits. That is why we strive to ensure that all trials are conducted in normal, everyday, realistic patients: if they are not, their findings are not relevant to the real world.
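
  To see this dilution in numbers, here is a minimal illustrative sketch in Python. The one-day benefit in laboratory-confirmed flu is a made-up figure, assumed purely for the arithmetic and not taken from any real trial; only the proportions of patients who genuinely have influenza come from the passage above.

```python
# Illustrative sketch only: hypothetical numbers, not taken from the Tamiflu trials,
# showing how a drug's apparent benefit is diluted when many treated patients
# don't actually have the virus the drug targets.

def effective_benefit(benefit_if_flu, prop_with_flu, benefit_if_not_flu=0.0):
    """Average benefit per treated patient when only a fraction truly have influenza
    (the rest get no antiviral benefit, but are still exposed to side effects)."""
    return benefit_if_flu * prop_with_flu + benefit_if_not_flu * (1 - prop_with_flu)

# Hypothetical assumption: the drug shortens illness by 1.0 day in genuine flu cases.
BENEFIT_IN_CONFIRMED_FLU = 1.0  # days of symptoms saved (illustrative value only)

populations = {
    "trial participants (about two thirds flu-positive)": 2 / 3,
    "real patients at peak flu season (about one in three)": 1 / 3,
    "real patients, rest of the year (about one in eight)": 1 / 8,
}

for label, proportion in populations.items():
    average = effective_benefit(BENEFIT_IN_CONFIRMED_FLU, proportion)
    print(f"{label}: ~{average:.2f} days saved per patient treated")

# Every treated patient, flu or not, is exposed to the drug's side effects,
# so the benefit-to-harm balance looks worse in ordinary practice than in the trials.
```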

  So the Cochrane review was published without the Kaiser data in December 2009, alongside some explanatory material about why the Kaiser results had been excluded, and a small flurry of activity followed. Roche put the short excerpts it had sent over online, and committed to make full study reports available (it still hasn’t done so).

  What Roche posted was incomplete, but it began a journey for the Cochrane academics of learning a great deal more about the real information that is collected on a trial, and how that can differ from what is given to doctors and patients in the form of brief, published academic papers. At the core of every trial is the raw data: every single record of blood pressure of every patient, the doctors’ notes describing any unusual symptoms, investigators’ notes, and so on. A published academic paper is a short description of the study, usually following a set format: an introductory background; a description of the methods; a summary of the important results; and then finally a discussion, covering the strengths and weaknesses of the design, and the implications of the results for clinical practice.

  A clinical study report, or CSR, is the intermediate document that stands between these two, and can be very long, sometimes thousands of pages.[74] Anybody working in the pharmaceutical industry is very familiar with these documents, but doctors and academics have rarely heard of them. They contain much more detail on things like the precise plan for analysing the data statistically, detailed descriptions of adverse events, and so on.

  These documents are split into different sections, or ‘modules’. Roche has shared only ‘module 1’, for only seven of the ten study reports Cochrane has requested. These modules are missing vitally important information, including the analysis plan, the randomisation details, the study protocol (and the list of deviations from that), and so on. But even these incomplete modules were enough to raise concerns about the universal practice of trusting academic papers to give a complete story about what happened to the patients in a trial.

  For example, looking at the two papers out of ten in the Kaiser review which were published, one says: ‘There were no drug-related serious adverse events,’ and the other doesn’t mention adverse events. But in the ‘module 1’ documents on these same two studies, there are ten serious adverse events listed, of which three are classified as being possibly related to Tamiflu.[75]

  Another published paper describes itself as a trial comparing Tamiflu against placebo. A placebo is an inert tablet, containing no active ingredient, that is visually indistinguishable from the pill containing the real medicine. But the CSR for this trial shows that the real medicine was in a grey and yellow capsule, whereas the placebos were grey and ivory. The ‘placebo’ tablets also contained something called dehydrocholic acid, a chemical which encourages the gall bladder to empty.[76] Nobody has any clear idea why, and it’s not even mentioned in the academic paper; but it seems that this was not actually an inert dummy pill.

  Simply making a list of all the trials conducted on a subject is vitally important if we want to avoid seeing only a biased summary of the research; but in the case of Tamiflu even this proved to be almost impossible. For example, Roche Shanghai informed the Cochrane group of one large trial (ML16369), but Roche Basel seemed not to know of its existence. But by setting out all the trials side by side, the researchers were able to identify peculiar discrepancies: for example, the largest ‘phase 3’ trial – one of the large trials that are done to get a drug onto the market – was never published, and is rarely mentioned in regulatory documents.*

  There were other odd discrepancies. Why, for example, was one trial on Tamiflu published in 2010, ten years after it was completed?[78] Why did some trials report completely different authors, depending on where they were being discussed?[79] And so on.

  The chase continued. In December 2009 Roche had promised: ‘full study reports will also be made available on a password-protected site within the coming days to physicians and scientists undertaking legitimate analyses’. This never happened. Then an odd game began. In June 2010 Roche said: Oh, we’re sorry, we thought you had what you wanted. In July it announced that it was worried about patient confidentiality (you may remember this from the EMA saga). This was an odd move: for most of the important parts of these documents, privacy is no issue at all. The full trial protocol, and the analysis plan, are both completed before any single patient is ever touched. Roche has never explained why patient privacy prevents it from releasing the study reports. It simply continued to withhold them.

  Then in August 2010 it began to make some even more bizarre demands, betraying a disturbing belief that companies are perfectly entitled to control access to information that is needed by doctors and patients around the world to make safe decisions. Firstly, it insisted on seeing the Cochrane reviewers’ full analysis plan. Fine, they said, and posted the whole protocol online. Doing so is completely standard practice at Cochrane, as it should be for any transparent organisation, and allows people to suggest important changes before you begin. There were few surprises, since all Cochrane reports follow a pretty strict manual anyway. Roche continued to withhold its study reports (including, ironically, its own protocols, the very thing it demanded Cochrane should publish, and that Cochrane had published, happily).

  By now Roche had been refusing to publish the study reports for a year. Suddenly, the company began to raise odd personal concerns. It claimed that some Cochrane researchers had made untrue statements about the drug, and about the company, but refused to say who, or what, or where. ‘Certain members of Cochrane Group involved with the review of the neuraminidase inhibitors,’ it announced, ‘are unlikely to approach the review with the independence that is both necessary and justified.’ This is an astonishing state of affairs, in which a company feels it should be allowed to deny individual researchers access to data that should be available to all; but still Roche refused to hand over the study reports.

  Then it complained that the Cochrane reviewers had begun to copy journalists in on their emails when responding to Roche staff. I was one of the people copied in on these interactions, and I believe that this was exactly the correct thing to do. Roche’s excuses had become perverse, and the company had failed to keep its promise to share all study reports. It’s clear that the modest pressure exerted by researchers in academic journals alone was having little impact on Roche’s refusal to release the data, and this is an important matter of public health, both for the individual case of this Tamiflu data, and for the broader issue of companies and regulators harming patients by withholding information.

  Then things became even more perverse. In January 2011 Roche announced that the Cochrane researchers had already been given all the data they needed. This was simply untrue. In February it insisted that all the studies requested had been published (meaning academic papers, now shown to be misleading on Tamiflu). Then it declared that it would hand over nothing more, saying: ‘You have all the detail you need to undertake a review.’ But this still wasn’t true: it was still withholding the material it had publicly promised to hand over ‘within a few days’ in December 2009, a year and a half earlier.

  At the same time, the company was raising the broken arguments we have already seen: it’s the job of regulators to make these decisions about benefit and risk, it said, not academics. Now, this claim fails on two important fronts. Firstly, as with many other drugs, we now know that not even the regulators had seen all the data. In January 2012 Roche claimed that it ‘has made full clinical study data available to health authorities around the world for their review as part of the licensing process’. But the EMA never received this information for at least fifteen trials. This was because the EMA had never requested it.

  And that brings us on to our final important realisation: regulators are not infallible. They make outright mistakes, and they make decisions which are open to judgement, and should be subject to second-guessing and checking by many eyes around the world. In the next chapter we will see more examples of how regulators can fail, behind closed doors, but here we will look at one story that illustrates the benefit of ‘many eyes’ perfectly.

  Rosiglitazone is a new kind of diabetes drug, and lots of researchers and patients had high hopes that it would be safe and effective.[80] Diabetes is common, and more people develop the disease every year. Sufferers have poor control of their blood sugar, and diabetes drugs, alongside dietary changes, are supposed to fix this. Although it’s nice to see your blood sugar being well controlled in the numbers from lab tests and machines at home, we don’t control these figures for their own sake: we try to control blood sugar because we hope that this will help reduce the chances of real-world outcomes, like heart attack and death, both of which occur at a higher rate in people with diabetes.

  Rosiglitazone was first marketed in 1999, and from the outset it was a magnet for disappointing behaviour. In that first year, Dr John Buse from the University of North Carolina discussed an increased risk of heart problems at a pair of academic meetings. The drug’s manufacturer, GSK, made direct contact in an attempt to silence him, then moved on to his head of department. Buse felt pressured to sign various legal documents. To cut a long story short, after wading through documents for several months, in 2007 the US Senate Committee on Finance released a report describing the treatment of Dr Buse as ‘intimidation’.

  But we are more concerned with the safety and efficacy data. In 2003 the Uppsala Drug Monitoring Group of the World Health Organization contacted GSK about an unusually large number of spontaneous reports associating rosiglitazone with heart problems. GSK conducted two internal meta-analyses of its own data on this, in 2005 and 2006. These showed that the risk was real, but although both GSK and the FDA had these results, neither made any public statement about them, and they were not published until 2008.

  During this delay, vast numbers of patients were exposed to the drug, but doctors and patients only learned about this serious problem in 2007, when cardiologist Professor Steve Nissen and colleagues published a landmark meta-analysis. This showed a 43 per cent increase in the risk of heart problems in patients on rosiglitazone. Since people with diabetes are already at increased risk of heart problems, and the whole point of treating diabetes is to reduce this risk, that finding was big potatoes. His findings were confirmed in later work, and in 2010 the drug was either taken off the market or restricted, all around the world.

  Now, my argument is not that this drug should have been banned sooner, because as perverse as it sounds, doctors do often need inferior drugs for use as a last resort. For example, a patient may develop idiosyncratic side effects on the most effective pills, and be unable to take them any longer. Once this has happened, it may be worth trying a less effective drug, if it is at least better than nothing.

  The concern is that these discussions happened with the data locked behind closed doors, visible only to regulators. In fact, Nissen’s analysis could only be done at all because of a very unusual court judgement. In 2004, when GSK was caught out withholding data showing evidence of serious side effects from paroxetine in children, the UK conducted an unprecedented four-year-long investigation, as we saw earlier. But in the US, the same bad behaviour resulted in a court case over allegations of fraud, the settlement of which, alongside a significant payout, required GSK to commit to posting clinical trial results on a public website.

  Professor Nissen used the rosiglitazone data, when it became available, found worrying signs of harm, and published this to doctors, which is something that the regulators had never done, despite having the information years earlier. (Though before doctors got to read it, Nissen by chance caught GSK discussing a copy of his unpublished paper, which it had obtained improperly.[81])

  If this information had all been freely available from the start, regulators might have felt a little more anxious about their decisions, but crucially, doctors and patients could have disagreed with them, and made informed choices. This is why we need wider access to full CSRs, and all trial reports, for all medicines, and this is why it is perverse that Roche should be able even to contemplate deciding which favoured researchers should be allowed to read the documents on Tamiflu.

  Astonishingly, a piece published in April 2012 by regulators from the UK and Europe suggests that they might agree to more data sharing, to a limited extent, within limits, for some studies, with caveats, at the appropriate juncture, and in the fullness of time.[82] Before feeling any sense of enthusiasm, we should remember that this is a cautious utterance, wrung out after the dismal fights I have already described; that it has not been implemented; that it must be set against a background of broken promises from all players across the whole field of missing data; and that in any case, regulators do not have all the trial data anyway. But it is an interesting start.

  Their two main objections – if we accept their goodwill at face value – are interesting, because they lead us to the final problem in the way we tolerate harm to patients from missing trial data. Firstly, they raise the concern that some academics and journalists might use study reports to conduct histrionic or poorly conducted reviews of the data: to this, again, I say, ‘Let them,’ because these foolish analyses should be conducted, and then rubbished, in public.

  When UK hospital mortality statistics first became easily accessible to the public, doctors were terrified that they would be unfairly judged: the crude figures can be misinterpreted, after all, because one hospital may have worse figures simply because it is a centre of excellence, and takes in more challenging patients than its neighbours; and there is random variation to be expected in mortality rates anyway, so some hospitals might look unusually good, or bad, simply through the play of chance. Initially, to an extent, these fears were realised: there were a few shrill, unfair stories, and people overinterpreted the results. Now, for the most part, things have settled down, and many lay people are quite able to recognise that crude analyses of such figures are misleading. For drug data, where there is so much danger from withheld information, and so many academics desperate to conduct meaningful analyses, and so many other academics happy to criticise them, releasing the data is the only healthy option.
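
  The ‘play of chance’ point can be illustrated with a toy simulation, not drawn from any real hospital data: the 5 per cent mortality risk, the number of hospitals and the number of patients are all assumed figures, chosen only to show that identical hospitals still produce a spread of crude death rates.

```python
# Minimal simulation (illustrative only): hospitals with the *same* underlying
# mortality risk still show a spread of crude death rates, purely by chance.
import random

random.seed(0)

UNDERLYING_RISK = 0.05        # assumed identical 5% mortality risk at every hospital
N_HOSPITALS = 20              # hypothetical number of hospitals
PATIENTS_PER_HOSPITAL = 400   # smaller hospitals would show an even wider spread

rates = []
for _ in range(N_HOSPITALS):
    deaths = sum(random.random() < UNDERLYING_RISK for _ in range(PATIENTS_PER_HOSPITAL))
    rates.append(deaths / PATIENTS_PER_HOSPITAL)

print(f"best-looking hospital:  {min(rates):.1%} crude mortality")
print(f"worst-looking hospital: {max(rates):.1%} crude mortality")
# Every hospital here is equally 'good', yet a naive league table built on these
# crude figures would crown winners and shame losers on the basis of noise alone.
```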

  But secondly, the EMA raises the spectre of patient confidentiality, and hidden in this concern is one final prize.

  So far I have been talking about access to trial reports, summaries of patients’ outcomes in trials. There is no good reason to believe that this poses any threat to patient confidentiality, and where there are specific narratives that might make a patient identifiable – a lengthy medical description of one person’s idiosyncratic adverse event in a trial, perhaps – these can easily be removed, since they appear in a separate part of the document. These CSRs should undoubtedly, without question, be publicly available documents, and this should be enforced retrospectively, going back decades, to the dawn of trials.

  But all trials are ultimately run on individual patients, and the results of those individual patients are all stored and used for the summary analysis at the end of the study. While I would never suggest that these should be posted up on a public website – it would be easy for patients to be identifiable, from many small features of their histories – it is surprising that patient-level data is almost never shared with academics.

 
