Bad Science


by Ben Goldacre


  Putting this in context: your drug might make one in every 5,000 people literally explode – their head blows off, their intestines fly out – through some idiosyncratic mechanism that nobody could have foreseen. But at the point when the drug is approved, after only 1,000 people have taken it, it’s very likely that you’ll never have witnessed one of these spectacular and unfortunate deaths. After 50,000 people have taken your drug, though, out there in the real world, you’d expect to have seen about ten people explode overall (since, on average, it makes one in every 5,000 people explode).
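
  To make the arithmetic concrete, here is a minimal sketch in Python. The one-in-5,000 risk, the 1,000-person trial and the 50,000 prescriptions are the figures from the paragraph above; the assumption that events strike each patient independently is mine:

```python
# Chance of never witnessing a 1-in-5,000 adverse event during a small
# pre-approval trial, assuming events occur independently in each patient.
risk = 1 / 5000

trial_patients = 1000
p_no_events = (1 - risk) ** trial_patients
print(f"P(zero events in a {trial_patients}-person trial): {p_no_events:.0%}")
# ~82%: most of the time, the trial sees nothing at all

market_patients = 50_000
print(f"Expected events after {market_patients} prescriptions: "
      f"{market_patients * risk:.0f}")
# ~10 exploded patients, as in the text
```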

  Now, if your drug is causing a very rare adverse event, like exploding, you’re actually quite lucky, because weird adverse events really stand out, as there’s nothing like them happening already. People will talk about patients who explode, they’ll write them up in short reports for academic journals, probably notify various authorities, coroners might be involved, alarm bells will generally ring, and people will look around for what is suddenly causing patients to explode very early on, probably quite soon after the first one goes off.

  But many of the adverse events caused by drugs are things that happen a lot anyway. If your drug increases the chances of someone getting heart failure, well, there are a lot of people around with heart failure already, so if doctors see one more case of heart failure in their clinic, they’re probably not going to notice, especially if the drug is given to older people, who experience a lot of heart failure anyway. Even detecting a signal of increased heart failure across a large group of patients might be tricky.
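
  A rough back-of-the-envelope calculation shows the problem. All the numbers below are invented for illustration: a plausible background rate of heart failure in an older population, and a modest extra risk added by the drug:

```python
import math

# Illustrative assumptions, not figures from the text:
patients = 10_000        # people taking the drug in routine practice
background_rate = 0.02   # yearly heart-failure risk in this older group
drug_excess = 0.002      # extra yearly risk added by the drug (1 in 500)

background_cases = patients * background_rate   # ~200 cases regardless
excess_cases = patients * drug_excess           # ~20 extra cases
noise = math.sqrt(background_cases)             # ~14: ordinary chance variation

print(f"background: {background_cases:.0f} cases, +/- {noise:.0f} by chance")
print(f"extra cases caused by the drug: {excess_cases:.0f}")
# 20 extra cases hiding inside 200 +/- 14 is barely distinguishable from
# noise in aggregate data, and completely invisible to any individual doctor.
```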

  This helps us to understand the various mechanisms used by drug companies, regulators and academics to monitor side effects. They fall into roughly three groups:

  Spontaneous reports of side effects, from patients and doctors, to the regulator

  ‘Epidemiology’ studies looking at the health records of large groups of patients

  Reports of data from drug companies

  Spontaneous reports are the simplest system. In most territories around the world, when a doctor suspects that a patient has developed some kind of adverse reaction to a drug, they can notify the relevant local authority. In the UK this is via something called the ‘Yellow Card System’: these freepost cards are given out to all doctors, making the system easy to use, and patients can also report suspected adverse events themselves, online at yellowcard.mhra.gov.uk (please do).

  These spontaneous reports are then categorised by hand, and collated into what is effectively a giant spreadsheet, with one row for every drug on the market, and one column for every imaginable type of side effect. Then you look at how often each type of side effect is reported for each drug, and try to decide whether the figure is higher than you’d expect to see simply from chance. (If you’re statistically minded, the names of the tools used, such as ‘proportional reporting ratios’ and ‘Bayesian confidence propagation neural networks’, will give you a clue as to how this is done. If you’re not statistically minded, then you’re not missing out; at least, no more here than elsewhere in your life.)
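
  For the statistically minded, a proportional reporting ratio is simple enough to sketch. The 2×2 counts below are invented for illustration; real systems layer thresholds and corrections on top:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 table of spontaneous reports.
    a: reports of this event for the drug of interest
    b: reports of all other events for the drug of interest
    c: reports of this event for every other drug
    d: reports of all other events for every other drug
    """
    return (a / (a + b)) / (c / (c + d))

# Invented counts: 30 of 1,000 reports for our drug mention the event,
# versus 2,000 of 400,000 reports across all other drugs.
print(f"PRR = {prr(a=30, b=970, c=2000, d=398_000):.1f}")  # 6.0
# The event is reported six times as often for our drug as for the rest of
# the market; a common rule of thumb flags PRR > 2 (alongside minimum case
# counts and a chi-squared check) as a signal worth investigating.
```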

  This system is good for detecting unusual side effects: a drug that made your head and abdomen literally explode, for example, would be spotted fairly easily, as discussed. Similar systems are in place internationally; most of the results from around the world are pooled together by the WHO in Uppsala, and academics or companies can apply for access, with varying success (as discussed in this long endnote37).

  But this approach suffers from an important problem: not all adverse events are reported. The usual estimate is that in Britain, only around one in twenty gets fed back to the MHRA.38 This is not because all doctors are slack. It would actually be perfect if that were the cause, because then at least we would know that all side effects on all drugs had an equal chance of going unreported, and we could still usefully compare the proportions of side-effect reports with each other, and between different drugs.
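
  Here is that point in miniature: if every report had the same fixed chance of reaching the regulator, that chance would cancel out of any comparison of proportions. A hedged sketch, with invented counts:

```python
def prr(a, b, c, d):
    # proportional reporting ratio, as sketched earlier
    return (a / (a + b)) / (c / (c + d))

true_counts = dict(a=300, b=9_700, c=20_000, d=3_980_000)

for rate in (1.0, 0.05):  # complete reporting vs one report in twenty
    observed = {k: v * rate for k, v in true_counts.items()}
    print(f"reporting rate {rate:.0%}: PRR = {prr(**observed):.2f}")
# Both lines print PRR = 6.00: a uniform reporting rate cancels out.
# The real problem, described next, is that the rate is not uniform.
```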

  Unfortunately, different side effects from different drugs are reported at very different rates. A doctor might be more likely to be suspicious of a symptom being a side effect if the patient is on a drug that is new on the market, for example, so those cases may be reported more than side effects for older drugs. Similarly, if a patient develops a side effect that is already well known to be associated with a drug, a doctor will be much less likely to bother reporting it, because it’s not an interesting new safety signal, it’s just a boring instance of a well-known phenomenon. And if there are rumours or news stories about problems with a drug, doctors may be more inclined to spontaneously report adverse events, not out of mischief, but simply because they’re more likely to remember prescribing the controversial drug when a patient comes back with an odd medical problem.

  Also, a doctor’s suspicions that something is a side effect at all will be much lower if it is a medical problem that happens a lot anyway, as we’ve already seen: people often get headaches, for example, or aching joints, or cancer, in the everyday run of life, so it may not even occur to a doctor that these problems are anything to do with a prescription they’ve given. In any case, these adverse events will be hard to notice against the high background rate of people who suffer from them, and this will all be especially true if they occur a long time after the patient starts on a new drug.

  Accounting for these problems is extremely difficult. So spontaneous reporting can be useful if the adverse events are extremely rare without the drug, or are brought on rapidly, or are the kind of thing that is typically found as an adverse drug reaction (a rash, say, or an unusual drop in the number of white blood cells). But overall, although these systems are important, and contribute to a lot of alarms being usefully raised, generally they’re only used to identify suspicions.39 These are then tested in more robust forms of data.

  Better data can come from looking at the medical records of very large numbers of people, in what are known as ‘epidemiological’ studies. In the US this is tough, and the closest you can really get are the administrative databases used to process payments for medical services, which miss most of the detail. In the UK, however, we’re currently in a very lucky and unusual position. This is because our health care is provided by the state, not just free at the point of access, but also delivered through a single administrative entity, the NHS. As a result of this happy accident, we have large numbers of health records that can be used to monitor the benefits and risks of treatments. Although we have failed to realise this potential across the board, there is one corner called the General Practice Research Database, where several million people’s GP records are available. These records are closely guarded, to protect anonymity, but researchers in pharmaceutical companies, regulators and universities have been able to apply for access to specific parts of anonymised records for many years now, to see whether specific medicines are associated with unexpected harms. (Here I should declare an interest, because like many other academics I am doing some work on analysing this GPRD data myself, though not to look at side effects.)

  Studying drug safety in the full medical record of patients who receive a prescription in normal clinical practice has huge advantages over spontaneous report data, for a number of reasons. Firstly, you have all of a patient’s medical notes, in coded form, as they appear on the clinic’s computer, without any doctor having to make a decision about whether to bother flagging up a particular outcome.

  You also have an advantage over those small approval trials, because you have a lot of data, allowing you to look at rare outcomes. And more than that, these are real patients. The people who participate in trials are generally unusual ‘ideal patients’: they’re healthier than real patients, with fewer other medical problems, they’re on fewer other medications, they’re less likely to be elderly, very unlikely to be pregnant, and so on. Drug companies like to trial their drugs in these ideal patients, as healthier patients are more likely to get better and to make the drug look good. They’re also more likely to give that positive result in a briefer, cheaper trial. In fact, this is another way in which database studies can have an advantage: approval trials are generally brief, so they expose patients to drugs for a shorter period of time than the normal duration of a prescription. But database studies give us information on what drugs do in real-world patients, under real-world conditions (and as we shall see, this isn’t just restricted to the issue of side effects).

  With this data, you can look for an association between a particular drug and an increased risk of an outcome that is already common, like heart attacks. So you might compare heart-attack risk between patients who have received three different types of foot-fungus medication, for example, if you were worried that one of them might damage the heart. This is not an entirely straightforward business, of course, partly because you have to make important decisions about what you compare with what, and this can affect your outcomes. For example, should you compare people getting your worrying drug against other people getting a similar drug, or against people matched for age but not getting any drug? If you do the latter, are foot-fungus patients definitely comparable with age-matched healthy patients on your database? Or are patients with foot fungus, perhaps, more likely to be diabetic?
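
  As a sketch of such a comparison (the drug names and counts below are entirely invented):

```python
# Crude heart-attack risks in three hypothetical foot-fungus drug cohorts.
cohorts = {
    "fungicillin": dict(patients=40_000, heart_attacks=240),
    "toenailox":   dict(patients=35_000, heart_attacks=175),
    "mycozap":     dict(patients=30_000, heart_attacks=360),
}

base = cohorts["fungicillin"]
base_risk = base["heart_attacks"] / base["patients"]

for name, c in cohorts.items():
    risk = c["heart_attacks"] / c["patients"]
    print(f"{name}: risk {risk:.2%}, ratio vs fungicillin {risk / base_risk:.1f}")
# mycozap comes out at twice the risk of fungicillin, but before raising
# the alarm you must check whether its patients were older, more diabetic,
# or otherwise sicker to begin with: the comparator problem described above.
```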

  You can also get caught out by a phenomenon called ‘channelling’: this is where patients who have reported problems on previous drugs are preferentially given a drug with a solid reputation for being safe. As a result, the patients on the safe drug include many of the patients who are sicker to start with, and so are more likely to report adverse events, for reasons that have nothing to do with the drug. That can end up making the safe drug look worse than it really is; and by extension, it can make a riskier drug look better in comparison.
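
  Channelling is easy to demonstrate with a toy simulation. Everything below (the share of sicker patients, the channelling probabilities, the event risks) is an invented assumption, and by construction neither drug causes any harm at all:

```python
import random

random.seed(0)
counts = {"drug_with_safe_reputation": 0, "other_drug": 0}
events = {"drug_with_safe_reputation": 0, "other_drug": 0}

for _ in range(100_000):
    sick = random.random() < 0.30   # 30% of patients are sicker to start with
    # channelling: sicker patients are steered onto the 'safe' drug
    p_safe = 0.80 if sick else 0.30
    drug = "drug_with_safe_reputation" if random.random() < p_safe else "other_drug"
    # neither drug is harmful; adverse events depend only on sickness
    counts[drug] += 1
    events[drug] += random.random() < (0.10 if sick else 0.02)

for drug in counts:
    print(f"{drug}: crude event rate {events[drug] / counts[drug]:.1%}")
# Prints roughly 6% for the 'safe' drug versus 3% for the other: the drug
# with the safer reputation looks twice as risky, purely through channelling.
```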

  But in any case, short of conducting massive drug trials in routine care – not an insane idea, as we will see later – these kinds of studies are the best shot we have for making sure that drugs aren’t associated with terrible harms. So they are conducted by regulators, by academics, and often by the manufacturer at the request of the regulator.

  In fact, drug companies are under a number of obligations to monitor side effects, both general and specific, and report them to the relevant authority, but in reality these systems often don’t work very well. In 2010, for example, the FDA wrote a twelve-page letter to Pfizer complaining that it had failed to properly report adverse events arising after its drugs came to market.40 The FDA had conducted a six-week investigation, and found evidence of several serious and unexpected adverse events that had not been reported: reports of serious visual problems, and even blindness, in patients taking Viagra, for example. The FDA said Pfizer had failed to report these events in a timely fashion, by ‘misclassifying and/or downgrading reports to non-serious, without reasonable justification’. You will remember the paroxetine story from earlier, where GSK failed to report important data on suicide. These are not isolated incidents.

  Lastly, you can also get some data on side effects from trials, even though the adverse events we’re trying to spot are rare, and therefore much less likely to appear in small studies. Here again, though, there have been problems. For example, companies have sometimes rounded up all kinds of different problems into one group, under a label that blurs what was actually happening to the patients. In antidepressant trials, adverse events like suicidal thoughts, suicidal behaviours and suicide attempts have been coded as ‘emotional lability’, ‘admissions to hospital’, ‘treatment failures’ or ‘drop-outs’.41 None of these captures the reality of what was going on for the patient.

  To try to manage these problems, for the past few years companies have been required by the EMA to produce something called a Risk Management Plan (RMP) for each drug, and here our problems begin again. These documents are written by the company, and explain the safety studies it has agreed with the regulator; but for absolutely no sane reason that I can imagine, the contents are kept secret, so nobody knows exactly what studies the companies have agreed to conduct, what safety issues they are prioritising, or how they are researching them.

  A brief summary is available to doctors, academics and the public, and just recently academics have begun to publish papers assessing their contents, with damning findings.42 After explaining that changes in risk identified from the RMP were communicated unpredictably and inadequately to doctors, one concludes: ‘The main limitation of this study is the lack of publicly available data regarding the most significant aspects.’ The researchers were simply deprived of information about the studies that were conducted to monitor drug safety. A similar study, given slightly better access, looked at the safety studies that were discussed in RMPs.43 For about half of these studies, the RMP gave only a short description, or a commitment to conduct some kind of study, but no further information. In the full RMP document, where you would expect to have found full study protocols, the researchers found not one, for any of the eighteen drugs they looked at.

  If these Risk Management Plans are drawn up in secret, and their contents are poorly communicated, but at the same time they are the tool used to get drugs to market with a lower threshold of evidence, then we have a serious and interesting new problem: it’s possible that they are being used as a device to reassure the public, rather than to address a serious issue.44

  When it comes to the secrecy of regulators, it is clear that there is an important cultural issue that needs to be resolved. I’ve spent some time trying to understand the perspective of public servants who are clearly good people, but still seem to think that hiding documents from the public is desirable. The best I can manage is this: regulators believe that decisions about drugs are best made by them, behind closed doors; and that as long as they make good decisions, it is OK for these to then be communicated only in summary form to the outside world.

  This view, I think, is prevalent; but it is also misguided, in two ways. We have already seen many illustrations of how hidden data can be a cloak for mischief, and how many eyes are often valuable for spotting problems. But the regulators’ apparent belief that we should have blind faith in their judgements also misses a crucial point.

  A regulator and a doctor are trying to make two completely different decisions about a drug, even though they are using (or, in the doctors’ case, would like to use) the same information. A regulator is deciding whether it’s in the interests of society overall that a particular drug should ever be available for use in its country, even if only in some very obscure circumstance, such as when all other drugs have failed. Doctors, meanwhile, are deciding whether to use this drug right now, for the patient in front of them. Both are using the safety and efficacy data to which they have access, but both need access to that data in full, in order to make their very different decisions.

  This crucial distinction is not widely understood by patients, who often imagine that an approved drug is a safe and effective one. In a 2011 US survey of 3,000 people, for example, 39 per cent believed that the FDA only approves ‘extremely effective’ drugs, and 25 per cent that it only approves drugs without serious side effects.45 But neither is true: regulators frequently approve drugs that are only vaguely effective, with serious side effects, on the off-chance that they might be useful to someone, somewhere, when other interventions aren’t an option. Such drugs are used by doctors and patients as second-best options, and both need all the facts to make safe and informed decisions.

  Some would argue that cracks are appearing in this secrecy, with new pharmacovigilance legislation coming into force in Europe in 2012 that is supposed to improve transparency.46 But at best, this legislation is a very mixed bag. It does not give access to Risk Management Plans, but it does state that the EMA should publish the agendas, recommendations, opinions and minutes of various scientific committees, which are currently completely secret. We can only judge this small promised change on how it is implemented, if ever; and as we have seen, previous performance from the EMA does not inspire confidence. Even if we set aside the EMA’s astonishing and perverse behaviour over the CSRs for orlistat and rimonabant, which you will remember from Chapter 1, we should also recall that it has been mandated to provide an open clinical trials register for many years, but has simply failed to do so, keeping much of that trials data secret to this day.

  In any case, this legislation has several serious flaws.47 The EMA is being set up as the host of a single database for drug safety data, for example, yet this information will still be kept secret from health professionals, scientists and the public. But the most interesting shortcoming of this new legislation is an organisational one.

  Many had called for a new ‘drug safety agency’ to be set up, monitoring risks after a drug came to market, as a stand-alone organisation, with its own powers and staff, completely separate from the organisation in charge of approving a drug when it first comes to market.48 This may sound like a dull organisational irrelevancy, but in fact it speaks to one of the most disappointing problems that has been identified in the operations of regulators around the world: regulators that have approved a drug are often reluctant to take it off the market, in case that is seen as an admission of their failure to spot problems in the first place.

  That is not idle pontification on my part. In 2004 the epidemiologist from the US Office of Drug Safety who led the review on Vioxx told the Senate Finance Committee: ‘My experience with Vioxx is typical of how CDER [the FDA’s Center for Drug Evaluation and Research] responds to serious drug safety issues in general…the new drug reviewing division that approved the drug in the first place, and that regards it as its own child, typically proves to be the single greatest obstacle to effectively dealing with serious drug safety issues.’ Chillingly, in 1963, half a century ago, an FDA medical officer called John Nestor told Congress almost exactly the same thing: previous approval decisions were ‘sacrosanct’, he said. ‘We were not to question decisions made in the past.’

 
