2008 - Bad Science

by Ben Goldacre


  But the pharmaceutical industry is also currently in trouble. The golden age of medicine has creaked to a halt, as we have said, and the number of new drugs, or ‘new molecular entities’, being registered has dwindled from fifty a year in the 1990s to about twenty now. At the same time, the number of ‘me-too’ drugs has risen; they now make up about half of all new drugs.

  Me-too drugs are an inevitable function of the market: they are rough copies of drugs that already exist, made by another company, but are different enough for a manufacturer to be able to claim their own patent. They take huge effort to produce, and need to be tested (on human participants, with all the attendant risks) and trialled and refined and marketed just like a new drug. Sometimes they offer modest benefits (a more convenient dosing regime, for example), but for all the hard work they involve, they don’t generally represent a significant breakthrough in human health. They are merely a breakthrough in making money. Where do all these drugs come from?

  The journey of a drug

  First of all, you need an idea for a drug. This can come from any number of places: a molecule in a plant; a receptor in the body that you think you can build a molecule to interface with; an old drug that you’ve tinkered with; and so on. This part of the story is extremely interesting, and I recommend doing a degree in it. When you think you have a molecule that might be a runner, you test it in animals, to see if it works for whatever you think it should do (and to see if it kills them, of course).

  Then you do Phase I, or ‘first in man’, studies on a small number of brave, healthy young men who need money, firstly to see if it kills them, and also to measure basic things like how fast the drug is excreted from the body (this is the phase that went horribly wrong in the TGN1412 tests in 2006, where several young men were seriously injured). If this works, you move to a Phase II trial, in a couple of hundred people with the relevant illness, as a ‘proof of concept’, to work out the dose, and to get an idea if it is effective or not. A lot of drugs fail at this point, which is a shame, since this is no GCSE science project: bringing a drug to market costs around $500 million in total.

  Then you do a Phase III trial, in hundreds or thousands of patients, randomised, blinded, comparing your drug against placebo or a comparable treatment, and collect much more data on efficacy and safety. You might need to do a few of these, and then you can apply for a licence to sell your drug. After it goes to market, you should be doing more trials, and other people will probably do trials and other studies on your drug too; and hopefully everyone will keep their eyes open for any previously unnoticed side-effects, ideally reporting them using the Yellow Card system (patients can use this too; in fact, please do. It’s at http://yellowcard.mhra.gov.uk).

  Doctors make their rational decision on whether they want to prescribe a drug based on how good it has been shown to be in trials, how bad the side-effects are, and sometimes cost. Ideally they will get their information on efficacy from studies published in peer-reviewed academic journals, or from other material like textbooks and review articles which are themselves based on primary research like trials. At worst, they will rely on the lies of drug reps and word of mouth.

  But drug trials are expensive, so an astonishing 90 per cent of clinical drug trials, and 70 per cent of trials reported in major medical journals, are conducted or commissioned by the pharmaceutical industry. A key feature of science is that findings should be replicated, but if only one organisation is doing the funding, then this feature is lost.

  It is tempting to blame the drug companies—although it seems to me that nations and civic organisations are equally at fault here for not coughing up—but wherever you draw your own moral line, the upshot is that drug companies have a huge influence over what gets researched, how it is researched, how the results are reported, how they are analysed, and how they are interpreted.

  Sometimes whole areas can be orphaned because of a lack of money, and corporate interest. Homeopaths and vitamin pill quacks would tell you that their pills are good examples of this phenomenon. That is a moral affront to the better examples. There are conditions which affect a small number of people, like Creutzfeldt-Jakob disease and Wilson disease, but more chilling are the diseases which are neglected because they are only found in the developing world, like Chagas disease (which threatens a quarter of Latin America) and trypanosomiasis (300,000 cases a year, but in Africa). The Global Forum for Health Research estimates that only 10 per cent of the world’s health burden receives 90 per cent of total biomedical research funding.

  Often it is simply information that is missing, rather than some amazing new molecule. Eclampsia, say, is estimated to cause 50,000 deaths in pregnancy around the world each year, and the best treatment, by a huge margin, is cheap, unpatented magnesium sulphate (high doses intravenously, that is, not some alternative medicine supplement, but also not the expensive anti-convulsants that were used for many decades). Although magnesium had been used to treat eclampsia since 1906, its position as the best treatment was only established almost a century later, in 2002, with the help of the World Health Organisation, because there was no commercial interest in the research question: nobody has a patent on magnesium, and the majority of deaths from eclampsia are in the developing world. Millions of women have died of the condition since 1906, and many of those deaths were avoidable.

  To an extent these are political and development issues, which we should leave for another day; and I have a promise to pay out on: you want to be able to take the skills you’ve learnt about levels of evidence and distortions of research, and understand how the pharmaceutical industry distorts data, and pulls the wool over our eyes. How would we go about proving this? Overall, it’s true, drug company trials are much more likely to produce a positive outcome for their own drug. But to leave it there would be weak-minded.

  What I’m about to tell you is what I teach medical students and doctors—here and there—in a lecture I rather childishly call ‘drug company bullshit’. It is, in turn, what I was taught at medical school,* and I think the easiest way to understand the issue is to put yourself in the shoes of a big pharma researcher.

  * In this subject, like many medics of my generation, I am indebted to the classic textbook How to Read a Paper by Professor Greenhalgh at UCL. It should be a best-seller. Testing Treatments by Imogen Evans, Hazel Thornton and Iain Chalmers is also a work of great genius, appropriate for a lay audience, and amazingly also free to download online from www.jameslindlibrary.org. For committed readers I recommend Methodological Errors in Medical Research by Bjorn Andersen. It’s extremely long. The subtitle is ‘An Incomplete Catalogue’.

  You have a pill. It’s OK, maybe not that brilliant, but a lot of money is riding on it. You need a positive result, but your audience aren’t homeopaths, journalists or the public: they are doctors and academics, so they have been trained in spotting the obvious tricks, like ‘no blinding’, or ‘inadequate randomisation’. Your sleights of hand will have to be much more elegant, much more subtle, but every bit as powerful.

  What can you do?

  Well, firstly, you could study it in winners. Different people respond differently to drugs: old people on lots of medications are often no-hopers, whereas younger people with just one problem are more likely to show an improvement. So only study your drug in the latter group. This will make your research much less applicable to the actual people that doctors are prescribing for, but hopefully they won’t notice. This is so commonplace it is hardly worth giving an example.

  Next up, you could compare your drug against a useless control. Many people would argue, for example, that you should never compare your drug against placebo, because it proves nothing of clinical value: in the real world, nobody cares if your drug is better than a sugar pill; they only care if it is better than the best currently available treatment. But you’ve already spent hundreds of millions of dollars bringing your drug to market, so stuff that: do lots of placebo-controlled trials and make a big fuss about them, because they practically guarantee some positive data. Again, this is universal, because almost all drugs will be compared against placebo at some stage in their lives, and ‘drug reps’—the people employed by big pharma to bamboozle doctors (many simply refuse to see them)—love the unambiguous positivity of the graphs these studies can produce.

  Then things get more interesting. If you do have to compare your drug with one produced by a competitor—to save face, or because a regulator demands it—you could try a sneaky underhand trick: use an inadequate dose of the competing drug, so that patients on it don’t do very well; or give a very high dose of the competing drug, so that patients experience lots of side-effects; or give the competing drug in the wrong way (perhaps orally when it should be intravenous, and hope most readers don’t notice); or you could increase the dose of the competing drug much too quickly, so that the patients taking it get worse side-effects. Your drug will shine by comparison.

  You might think no such thing could ever happen. If you follow the references in the back, you will find studies where patients were given really rather high doses of old-fashioned antipsychotic medication (which made the new-generation drugs look as if they were better in terms of side-effects), and studies with doses of SSRI antidepressants which some might consider unusual, to name just a couple of examples. I know. It’s slightly incredible.

  Of course, another trick you could pull with side-effects is simply not to ask about them; or rather—since you have to be sneaky in this field—you could be careful about how you ask. Here is an example. SSRI antidepressant drugs cause sexual side-effects fairly commonly, including anorgasmia. We should be clear (and I’m trying to phrase this as neutrally as possible): I really enjoy the sensation of orgasm. It’s important to me, and everything I experience in the world tells me that this sensation is important to other people too. Wars have been fought, essentially, for the sensation of orgasm. There are evolutionary psychologists who would try to persuade you that the entirety of human culture and language is driven, in large part, by the pursuit of the sensation of orgasm. Losing it seems like an important side-effect to ask about.

  And yet, various studies have shown that the reported prevalence of anorgasmia in patients taking SSRI drugs varies between 2 per cent and 73 per cent, depending primarily on how you ask: a casual, open-ended question about side-effects, for example, or a careful and detailed enquiry. One 3,000-subject review on SSRIs simply did not list any sexual side-effects on its twenty-three-item side-effect table. Twenty-three other things were more important, according to the researchers, than losing the sensation of orgasm. I have read them. They are not.

  But back to the main outcomes. And here is a good trick: instead of a real-world outcome, like death or pain, you could always use a ‘surrogate outcome’, which is easier to attain. If your drug is supposed to reduce cholesterol and so prevent cardiac deaths, for example, don’t measure cardiac deaths, measure reduced cholesterol instead. That’s much easier to achieve than a reduction in cardiac deaths, and the trial will be cheaper and quicker to do, so your result will be cheaper and more positive. Result!

  Now you’ve done your trial, and despite your best efforts things have come out negative. What can you do? Well, if your trial has been good overall, but has thrown out a few negative results, you could try an old trick: don’t draw attention to the disappointing data by putting it on a graph. Mention it briefly in the text, and ignore it when drawing your conclusions. (I’m so good at this I scare myself. Comes from reading too many rubbish trials.)

  If your results are completely negative, don’t publish them at all, or publish them only after a long delay. This is exactly what the drug companies did with the data on SSRI antidepressants: they hid the data suggesting they might be dangerous, and they buried the data showing them to perform no better than placebo. If you’re really clever, and have money to burn, then after you get disappointing data, you could do some more trials with the same protocol, in the hope that they will be positive: then try to bundle all the data up together, so that your negative data is swallowed up by some mediocre positive results.

  Or you could get really serious, and start to manipulate the statistics. For two pages only, this book will now get quite nerdy.

  I understand if you want to skip it, but know that it is here for the doctors who bought the book to laugh at homeopaths. Here are the classic tricks to play in your statistical analysis to make sure your trial has a positive result.

  Ignore the protocol entirely

  Always assume that any correlation proves causation. Throw all your data into a spreadsheet program and report—as significant—any relationship between anything and everything if it helps your case. If you measure enough, some things are bound to be positive just by sheer luck.
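To see just how generous sheer luck is, here is a toy simulation (my own illustration, with entirely made-up data, not from any real trial): fifty imaginary patients, twenty measurements of pure random noise each, and every pairwise correlation tested at the usual 5 per cent threshold.

```python
import math
import random

random.seed(0)

# Fifty imaginary patients, twenty measurements each -- all pure noise,
# so there is nothing real to find. (Made-up data, for illustration only.)
n_patients, n_vars = 50, 20
data = [[random.gauss(0, 1) for _ in range(n_patients)]
        for _ in range(n_vars)]

def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Two-tailed 5% critical value of |r| for n = 50 is about 0.279.
R_CRIT = 0.279

# Trawl all 190 pairwise comparisons and keep the "significant" ones.
hits = [(i, j) for i in range(n_vars) for j in range(i + 1, n_vars)
        if abs(pearson_r(data[i], data[j])) > R_CRIT]

print(f"{len(hits)} 'significant' correlations out of 190 comparisons")
```

With 190 comparisons and nothing real to find, you should expect around ten ‘significant’ correlations by luck alone; whichever ones turn up, your spreadsheet will happily report them.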

  Play with the baseline

  Sometimes, when you start a trial, quite by chance the treatment group is already doing better than the placebo group. If so, then leave it like that. If, on the other hand, the placebo group is already doing better than the treatment group at the start, then adjust for the baseline in your analysis.

  Ignore dropouts

  People who drop out of trials are statistically much more likely to have done badly, and much more likely to have had side-effects. They will only make your drug look bad. So ignore them, make no attempt to chase them up, do not include them in your final analysis.

  Clean up the data

  Look at your graphs. There will be some anomalous ‘outliers’, or points which lie a long way from the others. If they are making your drug look bad, just delete them. But if they are helping your drug look good, even if they seem to be spurious results, leave them in.

  ‘The best of five…no…seven…no…nine!’

  If the difference between your drug and placebo becomes significant four and a half months into a six-month trial, stop the trial immediately and start writing up the results: things might get less impressive if you carry on. Alternatively, if at six months the results are ‘nearly significant’, extend the trial by another three months.
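The cost of all this peeking can be simulated too (again a toy sketch of my own: a drug with no effect at all, imaginary patients, and nominal significance testing at three interim looks, stopping at the first ‘win’).

```python
import random
import statistics

def peeking_trial(rng, looks=(50, 75, 100)):
    """One imaginary trial of a drug with NO real effect, analysed at
    three interim looks; stop and declare victory at the first look
    where the difference is nominally 'significant' (p < 0.05)."""
    n = looks[-1]
    treated = [rng.gauss(0, 1) for _ in range(n)]
    control = [rng.gauss(0, 1) for _ in range(n)]
    for k in looks:
        diff = statistics.mean(treated[:k]) - statistics.mean(control[:k])
        se = (statistics.variance(treated[:k]) / k
              + statistics.variance(control[:k]) / k) ** 0.5
        if abs(diff) > 1.96 * se:
            return True   # stop the trial early and write it up
    return False

rng = random.Random(0)
n_trials = 2000
frac = sum(peeking_trial(rng) for _ in range(n_trials)) / n_trials
print(f"false-positive rate with three peeks: {frac:.1%} (nominal 5%)")
```

Each individual look uses a perfectly respectable 5 per cent threshold; it is the repeated looking, with the option to stop whenever things happen to look good, that roughly doubles the false-positive rate.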

  Torture the data

  If your results are bad, ask the computer to go back and see if any particular subgroups behaved differently. You might find that your drug works very well in Chinese women aged fifty-two to sixty-one. ‘Torture the data and it will confess to anything,’ as they say at Guantanamo Bay.
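A quick simulation shows how readily the computer confesses (once more a toy model of my own, with made-up subgroups and a drug that does nothing): trawl every sex and age-band subgroup of a null trial, and count how often at least one of them comes up ‘significant’.

```python
import random
import statistics

SEXES = ["F", "M"]
AGE_BANDS = ["18-39", "40-51", "52-61", "62-80"]  # invented bands

def finds_spurious_subgroup(rng, n=400):
    """Simulate one trial of a drug with NO effect at all, then trawl
    every sex x age-band subgroup for a 'significant' difference."""
    def arm():
        return [(rng.choice(SEXES), rng.choice(AGE_BANDS), rng.gauss(0, 1))
                for _ in range(n)]
    treated, control = arm(), arm()
    for sex in SEXES:
        for band in AGE_BANDS:
            t = [o for s, b, o in treated if s == sex and b == band]
            c = [o for s, b, o in control if s == sex and b == band]
            if len(t) < 2 or len(c) < 2:
                continue
            diff = statistics.mean(t) - statistics.mean(c)
            se = (statistics.variance(t) / len(t)
                  + statistics.variance(c) / len(c)) ** 0.5
            if abs(diff) > 2 * se:   # roughly p < 0.05, two-tailed
                return True          # "the drug works in this subgroup!"
    return False

rng = random.Random(0)
n_trials = 200
frac = sum(finds_spurious_subgroup(rng) for _ in range(n_trials)) / n_trials
print(f"{frac:.0%} of null trials 'confess' to a subgroup effect")
```

With only eight subgroups to rummage through, something like a third of trials of a completely useless drug will hand you a subgroup in which it ‘works’.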

  Try every button on the computer

  If you’re really desperate, and analysing your data the way you planned does not give you the result you wanted, just run the figures through a wide selection of other statistical tests, even if they are entirely inappropriate, at random.

  And when you’re finished, the most important thing, of course, is to publish wisely. If you have a good trial, publish it in the biggest journal you can possibly manage. If you have a positive trial, but it was a completely unfair test, which will be obvious to everyone, then put it in an obscure journal (published, written and edited entirely by the industry): remember, the tricks we have just described hide nothing, and will be obvious to anyone who reads your paper, but only if they read it very attentively, so it’s in your interest to make sure it isn’t read beyond the abstract. Finally, if your finding is really embarrassing, hide it away somewhere and cite ‘data on file’. Nobody will know the methods, and it will only be noticed if someone comes pestering you for the data to do a systematic review. Hopefully, that won’t be for ages.

  How can this be possible?

  When I explain this abuse of research to friends from outside medicine and academia, they are rightly amazed. ‘How can this be possible?’ they say. Well, firstly, much bad research comes down to incompetence. Many of the methodological errors described above can come about by wishful thinking, as much as mendacity. But is it possible to prove foul play?

  On an individual level, it is sometimes quite hard to show that a trial has been deliberately rigged to give the right answer for its sponsors. Overall, however, the picture emerges very clearly. The issue has been studied so frequently that in 2003 a systematic review found thirty separate studies looking at whether funding in various groups of trials affected the findings. Overall, studies funded by a pharmaceutical company were found to be four times more likely to give results that were favourable to the company than independent studies.

  One review of bias tells a particularly Alice in Wonderland story. Fifty-six different trials comparing painkillers like ibuprofen, diclofenac and so on were found. People often invent new versions of these drugs in the hope that they might have fewer side-effects, or be stronger (or stay in patent and make money). In every single trial the sponsoring manufacturer’s drug came out as better than, or equal to, the others in the trial. On not one occasion did the manufacturer’s drug come out worse. Philosophers and mathematicians talk about ‘transitivity’: if A is better than B, and B is better than C, then C cannot be better than A. To put it bluntly, this review of fifty-six trials exposed a singular absurdity: all of these drugs were better than each other.

  But there is a surprise waiting around the corner. Astonishingly, when the methodological flaws in studies are examined, it seems that industry-funded trials actually turn out to have better research methods, on average, than independent trials.

  The most that could be pinned on the drug companies were some fairly trivial howlers: things like using inadequate doses of the competitor’s drug (as we said above), or making claims in the conclusions section of the paper that exaggerated a positive finding. But these, at least, were transparent flaws: you only had to read the trial to see that the researchers had given a miserly dose of a painkiller; and you should always read the methods and results section of a trial to decide what its findings are, because the discussion and conclusion pages at the end are like the comment pages in a newspaper. They’re not where you get your news from.

 
