Bad Science


by Ben Goldacre


  These days, in most universities, we send a long and threatening document to every undergraduate student, explaining how every paragraph of every essay and dissertation they submit will be put through a piece of software called Turnitin, expensively developed to detect plagiarism. This software is ubiquitous, and every year its body of knowledge grows larger, as it adds every student project, every Wikipedia page, every academic article, and everything else it can find online, in order to catch people cheating. Every year, in every university, students are caught receiving undeclared outside help; every year, students are disciplined, with points docked and courses marked as ‘failed’. Sometimes they are thrown off their degree course completely, leaving a black mark of intellectual dishonesty on their CV forever.

  And yet, to the best of my knowledge, no academic anywhere in the world has ever been punished for putting their name on a ghostwritten academic paper. This is despite everything we know about the enormous prevalence of this unethical activity, and despite endless specific scandals around the world involving named professors and lecturers, with immaculate legal documentation, and despite the fact that it amounts, in many cases, to something that is certainly comparable to the crime of simple plagiarism by a student.

  Not one has ever been disciplined. Instead, they have senior teaching positions.

  So, what do the regulations say about ghostwriting? For the most part, very little. A survey in 2010 of the top fifty medical schools in the United States found that only thirteen had any policy prohibiting their academics from putting their names to ghostwritten articles.90 The International Committee of Medical Journal Editors, meanwhile, has issued guidelines on authorship, describing who should appear as a named author on a paper, in the hope that ghostwriters will have to be fully declared as a result. These are widely celebrated, and everyone now speaks of ghostwriting as if it has been fixed by the ICMJE. But in reality, as we have seen so many times before, this is a fake fix: the guidelines are hopelessly vague, and are exploited in ways that are so obvious and predictable that it takes only a paragraph to describe.

  The ICMJE criteria allow someone to be listed as an author only if they fulfil all three of the following: they contributed to the conception and design of the study (or data acquisition, or analysis and interpretation); they contributed to drafting or revising the manuscript; and they had final approval on the contents of the paper. This sounds great, but because you have to fulfil all three criteria to be listed as an author, it is very easy for a drug company’s commercial medical writer to do almost all the work, but still avoid being listed as an author. For example, a paper could legitimately have the name of an independent academic on it, even if they only contributed 10 per cent of the design, 10 per cent of the analysis, a brief revision of the draft, and agreed the final contents. Meanwhile, a team of commercial medical writers employed by a drug company on the same paper would not appear in the author list, anywhere at all, even though they conceived the study in its entirety, did 90 per cent of the design, 90 per cent of the analysis, 90 per cent of the data acquisition, and wrote the entire draft: because they are simply never given final approval of the manuscript, they fail the third criterion, and so need never be named.91
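  To see the loophole mechanically, here is a minimal sketch – my own illustration, not anything from the ICMJE or from this book, with the names, shares and threshold all invented for the example – of how an ‘all three criteria’ rule excludes the person who did most of the work:

```python
# Illustrative only: a toy model of the ICMJE 'all three criteria' rule.
# The contributors and numbers below are invented for this example.
from dataclasses import dataclass

@dataclass
class Contributor:
    name: str
    design_share: float        # share of conception/design/acquisition/analysis
    drafted_or_revised: bool   # helped draft or revise the manuscript
    final_approval: bool       # signed off on the final version

def qualifies_as_author(c: Contributor) -> bool:
    # Authorship requires ALL three criteria; failing any one of them
    # means the person need not appear in the author list at all.
    return c.design_share > 0 and c.drafted_or_revised and c.final_approval

academic = Contributor('independent academic', 0.10, True, True)
ghostwriter = Contributor('commercial medical writer', 0.90, True, False)

for person in (academic, ghostwriter):
    print(person.name, '->', 'author' if qualifies_as_author(person) else 'unlisted')
# independent academic -> author
# commercial medical writer -> unlisted (never given final approval)
```

  The conjunction is the whole trick: withholding final sign-off from the writers is enough to keep them off the paper, however much of it they produced.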

  In fact, often the industry authors’ names do not appear at all, and there is just an acknowledgement of editorial assistance to a company. And often, of course, even this doesn’t happen. A junior academic making the same contribution as many commercial medical writers – structuring the write-up, reviewing the literature, making the first draft, deciding how best to present the data, writing the words – would get their name on the paper, sometimes as first author. What we are seeing here is an obvious double standard. Someone reading an academic paper expects the authors to be the people who conducted the research and wrote the paper: that is the cultural norm, and that is why medical writers and drug companies will move heaven and earth to keep their employees’ names off the author list. It’s not an accident, and there is no room for special pleading. They don’t want commercial writers in the author list, because they know it looks bad.

  Is there a solution? Yes: it’s a system called ‘film credits’, where everyone’s contribution is simply described at the end of the paper: ‘X designed the study, Y wrote the first draft, Z did the statistical analysis,’ and so on. Apart from anything else, these kinds of credits can help to ameliorate the dismal political disputes within teams about the order in which everyone’s name should appear. Film credits are uncommon. They should be universal.

  If I sound impatient about any of this, it’s because I am. I like to speak with people who disagree with me, to try to change their behaviour, and to understand their position better: so I talk to rooms full of science journalists about problems in science journalism, rooms full of homeopaths about how homeopathy doesn’t work, and rooms full of people from big pharma about the bad things they do. I have spoken to the members of the International Society for Medical Publication Professionals three times now. Each time, as I’ve set out my concerns, they’ve become angry (I’m used to this, which is why I’m meticulously polite, unless it’s funnier not to be). Publicly, they insist that everything has changed, and ghostwriting is a thing of the past. They repeat that their professional code has changed in the past two years. But my concern is this. Having seen so many codes openly ignored and broken, it’s hard to take any set of voluntary ideals seriously. What matters is what happens, and undermining their claim that everything will now change is the fact that nobody from this community has ever engaged in whistleblowing (though privately many tell me they’re aware of dark practices continuing even today). And for all the shouting, this new code isn’t even very useful: a medical writer could still produce the outline, the first draft, the intermediate drafts and the final draft, for example, with no problem at all; and the language used to describe the whole process is oddly disturbing, assuming – unthinkingly – that the data is the possession of the company, and that it will ‘share’ it with the academic.

  But more than that, even if we did believe that everything has suddenly changed, as they claim, as everyone in this area always claims – and it will be half a decade, at least, as ever, before we can tell if they’re right – not one of the longstanding members of the commercial medical writing community has ever given a clear account of why they did the things described above with a clear conscience. They paid guest authors to put their names on papers they had little or nothing to do with; and they ghostwrote papers covertly, knowing exactly what they were doing, and why, and what effect it would have on the doctors reading their work. These are the banal, widespread, bread-and-butter activities of their industry. So, a weak new voluntary code with no teeth from people who have not engaged in full disclosure – nor, frankly, offered an apology – is not, to my mind, any evidence that things have changed.

  What can you do?

  1. Lobby for your university to develop a strong and unambiguous code forbidding academic staff from being involved in ghostwriting. If you are a student, draw parallels with the plagiarism checks that are deployed on your own work.

  2. Lobby for the following changes in all academic journals you are involved in:

  • A full description of ‘film credit’ contributions at the end of every paper, including details of who initiated the idea for the publication.

  • A full declaration of the amount paid to any commercial medical writing firm for each paper, in the paper, and of who paid it.

  • Every person making a significant contribution should appear as a proper author, not be hidden in an acknowledgement of ‘editorial assistance’.

  3. Raise awareness of the issue of ghostwriting, and ensure that everyone you know realises that the people who appear as authors on an academic paper may have had little to do with writing it.

  4. If you teach medical students, ensure that they are aware of this widespread dishonesty among senior figures in the academic medical literature.

  5. If you are aware of colleagues who have accepted guest authorship, discuss the ethics of this with them.

  6. If you are a doctor or an academic, lobby for your Royal College or academic society to have a strong code forbidding involvement in ghostwriting.

  Academic journals

  We put a lot of trust in academic journals, because they are the conduit through which we find out about new scientific research. We assume that they take scientific articles based on merit. We assume that they make basic checks on accuracy (though we’ve seen that they don’t prevent misleading analyses of trial data being published). And we assume that the biggest, most famous journals – which are read regularly by many more people – take the better articles.

  This is naïve. In reality, the systems used by journals to select articles are brittle, and vulnerable to exploitation.

  Firstly, of course, there are the inherent frailties in the system. There is a huge amount of confusion for the public – and for many doctors – around what ‘peer reviewed’ publication actually means. Put very simply, when a paper is submitted to a journal, the editor sends it out to a few academics who they know have an interest in a particular field. These reviewers are unpaid, and do this work for the good of the academic community. They read the paper, and come to a judgement about whether it’s a newsworthy piece of research, a well-conducted study, fairly described, and whether its conclusions broadly match its findings.

  This is an imperfect and subjective set of judgement calls, standards vary hugely between journals, and there’s also room to stick the knife into competitors and enemies, since most reviewers’ comments are anonymised. That being said, the reviewers are often not very anonymous, because a comment like ‘This paper is unacceptable because it doesn’t cite the work of Chancer et al. in the introduction’ is a pretty good sign that Professor Chancer himself has just peer reviewed your paper. In any case, good journals often take papers that aren’t perfect, on the grounds that they have something, of some small scientific interest, in their results. So the academic literature is a ‘buyer beware’ environment, where judgement must be deployed by expert readers, and you cannot simply say, ‘I saw it in a peer reviewed paper, therefore it is true.’

  Then there is the clear conflict of interest. This problem is now openly discussed for academics – their industry grants, their drug-company stock portfolio – and every scientist is compelled by journal editors to declare their financial interests when publishing a paper. But the very editors who impose this rule on their contributors have, for the most part, exempted themselves from the same process. That is odd. The pharmaceutical industry has global revenues of $600 billion, and it buys a lot of advertising space in academic journals, often representing the greatest single component of a journal’s income stream, as editors very well know. In some respects, taking a step back, it’s odd that journals should only take adverts for drugs (and the occasional body scanner): the rates in JAMA are cheaper than those in Vogue, taking circulation into account (300,000 against a million), and doctors buy cars and smartphones like everyone else. But journals do like to look scholarly; and it was only recently that they were trying to persuade the government that drug adverts are educational content, and should therefore be tax exempt. You will remember, I hope, just how educational these adverts are, from the discussion earlier in this chapter of how often they make claims that are not supported by the evidence.

  To reduce the risk that this income strand will pervert decisions on whether to publish an article, journals often claim that they introduce ‘firewalls’ between editorial and advertising staff. Sadly, such firewalls are easily burnt through.

  In 2004, for example, an editorial was submitted to the respected journal Transplantation and Dialysis, questioning the value of erythropoietin, or ‘EPO’.92 Although this molecule is made by the body, it can also be manufactured and given medically, and in this form it is one of the biggest-selling pharmaceutical products of all time. It is also, unfortunately, extremely expensive, and the editorial was submitted in response to a call from Medicare, which had asked for help in reviewing its policy on giving the treatment to people in end-stage renal disease, since there were fears that it might not be effective. The editorial agreed with this pessimistic stance, and was accepted by three ‘peer reviewers’ at the journal. Then the editor sent the following unwise letter to the author:

  I have now heard back from a third reviewer of your EPO editorial, who also recommended that it be published…Unfortunately, I have been overruled by our marketing department with regard to publishing your editorial.

  As you accurately surmised, the publication of your editorial would, in fact, not be accepted in some quarters…and apparently went beyond what our marketing department was willing to accommodate. Please know that I gave it my best shot, as I firmly believe that opposing points of view should be provided a forum, especially in a medical environment, and especially after those points of view survive the peer review process. I truly am sorry.

  The letter was made public, and the journal reversed its decision. As ever, it is impossible to know how often decisions like this are made, and how often they are hidden. All we can do is document the scale of the financial incentive for journals, and the quantitative evidence showing a possible impact on their content.

  Overall, the pharmaceutical industry spends around half a billion dollars a year on advertising in academic journals.93 The biggest – NEJM, JAMA – take $10 or $20 million each, and the next rank down take a few million each. Strikingly, while many journals are run by professional bodies, their income from advertising is still far larger than anything they get from membership fees. In addition to the large general journals, and the small specialist ones, some journals are delivered to doctors for free, and subsidised entirely by advertising revenue. To see whether this income has an impact on content, a 2011 paper looked at all the issues of eleven journals read by GPs in Germany – a mix of free and subscription publications – and found 412 articles where drug recommendations were made. The results were stark: free journals, subsidised by advertising, ‘almost exclusively recommended the use of the specified drugs’. Journals financed entirely through subscription fees, meanwhile, ‘tended to recommend against the use of the same drugs’.94

  Advertising is not the only source of drug company revenue for academic journals; there are several other strands of income, some of which are not immediately apparent. Journals often produce ‘supplements’: whole extra editions outside a journal’s normal run. These are often sponsored by a drug company, based on the presentations at one of its sponsored conferences or events, and have much lower scientific standards than are found in the journal itself.

  Then there are ‘reprints’: special extra copies of individual academic papers, printed off and sold by academic journals. Drug reps buy these in huge quantities – with spends of up to $1 million for crates of copies of just one paper – and hand them out to doctors to promote their drugs. Those are the sorts of figures that haunt editors’ imaginations when they try to choose which of two trials they should publish. Richard Smith, a former editor of the British Medical Journal, framed the dilemma: ‘Publish a trial that will bring in $100,000 of profit, or meet the end-of-year budget by firing an editor.’95

  Sometimes the implicit reasoning behind these choices can find its way into the public domain. A recent investigation by the UK’s Prescription Medicines Code of Practice Authority, for example, ruled that the company Boehringer Ingelheim was responsible for the content of an article making unacceptable claims for its diabetes drug linagliptin, even though it was written by two academics, and appeared in the Wiley academic journal Future Prescriber, because ‘although Boehringer Ingelheim did not pay for the article per se, it in effect commissioned it through an agreement to pay for 2,000 reprints’.96

  For the most part, however, even the most basic numbers on this huge source of income are hard to obtain. A research project I was involved in found that the biggest and most lucrative reprint orders come overwhelmingly from the pharmaceutical industry (this was a lot of work, and we’ve just had it published in the BMJ,97 though it might have happened faster with a commercial medical writing firm handling the legwork for us). This simple finding is exactly what you might expect, but there was something else that happened during this study, which many people found much more concerning. We asked all the leading journals in the world for information about their income from reprints, but only the BMJ and the Lancet were willing to give us any data at all: the Journal of the American Medical Association said this information was proprietary; the vice president of publishing for Annals of Internal Medicine said they did not have the resources to provide the information; and the managing director of publishing for the New England Journal of Medicine said it would conflict with their business practices to tell us. So this huge source of pharmaceutical industry income, paid to the gatekeepers of medical knowledge, remains secret.

  Is there any evidence to show that journals are more likely, on a fair comparison, to take industry-funded studies?

  This has been studied only rarely – because, as we need to keep reminding ourselves, this whole area has hardly been a research funding priority – but the answer appears to be yes. A paper published in 2009 analysed every study ever published on the influenza vaccine98 (although it’s reasonable to assume that its results might hold for other subject areas). It looked at whether funding source affected the quality of a study, the accuracy of its summary, and the eminence of the journal in which it was published.

  Academics measure the eminence of a journal, rightly or wrongly, by its ‘impact factor’: an indicator of how commonly, on average, research papers in that journal go on to be ‘cited’ or ‘referenced’ by other research papers elsewhere. The average journal impact factor for the ninety-two government-funded studies was 3.74; for the fifty-two studies wholly or partly funded by industry, the average impact factor was much higher, at 8.78. This means that studies funded by the pharmaceutical industry were hugely more likely to get into the bigger, more respected journals.
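  For readers who want the arithmetic behind that comparison, this is the standard two-year impact factor definition – my gloss, not a formula given in this book:

```latex
% Two-year impact factor of a journal in year y
\mathrm{IF}_{y} \;=\;
  \frac{\text{citations received in year } y \text{ to items published in } y{-}1 \text{ and } y{-}2}
       {\text{number of citable items published in } y{-}1 \text{ and } y{-}2}
```

  So a paper in a journal with an impact factor of 8.78 sits, on average, alongside work cited well over twice as often as work in a journal with an impact factor of 3.74.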

 
