I Think You'll Find It's a Bit More Complicated Than That


by Ben Goldacre


  LIBEL is a subject close to my heart, having been through the process too many times. In this section, we see how the people who sue tend not to be very nice, and how their legal aggression can – to my great pleasure – backfire. This section also includes breast-enhancement cream, and the brief return of Gillian McKeith.

  I’ve always railed against the idea that QUACKS are manipulators, with innocent victims for customers: one woman’s trip to intensive care presents an opportunity to see where the blame really lies, when quacks have their magical beliefs routinely reinforced by journalists and the government. More than that, we see how serious organisations – from universities to medicines regulators – can fail to uphold their own stated values when under political pressure or seduced by money. Then we have a brief interlude to look at three peculiarly enduring themes in modern culture: MAGIC BOXES of secret electronic components with supernatural powers (to detect bombs, cure cigarette addiction and even find murdered children), AIDS denialism (at the Spectator, of all places), and, in ELECTROSENSITIVITY, people eager to claim that electrical fields make you unwell (while selling you expensive equipment to protect yourself, and seducing journalists from broadsheets to the BBC’s Panorama).

  If science is about the quest for truth, then equally important is the science of IRRATIONALITY – how and why our hunches get things wrong – because that’s the reason we need fair experiments and careful statistics in the first place. Here we see how our intuitions about whether a treatment works can be affected by the way the numbers are presented, how our outrage is lower when a criminal has more victims, why blind auditions can help combat sexism in orchestras, how people can turn their back on all of science when some evidence challenges just one of their prejudices, how people win more in a simple game when they’re told they’ve got a lucky ball, how responding to a smear can reinforce it, how smokers are misled by cigarette packaging, how people can convince themselves that patients in comas are communicating, and how negative beliefs can make people experience horrible side effects, even when they’re only taking sugar pills with no medicine in them. In this section I also unwisely disclose my own positive and creative visualisation ritual, and the evidence behind it.

  In BAD JOURNALISM we see the many different ways that journalists can distort scientific findings: misrepresenting an MSc student’s dissertation project with a headline that claims scientists are blaming women for their own rape, creating vaccine scares, and saying that exercise makes you fat. We also see the techniques journalists use to mislead, by burying the caveats and failing to link to primary sources, then we review research showing that academic press releases are often to blame, and that crass reporting on suicide can create copy-cat behaviour. The work in this section has made me extremely unpopular with whole chunks of the media, but I truly don’t think there’s anything personal here: the pieces are simply straight explanations, illustrating how evidence has been misrepresented by professional people with huge public influence. In light of that, I’ve included some attacks on me by others, and you can make what you will of their backlash. Lastly, we see how hit TV science series BRAINIAC – which sells itself on doing truly dangerous, really ‘real’ science – simply fakes explosions with cheap stage effects.

  In the final furlong, there’s a collection of STUFF: my affectionate introduction to the guidebook of a miniature steam railway that takes you through council estates to the foot of a nuclear power station, and a guide to stalking your girlfriend through her mobile phone (with permission). Lastly there are some EARLY SNARKS. Reading your own work from ten years ago is a bit like being tied down, with your eyelids glued open, and forced to watch ten-foot videos of yourself saying stupid things with bad hair. But in case you miss the child I once was, here I take pops at cosmetics companies selling ‘trionated particles’, do the maths on oxygenated water that would drown you before it did any good, and cry at finding New Scientist being taken in by some obviously fake artificial intelligence software.

  So welcome, again, to my epidemiology and statistics toilet book. By the simple act of keeping this book next to the loo you will – I can guarantee it – develop a clear understanding of almost all the key issues in statistics and study design. Your knowledge will outdo that of many working scientists and doctors, trapped in the silo of their specialist subjects. You will be funny at parties and useful at work, and the trionated ink molecules embedded in every page will make you youthful, beautiful and politically astute.

  I hope these small packages bring you satisfaction.

  2014

  HOW SCIENCE WORKS

  Why Won’t Professor Susan Greenfield Publish This Theory in a Scientific Journal?

  Guardian, 22 October 2011

  This week Baroness Susan Greenfield, Professor of Pharmacology at Oxford, apparently announced that computer games are causing dementia in children. This would be very concerning scientific information; but it comes to us from the opening of a new wing at an expensive boarding school, not an academic conference. Then a spokesperson told a gaming site that’s not really what she meant. But they couldn’t say what she does mean.

  Two months ago the same professor linked internet use with the rise in autism diagnoses (not for the first time), then pulled back when autism charities and an Oxford professor of psychology raised concerns. Similar claims go back a very long way. They seem changeable, but serious.

  It’s with some trepidation that anyone writes about Professor Greenfield’s claims. When I raised concerns, she said I was like the epidemiologists who denied that smoking caused cancer. Other critics find themselves derided in the media as sexist. When Professor Dorothy Bishop raised concerns, Professor Greenfield responded: ‘It’s not really for Dorothy to comment on how I run my career.’

  But I have one, humble, question: why, in over five years of appearing in the media raising these grave worries, has Professor Greenfield of Oxford University never simply published the claims in an academic paper?

  A scientist with enduring concerns about a serious widespread risk would normally set out their concerns clearly, to other scientists, in a scientific paper, and for one simple reason. Science has authority, not because of white coats or titles, but because of precision and transparency: you explain your theory, set out your evidence, and reference the studies that support your case. Other scientists can then read it, see if you’ve fairly represented the evidence, and decide whether the methods of the papers you’ve cited really do produce results that meaningfully support your hypothesis.

  Perhaps there are gaps in our knowledge? Great. The phrase ‘more research is needed’ has famously been banned by the British Medical Journal, because it’s uninformative: a scientific paper is the place to clearly describe the gaps in our knowledge, and specify new experiments that might resolve these uncertainties.

  But the value of a scientific publication goes beyond this simple benefit of all relevant information appearing, unambiguously, in one place. It’s also a way to communicate your ideas to your scientific peers, and invite them to express an informed view.

  By this, I don’t mean peer review, the ‘least-worst’ system settled on for deciding whether a paper is worth publishing, where other academics decide if it’s accurate, novel, and so on. This is often represented as some kind of policing system for truth, but in reality some dreadful nonsense gets published, and mercifully so: shaky material of some small value can be published into the buyer-beware professional literature of academic science; then the academic readers of this literature, who are trained to critically appraise a scientific case, can make their own judgement.

  And it is this second stage of review by your peers – after publication – that is so important in science. If there are flaws in your case, responses can be written, as letters to the academic journal, or even whole new papers. If there is merit in your work, then new ideas and research will be triggered. That is the real process of science.

  If a scientist sidesteps their scientific peers, and chooses to take an apparently changeable, frightening and technical scientific case directly to the public, then that is a deliberate decision, and one that can’t realistically go unnoticed. The lay public might find your case superficially appealing, but they may not be fully able to judge the merits of all your technical evidence.

  I think these serious scientific concerns belong, at least once, in a clear scientific paper. I don’t see how this suggestion is inappropriate, or impudent, and in all seriousness, I can’t see an argument against it. I hope it won’t elicit an accusation of sexism, or of participation in a cover-up. I hope that it will simply result in an Oxford science professor writing a scientific paper, about a scientific claim of great public health importance, that she has made repeatedly – but confusingly – for at least half a decade.

  Cherry-Picking Is Bad. At Least Warn Us When You Do It

  Guardian, 24 September 2011

  Last week the Daily Mail and Radio 4’s Today programme took some bait from Aric Sigman, an author of popular-sciencey books about the merits of traditional values. ‘Sending babies and toddlers to daycare could do untold damage to the development of their brains and their future health,’ explained the Mail.

  These news stories were based on a scientific paper by Sigman in the Biologist. It misrepresents individual studies, as Professor Dorothy Bishop demonstrated almost immediately, and it cherry-picks the scientific literature, selectively referencing only the studies that support Sigman’s view. Normally this charge of cherry-picking would take a column of effort to prove, but this time Sigman himself admits it, frankly, in a PDF posted on his own website.

  Let me explain why this behaviour is a problem. Nobody reading the Biologist, or its press release, could possibly have known that the evidence presented was deliberately incomplete. That is, in my opinion, an act of deceit by the journal; but it also illustrates one of the most important principles in science, and one of the most bafflingly recent to emerge.

  Here is the paradox. In science, we design every individual experiment as cleanly as possible. In a trial comparing two pills, for example, we make sure that participants don’t know which pill they’re getting, so that their expectations don’t change the symptoms they report. We design experiments carefully like this to exclude bias: to isolate individual factors, and ensure that the findings we get really do reflect the thing we’re trying to measure.
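  (A minimal simulation sketch, entirely my own and with invented numbers, may make this concrete: a pill with no real effect at all can look effective if unblinded participants who know they’re on the ‘real’ drug report rosier outcomes, and the illusion disappears once the trial is blinded.)

```python
# Toy simulation, not real trial data: how expectation bias can
# manufacture a drug effect in an unblinded trial.
import random

random.seed(0)

def run_trial(blinded, n=10_000):
    """Two-arm trial of a pill with NO real effect; returns the
    mean reported improvement in each arm."""
    scores = {"drug": [], "placebo": []}
    for _ in range(n):
        arm = random.choice(["drug", "placebo"])
        reported = random.gauss(0, 1)   # true improvement: none, just noise
        if not blinded and arm == "drug":
            reported += 0.5             # hypothetical expectation bias
        scores[arm].append(reported)
    return {arm: round(sum(v) / len(v), 2) for arm, v in scores.items()}

print("unblinded:", run_trial(blinded=False))  # drug 'wins', spuriously
print("blinded:  ", run_trial(blinded=True))   # arms match, correctly
```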

  But individual experiments are not the end of the story. There is a second, crucial process in science, which is synthesising that evidence together to create a coherent picture.

  In the very recent past, this was done badly. In the 1980s, researchers such as Cynthia Mulrow produced damning research showing that review articles in academic journals and textbooks, which everyone had trusted, actually presented a distorted and unrepresentative view when compared with a systematic search of the academic literature. After struggling to exclude bias from every individual study, doctors and academics would then synthesise that evidence together with frightening arbitrariness.

  The science of ‘systematic reviews’ that grew from this research is exactly that: a science. It’s a series of reproducible methods for searching information, to ensure that your evidence synthesis is as free from bias as your individual experiments. You describe not just what you found, but how you looked, which research databases you used, what search terms you typed, and so on. This apparently obvious manoeuvre has revolutionised the science of medicine.
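  (As an illustrative sketch of what ‘reproducible’ means here – the field names below are my own invention, not any journal’s standard – the search itself becomes a written record that anyone can rerun and audit:)

```python
# Toy example of a pre-specified, reproducible search protocol.
# Field names and values are illustrative, not from a real review.
search_protocol = {
    "databases": ["MEDLINE", "Embase", "PsycINFO"],               # where you looked
    "terms": '("daycare" OR "day care") AND "child development"',  # what you typed
    "date_range": ("1980-01-01", "2011-09-01"),                   # which period
    "inclusion": "peer-reviewed studies of children under five",
    "exclusion": "case reports, conference abstracts",
}

# Because every step is written down, a reader can repeat the searches
# and verify that no inconvenient studies were quietly left out --
# which is exactly what an undeclared cherry-pick hides.
for key, value in search_protocol.items():
    print(f"{key}: {value}")
```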

  What does that have to do with Aric Sigman, the Society of Biology, and their journal, the Biologist? Well, this article was not a systematic review, the cleanest form of research summary, and it was not presented as one. But it also wasn’t a reasonable summary of the research literature, and that wasn’t just a function of Sigman’s unconscious desire to make a case: it was entirely deliberate. A deliberately incomplete view of the literature, as I hope I’ve explained, isn’t a neutral or marginal failure. It is exactly as bad as a deliberately flawed experiment, and to present it to readers without warning is bizarre.

  Blame is not interesting, but I got in touch with the Society of Biology, as I think we’re more entitled to have high expectations of them than of Sigman, who is, after all, some guy writing fun books in Brighton. They agree that what they did was wrong, that mistakes were made, and that they will do things differently in future.

  Here’s why I don’t think that’s true. The last time they did exactly the same thing, not long ago, with another deliberately incomplete article from Sigman, I wrote to the journal, the editor, and the editorial board, setting out these concerns very clearly.

  The Biologist has actively decided to continue publishing these pieces by Sigman, without warning. They get the journal huge publicity; and fair enough. I’m no policeman. But in the two-actor process of communication, until it explains to its readers that it knowingly presents cherry-picked papers without warning – and makes a public commitment to stop – it’s for readers to decide whether they can trust what the journal publishes.

  Being Wrong

  Guardian, 15 July 2011

  Morons often like to claim that their truth has been suppressed: that they are like Galileo, a noble outsider fighting the rigid and political domain of the scientific literature, which resists every challenge to orthodoxy.

  Like many claims, this is something for which it’s possible to gather data.

  Firstly, there are individual anecdotes that demonstrate the routine humdrum of medical fact being overturned. We used to think that hormone-replacement therapy significantly reduced the risk of heart attacks, for example, because this was the finding of several large observational studies. That kind of research has important limitations: if you just grab some women who are receiving prescriptions for HRT from their doctors, and compare them to women who aren’t getting HRT, you might well find that the women on HRT are healthier, but that might simply be because they were healthier to start with. Women on HRT might be richer, or more interested in their health, for example. At the time, this research represented our best guess, and that’s often all you have to work with. Eventually, after decades of HRT being widely used, a large randomised trial was conducted: they took 16,000 women who were eligible for HRT, and randomly assigned them to receive either real hormones, or a dummy placebo pill. At long last we had a fair test, and after several years of treatment had passed, in 2002, the answer fell out. HRT increased the risk of having a heart attack by 29 per cent.
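  (A rough simulation of my own, with numbers invented purely for illustration, shows how the two pictures can coexist: when healthier women are more likely to choose HRT, the observational comparison makes HRT look protective, while a coin-flip assignment recovers the true 29 per cent harm.)

```python
# Toy simulation with invented numbers: confounding in observational data
# versus a randomised trial, for a treatment that truly raises risk by 29%.
import random

random.seed(1)

def study(p_hrt_if_healthy, p_hrt_if_not, n=200_000):
    """Return the heart-attack risk ratio (HRT vs no HRT). The only
    difference between 'study designs' is who ends up taking HRT."""
    counts = {True: [0, 0], False: [0, 0]}   # takes_hrt -> [events, people]
    for _ in range(n):
        healthy = random.random() < 0.5      # hidden confounder
        takes_hrt = random.random() < (p_hrt_if_healthy if healthy else p_hrt_if_not)
        base_risk = 0.02 if healthy else 0.06            # healthier women have fewer heart attacks anyway
        risk = base_risk * (1.29 if takes_hrt else 1.0)  # the true 29% harm
        counts[takes_hrt][0] += random.random() < risk   # heart attack?
        counts[takes_hrt][1] += 1
    return (counts[True][0] / counts[True][1]) / (counts[False][0] / counts[False][1])

# Observational: healthier women are more likely to choose HRT.
print("observational risk ratio:", round(study(0.7, 0.3), 2))  # ~0.86: HRT looks protective
# Randomised: a coin flip decides, breaking the link with health.
print("randomised risk ratio:  ", round(study(0.5, 0.5), 2))   # ~1.29: the true harm emerges
```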

  Were these findings suppressed? No. They were greeted eagerly, and with some horror: in fact, the finding was so concerning that the trial had to be stopped early, to avoid putting any further participants at risk, and medical practice was overturned.

  Even the supposed stories of outright medical intransigence turn out, on close examination, to be pretty weak: people claim that doctors were slow to embrace Helicobacter pylori as the cause of gastric ulcers, when in reality it only took a decade from the first murmur of a research finding to international guidelines recommending antibiotic treatment for all patients with ulcers.

  But individual stories aren’t enough. This week Vinay Prasad and colleagues published a fascinating piece of research about research. They took all 212 academic papers published in the New England Journal of Medicine during 2009. Of those, 124 made some kind of claim about whether a treatment worked or not. Then, they set about measuring how those findings fitted into what was already known. Two reviewers assessed whether the results in each study were positive or negative, and finally – separately – they decided whether these new findings overturned previous research.

  Seventy-three of the studies looked at new treatments, so there was nothing to overturn. But the remaining fifty-one were very interesting, because they were, essentially, evenly split: sixteen upheld a current practice as beneficial; nineteen were inconclusive; and, crucially, sixteen found that a practice believed to be effective was in fact ineffective, or vice versa.

  Is this unexpected? Not at all.

  If you like, you can look at the same problem from the opposite end of the telescope. In 2005, John Ioannidis gathered together all the major clinical research papers published in three prominent medical journals between 1990 and 2003; specifically, he took the ‘citation classics’, the forty-nine studies that were cited more than a thousand times by subsequent academic papers.

  Then he checked to see whether their findings had stood the test of time, by conducting a systematic search of the literature to make sure he was consistent in finding subsequent data. Of his forty-nine citation classics, forty-five had found that an intervention was effective, but in the time that had passed subsequently, only half of these findings had been positively replicated. Seven of those positive studies – 16 per cent – were flatly contradicted by subsequent research; and for a further seven, follow-up research had found that the benefits originally identified were present, but more modest than first thought.

  This looks like a reasonably healthy state of affairs to me: there probably are true tales of dodgy peer reviewers delaying publication of findings they don’t like, but overall, things are routinely proven to be wrong in academic journals. Equally, the other side of this coin is not to be neglected: we often turn out to be wrong, even with giant, classic papers. So it pays to be cautious with dramatic new findings; if you blink you might miss a refutation; and there’s never an excuse to stop monitoring outcomes.

 
