I Think You'll Find It's a Bit More Complicated Than That


by Ben Goldacre


  In the intervening fifty years this book has sold one and a half million copies. It’s the greatest-selling stats book of all time (tough market), and it remains in print, at just £8.99.

  Meanwhile, ‘Doctors say no to abortions in their surgeries’ is the headline in the Daily Telegraph. ‘Family doctors are threatening a revolt against Government plans to allow them to perform abortions in their surgeries, the Daily Telegraph can disclose.’ A revolt? ‘Four out of five GPs do not want to carry out terminations even though the idea is being tested in NHS pilot schemes, a survey has revealed.’

  Channelling Huff through my fingers, in a trancelike state, I went in search of the figures. Was this a systematic survey of all GPs, with lots of chasing to catch the non-responders? Telephoning them at work? A postal survey, at least? No. It was an informal poll through doctors.net.uk, an online chat site for doctors, producing this major news story about a profession threatening a revolt.

  The statement to which doctors were invited to respond was this: ‘GPs should carry out abortions in their surgeries’. You can ‘strongly agree’, ‘agree’, ‘don’t know’, ‘disagree’ or ‘strongly disagree’.

  I might be slow, but I myself do not fully understand the statement. Is that ‘should’ as in ‘should’, as in, ‘ought to’, as in ‘coerced’? And in what circumstances? With extra training, time, and money? With extra systems in place for adverse outcomes? This is a chat website where doctors go to grumble, cynically, in good company. Are they saying ‘no’ because this new responsibility would involve more work and lower morale? Would you even click the ‘abortion’ link in the chat pages index if you didn’t already have an interest in abortion?

  And stepping bravely beyond the second word ‘should’, what does ‘carry out abortions in their surgeries’ mean? Looking at the comments in the chat forum – as I am doing right now – plenty of the doctors seemed to think the question referred to surgical abortions, not the relatively safe oral pill for termination of early pregnancy. Doctors aren’t all that bright, you see, and questionnaire respondents in general may not necessarily know what you’re thinking about if you don’t write a proper question.

  Here are some quotes from the doctors in the discussion underneath this poll. ‘This is a preposterous idea. How can GPs ever carry out abortions in their own surgeries. What if there was a major complication like uterine and bowel perforation?’ ‘The only way it would or rather should happen is if GP practices have a surgical day care facility as part of their premises which is staffed by appropriately trained staff, i.e. theatre staff, anaesthetist and gynaecologist … any surgical operation is not without its risks, and presumably [we] will undergo gynaecological surgical training in order to perform.’ ‘What are we all going on about? Let’s all carry out abortions in our surgeries, living rooms, kitchens, garages, corner shops, you know, just like in the old days.’

  But my favourite is this: ‘I think that the question is poorly worded and I hope that DNUK do not release the results of this poll to the Daily Telegraph.’

  A New and Interesting Form of Wrong

  Guardian, 27 November 2010

  Wrong isn’t enough: we need interestingly wrong, and this week that came in some research from Stonewall, an organisation for which I generally have great respect, which was reported in the Guardian. Stonewall has conducted a survey, and its press release says it shows that ‘the average coming-out age has fallen by over twenty years’.

  People may well be coming out earlier than before – intuitively, that seems plausible – but Stonewall’s survey is flawed by design, and contains some interesting statistical traps.

  Through social networking sites, Stonewall asked 1,536 people – who were already out – how old they were when they came out. Among the over-sixties, the average age was thirty-seven; those in their thirties had come out at an average age of twenty-one; in the group aged eighteen to twenty-four, the average age for coming out was seventeen.

  Why is the age coming down? Here’s one reason. Obviously, there are no out gay people in the eighteen-to-twenty-four group who came out at an age later than twenty-four; so the average age at which people in the eighteen-to-twenty-four group came out cannot possibly be greater than the average age of that group, and certainly it will be lower than, say, thirty-seven, the average age at which people in their sixties came out.

  For the same reason, it’s very likely indeed that the average age of coming out will increase as the average age of each age group rises. In fact, if we assume (in formal terms we could call this a ‘statistical model’) that at any time, all the people who are out have always come out at a uniform rate between the age of ten and their current age, you would get almost exactly the same figures (you’d get fifteen, twenty-three and thirty-five, instead of seventeen, twenty-one and thirty-seven). This is almost certainly why ‘the average coming-out age has fallen by over twenty years’: in fact you could say that Stonewall’s survey has found that on average, as people get older, they get older.
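
  If you want to see that model in action, here is a minimal sketch in Python. The representative current age for each band is an assumption of mine, not a figure from Stonewall's survey; the point is only that the 'falling' coming-out age emerges from the arithmetic alone.

```python
# A minimal sketch of the model described above: everyone comes out at a
# uniform rate between the age of ten and their current age. The
# representative current ages per band are my assumptions, not survey figures.
representative_age = {"18-24": 21, "30s": 36, "60+": 60}

for band, current_age in representative_age.items():
    # Under a uniform rate between ten and the current age, the expected
    # coming-out age is simply the midpoint of that interval.
    expected_coming_out_age = (10 + current_age) / 2
    print(band, expected_coming_out_age)

# Prints roughly 15.5, 23 and 35: much the same pattern as the survey found,
# even though the model assumes identical behaviour in every generation.
```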

  But there is also an interesting problem around whether, with the data it collected, Stonewall could ever have created a meaningful answer to the question ‘Have people started coming out earlier?’ It’s a difficult analysis to design, because in each age band there is no information on gay people who are not yet out, but may come out later, and also it’s hard to compare each age band with the others.

  You could try to fix this by restricting all the data to include only those people who came out under the age of twenty-four, and then measure the mean age of coming out for each age group (eighteen-to-twenty-four, thirties, sixty plus) in this subgroup alone. That would give you some kind of answer for this very narrow age band, but even that makes some very dubious statistical assumptions. And if we allowed ourselves this move, we’d then be working with an extremely small set of data: only thirty-three respondents aged over sixty, for example.
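
  For what it's worth, a rough sketch of that restricted comparison might look like this; the record format is hypothetical, since Stonewall's raw responses aren't public, and the sample sizes are reported alongside the means because they matter.

```python
# A rough sketch of the restricted comparison described above: keep only
# respondents who came out before twenty-four, then compare the mean
# coming-out age across current-age bands. The record format is hypothetical.
from collections import defaultdict
from statistics import mean

def mean_coming_out_by_band(respondents, cutoff=24):
    """respondents: iterable of (current_age_band, coming_out_age) pairs."""
    by_band = defaultdict(list)
    for band, coming_out_age in respondents:
        if coming_out_age < cutoff:      # restrict to the comparable subgroup
            by_band[band].append(coming_out_age)
    # Report the mean alongside the sample size, because the size matters:
    # with only thirty-three respondents over sixty, the subgroup is tiny.
    return {band: (mean(ages), len(ages)) for band, ages in by_band.items()}
```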

  Even then, the discussion of this poll also assumes that the age at which people know their sexuality has remained unchanged. Some believe that everyone’s sexuality is fixed and known from birth – I may be walking into a minefield here – but if the age at which people recognise their own sexuality is changing, then a more relevant figure by which to measure discomfort at coming out might be the delay, rather than the absolute age.

  I thought I’d already covered all the ways that a survey could get things wrong, but this one brought something new. Maybe we should accept that all research of this kind is only produced as a hook for a news story about a political issue, and isn’t ever supposed to be taken seriously. In any case, my intuition is that a well-constructed study would probably confirm Stonewall’s original hypothesis. But it’s still fun to dig.

  ‘Hello Madam, Would You Like Your Children to Be Unemployed?’

  Guardian, 20 November 2010

  Obviously I like nerdy days out: like Kelvedon Hatch secret nuclear bunker, maybe, with its sign on the A128 pointing the way to the ‘Secret Nuclear Bunker’. Last month eight of us commissioned a boat to get onto a rotting man-made World War II sea-fort in the middle of the ocean through Project Redsand (we genuinely thought we might die climbing the ladders), and a couple of weeks earlier, myself and Mrs Bad Science travelled to Dungeness, where a toytown narrow-gauge railway takes you through amusement parks and back gardens, past Derek Jarman’s house, then into barren wasteland, before depositing you incongruously at the base of a magnificent, enormous, and terrifying nuclear power station.1

  I tell you this, because I should declare an interest: I quite like nuclear power stations, not just because they’re clever, or even because I regretfully concede they might be one of our least bad options for power. I secretly like nuclear power stations because they remind me, in the way nostalgia makes us pine for things we disliked at the time, of a childhood in the early 1980s when I knew that I would definitely die in a nuclear holocaust.

  So. Last month energy company EDF conducted a poll on whether people near Hinkley Point nuclear power station would like it to be expanded. The BBC dutifully reported the results: ‘EDF Survey Shows Support for Hinkley Power Station’, said the headline. ‘Six in 10 people support a new power station at Hinkley’. Polls like this convince locals, and politicians.

  But Leo Barasi at the blog ClimateSock has obtained the original polling questions from ICM, and found a masterclass in manipulation.

  First, respondents are set into the frame with a simple starter: ‘How favourable or unfavourable is your opinion of the nuclear energy industry?’ Then things heat up. ‘To what extent do you agree or disagree with the following statement: Nuclear energy has disadvantages but the country needs nuclear power as part of the energy balance with coal, gas and wind power.’ As Leo says, this is structured in a way that makes it harder to disagree. ‘It appears reasoned: taking on board the downsides of nuclear before drawing a measured conclusion that it’s a necessary evil to produce a greater good.’ As a result, only 13 per cent disagree, but the whole audience is gently nudged.

  Then locals are asked a whole series of branching questions, forcing them to weigh up the positive and negative impacts a new power station would have on the area. People who think it would be positive are asked to also weigh up the negative, and people who think it would be negative are asked to weigh up the positive factors, and everyone is asked to say why they think what they think.

  Then, in a killer move, they’re asked: ‘How important, if at all, do you consider a new power station at Hinkley to each of the following? To the creation of local jobs? To the future of local businesses?’ And take a moment to reinforce those concerns: ‘Why do you say that?’

  Finally, after being led on this thoughtful journey, and immediately after mulling over the beneficial economic impact it would have on their community, the locals are asked if they’re in favour of a new nuclear power station. It’s the results of this, the final question, that are reported in the press release and headlines.

  To me it seems clear that this long series of preceding questions will guide people down a very specific path when thinking about a nuclear power station. It’s a guided narrative, and that might make sense if you were trying to advocate a kind of structured decision-making, but it’s very unlikely to produce results that reflect the true range of local views, partly because we’re all a bit thoughtless in the real world, and follow our guts in odd ways; but partly because the penultimate question is ‘Do you want your children to be unemployed?’ rather than ‘Are you secretly terrified we might cock up and give you cancer?’

  So I still quite like nuclear power stations, but more than that, as ever, I salute the PR industry for finding new and elaborate ways to muddy the waters. And I salute the nerds who bust them for it.

  EPIDEMIOLOGY

  Beau Funnel

  Guardian, 28 October 2011

  The BBC has found a story: ‘“Threefold variation” in UK bowel cancer rates’. The average death rate across the UK from bowel cancer is 17.9 per 100,000 people, but in some places it’s as low as nine, and in some places it’s as high as thirty. What can be causing this?

  Journalists tend to find imaginary patterns in statistical noise, which we’ve covered many times before. But this case is particularly silly, as you will see, and it has a heartwarming, nerdy twist.

  Paul Barden is a quantitative analyst. He saw the story, and decided to download the data and analyse it himself. The claims come from a press release by the charity Beating Bowel Cancer: they’ve built a map where you can find your own local authority’s bowel cancer mortality rate and get worried, or reassured. Using a ‘scraping’ program, Barden brought up the page for each area in turn, and downloaded the figures. By doing this he could make a spreadsheet showing the death rate in each region, and its population. From here things get slightly complicated, but very rewarding.
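
  For the curious, a scrape of that kind might look roughly like this; the URL, the list of areas and the page markup below are placeholders of mine, not the charity's actual site.

```python
# A rough sketch of the kind of scrape Barden describes. The URL, the list of
# areas and the page markup are hypothetical placeholders.
import csv
import requests
from bs4 import BeautifulSoup

AREAS = ["example-local-authority"]   # in practice, one slug per local authority

rows = []
for area in AREAS:
    page = requests.get(f"https://example.org/bowel-cancer-map/{area}")
    soup = BeautifulSoup(page.text, "html.parser")
    # Assumed markup: the rate and population sit in elements with these classes.
    rate = float(soup.find(class_="mortality-rate").get_text())
    population = int(soup.find(class_="population").get_text().replace(",", ""))
    rows.append({"area": area, "rate_per_100k": rate, "population": population})

with open("bowel_cancer_by_area.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["area", "rate_per_100k", "population"])
    writer.writeheader()
    writer.writerows(rows)
```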

  We know that there will be random variation around the average mortality rate, and also that this will be different in different regions: local authorities with larger populations will have less random variation than areas with smaller populations, because the variation from chance events gets evened out more when there are more people.

  You can show this formally. The random variation for this kind of mortality rate will follow the Poisson distribution (a bit like the bell-shaped curve you’ll be familiar with). This bell-shaped curve gets narrower – less random variation – for areas with a large population.

  So, Barden ran a series of simulations in Excel, where he took the UK average bowel cancer mortality rate and a series of typical population sizes, and then used the Poisson distribution to generate figures for the bowel cancer death rate that varied with the randomness you would expect from chance.

  This random variation predicted by the Poisson distribution – before you even look at the real variations between areas – shows that you would expect some areas to have a death rate of seven, and some areas to have a death rate of thirty-two. So it turns out that the real UK variation, from nine to thirty-one, may actually be less than you’d expect from chance.
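
  You can check this with a few lines of simulation, much as Barden did in Excel. This is a minimal sketch: the population sizes are illustrative assumptions, not the real local-authority figures, but the spread they produce is the point.

```python
# A minimal sketch of the simulation described above: hold the true rate fixed
# at the UK average and watch how much the observed rate per 100,000 varies by
# chance alone. The population sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
UK_RATE = 17.9  # bowel cancer deaths per 100,000 per year

for population in (60_000, 120_000, 250_000, 500_000):
    expected_deaths = UK_RATE * population / 100_000
    deaths = rng.poisson(expected_deaths, size=10_000)   # simulated death counts
    rates = deaths / population * 100_000                # back to rates per 100,000
    low, high = np.percentile(rates, [2.5, 97.5])
    print(f"pop {population:>7,}: rate varies from {low:.0f} to {high:.0f} by chance alone")

# Smaller areas routinely throw up rates below ten or around thirty, even though
# every simulated area has exactly the same underlying risk.
```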

  Then Barden sent his blog to David Spiegelhalter, a Professor of Statistics at Cambridge, who runs the excellent website Understanding Uncertainty. Spiegelhalter suggested that Barden could present the real cancer figures as a funnel plot, and that’s what you see opposite.

  I cannot begin to tell you how happy it makes me that Spiegelhalter, author of Funnel Plots for Comparing Institutional Performance – the citation classic from 2005 – can be found by a random blogger online, and then collaborate to make an informative graph of some data that’s been over-interpreted by the BBC.

  But back to the picture. Each dot is a local authority. The dots higher up show areas with more deaths. The dots further to the right show ones with larger populations. As you can see, areas with larger populations are more tightly clustered around the UK average death rate, because there’s less random variation in bigger populations. Lastly, the dotted lines show you the amount of random variation you would expect to see, from the Poisson distribution, and there are very few outliers (well, one main one, really).
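
  If you'd like to draw something similar yourself from the scraped figures, a rough sketch follows; note that it uses a simple normal approximation for the control limits, a simplification of the exact Poisson limits used in Spiegelhalter's published funnel plots.

```python
# A rough sketch of a funnel plot drawn from the scraped figures above
# (column names assumed). The control limits use a normal approximation to
# the Poisson, a simplification of the exact limits in published funnel plots.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("bowel_cancer_by_area.csv")   # area, rate_per_100k, population
uk_rate = (df["rate_per_100k"] * df["population"]).sum() / df["population"].sum()

pops = np.linspace(df["population"].min(), df["population"].max(), 200)
expected_deaths = uk_rate * pops / 100_000
se = np.sqrt(expected_deaths) / pops * 100_000   # approximate standard error of the rate

plt.scatter(df["population"], df["rate_per_100k"], s=12)
plt.axhline(uk_rate, color="grey", label="UK average")
plt.plot(pops, uk_rate + 1.96 * se, "k--", label="95% limits")
plt.plot(pops, uk_rate - 1.96 * se, "k--")
plt.xlabel("Local authority population")
plt.ylabel("Bowel cancer deaths per 100,000")
plt.legend()
plt.show()
```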

  Excitingly, you can also do this yourself online. The Public Health Observatories provide several neat tools for analysing data, and one will draw a funnel plot for you, from exactly this kind of mortality data. The bowel cancer numbers are in the table above. You can paste them into the Observatories’ tool, click ‘calculate’, and experience the thrill of touching real data.

  In fact, if you’re a journalist, and you find yourself wanting to claim one region is worse than another, for any similar set of death rate figures, then do feel free to use this tool on those figures yourself. It might take five minutes.

  The week after this column came out, a letter was published from Gary Smith, UK News Editor at the BBC. It said: ‘The BBC stands by this report as an accurate representation of the figures, which were provided by the reputable charity Beating Bowel Cancer. Dr Goldacre suggests the difference between the best- and worst-performing authorities falls within a range that could be expected through chance [I don’t suggest this: I demonstrate it to be true]. But that does not change the fact that there is a threefold difference between the best and worst local authorities.’ This is a good example of the Dunning-Kruger effect: the phenomenon of being too stupid to know how stupid you’re being (discussed, if you’re keen to know more, in Bad Science, here).

  When Journalists Do Primary Research

  Guardian, 9 April 2011

  This week some journalists found a pattern in some data, and ascribed a cause to it. ‘Recession linked to huge rise in antidepressants’ said the Telegraph. ‘Economic woes fuel dramatic rise in use of antidepressants’ said the Daily Mail. ‘Record numbers of people are being handed antidepressants’ said the Express. Even the Guardian joined in. It seems to have come from a BBC report.

  The journalists are keen for you to know that these figures come from a Freedom of Information Act request, which surprised me, since each year – like you – I enjoy reading the Prescription Cost Analysis documents, which detail everything that has been prescribed over the previous year. The 2009 data was published in April 2010, so I guess the 2010 data was due about now.

  But are the numbers correct? Yes. From 2006 to 2010 there was a 43 per cent increase in the number of prescriptions for the SSRI class of antidepressants. Does that mean more people are depressed in the recession?

  Firstly, this rise in scripts for antidepressants isn’t a new phenomenon. In 2009 the BMJ published a paper titled ‘Explaining the rise in antidepressant prescribing’, which looked at the period from 1993 to 2005. In the five years from 2000 to 2005 – the boom before the bust these journalists are writing about – antidepressant prescribing also increased, by 36 per cent. That isn’t very different from 43 per cent, so it feels unlikely that the present increase in prescriptions is due to the recession.

  That’s not the only problem here. It turns out that the number of prescriptions for an SSRI drug is a pretty unhelpful way of measuring how many people are being treated for depression: not just because people get prescribed SSRIs for all kinds of other things, like anxiety, PTSD, hot flushes, and more; and not just because doctors have moved away from older types of antidepressants, so they would be prescribing more of the newer SSRI drugs even if the number of people with depression had stayed the same.

  Excitingly, it’s a bit more complicated than that. A 2006 paper from the British Journal of General Practice looked at prescribing and diagnosis rates in Scotland. Overall, again, the number of prescriptions for antidepressants increased, from 1.5 million in 1996 to 2.8 million in 2001 (that is, it almost doubled).

 
