
I Think You'll Find It's a Bit More Complicated Than That


by Ben Goldacre


  Meanwhile, back in the real world, what do local governments actually procure? Well, the biggest thing, about a quarter of that £50 billion budget, more than £10 billion a year of local government procurement, is social care: mostly residential care, mostly for the elderly, and mostly through the independent sector.

  If you’re going to save 20 per cent off that, then I suggest you tell us how, in full and educative detail. In the meantime, saying you can get us a better deal on our mobile-phone tariff, and then pretending that means you’ve taken 20 per cent off the entire £50 billion local government procurement spend, isn’t just misleading: it’s the reasoning of a ten-year-old.

  Anarchy for the UK. Ish.

  Guardian, 2 April 2011

  Here are two fun ways that numbers can be distorted for political purposes. Each of them feels oddly poetic in its ability to smear or stifle.

  The first is simple: you can conflate two different things into one omnibus figure, either to inflate a problem, or to confuse it. Last weekend a few hundred thousand people marched in London against government cuts. On the same day there was some violent disturbance, windows smashed, policemen injured, and drunkenness.

  The Sun said: ‘Police have charged nearly 150 people after violent anarchists hijacked the anti-cuts demo and brought terror to London’s streets.’ The Guardian republished a Press Association report headlined ‘Cuts protest violence: 149 people charged’. And from the locals, for example, the Manchester Evening News carried ‘Boy, 17, from Manchester Among 149 Charged Over Violence After Anti-Cuts March’.

  In reality, a dozen of these charges related to violence, while 138 were people who were involved in an apparently peaceful occupation of Fortnum & Mason organised by UKUncut, who campaign against tax avoidance.

  You will have your own view on whether people should be arrested and charged for standing in a shop as an act of protest. But describing these 150 people as ‘violent anarchists … who brought terror to London’s streets’ is not just misleading; it also makes the police look over twelve times more effective than they really were at charging people who perpetrated acts of violence.
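
  Just to make that arithmetic explicit, here is a minimal sketch in Python. Every number in it is a figure quoted above; nothing else is assumed:

    # Charge counts as reported above
    total_charged = 149        # the headline figure in the papers
    violence_related = 12      # 'a dozen of these charges related to violence'
    occupation_related = 138   # charged over the Fortnum & Mason occupation

    # How much the conflated headline inflates the apparent rate of charging
    # people for acts of violence
    inflation = total_charged / violence_related
    print(f'Headline is {inflation:.1f} times the violence-related charges')
    # prints roughly 12.4, i.e. 'over twelve times'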

  The second method of obfuscation is even simpler. After London was chosen to host the 2012 Olympics, Labour made a series of pledges, including two around health: to use the power of the Games to inspire a million more people to play sport three or more times a week; and to get a million more people doing more general physical activity.

  Politicians seem keen on the idea that large multi-sports events can have a positive impact like this, so the area has been studied fairly frequently. Last year the BMJ published a systematic review of the literature. It set out to find any study that had ever been conducted into the real-world health and socio-economic impacts of major multi-sport events on the host population.

  This research found fifty-four studies. Overall, the quality was poor (it’s a fairly difficult thing to measure, and most studies used cross-sectional surveys, repeated over time). The bottom line was this: there is no evidence that events like the Olympics have a positive impact on either health or socio-economic outcomes.

  Here are some examples from the review. One study looked at Manchester before and after the 2002 Commonwealth Games: overall sports participation (four times or more in the past month) fell after the Games, and the gap in participation rates between rich and poor areas widened significantly.

  Another study in Manchester suggested there were particular problems around voluntary groups being excluded from using Commonwealth branding, and that new facilities tended to benefit elite athletes rather than the general population.

  There was a vague upward trend in sports participation in Barcelona between the early 1980s and 1994; the city hosted the Olympics in 1992. Volunteers working at the Commonwealth Games showed no increase in sports participation.

  You will have your own views on whether the cost of hosting the Olympics is proportionate to the benefits, and where those benefits lie. From this systematic review, however, there’s no evidence for large multi-sports events having a positive health or socio-economic impact overall, so only an optimist would make promises to the contrary.

  This week, it emerged that both of the government’s targets for improving healthy activity after the 2012 Olympics are now being quietly dropped. By walking away from outcome indicators that will not be met, a government can create a false impression of success: if prespecified outcome indicators are ever to mean anything, after all, it’s because you report on all of them clearly, whether success is achieved or not.

  But more than that, governments around the world spend billions of pounds on these events. By quietly dropping these outcome indicators, rather than carefully documenting our success or failure at meeting them, our current politicians pave the way for ever more false and over-optimistic claims by their colleagues, all around the world, for many years to come.

  More Than Sixty Children Saved from Abuse

  Guardian, 7 August 2010

  According to the Home Office this week, Sarah’s Law – by which any parent can find out if any adult in contact with their child has a record of violent or sexual crimes – has ‘already protected more than 60 children from abuse during its pilot’. This fact was widely reported, and was the headline finding. As the Sun said: ‘More than sixty sickening offences were halted by Sarah’s Law during its trial.’

  It seems to me that the number of sickening offences prevented by an intervention is a difficult thing to calculate: nobody explained where the number came from, so for my own interest I called the Home Office.

  ‘It’s not that difficult to work out, is it?’ This is the Home Office telling me I’m stupid. ‘It’s the number of disclosures issued, how many were of sex offenders, and how many children would those offenders have had contact with.’ So telling a parent that someone in contact with their child had a history of abuse is equated with preventing an act of abuse? Yes, they said: ‘Protecting that child means ensuring that offender did not have a way of having contact with that child. Therefore that child is being protected.’ This assumes that any such contact is itself abusive, or would definitely result in abuse. That might be correct: I slightly doubt it, but I don’t know for sure.

  Then I asked where the number sixty came from. I was sent to an excellent report assessing the programme, written by a team of academics. Neither the number 60 nor the word sixty appears in that document.

  So I contacted the lead author, Prof Hazel Kemshall, who said: ‘You are correct that reference to sixty children is not made in the report. As I understand it the Home Office have drawn on police data sources to quote this figure, and therefore I cannot assist you further. As you will see from the report, we were careful to state the limits of the methodology.’

  I contacted the Home Office again. ‘The figure is over sixty and it comes from the number of disclosures made where there was a conviction of a sexual offence with a minor or violence against a minor. In total twenty-one disclosures were made specifically about registered sex offenders (RSO), a further eleven disclosures were made, for example relating to convictions for violent offending. These people had access to over sixty children.’
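
  For what it’s worth, here is the arithmetic that statement implies, sketched in Python. The disclosure counts are from the Home Office’s quote above; the number of children per disclosure is not published anywhere, so the sketch simply works out what the headline figure would require on average:

    # Disclosure counts as quoted by the Home Office
    rso_disclosures = 21     # disclosures about registered sex offenders
    other_disclosures = 11   # e.g. relating to convictions for violent offending
    disclosures = rso_disclosures + other_disclosures   # 32 in total

    claimed_children = 60    # 'these people had access to over sixty children'

    # What the claim implies, on average, per disclosure; this figure appears
    # nowhere in the commissioned academic report
    print(claimed_children / disclosures)   # roughly 1.9 children per disclosure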

  I’m not sure this is self-evident. The academics who wrote the report couldn’t work out where the number sixty came from, and at least two pieces have appeared trying to unpick it, each coming up with different answers from me and the Home Office. The excellently named Conrad Quilty-Harper in the Telegraph and a promising new website called FullFact, both – very reasonably – tried adding various categories of numbers from the academic report, including a figure on social-worker activity which seemed to make up the numbers.

  I’m not saying the figure sixty is wrong. While what it represents was probably overstated – by the Home Office and the press reports – the number itself isn’t absurd. But it does seem odd that just finding out where it came from involved so much mucking about, and it seems even odder to ignore the robust figures in a long academic report that you’ve commissioned (the scheme wasn’t cheap compared to, say, social-worker salaries), and instead build your press activity around one opaque figure constructed, ever so slightly, somewhere, it seems, on the back of an envelope.

  Home Taping Didn’t Kill Music

  Guardian, 6 June 2009

  You are killing our creative industries. ‘Downloading Costs Billions’, said the Sun. ‘MORE than seven million Brits use illegal downloading sites that cost the economy billions of pounds, Government advisors said today. Researchers found more than a million people using a download site in ONE day and estimated that in a year they would use £120 billion worth of material.’

  That’s about a tenth of our GDP. No wonder the Daily Mail was worried too: ‘The network had 1.3 million users sharing files online at midday on a weekday. If each of those downloaded just one file per day, this would amount to 4.73 billion items being consumed for free every year.’

  Now, I am always suspicious of anything on piracy from the music industry, because it has produced a lot of dodgy figures over the years. I also doubt that every download is lost revenue, since, for example, people who download more music also buy more music. I’d like more details.

  So where do these notions of so many billions in lost revenue come from? I found the original report. It was written by some academics you can hire in a unit at UCL called CIBER, the Centre for Information Behaviour and the Evaluation of Research (which ‘seeks to inform by countering idle speculation and uninformed opinion with the facts’). The report was commissioned by a government body called SABIP, the Strategic Advisory Board for Intellectual Property Policy.

  On the billions lost it says: ‘Estimates as to the overall lost revenues if we include all creative industries whose products can be copied digitally, or counterfeited, reach £10 billion (IP Rights, 2004), conservatively, as our figure is from 2004, and a loss of 4,000 jobs.’

  What is the origin of this conservative figure? I hunted down the full CIBER documents, found the references section, and followed the web link, which led to a 2004 press release from a private legal firm called Rouse which specialises in intellectual property law. This press release was not about the £10 billion figure. It was, in fact, a one-page document which simply welcomed the government setting up an intellectual property theft strategy. In a short section headed ‘Background’, among five other points, it says: ‘Rights owners have estimated that last year alone counterfeiting and piracy cost the UK economy £10 billion and 4,000 jobs.’ So this authoritative government figure, from an academic study, in fact comes from an industry estimate, made as an aside, five years earlier, in a short press release from one law firm.

  But what about all those other figures in the media coverage? Lots of it revolved around the figure of 4.73 billion items downloaded each year, worth £120 billion. This means each downloaded item – software, movie, mp3, ebook – is worth £25 on average. Now, before we go anywhere, this seems very high. I am not an economist, and I don’t know about their methods, but to me, an appropriate comparator for someone who downloads a film to watch it once might be the rental value, for example, not the sale value. I’d also like to suggest that sometimes, perhaps quite often, someone who downloads a £1,000 professional 3D animation software package to fiddle about with at home may not use it more than three times.

  In any case, that’s £175 a week, or £9,100 a year, potentially not being spent by millions of people. Is this really lost revenue for the economy, as reported in the press? Plenty of those downloading will have been schoolkids, or students, who may not have £9,100 a year to spend. Even if they weren’t, that figure is still about a third of the average UK wage. Before tax. Oh, and the government’s figures were wrong: it was actually 473 million items, and £12 billion, so the value of each downloaded item was still £25, but it exaggerated the amount of money ‘lost’ by a factor of ten, in the original executive summary, and in the press release. These were changed quietly after the errors were pointed out by a BBC journalist, but I can find no public correction for the many people who were misled.
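
  Here, as a minimal sketch in Python, is the arithmetic described above; every input is a figure quoted in this piece, and the point is simply to show how the corrected and uncorrected numbers relate:

    # The Daily Mail's starting point
    users_per_day = 1_300_000
    items_per_year = users_per_day * 365
    print(f'{items_per_year:,} items a year')   # about 474,500,000, i.e. ~473 million

    # The corrected report: 473 million items 'worth' £12 billion
    value_per_item = 12e9 / 473e6
    print(f'£{value_per_item:.0f} per downloaded item')   # about £25

    # The per-person figure a £25 average implies, at one item a day
    print(f'£{7 * 25} a week, £{365 * 25:,} a year')   # £175 a week, £9,125 a year

    # The original press release used 4.73 billion items and £120 billion:
    # the same £25 per item, but the 'lost' total inflated tenfold
    print(4.73e9 / 473e6)   # 10.0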

  So I asked SABIP what steps they took to notify journalists of their error, which resulted in their absurdly huge claims being widely reported in news outlets around the world. They refused to answer my questions in emails; insisted on a phone call (always a warning sign); told me that they had taken steps, but wouldn’t say what; explained something about how they couldn’t be held responsible for lazy journalism; then, bizarrely, after ten minutes, tried to tell me retrospectively that the whole call was actually off the record, that I wasn’t allowed to use the information in my piece, but that they had answered my questions, and so they didn’t need to answer on the record, but I wasn’t allowed to use the answers, and I couldn’t say they hadn’t answered, I just couldn’t say what the answers were. Then the PR man from SABIP demanded that I acknowledge, in our phone call, formally, for reasons I still don’t fully understand, that he had been helpful.

  I think it’s OK to be confused and disappointed by this. Like I said: as far as I’m concerned, everything from this industry is false, until proven otherwise.

  Is This a Joke?

  Guardian, 18 July 2009

  We’d all like to help the police do their job well. They, in turn, would like to have a massive database with DNA profiles of everyone who has been arrested, but not convicted of a crime. We worry that this is intrusive, but some of us are willing to make concessions – on our principles and the invasion of our privacy – in the name of preventing crimes. To do this, we’d like to know the evidence on whether this database is helpful, to help us make an informed decision.

  Luckily, the Home Office has now published a consultation paper on the subject. It defends the database by arguing that innocent people who have been arrested and released are basically criminals anyway, and go on to commit crimes in the future as much as guilty people do. ‘This,’ it says, ‘is obviously a controversial assertion.’ There is no reason for this assertion to be controversial: it’s a simple factual matter, and if it’s true, then you could easily assemble some good-quality evidence to prove it.

  The Home Office has assembled some evidence. This study, from the Jill Dando Institute, attached to the consultation paper as an appendix, is possibly the most unclear and badly presented piece of research I have ever seen in a professional environment.

  They want to show that the level of criminal activity in a group of people who have been arrested, but against whom no further action has been taken, is the same as the level of criminal activity in people who have been arrested and convicted of a crime, or who have accepted a caution.

  On page 30 they explain their methods, haphazardly, scattered about in the text. They describe some people ‘sampled on 1st June 2004, 1st June 2005 and 1st June 2006’. These dates are never mentioned again. I have no idea what their plan was there. They then leap to talking about Table 2. This contains data on people, each from a ‘sample’ in 1996, 1995 and 1994, followed up for thirty months, forty-two months and fifty-four months respectively. Are these anything to do with the people from 2004, 2005 and 2006? I have no idea, and it is impossible to tell.

  In fact, I have no idea what ‘sample’ means – perhaps that was the date on which they were first arrested. I don’t know why they were only followed up for thirty, forty-two and fifty-four months, instead of all the way from the 1990s to 2009. Crucially, I also don’t know what the numbers in the table mean, because that isn’t properly explained. I think it’s the number of people from the original group who have subsequently been arrested again, but there’s no way to tell.

  Then they start to discuss the results from this table. They say that these figures show that arrested non-convicted people are the same as convicted people. There are no statistics conducted on these figures, so there is absolutely no indication of how wide the error margins are, and whether these are chance findings. To give you a hint about the impact of noise on their data, more people are described as having been subsequently re-arrested over the forty-two-month follow-up period than over the fifty-four-month follow-up period, which seems surprising, given that the people in the fifty-four-month group had a much longer period of time in which to get arrested.

  This is before we even get on to the other problems. At a few hundred people, this study seems pretty small for one that is supposed to give compelling evidence that there is no difference between two groups, because to prove a negative like this, you generally need a very large sample, to minimise the chance of missing a true difference in the noise.

  There is no evidence that they have done a ‘power calculation’ to determine the sample size they’d need, and in any case, their comparison group feels a bit rigged to me. In their ‘convicted’ sample they only count people who had a non-custodial sentence, and exclude people who got a custodial sentence, on the grounds that those people would be incapable of committing a crime during their incarceration. This also has the effect, however, of making the ‘criminal’ group really not very criminal, and so consequently a bit more likely to be similar to innocent people.
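
  To give a feel for the numbers involved, here is a minimal sketch of a standard two-proportion sample-size calculation in Python, using only the standard library. The re-arrest rates in it are purely hypothetical, chosen for illustration, since the report gives no usable figures:

    from statistics import NormalDist

    def n_per_group(p1, p2, alpha=0.05, power=0.8):
        # Textbook approximation for the sample size needed in each group to
        # detect a difference between two proportions with a two-sided z-test
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
        z_beta = NormalDist().inv_cdf(power)
        variance = p1 * (1 - p1) + p2 * (1 - p2)
        return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

    # Hypothetical re-arrest rates, for illustration only
    print(round(n_per_group(0.30, 0.40)))   # about 350 in each group
    print(round(n_per_group(0.30, 0.35)))   # about 1,370 for a smaller difference

  Even on these generous assumptions, a study of a few hundred people in total is nowhere near large enough to support the claim that the two groups behave the same.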

  I could go on. Table 1 is so thoroughly ‘not as described’ as to be uninterpretable. In the text they talk about different cells on the table which are ‘solid red’, ‘stippled yellow’, and ‘blank’, when in fact the whole thing is just blue.

  This research is incomprehensible and unreadable. Anybody who claims to have been persuaded by the data quoted here is telling you, loudly and clearly in the subtitles, that they don’t need to understand – or possibly even read – a piece of research in order to find it compelling. People like this are not to be trusted, and if research of this calibre is what guides our policy on huge intrusions into the personal privacy of millions of innocent people, then they might as well be channelling spirits.

 
