The Tiger That Isn't


by Andrew Dilnot


  In one famous case it was found that a sample of Democrat voters in the United States were less satisfied with their sex lives than Republicans, until someone remembered that women generally reported less satisfaction with their sex lives than men, and that more women tended to vote Democrat than Republican.

  Bias can seep into a population sample in as many ways as people are different, in their beliefs, habits, lifestyles, history, biology. A magazine surveys its readers and claims 70 per cent of Britons believe in fairies, but the magazine is called Paranormal and Star-Gazers Monthly. Simply by buying it, the readership proves itself already more predisposed to believe than the general population of 'Britons' to whom its conclusions are too readily applied.

  Bias of some kind, intentional, careless, or accidental, is par for the course in many samples that go into magazine surveys, or surveys which serve as a marketing gimmick, and generally get the answer they want. For a quick sample of this practice – representative or not, we couldn't possibly say – try the following, and put your imagination to the test thinking of possible bias. All of these arrived on the desk of Rory Cellan-Jones, the BBC's business correspondent, during a week or two in the summer of 2006.

  New mothers spend £400 on average on toddlers' wardrobes, a survey says.

  Tea is the number one night-time drink for men and women.

  Research shows that 52 per cent of men in the city admit to wearing odd socks at least once a week (survey by an online sock retailer).

  60 per cent of women would prefer to see celebrities looking slightly flawed, while 76 per cent of men in the UK prefer to see images of celebrities looking perfect (a survey courtesy of a make-up company and a high-definition TV channel).

  More than 20 million British homeowners have spent more than £150bn on tasteless home improvements that have reduced the value of their homes (a home insurance firm courteously tells us).

  Bias is less likely in carefully designed surveys which aim for a random sample of the population and have asked more than the first half-dozen people that come along. But surveys asking daft questions and getting daft answers are not alone in finding that potential bias is lurking everywhere, with menace, trying to get a foot in the door, threatening to wreck the integrity of our conclusions.

  Once more, it is tempting to give up on all things sampled as fatally biased and terminally useless. But bias is a risk, not a certainty. Being wise to that risk is not cynicism, it is what any statistician worth their salt strives for.

  And sampling is, as we have said, inevitable: there is simply too much to count to count it properly. So let us take the extreme example of an uncountable number which we nevertheless want to know: how many fish are there in the sea?

  If you believe the scientists, not enough; the seas are empty almost to the point where some fish stocks cannot replace themselves. If you believe the fishermen, there is still a viable industry. Whether that industry is allowed to continue depends on counting those fish. Since that is impossible, the only alternative is sampling.

  When More or Less visited the daily fish market in Newlyn, Cornwall, the consensus there was that the catch was bigger than twelve to fifteen years ago. 'We don't believe the scientific evidence,' they said.

  The scientists had sampled cod fish stocks at random and weren't finding many. The fishing industry thought them stupid. One industry spokesperson told us, 'If you want to count sheep, you don't look at the land at random, you go to the field where the sheep are.' In other words, the samplers were looking in the wrong places. If they wanted to know how many fish there were, they should go where there were fish to count. When the fishermen went to those places, they found fish in abundance.

  The International Council for the Exploration of the Sea recommends a quota for cod fishing in the North Sea of zero. EU fisheries ministers regularly ignore this advice during what has become an annual feud about the right levels of TACs (total allowable catches), and in 2006 the EU allowed a catch of about 26,500 tonnes.

  Is the scientists' sampling right? In all probability, it is – by which we mean, of course, that it is not precisely accurate, but nor, more importantly, is it misleading. Whereas fishing boats might be sailing for longer or further to achieve the same level of catch, research surveys will not. They trawl for a standard period, usually half an hour. Nor will they upgrade their equipment, as commercial fishers do, to maximise their catch. And on the question of whether they count in the wrong places, and the superficial logic of going where the fish are, this ignores the possibility that there are more and more places where the fish are not, and is itself a bias. Going to where the fish are in order to count them is a little like drawing the picture of the donkey after you have pinned up the tail.

  The fishermen, like all hunters, are searching for their prey. The fact that they find it and catch it tells us much more about how good they are at hunting than about how many fish there are. The scientists' samples are likely to be a far better guide to how many fish there are in the sea than the catch landed by determined and hard-working fishermen.
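The scientists' argument, that sampling only where the fish are is itself a bias, can be illustrated with a small simulation. Every number below is invented for the purpose: a sea of 1,000 trawlable sites, with fish surviving only in a shrinking minority of them.

```python
import random

random.seed(0)  # reproducible sketch; all figures here are invented

# A sea of 1,000 trawlable sites. Fish survive in only 5% of them,
# but those refuges are still teeming.
sea = [0] * 1000
for site in random.sample(range(1000), 50):
    sea[site] = random.randint(200, 400)

true_total = sum(sea)

# Scientists: trawl 100 sites chosen at random, then scale up.
survey = random.sample(sea, 100)
scientific_estimate = sum(survey) * len(sea) // len(survey)

# Fishermen: judge abundance from the ten best spots they know.
best_spots = sorted(sea, reverse=True)[:10]
fishermens_estimate = sum(best_spots) * len(sea) // len(best_spots)

# The fishermen's figure vastly overstates the total, because the
# growing number of empty sites never enters their sample.
```

However the random draw falls, the "go where the fish are" estimate must overstate the stock, since the empty sites are excluded by construction.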

  The kind of sampling that spots all these red herrings has to be rigorous and imaginative for every kind of bias. After all that, the sampling might still not be perfect, but it has told us something amazing.

  9

  Data: Know the Unknowns

  In Britain, the most senior civil servants, who implement and advise on policy, are often clueless about basic numbers on the economy or society.

  Asked about the facts and figures of American life, American college students are often so wrong – and find the correct answers so surprising – that they change their minds about policy on the spot.

  Strong views and serious responsibilities are no guarantee of even passing acquaintance with the data. But numbers do not fall ripe into our laps, someone has to find and fetch them; far easier, some feel, not to bother.

  Much of what is known through numbers is foggy, but a fair portion of what people do not know is due to neglect, sloppiness or fear. Policies have been badly conceived, even harmful, for want of looking up the obvious numbers in the ministry next door. People have died needlessly in hospital for want of a tally to tell us too many had died already.

  The deepest pitfall with numbers owes nothing to numbers themselves and much to the slack way they are treated, with carelessness all the way to contempt. But numbers, with all the caveats, are potent and persuasive, a versatile tool of understanding and argument.

  They are also often all we have got, and ignoring them is a dire alternative. For those at sea with the numbers, all this should be strangely reassuring. First, it means you have illustrious company. Second, it creates the opportunity to get ahead. Simply show enough care for numbers' integrity to think them worth treating seriously, and you are well on the way to empowerment.

  At frequent talks and seminars over the past ten years or more, Britain's most senior civil servants, journalists, numerous business people and academics have been set multiple-choice questions on very basic facts about the economy and society. Some, given their status or political importance, asked to remain anonymous. It is just as well that they did.

  Here is a sample of the questions, along with the answers given by a particular group of between seventy-five and a hundred senior civil servants in September 2005. It would be unfair to identify them, but reasonable to say you would certainly hope that they understood the economy. (There is some rounding so the totals do not all sum to 100 per cent, and not everyone answered all the questions.)

  What share of the income tax paid in the UK is paid by the top 1 per cent of earners?

  They were all wrong. The correct answer is that the top earners pay 21 per cent of all the income tax collected. It might seem unfair not to have given them the chance of getting it right, so it is reasonable to give credit to those who picked the biggest number available, 17 per cent. All the others did poorly and almost two thirds thought the answer was 11 per cent or less, roughly half the true figure and a woeful degree of ignorance from people in their position. Analysing the effect of the tax system, and of changes to it, should be a core function of this group, but they simply did not know who paid what. Almost as surprising as the fact that so few knew the right answer is that their answers could almost have been drawn randomly. There is no sign of a shared view among those questioned. If you were playing Who Wants to be a Millionaire? when this came up, you might have thought this would be a good audience to ask. You would be disappointed.

  What joint income (after tax) would a childless couple need to earn in order to be in the top 10 per cent of earners?

  The answer is £35,000. Some will resist believing that £35,000 after tax is enough to put a couple (both incomes combined – if they have two incomes) in the top 10 per cent. It is a powerful, instructive number and it is worth knowing. But the proportion of the group that did was just 10 per cent. The single most common answer (£50,000) was nearly half as high again as it should have been. And 90 per cent of a group of around seventy-five people, whose job it is to analyse our economy and help to set policy, think people earn far more than they really do, with more than 40 per cent of them ludicrously wrong.

  Figure 9 Poorer than they thought

  How much bigger is the UK economy now (UK national income after adjusting for inflation) than it was in 1948?

  The right answer is that the economy now is around 300 per cent bigger than it was in 1948. This is another question where the right answer was not an option. But since only 5 per cent of the group chose the highest possibility, it seems that most had no sense whatsoever of where the right answer lay. The economy grew in this period, on average, by about 2.5 per cent a year. More than three quarters of the group gave answers that, if correct, would have implied half as much or less, and that we were less than half as well off as we actually are. That is quite an error – economics does not get much more fundamental than how fast the economy grows – and something of a shock.
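That answer can be checked with nothing more than compound interest arithmetic: 2.5 per cent a year, compounded over the 58 years from 1948 to 2006, multiplies the economy roughly fourfold. A minimal sketch, using only the rate and dates given above:

```python
# Compound growth: about 2.5% a year from 1948 to 2006 (figures from the text)
rate = 0.025
years = 2006 - 1948              # 58 years
multiple = (1 + rate) ** years   # roughly 4.19x the 1948 economy
percent_bigger = (multiple - 1) * 100  # roughly 300% bigger, as stated
```

Three quarters of the group implied a multiple of about two or less, half the true figure.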

  There are 780,000 single parents on means-tested benefits. How many are under age 18?

  The correct figure (for 2005) was 6,000. Those who chose the lowest option once again deserve credit. But there seems to be a common belief in this group and elsewhere that we have an epidemic of single gymslip mums – a familiar political target – with half our group believing the problem at least ten times worse than it is, and some who no doubt would have gone higher had the options allowed.

  The performance on these and other multiple-choice questions over the years from all sorts of groups has been unremittingly awful. That matters: if you want even the most rudimentary economic sense of what kind of a country this is, it is hard to imagine it can be achieved without knowing, at least in vague terms, what typical incomes are. If you expect to comment on the burden of taxation, what chance a comment worth listening to if you are wholly in the dark about where the tax burden falls?

  Professor Michael Ranney is another who likes asking questions. As an academic at the University of California, Berkeley, he has plentiful informed and opinionated young people as guinea pigs, but keeps the questions straightforward. Even so, he does not expect the students to know the answers accurately; his initial interest is merely to see if they have a rough idea. For example: for every 1,000 US residents, how many legal immigrants are there each year? How many people, per thousand residents, are incarcerated? For every thousand drivers, how many cars are there? For every thousand people, how many computers? How many abortions? How many murders per million inhabitants? And so on.

  Few of us spend our leisure hours looking up and memorising data. But many of us flatter ourselves that we know about these issues. And yet … says Ranney:

  On abortion and immigration, about 80 per cent of those questioned base their opinions on significantly inaccurate base-rate information. For example, students at an elite college typically estimated annual legal immigration at about 10 per cent of the existing population of the United States (implying that for a population of 300 million, there were 30 million legal immigrants every year). Others – non-students – guessed higher.

  The actual rate was about 0.3 per cent. That is, even the lower estimates were more than thirty times too high. If your authors made the numerically equivalent mistake of telling you they were at least fifty metres tall, it's unlikely you'd take seriously their views on human height, or anything else. You would probably advise them, for their own good, to shut up.

  The students' estimates for the number of abortions varied widely, but the middle of the range was about 5,000 for every million live births. The actual figure in the US at that time (2006) was 335,000 per million live births – that is sixty-seven times higher than the typical estimate. These answers, in the famous words of the physicist Wolfgang Pauli, are not only not right, they are so far off the mark that they are not even wrong.
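Both ratios quoted above are simple division, and worth verifying for yourself. A minimal sketch using only the figures given in the text:

```python
# Figures from the text; everything else is arithmetic.
estimated_immigration = 0.10    # students' typical guess: 10% of population per year
actual_immigration = 0.003      # actual annual rate: about 0.3%
immigration_error = estimated_immigration / actual_immigration  # ~33x too high

estimated_abortions = 5_000     # typical estimate, per million live births
actual_abortions = 335_000      # actual US figure (2006), per million live births
abortion_error = actual_abortions / estimated_abortions         # 67x higher
```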

  The next step is equally revealing. When the students found out the true figures, it made a difference. More thought there should be a big decrease in abortions. Among those who initially thought that abortions should always be allowed, there was movement towards favouring some restriction. We make no comment about the rightness or otherwise of these views. We simply observe the marked effect on them of data.

  Professor Ranney says that if people are first invited to make their own guesses and are then corrected, their sense of surprise is greater than if they are simply given the correct figure at the outset. Surprise, it turns out, is useful. It helps to make the correct figure more memorable, and also makes a change in belief about policy more likely. For our purposes, the lesson is simpler. Many educated people voicing opinions on the numbers at the heart of social and economic issues in truth haven't the faintest idea what the numbers are. But – and it is a critical but – they do change their arguments when they find out.

  One more example from Michael Ranney's locker: unexpected numerical feedback on death rates for various diseases led undergraduates to provide funding allocations that more closely tracked those rates. Initially, they tended to overestimate the incidence, for example, of breast cancer compared with heart disease – and allocated a notional $100 accordingly. Once they knew the figures, they moved more money to heart disease. This fascinating work seems to make a strong case, contrary to the sceptical view, that opinions are not immune to data, but rather that accurate data does matter to people. The alternative to using data – to rely instead on hunch or prejudice – seems to us indefensible, if commonplace.

  The list of excuses for ignorance is long. It is far easier to mock than search for understanding, to say numbers do not matter or they are all wrong anyway so who cares, or to say that we already know all that is important. Wherever such prejudices take root, the consequences are disastrous.

  The death of Joshua Loveday, aged eighteen months, on the operating table at the Bristol Royal Infirmary began a series of investigations into what became a scandal. It was later established by an inquiry under Professor Sir Ian Kennedy that, of children operated on for certain heart conditions at Bristol, fully twice as many died as the national norm. It was described as one of the worst medical crises ever in Britain. The facts began to come to light when an anaesthetist, Dr Steve Bolsin, arrived at Bristol from a hospital in London. Paediatric heart surgery took longer than he'd been used to; patients were a long time on heart bypass machines – with what effect he decided to find out, though he already suspected that death rates were unusually high. So he and a colleague rooted through the data where they discovered, they thought, persuasive evidence of what the medical profession calls excess mortality.

  At first slow to respond, the hospital found itself – with Joshua's death the catalyst – eventually overwhelmed by suspicion and pressure for inquiries from press and public. The first of these was by an outside surgeon and cardiologist, another by the General Medical Council (the longest investigation in its history) and finally a third by the independent team led by Sir Ian, which concluded that between thirty and thirty-five children had probably died unnecessarily.

  Most of those involved believed they knew the effectiveness of their procedures and believed them to be as good as anyone's. None, however, knew the numbers; none knew how their mortality rates compared.

  One crucial point attracted little attention at the time – grief and anger at the raw facts understandably swept all before them – but members of the inquiry team judged that had the excess mortality been not 100 per cent (twice as bad as others), but 50 per cent, it would have been hard to say for sure that Bristol was genuinely out of line. That is, if about 15–17 babies had died unnecessarily rather than the estimated 30–35, it might have been impossible to conclude that anything was wrong. Fifty per cent worse than the norm strikes most as a shocking degree of failure, especially where failure means death. Why did mortality have to be 100 per cent worse before the inquiry team was confident of its conclusions?
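The inquiry team's caution comes down to small-number statistics: with few cases, a 50 per cent excess is hard to distinguish from bad luck, while a 100 per cent excess is not. A minimal sketch with purely illustrative numbers (not the inquiry's own), treating the death count as a Poisson variable:

```python
from math import exp, factorial

def poisson_tail(k, lam):
    """P(X >= k) when X ~ Poisson(lam): chance of seeing k or more
    deaths if the hospital really matched the national norm."""
    return 1 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

# Illustrative only: suppose the national norm predicts 10 deaths.
expected = 10
p_50_percent_worse = poisson_tail(15, expected)   # observed 15: ~0.08, could be chance
p_100_percent_worse = poisson_tail(20, expected)  # observed 20: ~0.003, hard to dismiss
```

A 50 per cent excess on these numbers happens by chance about one time in twelve; a 100 per cent excess, about one time in three hundred. That gap is why only the larger figure supported a confident conclusion.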

  The two surgeons who took the brunt of public blame for what happened argued that, even on the figures available, it was not possible to show they had performed badly (and the inquiry itself was reluctant to blame individuals rather than the system at Bristol as a whole, saying that 'The story of [the] paediatric cardiac surgical service [in] Bristol is not an account of bad people. Nor is it an account of people who did not care, nor of people who wilfully harmed patients'). Mortality 100 per cent worse than the norm was a big difference given the numbers of children involved, large enough to constitute 'one of the worst medical crises' in Britain's history, but even then the conclusion was disputed.
