The Intelligence Trap

by David Robson


  Sometimes, increasing a statement’s truthiness can be as simple as adding an irrelevant picture. In one rather macabre experiment from 2012, Newman showed her participants statements about a series of famous figures – such as a sentence claiming that the indie singer Nick Cave was dead.12 When the statement was accompanied by a stock photo of the singer, they were more likely to believe that the statement was true, compared to the participants who saw only the plain text.

  The photo of Nick Cave could, of course, have been taken at any point in his life. ‘It makes no sense that someone would use it as evidence – it just shows you that he’s a musician in a random band,’ Newman told me. ‘But from a psychological perspective it made sense. Anything that would make it easy to picture or easy to imagine something should sway someone’s judgement.’ Newman has also tested the principle on a range of general knowledge statements: participants were more likely to agree that ‘magnesium is the liquid metal inside a thermometer’ or ‘giraffes are the only mammal that cannot jump’ if the statement was accompanied by a picture of the thermometer or giraffe. Once again, the photos added no further evidence, but significantly increased the participants’ acceptance of the statement.

  Interestingly, detailed verbal descriptions (such as of the celebrities’ physical characteristics) provided similar benefits. If we are concerned about whether he is alive or dead, it shouldn’t matter if we hear that Nick Cave is a white, male singer – but those small, irrelevant details really do make a statement more persuasive.

  Perhaps the most powerful strategy to boost a statement’s truthiness is simple repetition. In one study, Schwarz’s colleagues handed out a list of statements that were said to come from members of the ‘National Alliance Party of Belgium’ (a fictitious group invented for the experiment). But in some of the documents, there appeared to be a glitch in the printing, meaning the same statement from the same person appeared three times. Despite the fact that it was clearly providing no new information, the participants reading the repeated statement were subsequently more likely to believe that it reflected the consensus of the whole group.

  Schwarz observed the same effect when his participants read notes about a focus group discussing steps to protect a local park. Some participants read quotes from the same particularly mouthy person who made the same point three times; others read a document in which three different people made the same point, or a document in which three people presented separate arguments. As you might expect, the participants were more likely to be swayed by an argument if they heard it from different people all converging on the same idea. But they were almost as convinced by the argument when it came from a single person, multiple times.13 ‘It made hardly any difference,’ Schwarz said. ‘You are not tracking who said what.’

  To make matters worse, the more we see someone, the more familiar they become, and this makes them appear to be more trustworthy.14 A liar can become an ‘expert’; a lone voice begins to sound like a chorus, just through repeated exposure.

  These strategies have long been known to professional purveyors of misinformation. ‘The most brilliant propagandist technique will yield no success unless one fundamental principle is borne in mind constantly and with unflagging attention,’ Adolf Hitler noted in Mein Kampf. ‘It must confine itself to a few points and repeat them over and over.’

  And they are no less prevalent today. The manufacturers of a quack medicine or a fad diet, for instance, will dress up their claims with reassuringly technical diagrams that add little to their argument – with powerful effect. Indeed, one study found that the mere presence of a brain scan can make pseudo-scientific claims seem more credible – even if the photo is meaningless to the average reader.15

  The power of repetition, meanwhile, allows a small but vocal minority to persuade the public that their opinion is more popular than it really is. This tactic was regularly employed by tobacco industry lobbyists in the 1960s and 70s. The vice president of the Tobacco Institute, Fred Panzer, admitted as much in an internal memo, describing the industry’s ‘brilliantly conceived strategy’ to create ‘doubt about the health charge without actually denying it’, by recruiting scientists to regularly question overwhelming medical opinion.16

  The same strategies will almost certainly have been at play for many other myths. It is extremely common for media outlets to feature prominent climate change deniers (such as Nigel Lawson in the UK) who have no background in the science but who regularly question the link between human activity and rising sea temperatures. With repetition, their message begins to sound more trustworthy – even though it is only the same small minority repeating the same message. Similarly, you may not remember when you first heard that mobile phones cause cancer and vaccines cause autism, and it’s quite possible that you may have even been highly doubtful when you did. But each time you read the headline, the claim gained truthiness, and you became a little less sceptical.

  To make matters worse, attempts to debunk these claims often backfire, accidentally spreading the myth. In one experiment, Schwarz showed some undergraduate students a leaflet from the US Centers for Disease Control, which aimed to debunk some of the myths around vaccinations – such as the commonly held idea that we may become ill after getting the flu shot. Within just thirty minutes, the participants had already started to remember 15 per cent of the false claims as facts, and when asked about their intentions to act on the information, they reported that they were less likely to be immunised as a result.17

  The problem is that the boring details of the correction were quickly forgotten, while the false claims lingered for longer, and became more familiar as a result. By repeating the claim – even to debunk it – you are inadvertently boosting its truthiness. ‘You’re literally turning warnings into recommendations,’ Schwarz told me.

  The CDC observed exactly this when they tried to put the banana hoax to rest. It’s little wonder: their headline, ‘False Internet report about necrotizing fasciitis associated with bananas’, was far less digestible – or ‘cognitively fluent’, in technical terms – than the vivid (and terrifying) idea of flesh-eating bacteria and a government cover-up.

  In line with the work on motivated reasoning, our broader worldviews will almost certainly determine how susceptible we are to misinformation – partly because a message that already fits with our existing opinions is processed more fluently and feels more familiar. This may help to explain why more educated people seem particularly susceptible to medical misinformation: it seems that fears about healthcare, in general, are more common among wealthier, more middle-class people, who may also be more likely to have degrees. Conspiracies about doctors – and beliefs in alternative medicine – may naturally fit into that belief system.

  The same processes may also explain why politicians’ lies continue to spread long after they have been corrected – including Donald Trump’s theory that Barack Obama was not born in the United States. As you might expect from the research on motivated reasoning, this was particularly believed by Republicans – but even 14 per cent of Democrats held the view as late as 2017.18

  We can also see this mental inertia in the lingering messages of certain advertising campaigns. Consider the marketing of the mouthwash Listerine. For decades, Listerine’s adverts falsely claimed that the mouthwash could soothe sore throats and protect consumers from the common cold. But after a long legal battle in the late 1970s, the Federal Trade Commission forced the company to run adverts correcting the myths. Despite a sixteen-month, $10 million campaign retracting the statements, the adverts were only marginally effective.19

  This new understanding of misinformation has been the cause for serious soul searching in organisations that are attempting to spread the truth.

  In an influential white paper, John Cook, then at the University of Queensland, and Stephan Lewandowsky, then at the University of Western Australia, pointed out that most organisations had operated on the ‘information deficit model’ – assuming that misperceptions come from a lack of knowledge.20 To counter misinformation on topics such as vaccination, you simply offer the facts and try to make sure that as many people see them as possible.

  Our understanding of the intelligence trap shows us that this isn’t enough: we simply can’t assume that smart, educated people will absorb the facts we are giving them. As Cook and Lewandowsky put it: ‘It’s not just what people think that matters, but how they think.’

  Their ‘debunking handbook’ offers some solutions. For one thing, organisations hoping to combat misinformation should ditch the ‘myth-busting’ approach, where they emphasise the misconception and then explain the facts. An NHS webpage on vaccines, for instance, lists ten myths, in bold, right at the top of the page.21 They are then repeated, as bold headlines, underneath. According to the latest cognitive science, this kind of approach places too much emphasis on the misinformation itself: the presentation means it is processed more fluently than the facts, and the multiple repetitions simply increase its familiarity. As we have seen, those two feelings – of cognitive fluency and familiarity – contribute to the sense of truthiness, meaning that an anti-vaccination campaigner could hardly have done a better job of reinforcing the view.

  Instead, Cook and Lewandowsky argue that any attempt to debunk a misconception should be careful to design the page so that the fact stands out. If possible, you should avoid repeating the myth entirely. When trying to combat fears about vaccines, for instance, you may just decide to focus on the scientifically proven, positive benefits. But if it is necessary to discuss the myths, you can at least make sure that the false statements are less salient than the truth you are trying to convey. It’s better to headline your article ‘Flu vaccines are safe and effective’ than ‘Myth: Vaccines can give you the flu’.

  Cook and Lewandowsky also point out that many organisations may be too earnest in their presentation of the facts – to the point that they over-complicate the argument, again reducing the fluency of the message. Instead, they argue that it is best to be selective in the evidence you present: sometimes two facts are more powerful than ten.

  For more controversial topics, it is also possible to reduce people’s motivated reasoning in the way you frame the issue. If you are trying to discuss the need for companies to pay for the fossil fuels they consume, for example, you are more likely to win over conservative voters by calling it a ‘carbon offset’ rather than a ‘tax’, which is a more loaded term and triggers their political identity.

  Although my own browse of various public health websites suggests that many institutions still have a long way to go, there are some signs of movement. In 2017, the World Health Organisation announced that they had now adopted these guidelines to deal with the misinformation spread by ‘anti-vaccination’ campaigners.22

  But how can we protect ourselves?

  To answer that question, we need to explore another form of metacognition called ‘cognitive reflection’, which, although related to the forms of reflection we examined in the previous chapter, more specifically concerns the ways we respond to factual information, rather than emotional self-awareness.

  Cognitive reflection can be measured with a simple test of just three questions, and you can get a flavour of what it involves by considering the following example:

  A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? _____ cents

  In a lake, there is a patch of lily pads. Every day, the patch doubles in size. If it takes 48 days for the patch to cover the entire lake, how long would it take for the patch to cover half of the lake? _____ days

  If it takes 5 machines 5 minutes to make 5 widgets, how long would it take 100 machines to make 100 widgets? _____ minutes

  The maths required is not beyond the most elementary education, but the majority of people – even students at Ivy League colleges – answer only one or two of the three questions correctly.23 That’s because they are designed with misleadingly obvious, but incorrect, answers (in this case, $0.10, 24 days, and 100 minutes). It is only once you challenge those assumptions that you can come to the correct answer ($0.05, 47 days, and 5 minutes).
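  A quick sketch of the arithmetic shows why the intuitive answers collapse under scrutiny. Writing $x$ for the price of the ball in dollars, the first puzzle becomes:

$$x + (x + 1.00) = 1.10 \quad\Rightarrow\quad 2x = 0.10 \quad\Rightarrow\quad x = 0.05$$

  The other two puzzles yield to the same kind of one-line check: a patch that doubles every day must cover half the lake exactly one day before it covers all of it, so on day 47; and if 5 machines make 5 widgets in 5 minutes, each machine makes one widget per 5 minutes, so 100 machines make 100 widgets in those same 5 minutes.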

  This makes it very different from the IQ questions we examined in Chapter 1, which may involve complex calculations, but which do not ask you to question an enticing but incorrect lure. In this way, the Cognitive Reflection Test offers a short and sweet way of measuring how we appraise information and our ability to override the misleading cues we may face in real life, where problems are ill-defined and messages deceptive.24

  As you might expect, people who score better on the test are less likely to suffer from various cognitive biases – and sure enough scores on the CRT predict how well people perform on Keith Stanovich’s rationality quotient.

  In the early 2010s, however, a PhD student called Gordon Pennycook (then at the University of Waterloo) began to explore whether cognitive reflection could also influence our broader beliefs. Someone who stops to challenge their intuitions, and think of alternative possibilities, should be less likely to take evidence at face value, he suspected – making them less vulnerable to misinformation. Sure enough, Pennycook found that people with this more analytical thinking style are less likely to endorse magical thinking and complementary medicine. Further studies have shown that they are also more likely to accept the theory of evolution and to reject 9/11 conspiracy theories.

  Crucially, this holds even when you control for other potential factors – like intelligence or education – underlining the fact that it’s not just your brainpower that really matters; it’s whether or not you use it.25 ‘We should distinguish between cognitive ability and cognitive style,’ Pennycook told me. Or, to put it more bluntly: ‘If you aren’t willing to think, you aren’t, practically speaking, intelligent.’ As we have seen with other measures of thinking and reasoning, we are often fairly bad at guessing where we lie on that spectrum. ‘People that are actually low in analytic [reflective] thinking believe they are fairly good at it.’

  Pennycook has since built on those findings, with one study receiving particularly widespread attention, including an Ig Nobel Prize for research ‘that first makes you laugh, then makes you think’. The study in question examined the faux-inspirational, ‘pseudo-profound bullshit’ that people often post on social media. To measure people’s credulity, Pennycook asked participants to rate the profundity of various nonsense statements. These included random, made-up combinations of words with vaguely spiritual connotations, such as ‘Hidden meaning transforms unparalleled abstract beauty’. The participants also saw real tweets by Deepak Chopra – a New Age guru and champion of so-called ‘quantum healing’ with more than twenty New York Times bestsellers to his name. Chopra’s thoughts include: ‘Attention and intention are the mechanics of manifestation’ and ‘Nature is a self-regulating ecosystem of awareness.’

  A little like the Moses question, those statements might sound as though they make sense; their buzzwords seem to suggest a kind of warm, inspirational message – until you actually think about their content. Sure enough, the participants with lower CRT scores reported seeing greater meaning in these pseudo-profound statements, compared to people with a more analytical mindset.26

  Pennycook has since explored whether this ‘bullshit receptivity’ also leaves us vulnerable to fake news – unfounded claims, often disguised as real news stories, that percolate through social media. Following the discussions of fake news during the 2016 presidential election, he exposed hundreds of participants to a range of headlines – some of which had been independently fact-checked and verified as true, others as false. The stories were balanced equally between those that were favourable to Democrats and those that were favourable to Republicans.

  For example, a headline from the New York Times proclaiming that ‘Donald Trump says he “absolutely” requires Muslims to register’ was supported by a real, substantiated news story. The headline ‘Mike Pence: Gay conversion therapy saved my marriage’ failed fact-checking, and came from the site NCSCOOPER.com.

  Crunching the data, Pennycook found that people with greater cognitive reflection were better able to distinguish between the two, regardless of whether they were told the name of the news source and whether it supported their own political convictions: they were actually engaging with the words themselves and testing whether they were credible, rather than simply using them to reinforce their previous prejudices.27

  Pennycook’s research would seem to imply that we could protect ourselves from misinformation by trying to think more reflectively – and a few recent studies demonstrate that even subtle suggestions can have an effect. In 2014, Viren Swami (then at the University of Westminster) asked participants to complete simple word games, some of which happened to revolve around words to do with cognition like ‘reason’, ‘ponder’ and ‘rational’, while others evoked physical concepts like ‘hammer’ or ‘jump’.

  After playing the games with the ‘thinking’ words, participants were better at detecting the error in the Moses question, suggesting that they were processing the information more carefully. Intriguingly, they also scored lower on measures of conspiracy theories, suggesting that they were now reflecting more carefully on their existing beliefs, too.28

  The problems come when we consider how to apply these results to our daily lives. Some of the mindfulness techniques should train you to take a more analytic point of view, and to avoid jumping to quick conclusions about the information you receive.29 One tantalising experiment has even revealed that a single session of meditation can improve scores on the Cognitive Reflection Test, which would seem promising if it is borne out by future research that specifically examines the effect on the way we process misinformation.30

 
