trust, and to make promises credible. This doesn’t quite mean
that people trust blindly, as such cheap promises lose some of
their power when the stakes increase.29
Social scientist Toshio Yamagishi highlighted another advan-
tage from trusting even when short-term rationality dictates we
shouldn’t, pointing out a fundamental asymmetry between trust-
ing and not trusting in terms of information gains.30 If you
choose to trust someone, more often than not you’ll be able to
tell whether your trust was warranted. If a new classmate asks to
borrow your notes and promises to give them back to you the
next day, you’ll only know if they’ll keep their word if you
lend them the notes. By contrast, if you don’t trust someone,
you might never know whether they would in fact have been
trustworthy. If a friend tries to set you up with someone you
don’t know, it’s only if you follow the dating advice that you’ll
figure out whether or not it was solid.
Admittedly, there are situations in which we can gauge the
value of someone’s word without having to trust them first. For
instance, you can see whether investment advice pans out with-
out following it, simply by keeping track of the relevant stocks.
Still, as a rule, we learn more by trusting than by not trusting.
Trust is like any other skill: practice makes perfect.
As a result of this asymmetry between trusting and mistrust-
ing, the more we trust, the more information we gain. We not
only know better which specific individuals are trustworthy but
also use these experiences to figure out what kind of individual,
in what kind of situation, should be trusted. In a series of experi-
ments, Yamagishi and his colleagues found that the most trust-
ful of their participants—those more likely to think that other
people could be trusted—were also the best at ascertaining who
should be trusted (in games analogous to the trust game).31 Like-
wise, people who are the least trusting are the least able to dis-
criminate between phishing attempts and legitimate interfaces.32
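To see the logic of this asymmetry play out, here is a minimal simulation (my illustration, not Yamagishi's; the 70 percent base rate of trustworthiness is an arbitrary assumption): an agent who trusts gets feedback on every encounter, while one who never trusts learns nothing at all.

```python
import random

# A minimal sketch of the feedback asymmetry (illustrative, not from
# Yamagishi's studies): partners are trustworthy with some hidden
# probability. Trusting reveals a partner's type; mistrusting never does.

random.seed(1)

BASE_RATE = 0.7  # assumed share of trustworthy partners (arbitrary)

def estimate_trustworthiness(always_trust, n_encounters=1000):
    """Estimate the share of trustworthy partners from the feedback
    gathered; return None if no feedback was ever obtained."""
    feedback = []
    for _ in range(n_encounters):
        partner_is_trustworthy = random.random() < BASE_RATE
        if always_trust:
            # Trusting is the only way to observe the outcome.
            feedback.append(partner_is_trustworthy)
        # Mistrusting leaves the partner's type forever unknown.
    return sum(feedback) / len(feedback) if feedback else None

print(estimate_trustworthiness(always_trust=True))   # close to 0.7
print(estimate_trustworthiness(always_trust=False))  # None: nothing learned
```

The truster pays for the occasional betrayal but ends up with a calibrated estimate; the mistruster stays safe and stays ignorant.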
My maternal grandparents are the best illustration I know of
Yamagishi’s ideas. On the surface, they might seem like easy prey:
they aren’t so young anymore (being in their early nineties at the
time of this writing), they are supernice, and are always there
when a friend or a neighbor (or indeed my wife and I) need
something. One doesn’t get much more grandmotherly than my
grandmother, plying children with sweets and giving big hugs.
Yet my grandparents have very shrewd judgment, skillfully ap-
plying selective trust. I have never seen them fall for any mar-
keting stunt, and all their friends are perfectly trustworthy. By
giving people the benefit of the doubt in initial interactions with
little risk, they have accumulated a wealth of knowledge about
who can be trusted and have met enough people that they could
afford to select the most reliable as friends.
In spite of the informational gains that can be accrued from
trusting when in doubt, the general logic of open vigilance
mechanisms suggests that, on the whole, we make more errors
of omission (not trusting when we should) than of commis-
sion (trusting when we shouldn’t). This might seem counter-
intuitive, but beware the sampling bias: we’re much more
likely to realize we shouldn’t have trusted someone when we
did (we follow our friend’s advice and end up on a horrible
date) than to realize we should have trusted someone when
we didn’t (we don’t follow our friend’s advice and fail to meet
our soul mate). The main issue with using coarse cues isn’t that
we trust people we shouldn’t (trusting a con man because he’s
dressed as a respectable businessman), but that we don’t trust
people we should (mistrusting someone because of their skin
color, clothing, accent, etc., when in fact they are perfectly
trustworthy).
Experiments with economic games support this prediction.
Economists Chaim Fershtman and Uri Gneezy asked Jewish par-
ticipants in Israel to play trust games.33 Some of the participants
were Ashkenazi Jews (mostly coming from Europe and the
United States); others were Eastern Jews (mostly coming from
Africa and Asia). By and large, the former group had higher sta-
tus and was expected to be perceived as more trustworthy. This
is indeed what Fershtman and Gneezy observed. In a trust game,
male investors transferred more money to Ashkenazi trustees
than to Eastern trustees. However, the relative mistrust of the
Eastern Jews was unwarranted, as Ashkenazi and Eastern trust-
ees returned similar amounts. The same pattern was observed
by economist Justine Burns in South Africa.34 In her experiment,
investors transferred less money to black trustees than to other
trustees, even though black trustees then returned as much
money.35 In these experiments at least, the participants would
have been better off recalibrating their coarse cues and trusting
these ethnic groups more.
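For readers who want the mechanics of these experiments spelled out, here is a sketch of one round of a standard trust game (the tripling multiplier follows the classic Berg, Dickhaut, and McCabe design; the studies cited above may have used different parameters):

```python
# Sketch of a standard trust game round (Berg-style design; the exact
# parameters in the Fershtman-Gneezy and Burns studies may differ).

def trust_game(endowment, amount_sent, share_returned, multiplier=3):
    """One round: the investor sends some money, it is multiplied,
    and the trustee decides what share of the pot to send back."""
    assert 0 <= amount_sent <= endowment
    trustee_pot = amount_sent * multiplier
    returned = trustee_pot * share_returned
    investor_payoff = endowment - amount_sent + returned
    trustee_payoff = trustee_pot - returned
    return investor_payoff, trustee_payoff

# Against a trustee who returns half, trusting fully beats sending nothing:
print(trust_game(endowment=10, amount_sent=10, share_returned=0.5))  # (15.0, 15.0)
print(trust_game(endowment=10, amount_sent=0, share_returned=0.5))   # (10, 0.0)
```

The numbers make the cost of misplaced mistrust visible: when trustees return as much regardless of their group, sending less to one group simply forgoes gains for both sides.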
What to Do?
How can we better calibrate our trust? The two trust calibration
mechanisms I have explored here are quite distinct and call for
different adjustments. When it comes to the taking-sides strat-
egy, we should be aware that it can be abused by people who
claim to be on our side but aren’t actually paying any cost for their
commitment. We should be wary of largely made-up controver-
sies with largely made-up enemies. If we base our representa-
tion of the other side on how it is portrayed in the news or, worse,
on social media, then this representation is likely to be wide of
the mark—mistaking, say, crazy conspiracy theorists for average
Republicans, or enraged social justice warriors for typical Demo-
crats. We must remind ourselves that the members of the “other
side” are probably not that different from us, and that engaging
with them is worthwhile.
What about coarse cues? When we have to rely on coarse
cues—for example, when we meet someone for the first
time—I believe we should try to worry less about how people
judge our decisions to trust or not to trust. Con men and social
engineers often rely on our reluctance to question our inter-
locutors, our fear of appearing rude because we don’t trust
them. After all, if you meet someone who really is a long-lost
acquaintance, and you suggest they are trying to scam you, they
will be justifiably annoyed. Not wishing to be thought ill of also
drives some of our misplaced mistrust, as we’re afraid of looking
like fools if we get played.
In both instances we should strive to resist these social pres-
sures. The long-lost acquaintance shouldn't put us in a situation
in which we have to immediately trust them with something sig-
nificant (like an expensive watch). If they do, they are the ones
who are breaking social norms, not us when we refuse to grant
trust under pressure. As for the fear of looking like we’re easily
tricked, we should strive to remember the information we gain
by trusting people, even when our trust doesn’t pan out. As long
as we start small, trusting people quite broadly is a decision that
should pay off in the long run, with the occasional failure a mere
cost of doing business. To compensate for the times we trust too
much, we should consider the costs of failing to trust: the myr-
iad mutually beneficial relationships we could have formed if we
had trusted more people.
16
THE CASE AGAINST GULLIBILITY
This book is a long argument against the idea that humans
are gullible, that they are “wired not to seek truth” and “overly
deferential to authority,” and that they “cower before uniform
opinion.”1 If gullibility appears to have some advantages, allow-
ing us to learn more easily from our elders and our peers, the
costs are just too high. The theory of the evolution of commu-
nication dictates that for communication to exist, both senders
and receivers must benefit from it. If receivers were excessively
gullible, they would be mercilessly abused by senders, until they
reached a point where they simply stopped paying any attention
to what they were being told.
Far from being gullible, we are endowed with a suite of cogni-
tive mechanisms that evaluate what we hear or read. These mech-
anisms allow us to be open—we listen to information deemed
valuable—and vigilant—we reject most harmful messages. As
these open vigilance mechanisms grew increasingly complex, we
paid attention to more cues telling us that others are right and we
are wrong. We let ourselves be influenced by others more and
more, going from the fairly limited communicative powers of our
predecessors to the infinitely complex and powerful ideas that
human language lets us express.
This evolution is reflected in the organization of our minds.
People deprived of the most sophisticated means of evaluating
information—through brainwashing, subliminal influence, or
mere distraction—cannot process the cues telling them to ac-
cept new, challenging messages. They revert to a conservative
core, rejecting anything they don't already agree with, becoming
much harder, not easier, to influence.
Open vigilance mechanisms are part of our common cogni-
tive endowment. Their roots can be found in toddlers or even
infants. Twelve-month-old infants integrate what they are told
with their prior opinions, so that they are easiest to influence when
their opinions are weak, and are very stubborn otherwise—as
anyone who has interacted with a one-year-old will be painfully
aware.2 Infants this age also track the actions of adults and are more
influenced by those who behave competently.3 Two-and-a-half-
year- olds listen more to speakers who offer sound rather than
circular arguments.4 At three years of age, toddlers put more trust
in someone who is reporting what they have seen rather than
guessed, and they have figured out who is an expert in familiar
domains, such as food and toys.5 When they turn four, preschool-
ers get a grasp of how best to follow the majority opinion, and
they discount agreement based on mere hearsay.6
Our open vigilance mechanisms are for learning, and figur-
ing out what to believe and who to trust doesn’t stop at four years
of age. It never stops: as we accumulate knowledge and experi-
ence, we constantly sharpen our open vigilance mechanisms. As
an adult, think of how many factors you effortlessly weigh when
evaluating the most mundane communication. If your colleague
Bao says, “You should switch to the new OS; they’ve fixed a
major security flaw,” your reaction will depend on the following:
what you already know about the new OS (have you heard it seri-
ously slows computers down?), how vulnerable you think your
computer is to attacks (is the security flaw really major?), what
Bao’s level of competence in this domain is compared with
yours (is she the IT specialist?), and whether you believe Bao
might have any ulterior motive (might she want you to install
the new OS so she can see whether it works well?). None of
these kinds of calculations have to be conscious, but they are
going on whenever we hear or read something.
In everyday life, when interacting with people we know, cues
telling us to change our minds abound: we have time to ascertain
goodwill, recognize expertise, and exchange arguments. By con-
trast, these cues are typically absent from mass persuasion contexts.
How can a government agency build trust? How can politicians
display their competence to those who don’t closely follow poli-
tics? How can an advertising campaign convince you a given
product is worth buying? Mass persuasion should be tremen-
dously difficult. Indeed, the vast majority of mass persuasion
efforts, from propaganda to political campaigns, from religious
proselytizing to advertising, end in abject failure. The (modest)
successes of mass persuasion are also well accounted for by the
functioning of our open vigilance mechanisms. The conclusion
reached by Ian Kershaw with respect to Nazi propaganda applies
more broadly: the effectiveness of mass persuasion is “heavily
dependent on its ability to build on existing consensus, to con-
firm existing values, to bolster existing prejudices.”7 This reflects
the working of plausibility checking, which is always operating,
making even the most successful mass persuasion efforts some-
what inert: people might accept the messages, but the messages
do not substantially affect their preexisting plans or beliefs. In
some situations, when some trust has been built, mass persua-
sion can change minds, but then only on issues of little personal
import, as when people follow political leaders on topics in
which they have little interest and even less knowledge.
How to Be Wrong without Being Gullible
If the successes of mass persuasion are, more often than not, a
figment of the popular imagination, the dissemination of empiri-
cally dubious beliefs is not. We all have, at some point in our
lives, endorsed one type of misconception or another, believing
in anything from wild rumors about politicians to the dangers
of vaccination, conspiracy theories, or a flat earth. Yet the suc-
cess of these misconceptions is not necessarily a symptom of
gullibility.
The spread of most misconceptions is explained by their in-
tuitively appealing content, rather than by the skills of those who
propound them. Vaccine hesitancy surfs on the counterintuitive-
ness of vaccination. Conspiracy theories depend on our justi-
fied fear of powerful enemy coalitions. Even flat-earthers argue
that you just have to follow your intuition when you look at the
horizon and fail to see any curvature.
Even though many misconceptions have an intuitive dimen-
sion, most remain cut off from the rest of our cognition: they
are reflective beliefs with few consequences for our other
thoughts, and limited effects on our actions. The 9/11 truthers
might believe the CIA is powerful enough to take down the
World Trade Center, but they aren’t afraid it could easily silence
a blabbing blogger. Most of those who accused Hillary Clinton’s
aides of pedophilia were content with leaving one-star reviews
of the restaurant in which the children were supposedly abused.
Even forcefully held religious or scientific beliefs, from god’s om-
niscience to relativity theory, do not deeply affect how we
think: Christians still act as if god were an agent who could only
pay attention to one thing at a time, and physicists can barely
intuit the relationship between time and speed dictated by Ein-
stein’s theories.
If some of these reflective beliefs are counterintuitive—an
omniscient god, the influence of speed on time—I have ar-
gued that most have an intuitive dimension, such as vaccine
hesitancy, conspiracy theories, or a flat earth. How can a belief
be both reflective (separated from most of our cognition) and
intuitive (tapping into a number of our cognitive mecha-
nisms)? Take the belief in a flat earth. Imagine you have no
knowledge of astronomy. Someone tells you that the stuff you’re
standing on, the stuff you see, is called the earth. So far so good.
Now they either tell you that the earth is flat, which fits with
what you perceive, or that it is spherical, which doesn’t. The
first alternative is more intuitively compelling. Still, even if you
now accept that the earth is flat, the belief remains largely re-
flective, as you aren’t quite sure what to do with the concept of
“earth.” Unless you’re about to embark on a very long journey,
or have to perform some astronomical calculations, your ideas
about the shape of the earth have no cognitive or practical
consequences.