Externalities can be small or large, negative or positive. When I lived in Santa Barbara, many people with no goal other than working on their tans generated (small, it’s true) positive externalities for passersby, who benefited from the enhanced scenery. These onlookers didn’t have to pay for this improvement to the landscape, but at the same beach Rollerbladers traveling at high speed and distracted by this particular positive externality occasionally produced a negative one, in the form of a risk of collision for pedestrians trying to enjoy the footpath.
Externalities are increasingly important in the present era, when actions in one place potentially affect others half a world away. When I manufacture widgets for you to buy, I might, as a side effect of the process, produce waste that makes the people around my factory—and maybe around the world—worse off. As long as I don’t have to compensate anyone for polluting their water and air, it’s unlikely I’ll make much of an effort to stop doing it.
On a smaller, more personal scale, we all impose externalities on one another as we go through our daily lives. I drive to work, increasing the amount of traffic you face. You give in to the strange compulsion that infects people in theaters these days and check the text messages on your cell phone during the film, and the bright glow peeking over your shoulder reduces my enjoyment of the movie.
The concept of externalities is useful because it directs our attention to such unintended side effects. If you weren’t focused on externalities, you might think that the way to reduce traffic congestion was to build more roads. That might work, but another way, and a potentially more efficient way, is to implement policies that force drivers to pay the cost of their negative externalities by charging a fee to use roads, particularly at peak times. Congestion charges, such as those implemented in London and Singapore, are designed to do exactly that. If I have to pay to go into town during rush hour, I may stay home unless my need is pressing.
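To see the logic in miniature, here is a toy calculation (all numbers invented, and the external cost per trip held constant for simplicity, which real congestion models do not assume). Each potential driver values a rush-hour trip differently, and every trip delays everyone else; a fee set near the external cost screens out exactly those trips that are worth less than the harm they cause:

```python
# Toy congestion-pricing model (illustrative numbers only).
# Each entry is one potential driver's private value for a rush-hour trip;
# each trip also imposes a fixed delay cost on everybody else on the road.

TRIP_VALUES = [12, 9, 7, 5, 3, 2, 1]   # dollars of private benefit per driver
EXTERNAL_COST_PER_TRIP = 4             # dollars of delay imposed on others

def social_outcome(fee):
    """Who drives at a given fee, and the resulting net social value."""
    trips = [v for v in TRIP_VALUES if v > fee]   # drivers who still go
    welfare = sum(trips) - EXTERNAL_COST_PER_TRIP * len(trips)
    return len(trips), welfare                    # fee is a transfer, not a loss

for fee in (0, EXTERNAL_COST_PER_TRIP):
    n, welfare = social_outcome(fee)
    print(f"fee ${fee}: {n} drivers, net social value ${welfare}")
# fee $0: 7 drivers, net social value $11
# fee $4: 4 drivers, net social value $17
```

Note that the fee itself is not lost to society; it is a transfer from drivers to the city, so it drops out of the welfare sum. What raises the total is simply that the marginal, low-value trips no longer happen.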
Keeping externalities firmly in mind also reminds us that in complex, integrated systems, simple interventions designed to bring about a particular desirable effect will potentially have many more consequences, both positive and negative. Consider, as an example, the history of DDT. When first used, it had its intended effect, which was to reduce the spread of malaria through the control of mosquito populations. However, its use also had two unintended consequences. First, it poisoned a number of animals (including humans), and, second, it selected for resistance among mosquitoes. Subsequently, policies to reduce the use of DDT probably were effective in preventing these two negative consequences. However, while there is some debate about the details, these policies might themselves have had an important side effect—increasing rates of malaria, carried by the mosquitoes no longer suppressed by DDT.
The key point is that the notion of externalities forces us to think about unintended (positive and negative) effects of actions, an issue that looms larger as the world gets smaller. It highlights the need to balance not only the intended costs and benefits of a given candidate policy but also its unintended effects. Further, it helps us focus on one type of solution to the problems of unintended harms, which is to think about using financial incentives for people and firms to produce more positive externalities and fewer negative ones.
Considering externalities in our daily lives directs our attention to ways in which we harm, albeit inadvertently, the people around us, and can guide our decision making—including waiting until after the credits have rolled to check our messages.
Everything Is in Motion
James O’Donnell
Classicist; provost, Georgetown University; author, The Ruin of the Roman Empire
Nothing is more wonderful about human beings than their ability to abstract, infer, calculate, and produce rules, algorithms, and tables that enable them to work marvels. We are the only species that could even imagine taking on Mother Nature in a fight for control of the world. We may well lose that fight, but it’s an amazing spectacle nonetheless.
But nothing is less wonderful about human beings than their ability to refuse to learn from their own discoveries. The edge to the Edge Question this year is the implication that we are brilliant and stupid at the same time, capable of inventing wonders and still capable of forgetting what we’ve done and blundering stupidly on. Our poor cognitive toolkits are always missing a screwdriver when we need one, and we’re always trying to get a bolt off that wheel with our teeth, when a perfectly serviceable wrench is in the kit over there, unused.
So as a classicist, I’ll make my pitch for what is arguably the oldest of our SHA concepts, the one that goes back to the senior pre-Socratic philosopher Heraclitus. “You can’t step in the same river twice,” he said. Putting it another way, his mantra was “Everything flows.” Remembering that everything is in motion—feverish, ceaseless, unbelievably rapid motion—is always hard for us. Vast galaxies dash apart at speeds that seem faster than is physically possible, while the subatomic particles of which we are composed beggar our ability to comprehend large numbers when we try to understand their motion—and at the same time, I lie here, sluglike, inert, trying to muster the energy to change channels, convinced that one day is just like another.
Because we think and move on a human scale in time and space, we can deceive ourselves. Pre-Copernican astronomies depended on the self-evident fact that the “fixed stars” orbited the Earth in a slow annual dance; and it was an advance in science to declare that “atoms” (in Greek, “indivisibles”) were the changeless building blocks of matter—until we split them. Edward Gibbon was puzzled by the fall of the Roman Empire because he failed to realize that its most amazing feature was that it lasted so long. Scientists discover magic disease-fighting compounds only to find that the disease changes faster than they can keep up.
Take it from Heraclitus and put it in your toolkit: Change is the law. Stability and consistency are illusions, temporary in any case, a heroic achievement of human will and persistence at best. When we want things to stay the same, we’ll always wind up playing catch-up. Better to go with the flow.
Subselves and the Modular Mind
Douglas T. Kenrick
Professor of social psychology, Arizona State University; author, Sex, Murder, and the Meaning of Life
Although it seems obvious that there is a single “you” inside your head, research from several subdisciplines of psychology suggests that this is an illusion. The “you” who makes a seemingly rational and “self-interested” decision to discontinue a relationship with a friend who fails to return your phone calls, borrows thousands of dollars he doesn’t pay back, and lets you pick up the tab in the restaurant is not the same “you” who makes very different calculations about a son, a lover, or a business partner.
Three decades ago, cognitive scientist Colin Martindale advanced the idea that each of us has several subselves, and he connected his idea to emerging ideas in cognitive science. Central to Martindale’s thesis were a few fairly simple ideas, such as selective attention, lateral inhibition, state-dependent memory, and cognitive dissociation. Although there are billions of neurons in our brains firing all the time, we’d never be able to put one foot in front of the other if we were unable to ignore almost all of that hyperabundant parallel processing going on in the background. When you walk down the street, there are thousands of stimuli to stimulate your already overtaxed brain—hundreds of different people of different ages with different accents, different hair colors, different clothes, different ways of walking and gesturing, not to mention all the flashing advertisements, curbs to avoid tripping over, and automobiles running yellow lights as you try to cross at the intersection. Hence, attention is highly selective. The nervous system accomplishes some of that selectiveness by relying on the powerful principle of lateral inhibition—in which one group of neurons suppresses the activity of other neurons that might interfere with an important message getting up to the next level of processing. In the eye, lateral inhibition helps us notice potentially dangerous holes in the ground, as the retinal cells stimulated by light areas send messages suppressing the activity of neighboring neurons, producing a perceived bump in brightness and valley of darkness near any edge. Several of these local “edge detector”–style mechanisms combine at a higher level to produce “shape detectors,” allowing us to discriminate a “b” from a “d” and a “p.” Higher up in the nervous system, several shape detectors combine to allow us to discriminate words, and at a higher level to discriminate sentences, and at a still higher level to place those sentences in context (thereby determining whether the statement “Hi, how are you today?” is a romantic pass or a prelude to a sales pitch).
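The edge-sharpening effect of lateral inhibition is easy to simulate. The sketch below is a deliberate caricature (a one-dimensional “retina,” made-up intensities, inhibition modeled as a fixed fraction of each neighbor’s input), but it reproduces the bump and valley just described:

```python
# Caricature of lateral inhibition in a 1-D row of retinal cells.
# Each cell's response is its own input minus a fraction of what its
# two immediate neighbors receive.

stimulus = [1.0] * 6 + [0.2] * 6   # a bright region next to a dark region
INHIBITION = 0.2                   # fraction of each neighbor's input subtracted

def respond(signal, w=INHIBITION):
    responses = []
    for i, s in enumerate(signal):
        left = signal[i - 1] if i > 0 else s                  # cells at the
        right = signal[i + 1] if i < len(signal) - 1 else s   # ends inhibit themselves
        responses.append(s - w * (left + right))
    return responses

for s, r in zip(stimulus, respond(stimulus)):
    print(f"input {s:.1f} -> response {r:+.2f}")
# Interior bright cells respond at +0.60, but the last bright cell jumps
# to +0.76 (the "bump"); the first dark cell dips to -0.04 (the "valley"),
# exaggerating the edge exactly where it matters.
```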
State-dependent memory helps sort out all that incoming information for later use by categorizing new info according to context: If you learn a stranger’s name after drinking a doppio espresso with her at the local java house, it will be easier to remember that name if you meet again at Starbucks than if the next encounter is at a local pub after a martini. For several months after I returned from Italy, I would start speaking Italian and making expansive hand gestures every time I drank a glass of wine.
Martindale argued that at the highest level all those processes of inhibition and dissociation lead us to suffer from an everyday version of dissociative disorder. In other words, we all have a number of executive subselves, and the only way we manage to accomplish anything in life is to allow only one subself to take the conscious driver’s seat at any given time.
Martindale developed his notion of executive subselves before modern evolutionary approaches to psychology had become prominent, but the idea becomes especially powerful if you combine his cognitive model with the idea of functional modularity. Building on findings that animals and humans use remarkably varied mental processes to learn different things, evolutionarily informed psychologists have suggested that there is not a single information-processing organ inside our heads but instead multiple systems dedicated to solving different adaptive problems. Thus, instead of having a random and idiosyncratic assortment of subselves inside my head unlike the assortment inside your head, each of us has a set of functional subselves—one dedicated to getting along with our friends, one dedicated to self-protection (protecting us from the bad guys), one dedicated to winning status, another to finding mates, yet another to keeping mates (which presents a very different set of problems, as some of us have learned), and still another to caring for our offspring.
Thinking of the mind as composed of several functionally independent adaptive subselves helps us understand many apparent inconsistencies and irrationalities in human behavior, such as why a decision that seems rational when it involves one’s son seems eminently irrational when it involves a friend or a lover.
Predictive Coding
Andy Clark
Professor of philosophy, University of Edinburgh; author, Supersizing the Mind: Embodiment, Action, and Cognitive Extension
The idea that the brain is basically an engine of prediction is one that will, I believe, turn out to be very valuable not just within its current home (computational cognitive neuroscience) but across the board—for the arts, for the humanities, and for our own personal understanding of what it is to be a human being in contact with the world.
The term “predictive coding” is currently used in many ways, across a variety of disciplines. The usage I recommend for the Everyday Cognitive Toolkit is, however, more restricted in scope. It concerns the way the brain exploits prediction and anticipation in making sense of incoming signals and using them to guide perception, thought, and action. Used in this way, predictive coding names a technically rich body of computational and neuroscientific research (key theorists include Dana Ballard, Tobias Egner, Paul Fletcher, Karl Friston, David Mumford, and Rajesh Rao). This corpus of research uses mathematical principles and models that explore in detail the ways that this form of coding might underlie perception and inform belief, choice, and reasoning.
The basic idea is simple. It is that to perceive the world is to successfully predict our own sensory states. The brain uses stored knowledge about the structure of the world and the probabilities of one state or event following another to generate a prediction of what the current state is likely to be, given the previous one and this body of knowledge. Mismatches between the prediction and the received signal generate error signals that nuance the prediction or (in more extreme cases) drive learning and plasticity.
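At its simplest, the loop can be written in a few lines. The sketch below tracks a single sensory channel with invented numbers; real predictive-coding models are hierarchical and weight each error by its estimated reliability, but the core move, revising a prediction by a fraction of the prediction error, is the same:

```python
import random

# Bare-bones prediction/error loop for one sensory channel (toy numbers).
# The "brain" keeps a running prediction and revises it by a fraction of
# each mismatch, rather than storing the raw signal itself.

random.seed(0)
TRUE_STATE = 5.0       # the state of the world being sensed
NOISE = 0.5            # sensory noise
LEARNING_RATE = 0.3    # how strongly an error revises the prediction

prediction = 0.0       # initial expectation, deliberately wrong
for step in range(10):
    sensed = TRUE_STATE + random.gauss(0, NOISE)  # incoming signal
    error = sensed - prediction                   # prediction error
    prediction += LEARNING_RATE * error           # error nuances the prediction
    print(f"step {step}: sensed {sensed:5.2f}, error {error:+5.2f}, "
          f"prediction now {prediction:5.2f}")
# Errors start large and shrink as the prediction converges; eventually
# only residual mismatch, not the full input, drives any further change.
```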
We may contrast this with older models in which perception is a “bottom-up” process: incoming information is progressively built (via some kind of evidence-accumulation process, starting with simple features and working up) into a high-level model of the world. According to the predictive-coding alternative, the reverse is the case. For the most part, we determine the low-level features by applying a cascade of predictions that begins at the very top—with our most general expectations about the nature and state of the world providing constraints on our successively more detailed (fine-grained) predictions.
This inversion has some quite profound implications.
First, the notion of good (“veridical”) sensory contact with the world becomes a matter of applying the right expectations to the incoming signal. Subtract such expectations and the best we can hope for are prediction errors that elicit plasticity and learning. This means, in effect, that all perception is some form of “expert perception,” and that the idea of accessing some kind of unvarnished sensory truth is untenable (unless that merely names another kind of trained, expert perception!).
Second, the time course of perception becomes critical. Predictive-coding models suggest that what emerges first is the general gist (including the general affective feel) of the scene, with the details becoming progressively filled in as the brain uses that larger context—time and task allowing—to generate finer and finer predictions of detail. There is a very real sense in which we properly perceive the forest before the trees.
Third, the line between perception and cognition becomes blurred. What we perceive (or think we perceive) is heavily determined by what we know, and what we know (or think we know) is constantly conditioned on what we perceive (or think we perceive). This turns out to offer a powerful window on various pathologies of thought and action, explaining the way hallucinations and false beliefs go hand in hand in schizophrenia, as well as other more familiar states such as “confirmation bias” (our tendency to “spot” confirming evidence more readily than disconfirming evidence).
Fourth, if we now consider that prediction errors can be suppressed not just by changing predictions but also by changing the things predicted, we have a simple and powerful explanation for behavior and the way we manipulate and sample our environment. In this view, action is there to make predictions come true, an idea that offers a nice account of phenomena ranging from homeostasis to the maintenance of our emotional and interpersonal status quo.
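Continuing the toy sketch from above, this “active” route to suppressing error can be shown by holding the prediction fixed and letting the agent change the world instead, as in homeostasis. This is an illustrative extension of the earlier snippet, not a model drawn from the predictive-coding literature:

```python
# Active variant of the error loop: the prediction (a set point) is held
# fixed, and each step the agent acts on the world to shrink the error,
# as in temperature homeostasis. Numbers are invented.

SET_POINT = 37.0     # confidently predicted body temperature
ACTION_GAIN = 0.5    # how strongly each corrective action moves the world

world_temp = 33.0    # actual state, initially off target
for step in range(8):
    error = SET_POINT - world_temp       # same error signal as before
    world_temp += ACTION_GAIN * error    # but now action, not revision
    print(f"step {step}: world at {world_temp:.2f}, "
          f"error {SET_POINT - world_temp:+.2f}")
# Here the error is suppressed by changing the thing predicted rather than
# the prediction: action makes the prediction come true.
```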
Understanding perception as prediction thus offers, it seems to me, an excellent tool for appreciating both the power and the potential hazards of our primary way of being in contact with the world. Our primary contact with the world, all this suggests, is via our expectations about what we’re about to see or experience. The notion of predictive coding, by offering a concise and technically rich way of gesturing at this fact, provides a cognitive tool that will more than earn its keep in science, law, ethics, and the understanding of our own daily experience.
Our Sensory Desktop
Donald Hoffman
Cognitive scientist, University of California–Irvine; author, Visual Intelligence: How We Create What We See
Our perceptions are neither true nor false. Instead, our perceptions of space and time and objects—the fragrance of a rose, the tartness of a lemon—are all part of our “sensory desktop,” which functions much like a computer desktop.
Graphical desktops for personal computers have existed for about three decades. Yet they are now such an integral part of daily life that we might easily overlook a useful concept that they embody. A graphical desktop is a guide to adaptive behavior. Computers are notoriously complex devices, more complex than most of us care to learn. The colors, shapes, and locations of icons on a desktop shield us from the computer’s complexity, and yet they allow us to harness its power by appropriately informing our behaviors, such as mouse movements and button clicks that open, delete, and otherwise manipulate files. In this way, a graphical desktop is a guide to adaptive behavior.
Graphical desktops make it easier to grasp the idea that guiding adaptive behavior is different from reporting truth. A red icon on a desktop does not report the true color of the file it represents. Indeed, a file has no color. Instead, the red color guides adaptive behavior, perhaps by signaling the relative importance or recent updating of the file. The graphical desktop guides useful behavior and hides what is true but not useful. The complex truth about the computer’s logic gates and magnetic fields is, for the purposes of most users, of no use.
Graphical desktops thus make it easier to grasp the nontrivial difference between utility and truth. Utility drives evolution by natural selection. Grasping the distinction between utility and truth is therefore critical to understanding a major force that shapes our bodies, minds, and sensory experiences.
Consider, for instance, facial attractiveness. When we glance at a face, we get an immediate sense of its attractiveness, an impression that usually falls somewhere between hot and not. That feeling can inspire poetry, evoke disgust, or launch a thousand ships. It certainly influences dating and mating. Research in evolutionary psychology suggests that this feeling of attractiveness is a guide to adaptive behavior. The behavior is mating, and the initial feeling of attraction toward a person is an adaptive guide because it correlates with the likelihood that mating with that person will lead to successful offspring.