And finally an example—one among many—of the new questions that the new synthesis enables us to see. Recent work in psychology reveals that we all have a grab bag of surprising implicit biases. Many people, including people who support and work hard to achieve racial equality, nonetheless associate black faces with negative words and white faces with positive words. And there is a growing body of evidence suggesting that these implicit biases also affect our behavior, though we are usually unaware this is happening.
Moral philosophers have long been concerned to characterize the circumstances under which people are reasonably held to be morally responsible for their actions. Are we morally responsible for behavior influenced by implicit biases? That question has sparked heated debate, and it could not have been asked without the new synthesis.
Will all this still be news in the decades to come? My prediction is that it will. We have only begun to see the profound changes the new synthesis will bring about in moral philosophy.
Morality Is Made of Meat
Oliver Scott Curry
Departmental Lecturer, Institute of Cognitive and Evolutionary Anthropology, University of Oxford
What is morality and where does it come from? Why does it exert such a tremendous hold over us? Scholars have struggled with these questions for millennia, and for many people the nature of morality is so baffling that they assume it must have a supernatural origin. But the good news is that we now have a scientific answer to these questions.
Morality is made of meat. It is a collection of biological and cultural solutions to the problems of cooperation recurrent in human social life. Which problems? Caring for families, working in teams, trading favors, resolving conflicts. Which solutions? Love, loyalty, reciprocity, respect. The solutions arose first as instincts designed by natural selection; later, they were augmented and extended by human ingenuity and transmitted as culture. These mechanisms motivate social, cooperative, and altruistic behavior, and they provide the criteria by which we evaluate the behavior of others. And why is morality felt to be so important? Because, for a social species like us, the benefits of cooperation (and the opportunity costs of its absence) can hardly be overstated.
The scientific approach was news when Aristotle first hypothesized that morality was a combination of the natural, the habitual, and the conventional—all of which helped us fulfill our potential as social animals. It was news when Hobbes theorized that morality was an invention designed to corral selfish individuals into mutually beneficial cooperation. It was news when Hume proposed that morality was the product of animal passions and human artifice, aimed at the promotion of the “publick interest.” It was news when Darwin conjectured that “the so-called moral sense is aboriginally derived from the social instincts,” which tell us how “to act for the public good.” And it has been front-page news for the past few decades, as modern science has made discovery after discovery into the empirical basis of morality, delivering evolutionary explanations, animal antecedents, psychological mechanisms, behavioral manifestations, and cultural expressions.
Unfortunately, many philosophers, theologians, and politicians have yet to get the message. They make out that morality is still mysterious, that without God there is no morality, and that the irreligious are unfit for office. This creationist account of morality—“good of the gaps”—is mistaken and alarmist. Morality is natural, not supernatural. We are good because we want to be, and because we are sensitive to the opinions—the praise and the punishment—of others. We can work out for ourselves how best to promote the common good, and with the help of science make the world a better place.
Now, ain’t that good news? And ain’t it high time we recognized it?
People Kill Because It’s the Right Thing to Do
James J. O’Donnell
Classics scholar; university librarian, Arizona State University; author, Pagans: The End of Traditional Religion and the Rise of Christianity
People kill because it’s the right thing to do.
In their 2014 book Virtuous Violence: Hurting and Killing to Create, Sustain, End, and Honor Social Relationships, moral psychologist Tage Shakti Rai at Northwestern and psychological anthropologist Alan Page Fiske at UCLA sketch the extent to which their work shows that violent behavior among human beings is often not a breach of moral codes but an embodiment of them.
In a sense, we all know this, by way of the exceptions we permit. Augustine’s theory of the Just War arose because his god demonstrably approved of some wars. When Joshua fought the battle of Jericho, he had divine approval, “Thou shalt not kill” be damned. To the credit of Augustine and others in that tradition, Just War theory represents hard work to restrict licit violence as much as possible. To their discredit, it represents their decision to cave in to questionable evidence and put a stamp of approval on slaughter. (Am I hallucinating in recalling a small woodcut of Augustine in the margin of a Time essay on the debates over the justice of the Vietnam War? If my hallucination is correct, I remember shuddering at the sight.)
And certainly we have plenty of examples closer to our own day: Mideast terrorists and anti-abortion assassins are flamboyant examples, but elected statesmen—Americans as well as those from countries we aren’t so fond of—are no less prone to justify killing on the soundest moral arguments. We glance away nervously and mutter about exceptions. But what if the exceptions are the rule?
If the work of Rai and Fiske wins assent, it points to something more troubling. The good guys are the bad guys. Teaching your children to do the right thing can get people killed. We have other reasons for thinking that the traditional model of how human beings work in ideal conditions (intellectual consideration of options, informed by philosophical principles, leading to rational action) may be not just flawed but downright wrong. Rai and Fiske suggest that the model is not even sustainable as a working hypothesis or faute de mieux but is positively dangerous.
Interdisciplinary Social Research
Ziyad Marar
Global publishing director, SAGE; author, Intimacy
In terms of sheer unfulfilled promise, interdisciplinary research has to stand as one of the most frustrating examples in the world of social research. The challenges modern society faces—climate change, antimicrobial resistance, countless issues to do with economic, social, political, and cultural well-being—do not come in disciplinary packages. They are complex and require an integrated response, drawing on different levels of inquiry. Yet we persist in organizing ourselves in academic siloes and risk looking like those blind men groping an elephant. As Garry Brewer pithily observed back in 1999, “The world has problems, universities have departments.”
The reasons this promise is unfulfilled are equally clear. Building an academic career requires immersion in a speciality, with outputs (articles, books, talks) that win the approval of peers. Universities are structured in terms of departments, learned societies champion a single discipline, and funding agencies prioritize specific work from those who have built the right kind of credibility in this context. And this means interdisciplinary work is hard to do well, often falling between stools and sometimes lost in arcane debate about its very nature, swapping “inter” for “multi,” “cross,” “trans,” “post,” and other candidate angels to place on the head of this pin.
Some disciplines have overcome these hurdles—neuroscience, bioinformatics, cybernetics, biomedical engineering—and more recently we have seen economics taking a behavioral turn and moral philosophy drawing on experimental psychology. But the bulk of the social sciences have proved resistant, despite the suitability of their problems to multilevel inquiry.
The good news is that we are seeing substantial shifts in this terrain, triggered in part by the rise of Big Data and new technology. Social researchers are agog at the chance to listen to millions of voices, observe billions of interactions, and analyze patterns at a scale never seen before. But to engage seriously requires new methods and forms of collaboration, with a consequent erosion of the once insurmountable barrier between quantitative and qualitative research. An example comes from Berkeley, where Nick Adams and his team are analyzing how violence breaks out in protest movements—an old sociological question, but now with a database (thanks to the number of Occupy movements in the U.S.) so large that the only feasible way to analyze it is a Crowd Content Analysis Assembly Line (combining crowdsourcing and active machine learning) to code vast corpora of text. This new form of social research, drawing on computational linguistics and computer science to convert large amounts of text into rich data, could lead to insights into a vast array of social and cultural themes.
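To make that kind of pipeline concrete, here is a minimal sketch, in Python with scikit-learn, of a human-in-the-loop coding loop of the general sort described above: crowd workers label a small seed of documents, a text classifier learns from those labels, and the model then nominates the documents it is least certain about for the next round of crowd coding. The example corpus, the “violent vs. not” coding scheme, and the crowd_label() stub are invented for illustration; this is not Adams’s actual Assembly Line.

```python
# Minimal active-learning sketch: crowd labels train a classifier, and the
# classifier picks the most ambiguous documents for the next crowd batch.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
import numpy as np

documents = [
    "Police pushed into the crowd and several protesters were injured.",
    "The march ended peacefully with speeches in the main square.",
    "Windows were smashed as the demonstration turned violent.",
    "Organizers thanked the city for a calm, orderly rally.",
    # ...in practice, tens of thousands of news reports
]

def crowd_label(texts):
    """Stand-in for sending documents to crowd workers; returns 1 = violent, 0 = not."""
    return [1 if ("injured" in t or "violent" in t) else 0 for t in texts]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)

# Round 1: the crowd codes a small seed sample.
labeled_idx = [0, 1]
labels = crowd_label([documents[i] for i in labeled_idx])

model = LogisticRegression()
model.fit(X[labeled_idx], labels)

# Active-learning step: ask the crowd about the documents the model finds most ambiguous.
unlabeled_idx = [i for i in range(len(documents)) if i not in labeled_idx]
probs = model.predict_proba(X[unlabeled_idx])[:, 1]
uncertainty = np.abs(probs - 0.5)
next_batch = [unlabeled_idx[i] for i in np.argsort(uncertainty)[:2]]
print("Send to crowd next:", next_batch)
```

At scale, a loop like this lets a relatively small amount of expensive human coding train a model that can annotate the remainder of a very large corpus.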
These shifts might stick if centers of excellence focusing on data-intensive social research, like the D-Lab at Berkeley or Harvard’s Institute for Quantitative Social Science, continue to show how institutions can reconfigure themselves to respond to the opportunity. As Gary King (director of the latter) has put it:
The social sciences are undergoing a dramatic transformation from studying problems to solving them; from making do with a small number of sparse data sets to analyzing increasing quantities of diverse, highly informative data; from isolated scholars toiling away on their own to larger scale, collaborative, interdisciplinary, lab-style research teams; and from a purely academic pursuit focused inward to having a major impact on public policy, commerce and industry, other academic fields, and some of the major problems that affect individuals and societies.
More structural change will follow these innovations. Universities around the world, having long invested in social-science infrastructure, are looking to these models. And we are seeing changes in funders’ priorities, too. The Wellcome Trust, for instance, now offers the Hub Award to support work that “explores what happens when medicine and health intersect with the arts, humanities, and social sciences.”
Of course the biggest shaper of future research is at the national level. In the U.K., the proposed implementation of a “cross-disciplinary fund” alongside a new budget to tackle “global challenges” may indicate the seriousness of the Government’s interdisciplinary intent. Details will follow, and they may prove devilish. But the groundswell of interest, sustained by opportunities in data-intensive research, is undeniable.
So interdisciplinary social research should increasingly become the norm, although specialization will still be important—after all, we need good disciplines to do good synthetic work. But we may soon see social sciences coalesce into a more singular social science and become more fully engaged with problem domains first and departmental siloes second.
Intellectual Convergence
Adam Alter
Psychologist; associate professor of marketing, Stern School of Business, NYU; author, Drunk Tank Pink
Suppose a team of researchers discovers that people who earn $50,000 a year are happier than people who earn $30,000 a year. How might the team explain this result?
The answer depends largely on whether the team adopts a telephoto zoom lens or a wide-angle lens. A telephoto zoom lens focuses on narrower causes, like the tendency for financial stability to diminish stress hormones and improve brain functioning. A team using this lens will focus on individuals who earn more or less money each year, and on any differences in how their brains function and how they behave. A team that adopts a wide-angle lens will focus on broader differences. Perhaps people who earn more also live in safer neighborhoods with superior infrastructure and social support. Though each team adopts a different level of analysis and arrives at a different answer, both answers can be right.
For decades and even centuries, this is largely how the social sciences have operated. Neuroscientists and psychologists have peered at individuals through zoom lenses, while economists and sociologists have peered at populations through wide-angle lenses.
The big news of late is that these intellectual barriers are dissolving. Scientists from different disciplines are either sharing their lenses or working separately on the same questions and then coming together to share what they’ve learned. Not only is interdisciplinary collaboration on the rise, but papers with authors from different disciplines are more likely to be cited by other researchers. The benefits are obvious. As the income-gap example shows, interdisciplinary teams are more likely to answer the whole question, rather than focusing on just one aspect at a time. Instead of saying that people who earn more are happier because their brains work differently, an interdisciplinary team is more likely to compare the roles of multiple causes in formulating its conclusion.
At the same time, researchers within disciplines are adopting new lenses. Social and cognitive psychologists, for example, have historically explored human behavior in the lab. They still do, but many prominent papers published in 2015 also included brain-imaging data (a telephoto zoom lens) and data from social-media sites and large-scale economic panels (wide-angle lenses). One paper captured every word spoken within earshot of a child during the first three years of his life, to examine how babies come to speak some words earlier than others. A second paper examined the content of thousands of grant reviews to show that research-grant agencies favor male over female scientists. And a third analyzed the content of 47,000 tweets to quantify expressions of happiness and sadness. Each of these methods is a radical departure from traditional lab experiments, and each approaches the focal problem from an unusually broad or narrow perspective. These papers are more compelling because they present a broader solution to the problem they’re investigating—and they’re already tremendously influential, in part because they borrow across disciplines.
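As a toy illustration of the last of these methods, the sketch below simply counts happiness and sadness words in short texts. The word lists and example tweets are invented, and the published study’s lexicon and scoring were far more sophisticated; this is only meant to show the general shape of such a content analysis.

```python
# Toy lexicon-based scoring of emotional expression in short texts.
# The word lists and tweets are placeholders, not the study's actual lexicon.
HAPPY_WORDS = {"happy", "joy", "love", "great", "wonderful"}
SAD_WORDS = {"sad", "cry", "lonely", "awful", "miss"}

def emotion_score(tweet):
    """Return (happy_count, sad_count) for one tweet."""
    words = tweet.lower().split()
    happy = sum(w.strip(".,!?") in HAPPY_WORDS for w in words)
    sad = sum(w.strip(".,!?") in SAD_WORDS for w in words)
    return happy, sad

tweets = [
    "So happy to see my friends today, what a wonderful evening!",
    "Feeling lonely and sad, I miss home.",
]

for t in tweets:
    print(emotion_score(t), t)
```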
One major driver of intellectual convergence is the rise of Big Data, not just in the quantity of data but also in understanding how to use it. Psychologists and other lab researchers have begun to complement lab studies with huge, wide-angle social-media and panel-data analyses. Meanwhile, researchers who typically adopt a wide-angle lens have begun to complement their Big Data analyses with zoomed-in physiological measures, like eye-tracking and brain-imaging analyses. The news here is not just that scientists are borrowing from other disciplines but also that their borrowing has supplied richer, broader answers to a growing range of important scientific questions.
Weapons Technology Powered Human Evolution
Timothy Taylor
Professor of the prehistory of humanity, University of Vienna; author, The Artificial Ape
Thomas Hobbes’s uncomfortable view of human nature looks remarkably prescient in the light of new discoveries in Kenya. Back in the mid-17th century—before anyone had any inkling of deep time or the destabilization of essential identity that would result from an understanding of the facts of human evolution (i.e., before the idea that nature was mutable)—Hobbes argued that we were fundamentally beastly (selfish, greedy, cruel) and that, in the absence of certain historically developed and carefully nurtured institutional structures, we would regress to a state of nature, in turn understood as a state of perpetual war.
We can assume that John Frere would have agreed with Hobbes. Frere, we may read (and here Wikipedia is orthodox, typical, and, in a critical sense, wrong), “was an English antiquary and a pioneering discoverer of Old Stone Age or Lower Paleolithic tools in association with large extinct animals at Hoxne, Suffolk in 1797.” In fact, while Frere did indeed make the first well-justified claim for a deep-time dimension to what he carefully recorded in situ, saying that the worked flints he found dated to a “very remote period indeed,” he did not think they were tools in any neutral sense, stating that the objects were “evidently weapons of war, fabricated and used by a people who had not the use of metals.”
Frere’s sharp-edged weapons can now be dated to Oxygen Isotope Stage 11—that is, to a period lying between 427,000 and 364,000 years ago—and even he might have been surprised to learn that the people responsible were not modern humans but a species, Homo erectus, whose transitional anatomy would first come to light through fossil discoveries in Java a century later. Subsequent archeological and paleoanthropological work (significant aspects of it pioneered by Frere’s direct descendant, Mary Leakey) has pushed the story of genus Homo back ever further, revealing as many as a dozen distinct species (the number varies with the criteria used).
Alongside the biological changes runs a history, or prehistory, of technology. It has usually been supposed that this technology, surviving mainly as modified stone artifacts, was a product of the higher brain power that our human ancestors displayed. According to Darwin’s sexual-selection hypothesis, female hominins favored innovative male hunters, and the incremental growth in intelligence led, ultimately, to material innovation (hence the evocative but not taxonomic term Homo faber—Man the maker).
So powerful was this idea that although chipped-stone artifacts dating to around 2.6 million years ago have long been known, there was a strong presumption that genus Homo had to be involved. This is despite the fact that the earliest fossils with brains big enough to be classified in this genus date from at least half a million years later. It was a fairly general hunch within paleoanthropology that the genus Homo populations responsible for early chipped-stone technologies had simply not yet been discovered. Those few of us who, grounded in the related field of theoretical archeology, thought differently remained reliant on a broad kind of consilience to counter this ex silentio assumption.
So to me it was wonderful news when in May 2015 Sonia Harmand and co-workers published an article in Nature titled “3.3-million-year-old stone tools from Lomekwi 3, West Turkana, Kenya”—because the strata at their site date to a period when no one seriously doubts that australopithecines, with their chimp-sized brains, were the smartest of the savannah-dwelling hominins. The discovery shows unambiguously that technology preceded, by more than a million years, the expansion of the cranium traditionally associated with the emergence of genus Homo.