The first is whether events in early childhood change methylation patterns and whether such patterns are reversible. I find Mitchell’s skepticism convincing, but this aspect of the research is being conducted using methods that lend themselves to rigorous examination. The more ambitious claims of the enthusiasts are currently unwarranted, but if the enthusiasts are right, they will eventually be able to make their case via the scientific method.
The second issue is whether environmentally induced changes in methylation are passed on to the next generation. The scientifically interpretable evidence for this is mostly from work with C. elegans (a worm about one millimeter long) and D. melanogaster (the fruit fly), which seems a long way from proving that it happens in humans. But some evidence of intergenerational transmission has also come from laboratory versions of the house mouse, a mammal, which strikes closer to home.81
The most widely publicized of these was the finding in the early 2000s that feeding pregnant mice extra vitamins altered the coat color and disease susceptibility of their newborn offspring and that the effects lasted for two generations.82 At the end of 2018, a team of 10 geneticists, mostly at Cambridge University, published their finding that the methylation marks on the transposable elements thought to be involved were not transmitted to the next generation.83 In an interview with The Scientist, Dirk Schübeler, a molecular geneticist who was not involved in the study, called the analysis “an enormous technical tour de force.” Before it had been conducted, he continued, the case of the changed coat color had been treated as the tip of an iceberg. “This study shows there is no iceberg.”84
The case for intergenerational transmission isn’t fully resolved, but the proponents face an uphill battle. The simplest reason it’s an uphill battle was explained by the leader of the Cambridge study, Anne Ferguson-Smith. “There’s two rounds of epigenetic programming that basically prevent any epigenetic marks from being transmitted from one generation to the next,” she told The Scientist. “People don’t seem to appreciate this.”85
Bernhard Horsthemke, director of the Institut für Humangenetik at the University of Duisburg-Essen, has expressed the problems at greater length by putting together a “roadmap to proving transgenerational epigenetic inheritance.” I’ve consigned it to a note because it is long and technical—but that’s my point.86 The accounts of the transgenerational epigenetic effects of famines and the Holocaust that have gotten so much press ignore all of these methodological problems.
This much seems uncontroversial: The study of methylation patterns and their manipulability is at an extremely early stage. Even if one takes all of the conclusions in the reviews of the literature at face value, their applications are far down the road. My point with regard to Proposition #10 is limited. Epigenetics properly understood is a vibrant field with findings that have important medical implications. But as far as I can tell, no serious epigeneticist is prepared to defend the notion that we are on the verge of learning how to turn genes on and off and thereby alter behavioral traits in disadvantaged children (or anyone else).
Recapitulation
One of the signature issues dividing conservative and liberal policy analysts for the last 50 years has been the record of outside interventions on behalf of the poor and disadvantaged. From my perch as one of those on the conservative side of the debate, my appraisal is that the liberals have done well in arguing the benefits of income transfers (their downsides notwithstanding) and the conservatives have done well in documenting the overall failure of job training programs, preschool programs, and elementary and secondary educational reforms (their short-term results notwithstanding).
I will reserve my more speculative conclusions for the final chapter. For now, I want to emphasize a few points that can serve as broadly shared benchmarks in assessing the ways in which Proposition #10 might be wrong.
We’ve already tried many, many strategies using the normal tools. For 50 years, social and educational reformers have been coming up with new ideas for interventions. A great many of them have received federal, state, or foundation funding, sometimes lavish funding. As we survey the prospects for better results in the future, it’s not as if there is a backlog of untested bright ideas awaiting their chance.
The modest role of the shared environment seems solidly established. As discussed in chapter 10, the validity of twin studies has survived searching examination of its underlying assumptions. Insofar as violations of those assumptions exist, they probably cause twin studies to slightly understate the role of genes. The role of “genetic nurture” is greater than we formerly knew, but that too is rooted in biology. The harder people have looked for purely environmental causes, the more those causes have turned out to have genetic underpinnings.
The gloomy prospect for systematically affecting the nonshared environment seems vindicated. Nothing in the pipeline shows promise of overturning the negative results to date.
Epigenetics as portrayed in the media has no relevance to Proposition #10 for the foreseeable future. The widespread popular belief that environmental pressures routinely alter gene expression in humans in lasting ways, that such alterations can be deliberately reversed, and that their effects are passed down through generations is wrong.
Proposition #10 will eventually be wrong. On the bright side, we can look at recent developments and see reasons that Proposition #10 cannot be true forever. The obvious example is the positive and even life-changing effects that pharmaceuticals developed during the last few decades have had on some forms of depression and other mental disorders. Who knows what role future drugs might play in enhancing learning and positively affecting personality traits and social behavior? Their effects might be dramatic. At some point, the promise of CRISPR for gene editing will be realized, and all bets about the ability to change substantial numbers of people by design will be off. If we’re looking at the long term, Proposition #10 will certainly be wrong eventually. Not now.
A Personal Interpretation of the Material in Part III
We live in a world where certain kinds of abilities tend to be rewarded with affluence and professional prestige. Those abilities have a substantial genetic component. That genetic component is a matter of luck: We don’t choose our parents.
The genetic component tends to make social class “stickier,” because successful parents pass along not only money but also their talents to their offspring. The inheritance of status is far from an ironclad certainty for any individual—on average, the children of parents with very high IQs and outstanding interpersonal skills will have lower IQs and lesser interpersonal skills than their parents.87 But if we step back and ask where the people with exceptional intelligence and interpersonal skills in the next generation are going to come from, the answer is that they will disproportionately come from high-SES parents.
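The pattern behind that expectation is ordinary regression toward the mean. A rough illustration (my own, not drawn from the source cited in the note, and assuming a midparent-offspring IQ correlation of about 0.5 and a population mean of 100):

    expected child IQ ≈ 100 + 0.5 × (midparent IQ − 100)

On that assumption, parents averaging an IQ of 140 would be expected to have children averaging around 120: still well above the mean, but below the parents.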
Putting these facts together—and I submit that the evidence is conclusive enough to warrant treating them as facts—the implication is that advanced societies have replaced one form of unfairness with another. The old form of unfairness was that talented people were prevented from realizing their potential because of artificial barriers rooted in powerlessness and lack of opportunity. The new form of unfairness is that talent is largely a matter of luck, and the few who are so unusually talented that they rise to the top are the beneficiaries of luck in the genetic lottery.
All of these statements apply to frequency distributions and their effects on society as a whole. For us as individuals, most of life is not genetically determined except at the extremes of success. We can’t all become rich and famous if we try hard enough, but just about all of us can live satisfying lives, and we have many degrees of freedom in reaching that goal.
Part IV
Looking Ahead
The future of the liberal arts lies, therefore, in addressing the fundamental questions of human existence head on, without embarrassment or fear, taking them from the top down in easily understood language, and progressively rearranging them into domains of inquiry that unite the best of science and the humanities at each level of organization in turn. That of course is a very difficult task. But so are cardiac surgery and building space vehicles difficult tasks. Competent people get on with them, because they need to be done.1
—Edward O. Wilson
That’s Edward O. Wilson writing in Consilience: The Unity of Knowledge, the book that inspired this one. Twenty-two years after I first read it, the social sciences are on the cusp of the future that Wilson foresaw. What next?
I should probably duck the question. Another Wilson, the eminent political scientist James Q. Wilson, had a favorite story about his mentor, the equally eminent Edward C. Banfield. “Stop trying to predict the future, Wilson,” Banfield would say to him. “You’re having a hard enough time predicting the past.” Banfield’s excellent advice weighs heavily on me, but I’ll give it a try.
Chapter 14 focuses on the problem of establishing causation with genomic material and describes a great debate about the role of genomics in social science that is already well under way. Its resolution will determine whether the social science revolution is upon us or will be deferred indefinitely.
In chapter 15, I offer reflections and speculations about the material I have covered in Human Diversity, unabashedly going beyond the data.
14
The Shape of the Revolution
I began Human Diversity by asserting that advances in genetics and neuroscience will enable social scientists to take giant strides in understanding how the world works—that we social scientists are like physicists at the outset of the nineteenth century, poised at a moment in history that will produce our own Ampères and Faradays. Can anything more specific be said about how the coming revolution will unfold? The one certainty is that it will be full of surprises. But I can describe a centrally important debate that is already under way and try to tease out some of its implications.
The Difference Between the Genomic and Neuroscientific Revolutions
I focus on the genomics revolution in this chapter because it will have broader direct effects on social science than will developments in neuroscience. To do quantitative neuroscience research, you need to be a neuroscientist and have access to extremely expensive equipment such as MRI machines. The results of the research will inform a variety of social science questions, but the work won’t be done by social scientists. In contrast, the products of the genomics revolution, especially polygenic scores, will be usable by the end of the 2020s by social scientists with no training in genomics, in the same way that IQ scores are used by social scientists with no training in creating IQ tests.
A Place to Stand
Two hallmarks of genuine science are proof of causation and the ability to predict. Until the 1960s, the social sciences barely participated in either. Econometrics and psychometrics were already established disciplines, but for the most part social scientists wrote narratives with simple descriptive statistics. Some of those narratives had deservedly become classics—for example, W. E. B. Du Bois’s The Philadelphia Negro, Robert and Helen Lynd’s Middletown, and Gunnar Myrdal’s An American Dilemma—but social scientists were powerless to analyze causation or make predictions in quantitative ways except in small-sample laboratory psychology experiments with rats and pigeons. The multivariate statistical techniques for dealing with larger and messier problems of human society had been invented, but the computational burdens were too great.
In the early 1960s, computers began to arrive on university campuses. They were slow, clumsy things—your smartphone has orders of magnitude more computing power and storage than the most advanced university computers then.1 Getting access to them was laden with bureaucracy. But they could perform statistical analyses that were too laborious to be done by hand. For the first time, social scientists could explore questions that required adjusting for multiple variables and make cautious quantitative claims about causation. The changes that followed were dramatic and rapid. In 1960, technical journals in sociology and political science were collections of essays. By 1980, they were collections of articles crammed with equations, tables, and graphs. In the subsequent 40 years, the methods have become ever more sophisticated and the statistical packages ever more powerful. The databases on which we run our analyses are better designed, far larger and more numerous, and often downloadable with the click of a mouse.
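To make concrete what “adjusting for multiple variables” means in practice, here is a minimal sketch, entirely my own illustration with hypothetical variable names and simulated data, of the kind of multiple regression that became routine once computers could handle the arithmetic:

    # Minimal illustration (hypothetical variables, simulated data) of
    # "adjusting for multiple variables" with ordinary least squares.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1_000
    years_schooling = rng.normal(13, 2, n)
    parental_income = rng.normal(50, 15, n)   # thousands of dollars (made up)
    test_score = rng.normal(100, 15, n)
    # Simulated outcome: earnings depend on all three predictors plus noise.
    earnings = (5 + 2.0 * years_schooling + 0.3 * parental_income
                + 0.1 * test_score + rng.normal(0, 10, n))

    X = sm.add_constant(np.column_stack([years_schooling, parental_income, test_score]))
    result = sm.OLS(earnings, X).fit()
    # Each coefficient estimates one variable's association with earnings
    # while holding the other variables constant.
    print(result.params)

Calculations of this sort, trivial for any laptop today, were out of reach for hand computation and had to wait for the campus mainframes.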
And yet in one sense we have been stuck where we were in 1960. Archimedes famously promised to move the Earth if he had a long enough lever and a place to stand. When it comes to analyses of human behavior, the social sciences have had a lever for decades, but no secure, solid place to stand.
The debate about nature versus nurture is not just one of many issues in social science. It is fundamental for everything involving human behavior. At the theoretical level, consider economic behavior. To what extent does the assumption that humans are rational actors explain how the market actually works? Answering that question goes to core issues of how human beings function cognitively, which in turn depends on the relative roles of environmental conditions and biologically grounded deviations from rational calculation. At the practical level, almost every social policy analysis, whether it measures the impact of interventions to deter juvenile crime or tries to predict how a piece of legislation will affect the behavior of bankers, ultimately makes sense or not depending on whether its assumptions square with scientific findings about human nature. It is a statement of fact: Most of social science ultimately rests on biology.
But we have had no causally antecedent baseline for analyzing human behavior. Twin studies are a case in point—a powerful method to determine when genes must be involved, but unable to push our understanding beyond heritability estimates. Everything more detailed that we try to say about the role of nature is open to question. Triangulating data can make alternative explanations more or less plausible, but ultimately social scientists have had no place to stand in tackling a central question of their profession: What is innate?
Progress in genetics and neuroscience holds out the prospect—a hope for some, a fear for others—that we can peer into the black box. An intense debate is under way about whether that prospect is real or chimerical.
Interpreting Causation in an Omnigenic, Pleiotropic World
Genetic causation is far more complicated than earlier generations expected. It once seemed straightforward: After the genome had been sequenced, geneticists would slowly assemble a jigsaw puzzle. It would be a complicated one, but eventually they would know which variants caused what outcomes.
As recently as 1999, geneticist Neil Risch, one of the originators of genome-wide analysis, led a team of 31 geneticists trying to find loci affecting autism. They made news by reporting that “the overall distribution of allele sharing was most consistent with a model of ≥ 15 susceptibility loci.”2 They characterized 15 as a large number. Eighteen years later, three of the team’s Stanford colleagues (first author was Evan Boyle) published an article titled “An Expanded View of Complex Traits: From Polygenic to Omnigenic.” In it, they used the Risch study to illustrate how much had changed. A prediction of more than 15 loci for autism “was strikingly high at the time, but seems quaintly low now,” they wrote.3 The intervening years had brought two revolutionary surprises.
The first surprise was that most traits were associated with many, many loci. Effect sizes for common variants were small, and the combined effects of those loci explained only a fraction of predicted genetic variance. As genome-wide analyses became more sophisticated, it was discovered that some loci did have sizable effects, but those loci were usually rare variants—and it began to look as if there might be thousands of them. Almost everything was highly polygenic.
And that’s just counting the SNPs that lie in protein-coding regions. “A second surprise,” the authors wrote, “was that, in contrast to Mendelian diseases—which are largely caused by protein-coding changes—complex traits are mainly driven by noncoding variants that presumably affect gene regulation.”4 Interpreting a statistical association of a certain allele in a certain SNP with the expression of a trait was going to be arduous.
The numbers of loci involved in a given trait could be staggering. Human height is again a good example. The Boyle study estimated that 62 percent of all common SNPs are statistically associated with a nonzero effect on height—millions of SNPs, in other words. Not all of them are causal, but that’s not much comfort. “Under simplifying assumptions,” the authors wrote, “the best-fit curve suggests that ∼3.8% of 1000 Genomes SNPs have causal effects on height.”5 About 100,000.6 The Boyle study concluded that complex traits routinely follow a similar pattern, even if not quite so extreme. That finding led its authors to propose what they called the “omnigenic” model of complex traits, incorporating the evidence that tens of thousands of loci can causally affect a single trait.
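For readers who want the arithmetic behind “about 100,000” spelled out, a rough back-of-the-envelope version (my own, assuming the relevant 1000 Genomes reference set contains on the order of 2.6 million common SNPs, which is what the two published figures jointly imply):

    0.038 × 2,600,000 ≈ 100,000 SNPs with causal effects on height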
Another complexity is that a single SNP can affect many traits. This phenomenon is called pleiotropy. Take, for example, a 2018 study that identified 148 loci affecting general cognitive function. The authors tested the genetic correlations between general cognitive function and 52 health-related traits. Thirty-six of them had statistically significant correlations, including traits with no obvious relationship to cognitive function, such as positive correlations with grip strength and negative correlations with angina, lung cancer, osteoarthritis, and heart attack.7
Pleiotropy is ubiquitous. In 2016, Joseph Pickrell and his colleagues assembled statistics for genome-wide studies of 42 traits or diseases ranging from anthropometric traits such as height and nose size to neurological diseases (e.g., Alzheimer’s, Parkinson’s) to susceptibility to infection (e.g., childhood ear infections, tonsillitis). The number of associations ranged from a low of 5 for age at voice drop in men to over 500 for height.8 Such statistical associations could be coincidental or they could be causal.