The Conservative Sensibility

by George F. Will


  Johnson was neither the first progressive nor the first president to employ the rhetorical trope about life as a race. In 1909, Herbert Croly said it was government’s duty to do what it can to make possible for all “an equal start in the race.”7 The next year, Theodore Roosevelt said: “I know perfectly well that men in a race run at unequal rates of speed. I don’t want the prize to go to the man who is not fast enough to win it on his merits, but I want them to start fair.”8 Barack Obama, in his 2013 State of the Union address, advocated universal pre-K education as a means of making “sure none of our children start the race of life already behind.”9 But the many problems with the “race of life” metaphor begin with the fact that a footrace is a simple and zero-sum event. It is, unlike life, simple because it is about a physical skill—the ability of the contestants’ muscles to propel them forward. It is a zero-sum event because all contestants know, and do not resent, the fact that one person will win and the rest will not. In an individual’s life, however, successes are many and various, and most successes do not subtract from the successes or happiness of others. So “the race of life” is much more complex—and satisfying—than a track meet. And to assign government the open-ended task of enabling everyone to have a “fair” opportunity to “win” is to ensure that government’s size, intrusiveness, and frustration will expand in tandem. By now, the “race of life” phrase has become foggy language that obscures much and clarifies nothing. Nevertheless, Johnson’s 1965 speech, to the writing of which Daniel Patrick Moynihan contributed, was a serious attempt to come to grips with the many complexities of the concept of equal opportunity.

  The problem that Johnson addressed at Howard University had been addressed almost a century earlier by Frederick Douglass. Contemplating the “sudden derangement” of the lives of African-Americans in the South in the aftermath of the Civil War, he said that the ex-slave was “free from the individual master but a slave of society. He had neither money, property, nor friends. He was free from the old plantation, but he had nothing but the dusty road under his feet.… He was turned loose naked, hungry, and destitute to the open sky.”10 And then things got worse. Jim Crow laws created a caste system enforced by extra-judicial violence. The sharecroppers’ system that had replaced slavery had itself been virtual slavery enforced by terror. It was peonage: In 1965, Martin Luther King, Jr., met Alabama sharecroppers who, having been paid all their lives in plantation scrip, had never seen US currency. In the sharecropper society of enveloping despair, there often was no money for weddings and no formal divorces because there were no possessions to divide. All the coming weaknesses of the urban underclass were present: out-of-wedlock childbearing, female-headed households, violent crime, substance abuse (mostly home-brew whiskey, but other drugs, too). And then, one hundred years after Appomattox, there were signs that matters might take a turn for the worse.

  In the mid-1960s, Moynihan noticed something that was especially ominous because it was so counterintuitive. It came to be called “Moynihan’s scissors.” Two lines on a graph had crossed, resembling a scissors’ blades. The descending line depicted the decline in the minority—then overwhelmingly African-American—male unemployment rate. The ascending line depicted the simultaneous rise of new welfare cases. The disappearance of the correlation between improvements in employment and decreased welfare dependency was not just bewildering; it was frightening. It shattered policymakers’ serene and long-held faith in social salvation through better economic incentives and fewer barriers, legal and other, to individual initiative.

  We now know what was happening. In the 1960s, as the civil rights movement and legislation dismantled barriers to opportunity, there was a stunning acceleration of a social development that would cripple the ability of many people, especially young people, to take advantage of expanded opportunities. It was social regression driven by the explosive growth of the number of children growing up in single-parent, overwhelmingly female-headed, families. This meant, among other things, a continually renewed cohort of adolescent males—an inherently turbulent tribe—from homes without fathers. This produced chaotic neighborhoods and schools where the task of maintaining elementary discipline eclipsed that of teaching. Policymakers were confronted with the disconcerting possibility that the decisive factors in social amelioration are no longer economic but cultural—habits, mores, customs, dispositions. This was dismaying for two reasons. First, in the 1960s Americans in general, and economists even more than most Americans, believed that they had at last mastered the management of the modern industrial economy. Economic “fine-tuning,” a favorite phrase of the time, supposedly would henceforth smooth out business cycles and guarantee brisk and steady growth. Second, it is easier for government to remove barriers and alter incentives than to manipulate—“fine-tune,” if you will—family structure. A dawning appreciation of the crucial importance of family structure refuted the assumption that the condition of the poor must improve as macroeconomic conditions improve.

  In January 1964, President Johnson, just two months in office, proclaimed in his first State of the Union address: “This administration today, here and now, declares unconditional war on poverty in America.”11 What meaning were listeners supposed to attach to the word “unconditional”? In an actual war, this would mean that there would be no limits to the material mobilizations and excisions from freedom (conscription, censorship, etc.) that the government might deem exigent. Regarding poverty, however, the language of “unconditional war” betokened an ominous misconception: It was that poverty persisted only because of a weakness of governmental will—only because government had not been sufficiently rigorous in marshalling material resources and undertaking necessary behavior modifications that it knew how to administer.

  Johnson’s language indicated that the war on poverty was to be, as actual wars are, mostly about bringing to bear material resources. The war on poverty’s premise was that Ernest Hemingway was right: In his short story “The Snows of Kilimanjaro,” the character Julian, based on F. Scott Fitzgerald, says, “The very rich are different from you and me,” eliciting the reply, “Yes, they have more money.”12 The war on poverty assumed that the poor and the not poor were alike in all essentials other than material possessions and resources. If this assumption were correct, the war on poverty would have been won by something government does constantly and often well: distributing money. But this assumption was to be shredded by Moynihan’s scissors.

  The social policies put in place in those years were shaped by people who themselves were shaped by the searing experience of material deprivation in the 1930s, especially unemployment. And the premise of those policies was that long-term poverty exists in our wealthy nation only because we have not done what clever and warm-hearted people can do—use government to equitably and rationally distribute our material abundance. The assumption was that poverty could be cured by government fiat, by a redistribution of goods and services that government can orchestrate: money, housing, schools, transportation, jobs. In 1966, Sargent Shriver, a good and intelligent man, was in charge of President Johnson’s war on poverty. While testifying to Congress, Shriver was asked how long it would take to end poverty in America. He answered crisply: ten years. That was not a foolish answer—if the premise of social policy until then was still correct. If, that is, poverty was material poverty, to be cured by material measures. Then came Moynihan’s scissors, which meant that in a portion of the nation, government had to deal with an impacted poverty that was, and is, immune to even powerful and protracted economic growth.

  This portion should have been defined and addressed as posing a problem of class, much as the problems of Europe’s urban industrial working class had been defined and addressed. It was not so defined because of this fact: Although a majority of America’s poor were white, a disproportionate portion were not. And the fact of race was entangled with the most toxic and destructive idea that was ever in general circulation and broad acceptance in America. The idea was—and such is the durability of folly, still is—the “one-drop rule.” Also known as the “one black ancestor rule,” it holds that anyone with any ancestor from sub-Saharan Africa is considered black. This rule simplified the administration of slavery and, later, Jim Crow laws. It also enabled racial thinking with a minimum of thought—no troublesome distinctions about degrees of racial identity.

  Booker T. Washington, Frederick Douglass, Jesse Owens, Roy Campanella, and, of course, Barack Obama each had a white parent. Martin Luther King, Jr. (who had an Irish grandmother and some Indian ancestry), W. E. B. Du Bois, and Malcolm X had some Caucasian ancestry. The NAACP estimates that 70 percent of those who identify themselves as African-American are of mixed racial heritage. It is impossible to know how American history might have been different if the one-drop rule had never thoroughly infected American thinking. Perhaps the “peculiar institution” of slavery required, for its administration, this peculiar rule. If so, trying to imagine American history without the one-drop rule requires imagining American history without slavery. Be that as it may, the one-drop rule gave an artificial clarity and bogus precision to all race talk. This was talk that might have benefited from a large element of blurriness.

  Instead, there has developed a destructive set of behaviors among some adolescent African-American males that is a perverse assertion of racial identity. The African-American sociologist Elijah Anderson says lack of confidence in the police and criminal justice system produces a defensive demeanor of aggression. This demeanor expresses a proclivity for violent self-help in a menacing environment. A readiness to resort to violence is communicated by “facial expression, gait and verbal expressions—all of which are geared mainly to deterring aggression” and to discouraging strangers “from even thinking about testing their manhood.” Inner city youths are apt to acquire faux families in the form of gangs and construct identities based precariously on possessions—sneakers, jackets, jewelry. The taking and defense of these items is a part of a tense and sometimes lethal ritual. It has, Anderson says, a “zero-sum quality” because raising oneself requires putting someone down. Hence the low threshold of violence among people who feel they have no way of gaining or keeping status other than through physical displays. The street is the alternative source of self-esteem because work experiences are so often unsatisfactory, partly because of demeanors and behaviors acquired in the streets. A prickly sensitivity about “respect” causes many black youths to resent entry-level jobs as demeaning. For such a person, work becomes a horizontal experience of movement from one such job to another. And the young person’s “oppositional culture” is reinforced by the lure of the underground economy of drugs. Furthermore, Anderson says, some young people develop “an elaborate ideology in order to justify their criminal adaption” to their situation, an ideology “portraying ‘getting by’ without working as virtuous.”13 And, in an especially cruel outcome, the criminal justice system becomes the source of a welcomed embrace. In Scott Turow’s novel The Laws of Our Fathers, a judge broods about the endless parade of young black defendants before her bench, each an “atom waiting to be part of a molecule”:

  I’ve been struck by how often a simple, childish desire for attention accounts for the presence of many of these young people. Most of these kids grow up feeling utterly disregarded—by fathers who departed, by mothers who are overwhelmed, by teachers with unmanageable classrooms, by a world in which they learn, from the TV set and the rap of the street, they do not count for much. Crime gathers for them, if only momentarily, an impressive audience: the judge who sentences, the lawyer who visits, the cops who hunt them—even the victim who, for an endless terrified moment on the street, could not discount them.14

  The one-drop rule was, in a sense, self-fulfilling. It identified a category of Americans without ambiguity and concentrated attention on the utter uniqueness of the African-American experience. That experience must have something to do with African-Americans’ especially intense susceptibility to the society-wide phenomenon of family disintegration. The problem, again, is behavior. The problem is not material poverty but rather a poverty of intangible social capital—a poverty of inner resources, of the habits, mores, values, customs, and dispositions necessary for an individual to thrive in a complex, urban, industrial society. These missing attributes range from industriousness to sexual continence, from the ability to defer gratification to the determination to abstain from substance abuse. This is a depressing diagnosis because government does not know how to replenish such intangible social capital, once it has been dissipated.

  Any list of the government’s most substantial successes in the twentieth century is apt to be long on material achievements—the Tennessee Valley Authority, the Manhattan and Apollo projects, the Interstate Highway System. Other successes include the civil rights acts, but with those, the government was not required to impart aptitudes to people; rather government removed impediments to the exercise of aptitudes by people. There would be other successes on the list, but most would be like these, programs that delivered clearly defined durable goods. Similarly, the great successes of nineteenth-century governance were in causing canals and railroads to be built and in distributing a natural bounty, land. But government becomes discouraged, and discouraging, when it tries to deal with concepts more complicated than dams and highways, when it tries to deliver “meaningful jobs” for adults, “head starts” for children, and “model cities” for all. Yet increasingly government is asked to deliver such complicated commodities. That is why, if power is the ability to achieve intended effects, government is decreasingly powerful. This thought only seems perverse if you equate size with power, which admittedly would make sense if government were a machine. But as President William Howard Taft wearily exclaimed to no one in particular when a zealous aide began lecturing him about the “machinery” of government, “The young man really thinks it’s a machine!”15

  Government is actually a jumble of fractious, rivalrous power centers that nonetheless sometimes accomplishes marvelous things. Consider two of the most successful government programs of the previous century: Social Security and the GI Bill. Social Security largely eliminated poverty in the elderly portion of the population. But this attack on poverty was not complicated. It involved nothing more arcane than mailing checks to a stable, easily identified population group. The government is good at delivering transfer payments. It is not so adept at delivering services, still less at delivering planned changes in attitudes and behavior. There is, however, one counter-example that is rich in lessons that were not learned by the authors of the Niagara of social policies that came two decades after it.

  The GI Bill was partly intended as a prophylactic measure against prospective discontents: The rise of European fascism had been fueled by the grievances of demobilized servicemen from the First World War. It is not correct to say that hitherto American veterans had been neglected. Far from it. In 1865, one-fifth of Mississippi’s state budget went for artificial limbs to replace limbs left in places like Shiloh and Cold Harbor. And in the 1890s more than 40 percent of the federal budget went to one entitlement: pensions for Civil War veterans. However, veterans of the First World War had not been well cared for, and that mistake would not be repeated.

  The GI Bill’s primary purpose was to jump-start the social project that the war had interrupted, the completion of the New Deal program of social amelioration. This purpose was different, and better, in important ways than the purposes of some social policies in the 1960s. The GI Bill, which subsidized education and home buying for veterans, employed liberal means to achieve profoundly conservative consequences.16 It encouraged a middle-class form of striving to replace the working-class path to upward mobility. In 1940 only one in nine Americans was a high school graduate, there were fewer than 1.5 million college students, and only one in twenty Americans had a college degree. By the spring of 1947, the 1.6 million veterans enrolled in college were 49 percent of all registered students. Sixty percent of the veterans enrolled in science and engineering programs, including many among the 400 whose dormitory at the University of Illinois was the ice rink. That university had been expecting at most 11,000 students, but 15,000 enrolled. By the time the national program ended in 1956, 2.2 million veterans had gone to college, 3.5 million to technical schools, and 700,000 had received off-campus agricultural instruction. By 1990, there were 14 million Americans in college, and one in five Americans had a degree. The GI Bill contributed mightily to making college a middle-class expectation. And in modern America, expectations mutate into entitlements.

  In 1940, two-thirds of Americans were renters. By 1949, 60 percent of Americans were homeowners, partly because of subsidized loans for veterans. A veteran of the Navy Seabees, Bill Levitt, bought broad swaths of Long Island farmland and marketed a basic house for $7,990, a bargain at a time when the average family income was about $2,500. A veteran interviewed for a PBS program on the GI Bill remembered: “No money down. I can afford that. And I get four rooms, and there’s a washing machine. In those days people didn’t have washing machines in their houses. They went to the corner to a launderette. But Levitt houses came with a washing machine! Oh, it was unbelievable!”17 Levitt himself saw a political dimension to Levittown: “No man who owns a house and lot can be a Communist. He has too much to do.”18 He had a point: Would the Winter Palace have been stormed if more people in Petrograd had had leaves to rake and lawns to mow?

 
