by Joshua Zeitz
Since 1790, when Congress passed the nation’s first immigration act, prevailing law had restricted naturalized citizenship to “free white persons.” What constituted a white person was by no means clear. While in later years it seemed intuitive to classify German, Irish, or Italian Americans as white, in the mid-nineteenth century many native-born Protestants regarded newcomers as unwhite and therefore unfit for citizenship. In establishment outlets like Harper’s Magazine, editorialists lampooned Irish immigrants as drunken, lazy, and idle, while cartoonists portrayed them as possessing apelike, subhuman physical attributes.
With “whiteness” such a crucial attribute, many immigrants—including Irish Catholics in large northeastern cities—worked aggressively to draw a sharp distinction between themselves and free African Americans. Blackface minstrelsy, a popular form of entertainment among new immigrants, enabled racially suspect Europeans to establish that they were, in fact, white (after all, only a white person needed to “black up” to play the part of an African American) and to project onto African Americans the same vicious stereotypes that nativists ascribed to Catholic newcomers.
By the late nineteenth century, America’s new cultural and civic diversity—a result of immigration from southern Europe, eastern Europe, and Asia and the emancipation of black slaves—gradually gave rise to a popular classification of humans along hierarchical lines. In 1911, a government commission impaneled to investigate the potential effects of mass immigration identified “45 races or peoples among immigrants coming to the United States, and of these 36 are indigenous to Europe.” Bohemians, the report determined, were “the most advanced of all” Slavic racial groups. “The ancient Greeks were preeminent in philosophy and science, a position not generally accredited to the modern Greeks as a race . . . they compare with the Hebrew race as the best traders of the Orient.” Further, “the Gypsy resents the restraint of higher social organization . . . to him laws and statutes are persecutions to be evaded.” The southern Italian was “an individualist having little adaptability to highly organized society.” Whereas German and Irish newcomers had seemed distinctly unfit for citizenship in the mid-nineteenth century, scientific racial analysis now considered them a higher category of white than southern and eastern European newcomers, most of whom were Catholic or Jewish.
The era’s nativism rested on a bedrock of labor competition, religious intolerance, and fear of anarchism and communism. But scientific racism supplied its intellectual justification. “The welfare of the United States demands that the door should be closed to immigrants for a time,” a leading congressman declared at the time. “We are being made a dumping ground for the human wreckage of [World War I].” Immigrants from southern Europe and Asia suffered “inborn socially inadequate qualities,” a prominent scientist offered. They were blamed for rising urban crime, social disorder, and neighborhood decay. Such ideas formed the foundation of the Immigration Act of 1924, which limited the annual number of immigrants from any given country to just 2 percent of the total number of people born in that country and residing in the United States in 1890. By using 1890 as a benchmark, the law favored older immigrant groups from northern and central Europe. For Jews, Italians, Greeks, Slavs, Poles, Croatians, and Russians, the door effectively swung shut. (For the Chinese, that door had been closed since 1882, when Congress passed the Chinese Exclusion Act.)
The year 1924 was the high-water mark for scientific racism, which became increasingly unpopular in Depression-era America. The Columbia University anthropologist Franz Boas and his protégées Margaret Mead and Ruth Benedict were among the first to blast away at the edifice of “race,” proving in a series of devastating monographs and articles that human behavior and intelligence were products of environment, not blood, and that no “pure” races could even be said to exist. This shift in thinking also emerged as a response to the excesses of Nazi Germany. Although many Americans in the 1920s regarded eugenics and other forms of racial engineering as good science and solid public policy, revelations of Germany’s euthanasia program targeting mentally and physically handicapped children inspired a scientific repudiation of eugenics in the United States. More generally, Nazi race policy and anti-Semitism delegitimized racialist thinking in nearly all of its popular incarnations, influencing works like Ashley Montagu’s Man’s Most Dangerous Myth: The Fallacy of Race, a celebrated volume that argued race was a scientifically “artificial” and “meaningless” invention. In 1938, the American Psychological Association and the American Anthropological Association broke new ground in formally repudiating scientific racism. Other academic organizations followed, and ever since, biologists, chemists, and geneticists have steadfastly maintained that race is, at best, a social phenomenon; at worst, a lie. Certainly, few scholars of any repute continue to believe that race is determinative of behavior, intellectual endowment, and physical capacity.
Even as scientific racism came under cultural and academic assault, the 1924 law restricting immigration remained on the books. Ironically, it might have aided the mainstreaming of ethnic Americans by cutting off the supply of newcomers who spoke in foreign tongues, wore strange clothing, and listened to unfamiliar music. The new faces of ethnic America were the children of immigrants who grew up speaking accentless English, became fanatical baseball fans, and wore American garb. That generation’s contribution during World War II drove the final nail into the coffin of scientific racism.
In the postwar years, northern liberals clamored for both civil rights and immigration reform. As Jim Crow came under increased fire, it became possible to attack the national origins standard (the very backbone of the 1924 law) for what it was: in the words of the Democratic Party platform in 1960, “a policy of deliberate discrimination” that “contradicts the founding principles of this nation.” The same coalition of churches, liberal organizations, and labor and industry groups that championed the Civil Rights Act of 1964 also backed immigration reform.
In the wake of Republican Barry Goldwater’s defeat in 1964, Democrats seized the opportunity to pass the Immigration and Nationality Act, landmark legislation that did not take effect until June 1968—the waning days of LBJ’s presidency—but that fundamentally transformed the country in ways that even the president might not have anticipated. The new law favored newcomers with specialized skills and education or existing family relationships with American citizens or residents, and it replaced the national origins standard with annual hemispheric limits: 170,000 immigrants from the Eastern Hemisphere, 120,000 from the Western Hemisphere—a breakdown that reflected lingering bias toward Europe. (That provision was later eliminated and replaced with a simple annual cap of 290,000 immigrants.) Critically, the bill exempted from these caps all immigrants with immediate family members in the United States.
In signing the law, Johnson affirmed that the national origins standard violated “the basic principle of American democracy—the principle that values and rewards each man on the basis of his merit as a man.” While the bill’s champions, including LBJ and New York’s congressman Emanuel Celler, were committed to ethnic and racial pluralism, they anticipated that most of its beneficiaries would hail from Europe. Celler assured his colleagues that as a matter of pure numbers “the effect of the bill on our population would be insignificant,” while Edward Kennedy, JFK’s youngest brother and the bill’s floor manager in the Senate, maintained that the “ethnic mix of this country will not be upset,” because the legislation would “not inundate America with immigrants from any one country or area, or the most overpopulated and economically deprived nations of Africa and Asia.” The Wall Street Journal agreed, arguing that the family unification provision “insured that the new immigration pattern would not stray radically from the old one.”
The story played itself out differently, and in ways that neither Kennedy nor Celler, Lyndon Johnson nor the Wall Street Journal, anticipated. As Europe’s economy finally emerged from the ashes of World War II, fewer
residents of Ireland, Italy, or Germany moved to the United States, while those residing in the Soviet bloc found it all but impossible to try. But tens of thousands of educated professionals—lawyers, doctors, engineers, scientists—from Asia and Central America did avail themselves of new opportunities and put down legal roots in the United States. So did tens of thousands of refugees from Cuba, Vietnam, and other repressive regimes. By 1972, the Association of American Medical Colleges found that 46 percent of all licensed physicians were foreign-born, with large numbers emigrating from India, the Philippines, Korea, Iran, Thailand, Pakistan, and China. Because the law exempted many categories of family members from the hemispheric caps, these new citizens were soon able to bring their relatives to join them. Many more immigrants than expected thus came to the United States, and they made the population far more diverse. In the first decade after the bill’s enactment, an average of 100,000 legal immigrants above the cap relocated annually to the United States; by 1980, the annual number had soared to 730,000. Fifty years after the bill’s passage, the foreign-born made up roughly 13 percent of the total population, approaching the all-time high of 14.7 percent in 1910. Another 20 percent were born in the United States but had at least one foreign-born parent, bringing the proportion of first- and second-generation Americans to near-historic heights. Unlike in earlier waves, 90 percent of new Americans after 1965 hailed from outside Europe—from countries like Mexico, Brazil, the Philippines, Korea, Cuba, Taiwan, India, and the Dominican Republic. As a consequence, demographers projected that by 2050 non-Hispanic white Americans would constitute less than half of the U.S. population.
The bill itself did not take effect until shortly before Johnson left office. It was the rare example of Great Society legislation whose long-term consequences neither the president nor his White House staff could foresee. Yet alongside the administration’s vigorous enforcement of civil rights laws, immigration reform catalyzed a new electoral alignment—which some political scientists have dubbed the “Great Society coalition”—that comprised African Americans, Latinos, and well-educated white voters (many of whom unknowingly benefited from the legacy of Johnson’s higher education policies). This coalition, though not ascendant for at least a quarter century after LBJ left office, would in later years prove a powerful counterweight to the forces of white backlash. On October 3, 1965, the day that LBJ signed the immigration act into law, that ascendancy lay far in the distance. The politics of white retrenchment, however, were just around the corner.
• • • • •
The magnitude of Johnson’s legislative and administrative accomplishment in 1965 and 1966 was astonishing by comparison with any earlier or later Congress. In addition to civil rights and voting rights, immigration reform, the education act, Medicare, and Medicaid, LBJ’s White House, with the full support of an overwhelmingly Democratic Congress, unleashed a tidal wave of liberal reform. Some of these measures exerted a lasting and positive influence on American life. Others would prove noble but failed experiments.
When he launched the Great Society at the University of Michigan, Johnson emphasized the imperative of ameliorating conditions in America’s cities. At his urging, in 1965 Congress created a new cabinet agency, the Department of Housing and Urban Development (HUD), and assigned it a broad mandate to oversee not only federal housing initiatives but the delivery of better social services, infrastructure assistance—including mundane but critical areas like water and sewage systems—and slum rehabilitation. To lead the new department, Johnson appointed Robert Weaver, the administrator of the Housing and Home Finance Agency and now the nation’s first black cabinet member. A year later, after considerable pressure from the White House, Congress authorized the Model Cities program, originally intended to target aid to six demonstration cities—Washington, Detroit, Chicago, Philadelphia, Houston, and Los Angeles—where HUD would work in close coordination with municipal, civic, and religious leaders to overhaul schools, hospitals and health centers, transportation services, and low-income and middle-class housing. The program would bring to bear “all of the techniques and talents within our society on the crisis of the American city.”
In the final analysis, politics proved the undoing of the Model Cities mission. Members of Congress did not permit the administration to discover what a handful of demonstration cities could accomplish with a sizable, narrowly targeted infusion of cash and strong benchmarks and oversight from Washington. Instead, explained the New York Times several years later, “every legislator had to have a slice of the pork. The few cities became 75, then 150. Eventually the money was shoveled around only a half-inch deep anywhere. The program was destined to fail.” Despite the failure of the Model Cities program, the Johnson administration gave American urban dwellers a seat at the cabinet table and ensured that their interests would at last receive the same consideration that rural Americans had long enjoyed through the Department of Agriculture and the Department of the Interior.
The Johnson administration’s education initiatives extended far beyond aid to primary and secondary schools. A graduate of a public university himself, LBJ believed passionately in the cause of higher education and, like many liberal Democrats of his age, had long pressed for stronger federal assistance to college students. In the early 1960s, institutions were under mounting strain to accommodate the rising generation of baby boomers who flooded American campuses. At half of the country’s four-year colleges and over 80 percent of junior colleges, libraries fell short of minimal standards. Moreover, few poor children were able to afford tuition and fees, resulting in a notable disparity between the proportion of middle-class (78 percent) and working-class (33 percent) high school graduates who attended college. Over one-fifth of students who began postsecondary studies had to drop out for want of financial wherewithal. For many black families, whose median income was little more than half that of white families, postsecondary education was essentially out of reach. Average tuition, room, and board at a four-year public university ate up 14.5 percent of a typical white family’s income but 26.3 percent of income for a typical black family. The corresponding figures for private four-year institutions were even more staggering: 30.4 percent and 55.1 percent, respectively.
With faith that access to a quality education would help poor people achieve their share of American prosperity, the Johnson White House secured passage in 1965 of the Higher Education Act (HEA). The legislation appropriated funds to support library expansion at private and public universities and established work-study, grant, and reduced-interest loan programs to make tuition and fees affordable for poor and working-class students.
In real (inflation-adjusted) dollars, the federal government’s annual spending on postsecondary student expenses increased by over 10,000 percent between 1963 and 2010. When Johnson signed the Higher Education Act into law, he intended its benefits to flow primarily to poor people. “The important role of the federal government is somehow to do something for the people who are down and out,” he affirmed, “and that’s where its major energy in education ought to go.” Despite this original intent, as tuition and fees at both public and private institutions began to climb sharply in the 1980s, successive Congresses and administrations gradually expanded eligibility to include more middle-income students. Upon signing a reauthorization of HEA in 1992, President George H. W. Bush proudly asserted that it “gives a hand up to lower income students who need help the most. But it also reaches out into the middle-income families, the ones who skipped a vacation and drove the old clunker so that their kids could go to college.” Through various subtle means—including the exemption of home equity in calculating a family’s eligibility for assistance and the gradual shift from grants (which made up roughly half of all student aid in 1965 but today account for less than 30 percent) to loans—the law gradually morphed into a middle-class subsidy that also happened to help those poor families able to capture its benefits.
Despite this departure from the
bill’s founding principles, Johnson left a vital legacy. Today, federal grants account for 10 percent of the total operating budget of all four-year institutions, while subsidized loans underwrite a substantial portion of the tuition and fees that universities and colleges collect. Just as critics identified Medicare as a driver of medical inflation, skeptics of federal education policy cited federal grants and loans—particularly those that flowed to middle-income students—as a primary culprit in the skyrocketing cost of college. Yet as higher education emerged as a gateway to employment in the modern information economy, the role of federal support in broadening access—particularly for groups traditionally locked out—was undeniable.
• • • • •
One indisputable triumph for the Great Society was its lasting contribution to providing food security to the most vulnerable Americans. When a team of researchers surveyed hunger in poor communities in the early 1960s, they found “children whose nutritional and medical conditions we can only describe as shocking—even to a group of physicians whose work involves daily confrontation with disease and suffering. In child after child we saw: evidence of vitamin and mineral deficiencies . . . in boys and girls in every county we visited, obvious evidence of severe malnutrition, with injury to body’s tissues—its muscles, bones and skin as well as an associated psychological state of fatigue, listlessness, and exhaustion. . . . We saw homes with children who are lucky to eat one meal a day . . . who don’t get to drink milk, don’t get to eat fruit, green vegetables, or meat.”
During World War II, roughly half of all counties in the United States participated in a federally subsidized food stamp program that enabled poor people to purchase discounted coupons that could be redeemed for food. The program ended in 1943. Under John Kennedy, the government reinstituted it as a pilot initiative in just eight regions of the country. The 1964 Food Stamp Act took what had been a limited test program and created a new entitlement. Under its provisions, poor families were eligible to purchase cut-price food stamps. The requirement that recipients pay for the coupons was consistent with LBJ’s mandate that there be “no doles” in his War on Poverty, but more fundamentally it represented the administration’s belief that the poor required qualitative assistance to realize their fair portion of the nation’s abundance, not cash transfers. Yet when Congress eliminated the purchase requirement, the food stamp program—whose rolls grew from roughly 500,000 recipients in 1965 to roughly 13 million in 1974—evolved into both a nutritional assistance program and a hidden form of income support. Because food stamps qualify in budgetary terms as a “near-cash” benefit—meaning they are not technically income—the federal government does not include their value when measuring poverty. Under the alternative Supplemental Poverty Measure (SPM), which counts the value of such benefits as part of household income, census figures for 2010 showed that food stamps cut the child poverty rate by 3 percentage points. The program also accomplished much of its original mandate to help alleviate hunger and nutritional deficiency among poor people. Though some families may use food stamps—later renamed SNAP (Supplemental Nutrition Assistance Program)—to defray grocery purchases they would otherwise have made, studies have established that the program afforded poor people, and particularly poor children, greater access to nutritious food.