
The Heartbeat of Wounded Knee


by David Treuer


  There were other migrations during this period, too. The Great Migration brought African Americans from the South to the North, and from rural areas to urban ones, in search of better employment opportunities and to escape from racism and the constant threat of violence in the form of lynchings and beatings. In 1910, 90 percent of all African Americans in the United States lived in fourteen states, and only 20 percent of them lived in cities. By the outbreak of World War I, more than half a million African Americans had moved to the North (by 1940, a million African Americans had made the journey). Half of those lived in urban areas, and the number continued to increase over the next few decades, especially during and after World War II. By 1970, six million African Americans had migrated North; more than 50 percent of all African Americans lived in the North and more than 80 percent of African Americans lived in cities across the country.

  Many reasons for this were specific to the lives and qualities of life for blacks in the South. The country had changed. How it saw itself had changed, too. In the eighteenth century Thomas Jefferson had vested the American experiment in the flourishing of the yeoman farmer. Farmers, he thought, were “the chosen people of God, if He ever had a chosen people, whose breast He has made His peculiar deposit for substantial and genuine virtue.” But it was the yeoman farmer who had ravaged the Plains and the West by tilling the soil, planting wheat, and grazing cows. When a bubble in the commodities market caused by the First World War resulted in a subsequent drop in prices, farmers tilled and planted more land, raised more cattle. The overproduction of wheat for export and overgrazing, combined with droughts in the 1920s, helped cause the “worst hard time” of the Dust Bowl. It would be foolish to consider American Indian migrations to cities in the 1940s and 1950s only in the context of federal Indian policy: American Indians moved as Americans alongside African Americans and Anglo Americans as part of large, fundamental shifts in the American demographic.

  By the time World War II rolled around, America had become not only a breadbasket but a metal basket as well. American enterprise had been shifting from agriculture to manufacturing, from rural to urban, for decades. New methods of food production and mechanization changed farming as much as industrialization changed everything. Those ongoing shifts reached an apex after the war. Urban populations swelled, with not only migrating African Americans but Anglo Americans as well. America was changing, and Indian life changed with it, even where the government pushed it around with its heavy hand.

  The Kansas Act

  The hand of the government was particularly heavy in Kansas during the 1930s. In 1885, Congress had passed the Major Crimes Act, which stipulated that “major crimes” perpetrated by an Indian against another Indian—which included murder, assault with the intent to kill, rape, larceny, and burglary—would be prosecuted in federal court. The thinking was that while states could prosecute crimes committed by Indians against whites and whites against Indians (though such prosecutions were pretty rare), only the federal government could prosecute crimes committed by Indians against Indians, because only the federal government could regulate tribal and intertribal affairs. The Major Crimes Act itself was a response to a case that had preceded it, Ex Parte Crow Dog. A Lakota named Crow Dog had murdered Spotted Tail, an uncle of the Lakota war chief Crazy Horse, the latest casualty of a feud between the two men that went back to their participation in the Black Hills Wars, the Battle of the Little Bighorn, and their attempts to wrest some power for themselves after the beginning of the reservation period. The tribe tried Crow Dog, found him guilty, and punished him according to tribal law. The Supreme Court eventually ruled that the government couldn’t prosecute Crow Dog. In response Congress passed the Major Crimes Act.

  However, in Kansas, the federal government had abdicated its duty of prosecuting such crimes to the state by the early part of the twentieth century. In line with this precedent, in 1938 a Potawatomi Indian agency superintendent contacted federal legislators in the hopes that Kansas might be granted jurisdiction over the four tribes in Kansas: Potawatomi, Sac and Fox, Kickapoo, and Iowa. He pointed out that as a result of allotment, the majority of Indian land in Kansas had been allotted to tribal members and fell under state jurisdiction. None of the four tribes had functioning tribal courts, and without them, he claimed, the tribes faced an epidemic of so-called lawlessness. Indians who were convicted of crimes were jailed at the expense of the counties or the state of Kansas, and there was considerable expense incurred when Indians had to be transported all the way to federal courts in order to stand trial. And the Indians themselves, he said, wanted the state to take care of prosecutions. Since Kansas already prosecuted crimes that should have fallen under the Major Crimes Act, the new measure would merely confirm “a relationship which the State has willingly assumed, which the Indians have willingly accepted, and which has produced successful results.” It’s hard to know what pressures the superintendent or other officials brought to bear on the tribes, yet all four of them officially supported the measure. The bill was passed as a kind of trial legislation, but within a decade many other states passed similar legislation, including Iowa, North Dakota, California, and New York. The federal government was, slowly and in piecemeal fashion, getting out of the Indian business. Or at least it wanted to.

  Part of the reason the federal government didn’t want to deal with Indians as Indians anymore was because, by its own admission, things had gone poorly for Indians through the 1930s and 1940s. In 1943, the government conducted a new “survey of Indian conditions.” Even though it was less thorough and less impartial than the 1928 Meriam Report, it managed to tell roughly the same story: Indian life was bad, it was hard. Indians were poor. Reservations were rife with disease and deplorable living conditions. Like the Meriam Report, it also had bad things to say about the Office of Indian Affairs and its successor, the Bureau of Indian Affairs: they were doing a horrible job administering to Indians and Indian communities and more often than not made things worse. Despite the similarity of the findings, however, the survey reached a radically different conclusion. The upshot of the Meriam Report had been that Indians could better administer to their own needs and affairs than the Office of Indian Affairs and that tribal governments (until then destroyed or suppressed) should be empowered in Indian communities. The report had led directly to the Indian Reorganization Act, the drafting of tribal constitutions, the creation of Indian-run courts, the strengthening of Indian police, and the like, however faulty and tardy those had been in coming. The 1943 report, however, led the government in the opposite direction.

  The Indian Reorganization Act had been passed only in 1934, so it is hard to know what the surveyors and legislators and commissioners thought would be radically different in the space of only nine years of “self-determination.” In any event, the Senate Committee on Indian Affairs, once again confronted with the “Indian problem” and the graft, corruption, greed, and ineptitude of the Office of Indian Affairs—and supported and emboldened by the Kansas Act and the string of other acts giving states control over Indian lives—reversed itself again. It instituted a new policy, the fifth new policy meant to “help” Indians. After enduring the policies of treaty/reservation, allotment, relocation, and assimilation, the era of termination was about to begin. And as with many previous policies, its author was something of a zealot.

  Termination

  Arthur Vivian Watkins was born in 1886, in Midway, Utah. The eldest of six children and a devout Mormon, he was headed nowhere but up. He attended Brigham Young Academy, then New York University, and then received a law degree from Columbia University in 1912. He moved back to Utah, practiced law, founded a newspaper, and, in 1919, began ranching, eventually owning and running a six-hundred-acre spread by the time he was in his early forties. In 1946, he won a seat in the U.S. Senate. His politics and his religion mixed early and often. As chairman of the Senate Interior Subcommittee on Indian Affairs, he did not hesitate to act on his belief that Indians were being “held back” not only by the BIA but also by laws that treated them as different from other Americans. In his opinion, Indian administration was costly, inefficient, and punitive. What was needed was “the freeing of the Indian from wardship status.” In 1953, he pushed through the legislation (introduced by Henry Jackson) that became known as the Termination Act. It proposed to fix the Indian problem once and for all by making Indians—legally, culturally, and economically—no longer Indians at all. Writing to a church father in 1954, Watkins mused, “The more I go into this Indian problem the more I am convinced that we have made some terrible mistakes in the past. It seems to me that the time has come for us to correct some of these mistakes and help the Indians stand on their own two feet and become a white and delightsome people as the Book of Mormon prophesied they would become. Of course, I realize that the Gospel of Jesus Christ will be the motivating factor, but it is difficult to teach the Gospel when they don’t understand the English language and have had no training in caring for themselves. The Gospel should be a great stimulus and I am longing and praying for the time when the Indians will accept it in overwhelming numbers.”

  Clearly Watkins’s religion played a big part in his policy-making. After the necessary excesses of FDR’s New Deal, the country was shifting toward smaller government. It had a staggering number of expenditures as a result of the Second World War, including a lot of money spent for the reconstruction of a Europe it had helped bomb. It was also becoming more industrial, more urban. At the same time, the newly powerful United States saw its own civil, social, and political institutions as the only effective models in the world and was deeply suspicious of manifestations of collectivity (such as tribes) because of the rising threat of communism: the citizen was king, the commune was suspect. And tribes were communal if nothing else. Nonetheless, termination (like previous policies) required some degree of participation by the tribes themselves.

  The Indian Claims Commission

  The Indian Claims Commission, created in 1946 under the rubric of the Indian Claims Act, is proof (if any was needed) that no good deed goes unpunished.

  The commission was empowered, in large part, because of the overwhelming numbers of Indians who served in World War II and because of the overwhelmingly important nature of their service. Also—after the carnage of the war or because of it—there might have been a feeling, approaching a general feeling, that in a world that could be very cruel it might do to look in one’s own backyard and fix what could be fixed there. But, as with all government initiatives, there was a self-serving component to the Indian Claims Act, too: there were so many claims against the government for wrongful taking of land through force, coercion, removal—takings in violation of not only treaty rights but human rights more generally—and there was no good process for dealing with all of them.

  A century had passed since Congress had created the Court of Claims, allowing Indians to make cases against the federal government. But the Court of Claims specifically excluded tribes from bringing claims against the government because of treaty violation or abrogation. As “domestic dependent nations,” with status much like that of foreign nations, each tribe had to obtain a “special jurisdictional act” from Congress to present its case to the Court of Claims. This was time-consuming and expensive, and also unfair to tribes with less clout. Often when a jurisdictional act of Congress gave a tribe the right to present a claim in the Court of Claims, the basis of the suit was so narrow that there was no real way to consider the full scope of the grievance. And such suits really worked, if they worked at all, only for monetary claims for lost land.

  For instance, the Ho-Chunk (Winnebago) filed a claim in the Court of Claims in 1928. They had been removed from Wisconsin to Nebraska, and many of the Ho-Chunk had died of starvation and exposure en route or once they arrived. Rather than suffer that fate in a strange land, many walked back to Wisconsin. Others took refuge among the Omaha, where, eventually, a reservation was established for them. But the tribe was in disarray: their leaders were scattered and dead, their social and religious institutions fractured and scrambled. In 1942, fourteen years after they filed their claim, it was dismissed by the Court of Claims. The court explained that it was dismissing the case not because the Ho-Chunk hadn’t experienced serious harm, but because it could not set a value on what they had lost because it could not determine comparable value between their old home and their new one. The Ho-Chunk needed a different process and a different venue if they wanted the wrongs done to them to be addressed more fully.

  Moreover, by the 1940s, tribes across the country were filing more claims than ever before. Tribal citizens were getting better at understanding their rights, and tribal governments were getting better at advocating for them. By 1946, there had been more than two hundred claims filed in the United States Court of Claims, mostly for damages for unlawful land seizure or for criminally low prices paid for land taken legally, but only twenty-nine claims had been addressed by the court. The majority of the rest had been dismissed, largely on technicalities. The overwhelming numbers of Indians who served in World War II, and the important nature of their service, had resulted in some sense of obligation to Indians on the part of the government. The government needed a process by which to hear claims, but it also wanted a process that would put an end to them, once and for all. The Indian Claims Commission Act of 1946 was the result, and “finality” was its watchword.

  The Indian Claims Commission expanded the grounds for a suit to include five categories of “wrongs”:

  (1) claims in law or equity arising under the Constitution, laws, treaties of the United States, and Executive orders of the President; (2) all other claims in law or equity, including those sounding in tort, with respect to which the claimant would have been entitled to sue in a court of the United States if the United States was subject to suit; (3) claims which would result if the treaties, contracts, and agreements between the claimant and the United States were revised on the ground of fraud, duress, unconscionable consideration, mutual or unilateral mistake, whether of law or fact, or any other ground cognizable by a court of equity; (4) claims arising from the taking by the United States, whether as the result of a treaty of cession or otherwise, of lands owned or occupied by the claimant without the payment for such lands of compensation agreed to by the claimant; and (5) claims based upon fair and honorable dealings that are not recognized by any existing rule of law or equity. No claim accruing after the date of the approval of this Act shall be considered by the Commission.

  So the Claims Commission intended to address all the wrongs done to Indians by monetizing those damages, with a finish line in sight: all claims were supposed to be filed within five years of the passage of the act. As it turned out, the date would be extended by another five years as tribes—understaffed and often without their own legal teams—limped toward the finish line. The commission wouldn’t finish dealing with the claims, which numbered in the hundreds, until the 1970s. The commission was extended from 1976 to 1978, after which point the claims were transferred to the U.S. Court of Claims. The last claim on the docket wasn’t finalized until 2006.

  While the Indian Claims Commission Act represented a “broad waiver of the United States’ sovereign immunity” and was remedial in nature, it was also coercive. Cash-strapped tribes with no infrastructure and no tax base—and with crushing unemployment and chronic poverty—needed the money for sure (even if most of it would take years, if not decades, to arrive). But it was a mistake to think that the magnitude and kinds of loss could be monetized. It required a very narrow sense of reparations to think that the loss of land, which was at the heart of the Claims Commission mission, was only an economic loss and could be adequately addressed by cash payments. The loss of land had resulted in a loss of life and culture, a loss of a people’s ability to be a people in the manner it understood itself. Many tribes—particularly tribes in the Southwest—had ceremonial lives that revolved around sacred sites that had irrevocably passed into private ownership. Tribes were left on small islands holding cash they couldn’t use to reconstitute their cultures, their ceremonies, and their homelands.

  To make matters worse, the Indian Claims Commission was used as a lever to move tribes into the next phase of federal policy. And a wind that blows one way one minute and the other way the next is not a particularly good wind for sailing. The broad demographic shifts brought about by World War II—the Great Migration, the shift from farming to manufacturing, the trend away from the broad federalism of the New Deal toward anti-collectivism and private enterprise under Truman and Eisenhower—solidified the belief in majority rule. Democracy, understood as the supremacy of the individual on one hand and market capitalism on the other, was seen as self-evidently not just the best way but the only way. The federal government’s relationship with tribes followed this trend. As in the period from the 1890s through the 1930s, it became federal policy to try to absorb Indians into the mainstream, whether they wanted to be absorbed or not. But instead of “encouraging” Indians to become American via institutions like the boarding schools and allotments, now official thinking had it that institutions like the tribes themselves, as well as the BIA and the Indian Health Service, were blocking Indians from what would otherwise be an inevitable gravitational assimilation into the larger current of American life. It does not seem to have occurred to Watkins and others like him that the “Indian problem” was and had always been a “federal government problem.”
