The Twilight of the American Enlightenment

by George Marsden


  In fact, efforts at incremental change only increased the backlash among southern white racists, and it took the African American protest movement to turn the tide. Martin Luther King Jr.’s effective leadership in that movement was built around a combination of the fervor of southern black revivalism and the power of nonviolent resistance. What might not be quite as evident is that the doctrine of nonviolent resistance was based on a realistic view of human nature: the recognition that power must be met with power. King recognized that a people without political power could nonetheless mobilize their moral power if they were willing to suffer in the cause of justice. To get that to happen, he drew on a tradition of fervor in the black churches.24

  It needs to be added that, underlying these essential factors, what gave such widely compelling force to King’s leadership and oratory was his bedrock conviction that moral law was built into the universe. In this he was different from most of the liberal proponents of civil rights. His conviction was grounded in his Christian beliefs, which in turn were shaped by the “personalist” theology he had studied at Boston University. Personalism was an idealist philosophy based on the premise that God’s person was the center and the source of reality, and hence that human personality had moral significance in that it participated in that most basic aspect of reality. King said that personalism helped him to sustain a faith in a personal God. Integral to that faith was the conviction that God had “placed within the very structure of the universe certain absolute moral laws.”25

  Everything else that King advocated for the movement followed from this confidence in a moral order. King believed that God was working in history toward bringing justice and his kingdom, although the process was not direct or inevitable, but involved human agency in combating evil. The power of nonviolent resistance was a moral power that was built around the belief that all people have some degree of moral sensibility, and so moral suasion is a real form of power. Further, central to all moral actions must be the recognition that all persons, even one’s enemies, are of infinite worth, because they are created in the image of God. Since personality is at the center of reality, history cannot be explained simply by economic forces, but is more basically a matter of personal and moral relationships. The goal of society, King proclaimed, ought to be a “beloved community” in which “brotherhood is a reality.” King blended his progressive idealism with the American political heritage (“let freedom ring”) in such a way as to revive the founding ideals with a latter-day force.26

  Appeal to a higher moral law was the centerpiece of King’s 1963 “Letter from the Birmingham Jail,” in which he admonished moderate white clergy for thinking it “unwise and untimely” to resist unjust laws. For such an audience King invoked St. Augustine to argue that “an unjust law is no law at all,” and St. Thomas Aquinas to say that “an unjust law is a human law that is not rooted in eternal and natural law.” King elaborated his personalist test for what was rooted in eternal or natural law: “Any law that uplifts human personality is just. Any law that degrades human personality is unjust.” By that standard, “all segregation laws are unjust because segregation distorts the soul and damages the personality.”27

  King’s invocation of objective moral law casts light upon the era in a couple of revealing ironies. Progressive observers celebrated King’s stance and agreed that the segregation laws of the American South were self-evidently unjust. Yet the whole structure of King’s thought and the motivation for his action rested on theistic and higher-law premises that many of those same observers believed to be self-evidently untrue. Secular liberal pragmatists could share in King’s moral indignation even while they lacked his rationale for universalizing such moral claims.

  The other irony is that, just as the ideals of universal justice, equality, mutuality, peace, and integrated brotherhood were burning the brightest, they were lighting the torches of identity politics. By the time of King’s death in 1968, the ideal of one American, integrated, consensus-based community had already flamed out, even though not everyone was ready to recognize that. Frustrated hopes had already turned portions of the African American community to Black Power and Black Pride. The African American civil rights movement became in some respects a model for other rights movements—particularly women’s rights, gay rights, and rights for other minorities—but, although some of the rhetoric of justice and equality was similar, it was now reshaped by the frameworks of identity politics. Whatever the merits of these causes, rather than grounding reforms in a universalized moral order, their outlooks were often frankly shaped by perceptions and experiences unique to their group. American founding ideals, such as the self-evidence of the rights to freedom and equality, were still often proclaimed as though they were moral absolutes, but they glittered as fragments in the ruins of the dream of shaping a nation on the basis of a universal moral order.

  FOUR

  The Problem of Authority: The Two Masters

  If natural law could not be revived as a shared basis for mainstream moral authority, where might such authority come from? There were, of course, shared American traditions, such as liberty and justice, national loyalty, and equal opportunity, that carried some presumptive weight. But by what standards was one to determine the meanings of these very broad concepts when they conflicted or were matters of dispute? Or, when it came to what might be taught in the universities, or in the public schools, or in the magazines, advice books, or guides to life, what were the most commonly shared cultural authorities?

  At all these levels of mainstream American life, from the highest intellectual forums to the most practical everyday advice columns, two such authorities were almost universally celebrated: the authority of the scientific method and the authority of the autonomous individual. If you were in a public setting in the 1950s, two of the things you could say that would win the widest possible assent were, first, that one ought to be scientific, and, second, that one ought to be true to oneself. But despite the immense acclaim for each of these ideals, there was also a lurking question as to whether these two great authorities, the one objective and the other subjective, were really compatible with each other. The grand hope in the Western world in the eighteenth century was that they would be—that enlightened science would establish principles of individual freedom. But since then, from the romanticism of the nineteenth century through the scientifically augmented totalitarianism of the twentieth, there were many reasons to suppose that they might be in conflict. Such debates were still going on in the mid-twentieth century. Yet, despite such arguments, when it came to the practical aspects of life, the most common and influential cultural attitude was that science and freedom were complementary rather than contradictory.

  As one might expect, the points of tension were most sharply defined in the highly intellectual field of philosophy. On the side of freedom and the individual was the vogue of existentialism in midcentury American thought. Existentialism was largely imported from continental Europe, and it had the appeal of offering a frank look at the human predicament. In the late 1950s and early 1960s, existentialism was popular among sophisticated college students, beatniks, and others looking for alternatives to American conformity, complacency, and scientism.

  One can quickly gain an appreciation for the appeal of existentialism as an expression of dissent from the mainstream by looking at what became the canonical American summation of the outlook, William Barrett’s 1958 volume Irrational Man. Barrett, a professor of philosophy at New York University, summarized existentialism and its critique of Western civilization’s dependence on rationality with compelling clarity.

  It took the disasters of the twentieth century, Barrett observed, for modern Europeans to recognize that the rational ordering of society and hopes for material progress “had rested, like everything human, upon a void.” The modern person became a stranger to himself: “He saw that his rational and enlightened philosophy could no longer console him with the assurance that it satisfactorily answered the question What is man?” At the heart of existentialism, which Barrett illustrated in the philosophies of Søren Kierkegaard, Friedrich Nietzsche, Martin Heidegger, and Jean-Paul Sartre, was the project of facing the stark reality of one’s own finitude, “the impotence of reason when confronted with the depths of existence, the threat of Nothingness and the solitary and unsheltered condition of the individual before this threat.” The emphasis on human finitude had the appeal of countering the “can do” optimism about human abilities so common in most homegrown American outlooks.

  Barrett characterized existentialism as “the counter-Enlightenment come at last to philosophical expression,” saying that “it demonstrates that the ideology of the enlightenment is thin, abstract, and therefore dangerous.” The rationality and technological reasoning of the modern post-enlightenment world had not freed people, but detached them from meaningful identities. The “lonely crowd” had been discovered by Kierkegaard long before it was documented by David Riesman. Contrary to the enlightenment, which put the essence of man in his rationality, existentialism dealt with “the whole man,” including such “unpleasant things as death, anxiety, guilt, fear and trembling, and despair.” Modern man had tried to deny these realities or to explain them away through psychoanalysis. “We are still so rooted in the enlightenment—or uprooted in it—that these unpleasant aspects of life are like the Furies for us: hostile forces from which we would escape.” The lesson of the twentieth century was that even “the rationalism of the enlightenment will have to recognize that at the very heart of its light is also darkness.”

  Despite this realism regarding the human condition, Barrett’s existentialist solution otherwise fit much of the spirit of the time in emphasizing the primacy of the self. The difference from easy American optimism was, as he put it, “if, as the Existentialists hold, an authentic life is not handed to us on a platter, but involves our own act of self-determination (self-finitization) within our time and place, then we have got to know and face up to that time, both in its threats and its promises.”

  Existentialism represented one pole of philosophy and of midcentury culture and the arts—the pole celebrating individual freedom, self-determination, and even irrationality. Almost all of the rest of professional American philosophy clustered around the other pole, which flew the flag of rationality based on the scientific ideal. William Barrett was especially scathing in characterizing such tendencies among his fellow philosophers. In fact, if one wanted guidance regarding the meaning of life, he suggested, one of the least likely places to find it would be among professional philosophers. The dominant philosophies in American university philosophy departments, he observed, were examples of what had gone wrong in modern intellectual life. “The modern university,” Barrett declared, “is as much an expression of the specialization of the age as is the modern factory.” Modern knowledge had advanced through scientific specialization. Specialists focused on increasingly narrow and technical issues that only other specialists could understand. Philosophers, believing they needed to carve out a place for themselves in this scheme of things, had imitated the scientists in such specialization. Unlike physicists, however, whose retreat into esoteric specialization could eventually result in something as earthshaking as the production of the bomb, “the philosopher has no such explosive effect upon the life of his time.” Rather, philosophers had given up any traditional role of being the sages who helped guide society and instead were finding that they had less and less influence on anyone beyond other philosophers. “Their disputes have become disputes among themselves,” wrote Barrett.1

  Barrett’s complaint was based on the reality that American professional philosophy had come to be dominated by technical analytic philosophy, which indeed illustrated the disconnect between scientific models for knowledge and humanistic goals. These “logical positivists” were attempting to find definitive criteria for all genuine knowledge by carefully analyzing the differences between the language of hard empirical science and the less precise language used regarding ethics, art, or religion. The project of strict language analysis was developed by Bertrand Russell and G. E. Moore at Cambridge University and in the early work of Russell’s most brilliant student, Ludwig Wittgenstein, in the early 1920s. One can gain a sense of what was involved by looking at a relatively accessible encapsulation in A. J. Ayer’s Language, Truth, and Logic. First published in Great Britain in 1936, Ayer’s overview was still widely used as a text in American colleges in the 1950s.2

  According to Ayer, philosophy was a specialized branch of knowledge that was distinguishable from natural science in that it dealt not with empirical verification, but with the logic of propositions that might be proven true. For statements to be meaningful, they needed to meet one of two criteria: either they were tautologies, or they could be empirically verified. If nontautological statements were not, at least in principle, subject to empirical verification, they were, strictly speaking, meaningless. With this breathtaking victory by definition, Ayer could sweep away centuries of metaphysical discussions as “superstitions” and dismiss the possibility that theological statements could make truth claims about God. For instance, a seemingly empirical claim of a personal encounter with a deity told us only about the mental state of the observer; it said nothing about the existence of a transcendent being, because it was a statement that had “no literal significance.” Even an ethical statement, such as, “You acted wrongly in stealing that money,” was a “pseudo-concept” with no factual content, and nothing more than an “emotive” expression of a moral sentiment. Logical positivists were not saying that theological, or ethical, or aesthetic statements were pure gibberish and needed to be entirely abandoned. They were claiming only that these were not the sorts of statements that could be used to make true-false claims.3

  By the postwar era, many of the analytic philosophers, most notably Wittgenstein himself, were repudiating the strictest early logical-positivist criteria as too rigid and as leading to a sort of self-inflicted reductio ad absurdum. Nonetheless, logical positivism had helped to set the agenda of professional philosophy as a narrow specialization dealing with language and logic. Its purpose was to determine the most reliable foundations for a science of knowledge on which other sciences ought to be built. This project has since come to be called “classical foundationalism” by its many critics.4 In terms of a wider cultural analysis, one can see the dominance of analytical philosophy in American and British academia as a notable instance of that side of modern culture that was attempting to preserve the enlightenment ideal, an ideal that focused on developing principles and procedures of rationality that ought to command the assent of all open-minded hearers. Logical positivism preserved that ideal of finding common ground, but also pointed to the problem involved: strictly speaking (as analytical philosophers were), such agreement could only be established by severely limiting the range of rational discourse, so much so that there was almost nothing left worth talking about. No wonder, as William Barrett pointed out, that professional philosophy was one of the last places to go if one were searching for the meaning of life.

  Furthermore, as Barrett also observed, the differentiation and specialization of modern intellectual life meant that philosophers were not providing foundations for any thought beyond their own discipline. An intelligent generalist, such as Walter Lippmann—or any middlebrow person, for that matter—was not likely to find much guidance from academic philosophers. That was in marked contrast to the situation a generation earlier, when Lippmann had been able to bring the insights of his teacher William James into the public arena. Furthermore, not only did social philosophers not turn to the analytic philosophers for guidance, but also, and more ironically, neither did the practitioners of the sciences themselves. Natural scientists already knew what worked. Moreover, in the social sciences, specialization meant that each discipline was a sovereign domain in which practitioners set their own standards for how best to study the slice of human activity that their specialties considered.

  Though not many people were saying it at the time, it was symptomatic of the crisis in the mainstream thought of the day that few people were listening to its most brilliant philosophers. Existentialists did offer insights on personal authenticity, but their following was small. Analytic philosophers searched for scientific-style verification, but they spoke almost exclusively to one another.

  If one is looking for the practical philosophies of the day that helped to shape the lives of ordinary people, the place to turn is the field of psychology. There one can find similar tensions between science and the individual, but in far more influential form. As psychology was a science, and one of its principal subjects was individual experience, it was inevitable that it would be a focal point for debates on the pivotal question of the day: How do scientific understandings of human behavior fit with faith in human autonomy and freedom? Western culture had inherited these two grand ideals, but did they support each other? In an era when many people had turned to psychology as a guide to life, that was a practical problem as well.

  Although midcentury psychological theories related science to individual autonomy in many different ways, there were two views on the subject that marked opposite ends of the spectrum. These were the views represented first and foremost by B. F. Skinner and Carl Rogers.

 
