It’s a question people will answer unequivocally only if their answer is no.
If their answer is yes, the response entails a metric shitload of weaselly qualifications. Criticizing the Constitution is a little like criticizing a war hero—you always need to open with a compliment. Attacking the Constitution is attacking America, which means the only people who will do it openly are so radicalized that every subsequent opinion they offer is classified as extremist. When the Constitution is criticized, the disapproval is usually aimed at how the courts have interpreted its language, not at the document itself. But if you doggedly press a person who has studied the Constitution about its shortcomings, that person will usually concede that the document’s greatest strengths are inherently tied to its flaws. Take someone like Jay D. Wexler, for example. Wexler is a law professor at Boston University who wrote a book titled The Odd Clauses, an examination of the Constitution through ten of its most bizarre provisions. His interest in its peculiarities is an extension of his appreciation for the document’s integrity as a whole. He’s fascinated by ideas like the separation of powers, inserted by the founders as a barrier against their ultimate fear: tyranny. He will directly exclaim, “I love the separation of powers!”—which is a weird thing to exclaim. But he also realizes this trifurcation comes with a cost.
“One can imagine how the sluggishness and potential for gridlock that such a system creates might actually be our undoing—perhaps because of some single major incident that the government cannot respond to adequately. But more likely because it slowly, quietly, in ways that may be hard to identify, weakens our society and culture and economy, rendering the nation unable to sustain itself and rise to the challenges of the future,” says Wexler. “States and localities play the most significant role in shaping the education of children, which is great—except in those states that water down science education to placate creationists. The Supreme Court can strike down laws that it thinks violate the Constitution, which is great—except when it invalidates campaign finance laws that are designed to make our political system fair. Both houses of Congress have to agree to pass legislation, which is great—except when one house holds the entire country hostage by refusing to pass a budget. And if in some future, far-off day we find ourselves no longer a superpower, we may look back and say that this was the result of a constitutional structure that made it overly difficult to implement wise social and economic policy. Now, I don’t know if the criticism will be justified. I’m just glad that I’ll be dead by then.”
Wexler notes a few constitutional weaknesses, some hypothetical and dramatic (e.g., what if the obstacles designed to make it difficult for a president to declare war allow an enemy to annihilate us with nuclear weapons while we debate the danger?) and some that have outlived their logical practicality without causing any significant downside (e.g., California and Rhode Island having equal representation in the Senate, regardless of population). But like virtually every world citizen who’s not a member of ISIS, he has a hard time imagining how the most beloved constitutional details—the Bill of Rights and its visions of unalienable freedom—could ever be perceived as an Achilles’ heel, even if they somehow turned out to be one.
“I’d distinguish the parts of the Constitution that we talk about most—the liberty and equality protections of the First and Fourteenth Amendments—from the parts of the Constitution that create the structure of the government. I think it’s more likely that if we look back with regret at our dedication to the Constitution, it will be with respect to the structural provisions, rather than the liberty and equality ones. The liberty and equality provisions of the Constitution are worded so vaguely that whatever hypothetical blame we might place on them in any faraway future will more likely be aimed at the Supreme Court’s interpretation of the provisions, as opposed to the provisions themselves,” Wexler says. “Now, what if because of these provisions, someone gets away with urging or instructing someone else to blow up the White House, thus instigating a chain of events that leads to a nation-destroying insurrection? Or someone who is arrested without being given the proper Miranda warnings goes free and then blows up the White House? Are we really going to blame the First Amendment or the Fifth Amendment for those catastrophes? If people end up blaming anyone or anything having to do with these provisions—and that itself is a really big if—I think people would blame the Supreme Court and the opinions that gave those amendments the specific content that, when applied, turned out to be disastrous. Earl Warren, rather than James Madison, would turn out to be the real culprit.”
Wexler’s distinction is almost certainly correct. There are a handful of sacrosanct principles within the Constitution that would never be directly blamed for anything that happens, based on the logic that the principles themselves are so unassailable that any subsequent problem must be a manifestation of someone applying those principles incorrectly. In this regard, I’m no different from anyone else. My natural inclination, for most of my life, was to believe that nothing is more important than freedom. I tried very hard to convince myself that my favorite writer was John Locke. My gut still feels that way, and so does much of my mind. But there’s a persuasive sliver of my brain that quietly wonders, “Why do I believe this so much?” I fear it might be because I’ve never allowed myself to question certain things that seem too obvious to question.
“Are we really going to blame the First Amendment?” Wexler asked rhetorically, and he might as well have tacked on the prepositional phrase “for anything.” And of course the answer is no. There is no amendment more beloved, and it’s the single most American sentiment that can be expressed. Yet its function is highly specific. It stops the government from limiting a person’s or an organization’s freedom of expression (and that’s critical, particularly if you want to launch an especially self-righteous alt weekly or an exceptionally lucrative church or the rap group N.W.A). But in a capitalistic society, it doesn’t have much application in any scenario where the government doesn’t have a vested interest in what’s being expressed. If someone publishes an essay or tells a joke or performs a play that forwards a problematic idea, the US government generally wouldn’t try to stop that person from doing so, even if it could. If the expression doesn’t involve national security, the government generally doesn’t give a shit. But if enough vocal consumers are personally offended, they can silence that artist just as effectively. They can petition advertisers and marginalize the artist’s reception and economically remove that individual from whatever platform he or she happens to utilize, simply because there are no expression-based platforms that don’t have an economic underpinning. It’s one of those situations where the practical manifestation is the opposite of the technical intention: As Americans, we tend to look down on European countries that impose legal limitations on speech—yet as long as speakers in those countries stay within the specified boundaries, discourse flows relatively unfettered (even when it’s unpopular). In the US, there are absolutely no speech boundaries imposed by the government, so the citizenry creates its own limitations, based on the arbitrary values of whichever activist group is most successful at inflicting its worldview upon an economically fragile public sphere. As a consequence, the United States is a safe place for those who want to criticize the government but a dangerous place for those who want to advance unpopular thoughts about any other subject that could be deemed insulting or discomfiting.
Some would argue that this trade-off is worth it. Time may prove otherwise.
[4]The Declaration of Independence predates the Constitution by eleven years and doesn’t have any legislative power. Still, it’s central to everything we think about the US, particularly one sentence from its second paragraph that many Americans assume is actually in the Constitution itself: “We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness.” Now, there are surface details of this passage that people have always quibbled with: the use of the word “men” instead of “people,” the fact that the man who wrote these words owned slaves, the fact that the language inserts God into a situation that doesn’t seem particularly religious, and the fact that Thomas Jefferson’s genius did not keep him from capitalizing non-proper nouns. But these problems (except maybe the slave part) are easily deflected by the recognition of the era. The overall premise—tweaked to fit modernity—is still embraced as “self-evident.”
Even though this is not remotely true, in practice or theory.
Pointing out how it’s not true in practice is so easy it doesn’t even require examples; all you need to do is look at the socioeconomic experiences of American citizens of different races and genders. But it’s not even true among people whose experiences are roughly identical. Take any two white males raised in the same income bracket in the same section of the same city, and assume they receive the same treatment from law enforcement and financial institutions and prospective employers. They’re still not equal. One of these people will be smarter than the other. One will be more physically attractive. One will be predisposed to work harder and care more. Even in a pure meritocracy, they would experience differing levels of happiness. “It is not the case that we are born equal and that the conditions of life make our lives unequal,” writes Karl Ove Knausgaard in his nonfiction novel My Struggle: Book 2. “It is the opposite, we are born unequal, and the conditions of life make us more equal.” The apparent unfairness of reality can’t be blamed on our inability to embody this “self-evident” principle. The world would be just as unfair if we did.
I realize there’s a natural response to the previous statement, and it’s the same response I would have given fifteen years ago: “This is a conscious misreading of the message. Jefferson is not claiming that all men are literally equal. He’s arguing that all men deserve equal protection under the law, and that they are to be treated as if they are equal.” Which, of course, I agree with (because who wouldn’t?). But this technical application is not the way the principle is considered. It’s mostly considered symbolically, which means it’s illusory. That’s the problem. I sometimes wonder if the pillars of American political culture are really just a collection of shared illusions that will either (a) eventually be disbelieved or (b) collapse beneath the weight of their own unreality. And that would certainly be the end of everything (or at least something that would feel like everything to those who live through the collapse).
The men and women who forged this nation were straight-up maniacs about freedom. It was just about the only thing they cared about, so they jammed it into everything. This is understandable, as they were breaking away from a monarchy. But it’s also a little bonkers, since one of the things they desired most desperately was freedom of religion, based on the premise that Europe wasn’t religious enough and that they needed the freedom to live by non-secular laws more restrictive than those of any government, including provisions for the hanging of suspected witches. The founding fathers were hedgehogs (in Isaiah Berlin’s sense), and the one big thing they knew was that nothing mattered more than liberty. They were of the opinion that a man cannot be happy if he is not wholly free from tyranny, a sentiment still believed by almost every American citizen.
But how, exactly, do we know this?
It wasn’t always this way. For a long time, many smart people—Plato, most famously in The Republic—did not automatically think like this.
“During the wars between Athens and Sparta, there were a lot of people questioning if the idea of democracy in Athens made much sense,” says Carlin. “Or look at the Roman historians—these were guys who came in right after the Roman Republic fell who were basically wiping their brow and saying, ‘Thank god that whole experiment with people running things is over, look where that took us.’ These are thoughts conditioned by what we remember. When we talk about one-man rule—some kind of dictatorship or empire or whatever—look at the examples recent history has given us. They’re not exactly shining examples of how it might work out well, whether it’s a Hitler or a Stalin or whoever, so we don’t have any good examples [of how this could successfully operate]. But in the ancient world, they often had bad examples of democracy. Some of those guys looked at democracies the way we look at failed dictatorships. And yet, had we had, in the 1930s or 1940s, some dictatorship that was run by a real benevolent, benign person who did a really good job and things were great—and let’s throw out the obvious problem of succession, of potentially getting a bad guy after the good guy—we might have a different view of all that.”
This notion, I must concede, is a weird thing to think about, and an even weirder thing to type. It almost feels like I’m arguing, “Democracy is imperfect, so let’s experiment with a little light fascism.” But I also realize my discomfort with such thoughts is a translucent sign of deep potential wrongness—so deep that I can’t even think about it without my unconscious trying to convince me otherwise. The Western world (and the US in particular) has invested so much of its identity into the conception of democracy that we’re expected to unconditionally support anything that comes with it. Voting, for example. Everyone who wants to vote should absolutely do so, and I would never instruct anyone to do otherwise. But it’s bizarre how angry voters get at non-voters. “It’s your civic responsibility,” they will say. Although the purpose of voting is to uphold a free society, so one might respond that a free society would not demand that people participate in an optional civic activity. “But your vote matters,” they argue. Well, it is counted. That’s true (usually). But believing your one vote makes a meaningful difference reflects unfathomable egotism. Even if you’d illegally voted twenty times in the single tightest Florida county during that aforementioned 2000 presidential election, the outcome would have been unchanged. “But what if everybody thought that way?” they inevitably counter. This is the stupidest of arguments—if the nation’s political behavior were based on the actions of one random person, of course that person would vote, in the same way that random person would never jaywalk if his or her personal actions dictated the behavior of society as a whole. But that is not how the world works. “Okay, fine. But if you don’t vote, you can’t complain.” Actually, the opposite is true—if you participate in democracy, you’re validating the democratic process (and therefore the outcome). You can’t complain if you vote. “People in other countries risk their lives for the right to vote.” Well, what can I say? That’s a noble act, but not necessarily a good decision.
What’s so strange about these non-persuasive techniques is that—were they somehow successful—they would dilute the overall value of voting, including the ballot of the person making the argument. If you want to amplify the value of your vote, the key is convincing other voters to stay home. But nobody does this, unless they’re actively trying to fix an election. For any lone individual, voting is a symbolic act that derives its illusory power from everyone else agreeing that it’s indispensable. This is why voters want other people to vote, even if those other people are uninformed and lazy and completely unengaged with politics. This is also why, when my son watches his first election on TV, I’ll tell him that voting is a crucial, profound extension of the American experience, for all the bad reasons he’ll be socially conditioned to accept (until, of course, he doesn’t).
[5]I am of the opinion that Barack Obama has been the greatest president of my lifetime, and by a relatively wide margin. This, I realize, is not a universally held position, and not just among the people who still think he was born in Kenya. With a year remaining in Obama’s tenure, New York magazine polled fifty-three historians about his legacy, most of whom gave him lukewarm reviews. Several pointed to his inability to unite the country. Others lauded ObamaCare while criticizing his expansion of the power of the Oval Office itself. But those critiques remind me of someone looking at the career of Hank Aaron and focusing on his throwing arm and base running. It’s not merely that Obama was the first black president. It’s that he broke this barrier with such deftness and sagacity that it instantaneously seemed insane that no black person had ever been elected president before. In fact, he broke the barrier so fluidly that a few of the polled historians suggested his blackness will eventually be a footnote to his presidency, in the same way that John F. Kennedy’s Catholicism has become a factoid referenced only by Catholics. That seems like a questionable analogy to me (and I say that as someone who’s built a career on questionable analogies). The finer points of Obama’s administration will wash away, but his central achievement—his straightforward aptitude at overcoming the systemic racism that had previously made such a presidency impossible—will loom over everything else. To me, this seems obvious.
I’m very much a One Big Thing kind of guy, though, and especially with presidents. If I’m arguing about the greatest president of all time, it always comes down to Washington vs. Lincoln, and those in the Lincoln camp inevitably point to his freeing of the slaves—which, I will grant, is the definition of a One Big Thing move. But I would traditionally counter that Washington’s One Big Thing mattered more, and it actually involved something he didn’t do: He declined the opportunity to become king, thus making the office of president more important than any person who would ever hold it. This, as it turns out, never really happened. There is no evidence that Washington was ever given the chance to become king, and—considering how much he and his peers despised the mere possibility of tyranny—it’s hard to imagine this offer was ever on the table. It is, I suppose, the kind of act that seems like something Washington would have done, in the same way he seems like the kind of fellow who wouldn’t deny that he iced a cherry tree for no reason. Washington’s kingship denial falls into the category of a “utility myth”—a story that supports whatever political position the storyteller happens to hold, since no one disagrees with the myth’s core message (i.e., that there are no problems with the design of our government, even if that design allows certain people to miss the point). You see the application of other utility myths during any moment of national controversy. Someone will say or do something that offends a group of people, so the offended group will argue that the act was unpatriotic and harmful to democracy. In response, the offending individual will say, “Actually, I’m doing this because I’m patriotic and because I’m upholding democracy. You’re unpatriotic for trying to stop me.” Round and round this goes, with both sides claiming to occupy the spiritual center of the same philosophy, never considering the possibility that the (potentially real) value of their viewpoint hinges on the prospect that patriotism is not absurd and democracy is not simply the system some wig-wearing eighteenth-century freedom junkies happened to select.