
Against Fairness


by Stephen T. Asma


  3

  In Praise of Exceptions

  If nepotistic favoritism is our natural default, then how do we develop such hostility toward our biases later in life? How do we go from the (at least partly) biology-based partiality of tribalism to the principled ideology of modern fairness? This is a relevant question for historians, child psychologists, and anthropologists. Every contemporary American child learns to value non-kin and strangers, granting them historically unprecedented consideration. So, too, Western liberalism has evolved to recognize and include more strangers inside our sphere of ethical consideration.1

  In this chapter, I want to tackle the first of two related topics. How did Western culture develop a new ideology of fairness? In the next chapter, I’ll consider the kind of nurturing that instills such fairness in our kids—how do our children acquire these same cultural principles and creeds of impartiality?

  For most cultural historians and philosophers, the evolution of egalitarianism, from the seventeenth century to the present, has been a stunning success story. But, while I applaud the charity and tolerance enshrined in the usual progress story, I will point out the lost values that fell by the wayside during this long march.

  Building the Grid of Impartiality

  In Rembrandt’s famous The Syndics of the Drapers’ Guild (1662), we find the classic Dutch group portrait. The painting depicts the board of inspectors for the clothmakers’ guild, a sober group of gentlemen that Rembrandt represents as accomplished equals. I first met this painting when I was a kid, on the box cover of my father’s Dutch Masters cigars. There is a fitting irony in a cheap cigar company’s use of Rembrandt’s portrait as its logo. The product designers wanted to invoke a stylish aristocratic connotation, lending an elite European sophistication to a mass-produced American stogie. But the choice was more poetic than they probably realized, because Rembrandt’s portrait is actually a testament to the new leveling and democratization of seventeenth-century Holland. What better symbol, then, for the everyman’s American cigar than a homogeneous group of undistinguished but unsubjugated entrepreneurs?

  Dutch painting exploded in the seventeenth century, after the Eighty Years’ War of independence from Spain (1568–1648). The new Dutch republic—fueled by strong trade, military strength, and scientific advances—quickly became one of the most prosperous parts of the world. Group portraiture, practically a Dutch invention, became very popular during the rise of the new wealthy mercantile class. Civic groups, trustees, militias, scientific societies, and business partners all enjoyed pictorial commemoration, oftentimes splitting the cost of a large portrait and depositing the final painting with the local city council.2

  This style of pictorial representation was a departure from earlier portraiture, which tended to exaggerate the centrality and even the size of the patron. Important people were always enlarged to convey the relevant hierarchy. And if we go back a little further in the history of painting—before the rediscovery of Euclidean perspective grids—we find the charming world of medieval painting, where popes and exceptional people are drawn bigger than the commoners, buildings, and even mountains. The history of Western painting from the medieval to the Enlightenment era can be read as the increasing standardization of objective proportion. Artists like Albrecht Dürer (1471–1528) actually started to place new perspective grids in front of their subjects in order to measure objective spatial relations more accurately. These were wooden screens crisscrossed by strings that made up a network of equal squares, keeping the artist from biased representation. Clergy are not really bigger than householders just because they are “closer to God.” Kings and royals generally do not possess supernatural physiques, and patrons should not be drawn bigger than workers just because they’re wealthy.

  Fig. 9. Rembrandt’s The Syndics of the Drapers’ Guild (1662) typifies the growing egalitarianism of northern Europe in the seventeenth century. Art, politics, and ethics became increasingly democratized, in conjunction with the scientific revolution. Drawing by Stephen Asma, based on the Rembrandt painting.

  As wealth grew in Holland, so did humanistic tolerance. Increasingly, individuals were at liberty to pursue their own interests without harassment from the state or the church. A cornerstone of individualism in northern Europe had already been laid by the Protestant Reformation’s critique of Catholic authority. The Peace of Westphalia and the empowerment of middle-class culture meant that industrious commoners of low birth could ascend in the new fungible world of paper currency. Paintings, like group portraits or the growing depictions of domestic life, enshrined a new egalitarian ethos. Everyday life, previously thought “vulgar” and unworthy of artistic representation, became the new subject matter of an increasingly democratic society.3

  Fig. 10. Italian painter Giotto di Bondone (1266–1337) typifies the pre-egalitarian, pre-objective approach to pictorial space and characters. Important people were bigger than buildings, and God’s favorites were further designated by golden halos. Drawing by Stephen Asma, based on Giotto’s Encounter at the Golden Gate (c. 1306, Padua).

  It wasn’t just painting that began to convey egalitarian ideas. Nature itself, during this era, went from being considered a Great Chain of Being to being viewed as a uniform machine.4 Around the same time that Dutch artists were democratizing painting, Galileo (1564–1642) was democratizing matter itself. Prior to Galileo, Aristotelian cosmology held sway for almost two thousand years. Galileo shifted science from a geocentric (earth-centered) cosmos to a heliocentric (sun-centered) solar system—bumping us off our central throne of privilege. We used to be the favorites of the whole universe, and now we were just another of the planets, no more or less special than Mars or Mercury.

  Fig. 11. A typical pre-modern painting depicting the non-egalitarian “greatness” of its subject (Emperor Otto) and the relative unimportance of other humans. Medieval notions of pictorial perspective reflected theological and political biases, and made no attempt to standardize sizes. Drawing by Stephen Asma, based on a medieval painting of Emperor Otto II (955–983).

  Galileo’s leveling and mechanizing of nature was even more profound because he upset the old physics and metaphysics division between sub-lunar (our earthly realm, stretching from here to the moon) and supra-lunar (the heavenly realm beyond the moon’s orbit). For thousands of years before Galileo, we believed that the earth was composed of four elements (earth, air, fire, and water) and that all earthly motion occurred in straight lines (rectilinear motion). But if you moved up and out of our stratosphere, beyond the moon’s orbit, you would find yourself in a totally different physics. Here, the theory went, planets and stars were composed of a different “stuff”—something airy, light, and divine. This fifth element, called “ether,” was unlike the changeable four elements. Ether was relatively changeless and explained why the heavens moved in beautiful circular motions, rather than mundane rectilinear motions. For two millennia we believed that the heavens were made of metaphysically different stuff—ether substances, crystalline spheres, and divine agencies. Nature itself was hierarchically arranged in a scale of perfection. The night sky was a visible canvas of the more perfect supernatural world.

  Galileo, and then Newton, changed all this. Galileo’s telescope revealed a more realistic and less romanticized view of the planets and stars. They appeared, upon closer inspection, to have earthy qualities—the sun had spots and fluctuations, for example, that seemed inconsistent with the idea of changeless heavenly perfection; Jupiter could be seen to have orbiting moons of its own, and this violated the belief that all heavenly bodies circled the earth; and, shockingly, our moon (which Dante called an “eternal pearl”) appeared to have earth-like mountains and craters.

  Galileo’s astronomy and physics began to unify nature into one giant system of material substances, processes, and laws. Not only were the heavens made of the same stuff as the mundane world, but this ubiquitous stuff all conformed to predictable laws of motion. And Galileo, together with contemporaries like René Descartes and Robert Boyle, also deconstructed the hierarchies of earthly substances by resuscitating the atomic or corpuscular theory of matter. Atomic materialism is the great leveler.5 Atomism is to metaphysics what democracy is to politics. It treats every substance as intrinsically equal—made of the same stuff; the only real difference between gold and garbage is the varying arrangement of these same atoms.

  Isaac Newton (1642–1727) continued this revolution. He demonstrated the universal logic hiding beneath the appearances of diverse nature. Newton’s universal gravitation and three laws of motion (the law of inertia; force equals mass times acceleration; and for every action there is an equal and opposite reaction) continued the egalitarianism of matter itself. He showed how the most mundane motions (e.g., I drop my pen) and the most elegant heavenly motions (e.g., the planetary orbits or trajectories of comets) are governed by the same universal natural laws. When Galileo suggested such unified laws, it was considered heretical—a “constraint” upon the free creativity of the deity. But by the time Newton codified such uniform laws, they were interpreted as signs of the deity’s rational ingenuity. Alexander Pope famously captured this in his epitaph for Sir Isaac:

  Nature and Nature’s laws lay hid in night:

  God said, Let Newton be! And all was light.

  Newton’s successes in natural philosophy inspired a generation of ethical philosophers who wanted a similar universal and rational logic, a scientific foundation, for human society. Philosophers like David Hume and Adam Smith tried to rethink the moral sentiments as a foundation for building a better culture. Exhausted by religious wars, European intellectuals sought new, objective foundations for building peaceful cosmopolitan societies.6 Hume and his friend Adam Smith developed sentiment-based theories of ethics—what we now call emotivist ethics.7 Hume said, “Morals excite passions, and produce or prevent actions. Reason itself is utterly impotent in this particular. The rules of morality, therefore, are not conclusions of our reason.”8

  Let me give an example. Worried that emotions were too subjective, fluctuating, and self-centered to support a science of social life, Adam Smith called for an “impartial spectator” perspective in ethics. If I’m injured and angry, then I’ll be too close to the events and feelings to respond in a healthy and ethical way—I’ll probably give way to revenge and sadistic impulses. But an impartial spectator will pursue the more appropriate balance of justice. This detached, “disinterested,” or “indifferent” perspective helps a person to “harmonize the sentiments and passions,” and in this balance lies the “perfection of human nature” and society.9

  The idea of an impartial spectator survived in the subsequent utilitarian tradition, started by Jeremy Bentham (1748–1832) and continued by John Stuart Mill, which tried to formalize and mathematize the sentiment-based ethics of Scottish thinkers like Hume and Smith. In England, Bentham tried to formulate a “Pannomion,” an all-encompassing system of laws based on the “greatest happiness principle.” His approach pushed the impartial spectator idea so far that he ended up depersonalizing it entirely, turning instead toward the idea of an ethical calculator.

  Bentham’s utilitarian philosophy began from the acknowledgment that “Nature has placed mankind under the governance of two sovereign masters, pain and pleasure. It is for them alone to point out what we ought to do, as well as to determine what we shall do.”10 Bentham claimed that a “hedonic calculus” (pleasure or happiness calculus) could measure the variables of pleasure and pain (the elements) in any decision-making scenario. The calculus postulated vectors like intensity of pleasure or pain, duration, certainty of occurrence, likelihood of recurrence, number of people impacted, and so on.11
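
  To see the shape of the idea, the calculus can be written as a simple weighted sum. Bentham offered no formal notation, so the rendering below is a modern schematic reconstruction, and its symbols are illustrative only:

  \[ U(\text{act}) \;=\; \sum_{k=1}^{n} C_k \, I_k \, D_k \]

  Here the sum runs over the n people affected; for person k, I_k is the intensity of the pleasure (counted positively) or pain (counted negatively), D_k its duration, and C_k its certainty of occurring. The calculus then recommends whichever available act yields the greatest total U.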

  This sort of utilitarian approach lives on in many of our contemporary fairness philosophies. Systematizing human societies along scientific principles (in this case, using sentiments or feelings) was the beginning of the end of favoritism in the West. The seeds of our opposition to bias and partiality are sown in this Enlightenment era.

  But no one tried harder to transform human ethics into Newtonian science than German philosopher Immanuel Kant (1724–1804). Impartiality could only be achieved, according to Kant, by getting the sentiments, passions, and emotions out of ethics altogether. How you feel about someone will not, according to Kant, help you do the right thing. Feelings, sentiments, attachments, and emotions are surefire paths to bias, favoritism, partiality, and self-interest. Inspired partly by Newton and partly by the Christian ethic, which elevates the pure selfless motive above all else, Kant argued that consequences be damned. The British ethical tradition focused on the consequences of actions, measuring the ethical value of a deed by the amount of happiness it produced. But Kant looked disdainfully at this shopkeeper cost-benefit analysis of morality and claimed that good intentions outweighed all other extrinsic considerations.

  The way to purify and perfect ourselves as impartial spectators—who can best judge right from wrong—is to make us into better logicians. Sentimentalists like Adam Smith wanted us to cultivate our feelings of compassion (and our imaginations) in order to act well, but Kant introduced a different imperative.

  A hypothetical imperative tells us which means are necessary for attaining a given end. If I want to be healthy, I must eat nutritious food. In this example, the end goal is being healthy and the means is nutritious food. Applied to morals, a utilitarian might offer a hypothetical imperative: If I want the greatest happiness for my family, then I should earn income. But Kant argued that hypothetical imperatives cannot tell us which ends we should choose—they can only guide us about means. Moreover, any appeal to our experience in order to settle the rightness or wrongness of an act will fail to give us a universal objective morality because each person’s experience will be subjectively different. His solution to this quandary is the famous categorical imperative.

  Kant’s categorical imperative states, “Act only according to that maxim whereby you can at the same time will that it should become a universal law without contradiction.”12 To see how this works, let’s apply it to a moral question. I’m hungover and thinking about lying to my boss to avoid work—should I do it? According to Kant, I must consider my possible action of lying as if it were a universal law, as if everyone lied to their boss to avoid work. Well, that seems bad from an experiential consequential standpoint because then everybody would be manipulating employers and missing work whenever it was convenient, and labor as we know it would founder. But Kant sees an even deeper problem, a damning logical problem.

  If we think about it carefully enough, Kant suggests, we cannot even conceive coherently of a universal law of lying when convenient. Language itself only works if it can be relied upon, but lying renders language unreliable and contradicts its essential function. Therefore, Kant concludes that rationality alone (without reference to context or consequence) renders lying unacceptable.13
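
  The test can be put schematically; the formulation below is a modern logical reconstruction, not Kant’s own notation:

  \[ \text{Permissible}(m) \;\Longrightarrow\; \neg\big(\,\text{Universalized}(m) \vdash \bot\,\big) \]

  That is, a maxim m is morally permissible only if willing m as a universal law derives no contradiction. Universalized lying entails that assertions are no longer believed, which destroys the very condition (being believed) that a lie requires; for Kant, that contradiction settles the matter before any consequences are tallied.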

  Trying to re-create social ethics on the model of Newtonian physics may seem like an arcane exercise to contemporary readers, but in fact it directly shaped the U.S. Constitution and the formation of our entire culture. Not only were some of our Founding Fathers friends with Enlightenment philosophers like David Hume, but the new ideas of inalienable human rights (equally apportioned with no regard to name or station) became foundational for the fresh American project. Modern fairness and modern hostility to favoritism were born in the attempt to scientifically systematize human interactions.

  According to Enlightenment thinkers, the good life, for the individual and the state, is the rational life. But unlike the ancient Greek notions of rational society (e.g., Plato’s Republic), the modern view of reason is based on Newtonian ideas of exceptionless laws or inflexible rules. Gravity doesn’t have exceptions, so why should human law? The law of inertia doesn’t discriminate between bodies—it has no double standard—so why should humans do so in their new scientific societies? The rational life is the logically consistent life, the mathematical management of impersonal variables according to formal rules. Even Hume, who always tempered logic with commonsense experience, argued that social life was only possible if we adhered to inflexible universal rules. “Public utility requires that property should be regulated by general inflexible rules; and though such rules are adopted as best serve the same end of public utility, it is impossible for them to prevent all particular hardships, or make beneficial consequences result from every individual case.”14 Making exceptions to these rules—for family, friends, favorites—undermines society itself.

  Whether we’re talking about Kantian categorical imperatives or impartial spectators, the modern starting place of ethical reflection is an abstract disinterested geometric point—hovering over a grid of impartiality.

  When you conceptually lay a perspective grid (like the ones Albrecht Dürer used in his drawings) or a Cartesian coordinate map on a society, you can assess the subjects according to a disinterested objective measuring system (each square in the grid is equal), and you can also compare the subjects to each other objectively. The goal is value neutrality.15

  Recent psychological data confirm that this grid of impartiality still dominates Western liberal notions of morality. Interviews with subjects from different countries and from different U.S. economic and ethnic demographics reveal that well-educated liberal secular Westerners see morality exclusively as a matter of respecting individual rights. Fairness between autonomous individual agents is the defining feature of our morality (e.g., cheating is perceived more as unfairness to others—disadvantaging competing agents—than as a failure of one’s own integrity or a disgrace on one’s family). The grid has taken such a strong hold on educated Westerners that they do not even recognize other cultural views of morality. Other cultures, immigrant groups, and even rural communities in the United States, for example, think of morality as more than fairness and rights: they connect it to loyalty and patriotism, sacred/profane issues of purity, temperance, obedience to authority, and other values.16

 
