The Flamingo’s Smile

by Stephen Jay Gould


  The second kind of explanation views people as much of a muchness over time and attributes the downward trend in league-leading batting to changes in the game and its styles of play. Most often cited are improvements in pitching and fielding, and more grueling schedules that shave off the edge of excellence. J.L. Reichler, for example, one of baseball’s premier record keepers, argues (see bibliography):

  The odds are heavily against another .400 hitter because of the tremendous improvement in relief pitching and fielding. Today’s players face the additional handicaps of a longer schedule, which wears down even the strongest players, and more night games, in which the ball is harder to see.

  I do not dispute Reichler’s factors, but I believe that he offers an incomplete explanation, expressed from an inadequate perspective.

  Another proposal in this second category invokes the numerology of baseball. Every statistics maven knows that, following the introduction of the lively ball in the early 1920s (and Babe Ruth’s mayhem upon it), batting averages soared in general and remained high for twenty years. As the accompanying chart shows, league averages for all players rose into the .280s in both leagues during the 1920s and remained in the .270s during the 1930s, but never topped .260 in any other decade of our century. Naturally, if league averages rose so substantially, we should not be surprised that the best hitters also improved their scores. The great age of .400 hitting in the National League did occur during the 1920s (another major episode of high averages occurred in the pre-modern era, during the 1890s, when the decadal average rose to .280—it had been .259 for the 1870s and .254 for the 1880s).

  But this simple factor cannot explain the extinction of .400 hitting either. No one hit .400 in either league during 1931–1940, even though league averages stood twenty points above their values for the first two decades of our century, when fancy hitting remained in vogue. A comparison of these first two decades with recent times underscores both the problem and the failure of resolutions usually proposed—for high hitting in general (and .400 hitting in particular) flourished from 1900 to 1920, but league averages back then did not differ from those for recent decades, while high hitting has gone the way of bird’s teeth.

  Consider, for example, the American League during 1911–1920 (league average of .259) and 1951–1960 (league average of .257). Between 1911 and 1920, averages above .400 were recorded during three years, and the leading average dipped below .380 only twice (Cobb’s .368 and .369 in 1914 and 1915). This pattern of high averages was not just Ty Cobb’s personal show. In 1912 Cobb hit .410, while the ill-fated Shoeless Joe Jackson reached .395, Tris Speaker .383, thirty-seven-year-old Nap Lajoie .368, and Eddie Collins .348. By comparison, during 1951–1960, only three leading averages exceeded Eddie Collins’s fifth-place .348 (Mantle’s .353 in 1956, Kuenn’s .353 in 1959, and Williams’s .388, already discussed, in 1957). The 1950s, by the way, was not a decade of slouches, what with the likes of Mantle, Williams, Minoso, and Kaline. Thus, a general decline in league-leading averages throughout the century cannot be explained by an inflation of general averages during two middle decades. We are left with a puzzle. As with most persistent puzzles, we probably need a new kind of explanation, not merely a recycling and refinement of old arguments.

  I am a paleontologist by trade. We students of life’s history spend most of our time worrying about long-term trends. Has life become more complex through time? Do more species of animals live now than 200 million years ago? Several years ago, it occurred to me that we suffer from a subtle but powerful bias in our approach to explaining trends. Extremes fascinate us (the biggest, the smallest, the oldest), and we tend to concentrate on them alone, divorced from the systems that include them as unusual values. In explaining extremes, we abstract them from larger systems and assume that their trends arise for self-generated reasons: if the biggest become bigger through time, some powerful advantage must accompany increasing size.

  But if we consider extremes as the limiting values of larger systems, a very different kind of explanation often applies. If the amount of variation within a system changes (for whatever reason), then extreme values may increase (if total variation grows) or decrease (if total variation declines) without any special reason rooted in the intrinsic character or meaning of the extreme values themselves. In other words, trends in extremes may result from systematic changes in amounts of variation. Reasons for changes in variation are often rather different from proposed (and often spurious) reasons for changes in extremes considered as independent from their systems.
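
  The logic is easy to see in a toy calculation: hold the average of a system fixed, shrink only its spread, and the extreme values converge toward the middle even though nothing about the best or the worst has changed in character. The sketch below assumes a simple normal model with invented spreads; it illustrates the principle, not any real data.

      import random

      random.seed(1)

      def season_extremes(mean_avg, spread, n_players=200):
          """Draw one simulated season of batting averages from a normal
          model with a fixed mean and a given spread; return best and worst."""
          avgs = [random.gauss(mean_avg, spread) for _ in range(n_players)]
          return max(avgs), min(avgs)

      # Same league mean (.260) in both eras; only the spread shrinks.
      for label, spread in [("wide variation", 0.045), ("narrow variation", 0.028)]:
          best, worst = season_extremes(0.260, spread)
          print(f"{label:>17}: best {best:.3f}, worst {worst:.3f}")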

  Let me illustrate this unfamiliar concept with two examples from my own profession—one for increasing, the other for decreasing extreme values. First, an example of increasing extreme values properly interpreted as an expansion of variation: The largest mammalian brain sizes have increased steadily through time (the brainiest have gotten brainier). Many people infer from this fact that inexorable trends to increasing brain size affect most or all mammalian lineages. Not so. Within many groups of mammals, the most common brain size has not changed at all since the group became established. Variation among species has, however, increased—that is, the range of brain sizes has grown as species become more numerous and more diverse in their adaptations. If we focus only on extreme values, we see a general increase through time and assume some intrinsic and ineluctable value in growing braininess. If we consider variation, we see only an expansion in range through time (leading, of course, to larger extreme values), and we offer a different explanation based on the reasons for increased diversity.

  Second, an example of decreasing extremes properly interpreted as declining variation: A characteristic pattern in the history of most marine invertebrates has been called “early experimentation and later standardization.” When a new body plan first arises, evolution seems to explore all manner of twists, turns, and variations. A few work well, but most don’t (see essay 16). Eventually, only a few survive. Echinoderms now come in five basic varieties (two kinds of starfish, sea urchins, sea cucumbers, and crinoids—an unfamiliar group, loosely resembling many-armed starfish on a stalk). But when echinoderms first evolved, they burst forth in an astonishing array of more than twenty basic groups, including some coiled like a spiral and others so bilaterally symmetrical that a few paleontologists have interpreted them as the ancestors of fish. Likewise, mollusks now exist as snails, clams, cephalopods (octopuses and their kin), and two or three other rare and unfamiliar groups. But they sported ten to fifteen other fundamental variations early in their history. This trend towards shaving and elimination of extremes is surely the more common in nature. When systems first arise, they probe all the limits of possibility. Many variations don’t work; the best solutions emerge, and variation diminishes. As systems regularize, their variation decreases.

  From this perspective, it occurred to me that we might be looking at the problem of .400 hitting the wrong way round. League-leading averages are extreme values within systems of variation. Perhaps their decrease through time simply records the standardization that affects so many systems as they stabilize—including life itself as stated above and developed in essay 16. When baseball was young, styles of play had not become sufficiently regular to foil the antics of the very best. Wee Willie Keeler could “hit ’em where they ain’t” (and compile an average of .432 in 1897) because fielders didn’t yet know where they should be. Slowly, players moved toward optimal methods of positioning, fielding, pitching, and batting—and variation inevitably declined. The best now met an opposition too finely honed to its own perfection to permit the extremes of achievement that characterized a more casual age. We cannot explain the decrease of high averages merely by arguing that managers invented relief pitching, while pitchers invented the slider—conventional explanations based on trends affecting high hitting considered as an independent phenomenon. Rather, the entire game sharpened its standards and narrowed its ranges of tolerance.

  Thus I present my hypothesis: The disappearance of the .400 hitter (and the general decline of league-leading averages through time) is largely the result of a more general phenomenon—a decrease in the variation of batting averages as the game standardized its methods of play—and not an intrinsically driven trend warranting a special explanation in itself.

  To test such a hypothesis, we need to examine changes through time in the difference between league-leading batting averages and the general average for all batters. This difference must decrease if I am right. But since my hypothesis involves an entire system of variation, then, somewhat paradoxically, we must also examine differences between lowest batting averages and the general average. Variation must decrease at both ends—that is, within the entire system. Both highest and lowest batting averages must converge toward the general league average.

  I therefore reached for my trusty Baseball Encyclopedia, that vade mecum for all serious fans (though, at more than 2,000 pages, you can scarcely tote it with you). The encyclopedia reports league averages for each year and lists the five highest averages for players with enough official times at bat. Since high extremes fascinate us while low values are merely embarrassing, no listing of the lowest averages appears, and you have to make your way laboriously through the entire roster of players. For lowest averages, I found (for each league in each year) the five bottom scores for players with at least 300 at bats. Then, for each year, I compared the league average with the average of the five highest and five lowest scores for regular players. Finally, I averaged these yearly values decade by decade.
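
  The bookkeeping behind these comparisons is simple enough to sketch. The few lines below assume the season records have already been transcribed into a plain structure (year, league average, and each regular's average paired with his at bats); the names and layout are illustrative, not the encyclopedia's, and the 300 at-bat cutoff is the one described above.

      from collections import defaultdict
      from statistics import mean

      def decade_gaps(seasons, min_at_bats=300, top_n=5):
          """For each league-year in `seasons` (dicts with 'year', 'league_avg',
          and 'players' as a list of (batting_avg, at_bats) pairs), take the
          top_n highest and lowest qualifying averages, measure their gaps from
          the league average, and average those gaps decade by decade."""
          high_gaps, low_gaps = defaultdict(list), defaultdict(list)
          for season in seasons:
              qualified = sorted(avg for avg, ab in season["players"]
                                 if ab >= min_at_bats)
              if len(qualified) < top_n:
                  continue
              # Decade key: 1871 covers 1871-1880 (in practice 1876-1880 for the
              # National League), 1881 covers 1881-1890, and so on.
              decade = (season["year"] - 1) // 10 * 10 + 1
              high_gaps[decade].append(mean(qualified[-top_n:]) - season["league_avg"])
              low_gaps[decade].append(season["league_avg"] - mean(qualified[:top_n]))
          return {d: (round(mean(high_gaps[d]), 3), round(mean(low_gaps[d]), 3))
                  for d in sorted(high_gaps)}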

  [Chart credit: Cathy Hall.]

  In the accompanying chart, I present the results for both leagues combined—a clear confirmation of my hypothesis, since both highest and lowest averages converge towards the league average through time.

  The measured decrease toward the mean for high averages seems to occur as three plateaus, with only limited variation within each plateau. During the nineteenth century (National League only; the American League was founded in 1901), the mean difference between highest and league average was 91 points (range of 87 to 95, by decade). From 1901 to 1930, it dipped to 81 (range of only 80 to 83), while for five decades since 1931, the difference between mean and extreme has averaged 69 (with a range of only 67 to 70). These three plateaus correspond to three marked eras of high hitting. The first includes the runaway averages of the 1890s, when Hugh Duffy reached .438 (in 1894) and all five leading players topped .400 in the same year (not surprising since that year featured the infamous experiment, quickly abandoned, of counting walks as hits). The second plateau includes all the lower scores of .400 batters in our century, with the exception of Ted Williams (Hornsby topped the charts at .424 in 1924). The third plateau records the extinction of .400 hitting.

  Lowest averages show the same pattern of decreasing difference from the league average, with a precipitous decline by decade from 71 to 54 points during the nineteenth century, and two plateaus thereafter (from the mid-40s early in the century to the mid-30s later on), followed by the one exception to my pattern—a fallback to the 40s during the 1970s.

  Patterns of change in the difference between highest and lowest averages and the general league average through time

                 Difference between five       Difference between five
                 highest and league average    lowest and league average

  1876–1880                 95                            71
  1881–1890                 89                            62
  1891–1900                 91                            54
  1901–1910                 80                            45
  1911–1920                 83                            39
  1921–1930                 81                            45
  1931–1940                 70                            44
  1941–1950                 69                            35
  1951–1960                 67                            36
  1961–1970                 70                            36
  1971–1980                 68                            45

  Nineteenth-century values must be taken with a grain of salt, since rules of play were somewhat different then. During the 1870s, for example, schedules varied from 65 to 85 games per season (compared with 154 for most of our century and 162 more recently). With short seasons and fewer at bats, variation must increase, just as, in our own day, averages in June and July span a greater range than final-season averages, several hundred at bats later. (For these short seasons, I used two at bats per game as my criterion for inclusion in tabulations for low averages.) Still, by the 1890s, schedules had lengthened to 130–150 games per season, and comparisons with our own century become more meaningful.
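
  The effect of season length on spread is just sampling arithmetic. Under the simplest possible model (a batter with a fixed true average, each at bat an independent chance at a hit), the spread of observed averages shrinks with the square root of the number of at bats; the .270 true average and the at-bat counts below are chosen only for illustration.

      from math import sqrt

      def spread_of_average(true_avg, at_bats):
          """Standard deviation of an observed batting average under a simple
          binomial model: each at bat is a hit with probability true_avg."""
          return sqrt(true_avg * (1 - true_avg) / at_bats)

      # A .270 hitter over a short 1870s schedule versus a modern full season.
      for at_bats in (150, 300, 550):
          print(at_bats, round(spread_of_average(0.270, at_bats), 3))
      # Roughly .036 at 150 at bats, shrinking to about .019 at 550: shorter
      # schedules alone widen the range of recorded averages.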

  I was rather surprised—and I promise readers that I am not rationalizing after the fact but acting on a prediction I made before I started calculating—that the pattern of decrease did not yield more exceptions during our last two decades, because baseball has experienced a profound destabilization of the sort that my calculations should reflect. After half a century of stable play with eight geographically stationary teams per league, the system finally broke in response to easier transportation and greater access to almighty dollars. Franchises began to move, and my beloved Dodgers and Giants abandoned New York in 1958. Then, in the early 1960s, both leagues expanded to ten teams, and, in 1969, to twelve teams in two divisions.

  These expansions should have caused a reversal in patterns of decrease between extreme batting averages and league averages. Many less than adequate players became regulars and pulled low averages down (Marvelous Marv Throneberry is still reaping the benefits in Lite beer ads). League averages also declined, partly as a consequence of the same influx, and bottomed out in 1968 at .230 in the American League. (This trend was reversed by fiat in 1969 when the pitching mound was lowered and the strike zone diminished to give batters a better chance.) This lowering of league averages should also have increased the distance between high hitters and the league average (since the very best were not suffering a general decline in quality). Thus, I was surprised that an increase in the distance between league and lowest averages during the 1970s was the only result I could detect of this major destabilization.

  As a nonplaying nonprofessional, I cannot pinpoint the changes that have caused the game to stabilize and the range of batting averages to decrease over time. But I can identify the general character of important influences. Traditional explanations that view the decline of high averages as an intrinsic trend must emphasize explicit inventions and innovations that discourage hitting—the introduction of relief pitching and more night games, for example. I do not deny that these factors have important effects, but if the decline has primarily been caused, as I propose, by a general decrease in variation of batting averages, then we must look to other kinds of influences.

  We should concentrate on the increasing precision, regularity, and standardization of play—and we must search for ways that managers and players have discovered to remove the edge that truly excellent players once enjoyed. Baseball has become a science (in the vernacular sense of repetitious precision in execution). Outfielders practice for hours to hit the cutoff man. Positioning of fielders changes by the inning and man. Double plays are executed like awesome clockwork. Every pitch and swing is charted, and elaborate books are kept on the habits and personal weaknesses of each hitter. The “play” in play is gone.

  When the world’s tall ships graced our bicentennial in 1976, many people lamented their lost beauty and cited Masefield’s sorrow that we would never “see such ships as those again.” I harbor opposite feelings about the disappearance of .400 hitting. Giants have not ceded to mere mortals. I’ll bet anything that Carew could match Keeler. Rather, the boundaries of baseball have been restricted and its edges smoothed. The game has achieved a grace and precision of execution that has, as one effect, eliminated the extreme achievements of early years. A game unmatched for style and detail has simply become more balanced and beautiful.

  Postscript
  Some readers have drawn the (quite unintended) inference from the preceding essay that I maintain a cynical or even dyspeptic attitude towards great achievement in sports—something for a distant past when true heroes could shine before play reached its almost mechanical optimality. But the quirkiness of great days and moments, lying within the domain of unpredictability, could never disappear even if plateaus of sustained achievement must draw in towards an unvarying average. As my tribute to the eternal possibility of transcendence, I submit this comment on the greatest moment of them all, published on the Op-Ed page of the New York Times on November 10, 1984.

  STRIKE THREE FOR BABE

  Tiny and perfunctory reminders often provoke floods of memory. I have just read a little notice, tucked away on the sports pages: “Babe Pinelli, long time major league umpire, died Monday at age 89 at a convalescent home near San Francisco.”

  What could be more elusive than perfection? And what would you rather be—the agent or the judge? Babe Pinelli was the umpire in baseball’s unique episode of perfection when it mattered most. October 8, 1956. A perfect game in the World Series—and, coincidentally, Pinelli’s last official game as arbiter. What a consummate swan song. Twenty-seven Brooks up; twenty-seven Bums down. And, since single acts of greatness are intrinsic spurs to democracy, the agent was a competent, but otherwise undistinguished Yankee pitcher, Don Larsen.

  The dramatic end was all Pinelli’s, and controversial ever since. Dale Mitchell, pinch hitting for Sal Maglie, was the twenty-seventh batter. With a count of 1 and 2, Larsen delivered one high and outside—close, but surely not, by its technical definition, a strike. Mitchell let the pitch go by, but Pinelli didn’t hesitate. Up went the right arm for called strike three. Out went Yogi Berra from behind the plate, nearly tackling Larsen in a frontal jump of joy. “Outside by a foot,” groused Mitchell later. He exaggerated—for it was outside by only a few inches—but he was right. Babe Pinelli, however, was more right. A batter may not take a close pitch with so much on the line. Context matters. Truth is a circumstance, not a spot.

 
