—
You can see the same power of accidental celebrity at work in other fields. One is bestselling fiction. Thanks to the inevitable mistakes in bestseller lists (in 2001 and 2002, 109 books that should have been on the New York Times bestseller list according to their sales were left off), Stanford Business School professor Alan Sorensen was able to show that for books of equal initial popularity, being left off the list—not getting the Nobel Prize, not enjoying Einstein’s superstar treatment on that 1921 visit to the United States—meant fewer subsequent sales.
The same is true of classical musicians. The most important contest for pianists is Belgium’s Queen Elisabeth Competition. Looking at eleven years of the competition, economists Victor Ginsburgh and Jan van Ours found that the top three players went on to become successful professional musicians. Less than half of the others were able to find work of any sort as musicians. But is that a reward for talent or for the celebrity of winning the competition? One clue that officially being named a superstar—winning the competition—had more value than pure talent was an unexpected discovery Ginsburgh and van Ours made when they studied the winners. Placing first, second, or third correlated closely with the randomly determined order in which contestants had competed. So, unless you believe that the random order of participating in the competition is linked to talent, the more obvious conclusion is that the music world celebrity brought by winning the Queen Elisabeth Competition, independent of how good you are, has a powerful effect on your professional success as a musician.
But what about the long tail? One of the promises of the Internet has been that it can weaken the Matthew effect: the Web has low barriers to entry, and we all start out equal online. Matthew Salganik and Duncan Watts tested that premise in 2005 on 12,207 Web-based participants. The research subjects were offered a menu of forty-eight songs. Some participants were shown the songs ranked by popularity in the research group and told how often each song had been downloaded. Others were shown the songs in random order. A separate group was shown the songs in a meek-shall-inherit-the-earth order—the least popular songs were presented as most popular and vice versa. The results largely confirmed Merton’s thesis: being presented as popular, whether that information was true or not, strongly increased a song’s subsequent popularity. The impact was strongest for the songs that were the “worst” as measured by the unmanipulated judgment of listeners. But the effect was not absolute. Even when presented as the least popular in the “inverted” world, the best songs gradually climbed up the rankings. If you are very, very good, you can break into the superstar league, but it’s an uphill battle.
CAPITAL FIGHTS BACK
On January 11, 1991, Jeffrey Katzenberg, then chairman of Walt Disney Studios, sent a memo to his thirteen top executives titled “The World Is Changing: Some Thoughts on Our Business.” Despite its bland title, the twenty-eight-page note was instantly leaked to the press, probably by Katzenberg himself, and it swiftly became the most read prose in Hollywood. “We are entering a period of great danger and ever greater uncertainty,” the memorandum began. The change Katzenberg was worried about? The rise of superstars.
In 1984, when Katzenberg and his team arrived at Disney with a mandate to turn around the venerable but troubled moviemaker, Disney had been “the most cost-conscious of all studios.” It had saved money mostly “by avoiding the reigning stars of the moment.” Katzenberg wrote, proudly: “Instead we featured stars on the downward slope of their career or invented new ones of our own. Robin Williams suggested to Newsweek magazine that we recruited talent by standing outside the back door of the Betty Ford Clinic. The first instance of this approach to moviemaking was Down and Out in Beverly Hills, a film that reignited the careers of its three stars, Bette Midler, Richard Dreyfuss, and Nick Nolte.”
But as the decade progressed, Disney found itself paying its stars more. What particularly distressed Katzenberg was the Matthew effect—paying stars not just for their talent, but also for their fame, something Katzenberg called the “celebrity surcharge”: “In 1984, we paid Bette only for her considerable talent. Now, we must also pay her for her considerable and well-earned celebrity. This is what might be called the ‘celebrity surcharge’ that must be ante’d up when hiring major stars.”
Katzenberg’s biggest complaint was the signal achievement of “talent” in the second half of the twentieth century: the shift from earning a wage to having a stake in the business. Hedge fund managers and private equity investors call their stake “the carry.” Movie stars call it “participation.” Katzenberg called it “extremely threatening”: “Unreasonable salaries coupled with giant participations comprise a win/win situation for the talent and a lose/lose situation for us. It results in us getting punished in failure and having no upside in success.”
Actors weren’t the only talent Katzenberg worried about. Writers, he complained, were starting to be paid “$2–$3 million for screenplays.” Instead, Katzenberg thought Disney should be paying “young” writers $50,000 to $70,000 or “proven writers” $250,000 to develop a screenplay for an idea suggested by Disney. Katzenberg admitted that in the new world of superstar scripts, persuading writers to agree to these skimpier rations, ideally on long-term contracts, wouldn’t be easy: “I know many will argue that this just isn’t feasible anymore. Agents won’t let their clients sign long-term contracts because the spec script market is too lucrative. All this means is it will be tougher. It doesn’t mean it’s impossible.”
Katzenberg’s solution was for Disney executives to seek out actors and writers who were talented but either hadn’t achieved or had lost the superstardom that allowed those at the very top to charge a celebrity surcharge. “All the big-time writers have one thing in common,” Katzenberg wrote. “They were all once unknown and thrilled just to make a sale. The future big-time writers are out there and would be grateful just to be considered by our studio. To find them, we have to search harder, dig deeper . . . and be there first.”
As for actors, Katzenberg urged his team to “be aggressive . . . at the comedy clubs searching for future stars, and at the back door of the Clinic picking up the stars that once were and can be again.”
—
Katzenberg is not alone. As superstars have become more powerful, bosses in every field have struggled to find ways to avoid paying them the celebrity surcharge. In addition to haunting the back door of the Clinic, studio chiefs have shifted resources to animated films—illustrators, technologists, and voice actors don’t yet command a superstar premium—and to serials in which the character is the star, and the actor who plays him or her in one installment can be replaced by a cheaper successor if the original becomes too famous. Reality television and competition shows are another way to avoid paying the celebrity premium, by making the hoi polloi the stars and, as Pop Idol does, binding them to contracts that prevent them from demanding any of the upside if their shows make them famous.
Some sports team owners are on a similar quest to pay for talent, not stardom. That is the story of the Oakland A’s and their general manager, Billy Beane, as lionized in Michael Lewis’s Moneyball. Beane is Lewis’s underfunded, underdog hero, but his is really the story of capital—the baseball team owners—looking for a way to avoid paying the celebrity premium to its stars—the players—in this case by looking for athletes whose skills were crucial to the team’s success but were undervalued by the market.
Even in finance, whose superstars are less well known but even better paid than film and sports celebrities, some bosses have been looking for ways to avoid the celebrity premium. Harvard Business School professor Boris Groysberg became the hero of Wall Street’s HR departments in 2010 when he published Chasing Stars, a study that has become the banking industry’s Moneyball. After interviewing more than two hundred Wall Street analysts, Groysberg concluded that recruiting stars from rival firms was a waste of money, because poached analysts tended to falter when they were plucked from their native culture. Warren Buffett famously agrees. He emerged from his Omaha fastness to join the battle between capital and talent on Wall Street in the 1990s, when he briefly chaired struggling investment bank Salomon Brothers—a period he described in the next year’s letter to shareholders as “far from fun”—and slashed the bonus pool by $110 million.
But here is the catch in management’s fight to rein in superstar salaries, and one institutional reason the super-elite continue to rise: in the age of the vast, publicly traded joint-stock company, where ownership is widely dispersed and boards lack the time, expertise, and gumption to weigh in on the specifics of how companies operate, the managers themselves are superstars, too. Entertainers and athletes are the most visible superstars, but they are hugely outnumbered by the army of business managers who in the past four decades have been transformed from salarymen to multimillionaires.
The ideas Katzenberg laid out in his 1991 memo have been largely vindicated by subsequent academic research. Most strikingly, in a 1999 study analyzing the economics of two hundred movies, Abraham Ravid found that stars had no impact on box office revenue. Katzenberg had a powerful incentive to sniff out the financial danger of paying the celebrity surcharge—as the studio’s chairman, his job was to turn a profit. But the checks on the soaring salaries of chief executives and their top teams are much weaker. Even superstars have bosses, but as Jack Welch, the first CEO to become a celebrity, said in a conversation at the 92nd Street Y in the spring of 2011, what the chief executive needs is “a generous compensation committee.”
Or a smart lawyer. Katzenberg’s big complaint about “the talent” was “participations,” or contracts that gave actors a share in a movie’s revenue. It turned out he had cut a similar deal himself, earning a share of the entire studio’s profits in addition to his cash salary and CEO perks. That package was big enough to make a dent not just in one movie’s profits but in the entire company’s bottom line, as Disney shareholders learned when the company settled a legal battle with Katzenberg over his severance package. The terms of the deal were undisclosed, but Hollywood lawyers estimated it was at least $200 million—more than four times the production costs of Dick Tracy, the overbudget movie that inspired Katzenberg’s 1991 cri de guerre.
—
Sometimes the title says it all. That was certainly the case in March 1986, when the Harvard Business Review published an essay headlined “Top Executives Are Worth Every Nickel They Get.” HBR is owned by Harvard University, and its readers are the aforementioned top executives and their ambitious underlings. So one purpose of the essay was inevitably service journalism’s accustomed function of flattering its constituency. But the piece had a less cynical motivation, too. Its author, Kevin J. Murphy, was in the vanguard of a small group of business school academics who had spent the previous decade trying to solve one of the big problems of twentieth-century market economies: How do you have capitalism without capitalists? Or, to put it another way, who manages the managers?
This is not a new problem. In The Wealth of Nations, Adam Smith compared the executives of a joint-stock company to “the stewards of a rich man” and warned that “being the managers rather of other people’s money than their own, it cannot well be expected, that they should watch over it with the same anxious vigilance with which the partners in a private copartnery frequently watch over their own. . . . Negligence and profusion, therefore, must always prevail.” Writing just over a hundred years later, Alfred Marshall bemoaned the feebleness of the staid British joint-stock company, compared to an America dominated by owner-entrepreneurs: “The area of America is so large and its condition so changeful, that the slow and steady-going management of a great joint-stock company on the English plan is at a disadvantage in competition with the vigorous and original scheming, the rapid and resolute force of a small group of wealthy capitalists, who are willing and able to apply their own resources in great undertakings.”
That small group of wealthy capitalists laid the foundations for America’s astonishing economic ascent in the twentieth century. But as the American economy matured, control of its private businesses began to pass from the hands of the vigorous, scheming, and resolute founders of Marshall’s age to a new generation of stewards. That shift was documented in a seminal paper published in 1931 by Gardiner Means, a New England farm boy and steely-nerved World War I pilot who’d eventually made his way to economics and the Ivy League faculty. Means showed that of the two hundred largest U.S. companies at the end of 1929, 44 percent were controlled by managers rather than by their owners. An even greater share of the wealth of America’s top companies was in the hands of the managerial class—58 percent of the top two hundred companies, as measured by market capitalization, were manager-ruled.
Means saw this ascendant managerial class as self-selecting and self-perpetuating: the only institutional parallel he could come up with was the clergy of the Catholic Church. In a book he and Adolf Berle, a professor of corporate law at Columbia, cowrote the next year, they described the rising managerial elite as “the princes of industry.” Berle and Means saw the shift from owners to managers as comparable in its significance to the switch from independent worker-artisans to wage-earning factory employees.
Berle and Means worried about how to keep this managerial aristocracy in check. Thanks to the ability of the publicly traded company to attract capital from millions of retail investors, this managerial class presided over firms of unprecedented scale and power. But the market incentives that governed the actions of owners didn’t necessarily apply to their stewards. In fact, their interests were “different from and often radically opposed to” those of the owners—the hired managers “can serve their own pockets better by profiting at the expense of the company than by making profits for it.”
Berle and Means were leading architects of the New Deal—Berle was an original member of FDR’s “Brain Trust,” and Means, working as an economist in the Roosevelt administration, waged a campaign against price fixing in the steel industry. Their prescription, accordingly, involved state and social intervention. Government should regulate managerial princes who overstepped the mark, and a new set of social conventions should be developed requiring managers to be “economic statesmen” who ran their companies in the collective interest.
Murphy’s “Worth Every Nickel” essay was a robust public statement of a radically different solution to the problem Berle and Means had identified. Like the New Dealers, Murphy and his confreres believed that managing the managers was the central problem of twentieth-century capitalism. But instead of trying to get corporate executives to behave more like public-spirited civil servants, Murphy and his fellow business school professors thought the answer lay in the opposite direction: the stewards needed to be turned into the red-blooded founder-owners they had replaced. To do that, their financial incentives needed to be aligned as closely as possible with the success or failure of the companies they ran. That wouldn’t give them as powerful a profit motive as owning the whole company, to be sure, but it would be a close second-best.
The “Worth Every Nickel” movement was in part a response to the success of the New Dealers’ efforts to create a social and political order in which managers were constrained. Thirty years after Berle and Means worried that managers would be tempted to profit at the expense of the companies they ran, here is how John Kenneth Galbraith, hardly an apologist for the C-suite, described the ethos of corporate America: “Management does not go out ruthlessly to reward itself—a sound management is expected to exercise restraint. . . . With the power of decision goes opportunity for making money. . . . Were everyone to seek to do so . . . the corporation would be a chaos of competitive avarice. But these are not the sorts of things that a good company man does; a generally effective code bans such behavior.”
In a follow-up to his Harvard Business Review cri de coeur, Murphy, along with his coauthor Michael Jensen, found that the culture of restraint in the postwar era could be quantified. During the three decades after the Second World War, the U.S. economy grew at a faster, more consistent pace than ever before, and American companies were ascendant around the world. The acknowledged social and economic leaders of this golden age were the country’s captains of industry, yet during that period their salaries actually fell. In our honey-tinged, Mad Men memories of the postwar era, we may imagine it to be a time of the triumph of the company man. But in fact it was an era when the managerial aristocracy was trammeled by the rest of society, even as the companies they oversaw prospered. As Jensen and Murphy concluded in their 1990 paper: “The average salary plus bonus for top-quartile CEOs (in 1986 dollars) fell from $813,000 in 1934–38 to $645,000 in 1974–86, while the average market value of the sample firms doubled.”
Jensen and Murphy agreed with Galbraith’s explanation of what was going on—social pressure was limiting CEO salaries: “Political forces operating both in the public sector and inside organizations limit large payoffs for exceptional performance.”