by Felix Martin
Luckily for Northern Rock—or at least for its bondholders, depositors, and other customers—external assistance was at hand for the second time. Once again, the U.K. sovereign stepped in, but this time into the shoes not of the bank’s lenders, but of its shareholders. New equity capital was required in order to make good the gap between the value of the bank’s assets and its liabilities—and to provide an adequate buffer against potential further declines. The liquidity support operation had consisted of the sovereign merely agreeing to give one fixed promise to pay—a claim on the Bank of England—in return for another fixed promise of supposedly equal value—a claim on Northern Rock. What was now required, however, was something quite different. The sovereign would give its fixed promises to pay in return for equity: a residual claim on the uncertain difference between the value of Northern Rock’s assets and its liabilities. The liquidity support, at least in principle, had involved no risk of profit or loss—just a transfer of liquidity risk from private investors to the sovereign. This new operation would involve, by contrast, a transfer of credit risk. If losses ceased to mount on Northern Rock’s mortgages, the sovereign might not lose money. But if they did not, the sovereign, as equity owner, would be on the hook. This was not a job for the Bank of England—the monetary authority. If the sovereign is deliberately going to put taxpayers’ money at risk, better to ensure that it is its democratically elected government that is doing so. The purchase of Northern Rock’s equity was therefore made by the U.K. Treasury—the fiscal authority. On 17 February 2008, the bank was nationalised.13
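The difference between the two operations can be put in simple arithmetic: equity is the residual of assets over liabilities, and a recapitalisation must fill any shortfall plus a buffer against further declines. A minimal sketch, with invented figures and an assumed 4 per cent buffer (nothing here reflects Northern Rock's actual accounts):

```python
# Illustrative only: the figures and the 4% buffer are assumptions,
# not Northern Rock's actual accounts.
def required_recapitalisation(assets, liabilities, buffer_ratio=0.04):
    """New equity needed so the buffer reaches buffer_ratio of (pre-injection) assets."""
    equity = assets - liabilities      # the residual claim the sovereign bought
    target = buffer_ratio * assets     # an adequate buffer against further declines
    return max(0.0, target - equity)

# While equity exceeds the buffer, no support is needed ...
assert required_recapitalisation(assets=100.0, liabilities=92.0) == 0.0
# ... but once losses erode the asset side, whoever holds the equity fills the gap.
assert required_recapitalisation(assets=90.0, liabilities=92.0) > 5.0  # equity is -2, so roughly 5.6 is needed
```

Liquidity support, by contrast, leaves both sides of this calculation untouched: it merely swaps one fixed claim for another of supposedly equal face value.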
Amongst the general public, the initial reaction was one of mystification, even indifference. The bank had failed and had been bailed out—by which arm of the state and how was frankly much of a muchness. Policy-makers and financial professionals, however, recognised the U.K. Treasury’s decisive action as a radical new policy—one which spoke ominously to the potential scale of the crisis ahead, and which set a radical precedent for the policy response to it. By nationalising Northern Rock, the U.K. sovereign had revealed that it felt it necessary to provide not just liquidity, but credit, support to the banking sector. The Bank of England’s lending from September to February had kept depositors, bondholders, and the bank’s existing equity-holders unscathed—on the assumption that there was some value in the bank’s equity. Once the bank’s equity capital had been eroded by losses, the luck of its shareholders had run out. In the normal order of things, its bondholders would have been next in line. Instead, it now transpired, they were able to call on a second line of defence: the U.K. Treasury’s previously undeclared insurance of bank investors against credit losses. Courtesy of the Chancellor of the Exchequer, the British taxpayer was on hand to assume the risk of further losses that would otherwise have had to be borne by Northern Rock’s bondholders.
What, observers asked, could have prompted the U.K. sovereign to such extraordinary generosity? Liquidity support was one thing—it had been official policy at least since Bagehot’s day and unofficial policy even before that. But credit support and bank recapitalisation with direct costs to taxpayers—these were clear and controversial policies historically reserved for the direst circumstances. They were the stuff of the Great Depression—when a special government-funded body, the Reconstruction Finance Corporation, had been established to recapitalise banks in the U.S.—or of the near-collapse of the U.K. economy in the 1970s, when the government had stepped in to provide capital to the secondary banks when private investors would not. Moreover, credit support was historically shunned for good reason. If moral hazard presented a dilemma for the central bank’s role as lender of last resort, how much more of a dilemma did it present for the National Treasury’s role as shareholder of last resort? If every banker—and, just as important, every investor that funded his bank—knew that the sovereign stood ever-ready to cover his losses should things go wrong, what possible discipline could there ever be on lending standards and volumes?
The market began to suspect that there was something terribly wrong. Why else should the line between liquidity and credit support have been crossed, and the natural political reluctance to have taxpayers bail out banks have been trumped? The policy-makers knew only too well the gravity of what they had done, and tried furiously to dispel what they knew would be the fatal impression that no one need any longer watch their backs. “Banks,” announced the U.K. Parliament’s Treasury Select Committee in a desperate attempt to shut the stable door, “should be allowed to ‘fail’ so as to preserve market discipline on financial institutions.”14
But the horse had already bolted. Only the most terrifying warning might chase it back in again. So it was that when Bear Stearns, the fifth-largest U.S. securities dealer, ran into trouble in March 2008, the U.S. authorities made it clear that liquidity support alone would be forthcoming. When it emerged that Bear Stearns was on the brink of failure, it was a private investor—the universal bank, J. P. Morgan—that stepped in to buy its equity. The policy-makers were encouraged. Perhaps the horse had been scared back into its stable. When a second major U.S. investment bank, Lehman Brothers, began to suffer a catastrophic run almost a year to the day after the run on Northern Rock, the emboldened U.S. authorities held their nerve. Alas, the horse was not back in the stable after all. “They can shoot a Bear,” was the gag doing the rounds in the financial markets on Friday, 12 September 2008, “but they can’t shoot the Brothers.” Despite the stand on Bear Stearns, bankers and their investors remained convinced that the policy-makers would fold. The strength of their conviction was measured by the sheer panic which ensued when on Monday, 15 September, credit support from the sovereign was refused, and Lehman Brothers filed for bankruptcy.
The collateral damage to the financial sector and the real economy caused by the failure of Lehman Brothers was beyond all expectations. The heroic attempts of the policy-makers to deny the doctrine of blanket credit insurance disintegrated. What, after all, was the point of trying to preserve market discipline when the markets themselves were no longer functioning? The End of the World—or at least the End of the Banks—was nigh, and had to be prevented at any cost—or at least, any cost to the taxpayer.15 In an instant, the nationalisation of Northern Rock became not an embarrassing aberration, unmentionable in polite society for fear of giving the bankers unsuitable ideas, but the model of good policy. The result was a level of sovereign credit support for the world’s banking sectors unlike anything ever witnessed before. Twenty-five countries experienced major banking crises between 2007 and 2012: two-thirds of them resorted to providing credit support to their banks.16 The sheer scale of some of the interventions was unprecedented. The U.S. spent 4.5 per cent of GDP recapitalising its banks—equal to its entire annual defence budget in the midst of a major war.17 In 1816, Thomas Jefferson had warned that “banking establishments are more dangerous than standing armies.”18 His verdict was proving alarmingly close to the truth, if not in the sense he had intended. The U.K. spent 8.8 per cent of GDP—considerably more than it spends annually on its much-vaunted National Health Service.19 The Irish sovereign spent over 40 per cent of GDP—more than the typical annual budget of every department of government put together. There could no longer be any doubt. The sovereign had the bankers’ backs.
When the dust had settled and the Great Recession set in, the public began to realise what had happened. The banks and their investors had been making a one-way bet. Their business was—just as it always had been—to manage liquidity and credit risk. But if they proved unable to synchronise their payments, the central bank would step in with liquidity support. And if their loans went bad and their equity capital was too thin, the taxpayer would backstop their credit losses. The consequences were, in retrospect, utterly predictable. Around the world, banks had grown in size, reduced their capital buffers, made riskier loans, and decreased the liquidity of their assets. More and more had become too big to fail. As a result, the level of credit insurance that sovereigns had implicitly been providing had ballooned. Only when the crisis had struck, and the policy-makers’ initial efforts to control moral hazard collapsed, had the true scale of the subsidy become clear. In November 2009, a year after the collapse of Lehman Brothers, total sovereign support for the banking sector worldwide was estimated at some $14 trillion—more than 25 per cent of global GDP.20 This was the scale of the downside risks, taxpayers realised, that they had been bearing all along—whilst all the upside went to the shareholders, debt investors, and employees of the banks themselves.
“Same Old Game” indeed—though today it is not just liquidity, but also credit, insurance that the sovereign generously doles out.
It was a world that Walter Bagehot would not have recognised. The doctrine of the central bank as lender of last resort had become the doctrine of the sovereign as loss-bearer of last resort. This innovation of widespread credit support from national treasuries introduced a dramatic new dimension to the political calculus. When the central bank provides liquidity support, nobody, in principle, loses—and the widely shared benefits of a well-functioning monetary system are preserved. When the government provides credit support, however, taxpayers bear a real cost. The question, of course, is who gains? One answer—the one which garnered most attention in the immediate aftermath of the crisis—is the bankers themselves. When the government bailed out the banks, many bank employees continued, at least for a time, to have their jobs and to earn their bonuses. That was politically contentious, but in reality the bankers themselves enjoyed only one part of the taxpayers’ generosity. The banks’ bondholders and depositors—those who freely agreed to fund the bankers’ lending—were also beneficiaries of the sovereign’s unprecedented largesse. When it was refused, as it was to Lehman Brothers, bondholders had to shoulder the losses due to the bad loans that had been made. When it was not, the sovereign relieved them of this unpleasant burden.
Once upon a time, the idea of taxpayers bailing out bank bondholders might not have been politically contentious, because there was little distinction between the two groups. One way or another, via the investments of pension funds and mutual funds, they were by and large one and the same. But in the modern, developed world, two powerful forces have conspired to undermine this convenient correspondence between those who fund the banking system and those who stand to bail it out when things go wrong. The first is increased inequality of wealth and income, which has opened up a divide between the wealthy few who own banks’ bonds, and the more modest majority who do not. Spending public money to protect bank bondholders has become an issue of rich versus poor. The second powerful force has been the internationalisation of finance. In countries such as Ireland and Spain, the globalisation of bond markets has meant that domestic taxpayers found themselves footing the bill for bank recapitalisation that benefited foreign bondholders. Firing civil servants to pay for bail-outs meant to save their own pension fund is one thing. Firing them to pay out foreign pensioners is politically quite another. When, on 31 January 2011, Anglo-Irish Bank—which had been recapitalised to the tune of EUR 25.3 billion by Irish taxpayers—repaid in full and on schedule a EUR 750 million bond to its investors, the distribution of risks under the new regime of sovereign credit support for banks was on stark display. The total cuts to welfare spending in that year’s Irish budget amounted to a little over the same amount.21
The global public’s dismay at this state of affairs is therefore not due to an unfortunate misunderstanding of how the financial world does, and indeed has to, work. People are right to smell a rat. The crisis of 2007–8 and sovereigns’ response to it revealed a profoundly uncomfortable truth: something has gone terribly wrong with the Great Monetary Settlement. The historic deal struck between the sovereign and the Bank of England in 1694 involved a carefully calibrated exchange of benefits. The private bankers got liquidity for their banknotes. The crown’s writ, unlike their own, ran throughout the land, and money that had its blessing could enjoy universal circulation. In return, the bankers provided the financial acumen and the trusted reputation in the City that enhanced the crown’s creditworthiness. In modern terms, the crown provided liquidity support to the Bank, while the Bank provided credit support to the sovereign. Yet the policy-makers’ response to the crisis revealed a starkly different world. Banks, of course, retained their privilege of issuing sovereign money—and the central bank stood ready to guarantee its liquidity in times of need. But far from receiving support to its credit in return, it was the sovereign that ended up supporting the credit of the banks. The banks—their employees, their bondholders, and their depositors—get both liquidity and credit support. The sovereign—that is, the taxpayer—gets nothing. The crisis revealed that the historic quid pro quo had become a quid pro nihilo: something for nothing.
This was bad enough but there was even worse to come. No sooner had the crisis exposed with brutal honesty the strange death of the Great Monetary Settlement that had kept the peace between sovereigns and banks for three hundred years, than it unveiled the equally startling revelation that another hoary old veteran of monetary politics was very much alive—and active on a scale never seen before.
THE COUP D’ÉTAT IN THE CREDIT MARKETS
The great wave of economic deregulation and globalisation that began in the late 1970s, accelerated in the 1980s and ’90s, and reached its zenith in the pre-crisis years of the early 2000s brought with it revolutions in the organisation of industries from car manufacturing to the supply of electricity, and from supermarkets to film-making. The watchword was decentralisation: the hundreds of activities once housed in a single corporation could be hived off to smaller and more specialised companies, and co-ordinated by the market using supply chains and networks of astonishing complexity and length.22 Of course, some complained that it went too far—that the costs saved by moving customer care to a call centre in Bangalore or Manila were really just offloaded on to the enraged customers on the other end of the line. But overall, few could deny that in industry after industry the result for the consumer was a phenomenal reduction in costs and improvement in choice.
Finance was no stranger to these tectonic shifts in industrial organisation. Until the late 1960s, lending to companies and individuals remained for the most part a simple and familiar business undertaken almost exclusively by banks. The borrower visited the bank; the loan officer scrutinised the request and worked it up for approval; the bank manager signed off on the appraisal; the loan was entered in the bank’s loan book as part of the bank’s assets; and a deposit was credited to the borrower as part of the bank’s liabilities. The whole transaction had only two counterparties—the borrower and the bank—and the bank’s balance sheet was where the management of credit and liquidity risk took place. But for centuries there had also existed an alternative way of raising money: by selling financial securities—promises to pay such as shares in the equity of a company or bonds paying a fixed rate of interest over time—directly to investors. The equity capital markets—the stock market, for short—had always been a democratic affair. Even quite small companies could issue shares; they were traded on public exchanges; and there was a vast army of retail investors. The debt capital markets, on the other hand, were more exclusive. Borrowing by issuing bonds was “high finance,” the preserve of only the largest corporations, and, above all, of sovereigns themselves. Likewise, the investors in these securities were mostly “institutional investors” such as pension funds, insurance companies, and mutual funds, which aggregated the savings of many thousands of individuals to reach the scale required to play the bond markets. And rather than being hawked on stock exchanges like fish in the marketplace, bonds were bought and sold by brokers through their personal networks, like pieces of antique furniture that needed to be found the right home.
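The two-counterparty transaction described above can be sketched as a toy double-entry balance sheet (a simplified illustration with invented names and figures, not any real bank's system): the loan goes on the asset side, and a matching deposit is simultaneously credited on the liability side.

```python
# A toy, double-entry sketch of the traditional bank loan.
# All names and figures are illustrative, not a real accounting system.
class ToyBank:
    def __init__(self):
        self.assets = {}        # loans owed to the bank
        self.liabilities = {}   # deposits owed by the bank

    def make_loan(self, borrower, amount):
        # The loan is entered in the loan book as part of the bank's assets ...
        self.assets[borrower] = self.assets.get(borrower, 0) + amount
        # ... and a deposit of equal size is credited as part of its liabilities.
        self.liabilities[borrower] = self.liabilities.get(borrower, 0) + amount

    def totals(self):
        return sum(self.assets.values()), sum(self.liabilities.values())

bank = ToyBank()
bank.make_loan("borrower", 250_000)
assert bank.totals() == (250_000, 250_000)  # both sides of the balance sheet grow together
```

Both credit risk (the borrower may default on the asset) and liquidity risk (the depositor may withdraw the liability) thus sit on the same balance sheet, which is why the bank managed both.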
Nevertheless, for most borrowers, banks remained the dominant source of debt capital right up until the late 1970s. It was only then that the revolutions in information technology and supply-chain management began to unlock the logic of specialisation and the division of labour in finance as in so many other industries. The debt capital markets, it was realised, represented a vast opportunity to create intermediaries that specialised in individual component activities of banks; and hence the potential for enormous gains in efficiency. Borrowers could continue to come to the bank, and loan officers to scrutinise their requests and knock them into reasonable shape. But the business of actually approving the loans and of monitoring the borrowers could be done just as well—perhaps better—by investors themselves. The bank would merely arrange, rather than implement, the allocation of credit. The loan itself need never appear on the bank’s balance sheet. It would become instead a financial security—a bond—owed by the borrower, and owned directly by the individual or institutional investor. The traditional banking model of doing everything in-house began to give way to a new model of “originate and distribute”—of specialising in the identification of borrowers and end-investors, while letting others do the screening, warehousing, and monitoring of the loans. The business of financing companies and individuals was beginning a great migration from the world of banks to the world of markets for financial securities distinguished by their issuers’ credit risk—the credit markets, for short.
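The shift can be caricatured in a few lines. In this minimal sketch (the fee rate and amounts are invented assumptions, not figures from the text), the bank under "originate and distribute" earns an arrangement fee, the bond sits on the investor's books, and nothing stays on the bank's balance sheet:

```python
# Caricature of "originate and distribute": all names and figures are invented.
def originate_and_distribute(amount, fee_bps=100):
    """The bank arranges a loan of `amount` for a fee quoted in basis points."""
    bank_fee = amount * fee_bps // 10_000   # the bank's fee for arranging the credit
    investor_bond = amount                  # the investor owns the bond directly
    loans_on_bank_books = 0                 # the loan never appears on the bank's balance sheet
    return bank_fee, investor_bond, loans_on_bank_books

# Arranging a 1,000,000 loan at an assumed 1% (100 bps) fee:
fee, bond, on_book = originate_and_distribute(1_000_000)
assert (fee, bond, on_book) == (10_000, 1_000_000, 0)
```

Contrast this with the traditional model, in which the full 1,000,000 would have sat on the bank's own balance sheet, with the bank bearing the credit and liquidity risk itself.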