Postwar
Economic competition, however tense, was nonetheless a certain sort of closeness. What was really driving the two continents apart was a growing disagreement about ‘values’. In the words of Le Monde, ‘the transatlantic community of values is crumbling’. Seen from Europe, America—which had become superficially familiar in the course of the Cold War—was starting to look very alien. The earnest religiosity of a growing number of Americans—reflected in their latest, ‘born-again’ president—was incomprehensible to most Christian Europeans (if not to their more devout Muslim neighbours). The American fondness for personal side arms, not excluding fully equipped semi-automatic rifles, made life in the US appear dangerous and anarchic, while for the overwhelming majority of European observers, the frequent and unapologetic resort to the death penalty seemed to place America beyond the pale of modern civilization.396
To these were added Washington’s growing disdain for international treaties, its unique perspective on everything from global warming to international law, and above all its partisan stance in the Israel-Palestine crisis. In none of these instances did American policy completely reverse direction following the election of President George W. Bush in 2000; the Atlantic gap had begun to open up well before. But the new Administration’s harsher tone confirmed for many European commentators what they already suspected: that these were not mere disagreements on discrete policy issues. They were mounting evidence of a fundamental cultural antagonism.
The idea that America was culturally different—or inferior, or threatening—was hardly original. In 1983 the French Culture Minister Jack Lang warned that the widely watched television series Dallas represented a serious threat to French and European identity. Nine years later, when Jurassic Park opened in French cinemas, he was echoed to the letter by one of his conservative successors. When EuroDisney was launched in the spring of 1992, the radical Parisian theatre director Ariane Mnouchkine went a step further and warned that the amusement park would prove ‘a cultural Chernobyl’. But this was the familiar small change of intellectual snobbery and cultural insecurity, mixed—in France as elsewhere—with more than a little chauvinist nostalgia. On the fiftieth anniversary of D-Day, Gianfranco Fini, leader of the ex-Fascist National Alliance Party in Italy, told the Italian daily La Stampa that ‘I hope I won’t be thought to be justifying Fascism if I wonder whether with the American landings Europe didn’t lose a part of her cultural identity’.
What was new about the situation at the beginning of the twenty-first century was that such sentiments were becoming commonplace, and had moved from the intellectual or political fringes deep into the center of European life. The depth and breadth of anti-American feeling in contemporary Europe far exceeded anything seen during the Vietnam War or even at the height of the peace movements of the early 1980s. Although a majority in most countries still believed that the Atlantic relationship could be preserved, three out of five Europeans in 2004 (many more than that in some countries, notably Spain, Slovakia and, strikingly, Turkey) thought strong American leadership in the world to be ‘undesirable’.
Some of this could be attributed to widespread dislike of the policies and person of President George W. Bush, in contrast to the affection in which Bill Clinton, his predecessor, had been held. But many Europeans had been angry at President Lyndon Johnson in the late Sixties; yet their feelings about the war in South-East Asia had not typically mutated into dislike of America or Americans in general. Forty years later there was a widespread feeling, all across the continent (and very much including the British, who angrily objected to their Prime Minister’s enthusiastic identification with his American ally), that there was something wrong with the kind of place that America was becoming—or, as many now insisted, had always been.
Indeed, the presumptively ‘un-American’ qualities of Europe were fast becoming the highest common factor in European self-identification. European values were contrasted with American values. Europe was—or should strive to be—everything that America wasn’t. In November 1998 Jérôme Clément, the President of Arte, a Franco-German television station devoted to culture and the arts, warned that ‘European creativity’ was the only bulwark against the sirens of American materialism and pointed to post-Communist Prague as a case in point, a city in danger of succumbing to ‘une utopie libérale mortelle’ (‘a deadly liberal utopia’): in thrall to deregulated markets and the lure of profit.
In the immediate post-Communist years Prague, like the rest of eastern Europe, would doubtless have pleaded guilty to a longing for all things American, from individual freedom to material abundance. And no-one visiting eastern European capitals, from Tallinn to Ljubljana, could miss the aggressive new élite of snappily dressed young men and women, zipping busily to appointments and shopping expeditions in their expensive new cars, enjoying the deadly liberal utopia of Clément’s nightmares. But even eastern Europeans were taking their distance from the American model: partly in deference to their new association with the European Union; partly because of growing aversion to aspects of American foreign policy; but increasingly because as an economic system and model of society the United States no longer seemed so self-evidently the way of the future.397
Extreme anti-Americanism in eastern Europe remained a minority taste. In countries like Bulgaria or Hungary it was now an indirect, politically acceptable way of expressing nostalgia for national Communism—and, as so often in the past, a serviceable surrogate for anti-Semitism. But even among mainstream commentators and politicians it was no longer commonplace to hold up American institutions or practices as a source of inspiration or an object to be emulated. For a long time America had been another time—Europe’s future. Now it was just another place. Many young people, to be sure, still dreamed of going to America. But as one Hungarian who had worked for some years in California explained to an interviewer: ‘America is the place to come when you are young and single. But if it is time to grow up, you should return to Europe’.
The image of America as the perennial land of youth and adventure—with twenty-first-century Europe cast as an indulgent paradise for the middle-aged and risk-averse—had wide currency, especially in America itself. And indeed Europe was growing older. Of the twenty countries in the world in 2004 with the highest share of people over sixty, all but one were in Europe (the exception was Japan). The birth rate in many European countries was well below replacement levels. In Spain, Greece, Poland, Germany and Sweden, fertility rates were below 1.4 children per woman. In parts of Eastern Europe (Bulgaria and Latvia, for example, or Slovenia) they were closer to 1.1, the lowest in the world. Projected forward through 2040 these data suggested that many European countries could expect population to fall by one fifth or more.
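The arithmetic of sub-replacement fertility is simple enough to sketch (the replacement threshold of roughly 2.1 children per woman is a standard demographic benchmark; the calculation below is an illustration, not a projection drawn from the sources cited here). Ignoring migration, the ratio of each generation to the last is approximately the total fertility rate divided by replacement:

\[
\frac{N_{t+1}}{N_t} \approx \frac{\mathrm{TFR}}{2.1}, \qquad \frac{1.4}{2.1} \approx 0.67, \qquad \frac{1.1}{2.1} \approx 0.52.
\]

On these assumptions a generation born at Spanish or German fertility levels would be about one third smaller than its parents’ generation, and one born at Bulgarian or Latvian levels nearly half; total population shrinks more slowly at first, because the large post-war cohorts take decades to disappear, which is broadly consistent with a fall of one fifth or more by 2040.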
None of the traditional explanations for fertility decline seemed to account for Europe’s incipient demographic crisis. Poor countries like Moldova and rich ones like Denmark faced the same challenge. In Catholic countries like Italy or Spain, young people (married and unmarried alike) often lived in their parents’ homes well into their thirties, whereas in Lutheran Sweden they had their own homes and access to generous levels of state-funded child-support and maternity leave. But although Scandinavians were having slightly more children than Mediterranean Europeans, the differences in fertility were less striking than the similarities. And the figures everywhere would have been lower still but for immigrants from outside Europe, who boosted the overall population numbers and had a much greater propensity to procreate. In Germany in 1960 the number of children born with one foreign parent was just 1.3 percent of the total for the year. Forty years later that figure had risen to one child in five.
The demographic scene in Europe was not actually so very different from that across the Atlantic—by the start of the new millennium the indigenous American birth-rate had also fallen below replacement levels. The difference was that the number of immigrants entering the US was so much larger—and they were disproportionately young adults—that overall fertility in the US looked set comfortably to outdistance that of Europe for the foreseeable future. And although the demographic troughs meant that both America and Europe might have trouble meeting public pension and other commitments in the decades ahead, the welfare systems of Europe were incomparably more generous and thus faced the greater threat.
Europeans were confronted with an apparently straightforward dilemma: what would happen if (when?) there weren’t enough young people working to cover the costs of a burgeoning community of retired citizens, now living much longer than in the past, paying no taxes and placing growing strain on medical services into the bargain?398 One answer was to reduce retirement benefits. Another was to raise the threshold at which those benefits were paid—i.e. make people work longer before retirement. A third alternative was to extract more taxes from the pay packets of those still in work. A fourth option, only really considered in Britain (and then half-heartedly), was to imitate the US and encourage or even oblige people to turn to the private sector for social insurance. All of these choices were potentially politically explosive.
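The arithmetic behind the dilemma can be made explicit with a stylized pay-as-you-go identity (the symbols and figures below are illustrative assumptions, not data from the text). If W workers on an average wage finance R retirees at a replacement rate ρ (the pension as a share of that wage), the payroll-tax rate t needed to balance the books is:

\[
t \cdot W \cdot \bar{w} = \rho \cdot \bar{w} \cdot R \quad\Longrightarrow\quad t = \rho \cdot \frac{R}{W}.
\]

With ρ = 0.5, a rise in the dependency ratio R/W from 0.3 to 0.5 pushes the required contribution from 15 to 25 percent of the wage bill. The four options above map directly onto the identity: lower ρ, shrink R/W by postponing retirement, raise t, or shift part of ρ from the public ledger to private insurance.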
For many free-market critics of Europe’s welfare states, the core problem facing Europe was not demographic shortfall but economic rigidity. It wasn’t that there weren’t, or wouldn’t be, enough workers—it was that there were too many laws protecting their salaries and their jobs, or else guaranteeing such elevated unemployment and pension payments that they lacked all incentive to work in the first place. If this ‘labour-market inflexibility’ were addressed and costly social provisions reduced or privatized, then more people could enter the workforce, the burden on employers and taxpayers would be alleviated, and ‘Eurosclerosis’ could be overcome.
As a diagnosis this was both true and false. There was no question that some of the rewards of the welfare state, negotiated and locked into place at the peak of the post-war boom, were now a serious burden. Any German worker who lost his or her job was entitled to 60 percent of their last wage packet for the next thirty-two months (67 percent if they had a child). After that the monthly payments fell to 53 percent (or 57 percent) of their last wage packet—indefinitely. Whether this safety net discouraged people from seeking paid work was unclear. But it came at a price. A penumbra of regulations designed to protect the interests of employed workers made it hard for employers in most EU countries (notoriously France) to sack full-time workers: their consequent reluctance to hire contributed to stubbornly high rates of youth unemployment.
On the other hand, the fact that they were highly regulated and inflexible by American standards did not mean that Europe’s economies were necessarily inefficient or unproductive. In 2003, when measured in terms of productivity per hour worked, the economies of Switzerland, Denmark, Austria and Italy were all comparable to the US. By the same criterion Ireland, Belgium, Norway, the Netherlands and France (sic) all out-produced the US. If America was nevertheless more productive overall—if Americans made more goods, services and money—it was because a higher percentage of them were in paid jobs; they worked longer hours than Europeans (three hundred more hours per year on average in 2000); and they had far fewer and shorter holidays.
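The distinction becomes clearer in a standard accounting identity (the hours figures below are illustrative assumptions, anchored only to the 300-hour gap just cited):

\[
\frac{\text{GDP}}{\text{population}} = \frac{\text{GDP}}{\text{hours}} \times \frac{\text{hours}}{\text{worker}} \times \frac{\text{workers}}{\text{population}}.
\]

Even at identical output per hour, a workforce averaging 1,800 hours a year rather than 1,500 produces 20 percent more per worker; add a higher share of the population in paid employment, and America’s aggregate lead follows without any American advantage in hourly productivity.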
Whereas the British were legally entitled to 23 paid vacation days annually, the French to 25 and the Swedes to 30 or more, many Americans had to settle for less than half as much paid vacation, depending on where they lived. Europeans had made a deliberate choice to work less, earn less—and live better lives. In return for their uniquely high taxes (another impediment to growth and innovation, in the eyes of Anglo-American critics) Europeans received free or nearly free medical services, early retirement and a prodigious range of social and public services. Through secondary school they were better educated than Americans. They lived safer and—partly for that reason—longer lives, enjoyed better health (despite spending far less399) and had many fewer people in poverty.
This, then, was the ‘European Social Model’. It was without question very expensive. But for most Europeans its promise of job security, progressive tax rates and large social transfer payments represented an implicit contract between government and citizens, as well as between one citizen and another. According to the annual ‘Eurobarometer’ polls, an overwhelming majority of Europeans took the view that poverty was caused by social circumstances and not individual inadequacy. They also showed a willingness to pay higher taxes if these were directed to alleviating poverty.
Such sentiments were predictably widespread in Scandinavia. But they were almost as prevalent in Britain, or in Italy and Spain. There was a broad international, cross-class consensus about the duty of the state to shield citizens from the hazards of misfortune or the market: neither the firm nor the state should treat employees as dispensable units of production. Social responsibility and economic advantage should not be mutually exclusive—‘growth’ was laudable, but not at all costs.
This European model came in more than one style: the ‘Nordic’, the ‘Rhineland’, the ‘Catholic’, and variations within each. What they had in common was not a discrete set of services or economic practices, or a particular level of state involvement. It was, rather, a sense—sometimes spelled out in documents and laws, sometimes not—of the balance of social rights, civic solidarity and collective responsibility that was appropriate and possible for the modern state. The aggregate outcomes might look very different in, say, Italy and Sweden. But the social consensus they incorporated was regarded by many citizens as formally binding—when, in 2004, the Social Democratic Chancellor of Germany introduced changes in the country’s welfare payments, he ran into a firestorm of social protest, just as a Gaullist government had done ten years earlier when proposing similar reforms in France.
Ever since the 1980s there had been various attempts to resolve the choice between European social solidarity and American-style economic flexibility. A younger generation of economists and entrepreneurs, some of whom had spent time in US business schools or firms and were frustrated at what they saw as the inflexibility of the European business environment, had impressed upon politicians the need to ‘streamline’ procedures and encourage competition. The aptly named ‘Gauche Américaine’ (‘American Left’) in France set out to release the Left from its anti-capitalist complex while retaining its social conscience; in Scandinavia, the inhibiting effect of high taxation was discussed (if not always conceded) even in Social Democratic circles. The Right had been brought to acknowledge the case for welfare; the Left would now recognize the virtues of profit.
The effort to combine the best of both sides overlapped, not coincidentally, with the search for a project to replace the defunct debate between capitalism and socialism that had formed the core of Western politics for over a century. The result, for a brief moment at the end of the 1990s, was the so-called ‘Third Way’: ostensibly blending enthusiasm for unconstrained capitalist production with due consideration for social outcomes and the collective interest. This was hardly new: it added little of substance to Ludwig Erhard’s ‘social market economy’ of the 1950s. But politics, especially post-ideological politics, is about form; and it was the form of the Third Way, modeled on Bill Clinton’s successful ‘triangulation’ of Left and Right and articulated above all by New Labour’s Tony Blair, which seduced observers.
Blair, of course, had certain advantages unique to his time and place. In the UK, Margaret Thatcher had moved the political goalposts far to the Right, while Blair’s predecessors in the Labour leadership had done the hard work of destroying the Party’s old Left. In a post-Thatcher environment, Blair could thus sound plausibly progressive and ‘European’ merely by saying positive things about the desirability of well-distributed public services; meanwhile his much-advertised admiration for the private sector, and the business-friendly economic environment his policies sought to favour, placed him firmly in the ‘American’ camp. He spoke warmly of bringing Britain into the European fold; but insisted nonetheless on keeping his country exempt from the social protections of European legislation and the fiscal harmonization implicit in the Union’s ‘single market’.
The Third Way was marketed as both a pragmatic solution to economic and social dilemmas and a significant conceptual breakthrough after decades of theoretical stagnation. Its continental admirers, heedless of the aborted ‘third ways’ in their own national pasts—notably the popular Fascist ‘third way’ of the 1930s—were keen to sign on. Under Jacques Delors (1985-1995) the European Commission had appeared a trifle preoccupied with devising and imposing norms and rules—substituting ‘Europe’ for the lost inheritance of Fabian-style bureaucratic socialism. Brussels, too, seemed in need of a Third Way: an uplifting story of its own that could situate the Union between institutional invisibility and regulatory excess.400
Blair’s new-look politics would not long survive the disastrous decision to embroil his country and his reputation in the 2003 invasion of Iraq—a move which merely reminded foreign observers that New Labour’s Third Way was inseparably intertwined with the UK’s reluctance to choose between Europe and the United States. And the evidence that Britain, like the US, was seeing a dramatic rise in the numbers of the poor—in contrast to the rest of the EU, where poverty was increasing modestly, if at all—severely diminished the appeal of the British model. But the Third Way was always going to have a short shelf life. Its very name implied the presence of two extremes—ultra free-market capitalism and state socialism—neither of which still existed (and the former of which had always been a figment of doctrinal imaginations). The need for a dramatic theoretical (or rhetorical) breakthrough had passed.
Thus privatization in the early 1980s had been controversial, provoking widespread discussion of the reach and legitimacy of the public sector and calling into question the attainability of social-democratic objectives and the moral legitimacy of the profit motive in the delivery of public goods. By 2004, however, privatization was a strictly pragmatic business. In eastern Europe, it was a necessary condition for membership of the EU, in conformity with Brussels’ strictures against market-distorting public subsidies. In France or Italy, the sale of publicly owned assets was now undertaken as a short-term book-keeping device to reduce the annual deficit and stay within euro-zone rules.