Out of Our Minds


by Felipe Fernandez-Armesto


  In the 1950s, great red hopes focused on the Chinese ideologue Mao Zedong (or Tse-tung, according to traditional methods of transliteration, which sinologists have unhelpfully abandoned but which linger in the literature to confuse uninstructed readers). To most scrutineers, the Mexican revolution of 1910 and the Chinese revolution of 1911 showed that Marx was right about one thing: revolutions dependent on peasant manpower in unindustrialized societies would not produce the outcomes Marxists yearned for. Mao thought otherwise. Perhaps because, unlike most of his fellow-communists, he had read little by Marx and understood less, he could propose a new strategy of peasant revolution, independent of the Russian model, defiant of Russian advice, and unprejudiced by Marxist orthodoxy. It was, Stalin said, ‘as if he doesn’t understand the most elementary Marxist truths – or maybe he doesn’t want to understand them’.35 Like Descartes and Hobbes, Mao confided in his own brilliance, uncluttered by knowledge. ‘To read too many books’, he said, ‘is harmful.’36 His strategy suited China. He summed it up in a much-quoted slogan: ‘When the enemy advances, we retreat; when he halts, we harass; when he retreats, we pursue.’37 Through decades of limited success as a vagabond warlord leader, he survived and ultimately triumphed by dogged perseverance (which he later misrepresented as military genius). He throve in conditions of emergency, and from 1949, when he controlled the whole of mainland China, he provoked endless new crises to keep his regime going. He had run out of ideas but had lots of what he called thoughts. From time to time he launched capricious campaigns of mass destruction against rightists and leftists, bourgeois deviationists, alleged class enemies, and even at different times against dogs and sparrows. Official crime rates were low, but habitual punishment was more brutalizing than occasional crime. Propaganda occluded the evils and failures. Mao suckered Westerners anxious for a philosophy they could rely on. Teenagers of my generation marched in demonstrations against wars and injustices, naively waving copies of the ‘Little Red Book’ of Mao’s thoughts, as if the text contained a remedy.

  Some of Mao’s revolutionary principles were dazzlingly reactionary: he thought class enmity was hereditary. He outlawed romantic love, along with, at one time, grass and flowers. He wrecked agriculture by taking seriously and applying rigorously the ancient role of the state as a hoarder and distributor of food. His most catastrophic expedient was the class war he called the ‘Great Proletarian Cultural Revolution’ of the sixties. Children denounced parents and students beat teachers. The ignorant were encouraged in the slaughter of intellectuals, while the educated were reassigned to menial work. Antiquities were smashed, books burned, beauty despised, study subverted, work stopped. The limbs of the economy got broken in the beatings. While an efficient propaganda machine generated fake statistics and images of progress, the truth gradually seeped out. The resumption of China’s normal status as one of the world’s most prosperous, powerful countries, and as an exemplary civilization, was postponed. The signs of recovery are only beginning to be apparent in the early years of the twenty-first century. In the meantime Mao’s influence held back the world, blighting many new, backward, economically underdeveloped states with a malignant example, encouraging experiments with economically ruinous and morally corrupting programmes of political authoritarianism and command economics.38

  In the absence of credible ideology, the economic and political consensus in the West fell back on modest expectations: delivering economic growth and social welfare. The thinker who did most to shape the consensus was John Maynard Keynes, who, unusually for a professional economist, was good at handling money, and turned his studies of probability into shrewd investments. Privileged by his education and by his friendships in England’s social and political establishment, he embodied self-assurance and projected it into optimistic formulae for ensuring the future prosperity of the world.

  Keynesianism was a reaction against the capitalist complacency of the 1920s in industrialized economies. Automobiles became articles of mass consumption. Construction flung ‘towers up to the sun’.39 Pyramids of millions of shareholders were controlled by a few ‘pharaohs’.40 A booming market held out prospects of literally universal riches. In 1929 the world’s major markets crashed and banking systems failed or tottered. The world entered the most abject and protracted recession of modern times. The obvious suddenly became visible: capitalism needed to be controlled, exorcized, or discarded. In America, President Franklin D. Roosevelt proposed a ‘New Deal’ for government initiatives in the marketplace. Opponents denounced the scheme as socialistic, but it was really a kind of patchwork that covered the fraying and left capitalism intact.

  Keynes did the comprehensive rethinking that informed every subsequent reform of capitalism. He challenged the idea that the unaided market produces the levels of production and employment society needs. Savings, he explained, immobilize some wealth and some economic potential; false expectations, moreover, skew the market: people overspend in optimism and underspend when they get the jitters. By borrowing to finance utilities and infrastructure, governments and institutions could get unemployed people back into work while also building up economic potential, which, when realized, would yield tax revenues to cover the costs of the projects retrospectively. This idea appeared in Keynes’s General Theory of Employment, Interest and Money in 1936. For a long time, Keynesianism seemed to work for every government that tried it. It became orthodoxy, justifying ever higher levels of public expenditure all over the world.
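  A rough arithmetic sketch may help to make that logic concrete (the multiplier formula and the figures below are a standard textbook simplification, offered as an illustration rather than as Keynes’s own presentation). If households spend a fraction $c$ of each extra unit of income they receive, an injection of public spending $G$ circulates from hand to hand and raises total income by roughly
$$\Delta Y \approx G\,(1 + c + c^{2} + \cdots) = \frac{G}{1-c},$$
so that with $c = 0.8$ an outlay of 100 raises income by something like 500 in the simplest version of the argument. Taxes levied on that enlarged income claw back part of the cost at once, and the roads, power stations, and other assets the spending creates go on yielding revenue in later years: the sense in which the projects could pay for themselves retrospectively.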

  Economics, however, is a volatile science and few of its laws last for long. In the second half of the twentieth century, prevailing economic policy in the developed world tacked between ‘planning’ and ‘the market’ as rival panaceas. Public expenditure turned out to be no more rational than the market. It saved societies that tried it in the emergency conditions of the thirties, but in more settled times it caused waste, inhibited production, and stifled enterprise. In the 1980s, Keynesianism became a victim of a widespread desire to ‘claw back the state’, deregulate economies, and reliberate the market. An era of trash capitalism, market volatility, and obscene wealth gaps followed, when some of the lessons of Keynesianism had to be relearned. Our learning still seems laggard. In 2008 deregulation helped precipitate a new global crash – ‘meltdown’ was the preferred term. Though US administrations adopted a broadly Keynesian response, borrowing and spending their way out of the crisis, most other governments preferred pre-Keynesian programmes of ‘austerity’, squeezing spending, restricting borrowing, and bidding for financial security. The world had hardly begun to recover when, in 2016, a US presidential election installed a government resolved to deregulate again (albeit also, paradoxically, making a commitment to splurging on infrastructure).41

  The future that radicals imagined never happened. Expectations dissolved in the bloodiest wars ever experienced. Even in states such as the United States or the French republic, founded in revolutions and regulated by genuinely democratic institutions, ordinary people never achieved power over their own lives or over the societies they formed. How much good, after so much disappointment, could a modestly benevolent state do? Managing economies in recession and manipulating society for war suggested that states had not exhausted their potential. Power, like appetite according to one of Molière’s characters, vient en mangeant (it comes with eating), and some politicians saw opportunities to use it for good or, at least, to preserve social peace in their own interest. Perhaps, even if they could not deliver the virtue ancient philosophers dreamed of (see here), they could at least become instruments of welfare. Germany had introduced a government-managed insurance scheme in the 1880s, but the welfare state was a more radical idea, proposed by the Cambridge economist Arthur Pigou in the 1920s: the state could tax the rich to provide benefits for the poor, rather as ancient despotisms enforced redistribution to guarantee the food supply. Keynes’s arguments in favour of regenerating moribund economies by huge injections of public spending were in line with this sort of thinking. Its most effective exponent was William Beveridge.

  During the Second World War the British government commissioned him to draft plans for an improved social insurance scheme. Beveridge went further, imagining ‘a better new world’, in which a mixture of national insurance contributions and taxation would fund universal healthcare, unemployment benefits, and retirement pensions. ‘The purpose of victory’, he declared, ‘is to live in a better world than the old world.’42 Few government reports have been so widely welcomed at home or so influential abroad. The idea encouraged President Roosevelt to proclaim a future ‘free from want’. Denizens of Hitler’s bunker admired it. Postwar British governments adopted it with near cross-party unanimity.43

  It became hard to call a society modern or just without a scheme broadly of the kind Beveridge devised; but the limits of the role of the state in redistributing wealth, alleviating poverty, and guaranteeing healthcare have been and are still fiercely contested in the name of freedom and in deference to the market. On the one hand, universal benefits deliver security and justice in individual lives and make society more stable and cohesive; on the other, they are expensive. In the late twentieth and early twenty-first centuries two circumstances threatened welfare states, even where they were best established in Western Europe, Canada, Australia, and New Zealand. First, inflation made the future insecure, as each successive generation struggled to provide for the costs of care for its elders. Second, even when inflation came under something like control, the demographic balance of developed societies began to shift alarmingly. The workforce got older, the proportion of retired people swelled, and it became apparent that there would not be enough young, productive people to pay the escalating costs of social welfare. Governments have tried various ways of coping, without dismantling the welfare state; despite sporadic efforts by presidents and legislators from the 1960s onward, the United States never introduced a comprehensive system of state-run healthcare. Even President Obama’s scheme, implemented in the teeth of conservative mordancy, left some of the poorest citizens uncovered and kept the state off the health insurance industry’s turf. Obamacare’s problems were intelligible against the background of a general drift back to an insurance-based concept of welfare, in which most individuals reassume responsibility for their own retirement and, to some extent, for their own healthcare costs and unemployment provision. The state mops up marginal cases.

  The travails of state welfare were part of a bigger problem: the deficiencies and inefficiencies of states generally. States that built homes erected dreary dystopias. When industries were nationalized, productivity usually fell. Regulated markets inhibited growth. Overplanned societies worked badly. State efforts to manage the environment generally led to waste and degradation. For much of the second half of the twentieth century, command economies in Eastern Europe, China, and Cuba largely failed. The mixed economies of Scandinavia, with a high degree of state involvement, fared only a little better: they aimed at universal well-being, but produced suicidal utopias of frustrated and alienated individuals. History condemned other options – anarchism, libertarianism, the unrestricted market.

  Conservatism had a poor reputation. ‘I do not know’, said Keynes, ‘which makes a man more conservative: to know nothing but the present, or nothing but the past.’44 Nonetheless, the tradition that inspired the most promising new thinking in politics and economics in the second half of the twentieth century came from the right. F. A. Hayek initiated most of it. He deftly adjusted the balancing act – between liberty and social justice – that usually topples political conservatism. As Edmund Burke (see here) observed, initiating, toward the end of the eighteenth century, the tradition Hayek fulfilled, ‘to temper together these opposite elements of liberty and restraint in one consistent work, requires much thought, deep reflection, in a sagacious, powerful and combining mind’.45 Hayek’s was the mind that met those conditions. He came close to stating the best case for conservatism: most government policies are benign in intention, malign in effect. That government, therefore, is best which governs least. Since efforts to improve society usually end up making it worse, the wisest course is to tackle the imperfections modestly, bit by bit. Hayek shared, moreover, traditional Christian prejudice in favour of individualism. Sin and charity demand individual responsibility, whereas ‘social justice’ diminishes it. The Road to Serfdom of 1944 proclaimed Hayek’s key idea: ‘spontaneous social order’ is not produced by conscious planning but emerges out of a long history – a richness of experience and adjustment that short-term government intervention cannot reproduce. Social order, he suggested (bypassing the need to postulate a ‘social contract’), arose spontaneously, and when it did so law was its essence: ‘part of the natural history of mankind … coeval with society’ and therefore prior to the state. ‘It is not the creation of any governmental authority’, Hayek said, ‘and it is certainly not the command of the sovereign.’46 The rule of law overrides the dictates of rulers – a recommendation highly traditional and constantly urged (though rarely observed) in the Western tradition since Aristotle. Only law can set proper limits to freedom. ‘If’, Hayek opined, ‘individuals are to be free to use their own knowledge and resources to best advantage, they must do so in a context of known and predictable rules governed by law.’47 For doctrines of this kind the fatal problem is, ‘Who says what these natural laws are, if not the state?’ Religious supremos, as in the Islamic Republic of Iran? Unelected jurists, such as were empowered in the late twentieth century by the rise of an international body of law connected with human rights?

  During the overplanned years, Hayek’s was a voice unheard in the wilderness. In the 1970s, however, he re-emerged as the theorist of a ‘conservative turn’ that seemed to conquer the world, as the political mainstream in developed countries drifted rightward in the last two decades of the twentieth century. His major impact was on economic life, thanks to the admirers he began to attract among economists of the Chicago School, when he taught briefly at the University of Chicago in the 1950s. The university was well endowed and therefore a law unto itself. It was isolated in a marginal suburb of its own city, where professors were thrown on each other’s company. It was out of touch with most of the academic world, variously envious and aloof. So it was a good place for heretics to nurture dissidence. Chicago economists, of whom Milton Friedman was the most vocal and the most persuasive, could defy economic orthodoxy. They rehabilitated the free market as an unsurpassed way of delivering prosperity. In the 1970s they became the recourse of governments in retreat from regulation and in despair at the failures of planning.48

  The Retrenchment of Science

  When chaos and coherence compete, both thrive. Uncertainty makes people want to escape back into a predictable cosmos. Advocates of every kind of determinism therefore found the postmodern world paradoxically congenial. Attempts were rife to invoke machines and organisms as models for simplifying the complexities of thought or behaviour, and replacing honest bafflement with affected assurance. One method was to try to eliminate mind in favour of brain – seeking chemical, electrical, and mechanical patterns that could make the vagaries of thought intelligible and predictable.

  To understand artificial intelligence – as people came to call the object of these attempts – an excursion into its nineteenth-century background is necessary. One of the great quests of modern technology – for a machine that can do humans’ thinking for them – has been inspired by the notion that minds are kinds of machine and thought is a mechanical business. George Boole belonged in that category of Victorian savants whom we have already met (see here) and who sought to systematize knowledge – in his case, by exposing ‘laws of thought’. His formal education was patchy and poor, and he lived in relative isolation in Ireland; most of the mathematical discoveries he thought he made for himself were already familiar to the rest of the world. Yet he was an uninstructed genius. In his teens he started suggestive work on binary notation – counting with two digits instead of the ten we usually use in the modern world. His efforts put a new idea into the head of Charles Babbage.
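  A worked example (mine, not Boole’s notation) shows how little is needed: in binary only the digits 0 and 1 appear, and each place stands for a power of two, so that thirteen is written
$$13 = 1\cdot 2^{3} + 1\cdot 2^{2} + 0\cdot 2^{1} + 1\cdot 2^{0} = 1101_{2}.$$
Any number can therefore be held in a row of two-state parts (toothed wheels, relays, or transistors), each of which needs only to be ‘on’ or ‘off’.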

  From 1828 Babbage occupied the Cambridge chair held formerly by Newton and later by Stephen Hawking. When he encountered Boole’s work, he was trying to eliminate human error from astronomical tables by calculating them mechanically. Commercially viable machines already existed to perform simple arithmetical functions. Babbage hoped to use something like them for complex trigonometric operations, not by improving the machines but by simplifying the trigonometry. Transformed into addition and subtraction, it could be confided to cogs and wheels. If successful, his work might revolutionize navigation and imperial mapping by making astronomical tables reliable. In 1833, Boole’s data made Babbage abandon work on the relatively simple ‘difference engine’ he had in mind, and broach plans for what he called an ‘analytical engine’. Though operated mechanically, it would anticipate the modern computer by using the binary system for computations of amazing range and speed. Holes punched in cards controlled the operations of Babbage’s device, as in early electronic computers. His new scheme was better than the old, but with the habitual myopia of bureaucracies, the British government withdrew its sponsorship. Babbage had to spend his own fortune.
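  The trick of reducing a table to additions can be sketched in a few lines of modern code (a minimal illustration, under my own assumptions, of the finite-difference method the difference engine was meant to mechanize; the polynomial, the names, and the values are mine, not Babbage’s design):

# A minimal sketch of the finite-difference idea behind the difference engine:
# once the leading differences of a polynomial's values are known, every
# further value can be produced by additions alone, with no multiplication.
# The example polynomial and seed values are illustrative assumptions.

def leading_differences(values):
    """Return the first entry of each successive row of differences."""
    rows = [list(values)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    return [row[0] for row in rows]

def tabulate(column, steps):
    """Extend the table by repeated addition, as the engine's wheels would."""
    col = list(column)
    out = []
    for _ in range(steps):
        out.append(col[0])
        for i in range(len(col) - 1):   # add each difference into the entry above it
            col[i] += col[i + 1]
    return out

# Tabulate f(x) = x*x + x + 41 from three seed values worked out by hand.
seed = [x * x + x + 41 for x in range(3)]      # 41, 43, 47
print(tabulate(leading_differences(seed), 8))  # 41, 43, 47, 53, 61, 71, 83, 97

Babbage’s machine performed the same arithmetic with columns of toothed wheels rather than variables, which is why simplifying the trigonometry mattered more than improving the mechanism.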

  Despite the assistance of the gifted amateur mathematician Ada Lovelace, Byron’s daughter, Babbage could not perfect his machine. The power of electricity was needed to realize its full potential; the early specimens made in Manchester and at Harvard were the size of small ballrooms and therefore of limited usefulness, but computers developed rapidly in combination with microtechnology, which shrank them, and telecommunications technology, which linked them along telephone lines and by radio signals, so that they could exchange data. By the early twenty-first century, computer screens opened onto a global village, with virtually instant mutual contact. The advantages and disadvantages are nicely balanced: information overkill has glutted minds and, perhaps, dulled a generation, but the Internet has multiplied useful work, diffused knowledge, and served freedom.

 
