The title of David Harvey’s book The Condition of Postmodernity is strikingly similar to Lyotard’s Postmodern Condition. First published in 1980, it was reissued in 1989 in a much revised version, taking into account the many developments in postmodernism during that decade.51 Contrasting postmodernity with modernity, Harvey begins by quoting an editorial in the architectural magazine Precis 6: ‘Generally perceived as positivistic, technocentric, and rationalistic, universal modernism has been identified with the belief in linear progress, absolute truths, the rational planning of ideal social orders, and the standardisation of knowledge and production. Postmodernism, by way of contrast, privileges “heterogeneity and differences as liberative forces in the redefinition of cultural discourse.” Fragmentation, indeterminacy, and intense distrust of all universal or “totalising” discourses (to use the favoured phrase) are the hallmark of postmodernist thought. The rediscovery of pragmatism in philosophy (e.g., Rorty, 1979), the shift of ideas about the philosophy of science wrought by Kuhn (1962) and Feyerabend (1975), Foucault’s emphasis on discontinuity and difference in history and his privileging of “polymorphous correlations in place of simple or complex causality,” new developments in mathematics emphasising indeterminacy (catastrophe and chaos theory, fractal geometry), the re-emergence of concern in ethics, politics and anthropology for the validity and dignity of “the other,” all indicate a widespread and profound shift in “the structure of feeling.” What all these examples have in common is a rejection of “metanarratives” (large-scale theoretical interpretations purportedly of universal application).’52 Harvey moves beyond this summing-up, however, to make four contributions of his own. First, he describes postmodernism in architecture (the form in which most people probably encounter it); second, and most valuably, he looks at the political and economic conditions that brought about postmodernism and sustain it; third, he looks at the effect of postmodernism on our conceptions of space and time (he is a geographer, after all); and fourth, he offers a critique of postmodernism, something that was badly needed.
In the field of architecture and urban design, Harvey tells us that postmodernism signifies a break with the modernist idea that planning and development should focus on ‘large-scale, metropolitan-wide, technologically rational and efficient urban plans, backed by absolutely no-frills architecture (the austere “functionalist” surfaces of “international style” modernism). Postmodernism cultivates, instead, a conception of the urban fabric as necessarily fragmented, a “palimpsest” of past forms superimposed upon each other, and a “collage” of current uses, many of which may be ephemeral.’ Harvey puts the beginning of postmodernism in architecture as early as 1961, with Jane Jacobs’s The Death and Life of Great American Cities (see chapter 30), one of the ‘most influential anti-modernist tracts’, with its concept of ‘the great blight of dullness’ brought on by the international style, which was too static for cities, where processes are of the essence.53 Cities, Jacobs argued, need organised complexity, one important ingredient of which, typically absent in the international style, is diversity. Postmodernism in architecture and in the city, Harvey says, essentially meets the new economic, social, and political conditions prevalent since about 1973, the time of the oil crisis and of the major reserve currencies’ departure from the gold standard. A whole series of trends, he says, favoured a more diverse, fragmented, intimate yet anonymous society, essentially composed of much smaller units of diverse character. For Harvey the twentieth century can be conveniently divided into the Fordist years – broadly speaking 1913 to 1973 – and the years of ‘flexible accumulation.’ Fordism, which included the ideas enshrined in Frederick Winslow Taylor’s Principles of Scientific Management (1911), was for Harvey a whole way of life, bringing mass production, standardisation of product, and mass consumption:54 ‘The progress of Fordism internationally meant the formation of global mass markets and the absorption of the mass of the world’s population, outside the communist world, into the global dynamics of a new kind of capitalism.’55 Politically, it rested on notions of mass economic democracy welded together through a balance of special-interest forces.56 The restructuring of oil prices, coming on top of war, brought about a major recession, which helped catalyse the breakup of Fordism, and the regime of ‘flexible accumulation’ began.57
The adjustment to this new reality, according to Harvey, had two main elements. First, flexible accumulation ‘is marked by a direct confrontation with the rigidities of Fordism. It rests on flexibility with respect to labour processes, labour markets, products and patterns of consumption. It is characterised by the emergence of entirely new sectors of production, new ways of providing financial services, new markets, and, above all, greatly intensified rates of commercial, technological, and organisational innovation.’58 Second, there has been a further round of space-time compression, emphasising the ephemeral, the transient, the always-changing. ‘The relatively stable aesthetic of Fordist modernism has given way to all the ferment, instability, and fleeting qualities of a postmodernist aesthetic that celebrates difference, ephemerality, spectacle, fashion, and the commodification of cultural forms.’59 This whole approach, for Harvey, culminated in the 1985 exhibition at the Pompidou Centre in Paris, which had Lyotard as one of its consultants. It was called The Immaterial.
Harvey, as was said earlier, was not uncritical of postmodernism. Elements of nihilism are encouraged, he believes, and there is a return to narrow and sectarian politics ‘in which respect for others gets mutilated in the fires of competition between the fragments.’60 Travel, even imaginary travel, need not broaden the mind; it may only confirm prejudices. Above all, he asks, how can we advance if knowledge and meaning are reduced ‘to a rubble of signifiers’?61 His verdict on the postmodern condition was not wholly flattering: ‘confidence in the association between scientific and moral judgements has collapsed, aesthetics has triumphed over ethics as a prime focus of social and intellectual concern, images dominate narratives, ephemerality and fragmentation take precedence over eternal truths and unified politics, and explanations have shifted from the realm of material and political-economic groundings towards a consideration of autonomous cultural and political practices.’62
39
‘THE BEST IDEA, EVER’
Narborough is a small village about ten miles south of Leicester, in the British East Midlands. Late on the evening of 21 November 1983 a fifteen-year-old girl, Lynda Mann, was sexually assaulted and strangled, her body left in a field not too far from her home. A manhunt was launched, but the investigation revealed nothing. Interest in the case died down until the summer of 1986, when on 2 August the body of another fifteen-year-old, Dawn Ashworth, was discovered in a thicket of blackthorn bushes, also near Narborough. She too had been strangled, after being sexually assaulted.
The manhunt this time soon produced a suspect, Richard Buckland, a porter in a nearby hospital.1 He was arrested exactly one week after Dawn’s body was found, following his confession. The similarities in the victims’ ages, the method of killing, and the proximity to Narborough naturally made the police wonder whether Richard Buckland might also be responsible for the death of Lynda Mann, and with this in mind they called upon the services of a scientist who had just developed a new technique, which had become known to police and public alike as ‘genetic fingerprinting.’2 This advance was the brainchild of Professor Alec Jeffreys of Leicester University. Like so many scientific discoveries, Jeffreys’s breakthrough came in the course of his investigation of something else – he was looking to identify the myoglobin gene, myoglobin being the protein that carries oxygen from the blood into muscle tissue. Jeffreys was in fact using the myoglobin gene to look for ‘markers,’ characteristic formations of DNA that would identify, say, certain families and would help scientists see how populations varied genetically from village to village, and country to country. What Jeffreys found was that on this gene one section of DNA was repeated over and over again. He soon found that the same observation – repeated sections – was being made in other experiments, investigating other chromosomes.
What he realised, and no one else did, was that there seemed to be a widespread weakness in DNA that caused this pointless duplication to take place. As Walter Bodmer and Robin McKie describe it, the process is analogous to a stutterer who repeatedly stammers over the same letter. Moreover, this weakness differed from person to person. The crucial repeated segment was about fifteen base pairs long, and Jeffreys set about identifying it in such a way that it could be seen by eye with the aid of just a microscope. He first froze the blood sample, then thawed it, which broke down the membranes of the red blood cells, but not those of the white cells that contain DNA. With the remains of the red blood cells washed away, an enzyme called proteinase K was added, exploding the white cells and freeing the DNA coils. These were then treated with another enzyme, known as HinfI, which separates out the ribbons of DNA that contain the repeated sequences. Finally, by a process known as electrophoresis, the DNA fragments were sorted into bands of different length and transferred to nylon sheets, where radioactive or luminescent techniques obtained images unique to individuals.3
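The logic of comparing two such band patterns can be pictured in a few lines of code. The sketch below is purely illustrative and is not Jeffreys’s actual procedure: each sample is reduced to a list of fragment lengths (the positions of the bands on the gel), and two samples are declared consistent when every band in one has a counterpart in the other within a small tolerance. The function name, the example lengths, and the tolerance value are all assumptions made for the example; real forensic matching compares many loci with statistical weighting.

```python
# Illustrative sketch only: a DNA 'fingerprint' reduced to a list of fragment
# lengths (band positions). Names, lengths and tolerance here are assumptions.

def bands_match(profile_a, profile_b, tolerance=0.02):
    """Return True if every band in profile_a has a counterpart in
    profile_b within the given relative length tolerance."""
    if len(profile_a) != len(profile_b):
        return False
    unmatched = list(profile_b)
    for band in profile_a:
        hit = next((b for b in unmatched
                    if abs(b - band) / band <= tolerance), None)
        if hit is None:
            return False
        unmatched.remove(hit)
    return True

# Hypothetical fragment lengths (in base pairs) from two samples.
crime_scene_sample = [1540, 2210, 3485, 5120]
suspect_sample = [1532, 2205, 3470, 5150]
print(bands_match(crime_scene_sample, suspect_sample))  # True: consistent profiles
```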
Jeffreys was called in to try this technique with Richard Buckland. He was sent samples of semen taken from the bodies of both Lynda Mann and Dawn Ashworth, together with a few cubic centimetres of Buckland’s blood. Jeffreys later described the episode as one of the tensest moments of his life. Until that point he had used his technique simply to test whether immigrants to Britain, admitted under a law that allowed entry only to close relatives of people already living in the country, really were as closely related as they claimed. A double murder case would clearly attract far more attention. When he went into his lab late one night to get the results, because he couldn’t bear hanging on until the next morning, he got a shock. He lifted the film from its developing fluid and could immediately see that the semen taken from Lynda and Dawn came from the same man – but that killer wasn’t Richard Buckland.4 The police were infuriated when he told them. Buckland had confessed. To the police mind, that meant the new technique had to be flawed. Jeffreys was dismayed, but when an independent test by Home Office forensic experts confirmed his findings, the police were forced to think again, and Buckland was eventually acquitted, the first person ever to benefit in this way from DNA testing. Once they had adjusted to the surprising result, the police mounted a campaign to test the DNA of all the men in the Narborough area. Some 4,000 men came forward, but no match was obtained until Ian Kelly, a baker who lived some distance from Narborough, revealed to friends that he had taken the test on behalf of a friend, Colin Pitchfork, who did live in the vicinity of the village. Worried by this deception, one of Kelly’s friends alerted the police. Pitchfork was arrested and DNA-tested. The friend had been right to be worried: tests showed that Pitchfork’s DNA matched the semen found on Lynda and Dawn. In January 1988, Pitchfork became the first person to be convicted of murder on the strength of genetic fingerprinting. He went to prison for life.5
DNA fingerprinting was the most visible aspect of the revolution in molecular biology. Throughout the late 1980s it came into widespread use, for testing immigrants and men in paternity suits, as well as in rape cases. Its practical successes, so soon after the structure of the double helix had been identified, underlined the new intellectual climate initiated by techniques to clone and sequence genetic material. In tandem with these practical developments, a great deal of theorising about genetics revised and refined our understanding of evolution. In particular, much light was thrown on the stages of evolutionary progress, working forward from the moment life began, and on the philosophical implications of evolution.
In 1985 a Glasgow-based chemist, A. G. Cairns-Smith, published Seven Clues to the Origin of Life.6 In some ways a maverick, Cairns-Smith gave in this book a totally different view of how life began from the one most biologists preferred. The traditional view about the origins of life had been summed up by a series of experiments carried out in the 1950s by S. L. Miller and H. C. Urey. They had assumed a primitive atmosphere on early Earth, consisting of ammonia, methane, and steam (but no oxygen – we shall come back to that). Into this early atmosphere they had introduced ‘lightning’ in the form of electrical discharges, and produced a ‘rich brew’ of organic chemicals, much richer than had been expected, including quite a large yield of amino acids, the building blocks of proteins. Somehow, from this rich brew, the ‘molecules of life’ formed. Graham Cairns-Smith thought this view nonsense because DNA molecules are extremely complicated, too complicated architecturally and in an engineering sense to have been produced accidentally, as the Miller-Urey reactions demanded. In one celebrated part of his book, he calculated that for nucleotides to have been invented, something like 140 operations would all have needed to evolve at the same time, and that the chances of this having occurred were one in 10¹⁰⁹. Since this is more than the number of electrons in the universe, calculated as 10⁸⁰, Cairns-Smith argued that there has simply not been enough time, or that the universe is not big enough, for nucleotides to have evolved in this way.7
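To see the scale of that comparison, the two figures can be set side by side; the ‘one trial per electron’ framing below is only an illustrative way of reading the numbers, not a calculation taken from Cairns-Smith’s book.

```latex
% Scale comparison using the two figures quoted in the text.
\[
  P(\text{all}\ \sim 140\ \text{steps arising together}) \approx 10^{-109},
  \qquad
  N_{\text{electrons in the universe}} \approx 10^{80}
\]
% Even if every electron in the universe stood for one independent trial,
% the shortfall would still be twenty-nine orders of magnitude:
\[
  \frac{10^{109}}{10^{80}} = 10^{29}.
\]
```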
His own version was startlingly different. He argued that evolution arrived before life as we know it, that there were chemical ‘organisms’ on earth before biochemical ones, and that they provided the architecture that made complex molecules like DNA possible. Looking about him, he saw that there are, in nature, several structures that, in effect, grow and reproduce – the crystal structures in certain clays, which form when water reaches saturation point. These crystals grow, sometimes break up into smaller units, and continue growing again, a process that can be called reproduction.8 Such crystals form different shapes – long columns, say, or flat mats – and since these have formed because they are suited to their micro-environments, they may be said to be adapted and to have evolved. No less important, the mats of crystal can form into layers that differ in ionisation, and it was between these layers, Cairns-Smith believed, that amino acids may have formed, in minute amounts, created by the action of sunlight, in effect photosynthesis. This process would have incorporated carbon atoms into inorganic organisms – there are many substances, such as titanium dioxide, that under sunshine can fix nitrogen into ammonia. By the same process, under ultraviolet light, certain iron salts dissolved in water can fix carbon dioxide into formic acid. The crystal structure of the clays was related to their outward appearance (their phenotype), all of which would have been taken over by carbon-based structures.9 As Linus Pauling’s epic work showed, carbon is amazingly symmetrical and stable, and this is how (and why), Cairns-Smith said, inorganic reproducing organisms were taken over by organic ones.
It is a plausible and original idea, but there are problems. The next step in the chain of life was the creation of cellular organisms, bacteria, for which a skin was required. Here the best candidates are what are known as lipid vesicles, tiny bubbles that form membranes automatically. These chemicals were found naturally occurring in meteorites, which, many people argue, brought the first organic compounds to the very young Earth. On this reasoning then, life in at least some of its elements had an extraterrestrial beginning. Another problem was that the most primitive bacteria, which are indeed little more than rods or discs of activity, surrounded by a skin, are chiefly found around volcanic vents on the ocean floor, where the hot interior of the earth erupts in the process that, as we have already seen, contributes to sea-floor spreading (some of these bacteria can only thrive in temperatures above boiling point, so that one might say life began in hell). It is therefore difficult to reconcile this with the idea that life originally began as a result of sunlight acting on clay-crystal structures in much shallower bodies of water.10
Whatever the actual origin of life (generally regarded as having occurred around 3,800 million years ago), there is no question that the first bacterial organisms were anaerobes, operating only in the absence of oxygen. Given that the early atmosphere of the earth contained very little or no oxygen, this is not so surprising. Around 2,500 million years ago, however, we begin to see in the earth’s rocks the accumulation of haematite, an oxidised form of iron. This appears to mean that oxygen was being produced, but was at first ‘used up’ by other minerals in the world. The best candidate for an oxygen-producer is a blue-green bacterium that, in shallower reaches of water where the sun could get at it, used light acting on chlorophyll to break carbon dioxide down into carbon, which it utilised for its own purposes, and oxygen – in other words, photosynthesis. For a time the minerals of the earth soaked up what oxygen was going (limestone rocks captured oxygen as calcium carbonate, iron rusted, and so on), but eventually the mineral world became saturated, and after that, over a thousand million years, billions of bacteria poured out tiny puffs of oxygen, gradually transforming the earth’s atmosphere.11
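Put in conventional terms, the overall reaction being sketched there is the familiar summary equation for photosynthesis; the balanced form below is the standard textbook version, not a formula given in the text itself.

```latex
% Standard summary equation for oxygenic photosynthesis (textbook form;
% not taken from the text itself).
\[
  6\,\mathrm{CO_2} + 6\,\mathrm{H_2O}
  \;\xrightarrow{\;\text{light, chlorophyll}\;}\;
  \mathrm{C_6H_{12}O_6} + 6\,\mathrm{O_2}
\]
```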