Stranger Than We Can Imagine


by John Higgs


  In 1932 the American actress Tallulah Bankhead shocked many when she said, ‘I’m serious about love … I haven’t had an affair for six months. Six months! … The matter with me is, I WANT A MAN! … Six months is a long, long while. I WANT A MAN!’ Those offended by female sexuality had to come to terms with both the fact that she made this statement, which would have been unthinkable a generation earlier, and the fact that Motion Picture magazine saw fit to print it.

  The 1920s had been the jazz age, a golden era for the wealthy that in retrospect sat in stark contrast to the world war that preceded it and the grinding global depression that followed. Its archetypal image was that of a flapper, a woman with pearls, a straight dress and a short bob, dancing joyfully and freely, kicking her legs and enjoying herself unashamedly. Such a simple image came to define the era because it was so unprecedented. Such public behaviour by a high-status female would have been unacceptable at any earlier point in the Christian era.

  Flappers were not just accepted, they were celebrated. The black Missouri-born dancer Josephine Baker might have been ignored in her homeland due to her colour, but that was not the case in Europe. Her performances in Paris, dressed in nothing more than a skirt made from feathers or bananas, were wild and blatantly sexual and also incredibly funny. They made her one of the most celebrated stars of her day. Baker loved animals and surrounded herself with exotic creatures, including a snake, a chimpanzee and a pet cheetah named Chiquita. She was showered with gifts from wealthy admirers, and claimed to have received approximately fifteen hundred marriage proposals. Following her death in 1975 she became the first American woman to receive full French military honours at her funeral, thanks to her work with the Resistance during the Second World War.

  Jazz music and dances such as the Charleston, the Black Bottom or the Turkey Trot were seen as modern and liberating. Dresses became simpler, and lighter. Skirts became shorter, reaching the previously unimaginable heights of the knee. The amount of fabric in the average dress fell from almost twenty yards before the Great War, to seven. The fashionable body shape was flat-chested and thin, a marked contrast to previous ideals of female beauty. In the nineteenth century lipstick had been associated with prostitutes or, even worse, actresses. By the 1920s it was acceptable for all, and cupid-bow lips were all the rage. In the words of the American journalist Dorothy Dunbar Bromley, women were ‘moved by an inescapable inner compulsion to be individuals in their own right’.

  The credit for the power of the L’Âge d’or toe-sucking scene must go to Buñuel. Dalí was not comfortable with female sexuality. His personal sexuality was focused more on voyeurism and masturbation. He was devoted to his wife Gala, but preferred her to sleep with other men. ‘Men who fuck easily, and can give themselves without difficulty, have only a very diminished creative potency,’ he said. ‘Look at Leonardo da Vinci, Hitler, Napoleon, they all left their mark on their times, and they were more or less impotent.’ Dalí was reportedly a virgin on his wedding night, due to his fear of the vagina, and frequently linked seafood with female genitalia or sexuality in his work. His famous 1936 sculpture Lobster Telephone, which was a telephone with a plastic lobster attached to the handle, was also known by the alternative name of The Aphrodisiac Telephone.

  Thanks to his cartoon moustache, his stream-of-consciousness declarations of his own genius and his love of luxury and power, it is tempting to see Dalí’s public persona as some form of calculated performance art. But from descriptions of him by those in his inner circle, there does not appear to have been a private Dalí which differed from the public one. ‘Every morning upon awakening, I experience a supreme pleasure: that of being Salvador Dalí, and I ask myself, wonderstruck, what prodigious thing will he do today, this Salvador Dalí,’ he once said. Very few people would allow themselves to say such a sentence out loud.

  Dalí did not have the self-consciousness filter that most people employ to present a more socially acceptable image of themselves. To use Freud’s model, he lacked the super-ego to stop the id pouring out of him. ‘I am surrealism,’ he once said, as if his ego was unimportant relative to the work that came through him. Freud was certainly impressed. ‘I have been inclined to regard the Surrealists as complete fools, but that young Spaniard with his candid, fanatical eyes and his undeniable technical mastery has changed my estimate,’ he wrote in 1939. Others were less impressed. As Henry Miller put it, ‘Dalí is the biggest prick of the twentieth century.’

  Freud’s model of the id, ego and super-ego was originally only intended to describe individuals. But there is a tradition of using Freudian ideas to help illuminate larger changes in society, for example in works like Wilhelm Reich’s The Mass Psychology of Fascism (1933). Freud’s psychological models can be used alongside a sociological concept known as ‘mass society’, which describes how populations of isolated individuals can be manipulated by a small elite. The idea of the ‘mass media’ is related to that of the mass society. Methods of controlling or guiding mass society were of great interest to political leaders.

  An example of the subconscious manipulation of mass society was the twisting of people’s reaction to different ethnicities in the 1930s. When political leaders promoted hatred of others, it created one of those rare instances that appealed to both the id and the super-ego at the same time. It was possible to unleash the barbaric, destructive energies of the id while at the same time reassuring the super-ego that you were loyally obeying your masters. With the id and the super-ego in rare agreement, the ego could find it hard to resist the darkness that descended on society.

  When the wild energies of the id were manipulated in a precise way, leaders could command their troops to organise genocides. The word ‘genocide’ was coined in 1944 to describe a deliberate attempt to exterminate an entire race. It hadn’t existed before. There had been no need for it before the twentieth century. Exact numbers are difficult to pin down, but most estimates say that Stalin was responsible for more deaths than Hitler, and that Mao Zedong was responsible for more deaths than Hitler and Stalin combined. Men such as Pol Pot, Saddam Hussein and Kim Il Sung all played a part in ensuring that the twentieth century would forever be remembered as a century of genocide.

  The off-hand manner in which genocide was shrugged off is chilling. ‘Who still talks nowadays of the extermination of the Armenians?’ Hitler told Wehrmacht commanders in a speech delivered a week before the invasion of Poland. Hitler was aware of how the global community had either accepted the Armenian genocide, in which the Ottoman government killed up to a million and a half Armenians between 1915 and 1923, or had turned a blind eye to it. As Stalin was reported to have remarked to Churchill, ‘When one man dies it is a tragedy, when thousands die it’s statistics.’

  Modern technology made all this possible. Hitler kept a portrait of the American car manufacturer Henry Ford on the wall of his office in Munich. Ford was a notorious anti-Semite who had developed assembly-line techniques of mass production based on Chicago slaughterhouses. The application of a modern, industrialised approach to killing was one factor which differentiated modern genocides from the colonisation of the Americas and other such slaughters of the past. But the availability of techniques to industrialise mass killing does not in itself explain why such events occurred.

  In 1996 the president of Genocide Watch, Gregory Stanton, identified eight stages that occur in a typical genocide: Classification, Symbolisation, Dehumanisation, Organisation, Polarisation, Preparation, Extermination and Denial. The first of these, Classification, was defined as the division of people into ‘us and them’. This was something that the particular character of the twentieth century was remarkably suited towards. It was a side effect of both nationalism and individualism. Focusing on the self creates a separation of an individual from the ‘other’ in much the same way as identifying with a flag does.

  Genocides arose during the perfect storm of technology, nationalism, individualism and the political rise of psychopaths. They revealed that humans were not the rational actors they prided themselves on being, dutifully building a better world year after year. Rationality was the product of the conscious mind, but that mind rested on the irrational foundations of the unconscious. The individual was more complicated than originally assumed. If there was some form of certainty to be found in the post-omphalos world, it wouldn’t be found in the immaterial world of the mind.

  The next question, then, is whether such certainty could be found in the physical world.

  Erwin Schrödinger, c.1950 (SSPL/Getty)

  SIX: UNCERTAINTY

  The cat is both alive and dead

  On the last day of the nineteenth century, six hours before midnight, the British academic Bertrand Russell wrote to a friend and told her that ‘I invented a new subject, which turned out to be all mathematics for the first time treated in its essence.’ He would later regard this claim as embarrassingly ‘boastful’.

  Russell was a thin, bird-like aristocrat who compensated for the frailness of his body through the strength of his mind. He became something of a national treasure during his long life, due to his frequent television broadcasts and his clearly argued pacifism. His academic reputation came from his attempt to fuse logic and mathematics. Over the course of groundbreaking books such as The Principles of Mathematics (1903) and Principia Mathematica (1910, co-written with Alfred North Whitehead), Russell dedicated himself and his considerable brain to becoming the first person to prove that 1 + 1 = 2.

  Russell had a solitary childhood. He was brought up in a large, lonely house by his stern Presbyterian grandmother following the death of his parents, and he lacked children of his own age to play with. At the age of eleven, after his older brother introduced him to Euclidian geometry, he developed a deep love for mathematics. In the absence of other children he became absorbed in playing with numbers.

  Yet something about the subject troubled him. Many of the rules of mathematics rested on assumptions which seemed reasonable, but which had to be accepted on faith. These assumptions, called axioms, included laws such as ‘For any two points of space there exists a straight line that connects those two points’ or ‘For any natural number x, x + 1 is also a natural number.’ If those axioms were accepted, then the rest of mathematics followed logically. Most mathematicians were happy with this situation but, to the mind of a gifted, lonely boy like Russell, it was clear that something was amiss. He was like the child in The Emperor’s New Clothes, wondering why everybody was ignoring the obvious. Mathematics, surely, needed stronger foundations than common-sense truisms. After leaving home and entering academia, he embarked on a great project to establish those foundations through the strict use of logic. If anything could be a system of absolute clarity and certainty, then surely that would be logically backed mathematics.

  In the real world, it is not too problematic to say that one apple plus another apple equals two apples. Nor is there much argument that if you had five scotch eggs, and ate two of them, then you would have three scotch eggs. You can test these statements for yourself the next time you are in a supermarket. Mathematics, however, abstracts quantities away from real-world things into a symbolic, logical language. Instead of talking about two apples, it talks about something called ‘2’. Instead of three scotch eggs, it has the number ‘3’. It is not possible to find an actual ‘2’ or a ‘3’ out in the real world, even in a well-stocked supermarket. You’ll find a squiggle of ink on a price tag that symbolically represents those numbers, but numbers themselves are immaterial concepts and hence have no physical existence. The statement 1 + 1 = 2, then, says that one immaterial concept together with another immaterial concept is the same as a different immaterial concept. When you remember that immaterial concepts are essentially things that we’ve made up, the statement 1 + 1 = 2 can be accused of being arbitrary. Russell’s project was to use logic to prove beyond argument that 1 + 1 = 2 was not an arbitrary assertion, but a fundamental truth.

  He nearly succeeded.

  His approach was to establish clear definitions of mathematical terms using what logicians then called classes, but which are now better known as sets. A set is a collection of things. Imagine that Russell wanted a logical definition of a number, such as 5, and that he also had a vehicle with a near-infinite storage capacity, such as Doctor Who’s TARDIS. He could then busy himself travelling around the world in his TARDIS looking for examples of five objects, such as five cows, five pencils or five red books. Every time he found such an example, he would store it in his TARDIS and continue with his quest. If he successfully found every example of a five in the real world then he would finally be in a position to define the immaterial concept of ‘5’. He could say ‘5’ is the symbol that represents the set of all the stuff he had collected in his magic blue box.

  Producing a similar definition for the number 0 was more complicated. He could hardly travel the world filling his TARDIS with every example of no apples or no pencils. Instead, he defined the number 0 as the set of things that were not identical to themselves. Russell would then fill his TARDIS with every example of a thing that was not the same as itself and, as there aren’t any such things in the world, he would eventually return from this fruitless quest with an empty TARDIS. Under the rules of logic there is nothing that is not identical to itself, so this was a valid representation of ‘nothing’. In mathematical terms, he defined the number 0 as the set of all null sets.

  If Russell could use similar, set-based thinking to produce a clear definition of ‘the number 1’ and the process ‘plus 1’, then his goal of being able to prove beyond doubt that 1 + 1 = 2 would finally be achievable. But there was a problem.
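  The set-based definitions sketched above are often called the Frege–Russell definitions of the numbers. In modern notation they can be written roughly as follows; this is a simplified sketch, not Russell’s own formalism from Principia Mathematica.

```latex
% 0 is the class of all empty classes; the successor n + 1 is the
% class of all classes formed by adding one new element to a member of n.
\[
  0 = \{\, x \mid x = \varnothing \,\}
\]
\[
  n + 1 = \{\, x \cup \{y\} \mid x \in n,\ y \notin x \,\}
\]
% Under these definitions, 1 is the class of all one-element classes,
% 2 is the class of all pairs, and 1 + 1 = 2 becomes something that
% can be proved from logic rather than assumed as an axiom.
```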

  The problem, now known as Russell’s paradox, involved the set of all sets that did not contain themselves. Did that set contain itself? According to the rules of logic, if it did then it didn’t, but if it didn’t, then it did. It was a similar situation to a famous Greek contradiction, in which Epimenides the Cretan said that all Cretans were liars.
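  In modern notation, the paradox concerns the set R of all sets that are not members of themselves; asking whether R belongs to itself produces a contradiction either way.

```latex
% Russell's paradox: define R as the set of all sets that do not
% contain themselves, then ask whether R contains itself.
\[
  R = \{\, x \mid x \notin x \,\}
  \qquad\Longrightarrow\qquad
  R \in R \;\leftrightarrow\; R \notin R
\]
```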

  At first glance, this may not appear to be an important paradox. But that wasn’t the point. The problem was that a paradox existed, and the goal of rebuilding mathematics on the bedrock of logic was that it would not contain any paradoxes at all.

  Russell went back to first principles, and a number of new definitions, arguments and fudges were proposed to avoid this problem. Yet every time he built up his tower of mathematical logic, another problem revealed itself. It felt like paradoxes were unavoidable aspects of whatever self-contained system mathematicians produced. Unfortunately, that turned out to be the case.

  In 1931 the Austrian mathematician Kurt Gödel published what is now known as Gödel’s Incompleteness Theorem. This proved that any mathematical system based on axioms, complex enough to be of any use, would be either incomplete or not provable on its own terms. He did this by coming up with a formula which logically and consistently declared itself unprovable, within a given system. If the system was complete and consistent then that formula would immediately become a paradox, and any complete and consistent system could not contain any paradoxes. Gödel’s theorem was extremely elegant and utterly infuriating. You can imagine how mathematicians must have wanted to punch him.
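  Gödel’s self-referential formula can be sketched schematically as follows; this is a modern shorthand for his construction, not his original notation, and the full argument requires a further consistency assumption that is glossed over here.

```latex
% Goedel's construction, schematically: a sentence G in a formal
% system S that asserts its own unprovability within S.
\[
  G \;\leftrightarrow\; \neg\,\mathrm{Prov}_S(\ulcorner G \urcorner)
\]
% If S is consistent, G cannot be provable: a proof of G would make
% G false (it asserts its own unprovability), so S would prove a
% falsehood. Hence G is true but unprovable, and S is incomplete.
```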

  This did not mean that mathematics had to be abandoned, but it did mean that mathematical systems always had to make an appeal to something outside of themselves. Einstein had avoided the contradictions in the physical world by going beyond normal three-dimensional space and calling on the higher perspective of space-time. In a similar way, mathematicians would now have to appeal to a higher, external system.

  If any branch of thought was going to provide an omphalos which could act as an unarguable anchor for certainty, then common sense said that it would have been mathematics. That idea lasted no longer than the early 1930s. Common sense and certainty were not faring well in the twentieth century.

  For people with a psychological need for certainty, the twentieth century was about to become a nightmare.

  The central monster in that nightmare was a branch of physics known as quantum mechanics. This developed from seemingly innocuous research into light and heat radiation by scientists at the turn of the century, most notably Einstein and the German physicist Max Planck. This spiralled off into an intellectual rabbit hole so strange and inexplicable that Einstein himself feared it marked the end of physics as a science. ‘It was as if the ground had been pulled out from under one,’ he said, ‘with no firm foundation to be seen anywhere, upon which one could have built.’

  Einstein was not alone in being troubled by the implications of this new science. The problem was, in the words commonly attributed to the Danish physicist Niels Bohr, ‘Everything we call real is made of things that cannot be regarded as real.’ As Richard Feynman, arguably the greatest postwar physicist, later admitted, ‘I think I can safely say that nobody understands quantum mechanics.’ The Austrian physicist Erwin Schrödinger probably summed up the situation best when he said, ‘I do not like [quantum mechanics], and I am sorry I ever had anything to do with it.’ But people not liking quantum physics does not change the fact that it works. The computer technology we use every day is testimony to how reliable and useful it is, as the designs of computer chips rely on quantum mechanics.

  Quantum mechanics was entirely unexpected. Scientists had been happily probing the nature of matter on smaller and smaller scales, oblivious to what horrors awaited. Their work had proceeded smoothly down to the level of atoms. If you had a pure lump of one of the ninety-two naturally occurring elements, such as gold, and you cut that lump into two pieces and discarded one, what remained would still be a lump of gold. If you continually repeated the process you would still find yourself with a piece of gold, albeit an increasingly small one. Eventually you would be left with a piece that was only a single atom in size, but that atom would still be gold.

 
