by John Higgs
A younger child accepts their place in the family hierarchy, but as soon as they become a teenager their attention shrinks from the wider group and focuses on themselves. Every incident or conversation becomes filtered through the ever-present analysis of ‘What about me?’ Even the most loving and caring child will exhibit thoughtlessness and self-obsession. The concerns of others become minor factors in their thinking, and attempts to highlight this are dismissed by the catch-all argument, ‘It’s not fair.’ There is a neurological basis for this change. Neuroscientists report that adolescents are more self-aware and self-reflective than prepubescent children.
Aleister Crowley may have been on to something when he declared that the patriarchal age was ending and that the ‘third aeon’ we were entering would be the age of the ‘crowned and conquering child’. The growth of individualism in the twentieth century was strikingly similar to the teenage perspective.
The teenage behavioural shift should not be seen as simple rudeness or rebellion. The child’s adult identity is forged during this adolescent stage, and the initial retreat inwards appears to be an important part of the process. But something about the culture of the mid- to late twentieth century chimed with this process in a way that hadn’t happened previously. In part this was the result of demographics, as the postwar baby boom meant that there was a lot more of this generation than usual. Adolescents were foregrounded and, for the first time, named. ‘Teenager’ is a word that was first coined in the 1940s. Like the words ‘genocide’ or ‘racism’, it is surprising to find that this term did not exist earlier.
A postwar generation gap emerged. The older demographic of parents and grandparents had lived through the Second World War. They saw friends and family die, knew that everything they held dear hung in the balance, and could not look to the future with any degree of certainty. But the world was a vastly different place for their children, and they treated it as such. The long global economic boom, which lasted from the end of the Second World War to the 1970s, had begun. There were jobs for those who wanted them, and that brought money for cars and radios and consumer goods. Conscription into foreign wars such as Vietnam aside, nobody was shooting at postwar teenagers. They had no reason to worry where the next meal was coming from. This was particularly true in the United States, which had not been devastated by the war and did not need to rebuild. It was, instead, a vast landscape full of natural resources in the early stages of a golden age. Things could be fun, as the teenagers saw it, if only the old squares would just lighten up and get off their backs.
Teenagers gained a reputation for violence and juvenile delinquency. In the eyes of the older generation, they were concerned with personal gratification at the expense of any greater purpose. As the teenagers saw it, the older generation were stuck in the past and irrelevant to the modern age. They just didn’t understand. As the hippies in the 1960s would say, ‘Never trust anyone over thirty.’ The dividing line between the generations was marked by an abrupt change in dress. For men, the suits, ties and hats that had been the male uniform for generations were replaced with the more relaxed T-shirts, jeans and sneakers. If you were to show a teenager of the late twentieth century a photograph of any of the modernists, they would dismiss them as a boring old fart. That modernist may have been far more rebellious, dangerous and wild than the teenager, but their neat hair and three-piece suit would have been sufficient reason to dismiss them. What did the old culture have to tell them about life in the second half of the twentieth century anyway?
The youth culture of the 1950s was the beginning of the growth of a counterculture. It defined itself not by what it was, but by what it was not, because its purpose was to be an alternative to the mainstream. Countercultures have existed throughout history, from the followers of Socrates to the Daoist and Sufi movements, but the highly individualistic nature of the period was the perfect ecosystem for them to grow, flourish and run riot.
The counterculture historian Ken Goffman notes that, for all that countercultures may be defined by their clashes with those in power, that conflict is not what they are really about. Countercultures, he says, ‘seek primarily to live with as much freedom from constraints on individual creative will as possible, wherever and however it is possible to do so’.
Over a period of nearly forty years, starting from the mid-1950s, an outpouring of individual creativity caused the teenage counterculture to grow and mutate in thrilling and unexpected directions. Each new generation of teenagers wanted their own scene, radically different to that of their older siblings. New technology and new drugs powered a period of continual reinvention and innovation. Rock ’n’ Roll was replaced by Psychedelia, which was replaced by Punk, which was replaced by Rave. Genres of music including Disco, Hip Hop, Reggae and Heavy Metal sprang up and fed into the sense of potential that so characterised popular music in the late twentieth century. These countercultures grew and spread until they had replaced the staid musical culture that they were created to reject. As their teenage audiences grew up, Rock ’n’ Roll became the mainstream.
During this period the move towards individualism became politically entrenched. Its dominant position was cemented by the rise of Margaret Thatcher in Britain in the late 1970s. This led to arguments against individualism being rejected or attacked according to the tribal logic of politics.
Thatcher outlined her philosophy in an interview with Woman’s Own magazine, published on Hallowe’en 1987. She said, ‘I think we have gone through a period when too many children and people have been given to understand “I have a problem, it is the Government’s job to cope with it!” or “I have a problem, I will go and get a grant to cope with it!” “I am homeless, the Government must house me!” and so they are casting their problems on society and who is society? There is no such thing! There are individual men and women and there are families and no government can do anything except through people and people look to themselves first.’
Unusually, her government made a later statement to the Sunday Times in order to clarify this point. ‘All too often the ills of this country are passed off as those of society,’ the statement began. ‘Similarly, when action is required, society is called upon to act. But society as such does not exist except as a concept. Society is made up of people. It is people who have duties and beliefs and resolve. It is people who get things done. [Margaret Thatcher] prefers to think in terms of the acts of individuals and families as the real sinews of society rather than of society as an abstract concept. Her approach to society reflects her fundamental belief in personal responsibility and choice. To leave things to “society” is to run away from the real decisions, practical responsibility and effective action.’
Thatcher’s focus on the primacy of the individual as the foundation of her thinking was perfectly in step with the youth movements of her time. The main difference between Thatcher and the young was that she justified her philosophy by stressing the importance of responsibility. At first this appears to mark a clear gulf between her and the consequence-free individualism of The Rolling Stones. But Thatcher was only talking about individual personal responsibility, not responsibility for others. Personal responsibility is about not needing help from anyone else, so is essentially the philosophy of individualism restated in slightly worthier clothes.
This highlights the schizoid dichotomy at the heart of the British counterculture. It viewed itself as being stridently anti-Thatcher. It was appalled by what it saw as a hate-filled madwoman exercising power without compassion for others. It argued for a more Beatles-esque world, one where an individual’s connection to something larger was recognised. Yet it also promoted a Stones-like glorification of individualism, which helped to push British society in a more Thatcherite direction. So entrenched did this outlook become that all subsequent prime ministers to date – Major, Blair, Brown and Cameron – have been Thatcherite in their policies, if not always in their words.
This dichotomy cuts both ways. Numerous right-wing commentators have tried to argue that the phrase ‘no such thing as society’ has been taken out of context. It should in no way be interpreted to mean that Thatcher was individualist or selfish like the young, they claim, or that she thought that there was no such thing as society.
The pre-Thatcher state had functioned on the understanding that there was such a thing as society. Governments on both sides of the Atlantic had tried to find a workable middle ground between the laissez-faire capitalism of the nineteenth century and the new state communism of Russia or China. They had had some success in this project, from President Roosevelt’s New Deal of the 1930s to the establishment of the UK’s welfare state during Prime Minister Attlee’s postwar government. The results may not have been perfect, but they were better than the restricting homogeneity of life in the communist East, or the poverty and inequality of Victorian Britain. They resulted in a stable society where democracy could flourish and the extremes of political totalitarianism were unable to gain a serious hold. What postwar youth culture was rebelling against may indeed have been dull, and boring, and square. It may well have been a terminal buzz kill. But politically and historically speaking, it really wasn’t the worst.
Members of youth movements may have regarded themselves as rebels and revolutionaries, but they were no threat to the capitalist system. It did not matter if they rejected smart shoes for Adidas trainers, or suits for punk bondage trousers, or shirts for Iron Maiden T-shirts. Capitalism was entirely untroubled over whether someone wished to buy a Barry Manilow record or a Sex Pistols album. It was more than happy to sell organic food, spiritual travel destinations and Che Guevara posters to even the staunchest anti-capitalist.
Any conflict between the counterculture and the establishment occurred on the cultural level only; it did not get in the way of business. The counterculture has always been entrepreneurial. The desire of people to define themselves through newer and cooler cultures, and the fear of being seen as uncool or out of date, helped fuel the growth of disposable consumerism. The counterculture may have claimed that it was a reaction to the evils of a consumerist society, but promoting the importance of defining individual identity through the new and the cool only intensified consumer culture.
This was the dilemma that faced Kurt Cobain, the singer in the American grunge band Nirvana, in the early 1990s. A rejection of mainstream consumerist values was evident in everything he did, from the music he made to the clothes he wore. Yet none of that troubled the music industry. His music was sold to millions, as if it were no different to the music of manufactured teen bands such as New Kids on the Block. Cobain’s values were, to the industry, a selling point which increased the consumerism he was against. His concern about his increasing fame was already evident on Nirvana’s breakthrough album Nevermind, which went on to sell over 30 million copies. On the single In Bloom he attacks members of his audience who liked to sing along but didn’t understand what he was saying. By the time Nirvana released their next and final studio album In Utero, Cobain seemed defeated by this contradiction. The album opened with Cobain complaining that while teenage angst had paid off well, he was now bored and old, and it contained songs with titles such as ‘Radio Friendly Unit Shifter’. Cobain committed suicide the following year. As he wrote in his suicide note, ‘All the warnings from the punk rock 101 courses over the years, since my first introduction to the, shall we say, ethics involved with independence and the embracement of your community, has proven to be very true.’
Cobain failed to reconcile his underground anti-consumerist beliefs with the mainstream success of his music. The ‘all you need is love’ strand of counterculture thought was never able to mount a successful defence against ‘I want’ individualism. For how, exactly, could the difficult task of identifying with something larger than the self compete with the easy appeal of liberation, desire and the sheer fun of individualism? Was there a way of understanding ourselves that recognised and incorporated the appeal of individualism, but which also avoided the isolation and meaninglessness of that philosophy? Cobain, unfortunately, died believing that such a perspective was in no way possible.
The second half of the twentieth century was culturally defined by teenage individualism. But despite complaints about kids being ungrateful and selfish, the adolescent stage is a necessary rite of passage for those evolving from children to adults. Understanding the world through the excluding filter of ‘What about me?’ is, ultimately, just a phase.
The teenage stage is intense. It is wild and fun and violent and unhappy, often at the same time. But it does not last long. The ‘Thatcher Delusion’ was that individualism was an end goal, rather than a developmental stage. Teenagers do not remain teenagers for ever.
Fractal patterns formed by rivers in Greenland, photographed from space (Barcroft Media/Getty)
TWELVE: CHAOS
A butterfly flaps its wings in Tokyo
The universe, we used to think, was predictable.
We thought that it worked like a clockwork machine. After God had put it together, wound it up and switched it on, his job was done. He could relax somewhere, or behave in mysterious ways, because the universe would continue under its own steam. The events that occurred inside it would do so under strict natural laws. They would be preordained, in that they would transpire according to the inevitable process of cause and effect. If God were to switch the universe off and reset it back to its original starting state, then switch it back on again, it would repeat itself exactly. Anyone who knew how the universe worked and understood exactly what state it was in at any point would be able to work out what was coming next, and what would come after that, and so on.
That was not an idea that survived the twentieth century.
When Armstrong, Collins and Aldrin climbed into a tin can perched on top of a 111-metre-tall firecracker, it was Newton’s laws which they were trusting to get them to the moon. In all credit to Newton, the laws he discovered over 250 years earlier did the job admirably. Relativity and quantum mechanics may have shown that his laws didn’t work at the scale of the extremely small and the extremely large, but they still worked well for objects in between.
The mathematicians who performed the calculations needed to send Apollo 11 to the moon were aware that the figures they used would never be exact. They might proceed on the understanding that the total mass of the rocket was 2.8 million kilograms, or that the first-stage engines would burn for 150 seconds, or that the distance to the moon was 384,400 kilometres. These figures were accurate enough for their purposes, but they were always rough approximations. Even if those numbers were only out by a few hundred-thousandths, they would still be out. But this wasn’t a problem, because it was possible to compensate for any discrepancies between the maths and the actual voyage as the mission progressed. If the weight of the rocket was underestimated then it would travel a little faster than expected, or if the angle at which it left orbit was slightly off then it would head increasingly off-course. Mission control or the astronauts themselves would then adjust their course by a quick blast of their steering rockets, and all would be well. This made complete philosophical sense to mathematicians. If the variables in their equations were slightly out it would affect the outcome, but in ways that were understandable and easily correctable.
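To make that assumption concrete, here is a minimal sketch, in Python, of the linear world-view the Apollo planners were relying on: a tiny error in an input produces a proportionally tiny, predictable drift, and a mid-course correction cancels most of it. The velocity figure, the flight time and the correction scheme are all illustrative assumptions, not the real mission numbers.

```python
# A minimal sketch (not NASA's method) of the mathematicians' assumption:
# in a well-behaved system, a small error in the inputs produces a small,
# proportional error in the outcome, which a mid-course correction can cancel.
# All figures here are illustrative, not the real Apollo 11 numbers.

def position(velocity_km_s: float, hours: float) -> float:
    """Distance travelled under a simple constant-velocity model (km)."""
    return velocity_km_s * hours * 3600

planned_v = 1.5          # assumed cruise velocity, km/s
actual_v = 1.5003        # the same figure, out by a few hundred-thousandths
hours_to_moon = 72

planned = position(planned_v, hours_to_moon)
actual = position(actual_v, hours_to_moon)
print(f"Drift after {hours_to_moon} h: {actual - planned:.0f} km")  # grows in proportion to the error

# A correction burn that removes the velocity discrepancy halfway through
# the trip leaves only half the drift, which a second, smaller burn can
# mop up. Small cause, small effect: the errors stay correctable.
corrected = position(actual_v, hours_to_moon / 2) + position(planned_v, hours_to_moon / 2)
print(f"Drift after one mid-course correction: {corrected - planned:.0f} km")
```

The only point of the sketch is that in a system like this the drift grows in step with the error, so it can always be measured and burned away. It was exactly this assumption that Lorenz's weather models would undermine.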
That assumption lasted until 1960, when the American mathematician and meteorologist Edward Lorenz got hold of an early computer.
After he failed to convince President Eisenhower to unilaterally launch a nuclear attack on Russia, John von Neumann, the Budapest-born genius who inspired the character of Dr Strangelove, turned his attention to computers.
Von Neumann had a specific use in mind for computer technology. He believed that computer power would allow him to predict the weather, and also to control it. The weather, in his hands, would be a new form of ‘ultimate weapon’ which he would use to bury all of Russia under a new Ice Age. All the evidence suggests that von Neumann really didn’t like Russia.
He became a pioneer in the world of computer architecture and programming. He designed an early computer that first ran in 1952 and which he called, in a possible moment of clarity, the MANIAC (an acronym for Mathematical Analyzer, Numerator, Integrator and Computer). He also designed the world’s first computer virus in 1949. He was that type of guy.
His intentions for weather control beyond Russia were more altruistic. He wanted to trigger global warming by painting the polar ice caps purple. This would reduce the amount of sunlight that the ice reflected back into space, and hence warm the planet up nicely. Iceland could be as warm as Florida, he decided, which was fortunate because much of Florida itself would have been under water. Von Neumann, of course, didn’t realise this. He just thought that a hotter planet would, on balance, be a welcome and positive thing. This idea was also expressed by the British Secretary of State for the Environment, Owen Paterson, in 2013. Von Neumann’s thinking took place in the years before the discoveries of Edward Lorenz, so in his defence he cannot be said to be as crazy as Paterson.
Von Neumann died in 1957, so he did not live long enough to understand why he was wrong. Like many of the scientists involved in the development of America’s nuclear weapon, he had been scornful of the idea that radiation exposure might be harmful. And also like many of those scientists, he died prematurely from an obscure form of cancer.
At the time, the idea of accurate weather prediction, and ultimately weather control, did not appear unreasonable. Plenty of natural systems were entirely predictable, from the height of the tides to the phases of the moon. These could be calculated with impressive accuracy using a few equations. The weather was more complicated than the tides, so it clearly needed more equations and more data to master it. This was where the new computing machines came in. With a machine to help with the extra maths involved, weather prediction looked like it should be eminently achievable. This was the reason why Edward Lorenz, a Connecticut-born mathematician who became a meteorologist while serving in the US Army Air Corps during the Second World War, sat down at an early computer and began modelling weather.