Stranger Than We Can Imagine
Our personal realities, then, were relative. We simply did not have anything absolute to orientate ourselves to. The closest thing the people of the northern hemisphere had to a fixed point in their lives was Polaris, the North Star, the only point in the heavens that remains fixed over the span of a human life. And even Polaris wobbles a little.
We might not like this. We might curse relativity, and crave an absolute position. But that doesn’t change the fact that we do not have one.
Postmodernism was not some regrettable intellectual folly, but an accurate summation of what we had learnt. Even if it was intellectual quicksand from which escape did not appear possible. And which nobody seemed to like very much.
By the early twenty-first century the entire edifice of postmodernism had become routinely rejected. That, unfortunately, tended to include all the understanding that led up to it. Our current ideology stresses that of course there is an absolute. Of course there is truth. Richard Dawkins makes this argument when he says that ‘We face an equal but much more sinister challenge from the left, in the shape of cultural relativism – the view that scientific truth is only one kind of truth and it is not to be especially privileged.’ Or as Pope Benedict XVI said in his pre-conclave speech in 2005, ‘Today, a particularly insidious obstacle to the task of education is the massive presence in our society and culture of that relativism which, recognising nothing as definitive, leaves as the ultimate criterion only the self with its desires. And under the semblance of freedom it becomes a prison for each one, for it separates people from one another, locking each person into his or her own ego.’ As Martin Luther King put it, ‘I’m here to say to you this morning that some things are right and some things are wrong. Eternally so, absolutely so. It’s wrong to hate. It always has been wrong and it always will be wrong.’ Or to quote the British philosopher Roger Scruton, ‘In argument about moral problems, relativism is the first refuge of the scoundrel.’ The existence of absolute truth has also been declared by neoliberalists and socialists, by terrorists and vigilantes, and by scientists and hippies. The belief in certainty is a broad church indeed.
All these people disagree on what form this absolutism takes, unfortunately. But they’re pretty sure that it exists.
This faith in absolute certainty is not based on any evidence for the existence of certainty. It can sometimes appear to stem from a psychological need for certainty which afflicts many people, particularly older men. Cultural debate in the early twenty-first century has, as a result, descended into a War of the Certain. Different factions, all of whom agree about the existence of absolute truth, are shouting down anyone who has a different definition of that absolute truth.
Fortunately, true absolutism is rare. Most people, scientists and non-scientists alike, unconsciously adopt a position of multiple-model agnosticism. This recognises that we make sense of the world by using a number of different and sometimes contradictory models. A multiple-model agnostic would not say that all models are of equal value, because some models are more useful than others, and the usefulness of a model varies according to context. They would not concern themselves with infinite numbers of interpretations as that would be impractical, but they understand that there is never only one interpretation. Nor would they agree that something is not ‘real’ because our understanding of it is a cultural or linguistic construct. Things can still be real, even when our understanding of them is flawed. Multiple-model agnostics are, ultimately, pretty loose. They rarely take impractical, extreme positions, which may be why they do not do well on the editorial boards of academic postmodern journals.
Multiple-model agnosticism is an approach familiar to any scientist. Scientists do not possess a grand theory of everything, but they do have a number of competing and contradictory models, which are valid at certain scales and in certain circumstances. A good illustration of this point is the satellite navigation device in a car. The silicon chip inside it utilises our understanding of the quantum world; the GPS satellite it relies on to find its position was placed in orbit by Newtonian physics; and that satellite relies on Einstein’s theory of relativity in order to be accurate. Even though the quantum, Newtonian and relativity models all contradict each other, the satnav still works.
Scientists generally don’t lose too much sleep over this. A model is, by definition, a simplified incomplete version of what it describes. It may not be defendable in absolutist terms, but at least we can find our route home.
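To give a sense of the scale of that relativity correction: the clocks aboard GPS satellites run fast relative to clocks on the ground by a few tens of microseconds a day, and left uncorrected that drift would translate into kilometres of positional error. The rough sketch below uses approximate textbook values for a GPS satellite's speed and orbital radius (figures assumed here for illustration, not taken from this book) to reproduce the standard estimate.

```python
# A back-of-the-envelope sketch (illustrative only) of why GPS must account for relativity.
# The constants below are approximate textbook figures for a GPS satellite.

C = 299_792_458.0        # speed of light, m/s
GM = 3.986004e14         # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6        # Earth's radius, m
R_SAT = 2.6571e7         # GPS orbital radius, m
V_SAT = 3.874e3          # GPS orbital speed, m/s
SECONDS_PER_DAY = 86_400

# Special relativity: the fast-moving satellite clock runs slow.
special = -(V_SAT**2) / (2 * C**2)

# General relativity: the satellite clock, higher in Earth's gravity well, runs fast.
general = GM / C**2 * (1 / R_EARTH - 1 / R_SAT)

drift_us_per_day = (special + general) * SECONDS_PER_DAY * 1e6
print(f"Net clock drift: about {drift_us_per_day:.0f} microseconds per day")
# Prints roughly 38-39 microseconds per day; multiplied by the speed of light,
# that would mean kilometres of accumulated positional error if never corrected.
```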
There is still, however, a tendency to frame these contradictory models as part of a hidden absolute, perhaps in order to avoid the whiff of postmodernism.
The absolutist approach to the contradictory nature of scientific models is to say that while all those models are indeed flawed, they will be superseded by a grand theory of everything, a wonder theory that does not contain any paradoxes and which makes sense of everything on every scale. The 2005 book about the quest for a Theory of Everything by the Canadian science journalist Dan Falk was called Universe on a T-Shirt, due to the belief that such a grand theory would be concise enough, like the equation E=mc², to be printed on a T-shirt.
To a multiple-model agnostic, this idea is a leap of faith. It is reminiscent of Einstein’s mistaken belief that quantum uncertainty must be wrong, because he didn’t like the thought of it. Of course if such a theory were found, multiple-model agnostics would be out celebrating with everyone else. But until that day comes it is not justifiable to assume that such a theory is out there, waiting. It is a call to an external ideal that is currently not supported by the data. A scientist who says that such a theory must exist is displaying the same ideological faith as someone who says God must exist. Both could be brilliant, but presently we should remain relativist enough to recognise them as unproven maybes.
In 1981 the American pop artist Andy Warhol began a series of paintings of the dollar sign. Warhol was a commercial artist from Pittsburgh who found fame in the 1960s with his gaudily coloured, mass-produced screen-prints of cultural icons. His most famous work, a series of prints of a Campbell’s soup can, probably captured the essence of the postwar Golden Age better than any other work of visual art.
The dollar sign paintings Warhol began in the 1980s were not the last work he did. He died in 1987, and he continued to mass-produce typically Warholian canvases until the end. It is tempting, however, to see the dollar sign as the moment he finally ran out of ideas. With the exception of an increasing preoccupation with death, there is little in his 1980s work that was new. The dollar sign paintings seem in many ways to be the conclusion of his life’s work.
They were big paintings, over two metres tall and just short of two metres wide. Each single dollar canvas could take up an entire wall in a gallery. The experience of walking into a white space, empty except for huge, bright dollar signs, is an uncomfortable one. Initially it is tempting to dismiss them as superficial, but there remains a lingering doubt that maybe Warhol had a point. Perhaps there really was nothing else he could do but paint the dollar sign as big and bright as he could. Perhaps the neoliberalists were correct and their dollar god was the only genuine power in the world. Maybe we had had a valid omphalos all along.
Money, it seemed in the 1980s, was the only thing solid enough for people to orientate themselves by. Individualism and Do What Thou Wilt had become the fundamental principle of living, so the power to achieve your desires became all-important. That power was most potently distilled in the form of money. It was money that allowed you to do what you wanted, and a lack of money that stopped you. It did not matter that a world where the dollar sign was the only true subject of worship was fundamentally grim. In a postmodern culture, all such judgement calls were subjective. Our artists, thinkers and scientists were free to offer up alternatives to the court of public opinion. The fact that they failed to do so seemed telling.
In 1992, the American political scientist Francis Fukuyama published his most influential book, The End of History and the Last Man. Fukuyama argued that, with the collapse of the Soviet Union, the neoliberal argument had won. Capitalism was the only option on the table and liberal democracies were the only justifiable form of state. Fukuyama claimed that we had reached the predetermined, final form of society, an argument that was essentially teleological and therefore religious in nature. He wrote in a deliberately prophetic, evangelical tone, proclaiming the ‘Good News’ of the eternal triumph of the capitalist paradise.
In this context, Warhol’s paintings of dollar signs made complete sense. Was this, then, the endpoint of the twentieth century? Was this how our story was going to end?
Fukuyama was, fortunately, entirely wrong, as he would now be the first to admit. He split from the neoconservative movement over the US invasion of Iraq, a conflict he was initially in favour of, and voted for Barack Obama in 2008.
The individualism that had fuelled the neoliberal triumph was not the end point to humanity that people like Fukuyama or Margaret Thatcher once believed it to be. It was a liminal period. The twentieth century had been the era after one major system had ended but before the next had begun. Like all liminal periods it was a wild time of violence, freedom and confusion, because in the period after the end of the old rules and before the start of the new, anything goes.
The coming era would keep the individual freedom that had been so celebrated in the twentieth century, but it would wed it to the last thing that the likes of The Rolling Stones wanted. Individual freedom was about to connect to the one thing it had always avoided. Freedom was about to meet consequence, and a new period of history was about to begin. In the research centres of Silicon Valley, a feedback loop was being built.
A Color Run participant in Barcelona taking a photo with a selfie stick, 2014 (Artur Debat/Getty)
FIFTEEN: NETWORK
A planet of individuals
The twenty-first century began over a twenty-four-hour period, which started at 11 a.m. on 31 December 1999. Or at least, that’s how it appeared to those in Britain watching the BBC’s coverage of the global New Year celebrations.
At 11 a.m. GMT it was midnight in New Zealand, which marked the occasion with a celebratory firework display in Auckland. This was beamed around the world by orbiting communication satellites, technology that was then about thirty-five years old and already taken for granted. Two hours later, a spectacular display over Sydney Harbour marked eastern Australia’s entrance into the twenty-first century. The television coverage continued as the hours passed and a procession of major cities from east to west left the twentieth century behind.
In the Hebrew calendar the date was 22 Teveth 5760. Under the Islamic calendar it was 23 Ramadan 1420. To the UNIX computer operating system, it was 946598400. The significance of the moment was only a product of the perspective used to define it.
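Those labels are easy to check. The minimal sketch below, using Python’s standard library, is an illustrative aside rather than anything from the original text; it shows the same moment wearing its different names.

```python
from datetime import datetime, timezone, timedelta

# Midnight UTC at the start of 31 December 1999, expressed as a Unix timestamp.
utc_midnight = datetime(1999, 12, 31, tzinfo=timezone.utc)
print(int(utc_midnight.timestamp()))         # 946598400, the figure quoted above

# Midnight in Auckland (New Zealand daylight time, UTC+13) on 1 January 2000
# is the same instant as 11 a.m. GMT on 31 December 1999.
nz_midnight = datetime(2000, 1, 1, tzinfo=timezone(timedelta(hours=13)))
print(nz_midnight.astimezone(timezone.utc))  # 1999-12-31 11:00:00+00:00
```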
As we noted at the start, New Year’s Day 2000 was not technically the beginning of the new millennium. That honour should have come a year later, on 1 January 2001. A few people were bothered by this, and wrote to newspapers, but were largely ignored. New Year’s Eve 1999 was going to mark the end of the twentieth century, because that’s how the planet of individuals wanted it. They had spent the previous seventeen years listening to Prince singing that he wanted to ‘party like it’s 1999’. They were impatient to party that way, too. This was a world where the population drove cars and understood the strange appeal of watching all the digits in the milometer change at the same time. Who wanted to wait another year? Seeing 1999 turn into 2000 was clearly more fun than watching 2000 turn into 2001. People no longer recognised the claims to authority that institutions such as the Royal Observatory at Greenwich had once had. The twenty-first century began at midnight on 31 December 1999 by mutual consent. That’s what people wanted.
In London, to the crowd enjoying a drunken singalong on the banks of the River Thames, the twenty-first century was a blank slate and full of potential. It still felt clean at that point, unsullied by what was to come. They had no inkling of 9/11, the War on Terror, or the coming global financial crash. Among the many songs that those revellers put their heart and soul into was a rendition of the classic Frank Sinatra standard ‘My Way’. That song, perhaps more than any other, could symbolise the twentieth century. The lyrics are based around words such as ‘I’ or ‘me’, never ‘we’ or ‘us’. I’ll make it clear, he says, I’ll state my case, of which I’m certain. It is far from the only song to base its lyrics on ‘me, me, me’, of course, but the proud manner of Sinatra’s delivery marks it out as something special. When it was released it sounded heroic, a statement of pride in one man’s ability to face life on his own terms. And yet, as we entered the twenty-first century, it was becoming possible to see that lyric in a different light. It is not the song of a man who understands himself to be part of something larger, or connected to a wider context in any meaningful way. It’s the song of an isolated man who has spent his entire life attempting to force his own perspective on the world.
In the twentieth century, ‘My Way’ sounded glorious. It was a song frequently played at funerals to celebrate a life well lived. But by the twenty-first century, had it started to sound a little tragic?
The live media coverage of that event was not just a story of celebration and drunk people in fountains. Another story was unfolding at the same time, regarding something called the Millennium Bug. There was concern about how computer systems would deal with the date change.
When the earliest computers were built, computer memory was extremely expensive, so programmers employed neat little tricks to use the memory they had more efficiently. One such fudge involved not using the full four digits of a year, such as ‘1963’, but instead just storing the last two digits, ‘63’, and assuming that the year referred to the twentieth century. As computers developed over the 1970s and 1980s the cost of storing those two extra digits became negligible, and tricks like this became unnecessary. But newer computing systems often had the legacy of older hardware and software in their DNA, and buried deep within the core code of many important systems was the assumption that after 1999 would come the year 1900.
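The shape of the flaw is simple enough to sketch. The fragment below is a hypothetical illustration of the old two-digit convention, invented for this explanation rather than drawn from any real system.

```python
def expand_year(yy: str) -> int:
    """Legacy-style date handling: assume every two-digit year belongs to the 1900s."""
    return 1900 + int(yy)

account_opened = expand_year("63")   # 1963, as the original programmers intended
last_payment = expand_year("00")     # 1900, not 2000

years_active = last_payment - account_opened
print(years_active)  # -63: the account appears to have been paid 63 years before it was opened
```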
Quite how significant this problem was was hard to tell. Computers had come from nowhere and, over the course of a couple of decades, they had taken over the running of pretty much everything. No one knew for sure exactly what the impact of the Millennium Bug would be. One potential scenario was doomsday. Planes would fall from the sky, the entire global financial system would cease to function, nuclear power stations would go into meltdown and mankind would be thrown back to the Stone Age. Another scenario was that nothing much would happen and the whole thing was a scam by programmers who wanted to up their overtime payments. Such a broad range of speculation was immensely attractive to the news media, who quickly rechristened the problem the ‘Y2K Bug’, on the grounds that it sounded more computery.
As a result of the concern, governments and companies spent a great deal of money updating their computer systems ahead of time, with some estimates placing the total cost as high as $600 billion. When the year 2000 arrived without any significant problems there was great relief, and a nagging suspicion that the overtime-rich computer engineers had pulled a fast one.
What the Y2K Bug did do was force people to confront the extent to which they had become dependent on computers. The shift from a pre-digital to a post-digital society had been swift and total, and few people really noticed that it was happening until it was too late. There had been no time, therefore, to think through the most significant aspects of that revolution: all those computers were connected.
*
In the early twentieth century young children such as Wernher von Braun, Jack Parsons and Sergei Korolev had been shaped by the heroic, individualist science fiction of the age. Raised on Flash Gordon and Jules Verne, they dreamt of journeying into space and they dedicated their lives to realising that dream. The task was difficult, and only achieved through the catalyst of global war. The space rocket was born alongside its twin, the intercontinental nuclear missile. This had geopolitical consequences.
Before Hiroshima, the political and military game had been akin to chess. The king stayed at the back of the battlefield, as far away from the conflict as possible. To win the game it was necessary to take the opposing king, and all the other chess pieces were dedicated to making sure that this did not happen. Once the king was lost, the game was over. It was not necessary to completely wipe out your opponent in order to win, only to behead their master.
In the Cold War, different rules applied. The king could be taken immediately, during the first move of the game. It did not matter how far away from the frontline they hid because nuclear warheads were attached to rockets powerful enough to reach anywhere on the globe. It was just as easy to wipe out the White House or Red Square as it was to wipe out anywhere else. The game could be lost with entire armies still intact.
A rethink of the hierarchical power structure was called for. Previously, the king or tsar issued orders, which were passed down the chain of command. Their subjects carried out those orders and reported on their progress. That information was passed up the chain of command. Information could flow up or it could flow down, but it didn’t do anything else.