Digital technology is not durable: batteries run out, chips need fairly narrow conditions to survive, circuitry suffers from static, from dust, from moisture. Printed paper books are fragile, but not nearly as fragile in some ways as the ebooks presently being touted to replace them. Archiving artworks electronically implies the continued existence not only of digital society but of the formats and knowhow to preserve the electronic infrastructure. Analogue systems have their place. Less complex systems can be more effective in a given situation, and sometimes it’s not helpful to get real-time updates and try to centralize decision-making.
The most acute statement of this truth was probably the 2002 Millennium Challenge, a US military exercise pitting the tiny tinpot nation of Red against the mighty forces of Blue. The challenge was planned in 2000, and Red was no doubt conceived as a generic Middle Eastern nation with a nod towards Iran, though by the time the exercise took place it was pretty clear that it was a dry run for a future invasion of Iraq. Saddam Hussein, it would seem, was the only person not really paying attention, because if he had been, the face of March 2003 might have been rather different.
Commanded by a retired Marine officer, Lieutenant General Paul van Riper, Red eschewed electronic communications in favour of motorcycle couriers, sacrificing speed for security. Van Riper launched a surprise attack against Blue’s fleet, coordinated apparently by an audio signal concealed in the muezzin’s call to prayer, and sank most of Blue’s vessels. The situation was so desperate that after a while the Blue fleet was ‘refloated’ and van Riper himself was essentially removed from command so that Blue could win the second time around.
Blue’s systemic problem, it later emerged, was literally that it had too much information. Van Riper, knowing he couldn’t hope to communicate with his commanders in the field in real time without those messages being disrupted or intercepted by Blue’s forces, had given his men a great deal of leeway within the basic structure of his plans. They were therefore able to react locally and in sympathy with one another, if not actually in concert except where some major signal united their efforts. Blue, meanwhile, had a vast array of sophisticated information-gathering equipment, which it mistakenly used to redirect forces in real time during the battle. This meant Blue was constantly trying to cover all the bases, paralysed by a glut of data that had to be interpreted and accounted for. Blue was also assuming a parallel command-and-control structure in Red’s forces, spending resources on blocking transmissions, and presumably also trying to extrapolate a coordinated over-arching plan from the individual initiatives of Red’s distributed decision-making apparatus.
In other words, Blue was over-thinking it.
Although the first Walkman – the device which ushered in the age of portable music, beginning with cassette tapes and moving on to CDs and MP3s – belonged to Sony, the chief mover in the fetishization of the digital device since the turn of the century has been Apple, whose sleek, minimal designs have been masterfully injected into the consciousness of the high street with a mix of music, wit and supremely seamless functionality. Apple’s devices are not simply objects. They are gateways, leading to Apple’s liveried spaces in what is increasingly called the Cloud (only the US National Security Agency seems to use the term ‘cyberspace’ any more). The Cloud is a vague collective noun referring to computers in any number of locations running services that can be accessed remotely. Google offers email, document storage, translation, search and a great deal more in the Cloud. Apple customers can buy media content with a single click. The next episode of a TV show, the next album, the next book is only ever a few moments away.
Apple’s Cloud presence is replicated in its steel and glass outlet stores: a perfectly predictable and stylized shopfront which performs with a minimum of fuss. In 2000 the Canadian design guru Bruce Mau described a selling environment in which ‘the brand identity, signage systems, interiors, and architecture would be totally integrated’. The point was the blurring of information and physical reality. The first Apple Store opened in May the following year – and then something else happened which was absolutely unexpected and appalling.
I can’t begin to unpick the interplay of the iPod’s launch in October 2001 – a month after the 9/11 attacks – with the slow, painful retrenching of American self-perception as being on top o’ the world. It seems facile, in the face of the falling towers, to wonder whether a small white box full of music became a part of the climb back out of the pit. And yet, if not music, what else do you fight horror with? It may be nonsense, suggested by the simple proximity of dates, or it may be an important part of the US relationship with the iPod – and, hence, everyone else’s too. Apple’s decision to go ahead with the launch must have been an almost impossibly hard one to make, but it was, in hindsight, the right one. Digital music went from being another format which might not catch on – like the MiniDisc player – to being the default format for many, myself included. Apple’s gizmo ushered in a new era of technology that was hot and cool at the same time, and – probably not coincidentally – set the stage for the arrival of multi-purpose pocket devices such as the iPhone, which in turn make possible the degree of integration of physical and digital space we’re now seeing, while at the same time opening all of us up, in our homes and our persons, to the tide of information that so upsets some of us.
The rise of Apple, along with Google and Amazon – the latter two both begun in the 1990s but attaining titan status in the same decade – has brought us here, to a place where everything seemingly must be digitized, from libraries to shopping to medicine to streets and maps. The combination of functionality and cool has made each new advance – the latest being Apple’s Siri voice interface, which allows users to ask their phones questions in ordinary language and receive a spoken answer rather than engaging through a screen or keyboard – a must-have item, a consumer product and an identity statement as much as a simple tool. Some aspects of human life – a small number, but an increasing one – are now inaccessible without a smartphone. Our relationship with technology is no longer that of tool-user and tool; it is more complex and emotional. We replace things more often than we have to, and long before they are worn out, so as to be in possession of the latest thing, the cutting edge. (Although it’s fair to point out that our brains factor our habitual tools into our self-perception, so the connection between a craftsman and his awl has always been rather more profound than appearances might suggest.)
There is now such a thing as an ‘unboxing’ – indeed, on YouTube you can watch videos of people removing their new technological gear from its packaging. Writer Neal Stephenson describes one of his characters revealing a piece of kit in his novel Snow Crash; the experience is quasi-sexual. We have, in every sense, fetishized our technology.
We are also, as a culture – the Western world, from Berlin and Paris to Los Angeles and on to Sydney – somewhat addicted to notions of apocalypse. Perhaps it’s because we’re also prone to lock-in; a crisis brings the opportunity to change the rules, to impose resolution on issues that otherwise simply fester. Politicians know this: witness the Neo-Conservative advance planning for a crisis that would allow the Republican Party to reshape the United States’ political landscape, which was then perfectly enabled by the unforeseen horrors of 9/11. In an apocalyptic scenario, all the usual rules can be re-examined, often to the great advantage of political leaders from one camp or the other.
In the present digital moment – the pre-crisis, perhaps – the lock-in hasn’t set in across the board. There are still conflicting platforms for ebooks, for music; still conflicting operating systems, each representing a different philosophy and conferring power and responsibility on different groups. This is, obliquely, an extremely political situation. Governments and corporations are fighting it out with one another and with rebellious groups like Eben Moglen’s Freedom Box Foundation (which exists to bring uncensorable communication and government-proof encryption to the general population), and while various administrations in Europe and the US have arrogated to themselves the right to trawl through digital communication in search of various sorts of crime, those laws have not yet been thoroughly tested. It’s not clear who will own the technological landscape in different areas, although the time window is closing. We don’t yet need an apocalypse to change the rules, because the rules themselves are even now being defined, sometimes knowingly, sometimes not – by us. We are making the landscape, not watching it form.
It’s one of the most frustrating attitudes I see in my occasional side job as a commentator on the publishing industry’s conversion to the digital age: the natural tendency of large corporations appears to be to wait until the smoke clears and a leader emerges, then seek a deal with that person. The infuriating point is that publishing – like many other so-called ‘old’ industries – can’t afford to take this approach this time. It needs to have a hand in defining what happens, because otherwise it will likely be cut out.
The same is true with the rest of us: we can’t just sit back on this one and wait. The world is being made – or, rather, we, collectively, with our purchasing power and our unthinking decisions, are making it – and we have to choose to be part of that process, or else accept that what emerges from it may be a cold thing constructed not around human social life but around the imperatives of digitally organized corporate entities. It may not happen that way on purpose, but the combination of commercialization, government involvement, litigation and societal forces – and the trajectory of digital technologies themselves as a consequence of what’s already happened – suggests to me that what takes place over the next few years, and what is happening now, will be definitive of how we approach and understand this area for the foreseeable future. To explain what I mean by that, I’m going to have to make a brief detour into the relationship between science, technology, society and the individual.
Marshall McLuhan famously asserted that ‘the medium is the message’. His meaning in essence was that the content of a given medium was irrelevant; the technology itself carried with it consequences that could not be denied or altered by what was communicated.
McLuhan’s perception – aside from being the kind of sweeping statement beloved of the Enlightenment and its ultimate modern prophets – is true only as far as it goes. A technology does, of course, shape society around it, but it is also created by that society in the first place and the lessons taken from it are inevitably filtered by cultural perceptions and understanding. It’s not just a praxis, in which ideas become things, but an ongoing, reflexive process in which each generation on one side of the reification divide yields a new generation on the other.
More simply: technology is derived from science. Science is the underlying knowledge; technology is what you then go ahead and do with that knowledge. If you have the science for television, do you create and implement a surveillance nation of closed circuit TV cameras, broadcast soap opera, or improve medical endoscopy? Your cultural bias will decide. (We’ve chosen to do all three in the UK. With the exception of the last one, it’s doubtful this strategy has greatly improved our lot.) Society, of course, is then influenced by the technology it has created. In The Wealth and Poverty of Nations, David Landes discusses the impact of what he calls the first digital device – the mechanical clock.
The mechanical clock is obviously not digital in the sense of being electronic. Rather, it relies on a ‘regular … sequence of discrete actions’ to mark time in equal portions rather than following the flow of the natural day. Until it was developed, time, as experienced by humans, was fluid. In Europe, the churches marked the passing of time in each diurnal cycle with a sequence of masses, but the ‘hours’ were evenly distributed between day and night, no matter what the time of year. They therefore grew shorter as the winter came in and longer in high summer. Time was also centralized, up to a point: the local bells tolled the hours, rather than each individual person or household possessing the means to measure time. There was a time to wake, to trade, to sleep and so on, and all of them were announced by the tolling bells.
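(To make the arithmetic of those unequal ‘temporal hours’ concrete, here is a minimal sketch in Python; the daylight figures are illustrative values chosen for the example, not drawn from Landes.)

```python
# Illustrative sketch of "temporal" (unequal) hours: daylight is divided
# into twelve parts, so the length of an hour tracks the season.
# The daylight durations below are rough example values, not historical data.

def temporal_hour_minutes(daylight_hours: float) -> float:
    """Length, in minutes, of one of the twelve daytime 'hours'."""
    return daylight_hours * 60 / 12

for season, daylight in [("midwinter", 8.0), ("equinox", 12.0), ("midsummer", 16.0)]:
    print(f"{season}: one daytime 'hour' lasts {temporal_hour_minutes(daylight):.0f} minutes")
# midwinter: one daytime 'hour' lasts 40 minutes
# equinox: one daytime 'hour' lasts 60 minutes
# midsummer: one daytime 'hour' lasts 80 minutes
```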
On the other hand, as Europe grew more populous and boundaries overlapped, time inevitably varied from place to place – from parish to parish – resulting in disputes. The invention of the mechanical clock, as with the arrival of mechanical printing, diminished the authority of the Church, allowing others to measure and set time. In effect, it also made possible the style of payment which for Karl Marx was typical of capitalism: payment by the amount of time worked, rather than for the product of labour. The mechanical clock, in displaying or creating time as we understand it today, has influenced our understanding of work, and of the length of our lives. In allowing calculation of the longitude it also facilitated the growing naval and mercantile power of Europe, and in cutting the day up into fragments, it paved the way for Newton, Einstein and the rest to examine space and time and uncover the connections between them.
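The longitude point rests on a simple proportion: the Earth turns through 360 degrees in 24 hours, so each hour of difference between local solar time and the time kept by a clock set at a reference port corresponds to 15 degrees of longitude. A minimal sketch of that calculation, with times invented purely for illustration:

```python
# Sketch of the clock-based longitude reckoning: 24 hours of rotation
# spans 360 degrees, so each hour of time difference equals 15 degrees.
# The example times below are invented for illustration.

def longitude_degrees(local_solar_hour: float, reference_hour: float) -> float:
    """Degrees east (positive) or west (negative) of the reference meridian."""
    return (local_solar_hour - reference_hour) * 15.0

# Local noon (12.0) while the ship's clock, still set to the home port, reads 15.5:
print(longitude_degrees(12.0, 15.5))  # -52.5, i.e. 52.5 degrees west of home
```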
While we’re on the topic of Newton, it’s worth observing that even science is somewhat influenced by the dominant societal perspective, however much researchers and theorists themselves seek to avoid it. Newton apparently drew the idea of gravity between planets from his study of the alchemical notion of the attraction of souls; he was culturally ready to consider gravity in that particular way before he began to do so. If he had evolved his understanding of gravity from a different background, would he have seen its function differently, providing us with a different aspect of the interactions of gravity with the rest of the forces at play in and creating space and time?
So as we look at the relationship between society and digital culture – a distinction that is in any case pretty hard to make in any other than a rhetorical way, digital culture being an aspect of society rather than an invasion from an alternate reality – it’s worth remembering that they make one another, and that things attributed to the digital realm are very often just more blatant examples of things happening everywhere. The reason this book exists at all is that our modern world seems to be completely permeated with digital devices and digitally mediated services, and it can seem as if the machines are taking over, or at least as if we’re being changed, for good or ill, without really seeing how it is happening. I’d say that isn’t right: we’re changing, for sure – we always have, and we should hope we always will – and some of those changes are contingent in their form on technology.
But that’s not to say that technology is the root of what’s happening. It isn’t. We are, as part of a cycle of development and change. And that false separation of us from our technologies – whether those technologies are physical ones like the iPhone or satellite television, or mental and societal ones like investment banks and governments – lies at the heart of a lot of what makes us unhappy and afraid in our world. One of the great benefits of digital culture is the growing awareness that we are not separate from one another or from the institutions we have made to do things for us. We are our technology. We just have to reach out and take charge of it, which is vastly easier to do when you know there are 200,000 people thinking very much the same. Twitter isn’t about letting your favourite movie star know that you daydream about him when you’re brushing your teeth. It’s about knowing what everyone else is thinking throughout the day and seeing your own opinion resonate – or not – with a large group. And from that knowledge can come a campaign to save a TV show, or a student protest, or a revolution.
Technology, used in the right way and with the right understanding, makes us more who we are than we have ever been. It has the potential to allow us, not to take back control of our lives and our selves, but to have that control in some degree for the first time ever. Hence, this is a moment of decision – a moment we have been moving towards for a while. We have to choose to take control.
blindgiant.co.uk/chapter3
4
The Plastic Brain
In 2011 Baroness Susan Greenfield told Britain’s House of Lords that she feared the immersion of children in ‘screen life’ could be detrimental to their development. She also expressed a more general concern as to what our relationship with digital technology was doing to us as a society and as individuals. As Professor Greenfield explains it – behind the title Baroness is another, more conventional one: she is Professor of Synaptic Pharmacology at Lincoln College, Oxford, and a specialist in the physiology of the brain – the structure of the individual human brain is determined by genes, environment and practice. Who you are accounts for a certain amount; then it’s where you are and what you do with yourself. The degree to which we can control the first two is questionable – but not so the third. That is directly affected by how you spend your time, and her fear was and is that the use of digital technology in our society is potentially harmful. One of her chief concerns is that we will become ‘all process’: that we will cease to connect events in a narrative and live from moment to moment, gratification to gratification. Another is that our social interactions will suffer, becoming performative – done in order to be reported – or inauthentic, geared to the screen and not the flesh.
In some ways it’s a familiar worry: when I was younger, it was suggested that television would turn us all into human lab rats endlessly pushing the ‘pleasure’ button. In others, it’s a far more serious notion, proposing that we may accidentally climb back down the ladder of brain evolution to a new version of pre-literate culture and existence while we outsource our serious thinking to machines, remembering things by storing them, letting machines – or, rather, software running in machines – make administrative decisions about utilities, tell us what to buy and what to like, what political parties best represent our interests, who to talk to and who to be friends with. Professor Greenfield is at pains to say that her concerns are theoretical rather than based on strong research evidence, and indeed that research is precisely what she proposes the government should undertake.