The Blind Giant
Having managed an initial misfire which went from waspish to profane in a single exchange, the PR company went right ahead into the minefield and told her she should be grateful for their attention. Whatever it was supposed to be, it sounded like the priesthood of the old media and cultural Church reprimanding a lowly acolyte of an insignificant splinter sect. That’s not something anyone would be rash enough to do to the editor of a national newspaper, but for some reason it seemed like a good idea in this context. The result was – to anyone who knows how the digital world works – massively predictable. The Bloggess put the exchange online. She tweeted about it. And it became news. What looked like soft, disconnected power lined up behind her until she was the tip of a spear a few hundred thousand irritated people long. The PR company found its own brand and the names of its senior executives in the midst of a PR nightmare.
The incredible thing about this is not that it happens, but that it happens a lot. In another instance, author Neil Gaiman found himself embroiled in a battle of words with Matt Dean, a Republican in the Minnesota House of Representatives. The spat was covered internationally, and Dean was made to apologize – by his mother – but not before he’d managed to be offensive about Gaiman’s massive fan base and garner the kind of bad publicity his political opponents would have paid millions of dollars to arrange if only they could have. Looking at either of these sequences is like watching a cheap banana skin gag in slow motion – one of those where the clown picks himself up off the floor, congratulates himself on his stylish recovery, and walks through a sheet of plate glass into a vat of paint.
The stars of participant media are not like the stars of television shows. The love-hate, predatory aspect of the visual media celebrity culture is absent. The relationship is more friendly, more even-handed and more loyal than obsessive. Figures like the Bloggess are not just characters on the screen; the level of engagement is much higher. They are real people with whom their correspondents and readers strongly identify, whom they admire and whom they consider part of their social circle. They have extended their own hearths until they touch directly on the hearths of others, creating one enormous temporary psychological living room space. Or perhaps it’s more than that, and they are perceived unconsciously as autonomous parts of an extended, digitally mediated self.
The political and cultural clash of these three styles is with us all the time. The unevenly distributed future is matched by an unerased past that continues to play out, occasionally breaking through the patina of recent history. Despite the political fondness – in my mind linked inextricably with John Major but adopted wholeheartedly by Tony Blair and now alas from time to time even by US President Barack Obama – for ‘drawing a line’ under uncomfortable issues, speaking the words does not make it so.
Broadly, it seems to me that the professional culture’s fondness for systems was derived from a mistrust of the foregoing style of individual decision-making, whether that was ideological – an attempt to move away from the tendency to patronage, feather-bedding and ‘jobs for the boys’ – or a practical perception that gathering the expertise of an institution and bringing it to bear was a vital step in dealing with a world that increasingly required sophisticated competences. Michael Kuhn’s book 100 Films and a Funeral describes part of the method that allowed Polygram Filmed Entertainment to achieve such a remarkable string of successes between 1991 and 1999: no project was given final approval until every department was in sync and Kuhn could see that there were no misunderstandings about what a film was supposed to be, to whom it would appeal or how it was going to be sold. The entire journey was mapped out in advance – at least in broad brush – avoiding the classic disaster moment in movie production where the marketing team sees a film for the first time and knows that however perfect a statement of art or identity it may be, the audience is not large enough to support the money spent on it. The system collated the opinions and capabilities of Kuhn’s team and the product speaks for itself.
In a social and administrative setting, creating a system that can be applied universally looks initially like justice. Making everything about a simple set of rules lets people know what to expect and ensures that everyone is treated equally (or perhaps not ‘equally’ but ‘exactly the same’).
Sadly, it also means that cases that don’t fit the structures envisaged by those creating the system are shoehorned into spaces that aren’t right. The notorious response of the uncaring bureaucrat in British comedy and drama – ‘sorry, computer says no’ – speaks to the reality that human life is inevitably messier than attempts to codify it can allow for. We knew that already; it’s why Britain retains its system of precedent in legal matters rather than creating a written constitution. The map must be able to grow to become more and more detailed as the territory is revealed. All the same, the professional era – the modern era – was and is filled with expert systems: abstracted and codified sets of rules intended to be applicable everywhere. Governments, bureaucracies and corporations are all systems on this model; people working for them function within strictly set boundaries. The point of them, indeed, is that while you may use your judgement, you are not there to create a policy, but to implement a set of rules. Decisions have already been taken, were taken when the structure was designed. A person’s whole responsibility within a system of this kind is to fill a role. To go beyond that – to attempt to influence outcomes – is frowned upon. And in a sense, this was in the twentieth century the only way to do it if individual and possibly biased judgement – and a resulting uneven implementation of the rules – were to be avoided.
The culture we created as a consequence, though, was – and still is in some ways – profoundly unattractive. What was supposed to be fairness can become callousness or, at the very least, can feel that way when read from the other side. ‘There’s nowhere on the form for that’ is a truthful response that can also be a self-exculpation, an abdication of responsibility for a situation that may mean that someone loses a house or gets no medical care because the extenuating circumstance on which they wish to base a defence is not acknowledged by the narrow straitjacket of rules. Moreover, in the twentieth century we lost track of the reason for the existence of these systems and the fact that they were part of ourselves. We became alienated, not in the classic Marxist sense of being alienated from the product of physical labour, but from the technologies of the mind we generated to create a fair and effective living environment for ourselves. (Those systems – composed of people – also forgot their own function, so that, for example, banks have begun to use the financial system to maximize short-term profit rather than concentrating on wise lending to create genuine growth. Things which were too important and obvious to codify have been set aside as the code as written has become the basis of behaviour. A form of fairydust economics has emerged where numbers in ledgers get larger while the actual world remains the same. Every so often, someone notices the discrepancy and the whole thing collapses.)
This, not incidentally, is another perfect setting for deindividuation: on one side, the functionary behind a wall of security glass following a script laid out with the intention that it should be applied no matter what the specific human story may be, told to remain emotionally disinvested as far as possible so as to avoid preferential treatment of one person over another – and needing to follow that advice to avoid being swamped by empathy for fellow human beings in distress. The functionary becomes a mixture of Zimbardo’s prison guards and the experimenter himself, under siege from without while at the same time following an inflexible rubric set down by those higher up the hierarchical chain, people whose job description makes them responsible, but who in turn see themselves as serving the general public as a nonspecific entity and believe or have been told that only strict adherence to a system can produce impartial fairness. Fairness is supposed to be vested in the code: no human can or should make the system fairer by exercising judgement. In other words, the whole thing creates a collective responsibility culminating in a blameless loop. Everyone assumes that it’s not their place to take direct personal responsibility for what happens; that level of vested individual power is part of the previous almost feudal version of responsibility. The deindividuation is actually to a certain extent the desired outcome, though its negative consequences are not.
By contrast – or by reflection – the supplicants on the other side of the glass see themselves as discarded and unregarded by the bureaucratic team, feel pressure from bills or ill health, their identity expressed in customer numbers or taxpayer IDs rather than names, and believe they are treated not as individuals but as cases. What ought to be human fallibility tempered by a system designed to ensure fairness becomes a resource-allocation machine that can’t handle the awkward shapes of genuine human suffering. A sense of hopelessness and despair prevails – and we’re back in the 1930s. Under such pressure, deindividuation plays out as a loss of self and inhibition: actions that should be impossible become conceivable. The social contract is broken, because such systems work not only by fear of retribution but by promise of reward. Where the second is absent, the first is insufficient, even in the degree possible to Egyptian and Tunisian secret policemen. You can see this as the Ultimatum Game again: below a certain level of reward, players sometimes choose to void the game and receive nothing rather than accept a bad deal, even if doing that in practice requires time, effort and the acceptance of risk.
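The Ultimatum Game dynamic is simple enough to sketch in a few lines of code. The following Python fragment is purely illustrative – the pot size and the rejection thresholds are assumptions chosen for the example, not figures from any study – but it shows the essential point: below a certain offer, the responder prefers to void the game and take nothing.

    import random

    def ultimatum_round(pot, offer, threshold):
        # One round of the Ultimatum Game: the proposer keeps (pot - offer)
        # and offers the rest. If the offer falls below the responder's
        # threshold, the responder voids the game and both get nothing.
        if offer >= threshold * pot:
            return pot - offer, offer
        return 0, 0

    random.seed(1)
    pot = 100
    for offer in (50, 30, 10, 1):
        # Assumed threshold: responders often reject offers below roughly
        # a fifth to a third of the pot, even at a cost to themselves.
        threshold = random.uniform(0.2, 0.3)
        p, r = ultimatum_round(pot, offer, threshold)
        print(f"offer {offer:>2}: proposer gets {p}, responder gets {r}")

Run with the low offers above, the simulated responder walks away with nothing rather than accept the deal – the digital analogue of the broken social contract described here.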
Digital media can potentially ameliorate the disconnection between the two sides if they are permitted to do so, and if we as constituents are up to the task. The system of rules is created to avoid having to decide on each individual case as it arises, something that would have been impossible before the advent of the digital crowd. Technologically, at least, it’s now possible to put hard cases into a pool to be judged in real time by the population at large. As I said: at some point in the not so distant future, we will inevitably have to start asking why we still have representative government when we could have direct participation.
More prosaically, though, digital communication allows people to seek advice from those in a position to know the answers, and look for guidance from those who will process their applications or complaints. It matters less in the context of deindividuation that the advice they receive should solve their problems than that those problems be acknowledged and that they feel they are talking to a human being rather than a blank wall. The communication must be authentic rather than scripted, personal rather than professional. It must acknowledge them as participants rather than supplicants. It must be human – and it need not matter that it is electronically mediated, so long as it is kind. (The word ‘kind’ is worthy of a book in itself: the dual meaning of being of the same type as another person and being good to them bears exploration, especially in this context. We say ‘be a person’ and we mean ‘do the thing you would wish done for you if positions were reversed.’ Kind-ness is an issue of identity as much as of empathy.)
Another important consequence of the arrival of digital technology and its facilitation of feedback is that we can look at large systems and recognize them once more not only as part of ourselves, but also as components that can change. Our relationship with text has shifted; where once almost the only widely available text in Europe was the Bible, the immutable Word, now we are surrounded by millions of tiny, impermanent textual bites. The Gospel of Matthew says ‘It is written’ as a statement of certainty; in the Book of Daniel, the writing on the wall spelled doom for Belshazzar; the Ten Commandments were written in stone. Culturally, for hundreds of years what was written was unalterable. After Gutenberg the monopoly of the Churches on text was broken and people began to express dissent; but still, what was written was written for ever, and carried a weight of authority. Even thirty years ago, what was in the papers carried additional legitimacy; and a large number of people still trust the Encyclopaedia Britannica over Wikipedia, despite a similar error-rate.
Now, though, we live in a world where text is fluid, where it responds to our instructions. Writing something down records it, but does not make it true or permanent. So why should we put up with a system we don’t like simply because it’s written somewhere? We are aware, more than ever, that what has been decided and contracted is simply that: an agreed statement. If we now disagree, that statement can be changed if people are willing to change it. Contracts and even laws are subject to renegotiation. That realization, incidentally, is something many companies and indeed governments – or the people who work in them – have failed to come to terms with in the context of the wider populace but seem able to take on board quite readily in relation to themselves. The UK parliament notably cannot bind a future parliament; the debts of sovereign nations are even now being reassessed and written down. The leaders of the US right trumpet the idea that ‘corporations are people’ but seemingly have not begun to consider the consequences of their constituents following this logic to its conclusion and demanding the right to act like corporations. It’s too late for them to object, anyway: the franchise of rewriting has been extended to include everyone, but the aristocracy of the old regime have yet to acknowledge it.
Those debates to one side for now, the unfortunate side effects of the culture of expert systems make an important point as we begin to build new technologies that will create around themselves new economic ecosystems and institutions; it comes back to the feedback loop that embraces society, technology and the individual: the choices we make now about the technology we use and how we use it will affect the way our world looks in the future. Buying – whether it’s with money or resources or with data – is voting: choices have consequences. Consequences do not fade away, and some become locked in. We have to learn to choose not just for the short term and the immediate bargain, but for the long game and the better outcome. In making or in endorsing new practices, we have to look at what they will create around themselves. We need to practise our choosing.
And more than that, we have to learn to code the change we want to see in the world.
Perhaps the most important corollary of the mutually influential relationship between technology and society is that, knowing it’s there, we have some degree of choice about which direction to take. It’s impossible to know in advance precisely how a given technology will influence the future, but we can attempt to create technologies and systems that are necessary for the kind of world we want to live in. More specifically, we have to create the technologies that are necessary for the institutions we believe we will need to support the way of life we desire.
By way of example: a favourite dream of proponents of the Internet as a great scholastic resource and a driver of education and literacy is the global digital library, a massive accessible collection of information in the form of texts, images and other media available from anywhere in the world. When the Google Book Settlement was in play, one of the arguments in favour (and it is, in my opinion, still the most powerful recommendation of that project, however much I disliked Google’s approach to it) was that the resulting archive would be a moment akin to the arrival of the Gutenberg press. Google’s mission to make the world’s information searchable would dovetail with an absolutely undeniable positive: the creation of a library of extraordinary breadth and depth.
The problem with the Gutenberg comparison is control. When Johannes Gutenberg created his revolutionary press in the middle of the fifteenth century, the consequence was the breaking of the Church’s monopoly on scholarship in the Christian world. The invention of movable type was a moment of massive decentralization. If information wants to be free, that was the first time that it ever saw daylight.
Creating a digital library owned by a private company in one country is not the same thing at all. It inevitably implies the possibility of a monopoly on that information emerging and, indeed, that was one of the profound objections raised in the fairness hearings surrounding the Google Book Settlement (though not the one on which the settlement was rejected). Because Google has, still, more goodwill than any other corporate giant, people tend to be less concerned about the possibility that it will suddenly ‘turn evil’ and require payment for access to such an archive. However, companies do not have solid identities. Granted, I’ve argued that people also are more fluid than we care to believe, but it’s a sliding scale. Corporate entities are bought and sold, and sometimes bits of them are broken off. The last company so central to a vital communications service could arguably be said to be AT&T, which, under a 1982 agreement with the US Department of Justice to resolve an anti-trust case, shed its local operations, creating seven new companies known as ‘Baby Bells’. (In any case it has to be acknowledged that part of Google’s desire for the archive was to improve its own search software by improving the system’s understanding of language.) If you imagine Google – or any media giant – in the same situation, divesting itself of such a digital archive, and that archive then being snapped up by a company with a less benign image, you begin to see the problem. The cost of academic journals in some cases can be thousands of pounds a year; what might EvilCorp Hypothetical Industries charge for access to the only complete digital library in the world?
Moreover, digital files controlled centrally are subject to curious dangers. In July 2009 users of Amazon’s Kindle reading device who had purchased a particular edition of 1984 by George Orwell discovered that it had vanished overnight. The disappearance was the consequence of a copyright wrangle, but the precise grounds hardly mattered. That it should have been Orwell’s book – in which a totalitarian government makes dissidents vanish by dropping them down a ‘memory hole’ – made the situation that much more bleakly ironic, but the basic truth was bad enough: a digital book could be removed from the Kindle device on instruction from a central location. Under what other circumstances might a text be pulled? If a book contained material that embarrassed the government, might it, too, disappear?