Within days, they were targeted by a group of trolls. The trolls were at first amused by the fact that Henderson had lost his iPod in the days before he died and began to post messages implying that his suicide was a frivolous, self-indulgent response to consumer frustration: ‘first-world problems’. In one post, someone attached an image of the boy’s actual gravestone with an iPod resting against it. But what really sent them spiralling into fits of hilarity was the bewildered outrage they could provoke in the unsuspecting family. The more upset the family got in response, the funnier it was.
Over a decade later, an eleven-year-old boy from Tennessee, Keaton Jones, made a heartbreaking video in which, crying, he described the bullying he was subject to in school.32 His mother, Kimberley Jones, posted it on her personal Facebook page, and it swiftly went viral across various social industry platforms. Celebrities, from Justin Bieber to Snoop Dogg, joined in the wave of support for the child, and a stranger set up a crowdfunding appeal to raise money for Jones’s family.
A degree of scepticism about the story would have been entirely warranted. There is already a long tradition of Upworthy-style, emotive, ‘compassionate’ viral content, much of it manipulative where not downright fabricated. These videos tend to use sentiment to reinforce conventional morality. For example, a well-known viral video featuring a homeless man who spends money donated to him on food for others (rather than on the demon booze) was used to raise $130,000 in donations before it was debunked. Yet there was no such scepticism as far as Keaton Jones’s story was concerned, and it seems to have been true.
Nonetheless, almost as fast as Jones was canonized, the tide turned. Social industry detectives had fished around on Kimberley Jones’s Facebook account and found photographs of her, smiling, with the Confederate flag, and posts where she spoke disobligingly about Colin Kaepernick’s NFL protest against racism. Overtly racist comments were attributed to her, based on material found on a fake Instagram account. Rumours, never corroborated, emerged that Jones was bullied because he had used racist epithets in class. Tweets making this claim were retweeted hundreds of thousands of times. A parody account, ‘Jeaton Kones’, which portrayed Jones in stereotypical Southern ‘white trash’ colours, went viral.
Jones was, in the idiom of social industry users, ‘milkshake-ducked’. He had become one of an ever-growing subpopulation of people who, having been adored by ‘the internet’ for five minutes, are abruptly hated because something unpleasant has been discovered or invented about them. But in this case, and not for the first time, the internet became far more ruthless and cynical with its questionable moral alibi than even the most sadistic school bully. As though there is already something potentially violent and punitive in idealizing someone; as though the whole point of such mawkish idealizations is that they have to fail – you set them up, the better to knock them down.
As this was unfolding, the latest in a string of cyberbullying-related child suicides took place in the United States. Ashawnty Davis, who, her parents say, was subject to bullying at school, found that a smartphone video of herself fighting another girl from the same school had been uploaded to a social industry app, where it went viral.33 Davis suffered tremendous anxiety over the video. Within two weeks, she was discovered in a closet, hanged. The discomfiting proximity of these events raises alarming thoughts. Would ‘the internet’ stop, would it even be able to stop, if it had driven Jones to commit suicide? If, rather than simply trolling a grieving family, online swarms had caused their grief in the first place?
A crucial difference between the Henderson story and the Jones story is that the trolls in the first case were marginal, subcultural, self-consciously amoral and easy to revile. In the second case, though trolls were certainly operating, their actions blended into those of millions of other social industry users driven by a mixture of sympathy, identification, emotional voyeurism, the sensation of being part of something important, ultimately souring into resentment, distrust and spite. The trolling was generalized.
One distinction, perhaps, is that trolls, unlike most users, are fully aware of, and exploit the cumulative impact of, hundreds of thousands of small, low-commitment actions, like a tweet or retweet. Most of those who participated in the mobbing of Jones spent at most a few minutes doing so. It was not a concerted campaign: they were just part of the swarm. They were minute decimal points in a ‘trending topic’. Individually, their responsibility for the total situation was often homoeopathically slight, and thus this indulgence of their darker side, their more punitive, aggressive tendencies, was minor. Yet, incentivized and aggregated by the Twittering Machine, these petty acts of sadism became monstrous.
As the trolling slogan has it, ‘None of us is as cruel as all of us.’
IX.
The risk, in appealing to such outré examples, is that it can legitimize a form of moral panic about the internet, and therefore dignify state censorship. This would be the traditional answer to the Oresteian Furies: domesticate them with the ‘rule of law’.34 It is predicated on upholding a traditional hierarchy of writing, at the top of which is a written constitution or sacred text from which written authority flows. What a society deems acceptable and unacceptable is anchored to an authoritative, venerable text. Of course, the rule of law has never been as good at restraining the Furies as liberals hoped. The McCarthyite witch-hunts of mid-twentieth-century America showed that political paranoia could easily be disseminated through the workings of the liberal state.35
What is happening now, however, is that the digitalization of capitalism is disturbing these old written hierarchies, so that the spectacles of witch-hunting and moral panic, and the rituals of punishment and humiliation, are being devolved and decentralized. The spectacle, which the French Situationist Guy Debord defined as the mediation of social reality through an image, is no longer organized by large, centralized bureaucracies.36 Instead, it has been devolved to advertising, entertainment and, of course, the social industry. This has birthed new ecologies of information, and new forms of the public sphere. It has changed the patterns of public outrage. The social industry hasn’t destroyed the power of ancient written authority. What it has added is a unique synthesis of neighbourhood watch, a twenty-four-hour infotainment channel and a stock exchange. It combines the panopticon effect with hype, button-pushing, faddishness and the volatility of the financial markets.
However, the record of the liberal state in dealing with the social industry is poor, and there is a tendency for it to fuse with the logic of online outrage, rather than containing it. Cases of legal overreaction to statements made on the internet are well known. The debacle famously known in the UK as the #twitterjoketrial involved the state arresting, trying and convicting twenty-eight-year-old Paul Chambers for making a joke on Twitter.37 He expressed his irritation with the local airport being closed by ‘threatening’, in clearly sarcastic tones, to blow it ‘sky-high’. Chambers’ conviction was quashed after a public campaign, but not before he lost his job. Less well known, but perhaps just as ridiculous, was the case of Azhar Ahmed, who, in a moment of anger about the war in Afghanistan, posted that ‘all soldiers should die and go to hell’.38 Rather than treating it as an emotional outburst to which he was entitled, the courts convicted him for ‘sending a grossly offensive communication’.
Perhaps more telling are cases where police action was prompted by social media outrage. This is what happened to Bahar Mustafa, a student at Goldsmiths in southeast London.39 As an elected officer in her student union, she had organized a meeting for ethnic-minority women and non-binary students. Conservative students, outraged that white men were asked not to attend, mounted a social media campaign to expose her ‘reverse racism’. In the furore, she was accused of circulating a tweet with the ironic hashtag #killallwhitemen, as proof of this ‘reverse racism’. Mustafa, though insistent that she had never actually sent such a tweet, was arrested. The Crown Prosecution Service, rather than treating this as a bit of internet trivia, tried to prosecute her, only withdrawing the case when it became clear it had little chance of success. But it fuelled an apocalyptic multimedia storm of fury, resulting in racist abuse directed at Mustafa and invitations to ‘kill herself’ or offer herself to ‘gang rape’. These tweets did not result in prosecution. Nor do the vast majority of such posts. Instead, the law was fused to arbitrary patterns of outrage flaring up against individuals deemed to have breached thresholds of taste and propriety on the social industry. The Furies are often magnified by the rule of law, rather than being chastened by it.
This means improvised rituals of public shaming, breaking like a thunderstorm on the medium, can feed into official responses. And because the social industry has created a panopticon effect, with anyone being potentially observed at any time, any person can suddenly be isolated and selected for demonstrative punishment. Within online communities, this produces a strong pressure towards conformity with the values and mores of one’s peers. But even peer conformity is no safeguard, because anyone can see into it. The potential audience for anything posted on the internet is the entire internet. The only way to conform successfully on the internet is to be unutterably bland and platitudinous. And even if one’s whole online life is spent sharing ‘empowering’ memes, ‘uplifting’ quotations and viral video clickbait, this is no guarantee against someone, somewhere finding your very existence a fitting target for abuse. Trolls programmatically search for ‘exploitability’ in their targets, where ‘exploitability’ means any vulnerability whatsoever, from grieving to posting while female or black. And trolling is a stylized exaggeration of ordinary behaviour, especially on the internet.40
Not everyone is programmatic in their commitment to exploiting and punishing vulnerabilities, but many still do so, knowingly or otherwise. And it is compounded by the human propensity to confuse the pleasures of aggression with virtue. The late writer Mark Fisher described the progressive version of this through the baroque metaphor of the ‘Vampire Castle’.41 In the Castle, Fisher wrote, well-meaning leftists accede to the pleasures of excommunication, of in-crowd conformity and of rubbing people’s faces in their mistakes, in the name of ‘calling out’ some offence. Political faults, or even just differences, become exploitable characteristics. Since no one is pure, and since the condition of being in the social industry is that one reveals oneself constantly, then from a certain perspective our online existence is a list of exploitable traits.
And when a user’s exploitable traits become the basis for a new round of collective outrage, they galvanize attention and add to the flow and volatility, and thus to the economic value, of the social industry platforms.
X.
‘Language is mysterious’, writes the religious scholar Karen Armstrong.42 ‘When a word is spoken, the ethereal is made flesh; speech requires incarnation – respiration, muscle control, tongue and teeth.’
Writing requires its own incarnation – hand–eye coordination, and some form of technology for making marks on a surface. We take a part of ourselves and turn it into physical inscriptions which outlive us. So that a future reader can breathe, in the words of Seamus Heaney, ‘air from another life and time and place’. When we write, we give ourselves a second body.43
There is something miraculous about this, the existence of a ‘scripturient’ animal, barely a dot in the deep time of the planet’s history. Early theories of writing could hardly resist seeing it as divine – ‘God-breathed’, as the Book of Timothy has it. The Sumerians regarded it as a gift from God, alongside woodwork and metalwork – a telling juxtaposition, as if writing was indeed just another craft, another textile, as in Inca civilization. The Egyptian word for hieroglyphic writing literally translates as ‘writing of the gods’.
The ancient Greeks exhibited an interesting distrust of writing, worrying that it would break the link to sacred oral cultures and, by acting as a mnemonic device, encourage laziness and deceit. Yet they also considered scripture holy in that it retained a link to the voice. The religious historian David Frankfurter writes that the letters of their alphabet, insofar as they denoted sounds, were regarded by ancient Greeks as ‘cosmic elements’.44 Singing them could bring one to a state of perfection. So in addition to writing as mnemonic, accounting device and craft, here was writing as musical notation, divine poetry.
The relationship of writing to the voice has always been confused by historical myths. The Polish-American grammatologist I. J. Gelb was typical of his Cold War contemporaries in arguing that the purpose of writing was ultimately to represent speech, and therefore alphabets were the most advanced form of writing.45 In the alphabet, each letter represents a sound, or a phonetic element. In other writing systems, elements might include logograms, where a whole word is represented by a single element; ideograms, where a concept is represented without any reference to the vocal sounds involved in saying it; or pictograms, where the written element resembles what it signifies. The assumption of the superiority of alphabets, a progress myth of modernity, is based on the fact that they allow an infinite number of infinitely complex statements to be written down.
Most of the writing we are surrounded by today does not represent speech. Like seismic writing, musical notation, electronic circuit diagrams and knitting patterns, today’s computer programs and internet code and script – the ur-writing of contemporary civilization – mostly dispense with phonetic elements. What is more, our online writing is increasingly rebus-like, drawing on non-alphabetic elements – emojis, check marks, arrows, pointers, currency symbols, trademarks, road signs, and so on – to convey complex tonal information quickly. Indeed, one of the ironies of writing on the social industry is that it uses non-alphabetic notation in order to represent speech better. The parts of our speech that have to do with tone, pitch and embodiment, and which are conveyed in real time in face-to-face conversation, tend to be lost in alphabetic writing, or expressed only with considerable elaboration and care. The economy of emoticons and memes is about giving the voice a convenient embodiment.
XI.
In 1769 the Hungarian inventor Wolfgang von Kempelen developed the first model of his Sprechmaschine (speaking machine).
It was an attempt to produce a mechanical equivalent of the apparatus – lungs, vocal cords, lips, teeth – which produces the acoustically rich, subtle and varied set of sounds known as the human voice. The inventor struggled, through successive designs using a box, bellows, vibrating reed, stoppers and a leather bag, to make his machine speak. Each time, its idiot leathery mouth yammered, and nothing remotely human came out.
At last, the problem of reproducing speech efficiently was solved with the telephone. Speak into a traditional telephone, and the sound waves hit a diaphragm, making it vibrate. The diaphragm presses on a small cup filled with fine carbon grains which, when pressed together, conduct a low-voltage electrical current. The more the diaphragm presses down, the more densely the grains are packed together and the more electricity flows. Thus, by means of a mild electrical current, the voice could be separated from the body, uncannily reappearing halfway around the world.
In a way, it was a form of writing. The sound waves inscribed a pattern on the diaphragm and carbon particles, which converted the pattern into an electrical signal for transmission. But it left no permanent trace. The invention of a device which could be programmed with written instructions to carry out a series of logical operations – the computer – changed this, by changing the hierarchy of writing. When you write using an old typewriter, or pen and paper, you leave real, physical inscriptions on a surface. Even when mechanized, the shapes are imperfectly formed, and there are likely to be spelling errors and stray punctuation marks. When you write using a computer, spelling and punctuation errors are usually picked up, and the letters are formed as close to perfectly as possible. But the ‘inscription’ you see is the virtual, ideal representation of an entirely different system of writing being carried out on complex electronic circuitry, whirring discs, and so on.
Our entire experience with the computer, the smartphone and the tablet is designed to conceal the fact that what we’re seeing is writing. According to the software developer Joel Spolsky, what we encounter is a series of ‘leaky abstractions’: ‘a simplification of something much more complicated that is going on under the covers’.46 So where we see a ‘file’, ‘folder’, ‘window’ or ‘document’, these are abstractions. They are simplified visual representations of electrical parts performing a series of logical operations according to written commands. When we see ‘Notifications’ and ‘Feed’, we are seeing the simplified visual representation of the operations of written software code. These abstractions are ‘leaky’ because, though they look and feel perfectly formed, the complex processes they represent can and do fail. As in The Matrix, the writing programs an image for our consumption: we don’t see the symbols, we see the steak coded by the symbols. The image is the lure. What it obscures is that all media – music, photography, sound, shapes, spaces, moving imagery – have already been translated into the language of written numerical data.
But it is when we begin to write to the Twittering Machine that a new and unexpected wrinkle is introduced into the situation, upending the traditional division between the voice and writing. The Twittering Machine is good at reproducing elements of speech usually lost in writing, in a computer-mediated written format. It is not just that nuances of pacing, tone, pitch and expression are conveyed with some labour-saving economy by means of emoji and other expedients. In ordinary conversation, the participants are all simultaneously present, and the discussion unfolds in real time, not with the usual lag of written correspondence or emails. Because of this, conversation is informal, loose in its use of conventions, and assumes a lot of shared ground between the participants. The social industry aspires to the same celerity and informality, to give the impression of being a conversation. It gives voice to the voice.