The Twittering Machine
But if the simulacrum is indeed the epitome of technocratic rule, disappearing meaning behind the coercive rule of information, it also represents a problem for power. The world of the simulacrum, a world increasingly drained of meaning, deprives power of its seemingly obvious legitimacy. The authoritative statements of politicians, attorneys general, senior journalists and academics come to seem arbitrary. The attempt to reinject meaning into the system by reviving Cold War ideologies and arousing public opinion against a scapegoat ‘postmodernism’ is doomed to fail, however. Since these efforts are themselves part of the simulacrum, they slip easily back into the cycle of meaningless information. The more sophisticated propagandists recognize this, and instead work with the grain of the collapse of meaning. For example, the BBC alleges that Russian disinformation campaigns no longer bother to promote a single narrative, but instead flood the internet with so many competing versions of a story that no one knows what to think.70 It would be prudent to assume that all parties now involved in disinformation are using similar techniques.
The problem is not that the internet is a web of lies. Of course it is. In 2016, a team of researchers published a study of online conduct which found that less than a third of users claimed to be honest about themselves on the internet.71 But the machine was invented to help us lie. From its beginnings, even in the precocious French public-sector internet known as Minitel, the first thing users did was dissimulate their identities. Anonymity made it possible to wear new textual skins. In the early days of Silicon Valley idealism, anonymity and encryption were all the rage. The ability to lie about ourselves was thought to bring freedom, creative autonomy, escape from surveillance. Lying was the great equalizer. Silicon Valley, as it emerged in the 1980s and 1990s, was shaped by an aleatory fusion of hippy and New Right ideologies. Averse to public ownership and regulation, this ‘Californian Ideology’, as Richard Barbrook and Andy Cameron dubbed it, was libertarian, property-based and individualist.72 The internet was supposed to be a new agora, a free market of ideas.
This connection between lying and creative freedom is not as strange as it may appear. Milan Kundera, reflecting on Stalinist tyranny, argued that the injunction not to lie was one that could never be made to an equal, because we have no right to demand answers from equals.73 Indeed, it is only when we acquire the capacity to lie that we really discover freedom of thought, since only then can we be sure that the authorities can’t read our minds. It is only when we can lie about the future, the constructivists exhorted in the Realistic Manifesto, that we can begin to transcend the rule of brute facts. In this sense, there is a genuine utopian kernel to the Californian Ideology, even if its embodiment within social media is a utopia only for trolls and other sociopaths.
The problem is not the lies. It is information reduced to brute fact, harnessed to technologies with unprecedented and unforeseen powers of physical manipulation by means of information bombardment. We naively think of ourselves as either ‘information rich’ or ‘information poor’. What if it doesn’t work that way? What if information is like sugar, and a high-information diet is a benchmark of cultural poverty? What if information, beyond a certain point, is toxic?
One is struck, therefore, by the palpable timidity of commonplace diagnoses of ‘fake news’, opinion silos, filter bubbles and the ‘post-truth’ society. This, the ‘sour grapes’ theory of communications, is sensationalism. But all sensationalism is a form of understatement, all moral panic a form of trivialization, and this is glaringly so in the case of our ‘fake news’ panic.74 The problem is not the lies, but a crash in meaning. The problem is what the survivors, scrabbling in the rubble and detritus of the internet for answers, will believe.
The crisis of knowing has roots which run deep into the institutions from which authoritative knowledge was hitherto produced. The Twittering Machine didn’t cause this crisis, but it is its current culmination. The Twittering Machine is a furnace of meaning.
CHAPTER SIX
WE ARE ALL DYING
Is it possible that in their voluntary communication and expression, in their blogging and social media practices, people are contributing to instead of contesting repressive forces?
Michael Hardt and Antonio Negri, Declaration
Silicon Valley calculates with, and not against, the Apocalypse. Its ever-implicit slogan is: ‘Bring it on’
Geert Lovink, Social Media Abyss
Humanity rocks!
Elon Musk to Sam Harris, Twitter.com
I.
‘I’m going to kill all Muslims,’ he shouted, as almost a dozen worshippers and pedestrians lay injured and, in one case, dead. Almost as quickly, he retreated to a more grimly realistic declaration: ‘I did my bit.’
Darren Osborne killed one Muslim, fifty-one-year-old Makram Ali. But he wanted to kill them all. He was psychologically fuelled for genocide.
He had rented a van and driven it to Finsbury Park Mosque, in a working-class north London community. He arrived at quarter past midnight, on 19 June 2017, driving up Seven Sisters Road without much of a plan other than to kill. The night before, he had bragged in a pub that he was a ‘soldier’, and that he was going to ‘kill Muslims’. It is unclear what he would have done, had he not happened on a group of Muslims who had just performed night-time prayers and were attending to a man who had collapsed at a bus stop.
Like other ‘lone wolf’ attacks that had taken place over the previous two years, this one was chaotic, low-tech, disorganized. He simply drove up the road, ploughing the van into the crowd. When the van struck, he was driving at 16 mph. In the moment of his supposed triumph, he was heard saying, ‘kill me’.1
Finsbury Park Mosque is a hate symbol for the British far right. Indeed, on the day after the attack, British fascist Tommy Robinson called it a ‘revenge attack’, blaming the mosque for producing terrorists. In fact, the mosque hadn’t seen a jihadist cadre for almost fifteen years, not since the leadership of the Islamist preacher Abu Hamza was ousted. Even if Osborne were the ‘avenger’ that he desperately wanted to be, no one at the mosque, or huddled at a bus stop outside it, could have given him anything to avenge. Nonetheless, Robinson’s claim echoed Osborne’s own incoherent self-justification, that the attack was revenge for an earlier massacre by jihadist militants on London Bridge.
It had taken only weeks for an unemployed man living in Wales to become an ideologically obsessed murderer. According to his relatives and estranged partner, Osborne had never before exhibited any racism. He had been troubled, alcoholic, violent, abusive, depressed – he had even attempted suicide, and tried, without success, to get himself committed. In fact, according to a neighbour, he had always been a ‘complete cunt’, but never a racist. He was barely politically aware. He wouldn’t even know who the Prime Minister was, according to his sister.2
But then Osborne started consuming content made by fascist group Britain First, and far-right activist Tommy Robinson, chugging it like antidepressants. From alcoholism and drug dependency, he went straight to the ‘red pill’.iv Only then was all of this misery politically weaponized.
Osborne had, a few weeks before the attack, seen a BBC docudrama about a child-grooming scandal in the northern English town of Rochdale. Like most such scandals, it involved middle-aged men, some hitherto respected, taking advantage of underage girls. The girls were often particularly vulnerable because of their class, or because they were in care or in foster homes. In this case, the men were Muslim and the girls were white. And for Osborne, their being Muslim must explain their evil. Indeed, it was as if he had concluded that Islam explained all evil: a universal theodicy.
Nor was Osborne arriving at this conclusion in isolation. In Britain in 2018, Islam had long been a punching bag for politicians and the press, the all-purpose bogey-scapegoat comparable only to those other anti-nationals, immigrants. The psychoanalyst Octave Mannoni once remarked on the surprising numbers of Europeans who, having never been to the colonies or seen a colonial subject, dreamed of them.3 The same could be said of many Britons who had encountered Islam only as a manifestation of their own unconscious. The propaganda of Twitter Nazis and YouTube fascists tuned into this dreamwork and turned the volume up by several orders of magnitude. Tellingly, on the day after the murder outside Finsbury Park Mosque, Tommy Robinson took advantage of an ITV platform to say that the Quran was an incitement to violence. For Robinson, who had never demonstrated any expertise on the Quran, this too was dreamwork.
It is not difficult to imagine the compensatory, antidepressant effects of consuming such racist propaganda. It puts a name to an otherwise nameless misery and rage. It identifies a specific evil, points to a remedy and a community to which one might belong. It tells its audience – often white men younger than Osborne – that their seething sense of resentment is rational and justified. And it is exciting, and briefly empowering. The eagerness with which the ‘red pill’ is swallowed, and cult figures made of manipulative fascists like Tommy Robinson, is not in that sense hard to explain. Redpilling is, for many of its users, potent self-medication, better than any combination of cognitive behavioural therapy and prescription drugs.
To that extent, fascist propaganda works well on the Twittering Machine, which, among the many things it is, is a pharmacological device. Its economic model presupposes a surplus of misery which, Rumpelstiltskin-like, it spins into gold. As endless correct but point-missing liberal critiques maintain, the social industry does not deal in truth. Of course it doesn’t; it deals in addictive substances, which it administers to the melancholic.
II.
What are the politics of the simulacrum? In cyberspace, the great ‘consensual hallucination’ as William Gibson called it, what we experience as social and political reality is increasingly a graphic representation of digital writing.4 Whoever masters the rapidly evolving idioms of this system of writing has a share in the production of virtual reality.
Fascists have proven to be avid early adopters of new technologies. They were among the first to use email in order to organize without being disrupted by the authorities. A 1993 march by German neo-Nazis in commemoration of Rudolf Hess eluded an official ban by using email communication. Throughout the early 1990s, far-right, Holocaust-denying groups used bulletin board systems and, later, the emerging ecology of ‘alt’ areas within Usenet.5
This colonization of new technologies was not just an imperative for such groups, weakly rooted, their supporters scattered, unlikely to gain sympathetic coverage without subterfuge as to their politics. It was a far-sighted attempt to build a space for white-supremacist and Nazi ideologies within the new mediascape almost before anyone noticed. For example, Stormfront, a hub of far-right activity, was launched by neo-Nazi and former grand wizard of the Alabama Ku Klux Klan, Don Black. He learned his computing skills while in prison for attempting a coup in the Caribbean island nation of Dominica. What began in the early 1990s as a small bulletin board was relaunched as a website in 1996, then evolved into a web forum with roughly 300,000 users. Its ratings on Alexa, the website ranking service, were comparable to those of commercial media outlets. This is despite the fact that the forum is outmoded in its features and appearance, having changed little since 2001.6
In the subsequent settling of the platforms, the far right has arguably been most successful with YouTube. Alt-right broadcasters have been ‘monetizing’ like microcelebrities, and ‘influencing’ like beauty bloggers. Fascists like Richard B. Spencer, Stefan Molyneux and Tommy Robinson are celebrities of the ‘intellectual dark web’, helped along by outriders such as the libertarian Joe Rogan and conservative Dave Rubin – and, often enough, broadcast media.
And they don’t merely leverage the techniques of microcelebrity and ‘influencing’. They benefit from specific features of the YouTube business model. Journalist Paul Lewis and academic Zeynep Tufekci have each gone down the rabbit hole of YouTube’s ‘up next’ recommendations algorithm.7 The algorithm is there to keep users glued to the screen with content likely to be addictive. As with the other social industry platforms, the priority is time on device or, in the case of YouTube, ‘watch time’. Each found that no matter the viewing history of the dummy accounts they used, the algorithms kept pointing them progressively towards more ‘extreme’ content: from Trump to neo-Nazis, from Hillary Clinton to 9/11 Truth.
But what is so addictive about ‘extreme’ content? Part of the answer is that much of what is characterized as extreme in this context is conspiracy infotainment: for example, in the run-up to the 2016 presidential election, the algorithms were promoting anti-Clinton conspiracy stories.8 When so many distrust the news, and find it frustrating and confusing, infotainment seems to be less ‘hard work’. It offers what can feel like critical thinking in a recognizably digestible and pleasurable way. In the face of official agnotology – the practice of deliberately producing mass ignorance on major issues – it can feel empowering. But it may also be that the algorithms pick up on dark yearnings simmering below the supposedly consensual surface of politics.
So not only do far-right YouTubers network, collaborate and signal-boost one another’s brands, driving their collective content up the viewing charts. Not only are they careful enough to avoid trigger words likely to be caught by an anti-hate speech algorithm. They can expect the platform to promote them precisely because of how riveting their content is supposed, by the algorithms, to be. Zeynep Tufekci argues that ‘YouTube may be one of the most powerful radicalizing instruments of the 21st century’.9
‘In the old days’, wrote Irish academic John Naughton, ‘if you wanted to stage a coup, the first thing to do was to capture the TV station. Nowadays all you have to do is to “weaponize” YouTube.’10 A coup, of course, is a very twentieth-century technology. And one which, as yet, would be beyond the wherewithal of the networked far right. Nonetheless, it would be foolish to discount the aggregate impact of propaganda. Like advertising, it has to work on someone, otherwise the industry would die. YouTube’s liberal critics have a point when they underline the reality-bending effects of this kind of simulacrum. As former Google engineer Guillaume Chaslot put it, YouTube ‘looks like reality, but it is distorted to make you spend more time online’.11
From Twitter revolutions to YouTube coups, technological determinism is attractive because of the way it simplifies problems. But if we succumb to the lure, we miss the real story. The obvious question is, why should neo-Nazi material, or conspiracist infotainment, be so riveting? When pundits complain of ‘radicalization’, the assumption appears to be that it is sufficient to be exposed to far-right material to be conveyed along an escalator towards ‘extremism’. Yet, of course, most of the pundits who have viewed this material don’t claim to have been ‘radicalized’ by it. And YouTube isn’t deliberately promoting an agenda. Rather, the platforms, by their nature, are magnetically drawn to drama, whether political or personal. The user becomes, in China Miéville’s term, a ‘dramaphage’.
The content agnosticism of computational capitalism has political valences, but the algorithm’s effects go well beyond political content. The artist James Bridle has written of the surprisingly outré and noir YouTube content for kids, involving erotic or violent material: Peppa Pig eating her daddy or drinking bleach, for example.12 This material was created to meet a demand identified by the algorithms – in other words, it reflected data coming from users: searches, likes, clicks and watch time.13 In this respect, it was not unlike the algorithm-driven merchandise of previous years: t-shirts with such slogans as ‘Keep Calm and Rape a Lot’, ‘Kiss Me I’m Abusive’ and ‘I Heart Boiling Girls’. And platform behaviour obeys what the ethnographer Jeffrey Juris calls a ‘logic of aggregation’.14 It herds users together in temporary groupings based on this data. It establishes correlations over whole data populations between certain types of content and certain behaviours: stimulus and response. It works only because of the response. There has to be something in some viewers waiting to be switched on. The algorithms, by responding to actual behaviour, are picking up on user desires, which may not even be known to the user. They are digitalizing the unconscious.
The platforms thus listen intently to our desires, as we confess them, and give them a numerical value. In the mathematical language of informatics, collective wants can be manipulated, engineered and connected to a solution. And new technologies have succeeded as they have only by positioning themselves as magical solutions. Not just to individual dilemmas, but to the bigger crises and dysfunctions of late capitalism. If mass media is a one-way information monopoly, turn to the feed, the blog, the podcast. If the news fails, turn to citizen journalism for ‘unfiltered’ news. If you’re underemployed, bid for jobs on TaskRabbit. If you’ve got little money but own a car, use it to make some spare cash on the side. If you’re undervalued in life, bid for a share in microcelebrity. If politicians let you down, hold them to account on Twitter. If you suffer from a nameless hunger, keep scrolling. The business model of the platforms presupposes not just the average share of individual misery but a society reliably in crisis.
III.
Why does the far right thrive on YouTube? Why, by the same token, does Donald Trump win Twitter? Why is it that none of his clever online interlocutors, though often more knowledgeable than he is, ever lay a glove on him? What about the Twittering Machine is so congenial to Trump’s performance art? Isn’t trumping the enemy half the pleasure of being on the machine? We should begin to take seriously the possibility that something about the social industry is either incipiently fascistic, or particularly conducive to incipient fascism.