The Blind Giant: Being Human in a Digital World
The Internet is not a broadcast medium.
I never get tired of saying that. I keep having to say it because people of my own age and older grew up on the assumptions of a world of television, newspapers, film and so on which was essentially a one-way flow of information and ideas. The paradigm for media was a poster at a bus stop. You told the public what you were doing and then they knew about it. That was it. They could write to you, of course, but it was time-consuming and you weren’t really expected to respond. More, the speed of events was perceived as being slower. Before email and fax, an urgent query would take a day to arrive (providing it was posted before the last collection) and the reply could take no less time to return. In other words, the timescale for all but the most urgent correspondence was three days, and more likely a week. News, business and government moved – to outward appearances at least – more slowly. Data were harder to gather, and the effects of policy decisions could only really be estimated over a term of years. The public was not generally privy to government statistics in any case, and the UK’s institutional culture regarded anything not specifically public as confidential. Information did not flow.
The Internet, however, is a mass communications platform: it allows the flow of information in all directions. And out of that quality have emerged the social media, which are also not for the most part hierarchical or top-down. Everyone can communicate with everyone. That’s how they work and what they are. Participating in the social media is a very different activity from merely accessing websites or playing non-social online games. It involves interaction with other people, and they are a discriminating bunch. If you give nothing, you get nothing. If you engage at a low level, what comes back to you is by and large pretty unexciting. On the other hand, if you put some effort into social media, people respond rapidly with perceptions and favourite things of their own. Social media are reciprocal, and you can tell how you’re doing because it will be evident in people’s reactions to you. Social media are about connection rather than isolation: Twitter, Facebook and the rest are each in their own ways feedback structures.
Feedback, if you aren’t quite sure, is a simple notion for something that can become fiendishly complex. Most people are aware of it now as a public relations term: companies and councils are forever seeking our ‘feedback’ on customer satisfaction forms. They then take our opinions of their work and (notionally) use those opinions to improve their service. Everyone benefits. Except, of course, that in many cases it feels to us on the outside that the feedback is simply ignored and the point of the exercise was not improvement but pacification.
Real feedback is the flow of information from the output of a system back into its earlier stages; when a microphone gets too close to a speaker, the output of the speaker is fed back into the amplifier through the mic. The noise gets louder and louder until the mic is moved, the speaker or amplifier is switched off, or something explodes. In a more constructive setting, though, feedback can be a powerful force for positive change. The best example – as Decision Tree author Thomas Goetz observed in Wired recently – is probably those interactive road signs that tell you how fast you’re going as you approach a pedestrian crossing or a school; you get information about your speed (which you already have, but the sign is external to your car dashboard and hence isn’t part of the regular noise of driving, so you take notice of it) and you compare it to the limit. The result, across the board, is a 10 per cent reduction in speed which persists beyond the immediate vicinity of the sign – generally several miles beyond. There’s no threat, no penalty, no physical restraint. The feedback itself, coupled with a low-level desire not to be a menace to kindergarteners, is enough.
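For readers who like to see the machinery, the two kinds of loop can be sketched in a few lines of Python. The sketch below is purely my own illustration – the gains, limits and step counts are invented numbers, not drawn from Goetz or from any traffic study – but it shows why one loop runs away while the other settles down:

```python
# A toy model of the two feedback loops described above. All numbers are
# hypothetical, chosen to show the shape of the behaviour rather than to
# model real acoustics or real traffic.

def runaway_loop(signal, gain=1.5, steps=10):
    """Microphone-and-speaker feedback: the output goes straight back into
    the input, so any gain above 1 makes the signal grow without limit."""
    history = [signal]
    for _ in range(steps):
        signal *= gain          # the amplifier boosts the signal...
        history.append(signal)  # ...and the mic feeds it back in again
    return history

def corrective_loop(speed, limit=30.0, responsiveness=0.3, steps=10):
    """Road-sign feedback: the driver sees the gap between actual speed
    and the limit and closes part of that gap at each step."""
    history = [speed]
    for _ in range(steps):
        speed -= responsiveness * (speed - limit)  # nudge toward the limit
        history.append(speed)
    return history

print(runaway_loop(1.0))     # 1.0, 1.5, 2.25, 3.375, ... louder and louder
print(corrective_loop(40.0)) # 40.0, 37.0, 34.9, ... settling towards 30
```

Run it and the first list grows without bound while the second converges on the limit; the whole difference is whether the fed-back signal amplifies the output or shrinks the gap between output and target.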
Social media in particular, and digital technology more generally, are capable of doing exactly the same thing – providing relevant information in real time – in more diffuse human situations. That sounds like a small thing, but the effects are potentially huge, especially when that information is combined with a suggested action in response. In the earlier example of the road signs, the proposed action is obviously to lower your speed – but it can be something more sophisticated: an action that is itself a form of feedback into someone else’s loop. In the case of the Middle East revolutions, users monitoring themselves and their fellows realized that the moment had arrived, that this time something really was going to happen, and then uncovered an array of possible actions in support. Those actions were themselves feedback to the regimes they focused on, urging a modified behaviour: compromise, resign or flee.
The most effective feedback systems, according to Goetz, are those that influence us subtly. The sweet spot is a fuzzy area between obnoxious and intrusive on the one hand and inconsequential on the other. Information supplied in this cosy band is the most likely to have the desired effect. In the context of social media, this is the more likely to occur because the feedback is actively solicited by the user. It’s not an unwelcome irritant from a nagging external source, like the road signs – which I always find a bit finger-waggy – but simply a part of a pre-existing and continuing personal interaction. Lodged in the social network in this way, users are connected to group, place and person. That doesn’t mean, as we know from the Arab Spring, that they retreat to a previous position, but rather that a new set of perceptions of reality is instated as the norm. Under some circumstances, this will be a kind of reindividuation, a calming. In others, the collective mood will be of anger and discontent, and that will become a part of the individual until the perceived issues are resolved.
The social media site and the group of people associated with it become, in other words, the repository of a counter-culture – but do not create or define it. That is still done by people. The Riot Wombles weren’t created by social media either; rather, the communications network allowed a local whimsy to reach out across the country and, using a childhood image that is widely known among people in the UK, take root in a variety of locations. Knowing that others were doing the same, and that the media were picking up on the story, and therefore that more people were coming to help, was a virtuous feedback loop.
By putting people in touch with others who feel the same way, digital communications technology compounds perceptions, facilitates the generation of movements, gives reinforcement to those who otherwise might feel alone. Above all, though, it allows us to understand in real time things that historically have taken place on slower scales, over months, years or decades. Websites such as TheyWorkForYou allow constituents to monitor the voting records of members of parliament day to day, and constituents unhappy with their representative’s performance can say so immediately. Almost anything can now be observed as it takes place, rather than after the fact, and the heartbeat of nations, rather than playing out across timespans of generations, can be heard every morning and afternoon in the financial figures and the political reports. We are no longer disconnected from what’s happening around us. We can see not only ourselves – courtesy of those road signs, and the systems that allow us to check our electricity consumption, our calorie intake, our use of the working day – but our nations.
Digital technology can also make us conscious of ourselves as parts of the systems that make up our society. This is not to say that we’re all cogs in the machine. We are individuals, each of us interesting and special in ourselves, but we are also, consciously or not, parts in any number of systems – as with the shuttering of the News of the World. The paper’s demise was partly triggered by communications from members of the public to advertisers: ‘don’t associate yourself with this; we are angry.’ The message was heard loud and clear, and advertisers withdrew (demonstrating among other things that commodification does not render you powerless, though it may change the nature of your power). The interesting point is that there was at least a moderate consciousness in the public debate that targeting advertisers was a way to send a message to the paper. It was not simply a question of people disapproving of the brands’ involvement with the News of the World. It was that in telling the brands of their disapproval, they could create a desired reaction: a conscious use of feedback. Being aware of our status gives us a degree of control over our environment.
In the social sciences there is a somewhat circular debate about agency, or, more plainly, how things happen in the human world. Do individuals have the power to change things? Or are we simply at the mercy of forces in the economy and in demographics that are so vast as to be imponderable? Are we capable of changing the course of events, or do events spring from interactions so complex and weighty that no one could hope to understand, let alone alter, the flow? Perhaps one of the most obvious examples is the question of whether revolutions are the product of heroic individuals working to undermine the established order, or whether they come as a consequence of giant structural forces that cannot be provoked, speeded or slowed. Vladimir Ilyich Lenin wrote that ‘revolutions are not made, they come’, but one might argue that he successfully initiated, transformed and perhaps ultimately betrayed one of the most extraordinary uprisings of the twentieth century.
In ordinary life, we tend at the moment to accept that there are structural forces that act upon us and which we cannot influence. We have been told repeatedly that the banking crisis, for example, was a structural problem, a great institutional madness in which the poor decisions of a few were somehow magnified to create a seismic collapse. But it’s also true – and increasingly obvious – that we are part of these structural forces ourselves. We are bits of the group, and the changes in the group’s structure are those forces we hear so much about.
The idea of the human being as part of a structure makes people profoundly uncomfortable. It plays to images of ant colonies and slavery, notions of the loss of self. That’s cultural, though, and relatively recent. The industrial world’s sense of what we are as humans has moved further and further over the last decades towards the idea of a single person as being complete. It’s a posture that defines our politics – in the form of our freedoms – and our morality. Where previous generations might have responded, in line with the various touchstones by which they identified themselves and located themselves in the matrix of social and cosmological truths as they understood it, that the most significant unit was the family (Margaret Thatcher’s infamous statement on the subject of social organization, which I mentioned earlier, was more properly: ‘There is no such thing as society. There are individual men and women, and there are families’), the state, or the Church, we assume it is the singular human. Some cultures, including subcultures in the industrialized north-west, still feel more collective than not, but in the UK and US as well as elsewhere we generally make our rules and our decisions on the basis of individualism. It’s an ethos that meshes well with the particularly brassy form of free market capitalism presently fashionable, which exalts the risk-taker, the money-maker and the creator of personal wealth over the steward, the good citizen and the bringer of wider prosperity.
The lineage of this combination goes back from the present day by way of Gordon Gekko, the fictional 1980s financial mogul portrayed by Michael Douglas in Wall Street – or through the real-life equivalents of Gekko – to the controversial Russian-American writer Ayn Rand and her disciples in American public life (notably including Alan Greenspan, chairman of the US Federal Reserve from 1987 to 2006), to Ralph Waldo Emerson (the philosopher of self-reliance who grudged ‘the dollar, the dime, the cent, I give to such men as do not belong to me and to whom I do not belong’) and on into the diffuse origins of our capitalist world and the Protestant work ethic of which it has subsequently been stripped.
It is, however, not the only way of seeing things, or even necessarily the most persuasive. A single human being, after all, cannot reproduce; in fact, it’s hard to determine a minimum viable population for human beings. Estimates range from the fifteen who resettled the island of Tristan da Cunha some time in the 1800s (the population today is somewhat below 300, with a high incidence of asthma derived most probably from three of the original colonists who had the condition) to a more robust 3,000, as proposed in a 2007 article in the journal Biological Conservation. Somewhere in that range, presumably, is a number that represents the smallest number of humans necessary to sustain the species and, in a way, therefore, the minimum human unit. You could also look at the number of plants required to sustain breathable air for a single person, the animal and vegetable ecosystem necessary to provide food, and the minimum amount of water and the means to recycle it. A human taken out of context is essentially a corpse.
While that discussion is interesting, and works to prise us away from the knee-jerk response, it doesn’t really answer the question of what the basic building block of human society is. It’s obvious we don’t think of human life as being purely a question of genetic self-propagation. While we hold children in high regard, we don’t generally feel that an individual, having reproduced, no longer has any point to their existence. Similarly, we would not acknowledge the identical twin (or for that matter the clone) of a given person as the actual individual. We would say that they were genetically identical, but still distinct. We would point to the minds and the experiences of two separate people. At some point in the development of the species, we became in effect two things at once: a physical self, which is replicated by sexual reproduction, and a mental self, an identity of ideas, which cannot directly reproduce in the sense that consciousness cannot be split and recombined, but which is composed of concepts that can absolutely spread by discussion, narrative and sharing. This mental self – however much it is bound to the physical one and emerges from it – is the one that we supplement with our digital devices and which we have extended beyond the body into journals, books, artworks and now digital technologies.
I don’t wish to imply a literal dualism here. Absent some startling scientific evidence to the contrary, my assumption is that the mind is an artefact of the brain, a fizzing system of conscious cognition, unconscious drives and biological imperatives, all overlapping and intermingled to produce us. (I also don’t mean to rule out the terrifying, splendid possibilities of advanced organ cloning and high technology to replace broken parts of a given brain. It’s not that a mind is anchored irrevocably to a particular collection of cells, or that someone with a chip replacing an aspect of the brain would suddenly be non-human; rather, the mind emerges from the brain’s encounter with the world. What happens thereafter is the adventure.)
That said, many scholars trace the development of the modern individual – and to a certain extent also the modern brain – from the arrival of the phonetic alphabet. According to Derrick de Kerckhove in The Augmented Mind, the adoption of silent reading, the final stage of the arrival of text, ‘helped to turn speakers into thinkers and critics’. The word was fixed, and could be examined; and along with it, everything else, as well. Maryanne Wolf writes that ‘The implications of cognitive automaticity for human intellectual development are potentially staggering. If we can recognise symbols at almost automatic speeds, we can allocate more time to mental processes that are continuously expanding when we read and write. The efficient reading brain,’ Wolf explains, ‘quite literally has more time to think.’
It may also be that the brain has trouble working on concepts for which it has no linguistic template. A 2004 study conducted by Peter Gordon of Columbia University showed that ‘hunter-gatherers from the Pirahã tribe, whose language only contains words for the numbers one and two, were unable to reliably tell the difference between four objects placed in a row and five in the same configuration,’ suggesting that someone without language to describe a given concept may not be able to understand or learn that concept. Attempts to teach the Pirahã to count in Portuguese (they live in territory claimed by Brazil) were unsuccessful.
So does that mean there are two aspects of human life, one biological, which requires a large-ish pool to sustain itself, and one mental, which is the product of the brain’s encounter with the world? It’s not even clear that the modern mind can exist alone. The development of language requires a partner to communicate with and to act as a check; without someone to talk to, our grip on the meaning of words shifts with surprising rapidity. More, the individual is from birth engaged in dialogue, in an exchange of gesture and affection with parent or carer that is so much a thing of interplay and rhythm that some researchers have characterized it as ‘communicative musicality’. This protoconversation, preceding the development of language, is our first experience of life, and we live it as part of a small community rather than as a lone individual.
Without language, in turn, some forms of abstract thought are difficult or impossible. ‘Language’ in this context need not be spoken; it can be a language of signs or text. A study of a deaf community in Nicaragua compared two generations and found that members of the newer generation, whose system of sign language was more complex and expressive, were better able to pass what psychologists call a ‘false belief’ test. (In the test, subjects are shown a sequence of events in which two children are playing. One child puts a toy in a particular place and leaves the room. The other then moves the toy to a new location. The test question is: where will the first child look for the toy upon returning to the room? Children under four will answer with the location the toy is in now; older kids realize that the returning child can have no knowledge of the other’s prank, and will look where the toy was when they left.) The implication of the Nicaragua study is that it is harder to develop a full sense of the existence of other, independent minds without a robust and complex system of language.
The modern thinking self, which understands itself to be separate from others and knows that their perspective differs from its own, is attained to a great degree through language, and therefore to a great extent through discourse, meaning once more that to be a complete, rational human being in the sense in which we usually understand those words, you need people around you to interact with. Which means, in turn, that while we experience the world as individuals, we come to it as individuals who are part of a group. And increasingly, we can use our digital technologies to monitor that group and assess our position in it and relative to it in real time, taking decisions not based on what has already happened and cannot be undone, but on what is happening and what we actually want.