How Language Began


by Daniel L. Everett


  Since his was a pre-video-camera era, Efron contracted an artist, Stuyvesant Van Veen, to help him. Efron was the first to come up with an effective methodology for studying and recording gestures, as well as a language for describing them. Although later parts of the book tangentially attacked Nazi science, the book was a breakthrough. Efron’s work, though pioneering, emerged from a long tradition.

  Aristotle discouraged the overuse of gestures in speech as manipulative and unbecoming, while Cicero argued that gestures were important in oratory and encouraged training in their use. In the first century AD, Marcus Fabius Quintilianus actually received a government grant for a book-length study of gesture. For Quintilian and most of the other classical writers, however, gesture was not limited to the hands but included the general orientation of the body and facial expressions, so-called ‘body language’. In this they were correct. These early explorers of gesture in human languages discovered that communication is holistic and multimodal.

  The Renaissance rediscovered the work of Cicero and other classical scholars, sparking European interest in the relationship between gesture and rhetoric. The first book in English on gesture was John Bulwer’s Chirologia: or the Naturall Language of the Hand in 1644.

  By the eighteenth century, researchers on gesture began to wonder whether gestures might have been the original source of language. This idea is echoed by several modern researchers, but it is one that should be rejected. Gestures that can serve in place of speech, such as sign languages or mimes, in fact repel speech, as University of Chicago psychologist David McNeill has shown. They replace speech; they are not replaced by it. Yet the latter is the evolutionary progression that would be required if gestures came first.

  Interest in understanding the significance and role of gestures in human language and psychology diminished tremendously, however, in the late nineteenth and early to mid-twentieth centuries. There were several reasons for this decline. First, psychology during this period was more interested in the unconscious than in conscious thinking, and it was thought, erroneously, that gestures were completely under conscious control. Gesture studies also dwindled because linguists became more interested in grammar, narrowly defined by some so as to exclude gestures. The interest in the messy multimodality of language had waned. Another factor was that the linguistic methods of the day were still not up to the task of studying gestures scientifically. Efron’s work was extremely hard to do and wasn’t amenable to widespread replication, at least as many perceived it at the time. Not everyone can afford an artist.

  Linguist Edward Sapir was different. He saw language and culture as two sides of the same coin, and so his view of gestures was similar to that of current researchers. As Sapir said, ‘the unwritten code of gestured messages and responses is the anonymous work of an elaborate social structure’. By ‘anonymous’ Sapir meant tacit knowledge, or dark matter.

  This raises the fundamental and obvious question: what are gestures? Are sign languages gestures? Is a mime gesture? Are signals such as the ‘OK’ sign with the thumb and forefinger or ‘the bird’ with an upraised middle finger gestures? Yes to all the above. Some researchers, such as David McNeill and Adam Kendon, classify all these different forms along a ‘gesture continuum’ that looks at gestures in terms of their dimensions and their relationship to grammar and language (Figure 32).

  Gesticulation, the most basic element of the continuum, is the core of the theory of gestures. It involves gestures that intersect grammatical structures at the point where gesture, pitch and speech coincide. Gesticulation is, in fact, what most theories of gesture are about. Some gestures are not conventional – they may vary widely and have no societally fixed form (though they are culturally influenced). Gestures that can replace a word, as in Pike’s ‘chestnut tree’ language game, are ‘language-slotted’. Such gestures can be seen if you tell someone, ‘He (foot moving in a kicking motion) the ball,’ where the gesture replaces the verb ‘kicked’, or, ‘She (open hand sweeping across your face) me,’ for ‘She slapped me’. These gestures occupy the positions in sentences usually taken by words. They are special gestures, improvised and used to produce particular effects according to the type of story being told. Fascinatingly, language-slotted gestures are a window into speakers’ knowledge of their grammars. One cannot use them unless one knows how words, grammar, pitch and the rest fit together.

  Figure 32: The gesture continuum

  Our hand movements can also simulate an object or an action without speech. When they do this we are using mime, which follows only limited social conventions. Such forms vary widely. Just play a game of charades with a group of friends to see that. Conventionalised gestures can also function as ‘signs’ in their own right. As I mentioned, two common emblems in American culture are the forefinger and the thumb rounding and touching at their tips to form the ‘OK’ sign and ‘the bird’, the upraised, solitary middle finger.

  These are all distinct from sign languages, which are full-blown languages. Sign languages have all the features of a spoken language, such as words, sentences, stories and even their own gestural and intonational highlighters, expressed by different kinds of body and hand movements and facial expressions. In our discussion of language evolution, it is very important to keep in mind the most salient feature of gesture-based languages: sign languages neither enhance nor interact with spoken language. In fact, sign languages repel speech, to use one of McNeill’s phrases. This is why many researchers believe that spoken languages did not and could not have begun as sign languages.

  Now let’s move to the crux of gesture’s relevance for language evolution. The bedrock concept here, developed in McNeill’s research, is called the ‘growth point’. The growth point is the moment in an utterance where gesture and speech coincide. It is where four things happen. First, speech and gesture synchronise, each communicating different yet related information simultaneously.

  Second, the growth point is the point where gesture and speech are redundant, each saying something similar in different ways, as in Figure 32. The gesture highlights a newsworthy item against the background of the rest of the conversation, again as in Figure 32. Intonation, it should be mentioned, is also active at the growth point and elsewhere in what is being said. Third, at the growth point gesture and speech communicate a psychologically unified idea. In Figure 33, the gesture for ‘up’ occurs simultaneously with the word for ‘up’.

  In short, gesture studies leave us with no alternative but to see language not as a memorised set of grammar rules but as a process of communication. Language is not static, only following rigid grammatical specifications of form and meaning, but it is dynamic, bringing together pitch, gestures, speech and grammar on the fly for effective communication. Language is manufactured by speakers in real time, following their tacit knowledge of themselves and their culture. Gestures are actions and processes par excellence. The boundaries between gestures are clear, being the intervals between successive movements of the limbs, according to McNeill. Like all symbols, gestures too can be decomposed into parts. I won’t go into these here except to say that this all means that gestures, intonation and speech become a multimodal, holistic system, requiring a Homo brain to orchestrate their cooperative action.

  Another crucial component of the dynamic theory of language and gestures that McNeill develops is the catchment. This is a bit technical, but it is essential to understanding how gesture facilitates communication and thus the potential role of gesture at the beginning of language. A catchment indicates that two temporally discontinuous portions of a discourse go together – repeating the same gesture indicates that the points with such gestures form a unit. In essence a catchment is a way to mark continuity in the discourse through gestures. McNeill says:

  Figure 33: The growth point

  [A] catchment is recognized when one or more gesture features occur in at least two (not necessarily consecutive) gestures. The logic is that recurrent images suggest a common discourse theme and a discourse theme will produce gestures with recurring features … A catchment is a kind of thread of visuospatial imagery that runs through a discourse to reveal the larger discourse units that encompass the otherwise separate parts.4

  Assume that while speaking you use an open hand, turned upward with the fingers also pointed upward, whenever you return to the theme of a friend wanting something from you. The gesture then becomes associated with that theme, thereby highlighting it and helping your hearer follow the organisation of your remarks more easily.

  In other words, through the catchment, gestures enable speakers to arrange sentences and their parts for use in stories and conversations. Without gestures there could be no language.

  Various experiments have been developed that illustrate an ‘unbreakable bond’ between speech and gestures. One of the more famous experiments is called delayed auditory feedback. For this test the subject wears headphones and hears parts of their speech on roughly a 0.2 second delay, close to the length of a standard syllable in English. This produces an auditory stuttering effect. The speaker tries to adjust by slowing down. The reduced rate of speech offers no help, however, because the feedback is also slowed down. The speaker then simplifies their grammar. On top of this, the gestures produced by the speaker become more robust, more frequent, in effect trying to take more of the communication task upon themselves. But what is truly remarkable is that the gestures stay synchronised with the speech no matter what. Or, as McNeill puts it, the gestures ‘do not lose synchrony with speech’. This means that gestures are tied to speech not by some internal counting process, but by the intent and meaning of the speaker. The speaker adjusts the gestures and speech harmoniously in order to highlight the content being expressed.

  Other experiments also illustrate clearly the tight connection between speech and gestures in normal talk. One experiment involves a subject referred to as ‘IW’. At age nineteen, IW suddenly lost all sense of touch and proprioception below the neck due to an infection. Experiments show that IW is unable to control his hand movements unless he can see his hands (if he cannot see them, such as when they are below the table he is seated at, then he cannot control them). What is fascinating is that when speaking IW uses gestures that are well coordinated, unplanned and closely connected to his speech, as though he had no disability at all. The case of IW provides evidence that speech gestures are different from other uses of the hands, even other gesturing uses of the hands. Some suggest that this connection is innate. But we know too little about the connection of gestures and speech in the brain, or about the physiological history of IW, to conclude this. In any case, however this coordination comes about, gestures in speech are very unlike the use of our hands in any other task.

  One final observation to underscore the special relationship between gestures and speech: even the blind use gestures.* This shows that gestures are a vital constituent of normal speech. The blind’s use of gestures has yet another lesson for us. Since the blind cannot have observed gestures in their speech community, their gestures will not match up exactly to those of the local sighted culture. And yet this very fact shows that gestures are part of communication and that language is holistic. We use as much of our bodies as we are able when we are communicatively engaged. We ‘feel’ what we are saying in our limbs and faces and so on.

  The connection between gestures and speech is also culturally malleable. Field researchers have demonstrated that the Arrernte people of Australia regularly perform gestures after the accompanying speech. I believe that the reason for this is simple: the Arrernte simply prefer gestures to follow speech. The lack of synchrony between gestures and speech is a cultural choice, a cultural value. Gestures for the Arrernte could then be interpreted like those of the Turkana people of Kenya, among whom gestures function to echo and reinforce speech.

  Were gestures also important for Homo erectus? I believe so, based, once again, on the work of David McNeill. He introduces the term ‘equiprimordiality’, by which he means that gestures and speech were equally and simultaneously present in the evolution of language. There never would have been nor could have been language without gestures.

  If this is correct, claims McNeill, then ‘speech and gesture had to evolve together’. ‘There could not have been gesture-first or speech-first.’ This follows because of my concept of triality of patterning. You cannot have language without grammar, meaning and highlighters. By the same token, there could never have been intonation without language or language without intonation.

  Once this initial hurdle of how gestures become meaningful for humans is overcome, the evolutionary story of the connection between gesture and speech may be addressed. McNeill’s theory hypothesises that early speech by the first speakers and human infants was ‘holophrastic’. That is, in these early utterances there are no ‘parts’, only a whole. To return to an earlier example, say that the first utterance by an erectus was ‘Shamalamadingdong!’ as he saw a sabre-toothed cat run by him only a hundred yards away. He was in all likelihood gesticulating, screaming and engaging his entire body to communicate what he had seen, unless he was frozen with fear. His body and head would have been directed towards the cat. Later, perhaps he recreated this scene, using slightly different gestures and intonation (he is calm now). The first time perhaps he uttered SHAMALAmadingDONG, with hand movements on shama and dong. The next time perhaps his intonation fell on shamalamaDINGdong. Perhaps his gestures remained over ‘shama’ and ‘dong’ or, more likely, they were more closely linked to any change in his intonation. It is now possible that erectus has inadvertently taken a holophrastic – single unit – utterance and transformed it into a construction with individual parts. And this is how McNeill proposes that grammar began to emerge.

  As gestures and speech become synchronised, gestures can then show one of two characteristics. They either represent the viewpoint of the observer – the viewpoint of the speaker – or they represent the viewpoint of the person being talked about. And with these different viewpoints, different ways of highlighting content and attributing ownership of content, we lay the groundwork for distinctions among utterances such as questions, statements, quotes and other kinds of speech acts.

  McNeill gives an example of one person retelling what they saw in a cartoon of Sylvester the cat and Tweety Bird. When their hand movements duplicate or stand for Sylvester’s movements, the perspective taken is Sylvester’s. But when their hand movements trace the scene as they themselves saw it, the perspective is their own.†

  Intentionality – being directed at something – is also a prerequisite to having a language. And intentionality is shown not only in speech but also in gestures and other actions. We see it in anxiety, in tail-pointing in canines and in focused attention across all species. One reason that gestures are used is that intentional actions engage the entire body. The orientation of our eyes, body, hands and so on varies according to where we are focusing our attention. This holistic nature of expressing intentions seems to be a very low-level biological fact that is exploited by communication. The fact is, ‘animals use as much of their bodily resources as necessary to get their message across’. If we are on the right track, though, gestures could not have been the initial form of language. They would have occurred simultaneously with intonation and vocalisation. This is not to say that prelinguistic creatures cannot express intentionality by pointing or gesturing in some way. It does mean that real linguistic communication must always have included both gestures and speech. There are a few additional reasons for this judgement.

  First, speech did not replace gesture. Gestures and speech form an integrated system. The gesture-first origin of language predicts a mismatch between gesture and speech, since they would be separate systems. But in reality they are synchronous (they match in time) and parts of a single whole (a gesture plus intonation plus speech coordinated in a single utterance). Further, people regularly switch between gestures and speech. Why, if speech evolved from gestures, would the two still have this give-and-take relationship? Finally, if the gesture-first hypothesis is correct, then why, aside from languages of the deaf, is gesture never the primary ‘channel’ or mode of communication for any language in the world?

  Intonation was alluded to earlier when discussing ‘Yesterday, what did John give to Mary in the library?’ Whenever we speak we also produce a ‘melody’ over our words. If an example of the importance of intonation is desired, one need only think about how artificial a car’s GPS sounds when it is giving directions. Although computer scientists long ago learned that speech requires intonation, they still have not produced a computer that can use or interpret intonation well. Intonation, gestures and speech are built upon a stable grammar. The only gestures that provide stability are the conventionalised and grammaticised gestures of sign languages. In this case again, however, gestures are used instead of speech, supplanting it rather than accompanying it.

  What is crucial is that gestures co-evolved with speech. If sign language, language-slotted gestures or mime had preceded speech, then there would have been no functional need for speech to develop. The gesture-first idea stakes out an untenable position: that we had a well-functioning gestural communication system yet replaced it wholesale with speech. And some gestures, such as mimes, are actually incompatible with speech.

  This might seem to be contradicted by the earlier example from Kenneth Pike, which apparently shows that gestures can substitute for speech. But the gestures Pike discusses are language-slotted gestures, a distinct kind of gesture parasitic on speech, not a type of gesture that functions in place of speech. On the other hand, Pike’s example suggests another question, namely whether there could be ‘gesture-slotted speech’ corresponding to language-slotted gestures. This would be a case in which speech substitutes for what would usually be expressed by gestures. If speech evolved from gestures, after all, this is how it would have come about. And gesture-slotted speech is not hard to imagine. Consider, for example, someone bilingual in American Sign Language and English substituting a spoken word for each sign, one by one, in front of an audience. Yet such an event would not really exemplify gesture-slotted language, since it would be a translation between two independent languages, not speech replacing gestures within a single language.

  This is important for our point for a couple of reasons. The obviously utilitarian nature of hand signs offers us a clear route to understanding their origin and spread. And the fact that everyone seems to use gestures in all languages and cultures of the world supports the Aristotelian view of knowledge as learned over the Platonic conception of knowledge as always present. It shows that the usefulness of gestures is the key to their universality. When a behaviour is an obvious solution to a problem, there is no need to assume that it is innate. The problem alone guarantees that the behaviour will arise if the mind is intelligent enough. This principle of usefulness explains most supposedly universal characteristics of language that are often proposed to be innate. In other words, their utility explains their ubiquity.
