How Language Began
This does raise an interesting question about the relationship between the brain and language, though. If language is indeed as old as it seems to be, then did Homo erectus or Homo neanderthalensis also have BA 44? That is, can language exist without the modern brain area known as BA 44? No one knows. And if no one knows whether this area was found in other species of Homo, no one knows which parts of the brain supported syntax in erectus or neanderthalensis. This means that it is not known whether BA 44 is necessary for syntax. One suspects that, because the human brain is flexible, different parts of it were exploited at different times in the evolutionary history of our genus after erectus. The portion of the brain used by modern humans may simply be the neurobiology that contemporary Homo brains happen to avail themselves of. Earlier species may well have used different brain structures for syntax.
Another issue with Friederici’s analysis of BA 44 is that it was inspired by the very famous experiments with cotton-top tamarin monkeys conducted by Tecumseh Fitch and Marc Hauser in the early 2000s. The experiments are problematic for at least two reasons.
The first reason is that they may be unreliable. I watched Fitch attempt, unsuccessfully, to run a version of these experiments among the Pirahãs in 2007, when he was my guest in the Amazon. If these experiments failed with human subjects, it is possible that they also did not run as well as believed among the tamarins. Perhaps they are not sensitive to syntax in the way claimed for them.
The most serious problem with these experiments, however, is that the science behind them is flawed. Pierre Perruchet and Arnaud Rey published this response in the Psychonomic Bulletin &amp; Review:
In a recent Science paper, Fitch and Hauser … claimed to have demonstrated that Cotton-top Tamarins fail to learn an artificial language produced by a Phrase Structure Grammar … generating center-embedded sentences, while adult humans easily learn such a language. We report an experiment replicating the results of F&H in humans, but also showing that participants learned the language without exploiting in any way the center-embedded structure. When the procedure was modified to make the processing of this structure mandatory, participants no longer showed evidence of learning. We propose a simple interpretation for the difference in performance observed in F&H’s task between humans and Tamarins, and argue that, beyond the specific drawbacks inherent to F&H’s study, researching the source of the inability of nonhuman primates to master language within a framework built around the Chomsky’s [sic] hierarchy of grammars is a conceptual dead-end.14
Adding to the scepticism about Fitch and Hauser’s results, Professor Mark Liberman offered his own response, on the widely read blog Language Log, concluding that Fitch and Hauser’s findings were in all likelihood about memory, not grammar per se.15 But if that is correct, then these studies have no bearing on Friederici’s claims, offering no support for either her methodology or her conclusions.
What this scepticism shows is that the popular idea that human brains are hardwired for language is not confirmed by science, even though it is often claimed. Nevertheless, there has always been a temptation, since brain studies began in earnest, to associate the structure of the brain – its lobes, layers, sections and other gross anatomical features – with different kinds of intelligence or distinct tasks. The outmoded ‘science’ of phrenology, or localisation, was a consequence of trying to associate physical features of the skull with the cognitive, emotional and moral properties of the brain encased within it. This is another error of overextending the Galilean metaphor of the universe, or the brain, as a clock.
The human brain must be able to follow conversations, use words appropriately, remember and execute pronunciations, decode pronunciations it hears from others, keep track of the stories in the conversations, remember who is being talked about and follow topics through long discussions. And this is only a small list of the ways in which language requires memory. No memory, no language. No memory, no culture. But language requires a special set of memories, not just any memory. The different varieties of memory that underlie language are sensory memory, short-term (or ‘working’) memory and long-term memory.
Sensory memory holds information from the five senses in the brain for very short periods of time. It is able to capture visual, aural, or tactile information in less than a second. In this way sensory memory enables one to look at a painting, say, or hear a song, or feel someone’s touch and remember what the experience looked like, sounded like, or felt like. This type of memory is vital to learning from new experiences. And it is particularly important for language learning – remembering how new words sound long enough to repeat them and build them into long-term memory. Sensory information is like a reflex and just seems to happen, but is not sufficient alone to support language. It degrades very quickly. If one examines a sequence of numbers, they might be remembered (though sensory memory is limited to a maximum of about twelve items), but in all probability not long enough to walk across the room and write them all down. There are three varieties of sensory memory: echoic, for sounds; iconic, for vision; and haptic, for tactile sensations.
Another kind of memory, short-term or ‘working’ memory, is also crucial in the use of language. At an MIT conference on 11 September 1956, remembered for a series of brilliant lectures and referred to by some as the start of the ‘cognitive revolution’, the psychologist George A. Miller, then of Bell Laboratories, later of Princeton, presented a paper entitled ‘The Magical Number Seven, Plus or Minus Two’. Miller’s research concluded that, without practice, people can remember up to nine, though usually more like five, items at a time for roughly a minute. Some have come to disagree with Miller and believe the capacity of working memory is actually lower than this, around four items at a time. Miller discovered, however, that if items are ‘chunked’, then people are able to remember larger numbers of them. This turned out to be a great result for science, but also for Bell Labs. Bell discovered that a person might have difficulty remembering the number 5831740263 but could easily remember it if it were ‘chunked’, as in (583) (174) (0263). It further turns out that working memory is biased towards sound-based memories, which means not only that it is important for remembering and decoding utterances, but also that it seems to have partially evolved for that very purpose, language once again helping to shape human evolution.
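For readers who think computationally, the arithmetic of chunking can be made concrete with a brief sketch. This is purely an illustration of mine, not anything from Miller or Bell Labs; the function name and the telephone-style chunk sizes are assumptions chosen only to mirror the example above.

```python
# Illustrative sketch (not from the book): grouping ten digits into three
# chunks, so working memory holds three items instead of ten.

def chunk_digits(number: str, sizes=(3, 3, 4)) -> list[str]:
    """Split a digit string into chunks of the given sizes."""
    chunks, start = [], 0
    for size in sizes:
        chunks.append(number[start:start + size])
        start += size
    return chunks

if __name__ == "__main__":
    raw = "5831740263"            # ten separate items to remember
    print(chunk_digits(raw))      # ['583', '174', '0263'] – three items
```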
The next form of memory is long-term memory. Most people remember a great deal of their childhoods. The memories may be partially inaccurate, altered over time by different conversations about them, but something of those early experiences remains for the entirety of people’s lives. Long-term memory can recall vast amounts of data for a nearly unlimited period of time, within the confines of human lifespans.
Long-term memory is divided into declarative memory and procedural memory. Procedural memory is implicit memory of processes involving motor skills. When trying to remember a password, your declarative memory might fail you, in the sense that you cannot recall the names of all the symbols you have chosen for your password. But your procedural memory can come to the rescue if you simply sit down at the keyboard and type the password. In a sense, your fingers ‘remember’ a code that your conscious memory has forgotten. Or someone might be trying to teach another person how to play a riff on the guitar and forget the notes. They can still teach the riff, however, by slowly playing it for the learner. But be careful not to play too slowly: procedural memory seems to prefer to keep things at tempo.
Procedural memory is vital for pronunciation and for gestures in sign languages, providing much quicker access to words and signs than declarative memory, just as one’s fingers might better remember a computer password. Without procedural memory, language and much of culture would be impossible. Obviously, any animal that can perform routines quickly also has a form of procedural memory. It is not unique to humans.
Declarative memory is subdivided into semantic memory and episodic memory. Semantic memory is associated with facts independent of any context, such as ‘a bachelor is an unmarried male’. It is crucial for linguistic meanings. And it is also vital for other context-free, long-term memories, such as ‘9/11 was a horrible day’.
Episodic memories, on the other hand, are long-term memories associated with a specific context and, therefore, tend to be more personal. You might use this memory to recall, ‘That’s where we had our first water pistol fight.’ Or, ‘That’s the Mexicali bar where I had my first tequila.’
Working memory takes place as exchanges between neurons in the frontal cortex. But long-term memories are widely distributed in the brain and seem to be processed initially by the hippocampus, which ‘consolidates’ memory for long-term storage elsewhere. Thus we must have brains that can support the three basic types of memory humans use so frequently and depend on to survive and to speak.
Several cultural traditions have attributed various functions to specific bodily organs and their parts, such as emotions being centred in the heart, thoughts being processed in certain parts of the brain and language being rooted in others. But culture can only get things right as its scientific methods, output and understanding evolve over time. It is now almost universally accepted that the heart has nothing to do with emotions: it is a blood pump, nothing more. And it is likewise, though perhaps less widely, now known that, although the brain is fundamental to the processing of all of our cognitive functions, it is not the sole locus of those functions. The resources of our entire body are marshalled in thinking, just as they are in communicating. (If you doubt this, imagine how illness or a hangover can affect your ability to think clearly.) And the brain’s thoughts largely come from the totality of our personal experiences, as our brains store and embellish them. These are also known as apperceptions – the experiences that make people who they are. But let’s call the question. How is it that the brain enables people to speak? And what prevents other animals from having language?
The French philosopher René Descartes was the morning star of the Renaissance, a touchstone in the culture and history of the Western world. He was a pioneer who brought innovative, original thinking back into fashion after more than five centuries of talking about people instead of ideas. Before Descartes and a few others, there yawned back into the shadows the nearly 1,000-year-long, oppressive Dark Ages, during which ‘reasoning’ was an act of power. Because of this, ad hominem arguments for one’s views – arguments based on some person’s reputation rather than the ideas at the heart of their arguments – were the standard.
Descartes’s work on the mind revolves around his popular thesis of dualism – the proposal that the mind is our soul and is non-material, be it spiritual, Platonic, or merely mental, while the body is material. According to Descartes, these are two irreconcilable substances. He suggested that the soul and the body were connected, however, through the pineal gland. For whatever reason (perhaps owing to religious traditions that set the soul and the body in opposition), dualism has been an influential thesis for more than 400 years. Descartes’s work was the basis for much of Noam Chomsky’s theory of the mind and language, and their relationship, as Chomsky explains in his book Cartesian Linguistics.16 Chomsky seems to support Descartes’s claim that, while the body is a machine, the mind is not obviously physical.
Dualism places evolutionary accounts of human thinking at a disadvantage, for the simple reason that something non-physical could not have evolved. Therefore, if we accept this dualism, we reject the idea that the mind evolved by natural selection. In public talks and written works, the erstwhile theist Alfred Russel Wallace, co-discoverer of natural selection, is often cited in support of the dualistic view that the mind is somehow a different substance from the body. Wallace did indeed believe that the mind, as a Cartesian non-material entity, could not have evolved. It would seem, however, that the best way forward is not the way of dualism but the simpler idea that one should first try to explain things in natural, physical terms before proposing new entities or substances. This would apply especially if one believed that there was no physical basis for thought. To argue for such an idea, one would need to begin with the mundane evidence of fossils, of DNA, of theories of language and culture, and of comparative primatology, before proposing non-physical explanations for language, human intelligence, or human minds more generally.
Theories of the human brain and mind have been around for millennia. Scholars are often careful to distinguish the two terms, brain vs mind, but this separation is ultimately inspired by both dualism and religion. In this conception, the word ‘mind’ refers to the brain activities and properties that people are currently unable to explain in physiological terms. This position assumes, though, that science could in principle eventually provide neurophysiological explanations for the properties people today refer to as mental.17
Earlier it was mentioned that palaeoneurologists practise a reverse form of phrenology when they study endocasts of fossil skulls (the inside of the skull rather than the outside). Some endocasts are formed naturally, as the skulls of dead creatures fill with material that fossilises and retains physical impressions from the inside of the skull. Other endocasts are made by the palaeoneurologist. Researcher Ralph Holloway has summarised how this is done.18 The procedure is first to fill the cranium with layers of latex. Next, when the latex is roughly 1–2 mm thick, the skull is placed in an oven for three to four hours to cure it, after which the latex is forced out of the brain cavity. Holloway says that this can be done by collapsing the latex, using baby powder to prevent it sticking to the fossil, and then carefully extruding it from the fossil through the foramen magnum. Once removed, the latex reverts immediately to the form it had inside the skull.
There is nothing wrong with the reverse phrenology of endocast reading, so long as the palaeoneurologist remembers that the brain regions identified on an endocast are only suggestions of what might have been – not unambiguous evidence for, and certainly not proof of or against, different cognitive or linguistic abilities. Endocasts are less informative than one might like because they say so little about the fossil’s cytoarchitecture, about white- vs grey-matter distribution, about the relative ratios of cell types, such as glial cells to neurons, and about other aspects of brain anatomy, such as neuronal density, that are never revealed on the inner or outer surfaces of skulls.
Another problem is that the brain is in some ways more like a blob and less like a heart, with its separate chambers. In the brain different tasks are performed by spontaneous or pre-existing connections that draw first on the parts of the blob that are most conducive to the task and subsequently on more and more firepower until the task is done. Many of these initially spontaneous connections become more routine after frequent exposure to similar tasks, indicating that learning has occurred. There is a system to the activations, but that system is less anatomical than electrochemical. It is more fluid and dynamic than static and hardwired.
In brain organisation, chemicals rule. Hormones generated by our emotions, thinking processes, diet and the overall state of our entire organism control our brain. This is why many neuroscientists have embraced the theory that the brain is ‘embodied’ – built into an anatomical, chemical, electrical and physically constrained system, namely our bodies. To such researchers, it is not so much that the brain thinks as that the entire individual does. Thus the brain is a physical organ, a constituent of the body, as all other organs are. This embodiment, along with the role of culture in our thinking, means that the brain is an organ physically integrated into the world through a body, and not a computer.
The picture of the brain that is emerging here, then, is of a cognitively non-modular organ with no congenitally specialised tissue for language (or cooking or guitar-playing). This is in direct contrast to the innately specialised areas that exist for physical abilities, but not for cultural or conceptual ones. If it is correct that language is a cultural artefact, the absence of a specialised brain area for it is predicted. If this idea is wrong, though, then language is more like vision, and there should be evidence that language is innately linked to a particular region of the brain specialised for language.
Further evidence for this general-purpose image of the Homo brain comes from disorders of language and speech. It turns out – very surprisingly, given claims frequently found in the literature – that there are no heritable language-specific disorders, supporting the non-compartmentalised theory of the brain.
* There is a great deal of discussion in the literature these days about birdsong and its possible connection to language, given the overlap of music and language in the human brain (see Bolhuis and Everaert (eds), Birdsong, Speech, and Language). However, one thing that these discussions all miss is the contribution of culture to human language. As I argue in Language: The Cultural Tool, because other animal communication systems, such as birdsong, lack the component of culture as I further define it in Dark Matter of the Mind, they cannot have language. And culture can override innate biases, insofar as such biases exist.
† The use of syntax as a filter was first proposed by Chomsky himself in his 1965 book Aspects of the Theory of Syntax (Cambridge, MA: MIT Press).
‡ For example, Merge makes the wrong predictions about the Wari’ language of the Amazon (see Daniel Everett and Barbara Kern, Wari’ (London/New York: Routledge, 1997)). Linguists Ray Jackendoff and Eva Wittenberg have claimed that one looks in vain for Merge in Riau Indonesian (‘What You Can Say Without Syntax: A Hierarchy of Grammatical Complexity’ – https://ase.tufts.edu/cogstud/jackendoff/papers/simplersyntaxwritten.pdf). And Jackendoff and the renowned syntactician Peter Culicover co-authored a book entitled Simpler Syntax (Oxford University Press, 2005), in which the authors claim that not all languages use the same syntactic operations.