Thumbs, Toes, and Tears


by Chip Walter


  If we’re upset, scared, or confused, we may have trouble making eye contact. We affectionately tousle our children’s hair, pat a cheek, hold a child’s hand to protect her, or hold a lover’s hand to silently stay emotionally in touch.

  5. Daniel McNeill, The Face (Boston: Little, Brown, 1998).

  6. See www.bbc.co.uk/science/humanbody/body/factfiles/facial/frontalis.shtml for more on facial muscles.

  7. P. Ekman and W. V. Friesen, “The Repertoire of Non-verbal Behavior: Categories, Origins, Usage, and Coding,” Semiotica 1 (1969): 49–98.

  Also available at http://face-and-emotion.com/dataface/nsfrept/psychology.html. Basic emotions expressed by the face are illustrated at http://face-and-emotion.com/dataface/emotion/expression.jsp.

  8. See National Geographic, May 1997, p. 89.

  9. Had Leonardo da Vinci found the skeleton of Homo erectus as he wandered the hills of Florence, as he often did back in the fifteenth century, even he, the master of detailed observation, would have had a difficult time realizing that the bones he was examining did not belong to a contemporary. Leonardo was fascinated with human anatomy. You might say he was even obsessed. What remain of his legendary notebooks are filled with drawings of hands and feet and forearms, heads and noses and eyes, each measured against the others to reveal the human body’s remarkably symmetrical proportions.

  10. While recent evidence indicates that Homo habilis made some forays beyond Africa into the Middle East and southern Russia, H. erectus roamed even farther. His bones have been found buried in the earth of Indonesia and Australia. Earlier work hinted that improvements in tool technology about 1.4 million years ago—namely, the advent of the Acheulean hand ax—allowed hominids to leave Africa. But new discoveries indicate that H. erectus hit the ground running, so to speak. Rutgers University geochronologist Carl Swisher III and his colleagues have shown that the earliest H. erectus sites outside of Africa, which are in Indonesia and the Republic of Georgia, date to between 1.8 million and 1.7 million years ago. It seems that the first appearance of H. erectus and their initial spread from Africa were almost simultaneous. Why? Food. What an animal eats often dictates how much territory it needs to survive, and carnivorous animals require bigger home ranges than do herbivores of comparable size because they have to roam farther to get the calories they need. (Their food is harder to catch than a plant.)

  Until recently, scientists believed that Homo erectus was the first human ancestor to make its way out of Africa, but between 1999 and 2001 paleogeographer David Lordkipanidze’s team found skull fragments of a creature with a brain about the size of Homo habilis’s in Dmanisi, Georgia (formerly part of the USSR). Though it resembled H. habilis, researchers believe this creature falls somewhere between H. habilis and H. erectus, and so it was given a new name: Homo georgicus.

  11. While recent evidence indicates that Homo habilis made some forays beyond Africa into the Middle East and southern Russia, H. erectus roamed much farther. His bones have been found buried in the earth of Indonesia and Australia.

  12. See Rick Gore, “The Dawn of Humans: Expanding Worlds,” National Geographic 191, no. 5 (1997): 91–92.

  13. Ibid., 84–109.

  14. For more on Homo erectus see http://www.wsu.edu:8001/vwsu/gened/learn-modules/top_longfor/timeline/Homo erectus/Homo erectus.

  15. In primates, the neocortex’s corticospinal tract evolved to link the posterior parietal cortex and the supplementary motor, premotor, and primary motor cortices to cervical and thoracic anterior-horn spinal interneurons, as well as to the motor neurons that control the arm, hand, and finger muscles used in skilled movements such as the precision grip. Just as important, parts of the inferior temporal neocortex evolved to provide the visual input that enables us to recognize complex shapes, permitting heightened responses to hands and the ability to recognize faces.

  Later evolution of the corticobulbar pathways to the facial nerve enabled intentional facial expressions (such as smiles). Next, scientists believe, Broca’s-area neocircuits developed that ran along corticobulbar pathways to multiple cranial nerves, resulting in the muscle control that now allows us to speak. It is also possible that Broca’s-area neocircuits found their way along the corticospinal pathways to cervical and thoracic spinal nerves, enabling manual sign language and linguisticlike mimed cues.

  16. Also see D. McNeill, “So You Think Gestures Are Nonverbal?,” Psychological Review 92, no. 3 (1985): 350–71.

  17. J. M. Iverson, O. Capirci, and M. C. Caselli, “From Communication to Language in Two Modalities,” Cognitive Development 9 (1994): 23–43.

  18. See Michael C. Corballis, From Hand to Mouth: The Origins of Language (Princeton, N. J.: Princeton University Press, 2003).

  19. See Takeshi Nishimura, Akichika Mikami, Juri Suzuki, and Tetsuro Matsuzawa, “Descent of the Larynx in Chimpanzee Infants,” PNAS 100 (2003): 6930–33.

  20. See http://www.abc.net.au/science/news/stories/s862604.htm.

  21. Garcia has written a book and produced a videotape that helps parents teach their infants to use sign language before they start talking. See Joseph Garcia, Sign with Your Baby: How to Communicate with Infants Before They Can Speak (Bellingham, Wash.: Stratton-Kehl Publications, 2001).

  22. Quoted in the New York Times before her death in 2003.

  23. That early hand signaling increases IQ might not seem immediately obvious, but it makes sense. Verbal acuity and IQ are linked. And the parts of the brain that control fine hand motion overlap with parts of the brain that send signals to our lungs, throats, lips, and mouths when we speak. The connection between words and gestures is, neurologically speaking, literal. The studies (Acredolo’s study involved 103 children) also revealed that learning and using hand signals helped babies make other transitions to speech. If a hand-signaling child, for example, said “pwease” rather than “please” or “toofbrush” rather than “toothbrush,” he or she would keep making the gesture signifying those things until mastering the word’s correct pronunciation. This means, strangely enough, that children who are not taught to sign, but who are physically gifted with throats, tongues, and lungs that enable them to speak earlier than most children, may later grow to be more intelligent. In other words, they didn’t speak earlier because they were smarter than other children; they became smarter later because they could speak earlier.

  24. For online movies that illustrate the different hand movements see http://www.dartmouth.edu/~lpetitto/nature.html.

  25. All babies, of course, throw their arms and hands all over the place all the time. So how could the researchers distinguish hand tossing from real hand babbling? They videotaped the infants and used optoelectronic tracking systems to record all of the children’s hand movements in three dimensions. These recordings showed that all of the children gestured in rapid, chaotic bursts, but in addition to these, the children whose parents used sign language gestured in very specific ways, more slowly, with their hands placed only in a tightly restricted space in front of their bodies, where all signed language is “spoken.”

  26. Even when children who communicate in ASL manage to get a grip on the complex gestures they use (the equivalent of complex spoken sentences), they continue to struggle toward full mastery of the language, just as other children struggle with spoken language at the same age. Ursula Bellugi at the Salk Institute found that as late as age ten both speaking and signing children make the same grammatical errors, and in one experiment both struggled to keep the characters of a complicated story straight when retelling it. Bellugi felt this was because, whether the children use sign language or their voices to communicate, they still draw on Broca’s and Wernicke’s areas to process what they are trying to say.

  27. Gestures and speech are precisely synchronized (see From Hand to Mouth: The Origins of Language, by Michael C. Corballis [Princeton, N.J.: Princeton University Press, 2002], p. 100). He suggests that speech and gesture form a single integrated communications system, which also indicates that they share a common neurological mechanism controlling them. Corballis’s take is that speech and gesture aren’t competing forms of communication but integrated ones.

  Even when a stroke or a terrible accident eradicates speech completely, a patient often can fall back on gesture and communicate very effectively. But if patients lose the mental capacity to mime or gesture, psychologists generally agree that they have become psychotic or suffer from severe dementia.

  28. Another strange twist here. Talking patients who suffer from aphasia can learn to “speak” more effectively using ASL, so the parts of the brain handling signing and speech must not be exactly the same areas, though they clearly draw on very similar parts of the brain. See S. W. Anderson, H. Damasio, A. R. Damasio, et al., “Acquisition of Signs from American Sign Language in Hearing Individuals Following Left Hemisphere Damage and Aphasia,” Neuropsychologia 30 (1992): 329–40. This shows how adaptable our brains can be. It is as if our stomachs could learn to digest cellulose or tin cans.

  29. Petitto and Robert Zatorre (at McGill University) have also studied positron emission tomography (PET) brain scans of eleven profoundly deaf people and ten hearing people. Previous work had already shown that deaf people who communicate using ASL process signed sentences mostly in the left hemisphere of the brain, just as hearing people do when they parse spoken language (both Wernicke’s and Broca’s areas are almost exclusively in the left hemisphere).

  The PET scans revealed that when most of us who use spoken language are racking our brains to come up with the right word (we all know the feeling), we use a specific structure in the left inferior frontal cortex to capture and express the thought. What was fascinating was that when the brain scans of both hearing and deaf subjects were compared, they revealed that exactly the same areas of the brain activated when deaf subjects struggled to come up with the right sign! Even when the deaf subjects processed totally meaningless grammatical hand movements, the planum temporale lit up just as it does in hearing people when they try to make sense of random incoming syllables that don’t carry the same symbolic meaning that words do.

  30. Lawrence Osborne, New York Times, October 24, 1999.

  31. Ann Senghas, Sotaro Kita, and Asli Ozyurek, “Children Creating Core Properties of Language: Evidence from an Emerging Sign Language in Nicaragua,” Science 305 (September 17, 2004).

  32. See http://www.dartmouth.edu/~lpetitto/optopic.jpg.

  Chapter 5: Making Thoughts Out of Thin Air

  1. René Descartes, Discourse on Method and Meditations, trans. L. Lafleur (1637; Indianapolis, Ind.: Bobbs-Merrill, 1960).

  2. Another advantage of standing upright is that it reduces the amount of the body you expose to the sun. The long cylinder of a human biped presents a far smaller target to the sun than a gorilla or a lion, which may explain why gorillas live in the rain forest and lions prefer to hunt at night.

  3. Marvin Harris, Our Kind (New York: Harper & Row, 1990), pp. 52–53.

  4. William R. Leonard, “Food for Thought,” Scientific American 13, no. 2 (2002). By using estimates of hominid body size compiled by Henry M. McHenry of the University of California at Davis, Robertson and Leonard reconstructed the proportion of resting energy needs required to support the brains of human ancestors. They calculated that a typical 80- to 85-pound australopith with a 450-cc brain would have devoted about 11 percent of its resting energy to powering its brain. H. erectus, which weighed about 125 to 130 pounds and had a brain about 900 cc in size, would have required 17 percent of its resting energy, or 260 out of 1,500 kilocalories a day.

  5. From http://www.anthro.fsu.edu/people/faculty/falk/radpapweb.htm, later published in The Evolution in Mammals of Nervous Systems, vol. 5, ed. Todd M. Preuss and Jon H. Kaas (New York: Elsevier-Academic Press, 2004).

  6. Physicians Michel Cabanac and Heiner Brinnel figured out this particular problem by massaging a cadaver’s skullcap. The blood flowed through the venous network from the outside of the skull to the diploic veins within the cranial bones and then to the inside of the braincase.

  7. See http://www.show.scot.nhs.uk/wghcriticalcare/rational%20for%20human%20selective%20brain%20cooling.htm for an online version of Cabanac and Brinnel’s paper.

  8. M. A. Baker, “A Brain-Cooling System in Mammals,” Scientific American 240 (1979): 130–39.

  9. Preuss and Kaas, eds., The Evolution of Primate Nervous Systems. Also see http://www.anthro.fsu.edu/people/faculty/falk/radpapweb.htm.

  10. In other words, could a system that evolved primarily to reduce overheating and therefore removed obstacles to growth also have played a role in feeding the brain so that it could more rapidly add neurons?

  11. The human race speaks roughly sixty-eight hundred languages, and whether you were born in a hut in the jungles of Borneo or the Bronx’s North Central Hospital, you entered the world capable of uttering every one of them. That includes the clicking sounds that punctuate the language of the !Kung San in the Kalahari Desert, the singsong Mandarin of eastern China, and the guttural, long, lumbering words so common to German.

  12. Rachel Smith, “Foundations of Speech Communication,” October 8, 2004; kiri.ling.cam.ac.uk/rachel/8oct04.ppt.

  13. Terrence W. Deacon, The Symbolic Species (New York: W. W. Norton, 1998), pp. 247–50.

  14. When we speak, most of the process is still unconscious. We don’t contemplate how to make an “s” sound or say the word “the.” But we can and do override our normal, visceral breathing patterns when we talk, and obviously speaking is intentional, not unconscious. The easy way we unconsciously talk with one another all the time may come down to practice: by the time we reach age seven or eight we are so practiced at it that it’s second nature, something like the way a good pianist can sit and play a complex piece of music she knows well without much thought about where and how her fingers hit the keys.

  15. It’s difficult to get linguists to agree on an exact number because different accents and dialects of English (as well as of other languages) blur the line between what counts as two different sounds and what is the same sound pronounced slightly differently. The meaning associated with those sounds also matters: “In some languages, where the variant sounds of p can change meaning, they are classified as separate phonemes—for example, in Thai the aspirated p (pronounced with an accompanying puff of air) and unaspirated p are distinguished one from the other.” “Phoneme,” Encyclopædia Britannica, 2004, Encyclopædia Britannica Premium Service, November 24, 2004, www.britannica.com/eb/article?tocId=9059762.

  16. The Khoisan of Africa use 141 phonemes, virtually every sound we are capable of making, in their language. Barbara F. Grimes, ed., Ethnologue: Languages of the World, 13th ed. (Summer Institute of Linguistics, 1996). Also see 64.233.161.104/search?q=cache:Z6Wp6IGHokYJ:salad.cs.swarthmore.edu/sigphon/papers/deboer97.ps.Z+maximum+number+phonemes+language&hl=en&client=safari.

  17. Like gesture and facial expression, prosody has ancient roots. In fact, some aspects of it go all the way back to the gill actions of mouthless Silurian fish that swam in Earth’s seas four hundred million years ago.

  18. The January 2003 issue of Neuropsychology, published by the American Psychological Association, has an interesting article about a study done in Belgium by psychologists interested in how emotions are processed by our minds. At Ghent University, Guy Vingerhoets, Ph.D., Celine Berckmoes, M.S., and Nathalie Stroobant, M.S., knew that the left brain is dominant for language and the right brain is dominant for emotion.

  19. “Complementing earlier studies on hand neurons in macaque F5, Ferrari et al. (2003) studied mouth motor neurons in F5 and showed that about one-third of them also discharge when the monkey observes another individual performing mouth actions. The majority of these “mouth mirror neurons” become active during the execution and observation of mouth actions related to ingestive functions such as grasping, sucking, or breaking food. Another population of mouth mirror neurons also discharges during the execution of ingestive actions, but the most effective visual stimuli in triggering them are communicative mouth gestures (e.g., lip smacking)—one action becomes associated with a whole performance, of which one part involves similar movements. This fits with the hypothesis that neurons learn to associate patterns of neural firing rather than being committed to learn specifically pigeonholed categories of data. Thus a potential mirror neuron is in no way committed to become a mirror neuron in the strict sense, even though it may be more likely to do so than otherwise. The observed communicative actions (with the effective executed action for different “mirror neurons” in parentheses) include lip-smacking (sucking, sucking and lip smacking); lips protrusion (grasping with lips, lips protrusion, lip smacking, grasping, and chewing); tongue protrusion (reaching with tongue); teeth chatter (grasping); and lips/tongue protrusion (grasping with lips and reaching with tongue; grasping). We thus see that the communicative gestures (effective observed actions) are a long way from the sort of vocalizations that occur in speech.” From M. A. Arbib, “From Monkey-like Action Recognition to Human Language: An Evolutionary Framework for Neurolinguistics,” Behavioral and Brain Sciences 28, no. 2 (2005): 105–24.

  20. The evolving configuration of our throats was making it possible for our ancestors to create a broader range of sounds than other primates. In fact, without this rearrangement, language as we know it would be out of the question. This is why efforts to teach chimps to speak have failed and why Koko the gorilla uses sign language and symbols when she “talks” rather than her voice.

  21. The idea of language being hardwired into the brain as an evolutionary adaptation was first aggressively proposed in the 1950s by linguist Noam Chomsky.

 
