Through the Language Glass: Why the World Looks Different in Other Languages


by Guy Deutscher


  In one sense, therefore, the color odyssey that Gladstone launched in 1858 has ended up, after a century and a half of peregrination, within spitting distance of his starting point. For in the end, it may well be that the Greeks did perceive colors slightly differently from us. But even if we have concluded the journey staring Gladstone right in the face, we are not entirely seeing eye to eye with him, because we have turned his story on its head and have reversed the direction of cause and effect in the relation between language and perception. Gladstone assumed that the difference between Homer’s color vocabulary and ours was a result of preexisting differences in color perception. But it now seems that the vocabulary of color in different languages can be the cause of differences in the perception of color. Gladstone thought that Homer’s unrefined color vocabulary was a reflection of the undeveloped state of his eye’s anatomy. We know that nothing has changed in the eye’s anatomy over the last millennia, and yet the habits of mind instilled by our more refined color vocabulary may have made us more sensitive to some fine color distinctions nonetheless.

  More generally, the explanation for cognitive differences between ethnic groups has shifted over the last two centuries, from anatomy to culture. In the nineteenth century, it was generally assumed that there were significant inequalities between the hereditary mental faculties of different races, and that these biological inequalities were the main reason for their varying accomplishments. One of the jewels in the crown of the twentieth century was the recognition of the fundamental unity of mankind in all that concerns its cognitive endowment. So nowadays we no longer look primarily to the genes to explain variations in mental characteristics among ethnic groups. But in the twenty-first century, we are beginning to appreciate the differences in thinking that are imprinted by cultural conventions and, in particular, by speaking in different tongues.

  EPILOGUE

  Forgive Us Our Ignorances

  Language has two lives. In its public role, it is a system of conventions agreed upon by a speech community for the purpose of effective communication. But language also has another, private existence, as a system of knowledge that each speaker has internalized in his or her own mind. If language is to serve as an effective means of communication, then the private systems of knowledge in speakers’ minds must closely correspond with the public system of linguistic conventions. And it is because of this correspondence that the public conventions of language can mirror what goes on in the most fascinating and most elusive object in the entire universe, our mind.

  This book set out to show, through the evidence supplied by language, that fundamental aspects of our thought are influenced by the cultural conventions of our society, to a much greater extent than is fashionable to admit today. In the first part, it became clear that the way our language carves up the world into concepts has not just been determined for us by nature, and that what we find “natural” depends largely on the conventions we have been brought up on. That is not to say, of course, that each language can partition the world arbitrarily according to its whim. But within the constraints of what is learnable and sensible for communication, the ways in which even the simplest concepts are delineated can vary to a far greater degree than what plain common sense would ever expect. For, ultimately, what common sense finds natural is what it is familiar with.

  In the second part, we saw that the linguistic conventions of our society can affect aspects of our thought that go beyond language. The demonstrable impact of language on thinking is very different from what was touted in the past. In particular, no evidence has come to light that our mother tongue imposes limits on our intellectual horizons and constrains our ability to understand concepts or distinctions used in other languages. The real effects of the mother tongue are rather the habits that develop through the frequent use of certain ways of expression. The concepts we are trained to treat as distinct, the information our mother tongue continuously forces us to specify, the details it requires us to be attentive to, and the repeated associations it imposes on us—all these habits of speech can create habits of mind that affect more than merely the knowledge of language itself. We saw examples from three areas of language: spatial coordinates and their consequences for memory patterns and orientation, grammatical gender and its impact on associations, and the concepts of color, which can increase our sensitivity to certain color distinctions.

  According to the dominant view among linguists and cognitive scientists today, the influence of language on thought can be considered significant only if it bears on genuine reasoning—if, for instance, one language can be shown to prevent its speakers from solving a logical problem that is easily solved by speakers of another language. Since no evidence for such constraining influence on logical reasoning has ever been presented, this necessarily means—or so the argument goes—that any remaining effects of language are insignificant and that fundamentally we all think in the same way.

  But it is all too easy to exaggerate the importance of logical reasoning in our lives. Such an overestimation may be natural enough for those reared on a diet of analytic philosophy, where thought is practically equated with logic and any other mental processes are considered beneath notice. But this view does not correspond with the rather modest role of logical thinking in our actual experience of life. After all, how many daily decisions do we make on the basis of abstract deductive reasoning, compared with those guided by gut feeling, intuition, emotions, impulse, or practical skills? How often have you spent your day solving logical conundrums, compared with wondering where you left your socks? Or trying to remember where your car is in a multilevel parking lot? How many commercials try to appeal to us through logical syllogisms, compared with those that play on colors, associations, allusions? And finally, how many wars have been fought over disagreements in set theory?

  The influence of the mother tongue that has been demonstrated empirically is felt in areas of thought such as memory, perception, and associations or in practical skills such as orientation. And in our actual experience of life, such areas are no less important than the capacity for abstract reasoning, probably far more so.

  The questions explored in this book are ages old, but the serious research on the subject is only in its infancy. Only in recent years, for example, have we understood the dire urgency to record and analyze the thousands of exotic tongues that are still spoken in remote corners of the globe, before they are all forsaken in favor of English, Spanish, and a handful of other dominant languages. Even in the recent past, it was still common for linguists to claim to have found a “universal of human language” after examining a certain phenomenon in a sample that consisted of English, Italian, and Hungarian, say, and finding that all of these three languages agreed. Today, it is clearer to most linguists that the only languages that can truly reveal what is natural and universal are the hosts of small tribal tongues that do things very differently from what we are used to. So a race against time is now under way to record as many of these languages as possible before all knowledge of them is lost forever.

  The investigations into the possible links between the structure of society and the structure of the grammatical system are in a much more embryonic stage. Having languished under the taboo of “equal complexity” for decades, the attempts to determine to what extent the complexity of various areas in grammar depends on the complexity of society are still mostly on the level of discovering the “how” and have barely begun to address the “why.”

  But above all, it is the investigation of the influence of language on thought that is only just beginning as a serious scientific enterprise. (Its history as a haven for fantasists is of much longer standing, of course.) The three examples I presented—space, gender, and color—seem to me the areas where the impact of language has been demonstrated most convincingly so far. Other areas have also been studied in recent years, but not enough reliable evidence has yet been presented to support them. One example is the marking of plurality. While English requires its speakers to mark the difference between singular and plural whenever a noun is mentioned, there are languages that do not routinely force such a distinction. It has been suggested that the necessity (or otherwise) to mark plurality affects the attention and memory patterns of speakers, but while this suggestion does not seem implausible in theory, conclusive evidence is still lacking.

  No doubt further areas of language will be explored when our experimental tools become less blunt. What about an elaborate system of evidentiality, for example? Recall that Matses requires its speakers to supply detailed information about their source of knowledge for every event they describe. Can the habits of speech induced by such a language have a measurable effect on the speakers’ habits of mind beyond language? In years to come, questions such as this will surely become amenable to empirical study.

  When one hears about acts of extraordinary bravery in combat, it is usually a sign that the battle has not been going terribly well. For when wars unfold according to plan and one’s own side is winning, acts of exceptional individual heroism are rarely called for. Bravery is required mostly by the desperate side.

  The ingenuity and sophistication of some of the experiments we have encountered is so inspiring that it is easy to mistake them for signs of great triumphs in science’s battle to conquer the fortress of the human brain. But, in reality, the ingenious inferences made in these experiments are symptoms not of great strength but of great weakness. For all this ingenuity is needed only because we know so little about how the brain works. Were we not profoundly ignorant, we would not need to rely on roundabout methods of gleaning information from measures such as reaction speed to various contrived tasks. If we knew more, we would simply observe directly what goes on in the brain and would then be able to determine precisely how nature and culture shape the concepts of language, or whether any parts of grammar are innate, or how exactly language affects any given aspect of thought.

  One may object, of course, that it is unfair to describe our present state of knowledge in such bleak terms, especially given that the very last experiment I reported was based on breathtaking technological sophistication. It involved, after all, nothing short of the online scanning of brain activity and revealed which specific areas are active when the brain performs particular tasks. How can that possibly be called ignorance? But try to think about it this way. Suppose you wanted to understand how a big corporation works and the only thing you were allowed to do was stand outside the headquarters and look at the windows from afar. The sole evidence you had to go on would be in which rooms the lights went on at different times of the day. Of course, if you kept watch very carefully, over a long time, there would be a lot of information you could glean. You would find out, for instance, that the weekly board meetings are held on floor 25, second room from the left, that in times of crisis there is great activity on floor 13, so there is probably an emergency control center there, and so on. But how inadequate all this knowledge would be if you were never allowed to hear what was being said and all your inferences were based on watching the windows.

  If you think this analogy is too gloomy, then remember that the most sophisticated MRI scanners do nothing more than show where the lights are on in the brain. The only thing they reveal is where there is increased blood flow at any given moment, and we infer from this that more neural activity is taking place there. But we are nowhere near being able to understand what is “said” in the brain. We have no idea how any specific concept, label, grammatical rule, color impression, orientation strategy, or gender association is actually coded.

  When researching this book, I read quite a few latter-day arguments about the workings of the brain shortly after trawling through quite a few century-old discussions about the workings of biological heredity. And when these are read in close proximity, it is difficult not to be struck by a close parallel between them. What unites cognitive scientists at the turn of the twenty-first century and molecular biologists at the turn of the twentieth century is the profound ignorance about their object of investigation. Around 1900, heredity was a black box even for the greatest of scientists. The most they could do was make indirect inferences by comparing what “goes in” on one side (the properties of the parents) and what “comes out” on the other side (the properties of the progeny). The actual mechanisms in between were mysterious and unfathomable for them. How embarrassing it is for us, to whom life’s recipe has been laid bare, to read the agonized discussions of these giants and to think about the ludicrous experiments they had to conduct, such as cutting the tails off generations of mice to see if the injury would be inherited by the offspring.

  A century later, we can see much further into the mechanisms of genetics, but we are still just as shortsighted in all that concerns the workings of the brain. We know what comes in on one side (for instance, photons into the eye), we know what goes out the other side (a hand pressing a button), but all the decision making in between still occurs behind closed doors. In the future, when the neural networks will have become as transparent as the structure of DNA, when scientists can listen in on the neurons and understand exactly what is said, our MRI scans will look just as sophisticated as cutting off mice’s tails.

  Future scientists will not need to conduct primitive experiments such as asking people to press buttons while looking at screens. They will simply find the relevant brain circuits and see directly how concepts are formed and how perception, memory, associations, and any other aspects of thought are affected by the mother tongue. If their historians of ancient science ever bother to read this little book, how embarrassing it will seem to them. How hard it will be to imagine why we had to make do with vague indirect inferences, why we had to see through a glass darkly, when they can just see face-to-face.

  But ye readers of posterity, forgive us our ignorances, as we forgive those who were ignorant before us. The mystery of heredity has been illuminated for us, but we have seen this great light only because our predecessors never tired of searching in the dark. So if you, O subsequent ones, ever deign to look down at us from your summit of effortless superiority, remember that you have only scaled it on the back of our efforts. For it is thankless to grope in the dark and tempting to rest until the light of understanding shines upon us. But if we are led into this temptation, your kingdom will never come.

  APPENDIX

  Color: In the Eye of the Beholder

  Humans can see light only in a narrow band of wavelengths from 0.4 to 0.7 microns (thousandths of a millimeter), or, to be more precise, between around 380 and 750 nanometers (millionths of a millimeter). Light in these wavelengths is absorbed in the cells of the retina, the thin plate of nerve cells that lines the inside of the eyeball. At the back of the retina there is a layer of photoreceptor cells that absorb the light and send neural signals that are eventually translated into the sensation of color in the brain.

  When we look at the rainbow or at light coming out of a prism, our perception of color seems to change continuously as the wavelength changes (see figure 11). Ultraviolet light at wavelengths shorter than 380 nm is not visible to the eye, but as the wavelength starts to increase we begin to perceive shades of violet; from around 450 nm we begin to see blue, from around 500 green, from 570 yellow, from 590 orange shades, and then once the wavelength increases above 620 we see red, all the way up to somewhere below 750 nm, where our sensitivity stops and infrared light starts.

  A “pure” light of uniform wavelength (rather than a combination of light sources in different wavelengths) is called monochromatic. It is natural to assume that whenever a source of light looks yellow to us, this is because it consists only of wavelengths around 580 nm, like the monochromatic yellow light of the rainbow. And it is equally natural to assume that when an object appears yellow to us, this must mean that it reflects light only of wavelengths around 580 nm and absorbs light in all other wavelengths. But both of these assumptions are entirely wrong. In fact, color vision is an illusion played on us by the nervous system and the brain. We do not need any light at wavelength 580 nm to perceive yellow. We can get an identical “yellow” sensation if pure red light at 620 nm and pure green light at 540 nm are superimposed in equal measures. In other words, our eyes cannot tell the difference between monochromatic yellow light and a combination of monochromatic red and green lights. Indeed, television screens manage to trick us into perceiving any shade of the spectrum by using different combinations of just three monochromatic lights—red, green, and blue. Finally, objects that appear yellow to us very rarely reflect only light around 580 nm and more usually reflect green, red, and orange light as well as yellow. How can all this be explained?
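  The additive “color matching” described above—two superimposed lights producing a third sensation—is exactly the trick RGB displays exploit. As a minimal sketch of the arithmetic (the `mix` helper and the (r, g, b) intensity triples here are illustrative conventions, not anything from the text), superimposing light sources simply amounts to adding their per-channel intensities:

```python
# Additive light mixing: superimposed light sources add their
# intensities channel by channel. Triples are linear (r, g, b)
# intensities in [0.0, 1.0]; each channel saturates at 1.0.

def mix(*lights):
    """Superimpose (r, g, b) light sources by summing each channel,
    clipping at full intensity."""
    return tuple(min(1.0, sum(channel)) for channel in zip(*lights))

red = (1.0, 0.0, 0.0)
green = (0.0, 1.0, 0.0)
blue = (0.0, 0.0, 1.0)

# Equal measures of pure red and pure green light are perceived as
# yellow -- indistinguishable to the eye from monochromatic ~580 nm light.
yellow = mix(red, green)       # (1.0, 1.0, 0.0)

# All three primaries together appear white.
white = mix(red, green, blue)  # (1.0, 1.0, 1.0)
```

  This captures only the additive principle; a real display must also account for the eye’s nonlinear response and the particular spectra of its three primaries.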

  Until the nineteenth century, scientists tried to understand this phenomenon of “color matching” through some physical properties of light itself. But in 1801 the English physicist Thomas Young suggested in a famous lecture that the explanation lies not in the properties of light but rather in the anatomy of the human eye. Young developed the “trichromatic” theory of vision: he argued that there are only three kinds of receptors in the eye, each particularly sensitive to light in a particular area of the spectrum. Our subjective sensation of continuous color is thus produced when the brain compares the responses from these three different types of receptors. Young’s theory was refined in the 1850s by James Clerk Maxwell and in the 1860s by Hermann von Helmholtz and is still the basis for what is known today about the functioning of the retina.

 
