The Word Detective
The furore in Chicago and elsewhere in America as the press started debating mutual intelligibility soon faded. But it reminded us that changes in language typically develop along well-worn trails. Although you might not be able to predict precisely which new words are just over the horizon, it is certainly possible to examine how new words arise, and to identify the general routes they take into English. If you do want to try your hand at prediction, you need to play the percentages—which means that you have to know two things: how words have been formed in the past, and which areas of the language are likely to generate new vocabulary.
Fewer than 1 percent of new words are actually new—well, far fewer than 1 percent, in fact. There is no point in trying to create a new word from nowhere: almost all new words have some sort of ancestry or association with words already current in the language. Language is for communication, and if you are brave enough to introduce a new word with no links to existing word-formation patterns, then communication more or less comes to an abrupt halt until the people you are trying to communicate with have learnt the new meaning.
A few years ago, the standard example of the word that fell to Earth from nowhere was grok: to understand intuitively or by empathy. In 1961, Robert Heinlein wrote: “Smith had been aware of the doctors but had grokked that their intentions were benign” (Stranger in a Strange Land, Chapter 3). No doubt there were lexicographers dancing in the aisles back then: spontaneous creation; the infant phenomenon; boldly going where no other verb had gone before. This is all that the laconic OED has to say about the genesis of grok: “Arbitrary formation by Heinlein.” As simple as that. There wasn’t really a lot else it could say, as Heinlein’s word had no lexical relatives. But how do you manage to advance communication when you invent something with no known alliances to existing words, and with nothing for the user to grab hold of? It doesn’t really work.
We often came across one-off instances of words which (unlike grok) had never managed to gain even the slightest foothold in the language. In fact, when we were sorting through the card files looking for potential dictionary entries, I would guess that we passed over more items than we set aside for inclusion. Occasionally people asked me which words we left out, but I don’t remember any examples. In those days, each would have been attested by fewer than five citations. It goes without saying that they were—as a group—instantly forgettable, because they had been instantly forgotten. If they had attracted an audience, and people had started using them, then we would have found more examples. Again, think of the practicalities: this is something else not to waste time on.
If new words aren’t generally pure inventions, then where do they come from? You might think that they are mostly snatched from other languages. You would be wrong there, too, but not so wrong. Borrowings have been a significant factor in vocabulary change in English over the past millennium, but today they represent—statistically—a waning, less productive form of language change than in the past. Nowadays we borrow less than 10 percent of our new vocabulary. And what are the reasons for this? They are buried deep in economic and cultural adjustments that have spanned centuries. Major language change doesn’t occur overnight. It’s not dependent on some political decision taken on a particular day, but is an accumulation of small changes, sometimes over generations. Loanword or borrowing studies teach us that English was at base a Germanic language, and that at various times it has undergone significant change through invasion, trade, cultural developments, scientific innovation, and the like. Up until the nineteenth century, English was nothing like the high-prestige, global language it is today. As a result, if it wanted to follow the latest trends, it had to import words—especially from French, Latin, and Greek, which were higher-prestige languages in culture, science, and many other areas. I refuse to launch into long lists, but if—in the eighteenth century—we felt that we needed to bulk up our knowledge and vocabulary of military defence techniques, we didn’t take words from Scandinavian languages or Welsh, but looked to French, the language of Napoleon: abatis (defensive barricade), deployment (of troops), pas de souris (passageway from an outwork back to a defensive ditch). If we wanted to introduce a sense of artistic brilliance into English in the same century, we looked to Italian and Latin: agitato, falsetto, even melodrama. These—and many others—were areas in which English was not a market leader.
As we entered the twentieth century, things changed. Other geographical varieties of English, outside Britain, were now strong, and supported increasingly vibrant and self-confident cultures generating their own vocabulary; English generally was becoming the prestige global language, and the balance started to shift so that other languages were absorbing the vocabulary of this newly dominant English. Non-English-speakers aspired to the culture of Britain and America, and the economic strength of America pushed home this position. English has therefore, over the years, found less need to borrow words from elsewhere, and, as a result, looks to its own resources and creates new terms from its own word stock. It’s not that we don’t take on borrowed words at all these days—it’s just that the percentage has, for this and doubtless other reasons, diminished.
The predominant factor in language change today is best described as minor adjustment—use what you know and tweak it slightly, so that you take people with you. Neither spontaneous creation nor word borrowing affects language so much as discreet innovations that make use of existing elements of the language. So, without a great deal of fuss, you can weld together old words already existing in English to form new compounds (snowboard, halterneck), and add affixes to words or word elements to create new terms (microbrewery, interdisciplinary, rageful, selfie).
Another major type of word formation today is semantic drift, whereby one word develops a new meaning in addition to the ones it already has (including technological arrivistes, such as the computer mouse—which just happens to look like one of those cuddly furry beasts with a clutchable body and—unless wireless—a long, thin tail). I could produce a very long list of these, but not even I would be interested. Think of any word, and then think of the secondary and tertiary meanings it has developed (start here: table, ladder, kid, rotor).
To complete the list of major word-formation types, we should mention conversion, or switching part of speech (e.g., impact the noun becomes to impact the verb: we’ve had this mode of change since the dawn of language, so please don’t let it worry you). Then there is blending (e.g., influenza and affluence create affluenza), shortening (abbreviation), and the creation of acronyms and initialisms. The difference between these last two similar types of word formation is that acronyms (e.g., NATO, AIDS) can be and often are pronounced as words, whereas initialisms (e.g., RSPCA and FYI) can’t be, at least without unnatural facial contortions.
The key fact about all of these word-formation methods is that they employ tried-and-tested routes for creating new vocabulary items from old, pre-existing terms. So if you are forecasting the vocabulary of the future, you would be well advised to stick within these guidelines. The relative frequency of use of the methods may change over the years, but they are likely to cover almost all new expressions for the foreseeable future.
We began by thinking about whether it was possible to predict the next big word—an activity I recommend avoiding. But thinking about it has drawn us into what factors do influence language change. Language change typically happens along well-established routes, and it takes place most vigorously in areas that are subject to the most change in real life: politics, medicine, computing, general slang, environmental concerns, SMS. I’m sure you can think of others. But in general, don’t waste your time—my advice is to observe what happens without trying to change it.
It is a strange and little-known fact, but as the New Words group was editing new entries for the dictionary in the mid-1980s, we found ourselves exposed to some of the curious policy decisions made by OED editors in the past. For instance, when we wanted to add some new material to the entry for African, we found that the existing entry was unnaturally brief. Way back in 1884, the original editors had intentionally omitted the word, and all that the dictionary now contained was a short entry hastily compiled in the 1970s for the Supplement to the dictionary. The reason the original editors omitted African was that they didn’t really regard it as a word, but as a proper name turned into a pseudo-word by the addition of -an. If you included African in the dictionary, then surely you were opening yourself up to having to include an almost indefinite number of comparable terms based on place names and the names of people. But the original editors had a tremendous change of heart a few months later when they reached the word American—another place name plus -an. Even the objective and neutral OED editors could see that there was more to American than met the eye, and so they condescended to include a rather small entry for the term (about six inches of type).
The various versions through which the OED’s entry for the word American passed from that first appearance in 1884 up to the present day show a number of shifts of editorial and cultural perspective. Although the original OED was firmly based “on historical principles,” the editors allowed themselves to deviate from these if they thought they knew better (or, more accurately, if they thought that as-yet-undiscovered evidence would back up what they felt). So they had a habit of presenting words based on proper names as if there were some universal law that the adjective would appear first in the language, followed later on by the noun. The evidence of real data—and especially from the fund of real data that has become available to us since the emergence of the Internet—seems, on the contrary, to suggest that these nouns typically predate the adjectives.
It turns out that this is an issue with the entry for the word American. The loyal old OED placed the adjective in first place within its entry, before the noun: “Belonging to the continent of America,” dating the term in English from 1598. It then followed the adjective with the noun, of which the first meaning was, in the wording of the time, “An aborigine of the American continent; now called an ‘American Indian,’” but dated it twenty years earlier, from 1578. According to the editors’ own historical principles, the noun should have been placed first. Look at those definitions, too, for further evidence of changes in editorial (or general cultural) perspectives. We use a different word-set these days to describe indigenous peoples (not aborigine, for example). It would no longer be accurate to call the original inhabitants of North America “American Indians,” when a new vocabulary has been introduced for Native Americans. These are changes that have been working through society and language for decades in different regions of English, and they are part of a spectrum which will surely change again in future. It’s intriguing to try to gauge how much these changes are based simply on the use of different words nowadays to describe things, and how much on the much larger issue of different mindsets. We think of things differently now, so we use different words: not new, borrowed words, but modifications of terms we already had in the language. I know it’s a small thing, but it is a slight indication of how perspectives were different in those old days of certainty and empire.
As a universal rule for the OED of the past (when it was generally adhered to) and for today’s editors (by whom it is rigorously adhered to), proper names, such as the geographical name America, or the personal name Mandela, are not included in the OED unless they are used to signify more than the one person or place to which they originally relate. The Duke of Wellington is noticed by the OED not as an individual—for his military prowess, or because he was a prime minister—but because of his boots, amongst other things. The Duke will be remembered for many things: he was, perhaps uniquely, a British hero with a celebratory door-knocker named after him—though that has so far evaded the all-seeing eye of the Pinkertons at the OED. This was an object which, in one of its many incarnations, took the form of a stout iron knocker in the shape of a British Lion which came smashing down on a striker plate consisting of a defenceless Napoleonic Eagle, thus encouraging people to visit their neighbours at the same time as contributing to the war effort.
The “Iron Duke” (1830 onwards) left us a considerable verbal legacy: his knee-high military boots were called Wellingtons from 1816, and waterproof lookalikes were developed later in the nineteenth century; we also had Wellington trousers, Wellington hats, and Wellington coats from around 1810. We named a beef dish after him, and even an apple (which the OED slightingly likes to remind us was also called the Dumelow’s seedling).
But for the past hundred years Wellington has perhaps been least remembered for providing the English language with the first example of the word ganja (Indian hemp, for smoking or whatever else you might like it for). And how did it come about that we have Wellington to thank for ganja? In his early career as a soldier he spent much of his time in India, needless to say the home of “Indian hemp.” To keep the soldiers happy, the army ran bazaars at which anything Indian might be bought and sold. Wellington was an indefatigable dispatch-writer, telling his army fellows everything that was happening to him in India. In 1800 he wrote about the bazaars, to which local Indians brought ganja, “bhang,” opium, “country-arrack,” and toddy to sell. Wellington was concerned that the British redcoat should not become debauched or roué, so he enforced a blanket ban on alcoholic arrack or toddy being sold to the soldiers. He seems, however, to have let ganja and opium slip through the official net—perhaps because they were regarded as medicinal tonics for men wounded in battle. The OED is a dynamic masterwork: in due course, we will find earlier evidence for ganja, I’m sure. But just for now, the Iron Duke is credited with the introduction of the word into English—“mentioned in dispatches.”
From time to time, as editor of the New Words group, I became aware of big words that would need addressing—words that dominated cultural or scientific debate. These were always difficult to edit, but on the other hand, they were good indicators of the strength of the dictionary’s editorial policy. High-profile words often rush into public prominence, but their meaning and cultural significance can take a while to settle down—how we regard a word after fifty years can be very different from the way it is interpreted after six months (AIDS, perestroika, online). Our source evidence—texts from the real world—sometimes sends us mixed messages. Sometimes English speakers just alter the spelling or pronunciation of these words subtly as they embed themselves in the language.
By the mid-1980s you did not need to be a historical lexicographer to be aware that AIDS was a major social issue—and consequently one that the OED would need to address comprehensively and yet sensitively. AIDS was one of the trickiest new entries we ever had to confront, in terms of its social context. It kept on moving around. Even the way it was spelt caused problems. The earliest evidence we collected used the form A.I.D.S., with a full set of full stops in the old style. But considerable uncertainty arose, as the term settled into the language. Some people preferred a lower-case form, aids, which came to be regarded as slightly confusing, as we already had a perfectly usable word aids, in the context of helping and assisting. Both coexisted in roughly the same semantic area (medicine). People tried Aids, too, but that didn’t really stick. Eventually a consensus settled on AIDS in the late 1980s and early 1990s. In the early years of preparing and publishing the entry, we had to monitor our headword spelling almost daily to keep up with what our evidence told us was the most frequent spelling.
We had similar problems with the etymology. Early documentary evidence was confused over whether AIDS was short for “acquired immune deficiency syndrome” or “acquired immuno-deficiency syndrome.” Read that again—there is a difference. Later evidence tended to prefer the former (acquired immune deficiency syndrome), but we couldn’t establish definitively which form was primary. Yet again, you have to accept that language development may not be absolutely straightforward.
The standard technique for establishing the details of the historical emergence and continuity of a term is to follow the documentary evidence.
We would collect as much evidence as we could find and then submit it to a process of analysis. As time went by, we were able to pull in computer power to help with evidence-gathering, but in the end the decision still remained in the hands of the (very human) editor. Put yourself back into this unstable stage in the history of the word AIDS in the mid-1980s: we didn’t have the medical knowledge we had later, we didn’t have the defining terminology that developed as the condition was better understood, and we didn’t have computer power to collect and sift through reams of documentary evidence. In addition, the term HIV wasn’t coined until 1986, so aspects of the full picture only became available later.
Common knowledge dated AIDS to 1983. When we researched it we were surprised to find that the name did predate 1983, as it had arisen in medical discussions the previous year. That shouldn’t really have surprised us. It is not unusual for research to uncover an unrecognised prehistory for any term. On 8 August 1982 the New York Times had published a pioneering article on the new condition, tentatively informing its audience about what would turn out to be one of the words of the decade, if not of the century.
That earliest reference from 1982 is another object lesson for users of the dictionary. We find it’s not unusual for the gentle reader to assume that the OED’s first recorded quotation is the very first use of a term ever: to imagine that the expression AIDS was actually coined on 8 August 1982. If you ever find yourself thinking that, give yourself a sharp rap on the back of the head to return yourself to reality. In real life, that is just the earliest reference we have been able to find. Obviously, a reporter on a newspaper isn’t likely to invent a term that is bubbling around amongst specialists at the time. But that is a fact it’s easy to forget.
Once we had collected all of the documentary evidence that we were able to amass for AIDS, we faced the problem of its definition. As already noted, definitions can be problematic if your term has not established itself fully in the language. It was our science editors who were charged with the problem of defining AIDS for the first time. Their first effort reached print (even then after many changes) in 1989, and it now feels curiously dated: