Righting the Mother Tongue: From Olde English to Email, the Tangled Story of English Spelling


by David Wolman


  ARMS OUTSTRETCHED ACROSS a mess of papers in a sprawling office, Nanci Bell used a pencil to circle my test scores. With shoulder-length brown hair and dressed in a black suit, the director and CEO of Lindamood-Bell looks a little like an academic Jackie Onassis. “I don’t see it,” said Bell. “Nothing in this test shows that you’re a compensated dyslexic. You’re fine.”

  Perplexed, I explained that I’m a horrible speller and if she wanted to, she could inspect the list I’ve been keeping of words whose spellings continually dog me. Bell just shook her head. “David, I’m sorry. I thought you’d be thrilled. But I just don’t see it,” she said, looking over her glasses at my test scores once again. “We gave you tons of tests—word attack, the block tests, symbol imagery, spelling—one of those would have shown it,” by which she meant the supposed deficit.

  We flipped through some of the test papers because I needed to point out mistakes that had caused acute embarrassment during the examination. Look at this, I argued. Cacophony: wrong. Belligerent: wrong. And although I wanted to burn the paper then and there, I reminded Bell of my inexplicable camoflague instead of camouflage. “How can you say I’m normal when I’m doing things like that?” I asked. What I meant was: Please don’t make me walk out of here without a diagnosis.

  Bell didn’t budge. “It’s about the scores,” she said. “Empirical results tell us about what’s happening in a brain,” more so than any humiliation over particular misspellings. That’s the nature of standardized tests, she said with a hint of impatience in her voice. A few weeks after my visit, Bell was slated to be the keynote speaker at the biggest learning-disabilities conference in the country, where the focus is on severe disability: kids who score in the single-digit percentile range on reading tests, and who are falling precariously behind in school. Bell grabbed the page with the three nonsense words: spref, spligrity, and yetterswipper. “Look at this,” she said. “Most people don’t get all of those. Or the two g’s in exaggerate—you got that too.” But again, she explained, it’s not the single episodes of a beligerent here or a camoflague there. It’s the percentiles that matter. Statistically speaking, I’m not a bad speller.

  It was then that I realized I had fallen victim to one of the most common mistakes there is when it comes to interpreting one’s experience in the world: I had confused the anecdotal and the empirical. For decades, I’ve called myself a bad speller because it seemed like I was one, and because my siblings enjoy reminding me that I am one. Instances of spelling confusion only served to reinforce this image of myself, while so many million correctly spelled words went largely unnoticed, just part of the process of writing.

  I now see that I’m quite lucky. One of the cruelest things about dyslexia is the downward spiral effect. If young children have trouble turning letters and letter groups into sounds, they get frustrated with reading and have little motivation to keep doing it. Without reading, and without a word-rich environment, they don’t build vocabulary, which is an essential tool for improving other aspects of reading, like guessing meaning and decoding spellings, so they get more frustrated and even less inclined to read, which only sets them further and further back.

  Before leaving, Bell said that if I really want to improve my spelling, I should start by taking a few extra moments to hold the picture of words in my mind. I may never become a Scripps Bee champion, but the exercise would help improve my ability to visualize correct spellings.

  It was a nice suggestion, but why would I bother investing that kind of time when I have spell-check at my disposal? For poor spellers, dyslexic or otherwise, spell-check is the ultimate compensation tool. Back in London, I had mentioned to Frith that I often hear people lament society’s increasing dependence on spell-check, and how technologies like text messaging erode spelling skill. “Ahhh, yes,” said Frith. “It’s the old view that rote learning of spelling has value and shows one’s level of education. I couldn’t disagree more.”

  TEN

  FIXERS

  I did wonder, but I didn’t want to say anything. I thought to myself: You can fly to Australia via the United States.

  German traveler Tobi Gutt. In 2007, Gutt ended up in Sidney, Montana, instead of Sydney, Australia, because of a spelling error while booking his ticket online.1

  LES EARNEST LIVES IN a woody ranch home in the Silicon Valley hamlet of Los Altos Hills. The house is modest for this part of the country, where it’s not unusual for homes to be accented by small vineyards. On a cobalt-sky afternoon in April, Earnest greeted me at the door with a down-to-business hello. He was wearing a gray tie-dyed shirt, loose black pants, and black socks. With a large forehead encroaching on a tiara of gray hair, the grandfather of spell-check looks a bit like Gene Hackman.

  We sat in his living room in front of a large metal circular coffee table that hangs from the ceiling by three chains. “This is actually the biggest computer disc in the world,” said Earnest. “In 1972, we ordered six of them for the lab [at Stanford], but they turned out to be useless.” The discs went up for auction, selling for little more than their value as scrap, so Earnest decided to bring one home.

  A few months ago, I’d sent out some emails to technology experts, inquiring about the origins of spell-check (spellcheck, spell-checker, spelling checker—take your pick). A winding electronic trail began at the Computer History Museum in Mountain View, wove through a few in-boxes at Microsoft, and even mentioned Martha Stewart’s boyfriend and one-time International Space Station guest Charles Simonyi. A few weeks later, I received a message from Earnest. “In response to your inquiry, I believe that I created the first spelling checker in 1961.”

  Spell-check, as you probably know, reviews electronic text for spelling errors and then suggests corrections or makes them automatically. Hundreds of millions of people use the checker within Microsoft Word, and hundreds of millions more are double-checking their words, or having them double-checked, by similar programs within Apple’s operating systems, in online tools for blogging and email, or with stand-alone spell-checkers on open-source platforms like Linux.

  With the exception of lexicographers and stalwarts who believe such spelling aids corrupt the mind, most people never give spell-check much thought. If the program gets our attention at all, it’s when we curse it for flagging a proper noun, or because it failed to notice that the memo just sent to the boss requests a 10 percent raze. Otherwise, spell-check just quietly runs in the background of our digital lives, much in the same way calculators do. But it too has a story.

  Born in San Diego in 1930, Earnest was a mathematics and engineering whiz kid. Like many supersmart teens back then (and now), he ended up in Pasadena at the California Institute of Technology. In the late 1940s, “all of the electrical engineering people carried slide rules in scabbards on their hips,” recalled Earnest. The slide rule was the precursor to the pocket calculator, but Earnest “disliked its imprecision—good to just three or four digits.” For greater accuracy, he often performed arithmetic longhand instead.

  In the wake of World War II and the Manhattan Project, scientists were held in the highest regard. Earnest’s father was an electronics engineer, and his mother worked on her PhD “in her spare time,” eventually becoming a professor at San Diego State University. Earnest was on a similar track, but the Cold War had other plans for him. He wasn’t called to the front lines in Korea, thanks to what he says was a military clerical error. But during what would have been his senior year at Caltech, he was sent to the Naval Air Development Center in Pennsylvania.

  The move east marked the beginning of a hopscotching journey between military labs and a stint with the Central Intelligence Agency, all of which Earnest describes as Dr. Strangelove–like inanity interrupted by bursts of exciting computer science. He landed at the Massachusetts Institute of Technology, where he worked on secret programs with nicknames like Typhoon, Project Whirlwind, Bomarc, and the Semi-Automatic Ground Environment. Some of the projects, like long-range antiaircraft missiles, were the precursors to what today is known as the Star Wars program.

  A computer geek in rooms filled with defense experts, Earnest had responsibilities that had nothing to do with military strategy. His role was to work out the equations that became the computational backbone of defense technologies. Doing the math to make these systems operational, said Earnest, gave him a deep understanding of just how flawed they were, so much so that accidental launch wasn’t out of the question. “We wrote a paper that we titled: ‘Inadvertent Erection of the Bomarc Missile.’ That raised some eyebrows,” he said, laughing.

  Still, he couldn’t deny that for cutting-edge research, a government-funded lab was the place to be. Life as a military scientist also afforded him time to pursue side projects, one of which was a cursive handwriting recognition program that became his doctoral thesis at MIT. Long before there was such a thing as a desktop computer or a scanner, Earnest was trying to teach a machine to read.

  Working with a first-generation computer at MIT’s Lincoln Laboratory, he used a special type of pen to write cursive text on the screen, a bit like an Etch A Sketch. From there, the program tried to decipher the different letters, first by determining whether they covered a small area, as with letters like a, n, v, and c; whether they were tall, like b, l, f, and k; or whether they had a stem—p, g, q, y, and j. Once the computer had calculated the string of letters, it would return what it “thought” to be the word or words written on the screen.
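  That first shape-based pass is easy to sketch in modern code. Below is a toy version in Python, with three categories drawn from the letters named above; the full groupings and the function are my own illustrative guesses, not Earnest’s actual program, which worked from pen strokes rather than finished letters.

```python
# A toy version of the shape-based sorting described above.
# The three buckets follow the text; letters beyond the ones it
# names are assigned by guesswork for completeness.

SMALL = set("aceimnorsuvwxz")   # cover a small area, like a, n, v, c
TALL = set("bdfhklt")           # tall letters, like b, l, f, k
STEMMED = set("gjpqy")          # letters with a stem, like p, g, q, y, j

def shape_of(letter: str) -> str:
    """Classify a single lowercase letter by its rough shape."""
    if letter in TALL:
        return "tall"
    if letter in STEMMED:
        return "stemmed"
    return "small"

print([shape_of(c) for c in "spy"])  # ['small', 'stemmed', 'stemmed']
```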

  But a computer can’t read without first knowing words and their correct spellings. For the computer to determine if a particular letter was meant to be a cursive a or o, g or q, Earnest’s program needed a dictionary. So he wrote one. More specifically, he encoded a list of ten thousand correctly spelled words. If the program saw what it calculated to be an s, followed by a p, followed by a y, it could check that letter series against the master list and confirm that this was indeed a word: spy.

  In the process of trying to match strings of letters to words stored on the master list, Earnest’s program often came across things it couldn’t identify. Many of those instances were caused by words that weren’t in the computer’s dictionary. A vocabulary of ten thousand words isn’t too impressive: the average high-school graduate has, roughly speaking, a sixty thousand–word vocabulary.2 Proper nouns also caused problems for Earnest’s program, as did sloppy handwriting.

  But other times, the trip-up was caused by a misspelling. The letter string s-p-i wouldn’t have matched anything on the word list, so it would have caused a program error. Earnest had inadvertently built the first spell-checker: a ten thousand–word master list of correctly spelled words, against which other words were compared. It sounds almost prehistoric today, now that we’re surrounded by whiz-bang technologies like mobile phones that triple as music players, global positioning systems, and personal digital assistants. But Earnest was working on this nearly fifty years ago. For a computing-power reality check, the machine he used for the handwriting recognizer would barely fit on a basketball court.
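  The core of that check is trivially reproduced today. Here is a minimal sketch in Python, with a four-word set standing in for the ten-thousand-word master list; the names are invented for illustration.

```python
# A minimal sketch of the master-list check: a string of letters
# either matches a known word or gets flagged. The tiny set below
# stands in for Earnest's ten-thousand-word list.

MASTER_LIST = {"spy", "spin", "win", "wine"}

def is_word(candidate: str) -> bool:
    """Return True if the letter string matches a correctly spelled word."""
    return candidate in MASTER_LIST

print(is_word("spy"))  # True  -> confirmed: this is indeed a word
print(is_word("spi"))  # False -> no match; a likely misspelling
```

Modern checkers layer suggestions and automatic corrections on top, but at bottom they still run this same membership test.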

  During my visit, Earnest retrieved from his office what looked like a seven-inch filmstrip reel, but octagonal in shape. A green label read: “L. Earnest,” and a blue one read: “Words 7”—the seventh reel of computer tape among the nine that make up the spelling checker. Gingerly, he took the end of the gray tape and pulled it out of the reel, releasing a microburst of dust. After a couple of feet of blank tape, tiny holes began to appear. Back then, computers were programmed using punched holes on long reels of paper, called tape. Each line of holes is a morsel of data representing either a letter or a space between letters. When people talk about bytes and bits, they mean these same patterns, only punched out in a long paper sequence instead of recorded on microchips. Line up enough holes and spaces and you get a small bundle of information. Bundle those bundles and you get a word, or a sentence. Earnest handed me the reel, and it was unexpectedly heavy. The word list that started it all. Sort of.
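  The idea of the encoding can be mimicked in a few lines. The sketch below renders each character as a row of punched (o) and unpunched (.) positions; the 8-bit ASCII mapping is an assumption for the sake of illustration, since tapes of that era used narrower codes of their own.

```python
# Each character becomes one row of "holes": a 1 bit is punched (o),
# a 0 bit is left blank (.). ASCII is used here only for illustration.

def to_tape_rows(text: str) -> list[str]:
    """Render each character as a row of punched and blank positions."""
    rows = []
    for ch in text:
        bits = format(ord(ch), "08b")  # 8-bit binary for the character
        rows.append("".join("o" if b == "1" else "." for b in bits))
    return rows

for row in to_tape_rows("spy"):
    print(row)
# .ooo..oo   's'
# .ooo....   'p'
# .oooo..o   'y'
```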

  In 1965, Earnest took a post at Stanford’s Artificial Intelligence Laboratory, and he brought the word list tapes with him. He was already aware, however, of the program’s flaws. For one thing, all the words on the original list were in the singular form, which proved problematic when reading plural words. Working with a student researcher, he came up with something called a “suffix stripper” to reduce slipups relating to y versus i-e-s, for instance, but the fix was a double-edged sword. “If you have wine, wines, or wins, and you strip the suffix, they all become win,” recalled Earnest, which didn’t exactly help. (One time, when he wrote out merry on the screen, the computer responded with “many, merry” as its possible interpretations. Earnest printed up the many merry image and sent it out on that year’s Christmas cards.)
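  The stripper and its failure mode are both easy to reconstruct. This naive Python version, with an invented suffix list, reproduces exactly the collision Earnest describes.

```python
# A naive suffix stripper of the kind described above, shown mainly
# to reproduce its failure mode. The suffix list and length guard
# are illustrative guesses, not the actual Stanford code.

SUFFIXES = ("ies", "es", "s", "e")  # checked longest-first

def strip_suffix(word: str) -> str:
    """Reduce a word to a crude stem by removing one trailing suffix."""
    for suffix in SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

# The double-edged sword: three distinct words collapse to one stem.
print([strip_suffix(w) for w in ("wine", "wines", "wins")])
# ['win', 'win', 'win']
```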

  Based on the limited vocabulary it was working with, Earnest’s program had an accuracy rate of around 90 percent. That might sound good, and for a baseball hitter or blackjack player it would be astonishing. But for a handwriting recognizer or spell-check program, it’s crummy. Think of it at the sentence level. Every tenth word, on average, would be incorrectly corrected or remain incorrect. “If it were interactive you could tolerate higher error rates because you can select right there and then as you go,” he said. But in batch mode—when the program goes over the whole text after writing is complete—the high error rate was a killer. Earnest’s program had no real-world utility.

  Soon after the short-lived affair with the suffix stripper, Earnest left the handwriting analyzer behind. “It did occur to me that a spelling checker of some kind would be useful because my own spelling was atrocious,” he said. But he had no way or reason to pursue it, and didn’t foresee a personal computer and word-processing revolution coming down the pike. Earnest, who by then had become a lecturer in computer science, moved on to solve other puzzles. A truly useful spelling checker was yet to come, by way of a dictionary controversy in the US and a Communist coup in Czechoslovakia.

  TO LEXICOGRAPHERS, THE 1960s are remembered as the decade of the dictionary wars. In 1961, Merriam-Webster published its Third New International Dictionary, Unabridged. The editor, Philip Babcock Gove, figured his job was to record in the dictionary the language as it was used, not how learned men said it should be used. He dared to include entries such as ain’t, he was less strict about what was considered slang, and he identified, for instance, biweekly as meaning both twice a week and every other week. Flaunt was defined as an acceptable synonym for flout, based on the Merriam-Webster staff’s assessment that in American English prose, many writers were using it that way. Gove wasn’t out to flaunt conventional definitions. He was a realist. No matter how much some people may loathe the sound or look of nucular, irregardless, incentivise, and lite, that doesn’t change the fact that other people are bringing these word forms into the language.

  Gove’s staff at Merriam-Webster brought plenty of prescriptive tradition to the dictionary, and the final product wasn’t nearly as renegade as critics suggested. But some entries were more than enough to infuriate language neocons. It turns out that people not only get jumpy when you talk about language change; they even get jumpy when you talk about documenting changes that have already occurred. Critiques soon appeared in Time, the New Yorker, the Atlantic, and other prestigious publications, accusing Gove of failing miserably in his responsibility to uphold a standard of English language correctness.3

  One person who blew his top over Gove’s descriptivism was James Parton, owner of the history magazine American Heritage. Echoing the decay-of-the-language laments repeated for centuries, Parton tried to buy out Merriam-Webster. He wanted to put out a new dictionary that would reverse, or at least rein in, Gove’s overly inclusive take on acceptable American English. When his takeover failed, Parton contracted with the prestigious publishing firm Houghton Mifflin Company in Boston to launch what would become the American Heritage Dictionary.

  To begin making a new dictionary, Houghton had to decide what words should be included. This task is complicated. For the Oxford English Dictionary, the answer is simple: all of them. That’s because the OED has adopted a historical approach to language, never jettisoning antiquated words or forms of words. Houghton’s people, however, wanted the American Heritage to restrict entries, as a Merriam-Webster dictionary does, weeding out words that have gone by the wayside so as to provide a more relevant and finite version of the lexicon.

  But what’s the best way to take the measure of a language? If you had wanted to glean the most accurate picture of English in, say, the nineteenth century or before, a diligent approach would have been to read Johnson’s dictionary and all the works of as many writers as possible, and then spend some time—somewhere between ten months and your entire life—in the streets, playhouses, and taverns of towns wherever English was spoken, carefully listening to words and how they were used. But to take the measure of the language in the modern era, to really capture it, to the extent that’s even possible, digital databases are the way to go.
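  The database method, reduced to its essence, is counting. The Python sketch below tallies word forms across a toy corpus and keeps the ones frequent enough to merit an entry; the corpus and the threshold are invented for illustration.

```python
# Counting word forms across a corpus: forms that clear a frequency
# threshold earn a dictionary entry. Corpus and threshold are toys.

from collections import Counter

corpus = [
    "the editors read the new books",
    "new words enter the language",
    "old words fall by the wayside",
]

counts = Counter(word for line in corpus for word in line.split())
threshold = 2
entries = sorted(w for w, n in counts.items() if n >= threshold)
print(entries)  # ['new', 'the', 'words']
```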

 
