Ridicule or attack on any religious or racial group is never permissible.
No comic magazine shall use the word “horror” or “terror” in its title.
The treatment of love-romance stories shall emphasize the value of the home and the sanctity of marriage.
All scenes of horror, excessive bloodshed, gory or gruesome crimes, depravity, lust, sadism, masochism shall not be permitted.
All lurid, unsavory, gruesome illustrations shall be eliminated.
In other words, titles such as “Casper the Friendly Ghost” were OK, but “Betty Boop” in a halter top or anything with crime or zombies was not.
Was there any real proof that comics caused juvenile delinquency? No. But fear of and anxiety about something different was enough to blame a burgeoning industry for ill-behaved children—who, it turns out, had been around long before comic books were invented.
Computer Shock
Tracing the reaction to the explosion in computing power and the expansion of the Internet is somewhat like rewinding technochondria’s greatest hits. In the short span of a few decades, we’ve seen all the familiar fears and skepticism rear up again—from doubts that computers would produce any benefit to the belief that the technology will harm or destroy our kids.
In the 1970s, as computers grew smaller and more powerful and terminals began to appear on workers’ desks, many experts still couldn’t anticipate the revolution ahead.
Kenneth H. Olsen was an MIT-trained engineer who founded Digital Equipment Corporation in 1957 and helped build some of the early successful minicomputers, which allowed individual workers to take advantage of computing power by using a terminal connected to a midsize computer. In the early days, Olsen said, “we thought even children could understand computers. We thought computers were fun and we thought they could change the world. But we had no idea that they really would do just that.”
Still, even this pioneer and innovator was skeptical about how far the trend would go, telling the magazine Financial World in 1976—the same year the very first Apple computer went on sale—that he didn’t really see a place for computers in the home. “While a computer might be great and educational for a smart kid, I think we already have too much automation at home,” he said. “In general, our lives should be simpler.”
Not surprisingly, Digital Equipment mostly missed out on the personal computer boom.
The potential of the Internet generated a similar response. It started out as a way to allow academics and scientists to share information, and back then, it was slow and clunky. But even as it began to bring in all kinds of users, there were those who dismissed its use in the same way the monks had shrugged off the printing press.
In a classic article in Newsweek in 1995, Clifford Stoll, an astronomer and author, threw cold water on all the dreamy possibilities that the online world seemed to have: “Visionaries see a future of telecommuting workers, interactive libraries, and multimedia classrooms.8 They speak of electronic town meetings and virtual communities. Commerce and business will shift from offices and malls to networks and modems.”
To this, Stoll had a one-word response: “Baloney.”
All those voices online would simply make a lot of noise, he scoffed. And reading or learning online? Preposterous. “Nicholas Negroponte, director of the MIT Media Lab, predicts that we’ll soon buy books and newspapers straight over the Internet. Uh, sure,” he wrote.
Just fifteen years ago, he couldn’t possibly see how we might buy airline tickets, make restaurant reservations, or negotiate purchases online. And, he added, “Who’d prefer cybersex to the real thing?”
Who indeed?
Stoll was sure that human contact was necessary for sales, communication, and education. Yet now he looks as out of step as those writers who predicted that the telephone and the phonograph would kill the arts and human interaction.
What he missed—and what so many of us have trouble grasping—is how difficult it is to foresee exactly what changes a new technology ultimately will bring. As with the printing press, the biggest changes with computing and the Internet took place when people could take the Web with them rather than having to go somewhere to use it.
Just as pocket-size books brought reading to a greater audience, the handheld BlackBerry brought e-mail to a gadget that easily fit in a person’s pocket and made it an indispensable part of daily life. As laptop sales have grown faster than desktop sales and portable machines have gotten cheaper and lighter, the Web has grown exponentially. The Internet, which now has nearly 2 billion users, had only 16.5 million users fifteen years ago. Similarly, in 1988, as mobile phones began to shrink in size and price, there were only about 4 million active mobile phones in the world. By 2008, when phones were hardly bigger than a pack of gum, that number was 3.8 billion. In 2009, it reached 4.6 billion, or almost 70 working mobile phones for every 100 living persons worldwide.
The ubiquity of all these gadgets has created a new round of fears and assertions that computers and the Internet are responsible for a raft of societal ills, harming children and adults. For instance, for the better part of the last decade, some teachers and concerned parents have argued that Internet WiFi is damaging to our health, even calling the output from electronics and WiFi “electrosmog.” In 2008, some schools and offices banned all forms of wireless Internet—even though there isn’t a shred of evidence that WiFi specifically is responsible for any health issues. Lakehead University in Canada, which adopted a ban, warned of “potential chronic exposure for our students” to electromagnetic radiation and asserted that the risks of WiFi are equal to those of secondhand tobacco smoke. Yet studies show that older technologies such as televisions, microwaves, and radios emit stronger electromagnetic waves than do WiFi hubs.9
There are also concerns about the ergonomic effects of computers, alarm about the corrupting impact of Google, and worries that the next generation of computer-addicted children will be unable to navigate society properly.
A wave of books has argued that computing, the Internet, and screens are going to bring with them the demise of society and a youth so corrupted that it will only be able to watch MTV and look at picture books.10 In the mid-1990s, in the book The Gutenberg Elegies: The Fate of Reading in an Electronic Age, Sven Birkerts questioned whether the digital age would produce illiterate children who are unable to read long-form literary works and are capable only of passively watching images on screens.
Maggie Jackson, in Distracted: The Erosion of Attention and the Coming Dark Age, argues that multitasking is so bad for society that it could put us back in the Dark Ages, unable to interact with one another and incapable of experiencing meaningful and intimate relationships.
Lee Siegel, a cultural critic, in Against the Machine: Being Human in the Age of the Electronic Mob, suggests that heavy Internet users are destined for a life of technological solitude so bleak that our humanity and individuality could dissipate into the ether.
The members of a group called the Alliance for Childhood are well-respected psychiatrists and childhood development professors who regularly release reports alleging that computers are ruining our youth. The group’s mission statement proclaims, “The lure of electronic entertainment diminishes active play and work and the learning of hands-on skills,” and when it comes to technology and children, “the losses often outweigh the gains.” An earlier report by the group, titled “Fool’s Gold: A Critical Look at Computers in Childhood,” concludes: “Computers pose serious health hazards to children. The risks include repetitive stress injuries, eyestrain, obesity, social isolation, and, for some, long-term physical, emotional, or intellectual developmental damage.”
As should be clear by now, such worries are part of the territory. In all fairness, sometimes they are legitimate: The printing press did shift power away from the clergy and monarchs, and the Internet is giving voice to a broader range of people, kooks and creeps included. It’s perfectly normal and probably healthy to examine whether these changes are good or bad. But we’ll also no doubt look back at many of the debates a generation from now and see that a lot of these fears were inflated and maybe a bit ridiculous, too.
Long Form Has a Long Life Ahead (31 Characters)
When we adopt a new way of doing something, we also have to give up the old comfortable ways we’re accustomed to, and that kind of change frequently comes with its own anxiety.
In recent years, an increasing amount of information seems to be streaming into the world byte by byte—in text messages on your phone, tweets and status updates from your friends, and headlines swimming across your television screen and on your Google home page. By early 2010, 50 million “tweets” were moving each day via Twitter, the social networking site where people can send messages of up to 140 characters at a time to “followers.” More than 700 million times a week, friends shared shortened links to videos, stories, and websites. The sheer volume of these staccato messages, coupled with the flood of information coming at us from a gazillion different directions, has created yet another worry: Will long-form content—the snacks and meals of an educated society—die, leaving a culture that can only graze in small byte-size pieces?
No. Absolutely not.
As we’ve seen, throughout history we have tended to dramatize the death of one form of communication when another is being born.
Sure, there is clearly an abundance of short-form material, but let’s be realistic: This isn’t the first time we’ve communicated in a few words. Newspaper headlines have never offered a lot of verbiage. Radio stories and television stories are surprisingly brief when written out. And honestly, when was the last time you abandoned a book because the table of contents sated your thirst for knowledge?
Maybe you don’t read as many books or watch as many hour-long television shows as you used to because you are doing other things, such as playing video games and catching up with DVDs or downloads on your computer.
Given all the hue and cry, it’s important to take a different look at history. Before there were even screens in our living rooms, the same worries reared their heads. There was a time in the 1920s when cultural critics feared Americans were losing their ability to swallow a long, thoughtful novel or even a detailed magazine piece.
The evil culprit: Reader’s Digest.
Before Twitter, There Was DeWitt Wallace
In 2009 I gave a talk titled “The Future of News” at several conferences across the country. The presentation usually lasted twenty minutes and covered most of the innovative work happening inside the New York Times as well as other technological innovations in journalism. I tried to assure the conference attendees that long-form journalism might look different from how it does today, but it would survive well into the future.
Without fail, at the end of each talk someone would cite Twitter or another short-form technology as the signal that the death of the long form was upon us. At an event in Boston, one attendee argued that “one day, there won’t be any more long-form books or news articles—instead everything will be the length of the Reader’s Digest.”11
I offered the number of investigative journalism books on the bestseller lists and the high number of page views for long-form articles on the Times website as a rebuttal, but the question got me thinking. Could Reader’s Digest really be the model for our future? That question led me to DeWitt Wallace.
Early in the twentieth century, while recuperating from a World War I injury, young DeWitt Wallace was confined to a hospital bed in France for more than four months. He had little else to do but read stacks of magazines from America. Toward the end of his hospital stay, he concluded, rightly, that most people were too busy to read all the wonderful material printed each month. But he came up with a solution: He could condense the best articles and reprint them together in a special “reader’s digest.”
On his return to the United States, Wallace put together a business plan for a magazine that would condense the best articles from American magazines. The notable publishing and business tycoons of the day dismissed his idea as “too niche,” and bankers refused to fund it, saying Reader’s Digest couldn’t possibly gain an audience of more than 300,000 readers.
But Wallace was confident and passionate and didn’t give up easily. He found a business partner—who later became his wife—and in February 1922, his magazine went to press. The first issue of Reader’s Digest contained thirty-one articles, one for each day of the month, all selected and edited by Wallace and condensed into one or two pages in his pocket-size reader.
By 1929, circulation had grown steadily to 200,000. Then it exploded, expanding to just under 1.5 million by 1935, a sevenfold increase in six years. Had Wallace found a magic potion? Was the public concluding that condensed content—reading lite—was the future?
Not exactly.
Sure, the articles were shortish, in big type, and on small pages, so they felt easy to read. But the length of the articles wasn’t the attraction. James Playsted Wood, who wrote a history of Reader’s Digest, noted that “above all, the magazine accentuates the positive, minimizes the negative and strikes a note of hope whenever possible.” People weren’t buying Wallace’s magazine for its shorter stories. Rather, they wanted a homogeneous, religiously and politically conservative magazine. And that’s what they got, with stories such as “Whatever Is New for Women Is Wrong,” “What People Laugh At,” and “Is the Stage Too Vulgar?”
The digest drew criticism over story length, too, after a reviewer pointed out that some of the “condensed” versions of stories were actually longer than the original magazine articles. Critics also believed that the articles weren’t chosen for their literary or journalistic merit but for their simplicity and for whether their point of view aligned with Wallace’s conservative bent.
By the 1930s, some magazine publishers were so annoyed with Reader’s Digest that they threatened to block Wallace from condensing their articles. They believed the magazine wasn’t just a low-calorie version of their content but a rewriting of work that didn’t match Wallace’s views.
In response, DeWitt Wallace decided to hire his own writers to create his own stories—but the practice evolved in an unusual way. At first Wallace hired a few writers and started to assign and create original stories for the magazine. But he soon realized that he was changing the nature of his publication. So instead, he began to “plant” longer stories in other publications, paying other magazines to run his writers’ work. Reader’s Digest could then excerpt those planted articles. Wallace came to understand that he was providing “first-person editing,” stories plucked and polished to satisfy his particular audience, just as today’s explosion of content allows you to develop your own personalized newsreel.
In 1945, the New Yorker published a five-part investigative series about the Reader’s Digest’s strange “planting” practices and noted that the magazine had, over the course of six years, run 720 condensed reprints from other publications, 316 articles written solely for its publication and described as such, and 682 stories written specifically so that they could be excerpted in the digest.12 In other words, nearly 1,000 articles had been assigned and written at Wallace’s direction. The New Yorker discovered that more than sixty publications had been paid to run articles so that they could be reconstituted in Reader’s Digest. Later, it was discovered that through the 1940s and 1950s, three out of five articles in Reader’s Digest were actually original content commissioned and edited by Wallace.
Although it wasn’t necessarily clear sixty years ago, it’s clear now that the magazine’s popularity reflected its subject matter. Readers weren’t abandoning long stories for short ones; they were gravitating to Wallace’s touch with happy-go-lucky, sanguine, politically conservative articles, well packaged in a little pocket reader. In short, just as with pornographic movies, the appeal of Reader’s Digest was in the overall experience.
Even now, Reader’s Digest, with its byte-size approach, has a circulation of about 8 million. The full-meal New Yorker, by contrast, has 1 million.
Irtnog
There was another lesson that grew out of the era’s fears that Americans would ditch their novels and thoughtful magazine pieces for the slick, short fare of the Reader’s Digest: In the rush to adopt new ideas and innovations, we sometimes go overboard, driven not so much by the joy of discovery as by the nagging fear that maybe we will miss something important.
In a clever satirical piece in the New Yorker in 1938, the influential essayist E. B. White captured this classic human response.13 White told the make-believe story of readers who were so determined to keep up with the exploding number of magazines and newspapers that they sound like people e-mailing and texting on their way to work. They “read while shaving in the morning and while waiting for trains and while riding on trains.… Motormen of trolley cars read while they waited on the switch. Errand boys read while walking from the corner of Thirty-ninth and Madison to the corner of Twenty-fifth and Broadway.”
The Reader’s Digest offered an alternative, White wrote, and others started digests too, hoping to capture the original’s success. “By 1939 there were one hundred and seventy-three digests, or short cuts, in America, and even if a man read nothing but digests of selected material, and read continuously, he couldn’t keep up,” White wrote, continuing the gag. “It was obvious that something more concentrated than digests would have to come along to take up the slack.
“It did. Someone conceived the idea of digesting the digests. He brought out a little publication called Pith, no bigger than your thumb.”
Still, that wasn’t enough. So “Distillate came along, a superdigest which condensed a Hemingway novel to the single word ‘Bang!’ and reduced a long article about the problem of the unruly child to the words, ‘Hit him.’ ”