I Live in the Future & Here's How It Works: Why Your World, Work, and Brain Are Being Creatively Disrupted

by Nick Bilton


  My desk was a constant reminder of the speed with which these devices were being developed and marketed. I sat amid a clutter of opened boxes, packing slips, and bubble wrap. On three long tables behind me sat almost every e-reader device made in the last ten years. We referred to the pile of buttons, screens, and power cords as the “gadget buffet.”

  Playing and experimenting with these wacky and innovative inventions helped us alert people in the company to the perpetually changing world of devices. They also helped us understand the direction of the marketplace and how each product would affect news content. For instance, if people were to start reading news on their televisions, we needed to be ready for that transition to a bigger screen and figure out how to organize and display New York Times stories accordingly.

  Though my colleagues and I often saw eye to eye on what worked and what didn’t, we couldn’t agree on the shape and size of that just-right reading device. The more I listened to questions during Q&A sessions or heard from people at cocktail parties or conferences, the clearer it became that a variety of devices might be needed, some with buttons, some with touch screens, some flexible, some rigid, some big, some small. Each person seemed to have a different preference. I came to the conclusion that just as we have TVs that range from pocket-sized to bigger than many New York City apartments, we probably will need a plethora of different kinds of readers too.

  The assumption is that bigger screens will be better, but that may not be the case. Cheryl Bracken, a professor at Cleveland State University, has spent the last decade studying the way we process media content, focusing on whether screen size and the quality of a screen are really relevant in viewing content. Why, she wondered, would someone watch a movie on a 2.5-inch iPod screen when she could sit on the couch and watch it on a 42-inch TV? But that was just an assumption based on her personal experience. Bracken wanted to understand if the next generation, the digital natives, felt the same way.

  She and her researchers recruited ninety-eight undergraduate students to test their viewing experiences on different kinds of screens, to find out if one experience was more enjoyable than the other and whether the viewer’s understanding of the narrative was lost when a story produced for a large screen was viewed on a small screen.2

  The students were shown two different clips from a movie on both an iPod with a 2.5-inch screen and a TV with a 32-inch screen. Each clip was approximately ten minutes long. One clip consisted of longish, slowly developing scenes; the other clip was much faster, with rapid edits and a high-speed car chase.

  In theory, at least, the participants would find watching the movie on a larger screen more engaging and immersive than using the smaller version. But in fact, the end results were significantly different. Students who watched the film on the iPod found the experience almost twice as immersive and engaging as those who viewed the clip on a larger television.

  Why? Bracken says the study found that the headphones used with an iPod effectively closed out the rest of the world, helping viewers focus more intently. Further, the subjects holding an iPod felt a greater sense of control over the storytelling and watching experience because they literally held the experience in their hands. Holding a device in your hands allows you to move the device to fit your viewing preferences; a large TV mounted on a wall requires you to move to accommodate it. In other words, screen size, sound, and comfort weren’t the defining factors in the experience for these digital experts. Surprisingly, control over the process and experience, the ability to tune out distractions, and an immersive experience turned out to be hugely important.

  That doesn’t mean that tiny devices are the answer for everyone. One person who doesn’t have a desire to watch movies on small cell phone screens is David Lynch, a director who has been nominated for twelve Oscars and directed some big-name movies, including Blue Velvet, Twin Peaks, and Mulholland Drive. Not only does Lynch not want to watch movies this way, he thinks anyone who chooses this experience won’t get the full effect he intended when he directed the movie. During a recent television interview, he indignantly scorned people who watch movies on their phones, saying, “now if you’re playing a movie on your telephone, you will never, in a trillion years, experience the film. You’ll think you have experienced it, but you’ll be cheated.” After a brief pause, he yelled into the microphone, “It’s such a sadness that you think you’ve seen a film on your fucking telephone! Get real!”

  OK, we can assume that Lynch won’t be watching the next Super Bowl on his iPod. But that’s the beauty of these digital experiences. I’m perfectly happy watching Mulholland Drive on my iPhone. Lynch might prefer a movie theater. You might be comfortable somewhere in the middle, sitting on your couch at home or watching on your laptop. Digital affords options and preferences, not generalizations.

  But there is a ceiling to these small-screen devices. One limitation of small gadgets comes from the optical accuracy of our vision, what scientists call human visual perception. When screens or fonts or details are very small, our eyes strain to see them clearly, often without success. Eventually the size affects our attention span. That’s why we get headaches from reading really small print or looking at something with too much detail for a long period.

  So if people can enjoy a movie on a two- or three-inch iPod screen, how small is too small? Can you happily watch the latest episode of Entourage on a screen the size of a postage stamp? What about your thumbnail?

  Researchers at the University of Portsmouth in England had the same question when it came to students and learning. At first they wanted to understand if mobile phones could be used in schools for teaching.3 And if so, was there a cutoff point where the diminished size started to affect the experience? The researchers chose a group of young children in school and tested them on what they learned from the supersmall screen.

  The students were shown different videos on mobile phones with three different screen sizes and then tested to see how much information they retained. The largest screen was a little less than four inches wide, the medium was about the size of an iPod, and the small was a little more than one and a half inches wide.

  One of the videos the students were shown illustrated how to fold a piece of origami. Afterward, the students were asked to try to perform the same task from memory. The students viewing the instructions on the medium and large screens retained significant amounts of information, and the screen size didn’t affect their learning or memory of the video or their enjoyment of the exercise. With the smallest screen, however, the students had just as much fun watching the video as they did on the other two screens, but their ability to recall information from the screen was much lower.

  Nipan Maniar, who led this study in the United Kingdom, said that research consistently shows that students who watch educational videos on medium or large screens retain significantly higher amounts of information. He sees the mobile phone, with its midsize screen, becoming an integral part of the classroom over the next ten years; teachers will be able to hand out coursework wirelessly, communicate one to one, and even allow the students to learn in a highly personalized manner that might incorporate video, reading, multimedia, and games.

  Sound familiar? It would be a classroom for Me!

  Every person’s mind is built completely differently, totally unique from that of everyone else. Asking twenty students to read the same textbook at the same time is like expecting that group of students to be able to run a mile at the exact same speed or to have an equal ability to paint a still life. Our brains are simply not built that way.

  Using screens and digital teaching will allow kids to engage at their own pace in a collaborative fashion that paper just can’t provide.

  What the Future Will Look Like: 1, 2, 10

  Smart phones are now all the rage, and a large proportion of my work over the last several years has revolved around these mobile devices. And with good reason: By the end of 2009 there were nearly 4.6 billion active mobile phones in the world.4 With the entire global population at 6.6 billion, that means the penetration rate for mobile could be as high as 70 percent. (Some people own two phones.) And we take these little gadgets with us everywhere, slipping them into and out of our purses or pockets several times a day. As they’ve evolved, so has our dependence on them.

  Several technologists, myself included, believe that the mobile phone probably will outpace desktop computing in the next five years as the central entrance point to the Web. But the mobile phone doesn’t signal the demise of the desktop computer or the large television screen sitting in your living room. Instead, these Web-enabled devices will start to talk to one another and interact in ways that might seem like science fiction today.

  At New York University I teach a course on this topic called “1, 2, 10.” These simple numbers represent the distance a screen is from your eyes. Cell phones and e-books are approximately one foot away when you hold them in your hands. Computer screens are about two feet away. The average television in the living room is, you guessed it, ten feet away. The idea of the course is to explore how content can automatically follow you from screen to screen and place to place, adapting and changing itself to fit each device and wherever you happen to be.

  The 1, 2, 10 concept presents incredible challenges. Designing interfaces for a television screen, where you’re usually sitting ten to fifteen feet away from the image, is a completely different challenge from designing an interface for a mobile phone that is about the size of a chocolate bar. As I teach my students in the class, on top of these vast differences in size, it’s imperative that consumers be able to switch seamlessly between these screens without even realizing they’ve carried the same experience to a different one.
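  To make the idea concrete, here is a tiny sketch, in Python, of how software might pick a presentation profile based on viewing distance. It is purely illustrative: every device name, font size, and threshold below is an assumption of mine, not a specification from the course or from any real product.

    from dataclasses import dataclass

    @dataclass
    class Presentation:
        font_size_pt: int       # type gets larger as the screen moves farther away
        items_per_screen: int   # the ten-foot interface shows fewer, bigger items
        lean_back: bool         # the television favors video over dense text

    def presentation_for(distance_feet: float) -> Presentation:
        """Pick a layout profile for a screen at the given viewing distance."""
        if distance_feet <= 1:      # phone or e-reader held in the hand
            return Presentation(font_size_pt=12, items_per_screen=8, lean_back=False)
        if distance_feet <= 2:      # laptop or desktop monitor
            return Presentation(font_size_pt=14, items_per_screen=20, lean_back=False)
        return Presentation(font_size_pt=36, items_per_screen=5, lean_back=True)  # living-room TV

    print(presentation_for(10))     # the same story, restyled for the ten-foot screen

The point is not the particular numbers but the shape of the problem: one piece of content, three very different presentations, and the viewer should never have to think about the translation.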

  Imagine if you’re reading an article about a new food recipe on your computer at work. When you get home from the office, your television should know that you’ve read the article and automatically show you video clips of the recipe on this new screen. At the same time, as your phone is in the same room with you, with the flick of a button the television can send the recipe to your mobile phone so that you can pick up ingredients at the grocery store the next day. If you want to take this one step further, you can imagine your fridge notifying your phone which ingredients you already have for the recipe.

  I believe that technology that responds to your precise location at the moment will be part of the next wave of products we soon see enter the electronics marketplace, allowing for more customization and personalization of information, entertainment, and advertising. For instance, if I am reading the newspaper at four p.m. on a Friday in the Park Slope section of Brooklyn, the content I see should reflect the time of day (near dinner), the place (what’s nearby), and more. The news feed I’m reading should know what I’ve already read that day and what I haven’t. If I don’t like sports, I shouldn’t see articles about sports. It should factor in what my friends have read and what’s being discussed on my social networks. Most important, these systems should do this without my having to instruct them or tell them anything.
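  A rough sketch of that kind of filtering, again in Python and again with entirely made-up field names and weights, might look like this. It simply scores each article against the time, the place, my reading history, and my friends, then hides anything that scores zero.

    from datetime import datetime

    def score_article(article, reader, now, location):
        """Higher score means the article appears earlier in my feed."""
        if article["id"] in reader["already_read"]:
            return 0.0                                   # never resurface what I've finished
        if article["topic"] in reader["disliked_topics"]:
            return 0.0                                   # no sports if I don't follow sports
        score = 1.0
        if article.get("neighborhood") == location:
            score += 2.0                                 # what's nearby right now
        if article["topic"] == "food" and 16 <= now.hour <= 20:
            score += 1.0                                 # it's getting close to dinnertime
        return score + 0.5 * len(article.get("friends_discussing", []))

    reader = {"already_read": {"a1"}, "disliked_topics": {"sports"}}
    articles = [
        {"id": "a2", "topic": "food", "neighborhood": "Park Slope", "friends_discussing": ["m"]},
        {"id": "a3", "topic": "sports"},
        {"id": "a1", "topic": "politics"},
    ]
    now = datetime(2010, 1, 1, 16, 0)                    # a Friday at four p.m.
    scored = [(score_article(a, reader, now, "Park Slope"), a["id"]) for a in articles]
    print([aid for s, aid in sorted(scored, reverse=True) if s > 0])   # ['a2']

In a real system the signals would be far richer, but the principle is the same: the feed does the sorting so I don’t have to.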

  In the same vein, whatever you’re watching or working on could stay with you, moving from computer to phone to television or actively appearing in a different context on all three if you prefer. Consider the frustration of my friend Michael, with whom I worked in the Times research labs. Michael showed up to work one Monday morning, and when I asked how his weekend was, he explained that it was a little frustrating. He told me that he was watching the final innings of a baseball game when a friend invited him to a bar a few blocks away to watch the rest of the game and share a few drinks. Michael wanted to see the friend but didn’t want to lose the thread of the game. “I really wanted the content to follow me, for my phone to know that I was leaving my house and to know that I was watching the game on my TV,” he said. “My phone should know all of this and send me updates as I walked to the bar. When I arrived at the bar, my phone should be aware that I am back in front of a TV and stop updating me with the scores.”

  It’s not an unreasonable idea or the utopian fantasy of a technophile. In fact, Michael and I decided to build a rudimentary version of a similar experience. But instead of a baseball game, we used New York Times news articles as our muse. A mainstream version of this technology doesn’t exist yet, so we had to do a little tinkering and hacking.

  To start, we took a cell phone, placed an RFID chip inside, and then attached an RFID reader to our computers. An RFID (radio-frequency identification) chip is a tiny electronic chip that stores little pieces of information, which can be transferred wirelessly to an RFID reader device that interprets the identity of the chip. Many businesses, mine included, give cards with RFID chips to employees so they can enter their office buildings without using a key. RFID chips are also in some credit cards so that you can wave your card in front of an ATM machine instead of swiping it through a scanner. Using these chips and our mobile phones, Michael and I were able to let a computer know we were there just by placing our phones on the desk.

  It was a simple hack to keep track of a person’s presence and location: Place your phone on the desk, and the computer knows you are there. Pick up your phone and walk away, and the computer knows that you left. Using this detection, Michael and I wrote some code that kept track of the articles we were reading on NYTimes.com and could automatically pass the articles back and forth between the phone and the computer without our having to do anything. So if you’re reading an opinion article by Nick Kristof and you’re halfway down the page, we know you are not finished with the article, and when you walk away from your desk, the rest of the story will automatically appear on your phone. We conceptualized scenarios that would take this further. Imagine if you got into your car and your phone automatically started playing the audio of the article or if you came home and a 3-D avatar started reading the rest of the piece to you on your television.
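  For readers who want to see the mechanics, the following Python captures the rough logic of our hack. The real prototype polled actual RFID reader hardware and talked to NYTimes.com; here the reader is simulated with a stub, and every name and value is an illustrative assumption rather than our production code.

    PHONE_TAG = "nicks-phone"                                     # the ID stored on the phone's RFID chip
    reading_state = {"article": "kristof-op-ed", "scroll": 0.5}   # halfway down the page

    def send_to_desktop(state):
        print(f"Desktop opens '{state['article']}' at {state['scroll']:.0%}")

    def send_to_phone(state):
        print(f"Phone opens '{state['article']}' at {state['scroll']:.0%}")

    # Simulated polling cycles from the RFID reader on the desk:
    # the phone sits there for two readings, then gets picked up.
    readings = [PHONE_TAG, PHONE_TAG, None]
    was_present = False
    for tag in readings:
        present = (tag == PHONE_TAG)
        if present and not was_present:
            send_to_desktop(reading_state)    # phone placed on the desk: continue on the big screen
        elif was_present and not present:
            send_to_phone(reading_state)      # phone picked up: the unfinished article follows you
        was_present = present

That is essentially all there was to it: detect presence, remember where you are in the story, and hand the state to whichever screen you are closest to.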

  Right now, a lot of this is wishful thinking. First, many of these devices still aren’t connected to the Web. The TV is on a cable network, the mobile phone is on a cellular network, and the computer is connected to a separate Internet provider. But when all these experiences move to the same network, the Internet, they can easily start talking to one another. Even now we are starting to see a new wave of cars that are connected to the Web and can notify you via e-mail when it’s time for an oil change.

  This three-screen concept has been on its way for years. I can check my e-mail from my laptop and my phone. If I delete an e-mail on one of those devices, it will be deleted on all of them. I can listen to music on my TV, laptop, music player, or phone. But right now, I have to load the music separately to make that happen. What Michael wanted was for his phone to actually talk to his television and vice versa. Just as millions of people are paying $25 a month for Internet services on their smart phones, Michael’s wishful thinking is another example of the kinds of experiences people would willingly pay for if they found the results useful and valuable in their daily lives.

  What the Future Will Look Like: People Pay for Experiences, Not Content

  Daily, we see examples of great experiences that people are clearly willing to pay for—important, eye-opening investigative stories in the form of nonfiction books or newspaper articles; absorbing movies that bring people in droves to theaters; mind-blowing music concerts; moving novels; and of course porn.

  Often, you don’t even need special technology or an unusual innovation: There’s that coffee I buy at my favorite Brooklyn café, where I pay for consistency and convenience. In other instances, it’s adding something to an already existing product: Some Times readers pay concertlike prices to attend the New York Times–created lecture series built around some of the paper’s best-known writers that brings in sold-out crowds. I pay for the New Yorker magazine, which consistently offers enthralling prose no matter whether I experience it in print form or digitally. For kids, this can come in the form of a traditional television experience melded with new media. Take the iCarly show, the hottest thing on television for kids and tweens, which uses a filming technique developed by MTV in the late 1980s to produce a fast-paced, engrossing show. Quick cuts, multiple angles, and sometimes a first-person point of view in which the screen is meant to look as if the viewer is holding the camera help keep young viewers involved, as does social networking. Just like stars of blue movies who chatted with viewers and shared little details of their lives, the teen characters on iCarly talk to fans online through social networks and the show’s website, continuing the story and conversation with their audience long after the thirty-minute episode has ended.

  Given how easy it seems to be to identify something that rises above the ordinary, why is it so infuriatingly difficult to figure out the right kinds of great experiences that incorporate and make full use of new technologies? If great content can be made meaningful, why does future revenue still seem so murky for so much of the media world?

  Consider the unfolding battle between the many technologies and approaches to book publishing. It seems pretty clear that some time in the future, paper will fall by the wayside, becoming more expensive to produce and distribute than digital screens, and a good many of us, if not most of us, will read books on some kind of gadget. But with so many publishing companies experimenting with digital books, the best experience—or even a really good one—is far from clear.

  Although we don’t know what will work, a world of Me Economics and the shrinking cost of divergent hardware options probably will mean there will be a choice of reading devices to fit your preferences. Consider the approach online booksellers have taken so far. Amazon.com originally took the low road, offering a simple black-and-white reader and a big inventory of electronic books on the assumption that simplicity and price would be the main drivers. With its $9.99 price tag on most books, it actually loses money on almost every sale, according to New Yorker media writer Ken Auletta. Amazon believes that a low price will build market share and consumer loyalty. Already, Auletta said, Kindle readers buy far more books than they did when they were purchasing print alternatives.5

 
