I Live in the Future & Here's How It Works: Why Your World, Work, and Brain Are Being Creatively Disrupted
The first video-game system I owned was the Atari 2600. The Atari debuted in 1977 and made its way into my home when I was five years old, in 1981. I don’t remember a lot about that year, but I do remember holding that Atari joystick in my clammy little hand, excitedly batting a square, grossly pixelated ball across a screen with my friends. I played games like Pong and Space Invaders. Today they are artifacts of game history, but at the time they stimulated my mind to no end.
The Atari’s game controller was a simple, almost primitive device. On the top of its square-shaped paddle was a single joystick. The upper-left-hand corner was home to a single orange button. That was it: one stick, one button.
Today, a single controller for the video game system in my living room has fourteen buttons and three multidirectional joysticks. I still have only ten fingers, but today's controllers have places for seventeen of them, never mind that I also have to hold the controller. Yet when I sit down to play video games, I don't fall into a panic or become overwhelmed by all the buttons and joysticks. I just play. My brain and the technology have both adapted.
A multitude of studies, going back thirty years, show that my experience isn’t isolated—video games are actually extremely stimulating to the human brain.
One of the earliest and most famous studies took place in 1991, when Richard Haier, a psychologist at the University of California–Irvine, studied the newly released game Tetris.12 Haier recruited a group of participants between the ages of nineteen and thirty-two who had never played Tetris before. Over an eight-week period, the participants were asked to play Tetris twice a week. Because functional MRI scanning was not yet a possibility, they were asked to go through a positron emission tomography (PET) scanner, which measured how glucose was being consumed along pathways within the brain to reveal where the brain was being stimulated.
Haier remembers that it was easy to find a group of students for the study who had never played video games before. “It was the early nineties and not many people had heard of Tetris yet, so it was easy to recruit new players for the study,” he said.
At the start of the study, Haier and his research team explained what Tetris was and how to play, and then the participants’ game scores were monitored as they played. At first their scores were extremely low, just five or ten points each time they played. But numerous areas of their brains showed a dramatic increase in activity in many different functions. Haier explained that these data showed the games were extremely stimulating to the players.
As the study progressed, the participants’ game scores increased pretty dramatically, climbing to more than 100 points per game. But as the scores increased, the amount of brain stimulation or activity decreased, to the surprise of the researchers. The PET scans that previously showed glowing activity returned muted levels of stimulation for many parts of the brain, although other parts remained busy. The brain had adapted fairly quickly to this new visual and interactive form of storytelling.
Although Tetris involved numerous different brain tasks (hand-eye coordination, spatial observation, planning, sight, sound, and more), the brains of novice players quickly figured out how to master each of these jobs.
Haier pointed out that the brain benefits from stimulation. “Clearly video games are impacting our brains—this is why people are so emotional about them,” he explained. “However you are engaged in the world, the brain is engaged. This is why parents buy toys for the crib, things that move around and make noise.” Haier reiterated that “the brain is very adaptable and every generation basically has new stimuli that previous generations had not encountered.” Video games are not bad for our brains, he said; they are just new, and our brains need to figure out how to use them.
The players' rather rapid improvement in Haier's studies reflects the concept of plasticity, the way our brains change as we learn new things. The theory, originally put forth a century ago, posits that our brains are capable of changing shape and structure as we learn or experience something new.
In 2009, Haier was commissioned by the makers of Tetris to follow up on his 1991 work with a group of teenagers, using much more advanced scanning techniques than were previously available. This time he found it much more difficult to find a group of people who had never played video games before. The new results showed that just as with the earlier research, the brain regions were heightened notably during initial game play. The study also, and more importantly, showed that just like the jugglers whose brains changed shape, the gray matter regions of the brain grew while the participants learned to play Tetris.
Bang, Bang!
Several times a week, a friend and I click into our Xbox for a little game play. In a matter of moments, we are knee-deep in a brutal war, taking on roles as commandos, privates, and sergeants and handling guns and grenades. Quickly, I’m completely engaged by my imaginary job of protecting my country, working with and competing with my buddy as we shoot and destroy enemies and strategize through realistic video images.
I genuinely enjoy this. It relaxes both me and my friend after a long day at work. Video games are, to me, an especially compelling form of storytelling because they allow me to control where the story goes and dive directly into the narrative, using joysticks and buttons. I find my game, Modern Warfare 2, fun and rewarding, and it refreshes me for the work I have to do.
One reason it may be so pleasurable is that like many experiences that are rewarding or exciting, playing games may stimulate brain dopamine, a chemical that plays a part in motion, intelligence, and especially pleasure. Steven Johnson, who has written several books on technology, including Everything Bad Is Good for You, argues that television, video games, and other “bad” forms of entertainment are actually good for our brains and creativity.13 He has written that the neurotransmitter dopamine is constantly being stimulated when we’re playing games and is essentially responsible “for both reward and exploration.” The stuff, he adds, is “the brain’s ‘seeking’ circuitry, which propels us to explore new avenues for reward in our environment.”
But war games like Modern Warfare 2, which put people like me into a lifelike role as the shooter, have, like porn, come under a fair bit of fire from people who fear that the games distort players’ perceptions of reality and lead them to become comfortable with gratuitous violence. Admittedly, there are some issues with the violence in some games and there are concerns about kids who are so addicted to playing that they cannot stop. But these kinds of games, it turns out, also have some profound positive effects on the brain and our abilities.
Neuroscientists first began looking at the effects of video games on brains in the early 1980s, as games such as Pac-Man and Donkey Kong became worldwide phenomena. Research showed increased visual skills and better hand-eye coordination. One study in 1989 tested hand-eye reaction time by asking people to press a button when they saw a light.14 The participants were then split into two groups, one of which was asked to play an Atari game system for fifteen minutes. When they were tested again, the game-playing group's hand-eye coordination had improved almost 50 percent. That's pretty powerful learning.
Then there was Haier's Tetris research, along with other game-related findings through the early 1990s. But a major breakthrough in the neuroscience of video games was actually discovered by accident.
Daphne Bavelier is the director of the Brain and Vision Laboratory at the University of Rochester. Although her career didn't start out this way, she now studies the effects of video games on cognition, brain plasticity, vision, and spatial awareness.
In 2003, Bavelier and a researcher began to look into learning and brain plasticity and how new kinds of visual stimulation can affect the deaf. One of Bavelier's PhD candidates, Shawn Green, was preparing to test a visual computer system on a group of deaf participants. Before his formal tests began, Green tried out the test to make sure the instruments and data collection were all working correctly. This specific study was intended to measure an individual's visual acuity by identifying a series of dots on a screen.
When Green took the test several times, he saw that he was consistently getting perfect scores on the visual attention portion. Assuming there was a bug in the program, Green asked some friends to come to the lab and take the test too. Green and Bavelier soon found that some people consistently scored dramatically higher than others. After investigating, the research team discovered that those with nearly perfect visual test scores had one thing in common: They consistently played video games, first-person shooters in particular.
Bavelier ended up studying the players of shooter games and had remarkable results: As a group, these players were not just faster in various hand-eye tasks, they seemed to have greater brain capacity, seeing more with peripheral vision, switching attention from one thing to another, tracking multiple items, and generally showing superior visual skills. These games, which demanded quick reflexes and accuracy, were more effective than games of strategy or role playing.
The research created a bit of a firestorm after a story about it appeared in the New York Times, with the startling headline “Video-Game Killing Builds Visual Skills, Researchers Report.”
Sadly, the focus of the research was lost amid the outcry over its apparent endorsement of first-person shooter games. Set the argument over content aside, and Bavelier's study clearly shows that game playing has a positive side: The skills that allow players to move quickly, aim, and make superfast decisions can be transferred to completely different kinds of tasks. But many people overlook this side of the research because of preconceived notions about video games. Bavelier explained in an interview how frustrated she was that people couldn't see past the fact that first-person shooters were used in the studies.
Green and Bavelier’s research over the last five years shows that action-video-game players consistently outperformed people who didn’t play video games in multiple visual and hand-eye coordination tests. Their research shows action-video-game players have better “spatial distribution and resolution of visual attention,” more “efficiency of visual attention over time,” and a greater “number of objects that can be attended to simultaneously.” Although the practical applications of these skills will vary on an individual basis, this can translate into being a better driver, a more practiced pilot, or a more accurate surgeon or even being better at navigating the Web.
Although it is up to each individual to find a balance with game play, these findings argue for more game playing, not banning kids from playing, and more interactive and active opportunities. Already, new games and game consoles such as the Nintendo Wii allow players to literally swing tennis rackets, dance, do exercises, and participate in other physical activities while playing. Microsoft’s Project Natal creates an augmented reality gaming experience in which you become the actual game controller and there are no buttons or joysticks to worry about. You can play the game by standing in front of your TV and kicking your legs in the air, thereby kicking a ball on the screen. Mobile augmented reality games encourage players to go outside and run around by chasing a figment of a digital reality on mobile devices, blurring the line between sports and video games. This type of game play should be encouraged and supported, not ignored simply because the words “video” and “game” are in the same sentence.
The fact that these kinds of games are coming along is a good thing, since the genie is already out of the bottle and he’s too big to be stuffed back in. An estimated 97 percent of kids twelve to seventeen years old play video games, and their entertainment is often not solo. A Pew Research survey asked kids how they play games, and although some like to play alone, 27 percent said they play with a friend online and 65 percent said they play with a friend or group in the same room—a different look but the same experience as a rousing board game of Monopoly or War.15
On top of the social aspect, most of the games young people play are fairly tame. In 2008, when the Pew researchers asked the participants to list the top ten games they play on a regular basis, only three games turned out to be first-person shooters. The rest included solitaire, Tetris, racing games such as Mario Kart, and numerous sports games. Of the 2,618 games mentioned, the number one game among kids was Guitar Hero, a game that requires multiple players to get up off the couch and compete by playing a guitar and drums as if they were actually part of a band even if they’ve never had a music lesson in their life.
These players aren’t likely to give up their games any more than I am. And like me, they most likely will play games as much as they read, testing and expanding their brains in different ways. Video games offer engaging, immersive, truly multimedia storytelling and can draw in participants more powerfully than can many traditional storytelling methods. That said, they don’t replace one medium with another. Instead, they fill a new void created by a need for interactive narratives.
It’s important to note that there’s a place for each medium. Video games partially displace some forms of storytelling and in other instances meld to form new scenarios. Reading, for example, drives creativity in the brain in ways that video games can’t. A careful selection of words can help our minds imagine, visualize, and daydream. Good written narratives offer a captivating path to the imagination and are imperative to our comprehension and reason. Stories that are told through audio help our brains learn to imagine in other ways and perfect our auditory senses. Images and video offer skills in visual perception and objective thinking and a different kind of logic. Video games offer a challenge to our cognition, coordination, working memory, and visual engagement, among other areas of the brain.
All these media engage our brains with equal weight and importance. The Web offers a culmination of everything for our brains through a new form of narrative and engagement that pulls us, along with our brains, into a new era of storytelling.
6
me in the middle
the rise of me economics
“I thought you were going to read the news,” I said. “This is my news,” she replied.
The New You, Always in the Center
If you pull out your smart phone and click the button that says “locate me” on your Google or Yahoo! map application, you will see a small dot appear in the middle of your screen. That’s you!
If you start walking down the street in any direction, the whole screen will move right along with you, no matter where you go. This is a dramatic change from the print-on-paper world, where maps and locations are based around places and landmarks, not on you or your location. People don’t go to the store and say, “Oh, excuse me, can I buy a map of me?” They go to the store and ask for a map of New York, or Amsterdam, or the subway system. You and I aren’t anywhere to be seen on these maps. The maps are locations that we fit into.
But today’s digital world has changed that. Kevin Slavin, a creator of location-based services and games and the co-founder of the gaming company Area/Code, put this succinctly at a technology conference last year: “We are always in the center of the map.”1
Though Slavin was talking about location-based games and Google Maps, the center of the map, it turns out, is actually much bigger than a dot on the screen. It's a very powerful place to be.
Being in the center—instead of somewhere off to the side or off the page altogether—changes everything. It changes your conception of space, time, and location. It changes your sense of place and community. It changes the way you view the information, news, and data coming in over your computer and your phone. And it changes your role in a transaction, empowering you to decide quite specifically what content to buy and how to buy and use it rather than simply accepting the traditional material that companies have packaged on your behalf.
Now you are the starting point. Now the digital world follows you, not the other way around.
This transition has been coming at us in fits and starts for some time. As a thirteen-year-old with goofy milk-bottle glasses, at a time when the Internet existed only through slow dial-up modems, I couldn’t wait to get online. At the time, I had moved with my father from England to Florida, and my transition to America and teenagerhood wasn’t going very well. My father, who had an engineering background, connected the computer in his home office to the Internet and signed up for America Online’s $19.95-a-month service. For that amount, AOL rationed minutes of connection time like they were gold, allowing us only double-digit minutes of online time a week. This sounds ludicrous today, when we have unlimited Internet service for $25 or $30 a month, but in the mid-1990s those minutes were worth every penny.
When I walked in the door from school, I would beg to dial in, even if just for a minute. Going online was completely different from anything I’d ever done before. I could connect and chat with teenagers on the other side of the planet. The fact that I was “talking” to another thirteen-year-old in China or France was undeniably magical. It opened my eyes to a world outside the ten houses on my cul-de-sac.
I could search for answers to questions I had in my homework assignments by using some crude “interactive” encyclopedias or even ask online strangers for help. I’d feel a quick pulse of excitement when I heard the speakers on the desk (in a monotone computerized voice) say, “You’ve got mail!” But best of all, I was in the driver’s seat, completely in control of where I went and when. There was no predetermined beginning or end. Even in the embryonic years, I was at the center of a World Wide Web experience.