I Live in the Future & Here's How It Works: Why Your World, Work, and Brain Are Being Creatively Disrupted
A yearlong research study released in April 2010 by researchers from the Department of Computer Science at the Korea Advanced Institute of Science and Technology used the social-networking service Twitter to further explore theories of social news gathering and dissemination.
In July 2009 the researchers set up twenty computers to suck in every single piece of information shared on Twitter: every tweet, every retweet (when someone resends a tweet from another user), the number of followers, and so on. In all, they gathered 41.7 million user profiles, 1.47 billion social relations, 4,262 trending topics, and 106 million tweets.
So what did they find with this treasure trove of data? A majority of the conversation taking place on Twitter at the time was about news and information sharing. Looking through Twitter’s trending topics during this period, the researchers found that more than 85 percent of the top-level topics were headline news or something newsy in nature. They also found that no matter how many people follow a user on Twitter, anything that user posts, once retweeted by others, reaches an average of about 1,000 people.
We can get a real-life glimpse of content spreading through these anchoring communities in a research project by Gilad Lotan, a developer and researcher at the Microsoft Research labs in Cambridge, Massachusetts.
In June 2009, as the Iranian revolution spread across the Web, The Nation magazine wrote, “Forget CNN or any of the major American ‘news’ networks. If you want to get the latest on the opposition protests in Iran, you should be reading blogs, watching YouTube or following Twitter updates from Tehran, minute-by-minute.”
As this online revolution took place, Lotan built a tool to monitor how news spreads on Twitter, aptly calling the project ReTweet Revolution. He monitored the use of Twitter over ten days in June as the rebellion against the rigged election was taking place in Iran. Lotan sifted through 230,000 tweets and found 372 distinct threads of information around the protests in Iran. As the Iranian government tried to suppress the spread of information on the Web, shutting down websites, information was able to slip out of Iran only through a few individuals on Twitter. One was a student who called himself “tehranbureau” on the social network. As the protests started to unfold, Lotan said, those sharing the news from inside Iran were reaching very few people at first; many had only twenty or thirty followers. But as people all over the globe shared the news by retweeting it, the content eventually was seen by tens of thousands. That in turn influenced the coverage by mainstream media, something that until that time would have been highly unusual.
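To make the idea of a “thread of information” concrete, here is a hypothetical, minimal sketch in Python (not Lotan’s actual tool) of how a pile of tweets might be grouped into retweet threads, with each tweet’s follower count summed as a rough proxy for the audience a thread could reach. The field names and the sample data are illustrative assumptions.

```python
# Hypothetical sketch only: groups tweets into retweet threads and estimates
# each thread's potential audience. Field names and data are illustrative.
from collections import defaultdict

def build_threads(tweets):
    """Group tweets into threads rooted at original (non-retweet) tweets,
    summing follower counts as a rough proxy for potential reach."""
    threads = defaultdict(lambda: {"tweets": 0, "potential_reach": 0})
    for t in tweets:
        root = t["retweet_of"] if t["retweet_of"] is not None else t["id"]
        threads[root]["tweets"] += 1
        threads[root]["potential_reach"] += t["followers"]
    return dict(threads)

if __name__ == "__main__":
    sample = [
        {"id": 1, "user": "source_in_tehran", "followers": 30, "retweet_of": None},
        {"id": 2, "user": "relay_abroad", "followers": 5000, "retweet_of": 1},
        {"id": 3, "user": "news_junkie", "followers": 12000, "retweet_of": 1},
    ]
    for root, stats in build_threads(sample).items():
        print(f"thread {root}: {stats['tweets']} tweets, "
              f"~{stats['potential_reach']:,} potential readers")
```

Even in this toy example, a source with only thirty followers ends up in a thread whose combined potential audience runs into the tens of thousands once better-connected users pass it along, which is the dynamic Lotan observed at much larger scale.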
This swarmlike behavior does more than spread important news. It also dissipates our fears of information overload or the converse, that we might be missing something. When members of my anchored community let me know that certain products are worth consuming, I trust their recommendation because the social networks I’ve set up have been selected by me and cleansed of members I don’t trust—whether they are computers or people. I’m sure that in some instances I’ve been flushed from someone else’s trust market as well.
This new way of consuming information and storytelling online doesn’t bode well for individuals or companies that create mediocre content and cookie-cutter storytelling. The new mentality says that if it’s not good or important, the group won’t share it. Furthermore, it no longer matters who created the content; if it doesn’t satisfy us, we’re not going to share it or filter it up the food chain.
During the 2008 presidential election year, Brian Stelter, a Times media reporter, found that people twenty-five years old and younger tended to share political news with their groups of friends through e-mail or other social outlets.7 They provided news and information to their friends and relied heavily on their friends to do the same. They didn’t need to go through all those newspapers and magazines looking for unexpected stories to find the significant stuff. Their friends did that for them. They used these anchoring communities, their own personal public spheres of friends, family, news outlets, blogs, and random strangers—people like Sam H.—to share and disseminate.
This is the way I navigate today as well. If the news is important, it will find me.
Maria Popova runs the blog Brain Pickings, which looks for fun and interesting tidbits online. She calls herself a cultural curator, and she searches for interesting cultural references on blogs, websites, and Twitter feeds and then shares them with thousands of strangers who follow her, passing along the best of the best on her site and through her Twitter account. She calls the process “controlled serendipity”: “I scour it all, hence the serendipity,” she said. “It’s essentially ‘metacuration’—curating the backbone, but letting its tentacles move freely. That’s the best formula for content discovery, I find.”
As with porn, whether the content is produced by a hundred-million-dollar studio or by people in their bedrooms with a webcam, good content will rise to the top, and our special communities and collective intelligence will help get it there. Our trust communities will help us filter the tsunami of data, opinion, insights, news, and reviews coming our way so that we feel neither overwhelmed nor anxious.
Standing on the sidelines of these social networks and trying to figure out what they all represent and whether there’s a purpose to these experiences can be downright daunting. I completely empathize with the trepidation of George Packer and others and their rightful concern that there are only so many hours in the day to deal with what already feels like too much. I’ve been there, and although there is some transition in the process, I’m convinced that being guided online by communities that I trust won’t create an information hell that leaves you gasping for air. Instead, trusted anchoring communities will help you filter and navigate a bigger world in an eye-opening way that has never been possible before. You just have to get your brain around the possibilities.
5
when surgeons play video games
our changing brains
Men were twice as likely [as women] to tweet or post status updates after sex.
This Time, We’re Really Going to Hell
In the summer of 2008, Nicholas Carr, an author and writer for The Atlantic magazine, felt his brain slipping ever so slightly from its moorings. “Immersing myself in a book or lengthy article used to be easy,” he wrote.
Not anymore. “Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do,” he said. “I feel as if I’m always dragging my wayward brain back to the text.”
The problem, he concluded, was the Internet generally and Google quite specifically. In a piece titled “Is Google Making Us Stupid?”1 and later in the book The Shallows: What the Internet Is Doing to Our Brains, he frets that having snippets of massive amounts of information right at our fingertips may be eroding our ability to concentrate and contemplate.
Déjà vu all over again, don’t you think?
In fairness, Carr recognizes that the printing press caused similar hand-wringing. And even though some of the predictions came true—the press actually did undermine religious authority, for instance—the many advantages of printing far outweighed the concerns. So he admits that he may be wrong and “a golden age of intellectual discovery and universal wisdom” may emerge from the text, tweets, bytes, and snacks of today’s online world. But he still worries that deep thinking and serious reflection will be forever lost in the data stream of information the Web affords.
Although Carr is fatalistic about the future, his balanced, well-researched article offers a thoughtful perspective. Most of those who are skeptical about the shift aren’t so reflective. In a San Francisco Chronicle article headlined “Attention Loss Feared as High-Tech Rewires Brain,” the author, Benny Evangelista, citing some mental health experts, saw interpersonal relationships breaking down and attention deficit disorder increasing as more people found themselves unable to separate from e-mail, Facebook, and Twitter.
How bad is the attention loss? The inability to detach oneself from electronic updates has spread from the office to the restaurant to the car—and now has reached into the bedroom. The story quotes a survey that found that 36 percent of people age thirty-five or younger used Facebook or Twitter after having sex. “Men,” the story noted, “were twice as likely [as women] to tweet or post status updates after sex.”
Said an executive who commissioned the survey: “It’s the new cigarette.”
Other news stories and books drip with angst over how these new technologies may be destroying us, ruining our intellect, quashing our ability to converse face-to-face, and fundamentally changing relationships for both kids and adults. “Antisocial Networking?” a New York Times story asked, questioning whether time online diminishes intimacy and destroys the natural give-and-take of relationships. “Scientists warn of Twitter dangers,” said CNN.com, stating that researchers had found “social-networking tools such as Twitter could numb our sense of morality and make us indifferent to human suffering.” A number of books, such as The Dumbest Generation: How the Digital Age Stupefies Young Americans and Jeopardizes Our Future and the previously mentioned Distracted: The Erosion of Attention and the Coming Dark Age, add fuel to the fire.2
But it doesn’t stop there: There’s a recent, often-quoted study, “Emails ‘Hurt IQ More than Pot.’ ” A survey of more than a thousand Britons found that the IQ of those trying to juggle messages and work fell ten points, more than double the drop seen after smoking marijuana.
Over and over at speeches and conferences, I hear the same kinds of fears and anxieties that new technologies and developments have generated for decades: Our brains weren’t wired for all this fast-paced stuff. We’re too distracted to do meaningful and thorough work. At the same time, our entertainment is also dangerous and damaging, people tell me. Video games will destroy our children’s brains and their relationships—if Twitter and Facebook don’t do so first. We cannot effectively multitask or jump from e-mail to writing to video, and we never will be able to.
There may be some truth to some of this; we may well be fundamentally different when this is all over. But for the most part, I believe it’s bunk. Just as well-meaning scientists and consumers feared that trains and comic books and television would rot our brains and spoil our minds, I believe many of the skeptics and worrywarts today are missing the bigger picture, the greater value that access to new and faster information is bringing us. For the most part, our brains will adapt in a constructive way to this new online world, just as we formed communities to help us sort information.
Why do I believe this? Because we’ve learned how to do so many things already, including learning how to read.
We were never born to read.
—Maryanne Wolf, Proust and the Squid
Some argue that our brains aren’t designed to consume information on screens, or play video games, or consume real-time information. But the same argument holds true for the words you’re reading now. It’s true: Your brain wasn’t built to read. Several thousand years ago, someone created symbols, which ultimately became an alphabet. That alphabet formed into a written language with its own set of unique rules. As a result, the organization of the human brain changed dramatically. But the human brain doesn’t come automatically equipped with the ability to read these symbols. It’s a skill each brain has to rewire its own circuitry to acquire. Our brains are designed to communicate and to tell stories with language, whether that is with clicks of the tongue between indigenous tribes in the rain forest or with the English language. But reading letters and words is essentially man-made, just like video games and screens.
Even today, when children learn their letters and form them into words and sentences and big, powerful ideas, their brains still have to re-form and readapt to make the information fall into place.
Stanislas Dehaene, chair of Experimental Cognitive Psychology at the Collège de France, has spent most of his career in neuroscience exploring how our brains learn to read and calculate numbers.3 He explains that human brains are better wired to communicate by speaking. In the first year of life, babies begin to pick up words and sounds simply from hearing them. Sure, they need some help identifying that a cup is a cup and Mommy is Mommy. But by two years old, most children are talking and applying labels to objects without any special lessons or drills.
This is not the case with reading. Most children, even if they share books with their parents and hear stories every single day, won’t pick up reading on their own. Instead, they must learn to recognize letters one by one and put them together into sounds or words before recognizing whole sentences and thoughts. They must learn to decode the symbols.
Some research suggests that in doing this, children and even adults actually develop a new area within the brain.4 Manuel Carreiras at the Basque Center on Cognition, Brain and Language has taken research on language into other complex areas. Carreiras’s work over the years has been focused on the neural processes of human language and the way humans comprehend differently when reading and when interpreting sign language. When he wanted a better understanding of how people learn to read, he decided he needed to find illiterate adults to see how their brains adapt before and after learning how to read words.
At first Carreiras had trouble finding a group of adults who really didn’t have any reading skill, but finally, he recruited forty-two veterans of the Colombian guerrilla wars. Twenty of the ex-fighters had recently completed a literacy training program in Spanish. The other ex-fighters still needed to take the course and were for the most part illiterate. The former fighters were tested, taught to read, and then tested again. In the process, areas of the brain actually grew and formed connections that had not existed earlier. The brain had rewired itself while the guerrillas learned how to read.
Carreiras found that the brain changes its structure when someone properly learns to read, particularly in the white matter, which creates connections and helps information move between different areas of the brain. He explained, “We found that the literate members of our group had more white matter in the splenium—a structure that connects the brain’s left and right hemispheres—than did the illiterate members.” As the former guerrillas learned to read, the scientists used imaging techniques to measure what was happening. They saw that reading triggered brain functions in the same areas that had grown over the course of the study. In other words, even adults were able to create new neural pathways as they learned a difficult new skill.
What’s significant in this example is that our brains are something like a muscle, which can grow stronger and more powerful with practice and work. Today, technology is building new connections as our brains interpret content and receive stimulation. There’s a constant, iterative adaptation taking place in our brains as we use our computers, mobile phones, and e-readers. Our brains are learning how to navigate these gadgets, just as they do when we learn how to read.
There’s one piece of this puzzle that’s important to point out. With the use of computers and digital technologies, our brains are not evolving. Human beings evolve at a much slower pace than do new communication mechanisms and the technologies we invent and create. Neuroscientists I spoke with explained that a brain from five hundred years ago, or even ten thousand years ago, would look pretty much the same as a brain today, just as humans look pretty much the way they did a few thousand years ago.
To illustrate this point, let’s hypothetically travel back two thousand years and find a newborn baby. Imagine taking that baby and transporting him through our time machine forward to today. This child would be raised in our technology-rich society, growing up in a world of iPods, video games, the Internet, mobile phones, GPS, robotic Elmo toys, banner ads, and more. I asked several neuroscientists whether this baby, born two thousand years ago, would grow up differently than would a child born today. The resounding answer was “no.” A newborn’s brain from two thousand years ago, I was told, would likely look and work exactly the same as a brain does today.
But what if you took an adult—let’s say a thirty-year-old man from two thousand years ago—and dropped him in the middle of Times Square? He might well experience a panic attack from all the crowds, cars, flashing lights, and stimulation. But, neuroscientists said, his brain would begin to adapt. He might never get to a point where he could talk and simultaneously send text messages, but numerous research studies show that our brains are capable of substantial adaptation in about two weeks and in some instances seven days. Our two-thousand-year-old man would be just fine. His adaptation to society and the new stimuli would just take brain training, and not as much as you might think.
How do our magnificent minds adapt?5 In 2008, a group of neuroscience researchers from UCLA’s Semel Institute studied the brain activity of twenty-four volunteers as they read a book or surfed the Web, to see whether the Web was rewiring the way our brains function.
The volunteers were divided on the basis of how much experience they had using computers and the Internet. Twelve of the participants were labeled “Net Naive” because they used the Internet or computers once a month at most. Asked to rate their tech savvy, they gave themselves a rating of minimal to none. The other twelve participants were labeled “Net Savvy.” Those in this group used a computer at least once a day, and most of them were online numerous times throughout the day. The members of this group considered themselves moderate to expert on computers and the Internet.