Carol Todd—in thick-rimmed glasses and a black hoodie—seemed a sedate woman, though a determined one. She was guarded when she met me, having already been hardened by the media’s treatment of her following her daughter’s death. And she had another reason to be suspicious: In the wake of everything, she’d become a victim of cyberbullying herself. A few committed individuals from around the world send her messages: attacks on her daughter, attacks on herself. They are relentless. In several e-mails she sent me before and after our interview, she expressed her anxiety about the “haters” who were “out there.”
Like her daughter, Carol responded to such harassment not by retreating, but by broadcasting herself more. She began to maintain a regular blog, where she advocates for reform in schools and governments. She set up a legacy fund to support her cause and speaks to politicians or packed gymnasiums about her experience. She has even produced a line of clothing and wristbands, emblazoned with her daughter’s name, to raise funds. When YouTube took down Amanda’s video in the days following her death, Carol requested that the video be made live again because “it was something that needed to be watched by many.”
She tells me that something in the world was stirred following her daughter’s death. “Amanda put herself out there. I mean, she wasn’t an angel; I’m the first to admit that. But she did what she did, and I do think it woke up the world.” The media outlets that picked up the story include The New Yorker, Anderson Cooper 360°, and Dateline. Vigils were held in thirty-eight countries.
“Amanda wasn’t unique in having all this happen to her, though,” I said. “Why did she become such a rallying force?”
“Well, it was the video, obviously. It was always the video. If she hadn’t made that video, you wouldn’t be sitting here.”
She was right, of course. We have all, in some way, become complicit in the massive broadcasting that online life invites. But occasionally someone—usually a digital native like Amanda—will shock us awake by turning a banal thing like YouTube into a scorching confessional.
And everyone wants to hear a confession. On the evening of Todd’s death, her mother looked at the video and saw it had twenty-eight hundred views. The next morning, there were ten thousand. Two weeks later, the video had been watched seventeen million times.
It was uploaded to YouTube on September 7, 2012. The picture is black and white. Todd stands before the camera, visible from just below the eyes down to her waist. She holds up, and flips through, a series of flash cards that detail her travails of the few years previous. I still remember adolescent angst and bullying as a deeply private struggle, so for me it’s uncomfortable to watch her feed her trauma into a system like YouTube, to watch her give over so much of herself. The song “Hear You Me” by the band Jimmy Eat World plays softly in the background. Todd flips silently through her flash cards. The script on her cards is simple and, by adult standards, sentimental. It is also a naked cry for help that the YouTube community responded to with unavailing praise and cool scorn. The girls who physically assaulted Todd posted their own cruel comments within hours of the video’s being uploaded.
• • • • •
Extraordinary as Todd’s suicide may have been, we should pause here to note that the violence of her reaction to online harassment is not an anomaly. Recent research from Michigan State University found, for example, that children in Singapore who were bullied online were at least as likely to consider suicide as those who were bullied offline. In fact, the researchers found that cyberbullying produced slightly more suicidal thoughts: 22 percent of students who were physically bullied reported suicidal thoughts, and that number rose to 28 percent among students who were bullied online.
Todd was hardly alone in all this. The stories of a heartless online world keep coming. I recently read about a University of Guelph student who decided to broadcast his suicide live online—using the notorious 4chan message board to attract an audience willing to watch him burn to death in his dorm room. (The twenty-year-old man was stopped midattempt and taken to the hospital with serious injuries.) His message to his viewers: “I thought I would finally give back to the community in the best way possible: I am willing to an hero [commit suicide] on cam for you all.” Another 4chan user set up a video chat room for him. Two hundred people watched (the chat room’s limit) as he downed pills and vodka before setting his room on fire and crawling under a blanket. As the fire began to consume him, the young man appears to have typed to his viewers from beneath the covers: “#omgimonfire.” Some users on the message board egged him on, suggesting more poetic ways to die. These desperate actions make for an extreme example, but I think they speak to something common in us. Most of us don’t wish to give our lives over entirely to the anonymous Internet, but there is yet a disturbing intensity to the self-broadcasting that most of us have learned to adore.
To some degree, we all live out our emotional lives through technologies. We’re led into deep intimacies with our gadgets precisely because our brains are imbued with a compulsion to socialize, to connect whenever possible, and connection is what our technologies are so good at offering. Some of my friends literally sleep with their phones and check their e-mail before rolling out of bed, as though the machine were a lover that demands a good-morning kiss. E-mails and tweets and blog posts might easily be dull or cruel—but the machine itself is blameless and feels like a true companion. The bond we have with our “user friendly” machines is so deep, in fact, it makes us confess things we would never confess to our suspect fellow humans.
Yet every time we use our technologies as a mediator for the chaotic elements of our lives, and every time we insist on managing our representation with a posted video or Facebook update, we change our relationship with those parts of our lives that we seek to control. We hold some part of the world at a distance, and since we are forever of the world, we end up holding some part of ourselves at a distance, too. The repercussions of this alienation can be trivial—I’ve heard from many young girls worried about whether some schoolmate has “friended” them or “followed” them—but they can also be deeply, irrevocably tragic.
Perhaps we shouldn’t be surprised when digital natives look for comfort in the very media that torments them. What else would they know to do? As Evgeny Morozov points out in The Net Delusion, if the only hammer you are given is the Internet, “it’s not surprising that every possible social and political problem is presented as an online nail.” Morozov expanded on that analogy in his more recent To Save Everything, Click Here, where he wrote: “It’s a very powerful set of hammers, and plenty of people—many of them in Silicon Valley—are dying to hear you cry, ‘Nail!’ regardless of what you are looking at.” It’s easy, in other words, to become convinced that the solution to a tech-derived problem is more technology. Particularly when that technology has enveloped our entire field of vision. While someone of my generation might see that the Internet is not the entire toolbox, for Todd and her cohort, unplugging the problem—or at least the problem’s mouthpiece—isn’t an apparent option.
Without memories of an unplugged world, the submission of human emotion to online management systems seems like the finest, most expedient, and certainly easiest way to deal.
Ultimately, we desire machines that can understand our feelings perfectly and even supervise our feelings for us.
There’s an entrenched irony, though, in our relationships with “social” media. They obliterate distance, yet make us lonely. They keep us “in touch,” yet foster an anxiety around physical interaction. As MIT’s Sherry Turkle put it so succinctly: “We bend to the inanimate with new solicitude.” In her interviews with youths about their use of technologies versus interactions with warm human bodies, the young regularly pronounce other people “risky” and technologies “safe.” Here is one of her more revealing interview subjects, Howard, discussing the potential for a robotic guardian:
There are things, which you cannot tell your friends or your parents, which . . . you could tell an AI [artificial intelligence]. Then it would give you advice you could be more sure of. . . . I’m assuming it would be programmed with prior knowledge of situations and how they worked out. Knowledge of you, probably knowledge of your friends, so it could make a reasonable decision for your course of action. I know a lot of teenagers, in particular, tend to be caught up in emotional things and make some really bad mistakes because of that.
As yet, no such robot is ready for Howard.
• • • • •
The quasi-biblical quest to bequeath unto computers an emotional intelligence—to “promote them” from darkness—has long occupied human imagination and appears to have proceeded down a pair of intertwining roads. Down one road, typified by Mary Shelley’s Frankenstein, the birth of artificial intelligence has harrowing repercussions, however soulful its beginnings. Down the other road, we encounter the robot as savior or selfless aide, as seen in Leonardo da Vinci’s 1495 designs for a mechanical knight. Mostly, though, these two roads have crossed over each other—the artificial being is both savior and villain. As Adam and Eve were corrupted and disappointed their God, so our science-fiction writers presume robot intelligence will start off with noble intentions and lead to an inevitable debasement. This twinned identity of friend and antagonist is evident as far back as 1921, when the Czech dramatist Karel Čapek premiered his play R.U.R. The acronym stands for Rossum’s Universal Robots (this was in fact the work that popularized the term robot). In Čapek’s play, a line of helpful mechanical servants degrades into hostile rebels that attack their human masters and eventually kill us off.
Shelves and shelves of such dystopian fantasies do not dull, though, the hope that our creations may know something we do not, may ease our human suffering. We turn to the promise of artificial intelligence almost with the instincts of lost children, asking our machines to make sense of our lives or help us to escape the silence of our solitude. It makes sense, then, that the first computer outside of science fiction to speak back with something akin to human concern was an infinitely patient and infinitely permissive therapist called ELIZA.
In the mid-1960s, Joseph Weizenbaum, a computer scientist at MIT, wrote a string of code that allowed a computer to process conversational text and then produce a new piece of text that looked an awful lot like the next step in a dialogue. ELIZA was named for Eliza Doolittle, the Cockney girl in George Bernard Shaw’s play Pygmalion, who must overcome her impoverished upbringing and learn to speak “like a lady.” The program’s efforts to communicate were rudimentary, and her speech was annoyingly reflective in the manner of a Rogerian psychologist. Tell ELIZA, “I don’t like going to school,” and she will respond with, “Tell me why you don’t like going to school.” Inform ELIZA that the room is dark, and she will ask if you really think that is so.
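For the curious, the trick is simpler than it sounds: an ELIZA-style program matches the user’s sentence against a ranked list of patterns, swaps first-person words for second-person ones, and pours the result into a canned template. Below is a minimal sketch of that reflective pattern matching in Python; the rules and pronoun table are illustrative stand-ins, not Weizenbaum’s actual script.

```python
import random
import re

# Pronoun swaps so "I hate my school" is echoed back as "you hate your school".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "are": "am",
}

# A few illustrative rules, checked in order; the last one is a catch-all.
RULES = [
    (r"i don'?t like (.+)", ["Tell me why you don't like {0}."]),
    (r"i'?m (.+)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"i want (.+)", ["Why do you want {0}?"]),
    (r"(.+)", ["Tell me more.", "I'm not sure I understand you fully."]),
]

def reflect(fragment):
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    """Return the first matching canned response, folding the user's own words back in."""
    text = statement.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please go on."

if __name__ == "__main__":
    print(respond("I don't like going to school."))
    # -> Tell me why you don't like going to school.
```

Weizenbaum’s real script was far larger, but the principle is the same: there is no understanding anywhere in the loop, only the user’s words handed back with the pronouns turned around.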
I was able to use the ELIZA program myself, thinking it could do a better job of explaining its workings than I could. Here is the transcript from our first session:
Hello. My name is Eliza. How may I help you?
I want to write about you.
Oh . . . ?
Yes, I’m writing a book and there’s a chapter where you appear.
How long have you been writing a book and there’s a chapter where me appear?
A few months now.
I’m not sure I understand you fully.
I’m not sure you do, either.
Do you believe it is normal to be not sure me does either?
I think it’s probably a typical response to your responses.
Why are you concerned about my responses?
Mainly because they’re nonsensical.
That is quite interesting.
Do you think so?
You’re not really talking about me, are you?
This quickly grew insufferable. But perhaps, I told myself, the experiment was unfair. Self-reflection is not what ELIZA was designed to excel at. She wanted to help me with my problems, not discuss her own secrets. I tried again, this time with a genuine emotional approach.
Hello. My name is Eliza. How may I help you?
I’m stressed about writing a book.
How long have you been stressed about writing a book?
Months.
Okay. . . . “I’m stressed about writing a book.” Tell me more.
Like what?
Does that question interest you?
Oh, fuck off.
Tell me more. . . .
Fuck off. Fuck off. Fuck off.
Why do you repeat yourself?
Fuck off. Fuck off. Fuck off.
Come, come, elucidate your thoughts.
Fuck, fuck, fuckity fuck.
Say, do you have any psychological problems?
This second attempt at communicating with the antique program counted as a failure, too, I think. But I was surprised at how engaged I felt.
When Weizenbaum began allowing MIT students to interact with ELIZA, he, too, was surprised by how drawn in they were. Many of them found her approach charming and helpful. She was, in some ways, an ideal conversationalist: someone willing to parrot back your own opinions and ask over and over how you feel and why you feel that way. Some therapists (hardly looking out for their own financial interests) began suggesting that ELIZA would be a cheap alternative to pricey psychoanalysis sessions. As dull-witted as ELIZA actually was, she gave people exactly what they wanted from a listener—a sounding board. “Extremely short exposures to a relatively simple computer program,” Weizenbaum later wrote, “could induce powerful delusional thinking in quite normal people.”
Today, these delusions are everywhere. Often they manifest in ridiculous ways. BMW was forced to recall a GPS system because German men couldn’t take directions from the computer’s female voice. And when the U.S. Army designed its Sergeant Star, a chatbot that talks to would-be recruits at GoArmy.com, it naturally had its algorithm speak with a burly, all-American voice reminiscent of the shoot-’em-up video game Call of Duty. Fooling a human into bonding with inanimate programs (often of corporate or governmental derivation) is the new, promising, and dangerous frontier. But the Columbus of that frontier set sail more than half a century ago.
• • • • •
The haunted English mathematician Alan Turing—godfather of the computer—believed that a future with emotional, companionable computers was a simple inevitability. He declared, “One day ladies will take their computers for walks in the park and tell each other, ‘My little computer said such a funny thing this morning!’” Turing proposed that a machine could be called “intelligent” if people exchanging text messages with that machine could not tell whether they were communicating with a human. (There are a few people I know who would fail such a test, but that is another matter.)
This challenge—which came to be called “the Turing test”—lives on in an annual competition for the Loebner Prize, a coveted solid-gold medal (plus $100,000 cash) for any computer whose conversation is so fluid, so believable, that it becomes indistinguishable from a human correspondent. At the Loebner competition (founded in 1990 by New York philanthropist Hugh Loebner), a panel of judges sits before computer screens and engages in brief, typed conversations with humans and computers—but they aren’t told which is which. Then the judges must cast their votes—which was the person and which was the program? Programs like Cleverbot (the “most human computer” in 2005 and 2006) maintain an enormous database of typical responses that humans make to given sentences, which they cobble together into legible (though slightly bizarre) conversations; others, like the 2012 winner, Chip Vivant, eschew the database of canned responses and attempt something that passes for “reasoning.” Human contestants are liable to be deemed inhuman, too: One warm-blooded contestant called Cynthia Clay, who happened to be a Shakespeare expert, was voted a computer by three judges when she started chatting about the Bard and seemed to know “too much.” (According to Brian Christian’s account in The Most Human Human, Clay took the mistake as a badge of honor—being inhuman was a kind of compliment.)
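By way of illustration, the retrieval approach described above for Cleverbot can be caricatured in a few lines: store past exchanges, find the stored prompt most similar to what the user just typed, and return whatever reply once followed it. The corpus and similarity measure below are toy assumptions, nothing like the real system’s scale.

```python
from difflib import SequenceMatcher

# A toy stand-in for the enormous logs of past human conversation that
# retrieval-based bots accumulate: (something a person once said, what came next).
CORPUS = [
    ("how are you today", "Not bad. How are you?"),
    ("what is your favourite book", "I liked Frankenstein, oddly enough."),
    ("do you like the rain", "Only when I'm indoors."),
]

def similarity(a, b):
    """Crude string resemblance between the user's line and a stored prompt."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def reply_to(user_line):
    """Return the reply that once followed the stored prompt most like the user's line."""
    _, best_reply = max(CORPUS, key=lambda pair: similarity(user_line, pair[0]))
    return best_reply

if __name__ == "__main__":
    print(reply_to("How are you?"))  # -> Not bad. How are you?
```

Scale the corpus up to millions of logged conversations and the replies start to feel eerily apt—without the program ever grasping a word of them.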
All computer contestants, like ELIZA, have failed the full Turing test; the infinitely delicate set of variables that makes up human exchange remains opaque and uncomputable. Put simply, computers still lack the empathy required to meet humans on their own emotive level.
We inch toward that goal. But there is a deep difficulty in teaching our computers even a little empathy. Our emotional expressions are vastly complex and incorporate an annoyingly subtle range of signifiers. A face you read as tired may have all the lines and shadows of “sorrowful” as far as a poorly trained robot is concerned.
What Alan Turing imagined, an intelligent computer that can play the human game almost as well as a real human, is now called “affective computing”—and it’s the focus of a burgeoning field in computer science. “Affective” is a curious word choice, though an apt one. While the word calls up “affection” and has come to reference moods and feelings, we should remember that “affective” comes from the Latin word afficere, which means “to influence” or (more sinisterly) “to attack with disease.”