Technically Wrong


by Sara Wachter-Boettcher


  It’s no accident that so many digital platforms can be so easily exploited by the worst among us. In fact, it’s by their very design.

  It’s not that their founders intended to build platforms that cause harm. But every digital product bears the fingerprints of its creators. Their values are embedded in the ways the systems operate: in the basic functions of the software, in the features they prioritize (and the ones they don’t), and in the kind of relationship they expect from you. And as we’ve seen throughout this book, when those values reflect a narrow worldview—one defined by privileged white men dead set on “disruption” at all costs—things fall apart for everyone else.

  In this chapter we’ll look at the origins of three platforms that were built with mostly good intentions—but that have broken in dangerous ways: Twitter, Reddit, and Facebook. How have the values and biases of their creators fundamentally shaped the ways these platforms work—and the ways they don’t?

  THE DOWNSIDE TO AN UPDATE

  On July 18, 2016, Milo Yiannopoulos—then an editor at the ultraconservative Breitbart News, and the self-proclaimed “most fabulous supervillain on the internet”—published his review of the new, women-led Ghostbusters movie. No one expected it to be kind; this is someone who had regularly published articles like “Birth Control Makes Women Unattractive and Crazy” and “Fat People Should Absolutely Hate Themselves,” after all. Plus, he’d been posting sneering speculation about the film since he’d called the preview “screechingly terrible” back in May.

  The review lived up—or, more accurately, down—to expectations: he called the stars “teenage boys with tits,” the script an “abomination,” and the audience “lonely middle-aged women.” He insisted that feminists “can only survive by sucking on the teat of Big Government.” He suggested the women should have fought “a giant tub of Ben & Jerry’s” while crying and watching romantic comedies. He cracked jokes deriding lesbians.5

  In other words, like everything Yiannopoulos writes, the review was designed to whip his cultlike following of young men into a froth.

  It worked: within hours, his fans took to Twitter to harass comedian Leslie Jones, who played Patty in the film. They called her ugly, manly, and unfunny. They called her an “ape,” a “big lipped coon,” and scads of other racist names. They even sent photos doctored to look like she had semen on her face. Jones shared the offensive messages she was getting with her followers. She reported the abuse to Twitter. She blocked users who attacked her. But it just wouldn’t stop.

  Meanwhile, Yiannopoulos started tweeting out fake screenshots of offensive tweets that he claimed were from Jones’s account. Once his 388,000 followers got hold of them, the abusive tweets only got worse. By the end of the day, Jones was a wreck. “I feel like I’m in a personal hell,” she tweeted just after midnight. “I didn’t do anything to deserve this. It’s just too much.” Within the hour, she had announced she’d be leaving Twitter for a while, “with tears and a very sad heart.” 6

  This wasn’t the first time Yiannopoulos had led a campaign to harass a woman on Twitter; he’d been directing his followers to attack his “opponents” since the “Gamergate” campaign of 2014, in which women video game developers were systematically targeted with rape and death threats. But it turned out to be the last: on July 20, 2016, Twitter permanently banned him from the service.

  What happened to Jones was horrific. But as Lindy West’s story from more than a year and a half earlier shows us, it was far from new. Twitter has been home to abusive behavior since its founding in 2007—much of it misogynist, racist, or both. What was new was Twitter’s response. In a statement after banning Yiannopoulos, the microblogging service announced it would be stepping up measures to prevent harassment on the site:

  Many people believe we have not done enough to curb this type of behavior on Twitter. We agree. We are continuing to invest heavily in improving our tools and enforcement systems to better allow us to identify and take faster action on abuse as it’s happening and prevent repeat offenders.7

  Twitter released the first major results of this effort in November 2016, when it launched features that let users mute specific keywords so they don’t appear in their feeds, or mute entire conversations they’ve been tagged in—helpful if, for example, a harasser includes your username in a tweet to their followers, and those followers reply with a whole series of harassing messages of their own. At the same time, Twitter also made it easier for users to report accounts for being abusive or harmful, and said it was working on internal processes for evaluating and responding to those reports.8
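
  To picture the mechanics, here is a minimal sketch of how a keyword-mute filter might decide what reaches a user. This is illustrative Python with invented names, not Twitter’s actual implementation, which has never been published:

```python
# A hypothetical keyword-mute check -- illustrative only, not
# Twitter's real (unpublished) implementation.

def is_muted(tweet_text: str, muted_keywords: set[str]) -> bool:
    """Return True if the tweet contains any keyword the user muted."""
    words = tweet_text.lower().split()
    return any(keyword.lower() in words for keyword in muted_keywords)

# A user who mutes a harassing hashtag simply never sees those tweets.
muted = {"#examplehashtag"}
print(is_muted("look at this #examplehashtag", muted))  # True
print(is_muted("an ordinary, unrelated reply", muted))  # False
```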

  In early 2017, Ed Ho, the company’s vice president of engineering, promised that “making Twitter a safer place is our primary focus and we are now moving with more urgency than ever.” A few days later, another round of improvements rolled out: tools that make it harder for banned users to create new profiles; a “safe search” that removes potentially sensitive content and tweets from people you’ve blocked or muted; and a content filter that collapses replies it deems abusive or low-quality.9 As I write this, Twitter is promising that even more features are on the way.

  But Yiannopoulos kept busy for the rest of 2016, too. The day he was banned from Twitter, he showed up at the Republican National Convention in Cleveland wearing a bulletproof vest and the biggest grin you’ve ever seen. Because by this point, being banned wasn’t a problem. It was a blessing. “It’s fantastic,” he told writer Laurie Penny that day. “It’s the end of the platform. The timing is perfect. I thought I had another six months, but this was always going to happen.” 10

  The reason Yiannopoulos was so gleeful was simple: he’d already gotten what he needed from Twitter. He had an audience of angry young men ready to do his bidding. And being banned gave him a new way to cry victim—to cast himself as a free-speech crusader silenced by the lefties in Silicon Valley. On that day, his status as a poster child for the alt-right movement—or, as those of us unwilling to sugarcoat call it, neofascism—seemed cemented. He had won.

  After the convention, Yiannopoulos took his signature brand of bronze-skinned, designer-sunglassed nihilism on a college speaking tour. In December 2016, at the University of Wisconsin–Milwaukee, he put the name and photo of a transgender student up on the screen behind him, and proceeded to mock her, misgender her, and tell the crowd that she was actually just trying to force herself into women’s locker rooms. The woman, Adelaide Kramer, was in the audience that night. She was petrified. “I didn’t know if I was going to get attacked or not. I was just like, ‘Dear god, I hope nobody recognizes me,’” she told Broadly.11

  In February 2017, Yiannopoulos was also invited to speak at UC Berkeley—where, according to a number of media outlets, he planned to use his time on stage to reveal the names and personal information of students who are undocumented immigrants.12 He never got a chance: 1,500 people came to protest, and a small group of “black bloc” protesters—masked, anarchist demonstrators—turned it into a riot, breaking windows and setting fires. His speech was canceled. His fans were outraged—including Donald Trump himself, who tweeted, “If U.C. Berkeley does not allow free speech and practices violence on innocent people with a different point of view—NO FEDERAL FUNDS?” 13 Yiannopoulos might have been banned from Twitter, but his power to harm others? It was going strong. (At least until later that month, when a 2016 video of Yiannopoulos condoning pedophilia resurfaced, and he lost both a lucrative book deal and his Breitbart job.)

  Racist, antiwoman agitators like Yiannopoulos are often framed as having sprung up in 2016—part of a presidential election that broke every rule in the book. But if we want to understand this story, we have to go back much further, long before even Gamergate or Lindy West’s harassment. We have to start with the idea of Twitter itself.

  Most social networks are built on the concept of reciprocal relationships: a user requests to be your friend on Facebook, or your connection on LinkedIn, and you either approve or deny that request. Twitter, in contrast, is nonreciprocal by default: unless you specifically lock down your account as private, anyone can follow you, or tweet at you—no prior approval needed.
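
  The difference between the two models is easy to see in code. Here is a minimal sketch in Python, using invented names rather than any platform’s real API: the reciprocal model requires an approval step before a connection exists, while the nonreciprocal one takes effect the instant someone follows you.

```python
# Illustrative models only; all names are invented, not a real API.

class ReciprocalNetwork:
    """Facebook/LinkedIn-style: a connection exists only once approved."""
    def __init__(self):
        self.pending = set()      # (requester, target) awaiting approval
        self.connections = set()  # approved, mutual relationships

    def request(self, requester, target):
        self.pending.add((requester, target))

    def approve(self, requester, target):
        if (requester, target) in self.pending:
            self.pending.remove((requester, target))
            self.connections.add(frozenset({requester, target}))


class NonreciprocalNetwork:
    """Twitter-style: anyone can follow anyone; there is no approval."""
    def __init__(self):
        self.follows = set()  # (follower, followed) pairs

    def follow(self, follower, followed):
        self.follows.add((follower, followed))  # effective immediately


twitter_like = NonreciprocalNetwork()
twitter_like.follow("@stranger", "@you")  # "@you" was never asked
```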

  That’s because Twitter’s central organizing principle isn’t relationships. It’s updates. “It started with a fascination with cities and how they work, and what’s going on in them right now,” recalled cofounder Jack Dorsey in an LA Times interview in 2009. He started by tinkering with visualizations of all the people who were roaming a city at a given moment, squawking their whereabouts and activities into CB radios or over cell phones: bicycle messengers, truck couriers, taxis, and emergency vehicles. “But it’s missing the public. It’s missing normal people,” he realized.14 That’s the gap Twitter aimed to fill: “real-time, up-to-date, from the road” posts, “akin to updating your [instant messenger] status from wherever you are, and sharing it,” Dorsey wrote back in the spring of 2006, when he shared an early sketch of the service, then in a private beta release, on Flickr.15

  People loved it. At the 2007 South by Southwest Interactive conference the following spring, thousands of attendees from tech startups, along with bloggers, started using the service—and suddenly, the platform jumped from sending 20,000 to 60,000 messages per day.16 It wasn’t long before Twitter became a household name for all kinds of people—most of them looking very little like the four young, white men from San Francisco who’d founded the company. By 2009, women were using the service about as frequently as men. By 2010, the platform had seen massive growth: 50 million tweets were being sent per day by March, compared to just 2.5 million a day in January 2009. That influx of new users was more diverse too: in 2010, just 5 percent of white internet users in the United States were on Twitter, while 13 percent of black internet users were. By 2011, that gap was even larger: a full 25 percent of black American internet users reported being on Twitter, compared with just 9 percent of white American internet users.17

  In the early days, Twitter’s “status updates” concept was explicitly stated in the interface itself, which described the service as “a global community of friends and strangers answering one simple question: What are you doing?” 18 Only, the more Twitter grew, the less often people’s tweets answered that question. Because, it turns out, all those “normal people” that Dorsey hoped to attract to Twitter didn’t just want to broadcast where they were getting lunch or when they were leaving for work in the morning. They also wanted to banter, share news, tell jokes, make friends, promote their work, and a million other things. As they did, they developed their own techniques for communicating, like adding “RT” to the beginning of a tweet to signify that it was a retweet of someone else’s post.

  Twitter responded by building many of those features into the product: retweets became a button rather than a manual copy-paste. Link shortening became standard, so that long URLs wouldn’t take up half of a tweet’s 140-character limit. Users started tagging conversations about a specific topic with a hashtag, like #design, so Twitter added a feature that automatically linked hashtags to a search page listing every tweet that included that tag.
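
  The hashtag feature, at least in spirit, takes only a few lines to reproduce. Here is a rough sketch in Python; the search-URL pattern is an assumption for illustration, not Twitter’s documented markup:

```python
import re

# Match "#" followed by word characters, e.g. "#design".
HASHTAG = re.compile(r"#(\w+)")

def link_hashtags(tweet_text: str) -> str:
    """Turn each hashtag into an HTML link to a search for that tag."""
    return HASHTAG.sub(
        lambda m: f'<a href="/search?q=%23{m.group(1)}">#{m.group(1)}</a>',
        tweet_text,
    )

print(link_hashtags("Loving this #design conversation"))
# Loving this <a href="/search?q=%23design">#design</a> conversation
```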

  But during all of these product improvements, Twitter built precious few features to prevent or stop the abuse that had become commonplace on the platform. For example, the ability to report a tweet as abusive didn’t come until a full six years after the company’s founding, in 2013—and then only after Caroline Criado-Perez, a British woman who had successfully led a campaign to get Jane Austen onto the £10 note, was the target of an abuse campaign that generated fifty rape threats per hour.19 By then, it wasn’t just Criado-Perez who was experiencing high-volume, high-profile harassment on the platform. Abuse was everywhere—and Twitter, long touting itself as “the free speech wing of the free speech party,” 20 had little interest in moderating it.

  By the summer of 2014, abuse on Twitter had crescendoed into Gamergate, an episode that, on its face, was about “ethics in video game journalism”—but, at its core, was a sustained, months-long harassment campaign. It started when an ex-boyfriend of game developer Zoe Quinn released a series of manifesto-like blog posts about the ways Quinn had wronged him, insisting that Quinn had cheated on him with industry journalists to get favorable reviews of her game, Depression Quest. What it became was a mob of thousands spewing rape and death threats at a series of women involved in video games—threats that were so specific, and so violent, that Quinn and others, like feminist video game critic Anita Sarkeesian, felt unsafe in their homes.21 And who was stoking this movement? None other than Milo Yiannopoulos, who wrote a series of pro-Gamergate articles on Breitbart, using Twitter to specifically call out and threaten women involved.

  “Twitter has not just tolerated abuse and hate speech, it’s virtually been optimized to accommodate it,” concluded BuzzFeed News writer Charlie Warzel in August 2016. Warzel had just spent months talking with past employees at Twitter—employees who called the platform “a honeypot for assholes” and said that the product, with its nonreciprocal relationships and anything-goes approach to speech, was “basically built for maximum ease of trolling.” 22

  The root of the problem, one former senior employee told Warzel, was precisely what we’ve seen elsewhere in this book: a homogeneous leadership team that spent years “tone-deaf to the concern of users in the outside world, meaning women and people of color.” Another, a former engineering manager named Leslie Miley, added, “If Twitter had people in the room who’d been abused on the internet—meaning not just straight, white males—when they were creating the company, I can assure you the service would be different.” 23

  It’s not that Twitter’s founders had bad intentions. It’s that they built a product centered on a specific vision: an open platform for short updates from anyone, about anything. And because abuse wasn’t really on their radar, they didn’t spend much time working out how to prevent it—or even take it seriously when it happened. It wasn’t part of the vision.

  It wasn’t part of the vision, that is, until Twitter started to fail.

  Reports of Twitter’s death have been commonplace since late 2015,24 when, two years after the company went public, efforts to increase user numbers had stalled, and the company posted a net quarterly loss of $132 million.25 According to many in the industry, Twitter’s failure to fix its abuse problem is part of the reason it’s struggling—and why no one wants to buy the company, despite Twitter’s best efforts to sell. In 2016, Alphabet (Google’s parent company) turned it down. Then Salesforce and Disney did too—both, at least in part, because of Twitter’s reputation for harassment. According to sources who spoke with Bloomberg, Disney pulled out of talks “out of concern that bullying and other uncivil forms of communication on the social media site might soil the company’s wholesome family image.” 26 Around the same time, CNBC’s Mad Money host Jim Cramer said that Salesforce CEO Marc Benioff was turned off the company by “the hatred,” and that Salesforce was concerned that “the haters reduce the value of the company.” 27

  As of this writing, Twitter is still releasing update after update finally aimed at curbing abuse on the platform. The problem now is that many of those updates are ill considered and hastily made. One example is Twitter lists, which let any user create a collection of accounts they want to track together. Say you run a magazine: you might create a list of all the Twitter users who are magazine staff and freelancers, so you can more easily retweet them or track conversations about the magazine. Other users can also subscribe to lists—if you’re a fan of the magazine, you might subscribe to its list of contributors so you can easily keep up with their posts.
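
  To make the structure concrete, here is a rough sketch of what a list boils down to, with invented Python names rather than Twitter’s real API or data model:

```python
# An illustrative data model for lists -- invented names only.

class TwitterList:
    def __init__(self, owner: str, name: str):
        self.owner = owner
        self.name = name
        self.members = set()      # accounts the owner has added
        self.subscribers = set()  # accounts following the list

    def add_member(self, account: str):
        # The owner adds members unilaterally; the member is not asked.
        self.members.add(account)

    def subscribe(self, account: str):
        self.subscribers.add(account)


staff = TwitterList(owner="@the_magazine", name="Contributors")
staff.add_member("@staff_writer")
staff.subscribe("@loyal_reader")
```

  The design choice that matters here is that adding a member never asks the member’s permission; that one-sided step is exactly what trolls exploit.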

  But being put on a list isn’t always benign. In the real world, lists are often used as a tool for abuse; for example, a troll might add a bunch of women to a list called “feminazi watch list,” and then share it with all their friends. Suddenly, a group of trolls is monitoring the women on that list—creating the perfect breeding ground for harassment.

  In the past, you’d be notified if someone added you to a list. But in February 2017, Twitter changed things: “We want you to get notifications that matter,” the company announced. “Starting today, you won’t get notified when you are added to a list.” 28 The idea was simple: getting notifications that you’d been added to an abusive list could be frustrating, so Twitter decided the best course of action was to remove all list addition notifications.

  The backlash was immediate: “This is sweeping a problem under the rug,” replied one user. “This is blinding the vulnerable,” said another. “When I get added to lists with names like ‘stupid bitches’ I would like to be notified. Or not added at all,” yet another added.

  Two hours later, Twitter killed the change. “This was a misstep. We’re rolling back the change and we’ll keep listening,” the company tweeted.29

  I’m glad Twitter canceled this change. But the fact is that teams invested time and money in making a system modification and announcing it to the world. They updated code. They updated email systems. They updated help documentation. All without seeing the massive flaws in their plan. Because, for all of Twitter’s sudden interest in safety—and despite plenty of people who undoubtedly have good intentions working on the product—the company is still, at its core, driven by a vision that makes sense for the people who designed it, and fails far too easily for many of the rest of us.

  AWFULNESS, IN MODERATION

  Twitter is the place where some of the worst online harassment plays out, but it’s often not where that harassment brews. For that, we turn to Reddit, the popular social news site where users create forums around a topic. These topical forums are called subreddits, and there are thousands of them—from general categories like “r/politics” to niches like “r/britishproblems.” In each of these subreddits, anyone can submit and vote on content anonymously.

 
