If you’re not a Reddit user—or “Redditor,” in the parlance of the site—then you might think it’s some kind of fringe community. But it’s actually known as “the front page of the internet”—the place where millions of people start their day online. In 2016, it was the seventh most popular site in the United States, garnering north of half a billion visits from more than 250 million unique people each month. For reference, that puts its popularity, as of this writing, somewhere just behind Wikipedia—in other words, not very niche at all.30
It’s also, in the words of journalist Sarah Jeong, “a flaming garbage pit.” 31
Like Twitter, Reddit was founded with a vision of free speech: anonymous users posting and sharing content about whatever topics they chose. When it launched in 2005, all posts appeared on a general Reddit homepage. Other users could then vote each post up or down, so that—in theory, at least—the best content made it to the top of the page. But as traffic grew, Reddit’s founders, Alexis Ohanian and Steve Huffman—then in their early twenties, just a few months out of college at the University of Virginia—started fielding complaints: people interested in programming reported that the content they wanted was no longer surfacing to the top, so they couldn’t find it. Ohanian and Huffman responded by creating “r/programming,” the very first subreddit.
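To see how that mechanism works in miniature, here's a simplified sketch (not Reddit's actual ranking code; the posts and vote counts are invented): each post tallies upvotes and downvotes, and the page sorts by net score, so heavily upvoted posts rise to the top.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    title: str
    upvotes: int
    downvotes: int

    @property
    def score(self) -> int:
        # Net score: upvotes count for the post, downvotes against it.
        return self.upvotes - self.downvotes

def front_page(posts: List[Post]) -> List[Post]:
    # Highest net score first, so (in theory) the best content rises to the top.
    return sorted(posts, key=lambda p: p.score, reverse=True)

# Hypothetical posts in a forum like r/programming
posts = [
    Post("Why I switched text editors", upvotes=120, downvotes=30),
    Post("Ask: what language should I learn first?", upvotes=45, downvotes=5),
    Post("Shameless self-promotion", upvotes=10, downvotes=80),
]
for post in front_page(posts):
    print(post.score, post.title)
```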
But, as with much of the internet, it didn't take long for subreddits to turn offensive. For years, Reddit hosted forums like "r/jailbait," which featured sexualized photos of teen girls stolen from their social media profiles; "r/creepshots," where users posted "upskirts" and other sexualized photos of women that had been taken secretly; "r/beatingwomen," which featured graphic depictions of violence against women; and "r/coontown," an openly racist, antiblack community.
A few patently awful subreddits were shut down over the years—such as r/jailbait, in 2011, and r/creepshots, in 2012. But it wasn’t until June 2015, ten years after it launched, that Reddit started enforcing a more thorough antiharassment policy: “We will ban subreddits that allow their communities to use the subreddit as a platform to harass individuals when moderators don’t take action,” Reddit announced in a statement on the site. “We’re banning behavior, not ideas.” 32 Soon after, five communities that had repeatedly harassed people were banned.
These changes didn’t go over well, to put it mildly: by the start of July, a revolt had begun, with thousands of moderators setting the subreddits they managed to “private,” effectively blacking them out—a major blow to Reddit’s advertising revenue. Many users blamed CEO Ellen Pao for the crackdowns, barraging her with racist and misogynist messages. A few days later, she resigned.
So how did a site that’s used by so many people—one of the most popular sites on the entire web—not just become a haven for awful content, but create a large, vocal community that would revolt against removing that content? Observing the situation from the outside, wrote Jeong at the time, “it looks like a form of collective insanity, a sign that Reddit itself is overrun with the denizens of r/CoonTown, utterly broken beyond repair. . . . How can such a mainstream site appear to be so fringe?” 33
But the answer isn’t that most Reddit users want a site overrun with racist bile and violent sexism. It’s that, like Twitter, the very feature that allowed Reddit to grow is the one that makes the harassment problem impossible to fix: the subreddit.
Each subreddit has one or more moderators—people who set ground rules for forum participants and are responsible for weeding out posts that don't comply. Reddit itself stays hands-off, an approach the company has long touted as the ideal way to let a community thrive. In a 2014 interview, Erik Martin, Reddit's first community manager and then the company's general manager, said, "We try to give them tools to customize their subreddits but to be honest, most of the tools made for moderators were made by other moderators. The community creates what it needs." 34
Martin saw this hands-off approach as a fundamental strategy for Reddit: “Make the users do the hard part,” 35 he once said. But moderators aren’t paid Reddit employees; they’re volunteers. And by the time Reddit changed its policy in 2015, many of those volunteers had decided that the part Reddit required them to play had become too hard.
Back in August of 2014, just after Mike Brown was shot by police and Ferguson, Missouri, erupted into protests, the subreddit r/blackladies, a community for black women, was inundated with hateful, racist posts. “The moderators . . . tried to delete the hateful content as best they could, but the entire experience exhausted and demoralized them,” wrote Aaron Sankin in the Daily Dot. “They contacted Reddit’s management, but were told that, because the trolls weren’t technically breaking any of the site’s core rules, there was . . . nothing Reddit would do about it.” 36
When management rejected their requests for help, the moderators published an open letter on the r/blackladies subreddit, demanding that the problem be addressed. “Moderators volunteer to protect the community, and the constant vigilance required to do so takes an unnecessary toll,” they wrote. “We need a proactive solution for this threat to our well-being. . . . We are here, we do not want to be hidden, and we do not want to be pushed away.” 37 More than seventy other subreddit moderators cosigned the post.
You might think these moderators would have been pleased, then, about the changes in policy in 2015. But the problem was that, despite the new policies, Reddit still saw its role as fundamentally hands-off: moderators were the ones responsible for enforcing the new rules. As Jeong put it, what’s breaking Reddit is “the same cost-efficient model that made it rise to the top.” 38 She wrote:
Reddit’s supposed commitment to free speech is actually a punting of responsibility. It is expensive for Reddit to make and maintain the rules that would keep subreddits orderly, on-topic and not full of garbage (or at least, not hopelessly full of garbage). Only by giving their moderators near absolute power (under the guise of “free speech”) can Reddit exist in the first place.39
As I write this, Reddit’s moderation problems continue. At the start of February 2017, the site banned two subreddits run by the alt-right movement—ostensibly not for their terrifying hate speech (of which there was plenty), but rather because they violated a core tenet of the site, one of the only guiding principles it has: no doxing. That is, users can’t release the “documents”—anything with personally identifiable information—of another person.40 Just two days earlier, though, Reddit cofounder Ohanian had written an open letter condemning President Trump’s executive order on immigration, and sharing the story of his own immigrant family.41 Many users connected that post with the alt-right ban and accused Ohanian of politicizing the platform. Whether the timing was coincidental doesn’t really matter: as long as Reddit maintains a “free speech” ideology that relies on unpaid moderators to function, it will continue to fall apart—and the victims will be those on the receiving end of harassment.
FAKING NEWS
Speaking of broken platforms and the American president, I would be remiss not to talk about perhaps the biggest story related to technology and the 2016 election: “fake news.” As I write this today, the term has pretty much lost its meaning: the president and his staff now use it to vilify any press coverage they don’t like. But during the fall of 2016, actual fake news was all over Facebook: The Pope endorses Donald Trump! Hillary Clinton sold weapons to ISIS!
According to a BuzzFeed News analysis, in the run-up to Election Day 2016, the top twenty fake stories from hoax and hyperpartisan sources generated more engagement on Facebook than the top twenty election stories from major news sites: 8.7 million shares, comments, and reactions, versus 7.4 million from real news sources.42
Most reports traced the problem back to May of 2016, when tech news site Gizmodo published an article based on an interview with former members of Facebook's Trending team. One of those staffers claimed that the small group of curators—editors who worked for Facebook on contract through third-party recruiting firms—had been biased against conservative news, routinely preventing topics like "Rand Paul" or "Glenn Beck" from appearing in the "Trending" section in the top-right corner of users' Facebook pages.43 Political pundits went wild, and pretty soon, Senator John Thune of South Dakota—a ranking Republican—was demanding that Facebook explain itself to the Committee on Commerce, Science, and Transportation, where he was chair.44
These curators hadn’t always existed. In 2014, when Facebook launched the Trending feature, an algorithm decided which stories made the cut. But, as at Reddit, the Ferguson protests turned out to be a lightning rod: while #blacklivesmatter and #ferguson lit up Twitter for days, Facebook’s Trending section was full of those feel-good Ice Bucket Challenge videos. Facebook, which had been making a play to be seen as a credible news source, didn’t like being criticized for missing the country’s biggest news story at the time, so it decided to bring in some humans to help curate the news. That’s how the team that Gizmodo profiled in 2016 came to be.
In the wake of the allegations, Facebook launched its own investigation, finding “no evidence of systemic bias.” But it didn’t matter: in August, the Trending team was suddenly laid off, and a group of engineers took its place to monitor the performance of the Trending algorithm.45 Within three days, that algorithm was pushing fake news to the top of the feed: “BREAKING: Fox News Exposes Traitor Megyn Kelly, Kicks Her Out for Backing Hillary,” the headline read. The story was fake, its description was riddled with typos, and the site it appeared on was anything but credible: EndingTheFed.com, run by a twenty-four-year-old Romanian man who copied and pasted stories from other conservative-leaning fake-news sites. Yet the story stayed at the top of the Trending charts for hours. Four different stories from EndingTheFed.com went on to make it into BuzzFeed’s list of most-shared fake-news articles during the election.46 This time, conservative pundits and politicians were silent.
You might think this whole episode was a blunder: Facebook was put under pressure and made a rash decision, and fake news was the unintended consequence. But the truth about Trending isn’t so straightforward. Former curators told Gizmodo that, after working there for a while, they realized they hadn’t really been hired to be journalists. They were meant to “serve as training modules for Facebook’s algorithm.” While there, they were also told not to mention publicly that they were working for Facebook. “I got the sense that they wanted to keep the magic about how trending topics work a secret,” one said. “We had to write in the most passive tense possible. That’s why you’d see headlines that appear in an alien-esque, passive language.” 47
While the curators kept Trending’s headlines faceless and machinelike, engineers were working behind the scenes, tweaking the machines themselves—the algorithms powering the feed of topics that the curators were responsible for picking through. Only, the curators had no direct contact with the engineers—no way to give them feedback on the system, or to confirm whether the machine was getting better or worse.
After the bias allegations, Facebook started testing a new version of Trending, one that replaced curator-written summaries with a simple number denoting how many people were talking about that topic. It also took away editors’ ability to change the source associated with a topic. For example, if the algorithm selected Ending the Fed’s story among several on the same topic—including, say, stories by Fox News or CNN—the editors had no way of shifting the Trending link to one of the other sources.
The tests, with both internal staff and small, randomly selected groups of public users, didn’t go well. “The feedback they got internally was overwhelmingly negative. People would say, ‘I don’t understand why I’m looking at this. I don’t see the context anymore.’ There were spelling mistakes in the headlines. And the number of people talking about a topic would just be wildly off,” a former curator told Slate. The curators expected the new version to be pulled and improved. Instead, they lost their jobs—and that botched version went to the public.48
In other words, Facebook did precisely what it had always intended with Trending: it made it machine-driven. The human phase of the operation just ended more quickly than expected. And when we look closer at Facebook’s history, we can see that this wasn’t a surprising choice at all. It’s right in line with the values the company has always held.
When Facebook launched as a website for college students, back in 2004, users didn't have "feeds." They had profile pages. If you wanted to see what people were up to, you'd go to their "wall" and see what was on it; the content wouldn't come to you. In 2006, the introduction of the News Feed changed that: when you logged into Facebook, you'd get a stream of the latest actions your friends had taken there, like posting a photo, joining a group, or writing a status update. As the site grew more popular, people's networks got bigger—big enough that people couldn't keep up with everything in their feed anymore. Over time, Facebook moved away from the reverse-chronological News Feed and toward an algorithmically ordered one. Rather than seeing all your friends' posts, you saw the ones Facebook decided were the most relevant to you. "Our whole mission is to show people content that we think that they find meaningful," said Adam Mosseri, Facebook's vice president of product management, in a 2015 interview with Time. "Recency is one important input into what people find meaningful, but we have found over and over again that it's not the only one." 49
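To make that shift concrete, here's a minimal, purely illustrative sketch. Facebook's actual ranking system is proprietary and vastly more complex; the signals and weights below are invented. A reverse-chronological feed sorts by timestamp alone, while an algorithmically ordered feed sorts by a relevance score in which recency is only one input.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List

@dataclass
class Post:
    author: str
    posted_at: datetime
    affinity: float    # hypothetical: how often the viewer interacts with this author
    engagement: float  # hypothetical: likes and comments the post has already drawn

def chronological_feed(posts: List[Post]) -> List[Post]:
    # The original News Feed model: newest first, nothing else considered.
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)

def ranked_feed(posts: List[Post], now: datetime) -> List[Post]:
    # An algorithmically ordered feed: recency is just one input into "meaningful."
    def score(p: Post) -> float:
        hours_old = (now - p.posted_at).total_seconds() / 3600
        recency = 1.0 / (1.0 + hours_old)  # decays as the post ages
        # The weights here are invented for illustration only.
        return 0.5 * p.affinity + 0.3 * p.engagement + 0.2 * recency
    return sorted(posts, key=score, reverse=True)

# A close friend's older post can outrank a stranger's brand-new one.
now = datetime(2016, 11, 1, 12, 0)
posts = [
    Post("close friend", now - timedelta(hours=8), affinity=0.9, engagement=0.6),
    Post("distant acquaintance", now - timedelta(minutes=5), affinity=0.1, engagement=0.2),
]
print([p.author for p in chronological_feed(posts)])   # newest first
print([p.author for p in ranked_feed(posts, now)])     # "relevance" first
```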
At this point, most of us accept the News Feed algorithm, even if we don't quite know what it's doing. That's just how Facebook works. But it didn't have to be how Facebook works: users could have been given more power to choose whose posts they saw most, or to keep their feeds chronological. They could have been allowed to split their feed into multiple views, for different topics. They could have been given filter options, such as choosing to see only original posts, not shared content. There are many, many ways Facebook could have solved the problem of people having more content in their feed than they could keep up with. But rather than solving it with more user control, more human influence, Facebook solved it with machines.
The same is true about the allegations of bias on the Trending team: according to Gizmodo’s report, the people involved were nearly all in their twenties and early thirties, and had attended primarily Ivy League colleges or other elite East Coast schools. Most of them leaned liberal. If Facebook was concerned about bias, it could have filled these roles with a more diverse team—people from a variety of backgrounds, ages, and, yes, political leanings. It could have updated the list of news publications that editors relied on to decide whether something was a national story. It could have adjusted editorial oversight. Hell, it could have made the news curators part of the product team, and involved them in the process of improving the algorithm.
But instead, Facebook once again decided to just let the machines sort it out, and pick up the pieces later. Only this time, there was a lot more at stake than users missing a few photos from their favorite friends, or seeing too many updates from a friend’s cousin who they hung out with once at a wedding.
These core values aren’t new. In fact, if you ask Mark Zuckerberg, they’re the core of the company, and always have been. Back when Facebook filed for IPO, in 2012, he lauded those values in a letter to investors: “As most companies grow, they slow down too much because they’re more afraid of making mistakes than they are of losing opportunities by moving too slowly,” he wrote. “We have a saying: ‘Move fast and break things.’ The idea is that if you never break anything, you’re probably not moving fast enough.” 50
Zuckerberg famously calls this approach “the Hacker Way”: build something quickly, release it to the world, see what happens, and then make adjustments. The idea is so ingrained in Facebook’s culture—so core to the way it sees the world—that One Hacker Way is even the official address of the company’s fancy Menlo Park headquarters.
That's why it was so easy for fake news to take hold on Facebook: combine the deeply held conviction that you can engineer your way out of anything with a culture focused on moving fast without worrying about the implications, and you don't just break things. You break public access to information. You break trust. You break people.
Facebook didn’t mean to make fake news a real problem, just like Twitter didn’t mean to enable harassers. But Facebook’s unquestioning commitment to the Hacker Way—to a belief system that puts technical solutions first, and encourages programmers and product teams to take risks without thinking about their implications—made it easy for it to stay blind to the problem, until it was far too late.
The bigger Facebook’s ambitions get, the riskier these cultural values become. It’s one thing to play fast and loose with people’s cat photos and status updates. It’s another when you start proclaiming that you’re “developing the social infrastructure for community—for supporting us, for keeping us safe, for informing us, for civic engagement, and for inclusion of all,” as Zuckerberg himself wrote in a 5,000-word manifesto about the future of the company published in February 2017.51 He went on to talk about a future where Facebook AI listens in on conversations to identify potential terrorists, where elected officials hold meetings on Facebook, and where a “global safety infrastructure” responds to emergencies ranging from disease outbreaks to natural disasters to refugee crises.
Are we comfortable leaving all this to a tech company—one that’s still run mostly by white guys from California, and where the people who actually have professional experience informing the public don’t even get to talk to the engineers who train machines to take their jobs? If we want a world that works for everyone—not just acolytes of the Hacker Way—we shouldn’t be.