Technically Wrong


by Sara Wachter-Boettcher


  Facebook’s interface allows users to customize their gender to whatever they’d like—but only after they’ve created a profile.

  Some digital products are starting to recognize this societal shift, and adjusting their sign-up forms to allow people to identify as whatever gender they choose. Facebook is one of them: in 2014, it updated its profiles to allow users to identify as Male, Female, or Custom. Users who select Custom can then enter whatever they’d like, or choose from a list of other common answers, like “transwoman” or “nonbinary.”

  While users can change settings once they have a profile, Facebook’s sign-up process still forces them to select Male or Female initially.

  Users can also choose to go by a gender-neutral pronoun, rather than “he” or “she”—so that Facebook will tell friends to “wish them a happy birthday,” for example.

  When Eric Meyer and I wrote Design for Real Life, we called this a compassionate and inclusive move—and I’m sure the designers behind it meant well. But what we didn’t notice is that users aren’t given these options when first signing up for Facebook. Instead, they do have to select either Male or Female before they can establish an account.

  It’s frustrating, but not surprising: companies are so used to asking for gender—and so used to people providing it—that even though Facebook clearly can support a broader range of identities within its system, the company is still forcing users through a process that just doesn’t work for everyone.

  Gender selection is also mandatory at sign-up: you literally cannot set up a Facebook account without selecting Male or Female. Now, since Facebook’s a social network, I can understand why many people want to associate with their gender; it’s a major way that humans define and categorize themselves. But there are also plenty of reasons someone wouldn’t want to list their gender—including simply not finding it relevant to the way they want to use Facebook.

  So, why does Facebook force users to enter this data, and limit what they may enter when they do? Like so many things online, it all comes back to advertising. That’s how Facebook gets its revenue, and what online advertisers pay for is targeting. The more data Facebook has about you, the more filtering options advertisers receive (and in Chapter 6, we’ll look at just how problematic those filters can be). A primary way advertisers want to filter is by gender—either because they sell a product that’s specifically geared toward one group (like bras), or because they want to customize their messaging for different audiences. When a company goes into Facebook’s advertising interface to select the types of profiles where it wants its ads to appear, it can select from three options: All, Women, or Men.

  When you remember how few people change the default settings in the software they use, Facebook’s motivations become a lot clearer: Facebook needs advertisers. Advertisers want to target by gender. Most users will never go back to futz with custom settings. So, Facebook effectively designs its onboarding process to gather the data it wants, in the format advertisers expect. Then it creates its customizable settings and ensures it gets glowing reviews from the tech press, appeasing groups that feel marginalized—all the while knowing that very few people, statistically, will actually bother to adjust anything. Thus, it gets a feel-good story about inclusivity, while maintaining as large an audience as possible for advertisers. It’s a win-win . . . if you’re Facebook or an advertiser, that is. For the rest of us, well, we can either take the deal offered—or leave Facebook entirely. At least, until we all get a lot more comfortable demanding better options.

  ENTITLED TO BETTER

  Back in Chapter 1, I mentioned the story of Dr. Louise Selby, a British pediatrician who couldn’t access her gym’s changing room—because the software used by an entire chain of fitness centers automatically coded anyone with the title “doctor” for entry to the men’s room. That’s an extreme example of bias: Who assumes all doctors are men? But titles cause more problems than we might realize.

  Who the hell needs your title in the first place? No one asks me if I’m married when I buy a sweater at the mall. But as soon as I head online, it seems like everyone needs to know whether I want things shipped to Miss or Mrs. The post office doesn’t require this. The company clearly doesn’t need to know. It’s just one more field that no one ever thought about long enough to simply get rid of. Then there are the sites that still prevent users from selecting Ms., which doesn’t imply a particular marital status—even though it’s been more than four decades since Gloria Steinem named her magazine after the term (which, incidentally, dates back to at least 1901).13

  Over in the United Kingdom, titles can get even more complicated: a rich history of barons and lords and whatnots has made its way into online databases, creating extra opportunities to confuse or misrepresent users. Thankfully, the Government Digital Service, a department launched a few years back to modernize British government websites and make them more accessible to all residents, has developed a standard guideline that solves all this pesky title business: Just don’t. Their standards state:

  You shouldn’t ask users for their title.

  It’s extra work for users and you’re forcing them to potentially reveal their gender and marital status, which they may not want to do. . . .

  If you have to use a title field, make it an optional free-text field and not a drop-down list.14
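  To make that concrete, here’s a minimal sketch in TypeScript of what the guideline implies for a form’s data model. It’s my own illustration, not code from the Government Digital Service: if a title is collected at all, it’s an optional free-text string rather than a value locked to a drop-down list.

```typescript
// A minimal sketch of the GDS recommendation, assuming a web form data model.
// Illustrative only; not code from the Government Digital Service.

interface ContactDetails {
  fullName: string; // required
  title?: string;   // optional free text: "Ms", "Mx", "Dr", or simply left blank
}

// Discouraged pattern: a fixed list the user must choose from, e.g.
// type Title = "Mr" | "Mrs" | "Miss"; // excludes Ms, Mx, Dr, and everyone else

// Accept whatever the person typed; just trim whitespace and drop empty values.
function normalizeTitle(input: string | undefined): string | undefined {
  const trimmed = input?.trim();
  return trimmed ? trimmed : undefined;
}
```

  The point isn’t the syntax; it’s that nothing in the data model forces a person to pick from someone else’s list.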

  Another option gaining steam in the United Kingdom is the gender-neutral term “Mx.,” which is now accepted by the Royal Mail, the National Health Service, and many other governmental and civic organizations. It’s a tiny thing, perhaps—to those of us who never worry about which box to tick.

  BREAKING BIASES

  While many forms build in bias, some companies are taking steps to explicitly design against it. One example is Nextdoor, a social networking service that’s designed to connect you with your neighbors—people who live on your block, or just a couple streets away. Millions of people use Nextdoor, and they post all kinds of things: sharing information about a lost pet, promoting a yard sale, planning community events, and reporting suspicious activity in the neighborhood.

  It’s that last one that was giving Nextdoor a bad rap back in 2015, though. In communities across the United States, residents were posting warnings about “sketchy” people that contained very little information—other than noting the person’s race. Many of them reported mundane activities: a black man driving a car, or a Hispanic man walking a dog. Nextdoor’s CEO, Nirav Tolia, started hearing about the problem from groups in Oakland, California, where he had worked with civic leaders, police, and community groups in the past. In fact, in the fall of 2015, Oakland vice mayor Annie Campbell Washington had even asked the city’s departments to stop using Nextdoor to communicate with citizens, unless the profiling problem was addressed.15

  That same fall, Tolia started meeting with advocacy groups and city officials from Oakland to develop a solution. And that solution came in the shape of none other than form fields.

  See, back in 2015, Nextdoor’s Crime & Safety report was simple: just a blank form with a subject line. Users could, and did, write pretty much whatever they wanted—including making all those reports about “sketchy” people of color. It wasn’t just the outright profiling reports that were problematic either. Because the form required so little information, many users were also reporting real safety concerns in ways that could encourage racial profiling, and that weren’t very helpful for their neighbors.

  For example, a report might detail a crime in the neighborhood, such as a mugging or theft, and include information about the suspect. But that description would often be limited to race and age, rather than including other defining details. As a result, neighbors were encouraged to be suspicious of anyone who fit the vague description—which often meant unfounded suspicion of all people of color.

  So, one of the community groups from Oakland that was involved in the working sessions, Neighbors for Racial Justice, came up with an idea: what if the form itself could prevent profiling posts, simply by prompting users to provide better information, and rejecting reports that seemed racially biased?

  By January, Nextdoor was talking publicly about the racial profiling problem, and rolling out product changes to address it. The team created content that explicitly banned racial profiling. It introduced a feature that allowed any user to flag a post for racial profiling. And it broke the Crime & Safety form down into a couple of fields, splitting out the details of the crime from the description of the person involved, and adding instructions to help users determine what kind of information to enter.16

  But rather than being satisfied with a few quick tweaks, the design team then spent the next six months rebuilding the process from the ground up, and testing it along the way. They looked at how 911 dispatchers ask callers about suspects—which includes specifically asking about things like clothing, hair, and unique markings such as tattoos or scars. They continued meeting with community groups. And they designed a process that’s meant to do something most forms aren’t: slow people down, and make them think.

  In August 2016, the new user flow launched.17 It starts not with the form itself, but rather with a screen that specifically mentions racial profiling, and reminds users not to rely only on race. “Focus on behavior,” it states. “What was the person doing that made you suspicious?” Sure, a user can tap the button to move forward without reading the message—but the speed bump alone is enough to give some users pause.

  In Nextdoor’s Crime & Safety reporting tool, checks and balances are designed to prevent racial profiling.

  Once in the form, the user is presented with a range of fields, not just a big empty box. When describing a suspect, they’re prompted to fill in separate fields for the suspect’s race, age, gender, and appearance.

  Perhaps most notable is that the form won’t let you submit just anything. At multiple points along the way, the system checks the data entered and provides feedback to the user. For example, if a user focuses on race in the title of their post, the system flags that field and asks them to remove racial information from the title and include it in the description area later in the form instead.

  Within the description section, Nextdoor has built additional rules into the form to prevent profiling. Here, a user is asked to explain what the suspect looked like, including both demographics and appearance. If the user enters data in the race field, the form also requires them to fill out at least two additional fields about the suspect’s appearance: hair, top, bottom, or shoes. According to the form, this is because police say that descriptions of clothing are often the most helpful, and also prevent neighbors from suspecting innocent people.
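  The rule is simple enough to sketch in code. Here’s a rough TypeScript approximation of the check described above; it’s my own illustration of the logic, not Nextdoor’s actual implementation.

```typescript
// A sketch of the rule described above: if the race field is filled in,
// require at least two of the appearance fields (hair, top, bottom, shoes).
// Illustrative only; not Nextdoor's production code.

interface SuspectDescription {
  race?: string;
  age?: string;
  gender?: string;
  hair?: string;
  top?: string;
  bottom?: string;
  shoes?: string;
}

function validateDescription(d: SuspectDescription): string[] {
  const errors: string[] = [];
  if (d.race?.trim()) {
    const appearanceFields = [d.hair, d.top, d.bottom, d.shoes];
    const filled = appearanceFields.filter((f) => Boolean(f?.trim())).length;
    if (filled < 2) {
      errors.push(
        "If you mention race, please also describe at least two of: hair, top, bottom, shoes."
      );
    }
  }
  return errors; // an empty array means the description passes this check
}
```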

  The changes worked. Before rolling out the new forms to all of Nextdoor’s users, designers tested them in a few markets—and measured a 75 percent reduction in racial profiling.18

  DEATH BY A THOUSAND CUTS

  As Nextdoor’s results show so clearly, forms do have power: what they ask, and how they ask it, plays a dramatic role in the kind of information users will provide—or if they’ll even be able to use the service in the first place. But in many organizations, forms are still written off as simple, no big deal. Calm down. Does it really matter that you have to select “other”? In fact, here’s a small selection of comments I received when I wrote about this topic on my blog and, later, on Medium:

  People get insulted way too easily these days.

  What planet do you live on? Jesus how imbalanced and twisted your world is.

  First world problems

  Is being forced to use a gender you don’t identify with (or a title you find oppressive, or a name that isn’t yours) the end of the world? Probably not. Most things aren’t. But these little slights add up—day after day, week after week, site after site—making assumptions about who you are and sticking you into boxes that just don’t fit. Individually, they’re just a paper cut. Put together, they’re a constant thrumming pain, a little voice in the back of your head: This isn’t for you. This will never be for you.

  Aimee Gonzalez-Cameron, a software engineer at Uber, has felt this way ever since she sat down to take the SAT. The directions clearly stated that her name on the form needed to match the name on her registration precisely. Only, the form couldn’t fit her whole name. So, as the test started, she sat there, freaking out: what if her scores were invalidated because she couldn’t follow the instructions? Same with the GRE, which she took online—only this time, the system wouldn’t accept a hyphen. Over and over, her hyphenated, Hispanic last name fails to work online—so she finds herself triggering error messages, cutting off pieces of her name, and ultimately ending up managing different versions of it across every system she encounters.
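  The culprit in cases like these is usually a validation rule written for an imagined “normal” name. As a hypothetical sketch, not the actual SAT or GRE systems, the difference might look something like this:

```typescript
// Hypothetical examples of the kind of name validation that produces these
// failures; not the actual SAT or GRE systems.

// Overly strict: letters only, capped length, no hyphens, apostrophes,
// spaces, or accented characters allowed.
const strictName = /^[A-Za-z]{1,20}$/;

// More permissive: accept any non-empty string the person says is their name.
function isPlausibleName(name: string): boolean {
  return name.trim().length > 0;
}

console.log(strictName.test("Gonzalez-Cameron")); // false: the hyphen is rejected
console.log(isPlausibleName("Gonzalez-Cameron")); // true
```

  The permissive version isn’t clever; it just stops treating the shape of someone’s name as an error.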

  “‘You don’t fit on a form’ after a while starts to feel like, ‘you don’t fit in a community,’” she told me. “It chips away at you. It works on you the way that water works on rock.”19

  This is why I think of noninclusive forms as parallel to microaggressions: the daily little snubs and slights that marginalized groups face in the world—like when strangers reach out and touch a black woman’s hair. Or when an Asian American is hounded about where they’re really from (no one ever wants to take “Sacramento” as an answer).

  Lots of people think caring about these microaggressions is a waste of time too: Stop being so sensitive! Everyone’s a victim these days! Those people also tend to be the ones least likely to experience them: white, able-bodied, cisgender men—the same people behind the majority of tech products. As Canadian web developer Emily Horseman puts it, forms “reflect the restricted imagination of their creators: written and funded predominantly by a privileged majority who have never had components of their identity denied, or felt a frustrating lack of control over their representation.”20

  For those who have felt that lack of control, all those slights—the snotty error messages telling you your name is wrong, the drop-down menus that don’t reflect your race—add up. They get old. They take time. And they ultimately maintain cultural norms that aren’t serving real people.

  INTENTIONAL INTERACTIONS

  Back at Nextdoor, not all the metrics for the new Crime & Safety reporting system are positive—at least not in the way startups tend to aim for. According to Tolia, the CEO who instigated the changes, 50 percent more users are abandoning the new Crime & Safety report form without submitting it than were abandoning the old form.21 In the tech world, such a dramatic decline in use would typically be considered a terrible thing: high abandonment rates are a sign that your form is failing.

  But metrics are only as good as the goals and intentions that underlie them. And in this case, a high number of reports doesn’t lead to greater neighborhood safety. At best, extra reports create clutter—too much noise for users to sift through. That can actually make people less safe, because they become less likely to notice, or take seriously, those reports that do have merit. At worst, as we’ve seen, unhelpful reports perpetuate stereotypes and ultimately target people of color.

  The problem is that in interaction design, metrics tend to boil down to a single goal: engagement. Engagement is the frequency and depth at which a user interacts with a product: how often they log in, how many pages they view per visit, whether they share content on the site—that sort of thing. As a result, many digital product designers focus solely on increasing daily active users (DAUs) and monthly active users (MAUs): because the more often users return to the site or app, the happier those designers’ bosses (and their companies’ investors) are. If Nextdoor had stuck to that formula, it would never have agreed to make posting about neighborhood crime harder—because fewer Crime & Safety reports means fewer users reading and commenting on those reports.
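  Those engagement numbers aren’t mysterious, either; they’re simple counts over an activity log. Here’s a quick sketch of how DAUs and MAUs are typically computed, illustrative only and not any particular company’s definition:

```typescript
// A sketch of how DAU and MAU are typically computed from an activity log.
// Illustrative only; hypothetical data, not any particular company's definition.

// Maps each user ID to the ISO dates on which they were active.
const activityLog: Record<string, string[]> = {
  alice: ["2016-08-01", "2016-08-02", "2016-08-15"],
  bob: ["2016-08-02"],
};

function activeUsersOn(day: string): number {
  return Object.values(activityLog).filter((days) => days.includes(day)).length;
}

function activeUsersInMonth(monthPrefix: string): number {
  return Object.values(activityLog).filter((days) =>
    days.some((d) => d.startsWith(monthPrefix))
  ).length;
}

const dau = activeUsersOn("2016-08-02");   // 2
const mau = activeUsersInMonth("2016-08"); // 2
const stickiness = dau / mau;              // the DAU/MAU ratio many teams watch
```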

  In order to address its racial profiling problem, Nextdoor needed to think beyond shallow metrics and consider what kind of community it wanted to create—and what the long-term consequences of allowing racial profiling in its community would be. When it did, the company realized that losing some Crime & Safety posts posed a lot less risk than continuing to develop a reputation as a hub for racism.

  Sadly, most companies aren’t making these kinds of ethical choices; in fact, the designers and product managers who decide how interactions should work often don’t even realize there’s a choice to be made. Because when everyone’s talking incessantly about engagement, it’s easy to end up wearing blinders, never asking whether that engagement is even a good thing.

  The truth is that most of tech isn’t about to start questioning itself—not if it doesn’t have to. While Tolia probably meant it when he said he hated the idea of his product being used in a racist way, media pressure—ranging from local Bay Area newspapers to the national tech press—surely helped bring about such sweeping changes to the platform. That’s because, like so many tech companies, Nextdoor doesn’t make money—at least not yet. As a result, it’s more beholden to investors than to the people who use the product. And while investors hate losing engagement, they hate headlines that tell the world the company they’ve funded is a home for racial profiling even more. Same with Facebook: the site walked back (though, of course, did not remove) its real-name policy only when a large contingent of users revolted and turned the story into a PR nightmare.

 
