Technically Wrong

by Sara Wachter-Boettcher


  DESIGNING FOR INTERACTION

  First, some background: when I talk about forms and inputs, I mean anything you encounter online that’s full of text boxes, menus, selection bars, or other widgets where you tell the system who you are or what you want.

  For example, if you download an app for ordering food from local restaurants, you’re probably first asked to create an account and provide your name, email address, phone number, home address, and food preferences. Or, say you’re shopping online and you’re ready to check out. You’re taken to a screen where you enter your credit card information, shipping address, and delivery preferences. Or maybe you’re filing for a business license using your city’s municipal website, and you’re asked for information about your type of business, services, locations, and annual revenue. Each of these is an example of a digital form.

  In the tech industry, you’ll typically hear these things referred to as part of “interaction design”—the discipline of determining how an interface responds to user input: How should the system react when you click this button or tap that tab? Should we use a drop-down menu or radio buttons here? How can we make sure more people complete the sign-up process?

  These conversations almost always end up with speed and seamlessness as their primary goals—think Amazon’s one-click purchase. And to some extent, it makes sense: guiding users through a process quickly and easily is good for business, because the fewer people who get frustrated or confused, the more sales or sign-ups are completed.

  The problem is that “smooth and simple” starts to break down as soon as you’re asking users for messy, complicated information. And as you’ll see in this chapter, all kinds of everyday questions can be messy and complicated—often in ways designers haven’t predicted.

  NAMING THE PROBLEM

  Sara Ann Marie Wachter-Boettcher. That’s how my birth certificate reads: five names, one hyphen, and a whole lot of consonant clusters (thanks, Mom and Dad!). I was used to it being misspelled. I was used to it being pronounced all sorts of ways. I was even used to everyone who looks at my driver’s license commenting that it takes up two whole lines. But I didn’t expect my name to cause me so many problems online.

  As it turns out, tons of services haven’t thought much about the wide range of names out there. So, on Twitter I forgo spaces to fit my professional name in: SaraWachterBoettcher. On online bill pay, they’ve truncated it for me: Sara Wachter-Boettch. In my airline’s online check-in system, hyphens straight up don’t exist. The list goes on. It’s irritating. It takes some extra time (do I enter a space between my last names, or just squish them together?). I see more error messages than I’d like. But it’s still a minor inconvenience, compared to what other people experience.
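
  The failures described above usually trace back to validation rules that encode a narrow idea of what a name looks like. The sketch below is purely illustrative—the pattern, limits, and function name are invented for this example, not any real site’s code—but it shows how such a rule quietly rejects real names:

```python
import re

# Hypothetical sketch of an overly strict name rule: letters only,
# capped at 20 characters. Many real systems use something similar.
NAME_PATTERN = re.compile(r"^[A-Za-z]{1,20}$")

def validate_name(name: str) -> bool:
    """Return True only if the name fits the narrow pattern."""
    return bool(NAME_PATTERN.match(name))

validate_name("Smith")              # accepted
validate_name("Wachter-Boettcher")  # rejected: contains a hyphen
validate_name("Kills the Enemy")    # rejected: contains spaces
```

  Nothing about hyphens or spaces makes a name less real; the rule simply wasn’t written with those names in mind.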

  Take Shane Creepingbear, a member of the Kiowa tribe of Oklahoma. In 2014 he tried to log into Facebook. But rather than being greeted by his friends’ posts like usual, he was locked out of his account and shown this message:

  Your Name Wasn’t Approved.

  It looks like that name violates our name standards. You can enter an updated name again in 1 minute. To make sure the updated name complies with our policies, please read more about what names are allowed on Facebook.1

  Adding to the insult, the site gave him only one option: a button that said “Try Again.” There was nowhere to click for “This is my real name” or “I need help.” Just a clear message: you don’t belong here. And to top it off, he got the message on Indigenous Peoples’ Day, otherwise known as Columbus Day—a day many Native Americans (and a good number of the rest of us) see as celebrating a genocide that started in 1492 and continued across America for centuries.

  It wasn’t just Shane Creepingbear whose name was rejected. Right around the same time, Facebook also rejected the names of a number of other Native Americans: Robin Kills the Enemy, Dana Lone Hill, Lance Brown Eyes. (In fact, even after Brown Eyes sent in a copy of his identification, Facebook changed his name to Lance Brown.)

  Creepingbear wasn’t having it. After the incident, he wrote:

  The removal of American Indians from Facebook is part of a larger history of removing, excluding, and exiling American Indians from public life, and public space. . . . This policy supports a narrative that masks centuries of occupation and erasure of Native culture.2

  When we look closely, there’s a lot to unpack: First, why is Creepingbear required to use his real name? Second, how did Facebook decide to flag his name as fake? And third, why was the message he received so useless and unkind? It turns out, all of these are connected.

  Unlike, say, Twitter, which allows you to select whatever username you want, Facebook requires everyone to use their real name. The official line from Facebook is that this policy increases users’ safety because you always know who you’re connecting with. And it’s true, in some ways; for example, the anonymous trolls who threaten women on Twitter are mostly absent from Facebook.

  But the real-name policy has also received intense criticism from groups like the LGBTQ community, political refugees, people who’ve been victims of stalking and are seeking safety from abusers, and many others who argue that using their legal names on Facebook would either compromise their safety or prevent them from expressing their authentic identity.

  One such group is drag queens and kings. In late 2014, around the same time that Creepingbear’s profile was flagged, hundreds of people from San Francisco and Seattle’s drag communities were locked out of their accounts. Someone had reported their names as fake. Facebook demanded that these users—many of whom use primarily their drag names and do not want their birth names associated with their accounts—change their profile to match their “real” names. The LGBTQ community revolted.

  Facebook responded by telling the drag queens and kings that they could use their drag names to create fan pages instead of profiles. Fan pages are accounts set up for businesses and performers to promote their work; they’re what you get when you “Like” Beyoncé or Burger King. The drag community rejected this idea, saying that many of the people involved were not public figures, but rather private people whose real-life networks were based on using, and being called by, their drag names. They continued to protest.

  Finally, after a meeting at Facebook’s headquarters attended by several prominent members of San Francisco’s LGBTQ community, the company agreed to revise its policy from one based on “real names” to one it calls “authentic names”: the names users go by in everyday life. “For Sister Roma, that’s Sister Roma. For Lil Miss Hot Mess, that’s Lil Miss Hot Mess,”3 wrote chief product officer Chris Cox.

  These changes helped, but they didn’t go far enough—because a large number of drag queens and kings continued to find their names flagged as fake throughout 2015. Which leads to the second question Creepingbear’s experience first posed: How does Facebook decide which names are authentic and which aren’t?

  For the most part, Facebook relies on others’ reports of fake names. Back when both Creepingbear’s experience and the case of the drag queens and kings hit the news, Facebook’s process for taking those reports was pretty simple: A user would go to the profile of the person they wanted to flag and select “Report.” They’d then be asked for a reason for the report, from “This timeline is pretending to be me or someone I know” to “This timeline is full of inappropriate content.” If they selected “This timeline is using a fake name,” they’d then be asked how they wanted to address the problem: report the profile to Facebook administrators; unfriend, unfollow, or block the profile in question; or send the person a message. The flagging process was seamless and easy to complete—for the person making the report, at least.

  For the person who was reported? Well, that’s another story. Once a report was submitted, it went to a Facebook administrator for review. That administrator decided whether the account appeared to violate policy, and if so, locked the account. The next time the user logged in, they’d receive the same message that Creepingbear got. People like him—people who weren’t trying to game the system—suddenly had extensive work to do: First they had to figure out how to get help with the problem. Once they sorted that out, they then had to submit documents that proved their names were what they said they were. Not only was the process cumbersome, but many people were also uncomfortable sending copies of official identification to Facebook—no matter how many times Facebook assured them it would delete their IDs as soon as they were verified.

  If you’ve spent any time reading about online harassment, it won’t surprise you to know that many people misused the reporting feature in order to abuse others—flagging, say, hundreds of drag queens, or all the people involved in a Native protest movement, as fake names in a single day. Suddenly, Facebook’s assertion that its real-name policy prevents abuse didn’t feel quite so believable.

  To Facebook’s credit, it did recognize that this process was a bit too easy for users reporting names, and too cumbersome for those on the other end of the reports. In December 2015, it rolled out updates designed to take some of the burden off people accused of using a fake name, and put more on those who make a report. In the revised process, users making a report must identify a reason they’re submitting a profile. They’re then asked to fill in a text box with additional information before Facebook will allow them to submit their report. All said, the process forces those making reports to slow down, making it a little harder to flag profiles en masse—while also giving Facebook more context during the review process.

  To a lesser extent, Facebook also automatically flags a name if it fits a pattern that has been identified as fraudulent in the past. But the accuracy of this method is iffy at best, because it often flags real names too; just ask Beth Pancake and Miranda Batman.4 (And it definitely isn’t great at catching fake ones, either: I tested it out one day by changing my name to Sara Nope Nope Nope—something I thought would be rejected easily, with its obvious nonsense. Nope, indeed; it stayed that way for months.)
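
  Why does pattern-based flagging misfire like this? A naive approach checks each part of a name against a list of words that have shown up in fake names before. The word list and logic below are invented for illustration—this is not Facebook’s code—but the failure mode is the same:

```python
# Illustrative word blocklist; any real system's list would differ.
SUSPICIOUS_WORDS = {"pancake", "batman", "nope"}

def looks_fake(full_name: str) -> bool:
    """Flag a name if any word in it matches the 'suspicious' list."""
    return any(part.lower() in SUSPICIOUS_WORDS for part in full_name.split())

looks_fake("Beth Pancake")     # flagged: a real person, falsely caught
looks_fake("Miranda Batman")   # flagged: another false positive
looks_fake("Shane Creepingbear")  # not flagged by this list—but reports still were
```

  Because ordinary English words appear in plenty of real surnames, a blocklist like this is guaranteed to produce false positives, while novel fake names sail right through.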

  A user whose profile has been flagged now gets a message that’s a lot friendlier than the blamey missive that Creepingbear received. Rather than responding to “Your Name Wasn’t Approved,” as of this writing, they’re asked to “Help Us Confirm Your Name.” At this point, a user now has seven days to complete the verification process, during which time they can still access their account. The revised process also asks whether any special circumstances apply to a user that would help administrators “better understand the name you use on Facebook.” The options include:

  • Affected by abuse, stalking or bullying

  • Lesbian, gay, bisexual, transgender or queer

  • Ethnic minority

  • Other5

  This information, plus any notes a user provides about their situation, then goes to an administrator, who decides whether to require the user to provide copies of identification or other documentation of their name.

  Sure, it’s a kinder process than before, and it probably reduces false flags. But there’s still the fact that Facebook has placed itself in the position of deciding what’s authentic and what isn’t—of determining whose identity deserves an exception and whose does not.

  Plus, names are just plain weird. They reflect an endless variety of cultures, traditions, experiences, and identities. The idea that a tech company—even one as powerful as Facebook—should arbitrate which names are valid, and that it could do so consistently, is highly questionable.

  In other words, Facebook is going to keep screwing this up—even while it invests more and more resources in building new features and hiring administrators to review accounts, and convinces more and more users to send in copies of their personal identification as part of its verification process. But Facebook isn’t some quasi-governmental organization. It doesn’t need photo ID to keep people safe online. If it really wanted to reduce abuse and harassment, it would invest in better tools to identify trolls and harassers, and develop features that empower individual users to stay safe on their own terms—something we’ll talk a lot more about in Chapter 8. But in the meantime, suffice it to say, an “authentic name” policy doesn’t fix abuse—but it does alienate and unfairly target the people who are most vulnerable.

  BY ANY “OTHER” NAME

  Checkboxes. Drop-down menus. Radio buttons. These kinds of design features help us easily select items as we move through a form. They’re quick: just a click or a tap and you can move right along.

  If, of course, you can find an option that fits you.

  When it comes to race and ethnicity, though, it’s not so simple. Because, well, people aren’t so simple.

  Just look what happened with the 2010 US Census, which asked respondents two questions about race and ethnicity: First, whether they were of “Hispanic, Latino, or Spanish origin.” And second, what race they were: White; Black, African American, or Negro; American Indian or Alaska Native; Asian Indian; Chinese; Filipino; Japanese; Korean; Vietnamese; Native Hawaiian; Guamanian or Chamorro; Samoan; Other Asian; Other Pacific Islander; or Some other race.

  Let’s say you’re Mexican American. You check yes to the first question. How would you answer the second one? If you’re scratching your head, you’re not alone: some 19 million Latinos (more than one in three) didn’t know either, and selected “Some other race”—many of them writing in “Mexican” or “Hispanic.”6

  This is because the US Census labeled Hispanic an ethnicity, not a race (a distinction that’s more than a little contested). So if you’re Latino, and you’re not also black, Asian, or American Indian, you were supposed to check White. But regardless of what the census says, US culture certainly doesn’t consider most people of Latin American descent white—so, as a result, millions of people were not just confused, but also not accurately represented. A similar problem exists for people of North African or Middle Eastern origin: the census said they should mark themselves as White. (I’m sure they feel really “white” whenever they’re being “randomly selected” for secondary screening at airport security.)

  Then you have the 7 percent of Americans who identify as multiple races.7 Up until 2000, the US Census didn’t really account for them at all. But after hearing from many multiracial people, the Census Bureau decided to allow respondents to check more than one box for this question.

  Online forms rarely take this approach, though. Instead, you’ll see lots of forms where you can select only one response for race. People who identify as more than one race end up having to select “multiracial.” As a result, people who are multiracial end up flattened: either they get lumped into a generic category, stripped of meaning, or they have to pick one racial identity to prioritize and effectively hide any others. They can’t identify the way they would in real life, and the result is just one more example of the ways people who are already marginalized feel even more invisible or unwelcome.
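
  The difference between the two models comes down to a single design decision: radio buttons allow exactly one answer, checkboxes allow any combination. The sketch below is illustrative only—the option list is abridged and the function names are invented—but it captures the contrast:

```python
# Abridged, illustrative option list; real census categories differ.
RACE_OPTIONS = ["White", "Black or African American",
                "American Indian or Alaska Native", "Asian",
                "Native Hawaiian or Other Pacific Islander",
                "Some other race"]

def single_select(choice: str) -> list:
    """Radio-button model: exactly one identity allowed."""
    if choice not in RACE_OPTIONS:
        raise ValueError("option not listed")
    return [choice]

def multi_select(choices: list) -> list:
    """Checkbox model (what the census adopted in 2000): any combination."""
    invalid = [c for c in choices if c not in RACE_OPTIONS]
    if invalid:
        raise ValueError("options not listed: %s" % invalid)
    return choices

# A multiracial respondent can answer accurately only in the second model:
multi_select(["White", "Asian"])  # both identities preserved
```

  The data model is one line of difference; the experience of being able to answer honestly is not.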

  If you’re white, like I am, it’s pretty easy never to think about this. It wasn’t until I started realizing how much power forms hold that I gave this any thought at all. Because forms had always worked just fine for me.

  Imagine if that form listed a bunch of racial and ethnic categories, but not white—just a field that said “other” at the bottom. Would white people freak out? Yes, yes they would. Because when you’re white in the United States, you’re used to being at the center of the conversation. Just look at the concept of the “working class,” something we heard a lot about in 2016: “Working-class Americans are concerned about immigration.” As Jamelle Bouie noted in Slate, this is often a “critical conflation”—because what is actually meant is the white working class.8 But because whiteness is considered the default, it doesn’t even need to be mentioned.

  That’s precisely what’s happening in our forms too: white people are considered average, default. The forms work just fine for them. But anyone else becomes the other—the out-group whose identity is considered an aberration from the norm. This is ridiculous. Most black families have roots going back more than two centuries in the United States (compared with white Americans, who are much more likely to descend from the great waves of immigration in the late nineteenth and early twentieth centuries). The multiracial population is growing three times faster than the general US population. And Latinos grew from 6.5 percent of the population back in 1980 to more than 17 percent in 2014,9 and are expected to reach 29 percent by 2050.10 The reality is clear: America is becoming less white. It’s time our interfaces caught up.

  NONBINARY THINKING

  Why does Gmail need to know your gender? How about Spotify? Apps and sites routinely ask for this information, for no other reason except to analyze trends or send you marketing messages (or sell your data so that others can do that). Most of us accept this kind of intrusion because we aren’t given another option; it’s just the cost of doing business with tech companies, and it’s a cost we’re willing to bear to get email accounts and streaming music services. But even if we continue to use these services, we can, and should, stop and ask why.

  Then there’s the problem of binaries. Most forms still use two options for gender: male or female. But that label doesn’t work for a lot of people—more than you might think, if you don’t happen to know anyone who is trans or nonbinary. According to a 2016 report from the Williams Institute at the UCLA School of Law, which analyzed both federal and state-level data from 2014, about 1.4 million American adults now identify as transgender—around 0.6 percent of the population.11
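
  One way past the binary is simple: accept free text, and make declining to answer a first-class option. The field names below are invented for illustration, not any real product’s schema:

```python
from typing import Optional

# Illustrative sketch contrasting a two-option gender field with an open one.
BINARY_OPTIONS = {"male", "female"}

def binary_gender(value: str) -> str:
    """Two-option model: anyone outside the binary simply cannot answer."""
    if value not in BINARY_OPTIONS:
        raise ValueError("no fitting option")
    return value

def open_gender(value: Optional[str]) -> Optional[str]:
    """Open model: free text, plus the option to decline entirely."""
    return value.strip() if value else None  # None means "prefer not to say"

open_gender("nonbinary")  # accepted as-is
open_gender(None)         # declining is a valid answer
```

  The open model also sidesteps the question of why the service is collecting gender at all: if the answer is optional free text, the cost of not asking—or not answering—is zero.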

  That number is also likely to increase. First off, the study found that those aged eighteen to twenty-four were more likely to say they were trans than were those in older age groups. In another study, the researchers also found that youth aged thirteen to seventeen said they were trans at rates about 17 percent higher than the adult population—reflecting a growing awareness and acceptance of trans folks in younger generations, and providing a strong indicator that overall rates are likely to go up in the coming years. Plus, these estimates are based on self-reported data—so if people didn’t feel safe admitting they were trans, they weren’t counted. Researchers noted that states known to be more accepting of trans people had higher self-reporting rates than those that were more repressive.12 We can’t know what the numbers will look like in another generation or two, but odds are good they’ll go up.

 
