Overwhelmingly, the teams architecting systems intended to make both choices and decisions are led by men. The group is only slightly more diverse than the researchers who met at Dartmouth, and that is because of one big development: China. In recent years, China has become an important hub for AI, thanks to a massive, government-funded effort at Chinese universities and at Baidu, Alibaba, and Tencent.
In fact, Baidu figured out something that even Zero couldn’t yet do: how to transfer skills from one domain to another. It’s an easy task for humans, but a tricky one for AI. Baidu aimed to tackle that obstacle by teaching a deep neural net to navigate a 2D virtual world using only natural language, just like parents would talk to their children. Baidu’s AI agent was given commands like “Please navigate to the apple” or “Can you move to the grid between the apple and the banana?”—and it was initially rewarded for correct actions. It may seem like a simple enough task, but consider what’s involved here: by the end of the experiment, Baidu’s AI could not only understand language that at the start had been meaningless to it; the system also learned what a two-dimensional grid was, that it could move around it, how to move around it, that bananas and apples exist, and how to tell them apart.
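To make the setup concrete, here is a minimal, hypothetical sketch of the kind of experiment described above: an agent in a tiny 2D grid receives a natural-language command naming an object and earns a reward only when it reaches that object. The grid size, object names, and random stand-in policy are illustrative assumptions, not details of Baidu’s actual system.

```python
# Illustrative sketch only: a toy grid world where an agent is rewarded for
# following a natural-language command. Not Baidu's implementation; the grid,
# objects, and random "policy" are assumptions made for demonstration.
import random

GRID_SIZE = 5
OBJECTS = {"apple": (4, 4), "banana": (0, 3)}  # object name -> grid cell
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def parse_command(command):
    """Toy 'language grounding': find which known object the command names."""
    for name in OBJECTS:
        if name in command.lower():
            return name
    raise ValueError("no known object mentioned in command")

def run_episode(command, max_steps=50):
    """Run one episode; return 1.0 if the agent reaches the named object."""
    target = OBJECTS[parse_command(command)]
    x, y = 0, 0                                     # agent starts in a corner
    for _ in range(max_steps):
        dx, dy = MOVES[random.choice(list(MOVES))]  # stand-in for a learned policy
        x = min(max(x + dx, 0), GRID_SIZE - 1)      # stay inside the grid
        y = min(max(y + dy, 0), GRID_SIZE - 1)
        if (x, y) == target:
            return 1.0                              # reward for correct navigation
    return 0.0                                      # no reward if time runs out

print(run_episode("Please navigate to the apple"))
```

In a real system, the random policy would be replaced by a neural network trained on that reward signal, which is what lets an agent gradually attach meaning to words like “apple” and “navigate.”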
At the beginning of this chapter, I asked four questions: Can machines think? What would it mean for a machine to “think”? What does it mean for you, dear reader, to think? How would you know that you were actually thinking original thoughts? Now that you know the long history of these questions, the small group of people who built the foundational layer for AI, and the key practices still in play, I’d like to offer you some answers.
Yes, machines can think. Passing a conversational test, like the Turing test, or the more recent Winograd Schema Challenge—which was proposed by Hector Levesque in 2011 and focuses on commonsense reasoning, challenging an AI to answer a simple question that hinges on an ambiguous pronoun (for example, “The trophy doesn’t fit in the suitcase because it is too big”: what does “it” refer to?)—doesn’t necessarily measure an AI system’s ability in other areas.46 It just proves that a machine can think using a linguistic framework, like we humans do. Everyone agrees that Einstein was a genius, even if the acceptable methods of measuring his intelligence at the time—like passing a test in school—said otherwise. Einstein was thinking in ways that were incomprehensible to his teachers—so of course they assumed he wasn’t intelligent. In reality, at that time there wasn’t a meaningful way to measure the strength of Einstein’s thinking. So it is for AI.
Thinking machines can make decisions and choices that affect real-world outcomes, and to do this they need a purpose and a goal. Eventually they develop a sense of judgment. These are the qualities that, according to both philosophers and theologians, make up the soul. Each soul is a manifestation of God’s vision and intent; it was made and bestowed by a singular creator. Thinking machines have creators, too—they are the new gods of AI, and they are mostly male, predominantly live in America, Western Europe, and China, and are tied, in some way, to the Big Nine. The soul of AI is a manifestation of their vision and intent for the future.
And finally, yes, thinking machines are capable of original thought. After learning through experience, they might determine that a different solution is possible. Or that a new classification is best. AIs don’t have to invent a new form of art to show us creativity.
Which means that there is, in fact, a mind in AI machines. It is young and still maturing, and it is likely to evolve in ways we do not understand. In the next chapter, we’ll talk about what constitutes that mind, the values of the Big Nine, and the unintended social, political, and economic consequences of our great AI awakening.
CHAPTER TWO
THE INSULAR WORLD OF AI’S TRIBES
The centuries-long struggle to build a thinking machine has only recently seen big advancements. But while these machines might appear to “think,” we should be clear that they most certainly do not think like all of us.
The future of AI is being built by a relatively few like-minded people within small, insulated groups. Again, I believe that these people are well intentioned. But as with all insulated groups that work closely together, their unconscious biases and myopia tend to become new systems of belief and accepted behaviors over time. What might have in the past felt unusual—wrong, even—becomes normalized as everyday thinking. And that thinking is what’s being programmed into our machines.
Those working within AI belong to a tribe of sorts. They are people living and working in North America and in China. They attend the same universities. They adhere to a set of social rules. The tribes are overwhelmingly homogeneous. They are affluent and highly educated. Their members are mostly male. Their leaders—executive officers, board members, senior managers—are, with few exceptions, all men. Homogeneity is also an issue in China, where tribe members are predominantly Chinese.
The problem with tribes is what makes them so powerful. In insular groups, cognitive biases become magnified and further entrenched, and they slip past awareness. Cognitive biases are shortcuts that stand in for rational thought, which is slower and takes more energy. The more connected and established a tribe becomes, the more normal its groupthink and behavior seems. As you’ll see next, that’s an insight worth remembering.
What are AI’s tribes doing? They are building artificial narrow intelligence (ANI) systems, capable of performing a singular task at the same level or better than we humans can. Commercial ANI applications—and by extension, the tribe—are already making decisions for us in our email inboxes, when we search for things on the internet, when we take photos with our phones, as we drive our cars, and when we apply for credit cards or loans. They are also building what comes next: artificial general intelligence (AGI) systems, which will perform broader cognitive tasks because they are machines that are designed to think like we do. But who, exactly, is the “we” these AI systems are being modeled on? Whose values, ideals, and worldviews are being taught?
The short answer is not yours—and also not mine. Artificial intelligence has the mind of its tribe, prioritizing its creators’ values, ideals, and worldviews. But it is also starting to develop a mind of its own.
The Tribe Leaders
AI’s tribe has a familiar, catchy rallying cry: fail fast and fail often. In fact, a version of it—“move fast and break things”—was Facebook’s official company motto until recently. The idea of making mistakes and accepting failures stands in stark contrast to the culture of America’s enormous corporations, which avoid risk and move at a snail’s pace, and it’s a laudable aim. Complicated technology like AI demands experimentation and the opportunity to fail over and over in pursuit of getting things right. But there’s a catch. The mantra is part of a troubling ideology that’s pervasive among the Big Nine: build it first, and ask for forgiveness later.
Lately, we’ve been hearing a lot of requests for forgiveness. Facebook apologized for the outcome of its relationship with Cambridge Analytica. As that scandal was unfolding, Facebook announced in September 2018 that an attack had exposed the personal information of more than 50 million users, making it one of the largest security breaches in digital history. But it turns out that executives made a decision not to notify users right away.1 Just one month later, Facebook announced Portal, a video conferencing screen to rival Amazon’s Echo Show, and had to walk back the privacy promises it had made earlier. Originally, Facebook said that it wouldn’t use Portal to collect personal data in order to target users with ads. But after journalists pushed back, the company found itself making an awkward clarification: while Portal wouldn’t use your data to display ads, the data collected as you used the device—who you called, which Spotify songs you listened to—could be used to target you later on with Facebook ads on other services and networks.2
In April 2016, Jeff Dean, the head of the Google Brain project, wrote that the company had excluded women and people of color during an “Ask Me Anything” session on Reddit. It wasn’t intentional but rather an oversight; I absolutely believe it just didn’t occur to the organizers to diversify the session.
Dean said that he valued diversity and that Google would have to do better:3
One of the things I really like about our Brain Residency program is that the residents bring a wide range of backgrounds, areas of expertise (e.g. we have physicists, mathematicians, biologists, neuroscientists, electrical engineers, as well as computer scientists), and other kinds of diversity to our research efforts. In my experience, whenever you bring people together with different kinds of expertise, different perspectives, etc., you end up achieving things that none of you could do individually, because no one person has the entire skills and perspective necessary.4
In June 2018, Google released a diversity report that for the first time included employee data broken down by category. In the report, Google said that globally its workforce was 69.1% male. In the US, only 2.5% of employees were Black, while 3.6% were Hispanic and Latinx. For all of Google’s bold statements about the need to diversify tech, those numbers—already low—had barely budged from several years earlier, when in 2014 its workforce was 2% Black and 3% Hispanic and Latinx.5
To its credit, Google in recent years launched an unconscious bias initiative that includes workshops and training to help employees learn more about social stereotypes and deeply held attitudes on gender, race, appearance, age, education, politics, and wealth that may have formed outside of their own conscious awareness. Some Googlers feel that the training has been more perfunctory than productive, with a Black female employee explaining that the training focused on “interpersonal relationships and hurt feelings rather than addressing discrimination and inequality, which signals to workers that diversity is ‘just another box to check.’”6
Yet in the same years as this training was taking place, Google was rewarding bad behavior among its leadership ranks. Andy Rubin, who created Google’s flagship Android mobile operating system, had been asked to resign after a female staff member made a credible claim that he’d coerced her into oral sex. Google paid Rubin $90 million to walk away—structured in monthly payouts of $2.5 million for the first two years and $1.25 million every month for the following two years. The director of Google’s R&D division X, Richard DeVaul, sexually harassed a woman during her job interview, telling her that he and his wife had an open marriage and later insisting on giving that candidate a topless backrub at a tech festival. Unsurprisingly, she didn’t get the job. He was asked to apologize but not to resign. A vice president who helped run Google’s Search ran into trouble when a female employee accused him of groping her—an accusation that was deemed credible, so he was let go with a multimillion-dollar severance package. Between 2016 and 2018, Google quietly let go of 13 managers for sexual harassment.7
This feedback underscores the lackluster impact many unconscious bias training programs have within tech and the venture capital firms that fund it. The reason: while people may be more aware of their biases after training, they aren’t necessarily motivated or incentivized to change their behavior.
When we talk about a lack of diversity within the tech community, the conversation typically oscillates between gender and race. However, there are other dimensions of humanity that get short shrift, like political ideology and religion. A 2017 analysis by Stanford’s Graduate School of Business, which surveyed more than 600 tech leaders and founders, showed that the tribe overwhelmingly self-identified as progressive Democrats. During the 2016 election cycle, they overwhelmingly supported Hillary Clinton. The tribe supports higher taxes on wealthy individuals, they are pro-choice, they oppose the death penalty, they want gun control, and they believe gay marriage should be legal.8
That the senior leadership of Google, Apple, Amazon, Facebook, Microsoft, and IBM doesn’t accurately represent all Americans could be said of the companies in any industry. The difference is that these particular companies are developing autonomous decision-making systems intended to represent all of our interests. Criticism is coming not just from women and people of color but from an unlikely group of people: conservatives and Republican Party stalwarts. In May 2018, the Republican National Committee sent a letter to Mark Zuckerberg accusing Facebook of bias against conservative Americans, which read in part: “Concerns have been raised in recent years about suppression of conservative speech on Facebook… including censorship of conservative news stories.… We are alarmed by numerous allegations that Facebook has blocked content from conservative journalists and groups.”9 The letter, signed by Ronna McDaniel, chairwoman of the RNC, and Brad Parscale, campaign manager for President Trump’s 2020 reelection campaign, went on to demand transparency in how Facebook’s algorithms determine which users see political ads in their feeds and a review into bias against conservative content and leaders.
The thing is, McDaniel and Parscale aren’t wrong. During the heated 2016 election cycle, Facebook staff did intentionally manipulate the platform’s trending section to exclude conservative news—even though stories that were decidedly anti-Clinton had already been trending on their own. Several of Facebook’s “news curators,” as they were called, said that they were directed to “inject” certain stories into the news feed section even if they weren’t trending at all. They also prevented favorable stories about GOP candidates like Rand Paul from showing up. Facebook’s news curation team was made up of a small group of journalists who’d mainly attended private East Coast or Ivy League universities, and, to be fair, this plays directly into the narrative conservatives have offered up for decades.
In August 2018, more than 100 Facebook employees used an internal message board to complain about a “political monoculture that’s intolerant of different views.” Brian Amerige, a senior Facebook engineer, wrote: “We claim to welcome all perspectives, but are quick to attack—often in mobs—anyone who presents a view that appears to be in opposition to left-leaning ideology.”10
Talking about diversity—asking for forgiveness and promising to do better—isn’t the same thing as addressing diversity within the databases, algorithms, and frameworks that make up the AI ecosystem. When talking doesn’t lead to action, the result is an ecosystem of systems and products that reflect a certain anti-humanistic bias. Here are just a few of our real-world outcomes: In 2016, an AI-powered security robot intentionally crashed into a 16-month-old child in a Silicon Valley mall.11 The AI system powering the Elite: Dangerous video game developed a suite of superweapons that the creators never imagined, wreaking havoc within the game and destroying the progress made by all the real human players.12 There are myriad problems when it comes to AI safety, some of which are big and obvious: self-driving cars have already run red lights and, in a few instances, killed pedestrians. Predictive policing applications continually mislabel suspects’ faces, landing innocent people in jail. There are an unknowable number of problems that escape our notice, too, because they haven’t affected us personally yet.
A truly diverse team would have only one primary characteristic in common: talent. There would not be a concentration of any single gender, race, or ethnicity. Different political and religious views would be represented. The homogeneity within AI’s tribes is a problem within the Big Nine, but it doesn’t start there. The problem begins in universities, where AI’s tribes form.
Tribes get established within concentrated social environments where everyone is sharing a common purpose or goal, using the same language, and working at the same relative intensity. It is where a group of people develops a shared sense of values and purpose. They form in places like military units, medical school rotations, the kitchens of Michelin-starred restaurants, and sororities. They go through trial and error, success and failure, heartbreak and happiness together.
To borrow an example from a field far away from artificial intelligence, in the 1970s and ’80s, Sam Kinison, Andrew Dice Clay, Jim Carrey, Marc Maron, Robin Williams, and Richard Pryor all spent time living in a house at 8420 Cresthill Road, which was just down the street from what became the legendary Comedy Store in Los Angeles. They were just young guys living in a house and trying to get stage time in an era when Bob Hope was on TV doing one-liners like “I never give women a second thought. My first thought covers everything.”13 This tribe totally rejected that brand of humor, which the previous generation honed meticulously. Their values were radically different: breaking taboos, confronting social injustice, and telling hyper-realistic stories that tended to reflect pretty badly on the very people sitting in the audience. They workshopped their bits and observations with each other. They commiserated after bombing on stage. They experimented with and learned from each other. This tribe of groundbreaking, brilliant comics laid the foundation for the future of American entertainment.14 Collectively, this group of men still wields influence today.
In a way, AI went through a similar radical transformation because of a modern-day tribe that shared the same values, ideas, and goals. Those three deep-learning pioneers discussed earlier—Geoff Hinton, Yann LeCun, and Yoshua Bengio—were the Sam Kinisons and Richard Pryors of the AI world in the early days of deep neural nets. LeCun studied under Hinton at the University of Toronto, where the Canadian Institute for Advanced Research (CIFAR) incubated a small group of researchers that included Yoshua Bengio. They spent immeasurable amounts of time together, batting around ideas, testing theories, and building the next generation of AI. “There was this very small community of people who had this in the back of their minds, that eventually neural nets would come to the fore,” LeCun said. “We needed a safe space to have little workshops and meetings to really develop our ideas before publishing them.”15