Essentially, BackRub unleashed millions of tiny electronic messengers called bots to crawl over as many documents as they could reach, tag each one with a code that only BackRub could detect, and then tally up each document’s “back links.” The resulting summary was called PageRank, an opportune pun on Page’s name that was fully intended.
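For readers who want to see the idea in miniature, the sketch below is a toy version of that back-link tallying, not Google’s actual code; the pages, link structure, and numbers are invented purely for illustration.

```python
# A minimal, illustrative sketch of the back-link idea behind PageRank,
# not Google's implementation. A page's score depends on how many pages
# link to it, weighted by the scores of the pages doing the linking.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to.
    Every page that appears as a target must also appear as a key."""
    pages = list(links)
    rank = {page: 1.0 / len(pages) for page in pages}
    for _ in range(iterations):
        new_rank = {page: (1.0 - damping) / len(pages) for page in pages}
        for page, outlinks in links.items():
            if not outlinks:
                continue  # pages with no outgoing links pass on no credit in this toy version
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share  # each "back link" passes along credit
        rank = new_rank
    return rank

# A toy web of four pages: the more (and better-ranked) back links a page
# collects, the higher its score.
toy_web = {
    "stanford.edu": ["backrub.stanford.edu"],
    "backrub.stanford.edu": ["stanford.edu"],
    "homepage_a": ["backrub.stanford.edu", "stanford.edu"],
    "homepage_b": ["backrub.stanford.edu"],
}
print(pagerank(toy_web))
```

In this toy web, the page that collects the most back links from other well-ranked pages ends up with the highest score, which is all the “summary” really amounts to.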
When they first unleashed BackRub, it burned through all the bandwidth on their departmental computers, so Page and Brin commandeered the entire Stanford University system, which had nearly five times as much. Now their bots could roam with impunity all over cyberspace, tagging, tallying—and, in the process, potentially trespassing on the copyrights of anyone and everyone who had created the content they were linking to, something that Google would eventually do at industrial scale when it purchased YouTube years later. (It’s something they continue to try to defend with vociferous lobbying against the tougher copyright rules being pushed by both the European Union and some politicians in the United States.)
To Page and Brin, there was nothing nefarious about this. They simply sought to capture the knowledge tucked away in computer archives across the country to benefit humanity. If it benefited them, too, so much the better. It was the first instance of what later might be classified as lawful theft. If anyone complained, Page expressed mystification. Why would anyone be bothered by an activity of theirs that was so obviously benign? They didn’t see the need to ask permission; they’d just do it. “Larry and Sergey believe that if you try to get everyone on board it will prevent things from happening,” said Terry Winograd, a professor of computer science at Stanford and Page’s former thesis adviser, in an article in 2008. “If you just do it, others will come around to realize they were attached to old ways that were not as good….No one has proven them wrong—yet.”16
This became the Google way. As Jonathan Taplin wrote in his book, Move Fast and Break Things, when Google released the first version of Gmail, Page refused to allow engineers to include a delete button “because Google’s ability to profile you by preserving your correspondence was more important than your ability to eliminate embarrassing parts of your past.” Likewise, customers were never asked if Google Street View cameras could take pictures of their front yards and match them to addresses in order to sell more ads. They adhered strictly to the maxim that says it’s better to beg forgiveness than to ask permission—though in truth they weren’t really doing either.
It’s an attitude of entitlement that still exists today, even after all the events of the past few years. In 2018, while attending a major economics conference, I was stuck in a cab with a Google data scientist, who expressed envy at the amount of surveillance that Chinese companies are allowed to conduct on citizens, and the vast amount of data it produces. She seemed genuinely outraged that the university where she was conducting AI research had allowed her to place only a handful of data-recording sensors around campus to collect information for her work. “And it took me five years to get them!” she told me, indignantly.
Such incredulity is widespread among Valley denizens, who tend to believe that their priorities should override the privacy, civil liberties, and security of others. They simply can’t imagine that anyone would question their motives, given that they know best. Big Tech should be free to disrupt government, politics, civil society, and law, if those things prove inconvenient. This is the logic held by the band of tech titans who would like to see the Valley secede not just from America, but from California itself, since, according to them, the other regions aren’t pulling their economic weight.
The kings (and handful of queens) of Silicon Valley see themselves as prophets of sorts, given that tech is, after all, the future. The problem is that creators of the future often feel they have little to learn from the past. As lauded venture capitalist Bill Janeway once put it to me, “Zuck and many of the rest [of the tech titans] have an amazing naïveté about context. They really believe that because they are inventing the new economy, they can’t really learn anything from the old one. The result is that you get these cultural and political frictions that are offsetting many of the benefits of the technology itself.”
Frank Pasquale, a University of Maryland law professor and noted Big Tech critic whose book The Black Box Society is a must-read for those who want to understand the effects of technology on politics and the economy, provided a telling example of this attitude. “I once had a conversation with a Silicon Valley consultant about search neutrality [the idea that search engine titans should not be able to favor their own content], and he said, ‘We can’t code for that.’ I said this was a legal matter, not a technical one. But he just repeated, with a touch of condescension: ‘Yes, but we can’t code for it, so it can’t be done.’ ” The message was that the debate would be held on the technologist’s terms, or not at all.17
A lot of people—including many of our elected leaders in Washington—have bought into that argument. Perhaps that’s why, from the beginning, the rules have favored the industry over the consumers they supposedly serve. The most notable example of “special” rules that benefit Big Tech is the get-out-of-jail-free card provided by Section 230 of the Communications Decency Act of 1996 (CDA), which exempts tech firms from liability for nearly all kinds of illegal content or actions perpetrated by their users (there are a few small exceptions for copyright violations and certain federal crimes).
In the early days of the commercial Internet, back in the mid-1990s, one of the refrains we heard over and over from Silicon Valley was the notion that the Internet was like the town square—a passive and neutral conduit for thoughts and activities—and that because the online platforms were, by this definition, public spaces, the companies who ran them were not responsible for what happened there. The idea was that the scrappy entrepreneurs starting message boards, chat rooms, or nascent search engines out of their basements or garages simply did not have the resources or manpower to monitor the actions of users, and that requiring them to do so would stymie the development of the Internet.
Times have, of course, changed. Today, Facebook, Google, and other companies absolutely can—and do—monitor nearly everything we do online. And yet, they want to play both sides of the fence when it comes to taking responsibility for the hate speech, Russian-funded political ads, and fake news that proliferate on their platforms. Apparently, they have no difficulty tracking every purchase we make, every ad we click on, and every news article we read, but to weed out articles from sketchy conspiracy websites, block anti-Semitic comments, or spot nefarious Russian bots still proves too onerous a task. That’s because doing so requires real human beings earning real wages using real judgment—and that’s something that platform companies that have grown on the back of automation have tried to avoid.
There have been periods when the tech giants have become more vigorous about policing for PR reasons—consider the variety of actions taken by Facebook, Google, GoDaddy, and PayPal to block or ban pornography, or to limit right-wing hate groups’ use of their platforms in the wake of racially charged violence in Charlottesville, Virginia. You can argue that this is laudable or not, depending on your relative concern about hate speech versus free speech. But there’s a key business issue that has been missed in all the hoopla: These companies are incentivized to err on the side of allowing content, if it will get eyeballs. They also have the power to censor. Matthew Prince, the chief executive of Cloudflare, a Web-infrastructure company that dropped the right-wing Daily Stormer website as a client back in 2017 under massive public pressure and against the firm’s own stated policies, summarizes the issue well: “I woke up in a bad mood and decided someone shouldn’t be allowed on the Internet,” said Prince. “No one should have that power.”18
But Big Tech does. It has exactly that power. It’s a schizophrenia that reflects ambivalence, both on the part of the companies and society itself, about what they are. Media players? News organizations? Platform technology firms? Retailers? Logisticians?
Whatever they are, the current rules by which they play—which is to say, not very many rules at all—aren’t working. The rise of Google, Facebook, Amazon, and the other platform giants has seemingly placed their leaders above the expectations, the ethical standards, and even the laws that apply to ordinary citizens. To really understand the culture, we have to dig into the business models that enabled the kings of Silicon Valley to ascend so far above others.
CHAPTER 3
Advertising and Its Discontents
Back in November 2017, a year after the election of Donald Trump, Americans got a first look at the ads that Russian groups had bought on Facebook in order to sow the political discontent that may have tipped the election in Trump’s favor.1 They made for sickening viewing. Russian-linked actors had created animated images of Bernie Sanders as a superman figure promoting gay rights, and pictures of Jesus wrestling with Satan along with a caption that had the Antichrist declaring that “If I win Clinton wins!” There were calls for the South to rise again emblazoned on a Confederate flag, and yellow NO INVADERS ALLOWED signs protesting a supposed onslaught of immigrants at the border.
The images were released by lawmakers who then had a chance to question not the CEOs and decision makers who’d signed off on a business model that allowed such propaganda to be monetized, but their lawyers. As per usual, the top brass at the platforms were eager to deflect and deny any wrongdoing. The companies—not just Facebook, but also Twitter and Google—all claimed that they sent their chief counsels rather than the business decision makers because they were best positioned to respond to queries. But as their congressional testimony makes clear, the attorneys were there to make sure that the CEOs didn’t have to take the fall.
“I must say, I don’t think you get it,” said Senator Dianne Feinstein, a California Democrat, who left the hearing feeling profoundly disappointed. “I asked specific questions, and I got vague answers.” Jackie Speier, a House Democrat, also from California, summed up the situation well. “America, we have a problem. We basically have the brightest minds of our tech community…and Russia was able to weaponize your platforms to divide us, to dupe us and to discredit democracy.”
The companies have since attended many such meetings, and in some cases sent their top brass to testify before Congress. But the message hasn’t really changed. The line from the C-suites at Facebook and Google has been consistent: We are very sorry, and we couldn’t have imagined that any of this would ever happen. But if you interview people who’ve worked on targeted advertising at such companies, you learn that this is patently untrue. The leadership at YouTube, Google, Facebook, and Twitter has known for years about the risks of platforms being misused by nefarious actors to send users down rabbit holes of propaganda. They just decided that fixing this problem wasn’t worth the risks to their own business model.
The Data-Industrial Complex
A few years back, Guillaume Chaslot, a former engineer for YouTube who is now at the Center for Humane Technology, a group of Silicon Valley refugees working to create less harmful business models for Big Tech, was part of an internal project at YouTube, the content platform owned by Google,2 to develop algorithms that would increase the diversity and quality of content seen by users. The initiative had begun in response to the “filter bubbles” proliferating online, in which people would end up watching the same mindless or even toxic content again and again: algorithms that tracked them would, once they had clicked on cat videos or white supremacist propaganda, suggest the same type of content over and over, assuming (often correctly) that this was what would keep them coming back and watching more—thus allowing YouTube to make more money from the advertising sold against that content. But because the subtler algorithms resulted in lower “watch time” than the original ones, the project was dropped.
Chaslot was gutted; he believed that these new algorithms would not only help mitigate the fake news problem, they would also increase business over the long haul. More diverse content, he reasoned, could open up lines of revenue that would pay off over time, as opposed to sensationalized, eye-popping content that pays off in shorter—albeit more immediately profitable—bursts. But the powers that be disagreed. Their mentality, according to Chaslot, was that “watch time was an easy metric, and that if users want racist content, ‘well, what can you do?’ ” This was a culture in which the metrics were always right. The company was simply serving users, even if that meant knowingly monetizing content that was undermining the fabric of democracy.3
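To make the trade-off Chaslot describes concrete, here is a hypothetical sketch, not YouTube’s actual system: one ranking that simply maximizes predicted watch time, and another that discounts a topic each time it has already been shown. The videos, topics, and numbers are invented for the example.

```python
# A hypothetical illustration of the trade-off Chaslot describes: ranking
# candidate videos purely by predicted watch time versus applying a simple
# diversity penalty so the same topic isn't recommended over and over.
# All titles, topics, and figures here are made up.

candidates = [
    {"title": "Cat plays piano #14", "topic": "cats", "predicted_watch_minutes": 9.0},
    {"title": "Cat plays piano #15", "topic": "cats", "predicted_watch_minutes": 8.5},
    {"title": "Local news explainer", "topic": "news", "predicted_watch_minutes": 4.0},
    {"title": "History documentary", "topic": "history", "predicted_watch_minutes": 5.5},
]

def rank_by_watch_time(videos):
    # The "easy metric": whatever is predicted to keep the user watching longest wins.
    return sorted(videos, key=lambda v: v["predicted_watch_minutes"], reverse=True)

def rank_with_diversity(videos, repeat_penalty=0.5):
    # Greedily pick videos, discounting each topic the user has already been shown.
    seen_topics = {}
    ranked, remaining = [], list(videos)
    while remaining:
        best = max(
            remaining,
            key=lambda v: v["predicted_watch_minutes"]
            * (repeat_penalty ** seen_topics.get(v["topic"], 0)),
        )
        ranked.append(best)
        remaining.remove(best)
        seen_topics[best["topic"]] = seen_topics.get(best["topic"], 0) + 1
    return ranked

print([v["title"] for v in rank_by_watch_time(candidates)])
print([v["title"] for v in rank_with_diversity(candidates)])
```

Run on this toy list, the watch-time ranking serves up the second cat video immediately, while the diversity-penalized ranking interleaves other topics. Lower “watch time” is exactly what the subtler approach produces, which is why, in Chaslot’s telling, it lost.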
A spokesperson at YouTube, which does not dispute the basic facts of Chaslot’s account, told me in 2018 that the company’s recommendation system has “changed substantially over time” and now includes other metrics beyond watch time, including consumer surveys and the number of shares and likes. And, as this book goes to press in the summer of 2019, in the wake of the FTC investigations and numerous reports of pedophiles using the platform to find and share videos of children,4 YouTube is considering whether to shift children’s content into an entirely separate app to avoid such problems.5 But as anyone who uses the site knows, you are, at this moment, still served up more of whatever you have spent the most time with—whether that’s videos of cats playing the piano or conspiracy theories. It’s true that both Google and Facebook now throw more resources at unmasking suspect accounts and removing content. But, ultimately, they do not want to be censors, and are no good at it anyway, as shown by the frequent muddles over what they do and do not decide to take down.
As for the tweaking of algorithms, Google chief counsel Kent Walker (the only high-level Googler to agree to an interview for this book) puts the company’s philosophy quite simply. “We built Google for users….When you’re a search company, every time you make a change to any algorithm, half the people go up, and half go down [meaning the producers of content being ranked by the search engine]. And the half of people that go up think, ‘Well, great to see someone’s recognized how great I am,’ and the people who go down say, ‘Wait a second, what’s going on with this.’ ”
Walker, who told me in an interview in January 2019 that the company had made “in excess of 2,500 changes to the algorithm last year” to stop various kinds of nefarious activity, nonetheless admits that “there’s always a risk of manipulation,” which is why the company sticks with its simple mantra of giving users what they want, a mantra that focuses on the consumer rather than on society at large.
Fair enough. But the point also drives home the power, in lieu of stronger regulation, that digital platforms like Google have to amplify humanity’s worst tendencies. “Citizenship in our time,” Columbia academic and Big Tech critic Tim Wu has said, “is about how you spend your attention.”6 It’s a truth that has, ironically, been put into sharpest focus by Silicon Valley insiders themselves. In a speech at a European privacy commissioners conference in late October 2018, Apple CEO Tim Cook decried the “data industrial complex” made up of companies (including Google and Facebook) that make the vast majority of their money by keeping people online for as long as possible in order to garner as much of their personal data as possible. “Our own information—from the everyday to the deeply personal—is being weaponized against us with military efficiency,” said Cook, whose own company still makes most of its money from hardware.
Apple has its own issues—from tax offshoring to legal battles over intellectual property infringement, which we will explore later. And Cook is being somewhat hypocritical when he criticizes his competitors for “keeping people online for as long as possible,” given that Apple tries, with some exceptions, to do that, too, particularly via its promotion of the “freemium” gaming that hooked my son.
But in this particular area, it’s true that other companies—Facebook, Twitter, Instagram, Snapchat, and Google—have the deeper problems. That’s because their core business fundamentally depends on data mining by manipulating behavior, using an odd mix of Las Vegas–style techniques and opaque algorithms to keep users hooked.7 These companies truly are attention merchants. We as consumers perceive their services to be free, but in reality, we are paying—unwittingly—not only with our attention but our data, which they go to great lengths to capture and then monetize.8
What is even more alarming, however, is how vulnerable their complex and opaque digital advertising systems are to exploitation, no matter how many people they put on the problem. The very same week that Google’s $90 million Andy Rubin sex scandal hit the papers, there was news of another and perhaps even more telling debacle: 125 Android apps and websites were subject to a multimillion-dollar scam. Essentially, fraudsters acquired legitimate apps—many targeted at kids, including a number of popular games, a selfie app, a flashlight app, and more—from their developers (paid for in Bitcoin) and sold them to shell companies in Cyprus, Malta, the British Virgin Islands, Croatia, Bulgaria, and elsewhere. Unbeknownst to the users, these apps had been loaded up with bots programmed to capture their every click, scroll, and swipe—then mimic that behavior to artificially boost traffic to the apps’ ads and collect bigger payouts from advertisers, even as they increased the risk of compromising the data of the real human beings who were being duped.