by Alec Ross
“Ideally, the more transparency there can be into who’s investing in educating policy makers the better,” he told me. Indeed, the number of registered lobbyists in Washington has declined over the last decade, while the money spent on lobbying remained relatively constant, an indication that even professional influence peddlers are retreating into the shadows.
Social media is also enabling more subtle, data-driven influence campaigns than would ever have been possible in the past. “The paths by which organizations engage government as a stakeholder are now surround sound—they’re now 5-, 6-, 7-cushion bank shots,” Mehlman said. “To influence policy makers, you want to influence what they read. You can now [through] big data have a far better understanding of who they find persuasive—who do they follow, who are they most likely to tweet or retweet, who have they posted on their Facebook pages, who do they quote in floor speeches—all of which is machine-searchable. That leads to the most sophisticated players thinking six steps removed.”
By way of illustration, Mehlman told me a story about a pharmaceutical company hoping to influence the thinking of a specific United States senator. This senator and her staff were “especially” wary of corporate lobbyists, according to Mehlman, so the company turned to advanced artificial intelligence tools developed by ex–NSA employees. Based on the senator’s use of Twitter, the AI program could determine whom the senator was reading and therefore who was influencing her. The program determined that she was a close follower of the journalist Ezra Klein. Mehlman noted that “Ezra Klein is a pretty sophisticated fellow and you can’t just bring a PR slick or executive to schmooze Ezra Klein and get Ezra Klein to do what you want. But then they went through a ton of Ezra Klein’s writings and found a professor at Boston University that Ezra Klein clearly followed.” Mehlman said the AI program identified that the professor was “not a famous guy” but Klein referred to his writing with unexpected frequency, and he was therefore persuasive. What did the pharmaceutical company do with this information? They commissioned an analysis from this professor, which they anticipated would be noted by Klein, who in turn would write about it and therefore influence the senator.
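Stripped to its essentials, the technique in Mehlman’s story is a ranking exercise: scan a policymaker’s machine-searchable output for the voices they engage with most often, then work backward up the chain of influence. The sketch below is a deliberately minimal illustration of that idea, not the ex-NSA tooling Mehlman described; the function, the sample documents, and the tallies are all hypothetical, and a real system would layer natural-language processing and network analysis on top of far richer data.

```python
# Minimal sketch: rank the voices a policymaker engages with most often
# across their public, machine-searchable output (tweets, retweets,
# floor speeches). Hypothetical data and names throughout.
from collections import Counter

def rank_influencers(documents, known_voices):
    """Count how often each known journalist, academic, or outlet is
    mentioned across a policymaker's public record, ranked by frequency."""
    counts = Counter({voice: 0 for voice in known_voices})
    for doc in documents:
        text = doc.lower()
        for voice in known_voices:
            counts[voice] += text.count(voice.lower())
    return counts.most_common()

# Hypothetical fragments of a senator's public record.
docs = [
    "As Ezra Klein wrote this week, the drug-pricing data tell a clear story.",
    'Retweeting Ezra Klein: "New analysis on drug pricing worth reading."',
    "I want to quote Ezra Klein's recent column on this point.",
]
print(rank_influencers(docs, ["Ezra Klein", "Paul Krugman"]))
# [('Ezra Klein', 3), ('Paul Krugman', 0)]
```

Even this crude frequency count hints at why the approach scales: every speech, tweet, and retweet is one more data point about whom a policymaker trusts, and the same pass can then be run on the influencer’s own writing to find the next cushion in the bank shot.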
Will that work? It seems unlikely, but not impossible. That’s the frightening level of forethought that now goes into lobbying. But, of course, it does not always take that level of sophistication.
In the United States, the last decade has opened the door for companies to add a blunter tool to their arsenal for influencing Washington: campaign finance. In 2010, the Supreme Court ruled in Citizens United v. FEC that limiting “independent political spending” by corporations, wealthy donors, and other groups violates their right to free speech. Since then, money has poured into American elections. Outside groups spent a total of $680 million during the five national election cycles preceding the Citizens United decision. In the five cycles after the ruling, they spent $4.4 billion. The total spending on congressional and presidential elections has grown from $4 billion in 2000 to more than $14 billion in 2020.
Still, some companies hesitate to openly support one candidate over another—it could be bad for business if they back the wrong horse. But thanks to Citizens United, there are avenues for these cautious corporate donors to anonymously funnel money to their preferred candidates and causes. Under the ruling, political nonprofits can donate to super PACs and other outside groups without disclosing the origin of their money. Using these “dark money” groups, wealthy individuals and companies can support candidates without the public ever knowing. Dark money groups spent nearly $1 billion on elections in the decade after Citizens United. None of that would have been possible before the Citizens United decision.
Such a laissez-faire approach to campaign finance lets companies and wealthy individuals play outsized roles in shaping the government and its legislative agenda. Richard Trumka, president of the AFL-CIO, put the situation vividly when I spoke with him.
If you go back to the time of the signing of the Constitution, there was an argument between Thomas Jefferson and Hamilton. Hamilton wanted to create corporations, and Jefferson said these are very dangerous creatures. To allow them to accumulate wealth and power could negate all the benefits of the revolution we fought. And Hamilton said, “But Tom we will give them a few prescribed rights, and that’s how we’ll control them.” Corporations currently have more rights than the three humans sitting in this room. They have more rights, not equal rights, more rights. And then the Supreme Court decides that money equals free speech. Now, I don’t think Jefferson and Hamilton and Washington fought for revolution so that Hamilton could say to Jefferson, “You know, Tom, I have more money than you, therefore, I will have more free speech than you.” But that’s what we have.
Donations do not guarantee a victory at the ballot box, but evidence shows that they certainly help. In 2018, 83 percent of Senate races and 89 percent of House races were won by the candidate who spent the most money. The result is that candidates are beholden to their donors, first to get elected and then once they take office. It is difficult to link a politician’s stance on a specific issue to a specific donor, but the impact of campaign finance becomes more evident at the macro level. Candidates need donations to win elections, and to get donations, they must craft their platforms to appeal to donors, especially the ones with the deepest pockets. In 2018, just 225,000 people—less than one-tenth of 1 percent of American adults—contributed a combined $3.1 billion to political campaigns, PACs, parties, and outside groups. Collectively, that small fraction of the population provided 55 percent of the money spent in the 2018 election cycle. Candidates who do not appeal to the concerns of this elite donor class—and shape their platforms accordingly—are unlikely to win.
Once politicians are in office, evidence shows that they continue to prioritize issues that are relevant to donors above those of everyday people. In 2014, a pair of political scientists studied how closely government policy correlated to the preferences of voters in different income brackets. After comparing public opinion and public policy on nearly 1,800 different issues, they drew a stark conclusion: “Economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.”
Thanks to the influence industry and lax campaign finance laws, the US government has effectively been captured by the interests of corporations and a very small number of wealthy elites. It would be unfair to blame the government’s current dysfunction entirely on the private sector, but a weak state certainly increases the relative power of industry.
At best, the private sector is complicit in the stagnation of American democracy. At worst, it plays an active role in promoting corporate autocracy and enabling corporate socialism, where the costs of bailouts are paid by taxpayers while gains are captured by shareholders.
When it comes to the influence game, many business leaders respond to criticism by saying that they did not break the law. This is also a common response when companies are criticized for prioritizing the short term over the long term, or for catering to shareholders over employees, the environment, and customers. But there is a snag in this logic.
According to Columbia Law School professor Tim Wu, it downplays both the private sector’s responsibilities and the consequences of its actions. “It’d be one thing if the rules were completely made external to your business. But if you help influence how those rules are made … you’re playing by your own rules,” Wu told me. “I think the older model corporation … had an idea of ethical duty that went beyond what was legal.”
But since the rise of shareholder capitalism, the private sector has tended to favor legal soundness over moral correctness. And in doing so, it has fundamentally undermined the social contract in the United States, said Stanford University historian Niall Ferguson.
“There are three kinds of countries in the world,” Ferguson told me. “There are countries where there really isn’t that much corruption and there is a fairly high level of probity in both business and political life. There are countries where there is the exact opposite: corruption is endemic … private interests treat the state as a gravy train. And then there’s the third category of countries, where corruption is legal and institutionalized and transparent. You know who’s giving money to the campaign and you basically know who’s in the room where it happens. The social contract says, ‘we hold these truths to be self-evident, but cash counts.’”
Ferguson’s case in point for that last category: the United States.
* * *
EVERY NATION IN the world is navigating some combination of factors that interfere with government’s ability to meet the needs of its people; this is the reality of any complex society. But the confluence of forces in the United States puts it in unique territory. The US has seen more power shift from the government to capital than just about any other developed country. The United States is the wealthiest nation in the world—it boasts the world’s largest companies, the world’s richest individuals, the world’s highest GDP, and the world’s highest government budget. Yet many Americans of working age find themselves operating without an effective safety net.
This is a break in the social contract that emerged from the Industrial Revolution. In industrialized societies, citizens’ well-being can rise and fall with the forces of the market. The early decades of industrialization, a period of wage stagnation and instability known as Engels’ Pause, came to a close only when governments began taking on responsibility for citizens’ economic security. Developed countries enacted policies to promote economic growth and to ensure that citizens shared in those gains. They also created social programs meant to prevent society’s most vulnerable members from falling into poverty, a safety net that keeps people from hitting the ground when fortunes fall.
Safety net programs have existed in various forms for millennia. The Roman Empire ran a pension program for retired soldiers, paying out veterans in land and lump sums of cash. During the Tang dynasty, the Chinese government distributed tax-free land to families with senior citizens, provided professional servants to people over the age of eighty, and distributed allowances to widows. Social welfare was similarly baked into the laws of the early Islamic caliphates. Citing Muslims’ obligation to provide for the poor (a concept known as zakat), governments would redistribute a portion of the taxes they collected to the needy. These programs were so successful that in the eighth century officials struggled to find anyone poor enough to receive welfare.
Modern safety net programs tend to be more targeted than these general cash transfers. They can include retirement benefits, health insurance, unemployment insurance, housing allowances, childcare benefits, food assistance, and other initiatives meant to keep the country’s most vulnerable members out of poverty. Some countries also provide cash welfare in the form of tax breaks and other transfers. While it is not a safety net program in the traditional sense, public education serves as a social good by offering children of every race, ethnicity, and class a path to economic stability.
The United States has always had a more limited social safety net than other Western democracies. American culture places a premium on individualism and self-sufficiency, and it tends to look down on those who receive support from the government. Unlike many other developed countries, the US does not provide free health care or higher education. Instead of relying on the state to provide a safety net, most Americans receive benefits like health care and retirement savings through their employer.
The private model emerged out of a quirk of World War II–era policy: under a wartime wage freeze, companies competed for new employees by offering benefit packages, like health insurance, that did not count as wages. The model stuck around as the postwar economy boomed, and it worked reasonably well when employees spent their entire careers at a single company and benefit levels were high. Today, neither condition holds as often. Even those companies that offer long-term employment have seen benefits shrink over decades of shareholder capitalism.
Meanwhile, the costs of basic necessities like housing, education, and health care have skyrocketed. Between 2002 and 2018, Americans saw the cost of housing rise 26 percent, the cost of health care rise 35 percent, and the cost of education rise 70 percent. These growing expenses disproportionately affect low-income families, who already spend a larger chunk of their earnings on basic necessities than wealthier households.
In 2019, researchers found that 40 percent of American households were one paycheck away from falling into poverty. The figure rose to 57 percent for nonwhite families, and that was before COVID hit. Today, millions of Americans belong to the working poor, living right around the poverty line despite holding a job.
Private benefits also work only when people have jobs. As unemployment soared in the early days of the COVID-19 pandemic, millions of people lost health coverage at the moment they needed it most. As people filed for unemployment, food stamps, and other benefits, US government safety net programs became overwhelmed. Some people waited months to receive unemployment payments, and workers who did receive welfare payments found that they covered only a small portion of the income the pandemic had cost them.
The pandemic revealed an inconvenient truth about the industry-driven social contract in the United States: it no longer works. Young people exiting school now seem more likely to have thirty jobs in thirty years than to have one job and one employer. Giving the private sector outsized influence over the country’s economic, political, and social health has not led to better outcomes for Americans, and many companies would be delighted to shed their role as the principal source of benefits. Companies are driven by the demands of the market, and those demands do not always align with those of a fair and just society. It is in the moments when the market does not deliver that the government is supposed to step in.
* * *
THIS DEEP DIVE into the American system shows us how a combination of internal governmental issues and external influences of business and private wealth can ultimately warp the social contract. The whole picture looks grim when laid out in brief, but it is by no means unfixable or inevitable. In a sense, the United States went on a bender after its chief rival, the USSR, sputtered and fell. It took the guardrails off its economic system, massively decreased taxes for corporations and high earners, and now is realizing how the social contract becomes skewed when government leaves the market unchecked.
But there are numerous examples of effective social contracts and safety nets that we can look to in trying to right the balance. As you will read in greater depth later in the book, a number of European nations, as well as South Korea, Australia, and New Zealand, have found ways to balance commerce and innovation with robust protections for workers who get caught up in the ebbs and flows of the market. While each model has its strengths and weaknesses, they are all instructive.
The Nordics, Canada, Australia, and New Zealand have used democracy to build some of the world’s strongest social safety nets. Under their social contracts, the state and its institutions guarantee citizens a high quality of life from the cradle to the grave.
The political and economic systems of the United States seek to provide Americans with equality of opportunity, knowing there will be inequality of outcome. In Japan, it is the opposite: everyone can expect a near-equal outcome.
Countries as distant geographically and culturally as South Korea and Israel have embraced the unregulated freedom of American-style capitalism but paired it with a stronger safety net.
In contrast, the world’s second leading power, China, has followed a completely different model. Over the last three decades, the Chinese Communist Party successfully implemented a social contract that combines the top-down control of authoritarianism with the efficiency and profitability of capitalism. This mix of political control and economic freedom—considered impossible decades ago—has transformed China from an agrarian state with two-thirds of its population living in extreme poverty into a formidable geopolitical player, though the model depends on consistently high growth in the absence of much in the way of formal state safety net programs.
The reality around the world is that developing nations are now choosing what social contract they will adopt as they come into their own in the 2020s and beyond. And the two leading models are the world’s greatest powers, the United States and China. The United States has long been the gold standard among developed nations, and if it mends the rifts in its social contract, then it will continue to be the nation that other nations aspire toward. From its founding, the US government was designed to be flexible enough to meet the needs of its people but inefficient enough, with layers of checks and balances, to avoid governmental overreach. The result was a framework that could foster both human liberty and security—goals that any individual around the world can appreciate. But the United States’ recent woes have invited doubts around the world about its future and its framework—even about the very compatibility of basic democracy with high-octane capitalism.
The growing interest among developing nations in China’s model rests on exactly these doubts. China’s model is rigid; it is founded on fast growth and firm executive action, which have kept companies and citizens alike in line with the party’s goals. But China’s social contract is ripe for the abuse of power. It is a throwback to social contracts from centuries past, where the people have little to no say in the direction taken by their governments. China has made it work so far, while it has been on a decades-long economic growth streak: Chinese citizens have seen their quality of life improve dramatically as the economy has opened up. Meanwhile, the iron hand of the Chinese government has offered shelter from the economic volatility that shook many other countries in the 2000s and 2010s. But if China’s growth ever stalls or reverses, the tacit social contract could be challenged. The Chinese people have given up major freedoms in exchange for growth and well-being. If those gains stall, the deal will start to look very bad for hundreds of millions of people—at which point, authoritarianism and force are what would hold the government-dominated social contract in place.