Throwing Rocks at the Google Bus: How Growth Became the Enemy of Prosperity

by Douglas Rushkoff


  It’s highly digital, not in the sense that things are recorded in bits but in the way a digital society is free to reprogram its obsolete code. This is economics as an open-source proposition, where nothing is too sacred for reconsideration. We may even seek to maximize human well-being instead of continuing to devalue human contributions as a matter of course. Instead of ending up in the same logical dead ends, we are empowered to rethink our way toward entirely new constructs. We change the program to the new circumstances, rather than the other way around.

  3. Surrender to Abundance: Guaranteed Minimum Income

  A surplus of productivity should not be a problem. It’s only troublesome in an economy in which markets are driven by scarcity alone and value is understood as something to be extracted from people rather than created for them. That’s the zero-sum economic approach that sees something like, say, renewable energy as such a problem: how do we compensate for the loss of profit to be derived from digging oil out of the ground? When an economy has been based in exploiting real and artificial scarcity, the notion of a surplus of almost anything is a mortal threat. As individuals living in an economic landscape constructed over centuries to remove humans altogether, we fear being sacked the moment we are no longer necessary to our employers. As business owners, we fear technologies that render our claim to scarce resources or other competitive advantages obsolete.

  We’ve gotten too competent to maintain this position. Rather, we must take a step outside the economic model in which we are living and accept the potentially scary truth that we have finally succeeded. In spite of our dehumanized approach (or maybe because of it), we have managed to produce enough stuff to give out a livable share to everyone as a matter of course, and for free. A whole lot of what used to be scarce is now plentiful, and between 3-D printing and other forms of distributed production, the rest of everything could turn out to be plentiful as well. We may be approaching what economic futurist Jeremy Rifkin calls “the zero marginal cost society,” in which new technologies reduce the cost of everything to nearly nothing at all.60

  In all honesty, I’m skeptical about digital technology’s ability to deliver on this potential anytime soon. Technologies such as 3-D printing may make it appear as if former consumers are now manufacturing goods from scratch, but this is an illusion. Most of these printers fabricate items with plastic and resin that still has to be sourced from somewhere. Future models will allow people to make things out of metal or other materials, which will still have to be dug out of the ground and refined by someone, somewhere. If the environmental and labor footprint of a single smartphone is any indication, the true cost of 3-D printing will be anything but zero. Additionally, someone will still be manufacturing the 3-D printers themselves, and if they’re at all like the printer manufacturers of today, they will be upgrading often. Just as street curbs today are littered with old monitors and ink-jet printers, tomorrow’s will likely be strewn with first- and second-generation 3-D manufacturing technology. Another growth industry.

  That said, I’d much rather we bump up against the limits of our technology and resourcefulness than the limits of our economic model. The last thing standing in the way of more distributed prosperity should be a paucity of imagination and courage. If we’re really going to make human beings central to the business equation, we at least have to entertain the possibility that not everyone needs full-time employment in order to receive a share of the bounty. Reducing or eliminating the work requirement may prove less taxing on the economy, on the environment, and on our society.

  This is not a new idea. A government-funded “guaranteed income” program has been proposed off and on since Eisenhower’s administration first noticed the impending postindustrial surplus economy. Nixon pitched a version of a minimum-income subsidy as part of his Family Assistance Plan in 1969. He tried to make the argument that he was doing something other than taxing one person “so another can choose to live idly.”61 Most of Congress agreed with the concept, as articulated by New York Representative William F. Ryan:

  Guaranteed annual income is not a privilege. It should be a right to which every American is entitled. No country as affluent as ours can allow any citizen or his family not to have an adequate diet, not to have adequate housing, not to have adequate health services and not to have adequate educational opportunity—in short, not to be able to have a life with dignity.62

  Congress even passed a guaranteed-income provision in 1970 by a vote of 243 to 155, but the Senate rejected the bill and others like it for fear it would make Americans lazy. Not even the support of free-market competition advocate Milton Friedman was able to convince them otherwise. Over the next decade, Margaret Thatcher and Ronald Reagan’s version of the social contract gained acceptance, stressing individual responsibility and pure market solutions to social problems. Since that time, the notion of a guaranteed income or negative income tax has sounded preposterous to most of us, a scheme disproved by the fall of the Soviet Union.

  Our best data63 suggests that to implement a negative income tax today in the United States—one capable of eliminating poverty entirely—would cost only an additional 1 percent of GNP. That’s going from government spending representing 21.1 percent of GDP up to about 22.6 percent.

  Critics of such plans are concerned that any guaranteed minimum income scheme will disincentivize work—particularly if there are clear cutoffs for participation, or what are known as “benefits cliffs.” Some plans work against this by giving everyone the same amount of money, poor or not, or by focusing tax credits on children.

  In any case, good evidence that minimum income plans lead to listlessness just doesn’t exist. The most famous experiments, conducted back in the 1970s, did show an overall reduction in work hours when guaranteed income kicked in. But these were largely a result of people cutting back from 60-hour workweeks to 40-hour ones. Some people left work to finish high school—hardly a symptom of sloth. Most researchers were unable to find any statistically significant labor market withdrawal at all.64 Even if the data is wrong and the naysayers are right, the upside of worker withdrawal is that it would make it easier for those still seeking work to find jobs. It would also free up the dropouts to contribute to society in all those ways that are currently not compensated.

  4. Redefine Work: Getting Paid to Address Real Needs

  Don’t worry about the lazy getting too well rewarded. In a guaranteed-income or public work scheme, not everyone gets a mansion. We can still compete in a free market for one of those. People who want to live in luxury, buy lots of movie tickets, take vacations, or enjoy fine wines, well, they can work for a living. Not in useless jobs created simply to stoke employment, but doing the sorts of things that humans do best.

  Unlike a guaranteed minimum income, “guaranteed minimum-wage public jobs” can actually redirect human effort toward the areas that need attention. As with the plans implemented for GIs returning from World War II, citizens are guaranteed jobs appropriate to their abilities that provide them with a living wage. The government sets those people to task building infrastructure or providing another civic service. Before you start seeing red, remember: unlike the socialism of the twentieth century, the motivation here is not to contend with scarcity but to contend with abundance. Although this may not have been the best approach to labor in a barely industrial Soviet Union, it may be entirely more effective at creating value for an abundant, digital-age America. Our problem is not a scarcity of toothpaste; it’s finding enough consumers to keep all the toothpaste workers employed.

  Working on questions of wealth inequality for the past fifty years, Oxford economist Sir Anthony B. Atkinson has gone the furthest to model these public work scenarios, albeit for the United Kingdom.65 His empirical approach to the data and comprehensive analysis of historical patterns conclusively confirm the economic validity of more equitable distribution of wealth—as well as an increase in total wealth—through public service options. The activities of these workers, even though funded by the government, end up contributing to the wealth creation of others in ways that more than compensate the tax base for the wages they have earned. Moreover, the money such workers earn tends to be spent back into the economy quite rapidly, recycling wealth to an extent that stock shares do not.

  In a human-focused economy, there will never be a lack of need for humans. Although diagnosing and medicating people might someday be done better by computers, caring for them will not. Health-care workers, home aides, and nurses—not to mention teachers, companions, nannies, and child-care workers—are some of the least appreciated, most underpaid professionals in our society. We have been conditioned to think of these occupations as demeaning when they are really the most economically pivotal and personally rewarding ones.

  These are the high-touch activities that cannot be replicated by machines. It’s their necessary connection to human providers, their very unscalability, that makes them so incompatible with industrial-age values. These workers create value in real time, often one-on-one. This may make their services poor areas for growth, but they are more than sustainable professional paths for those who can perform them well and terrific entrepreneurial opportunities for those who develop the platforms, networks, and equipment to enable them.

  The same goes for agriculture, textiles, and many other sectors where returning to local, human-scaled enterprise will lead to less worker exploitation and environmental damage while producing better, healthier products. Nonindustrial practices may be more labor-intensive, but they’re also better for us all. For those of us used to white-collar jobs, the idea of growing vegetables or making clothes may seem like a big step backward toward more menial labor. But consider for a moment the sorts of activities the wealthiest Americans or most satisfied retirees engage in enthusiastically: brewing craft beers, knitting, and gardening. If there’s really not enough work to go around and there are so many extra people to employ, we can always farm in shifts.

  Those with a penchant for global conquest can still work overtime and become legendary by solving the real problems of our society: topsoil depletion, global warming, slave populations, and energy production, to name just a few. They can track the entire global supply chain of the products everyone is using and root out the parts that place an unfair labor burden upon certain people. (A low-cost smartphone that requires workers to dig for rare metals in dangerous mines is not a low-cost smartphone.) Some of these problems will be mitigated simply by taking our emphasis off this relentless quest to employ more people the old way. Once we’re no longer worried about growing the economy mainly to create more jobs, we will be free to consider tackling real challenges, like the poor global distribution of crucial resources and the stultifying debt of developing nations.

  Will any of this happen? Not likely very soon, especially in an environment where the game of competitive scarcity has become equated with fairness, and success within that scheme is seen as a sign of grace. From a strictly factual perspective, however, the reason we can’t slow down has nothing to do with the supply of goods. We can make a whole lot less stuff—or even stop making more stuff—and still not end up waiting in 1970s’ Soviet-style lines for toothpaste. Adopting policies aimed not at increasing employment but at actively decreasing it means challenging the assumption that the economy has to keep growing at all.

  This would be a tough sell in a Congress that can’t even agree to pay for last year’s budget obligations, much less understand the difference between a debt ceiling and an operating deficit. Luckily, unlike Renaissance monarchs who depended on their exclusive power over lawmaking to rewrite the rules of commerce in their favor, we live in a digital landscape in which rules can be rewritten from the outside in. The industrial age may have been all about one-size-fits-all solutions, but the digital age will be about a wide range of distributed ones.

  That’s why we have to look at what we can do as business owners, investors, bankers, and individuals to program an economic operating system that works for people instead of against them.

  Chapter Two

  THE GROWTH TRAP

  CORPORATIONS ARE PROGRAMS

  Plants grow, people grow, even whole forests, jungles, and coral reefs grow—but eventually, they stop. This doesn’t mean they’re dead. They’ve simply reached a level of maturity where health is no longer about getting bigger but about sustaining vitality. There may be a turnover of cells, organisms, and even entire species, but the whole system learns to maintain itself over time, without the obligation to grow.

  Companies deserve to work this way as well. They should be allowed to get to an appropriate size and then stay there, or even get smaller if the marketplace changes for a while. But in the current business landscape, that’s just not permitted. Corporations in particular are duty bound to grow by any means necessary. For Coke, Pepsi, Exxon, and Citibank, there’s no such thing as “big enough”; every aspect of their operations is geared toward meeting new growth targets perpetually. That’s because, like a shark that must move in order to breathe, corporations must grow in order to survive. This requirement is in their very DNA or, better, the code we programmed into them when we invented them. Seeing how that was close to a thousand years ago, corporations have had a pretty long and successful run as the dominant business entity.

  The economy we’re operating in today may have been built to serve corporations, but not many of them are doing too well in the digital environment. Even the apparent winners are actually operating on borrowed time and, perhaps more to the point, borrowed money. Neither digital technology nor the corporation itself is necessarily to blame for the current predicament. Rather, it’s the way the rules of corporatism, written hundreds of years ago, mesh with the rules of digital platforms we’re writing today. A corporation is just a set of rules, and so is software. It’s all code, and it doesn’t care about people, our priorities, or our future unless we bother to program those concerns into it.

  That’s why it’s useful—particularly in a rapidly changing media environment—to look at corporations as if they were forms of media: programs, written by people at a particular moment in history in order to accomplish specific goals. Once we have a handle on the corporate program, we’ll have a much easier time understanding what happened when we plugged it into the digital economy, as well as what to do about it.

  Marshall McLuhan, the godfather of media theory, liked to evaluate any medium or technology by asking four related questions about it.1 The “tetrad,” as he called it—really an updated version of Aristotle’s four “causes”—went like this:

  What does the medium enhance or amplify?

  What does the medium make obsolete?

  What does the medium retrieve that had been obsolesced earlier?

  What does the medium “flip into” when pushed to the extreme?

  It sounds trickier than it is. The automobile, for example, amplified speed. What did it make obsolete? The horse and buggy. It retrieved the values of knighthood—the sort of jousting and machismo we see in everything from drag races to NASCAR. And when pushed to the extreme, it actually leads to traffic jams, working against the whole point of cars to begin with. Or take the cell phone: It amplifies our mobility and freedom. It makes landlines obsolete. It retrieves conversation. And flipped to the extreme, it becomes a new kind of leash, making us constantly available and accountable to everyone.

  The best part about looking at the corporation as a technology or medium is that, in the process, we remind ourselves that it didn’t just emerge as a natural phenomenon. It’s not as if businesses were getting so big that they evolved a corporate structure in order to keep growing properly. Quite the contrary: the corporation was invented by monarchs to stem the tide of a burgeoning middle class and its thriving new marketplace and usurp the growth they were enjoying. The fact that corporations were invented should alone empower us to reinvent them to our liking.

  So, then, what were corporations invented to amplify? The power of shareholders and the primacy of their capital. Feudal lords, who had lived off the labor of peasants for centuries, were getting poorer as the people began to make and trade goods with one another. The aristocracy needed a way to preserve their wealth and position in an increasingly free market. So they invented the chartered monopoly—a piece of paper with a list of rules—through which a king could grant exclusive dominion over an industry to his favorite merchant. In return, the king and other aristocrats got the right to invest in the enterprise. This way, they could use their wealth alone to make more money. Did the merchant need investors? For the most part, no.* But he made this concession in order to get the king’s charter and protection. The investors were like shareholders, and the merchant was like the CEO. Except these shareholders were also the ones writing the laws of the land.

  What did corporations render obsolete? They killed the local bazaar and all the peer-to-peer value creation and exchange that took place there.2 They also worked against the marketplace’s values of innovation and competition. If a company won the exclusive right to make clothing or to exploit the riches of the East Indies, then its only job was to extract value. It had no competition and no reason to innovate. We have to remember this part of the program because it’s so counterintuitive: the core code of the corporate charter is to repress exchange, competition, and innovation. It was intended to extinguish the free market.

  The third part of the tetrad, retrieval, is a tricky notion. It usually has a lot to do with cultural values—something from the deep past that gets rediscovered in a new form. Corporatism, by enhancing the power of the king and his ability to conduct great global enterprises, retrieved the values of empire. That’s how we got the Renaissance—quite literally, the “re-nascence” or “rebirth” of the values of ancient Greece and Rome. This time around, instead of the Holy Roman Empire, we got colonialism. The colonial powers reduced places to territories and people to human resources from which to extract labor. Local values had to give way to those of the chartered corporations and the gunships protecting them.

 
