Super Thinking


by Gabriel Weinberg



  Historically, vaccination rates stayed above the respective herd immunity thresholds to prevent outbreaks, so free riders didn’t realize the harm they could be inflicting on themselves and others. In recent years, however, vaccination rates have dipped dangerously low in some places. For example, in 2017, more than seventy-five people in Minnesota, most of whom were unvaccinated, contracted measles. We can expect outbreaks like this to continue as long as there exist communities with vaccination rates below the herd immunity threshold.
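
  A rough way to see where these thresholds come from (the formula and the reproduction-number estimates below are standard epidemiology approximations, not figures from this chapter): the threshold is about one minus the reciprocal of R0, the average number of people a single case infects in a fully susceptible population.

```python
# Rough illustration, not from the book: the standard epidemiological rule of
# thumb puts the herd immunity threshold at 1 - 1/R0, where R0 (the basic
# reproduction number) is how many people one infected person infects in a
# fully susceptible population. The R0 values below are commonly cited
# approximations.
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to stop sustained spread."""
    return 1 - 1 / r0

for disease, r0 in [("measles", 15.0), ("polio", 6.0), ("seasonal flu", 1.5)]:
    print(f"{disease}: roughly {herd_immunity_threshold(r0):.0%} must be immune")
```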

  Unfortunately, some people cannot be medically immunized, such as infants, people with severe allergies, and those with suppressed immune systems. Through no fault of their own, they face the potentially deadly consequences of the anti-vaccination movement, a literal tragedy of the commons.

  Herd immunity as a concept is useful beyond the medical context. It applies directly in maintaining social, cultural, business, and industry norms. If enough infractions are left unchecked, their incidence can start increasing quickly, creating a new negative norm that can be difficult to unwind. For example, in Italy, a common phrase is used to describe the current cultural norm around paying taxes: “Only fools pay.” Though Italy has been actively fighting tax evasion in the past decade, this pervasive cultural norm of tax avoidance took hold over a longer period and is proving hard to reverse.

  In situations like these, dropping below a herd immunity threshold can create lasting harm. It can be difficult to put the genie back in the bottle. Imagine a once pristine place that is now littered with garbage and graffiti. Once it has become dirtied, that state can quickly become the new normal, and the longer it stays dirty, the more likely it is to remain that way.

  Hollowed-out urban centers like Detroit or disaster-ridden areas like parts of New Orleans have seen this scenario play out in the recent past. People who don’t want to live with the effects of the degradation but also don’t want to do the hard work to clean it up may simply move out of the area or visit less, further degrading the space due to lack of a tax base to fund proper maintenance. It then takes a much larger effort to revitalize the area than it would have taken to keep it nice in the first place. Not only do the funds need to be found for the revitalization effort, but the expectation that it should be a nice place has to be reset, and then people need to be drawn back to it.

  All these unintended consequences we’ve been talking about have a name from economics: externalities, which are consequences, good or bad, that affect an entity without its consent, imposed from an external source. The infant who cannot be vaccinated, for example, receives a positive externality from those who choose to vaccinate (less chance of getting the disease) and a negative externality from those who do not (more chance of getting the disease). Similarly, air pollution by a factory creates a negative externality for the people living nearby—low air quality. If that same company, though, trained all its workers in first aid, residents would receive a positive externality if some of those workers used that training to save lives outside of work.

  Externalities occur wherever there are spillover effects, which happen when an effect of an activity spills over outside the core interactions of the activity. The effects of smoking spill over to surrounding people through secondhand smoke and, more broadly, through increased public healthcare expenditures. Sometimes spillover effects can be more subtle. When you buy a car, you add congestion to the roads you drive on, a cost borne by everyone who drives on the same roads. Or when you keep your neighbors up with loud music, you deprive them of sleep, causing them to be less productive.

  Over the next few days, look out for externalities. When you see or hear about someone or some organization taking an action, think about people not directly related to the action who might experience benefit or harm from it. When you see someone litter, be aware of the negative externality borne by everyone else who uses that space. Consider that if enough people litter, the herd immunity threshold could be breached, plunging the space into a much worse state.

  Addressing negative externalities is often referred to as internalizing them. Internalizing is an attempt to require the entity that causes the negative externality to pay for it. Ideally the “price” attached to the unwanted activity is high enough that it totally covers the cost of dealing with that activity’s consequences. A high price can also stop the harm from occurring in the first place. If you see a sign warning of a five-hundred-dollar fine for littering, you will be sure to find a trash can.

  There are many ways to internalize negative externalities, including taxes, fines, regulation, and lawsuits. Smoking externalities are internalized via cigarette taxes and higher health insurance premiums for smokers. Traffic congestion externalities are internalized through tolls. On a personal level, your neighbor might file a noise complaint against you if you consistently play music too loud.

  Another way to internalize externalities is through a marketplace. Ronald Coase won the Nobel Prize in economics in 1991 in part for what has become known as the Coase theorem, essentially a description of how a natural marketplace can internalize a negative externality. Coase showed that an externality can be internalized efficiently without further need for intervention (that is, without a government or other authority regulating the externality) if the following conditions are met:

  Well-defined property rights

  Rational actors

  Low transaction costs

  When these conditions are met, entities surrounding the externality will transact among themselves until the extra costs are internalized. If you recall the Boston Common example, the externality from overgrazing was internalized by setting a limit on the number of cows per farmer (regulation). There were no property rights involved, though.

  The Coase theorem holds that instead of limiting the cows, another solution would have been to simply divide the grazing rights to the commons property among the farmers. The farmers could then trade the grazing rights among themselves, creating an efficient marketplace for the use of the commons.

  Governments have similarly tried to address the negative externalities from the burning of fossil fuels (e.g., climate change) through cap-and-trade systems, which are modern-day applications of the Coase theorem. The way these systems work is that the government requires emitters to hold permits for the amount of pollutants they emit. The government also sets a fixed number of total permits, which serves as the emission cap in the market, similar to the imposed limit on the number of cows that could graze on Boston Common. Then companies can trade permits on an open exchange. Such a system satisfies the conditions of the Coase theorem because property rights are well defined through the permitting process, companies act rationally to maximize their profits, and the open market provides low transaction costs.
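
  To make the mechanics concrete, here is a minimal sketch of the decision each emitter faces once such a permit market exists. The firms, abatement costs, and permit price are made-up numbers for illustration.

```python
# Toy cap-and-trade logic with hypothetical numbers. Each firm must hold one
# permit per ton it emits. A firm compares the market price of a permit with
# its own cost of cutting ("abating") a ton of emissions: if abating is
# cheaper, it cuts and sells its spare permits; otherwise it buys permits.
# Either way, the cost of polluting is now internalized.
PERMIT_PRICE = 30  # dollars per permit, set by trading on the open exchange

firms = [
    {"name": "A", "abatement_cost": 20},  # can cut a ton of emissions for $20
    {"name": "B", "abatement_cost": 50},  # cutting a ton costs $50
]

for firm in firms:
    if firm["abatement_cost"] < PERMIT_PRICE:
        choice = "cut emissions and sell spare permits"
    else:
        choice = "buy permits on the exchange and keep emitting"
    print(f"Firm {firm['name']}: {choice}")
```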

  If you’re in charge of any system or policy, you want to think through the possible negative externalities ahead of time and devise ways to avoid them. What spillover effects could occur, and who would be affected by them? Is there a common resource that free riders could abuse or that could degrade into a tragedy of the commons? Is there another way to set up the policy or system that would reduce possible negative effects?

  RISKY BUSINESS

  Another set of unintended consequences can arise when people assess risk differently based on their individual positions and perspectives. These types of complications happen a lot with insurance, where risk assessments create financial consequences. For example, will you drive more recklessly in a rental car after you purchase extra rental insurance, simply because you’re more protected financially from a crash? On average, people do.

  This phenomenon, known as moral hazard, is where you take on more risk, or hazard, once you have information that encourages you to believe you are more protected. It has been a concern of the insurance industry since the seventeenth century! Sometimes moral hazard may involve only one person: wearing a bike helmet may give you a false sense of security, leading you to bike more recklessly, but you are the one who bears all the costs of a bike crash.

  Moral hazards can also occur when a person or company serves as an agent for another person or company, making decisions on behalf of this entity, known as the principal. The problem arises when the agent takes on more risk than the principal would if the principal were acting alone, since the agent is more protected when things go wrong. For instance, when financial advisers manage your money, they try to stick to your risk profile, but they are more likely to take greater risks than you would on your own, simply because it isn’t their money, and so losses do not impact their net worth as much.

  Agency can lead to other issues as well, collectively known as the principal-agent problem, where the self-interest of the agent may lead to suboptimal results for the principal across a wide variety of circumstances. Politicians don’t always act in the best interest of their constituents; real estate agents don’t always act in the best interest of their sellers; financial brokers don’t always act in the best interest of their clients; corporate management doesn’t always act in the best interest of its shareholders—you get the idea. The agent’s self-interest can trump the principal’s interests.

  Some fascinating studies of this concept have measured the behavior of agents when they are serving themselves compared with how they serve others. Real estate agents tend to sell their own houses at higher prices compared with their clients’ houses, in large part because they are willing to leave them on the market longer. In Freakonomics, Steven Levitt and Stephen Dubner dig into the reason why:

  Only 1.5 percent of the purchase price goes directly into your agent’s pocket.

  So on the sale of your $300,000 house, her personal take of the $18,000 commission is $4,500. . . . Not bad, you say. But what if the house was actually worth more than $300,000? What if, with a little more effort and patience and a few more newspaper ads, she could have sold it for $310,000? After the commission, that puts an additional $9,400 in your pocket. But the agent’s additional share—her personal 1.5 percent of the extra $10,000—is a mere $150. . . .

  It turns out that a real-estate agent keeps her own home on the market an average of ten days longer and sells it for an extra 3-plus percent, or $10,000 on a $300,000 house. When she sells her own house, an agent holds out for the best offer; when she sells yours, she encourages you to take the first decent offer that comes along. Like a stockbroker churning commissions, she wants to make deals and make them fast. Why not? Her share of a better offer—$150—is too puny an incentive to encourage her to do otherwise.
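
  The arithmetic in that excerpt is easy to check for yourself; the sketch below just restates the quoted numbers, using the 6 percent total commission rate that Levitt and Dubner assume.

```python
# Re-deriving the numbers quoted above from Freakonomics.
price = 300_000
total_commission = 0.06 * price      # $18,000 commission on the sale
agent_take = 0.015 * price           # the agent personally keeps 1.5% = $4,500

extra = 10_000                       # suppose the house sells for $310,000 instead
seller_extra = extra - 0.06 * extra  # $9,400 more in the seller's pocket
agent_extra = 0.015 * extra          # only $150 more for the agent

print(total_commission, agent_take, seller_extra, agent_extra)
# 18000.0 4500.0 9400.0 150.0
```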

  Moral hazard and principal-agent problems can occur because of asymmetric information, where one side of a transaction has different information than the other side; that is, the available information is not symmetrically distributed. Real estate agents have more information about the real estate market than sellers, so it is hard to question their recommendations. Similarly, a financial adviser generally has more information about the financial markets than their clients.

  It is also not always completely transparent to principals how agents are compensated, which might cause principals to make different decisions than they would if they had the full picture. If you knew that your financial adviser was getting paid to recommend a financial product to you, you might be less likely to invest in it. Disclosure laws and the increase in open information via the internet can reduce the effects of asymmetric information.

  Sometimes, though, the consumer has the upper hand when it comes to asymmetric information. This is often the case with insurance products, where the person or company applying for insurance usually knows more about their own risk profile than the insurance company does.

  When parties select transactions that they think will benefit them, based at least partially on their own private information, that’s called adverse selection. People who know they are going to need dental work are more likely to seek out dental insurance. This unfortunately drives up the price for everyone. Two ways to mitigate adverse selection in the insurance market are to mandate participation, as many localities do for car insurance, and to distinguish subpopulations based on their risk profiles, as life insurers do for smokers.

  Like crossing a herd immunity threshold, rampant and persistent asymmetric information in a market can lead to its collapse. Consider a used car market where the sellers know the quality of their cars, but the buyers cannot distinguish between lemons (bad cars) and peaches (good cars).

  In such a market, buyers will want to pay only an average-quality price for the cars on the market, since they can’t tell the difference between peaches and lemons. Sellers who know their cars are peaches, however, will not want to sell them in this market because they know their cars are worth more than the average price. As they pull their peaches out of the market, the average quality drops and, in turn, the price of the used cars left in the market keeps dropping. The sellers of lemons free-ride on the market until it collapses into just a market of lemons.
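
  You can watch this death spiral play out in a toy simulation. The car values below are made up, but the dynamic is the one described above: buyers offer only the average price of whatever is still listed, sellers of better-than-average cars withdraw, and the average falls again.

```python
# Toy "market for lemons" spiral with hypothetical car values known only to
# the sellers. Buyers offer the average value of the cars still listed; any
# seller whose car is worth more than that offer pulls out; the average then
# drops, and the cycle repeats until only lemons remain.
cars_for_sale = [2_000, 3_000, 6_000, 8_000, 10_000]  # true values of listed cars

while True:
    buyer_offer = sum(cars_for_sale) / len(cars_for_sale)  # average-quality price
    remaining = [value for value in cars_for_sale if value <= buyer_offer]
    print(f"Buyers offer ${buyer_offer:,.0f}; "
          f"{len(remaining)} of {len(cars_for_sale)} cars stay listed")
    if remaining == cars_for_sale:  # no one else drops out; only lemons are left
        break
    cars_for_sale = remaining
```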

  Adverse selection was an early concern with the state health insurance exchanges as part of the Affordable Care Act (ACA) in the United States. Extending the metaphor, the lemons are sick people applying to the exchanges, and the peaches are healthy people applying. There was an individual mandate requiring health insurance, but the penalties for not complying were low, so the concern was that many healthy people would just opt to pay the fine rather than participate. Sick people, who need the insurance, would therefore make up more of the applicants, causing premiums to rise so that health insurers could cover their costs to care for them. This would, in turn, eject from the market more healthy participants not willing to pay these higher premiums, further raising prices. This situation is still unfolding, with those invested in the success of the ACA trying to ensure that it doesn’t spiral out of control.

  The “Death Spiral” of Adverse Selection

  Sometimes there are ways to break the cycle. In the case of the used car market, services like Carfax try to restore symmetric information. This arrangement allows buyers to distinguish between lemons and peaches, and it eventually pushes lemons out of the market. In contrast, one of the goals of the ACA was to make sure that people with preexisting conditions were not pushed out of the market. The only way this system works, though, is if healthy, lower-risk people continue paying into the system, allowing insurers to spread out the cost of higher-risk individuals. This helps keep premiums from rising too high, making care more affordable for everyone. That’s why the ACA mandate was so important to the viability of the overall system.

  The mental models from the last section (tragedy of the commons, externalities, etc.) and those from this section (moral hazard, information asymmetry, etc.) are signs of market failure, where open markets without intervention can create suboptimal results, or fail. To correct a market failure, an outside party must intervene in some way. Unfortunately, these interventions themselves can also fail, a result called government failure or political failure.

  Antibiotics present a good case study in market and political failure. As we described earlier, overuse of antibiotics can reduce their efficacy as a common resource because bacteria have a chance of evolving to develop resistance each time an antibiotic is used. Overprescription of antibiotics therefore contributes to the negative externality of widespread antibiotic resistance.

  Before antibiotics, bacterial infections were a leading cause of death. Scarlet fever, a complication of untreated strep infections (usually strep throat), regularly killed children in the early 1900s, and at its peak tuberculosis caused 25 percent of all deaths in Europe. Anything that could lead to an infection, such as surgery or even a small cut, could be deadly.

  If bacteria develop resistance to all antibiotics, large-scale bacterial outbreaks could be a reality again. That’s why public health officials want to sideline some antibiotics for which bacteria have not yet developed any resistance, to be used in potential doomsday outbreak scenarios. We need a supply of new antibiotics to defend against these bacterial “superbugs” that cannot be killed by any current antibiotics.

  For these new antibiotics to be useful in future doomsday scenarios, though, they must be used sparingly, only in cases where they are absolutely necessary. That’s because each time they are used, the risk of bacteria developing resistance to them goes up. Let’s suppose the government addresses the market failure of overuse of these antibiotics by permitting their use only in dire circumstances.

  There is a clear need for these new antibiotics to be created, but at the same time they could not be regularly used or sold. Assuming that their development and production continue to be left primarily to the private market, the government failure from this regulatory environment emerges: How can pharmaceutical companies get a return on their investment given current patent laws? Drug patents are likely to expire, or come close to expiring, before the drugs are ever needed, effectively erasing most potential profits for the pharmaceutical companies. Also, throughout this period, some amount of the drugs still needs to be continually produced in case it is needed, with batches continually expiring before they can be sold. (Unfortunately, it is not possible to cost-effectively sideline large-scale manufacturing capacity while waiting for peak demand.)

  These uncertainties lead to a second market failure of severe underinvestment in the development of new antibiotics, leaving us collectively vulnerable to future outbreaks. In fact, most large pharmaceutical companies have long since totally discontinued research-and-development investment in this area.

  According to a 2014 report commissioned by the U.S. Department of Health and Human Services, there is a huge disconnect between the value of a new antibiotic for society and the value to the private market. In some cases, as for bacterial ear infections, the expected value to the private market is actually negative, while the value to society is estimated to be approximately $500 billion!

 
