The Apprentice Economist
The point is actually more difficult to grasp than we first appreciate. Ingredients are an imperfect means to an end. Each ingredient contains different combinations of one or more and perhaps even all of the basic characteristics. By combining ingredients we enhance or blunt the effect of different characteristics in the final meal. The fixed composition of characteristics in the ingredients may never allow us to attain precisely the ideal mix of characteristics we seek. The skill of the cook is in combining the ingredients at his or her disposal in such a manner as to approach as closely as possible this ideal “point” in the multi-dimensional “gastronomic characteristics space”.
In 1966, Kelvin Lancaster generalized this concept in a mathematical model that laid the basis for what is now known as home economics. The idea is to try to specify some domestic “production function” that shows how basic inputs, each with different amounts of the end-characteristics sought, can be combined to maximize some objective of the consumer. Production functions of course are not the only consideration. You have also to consider the prices of the basic inputs. The most practical application of home economics comes in calculating the budget needed to keep a person acceptably fed. You need to tabulate the caloric, protein, fat, and vitamin content of hundreds of foods and find the combinations that yield the best mix of these characteristics for the lowest price. This is how armies have determined their menus, and while military fare may not be the most highly prized, it is at least following Lancaster’s algorithm for the cost-minimizing combination of ingredients.
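This cost-minimizing menu problem is easy to make concrete. The sketch below is only illustrative: the foods, prices, and nutrient contents are invented numbers, and an off-the-shelf linear-programming routine stands in for Lancaster's more general framework. It finds the cheapest combination of ingredients that delivers a required bundle of characteristics (calories, protein, iron).

```python
# A minimal sketch of cost minimization over characteristics: find the cheapest
# bundle of foods that meets daily nutrient requirements. The foods, prices,
# and nutrient contents below are illustrative numbers only.
import numpy as np
from scipy.optimize import linprog

foods = ["bread", "milk", "beans", "spinach"]
price = np.array([0.30, 0.60, 0.45, 0.80])        # price per serving (made up)

# Rows: calories, protein (g), iron (mg) per serving (made-up values).
nutrients = np.array([
    [250, 60, 120, 25],    # calories
    [  9,  3,   8,  3],    # protein
    [  1,  0,   2,  3],    # iron
])
required = np.array([2000, 55, 10])               # daily requirements

# linprog minimizes price @ x subject to A_ub @ x <= b_ub, so the requirement
# "nutrients @ x >= required" is written as "-nutrients @ x <= -required".
result = linprog(price, A_ub=-nutrients, b_ub=-required, bounds=(0, None))

for food, servings in zip(foods, result.x):
    print(f"{food:8s} {servings:5.1f} servings")
print(f"minimum daily cost: ${result.fun:.2f}")
```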
The ideology of constraints vs. preferences
THE NEED FOR Lancaster’s approach to consumer theory was in a certain sense ideological, but not in any political sense. The ideology in question was the belief that as little appeal as possible should be made to the consumer’s preferences in trying to model how he or she chooses between goods. Preferences are not directly observable and as such should be invoked as seldom as possible when interpreting differences in consumer behavior. Since the only things we can directly observe are incomes and prices, we should focus on these in tests of the validity of our theory of the consumer.
By seeing the ultimate object of consumption as characteristics and not goods, Lancaster was introducing a new and, he hoped, measurable source of variation into the consumer model that could resolve some of its seeming weaknesses. Sometimes people seemed to consume more of one good even though neither its price nor their incomes had changed. The easy but untestable explanation was that preferences had changed. Lancaster’s approach suggested we first look to see whether the characteristics content of that good had changed. Maybe quality had gone up, and this was the explanation for the rise in demand. If you could measure quality, you could apply some simple statistical analysis to see if variation in quality “explained” variation in demand.
Looking beneath the surface of goods to see where they were situated in characteristics space also inspired economists to rethink how economic progress should be calculated. If you compared just the price of a PC in 1981 to the price thirty years later, you might find that there had been only a slight drop. But a PC thirty years later was thousands of times more powerful than the original. Comparing apples and oranges is a weak metaphor to describe the problem. A more apt metaphor is comparing garden sheds and skyscrapers. The characteristics approach allows one to go beyond such metaphors so that comparisons can be made on the basis of numbers. The trick is to identify the characteristics relevant to the performance of a computer and see how their costs have fallen. Millions of operations per second used to be an important benchmark, but others, such as memory, can be imagined. If the price of a PC has remained constant over thirty years, but the number of operations has increased by a multiple of a thousand, then the price per calculation, which is the relevant characteristic, has fallen a thousand-fold.
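The arithmetic behind that claim can be written out directly. The sketch below uses the stylized numbers from the paragraph above, a constant sticker price and a thousand-fold rise in operations per second; the dollar figure and the operation counts are arbitrary placeholders, since only the ratios matter.

```python
# Illustrative quality adjustment following the stylized PC example in the text:
# the sticker price is unchanged, but the machine performs a thousand times more
# operations per second, so the price per operation falls a thousand-fold.
price_then, price_now = 1000.0, 1000.0    # dollars (assumed constant)
ops_then, ops_now = 1e6, 1e9              # operations per second (1000x increase)

price_per_op_then = price_then / ops_then
price_per_op_now = price_now / ops_now

print(f"then: ${price_per_op_then:.2e} per op/s of capability")
print(f"now:  ${price_per_op_now:.2e} per op/s of capability")
print(f"characteristic price fell {price_per_op_then / price_per_op_now:,.0f}-fold")
```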
A new perspective on the meaning of price
ECONOMIST WILLIAM NORDHAUS explained the problem this way, “If we are to obtain accurate estimates of the growth of real incomes over the last century, we must somehow construct price indexes that account for the vast changes in the quality and range of goods and services that we consume, that somehow compare the services of horse with automobile, of Pony Express with facsimile machine, of carbon paper with photocopier, of dark and lonely nights with nights spent watching television, and of brain surgery with magnetic resonance imaging” (1997, 30).
Nordhaus followed up on this insight by asking how changes in the quality of lighting had improved our material condition over the ages. Call it a character study of lighting. The relevant quality characteristic of visible light is the lumen. Wood fires are an inefficient means of generating lumens because of the large amount of energy needed to produce a lumen using this “technology”. Candles are slightly more efficient and thus have a lower cost per lumen, but not by much.
The cost of generating lumens explains why through much of history people went to sleep at dusk and rose at dawn, and why streets were rarely lit. The invention of gas lighting vastly reduced the cost per lumen. The result was reduced street crime, a boom in the evening entertainment business, and factories and businesses that never ceased operating. Electric lighting led to further falls in the price of lumens to the point where at the end of the 20th century the price of a lumen was thousands of times less than what it would have been two hundred years earlier.
To help appreciate the importance of this insight, Nordhaus asked how much labor it would have taken for the average person to buy lighting in the past, and lighting in the present. He found that “ … one modern one-hundred-watt incandescent bulb burning for three hours each night would produce 1.5 million lumen-hours of light per year. At the beginning of the last century, obtaining this amount of light would have required burning seventeen thousand candles, and the average worker would have had to toil almost one thousand hours to earn the dollars to buy the candles” (1997, 50).
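The figures in the quote are easy to check. The short sketch below reproduces them under one added assumption, namely that the 100-watt bulb is taken to emit roughly 1,400 lumens; the candle count and the labor hours come straight from the quote.

```python
# Back-of-the-envelope check of the Nordhaus figures quoted above, under one
# assumption: the 100-watt incandescent bulb is taken to emit about 1,400 lumens.
bulb_lumens = 1400.0                 # assumed light output of a 100 W bulb
hours_per_night = 3.0
nights_per_year = 365

lumen_hours_per_year = bulb_lumens * hours_per_night * nights_per_year
print(f"lumen-hours per year from the bulb: {lumen_hours_per_year:,.0f}")   # ~1.5 million

# Figures taken from the quote: 17,000 candles and ~1,000 hours of labor
# to obtain the same amount of light a century ago.
candles_needed = 17_000
labor_hours = 1_000

print(f"implied light per candle: {lumen_hours_per_year / candles_needed:,.0f} lumen-hours")
print(f"labor cost of light then: {labor_hours / lumen_hours_per_year * 3600:.1f} "
      f"seconds of work per lumen-hour")
```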
The practical lesson to be drawn from this fascinating study of lighting is that the way we measure the consumer price index is severely flawed. Instead of putting goods and their prices directly into the index, we should reduce all goods to their constituent characteristics. Then we should evaluate how these goods can best be combined to minimize the cost of consuming these characteristics. Such an approach would allow us to include new goods in the consumer price index without worrying about whether the index of today is comparable to that of ten years ago when the good did not exist. Such an approach would also allow governments to more precisely calculate the rate at which welfare and other forms of aid should be increased. At present such calculations tend to overestimate the cost of living because they do not take into account the manner in which increases in quality reduce the monetary cost of maintaining a certain standard of living.
While these may sound like important applications of Lancaster’s approach, their adoption has been slow. Judging what the underlying characteristics of goods are is in part a subjective exercise. People prayed and still pray in front of candles. Few people I know pray in front of 20-watt, self-ballasted, compact fluorescent bulbs. Do you include a spiritual component in lighting quality, and if you do, how should you measure it? Questions such as this have, in practice, kept Lancaster’s theory from fulfilling the ample promise it showed when he first introduced it.
Rosen and the equalizing difference
IN LANCASTER’S CHARACTERISTICS space, people have room to maneuver. Consumers combine goods representing a “vector of characteristics” while balancing costs to produce an amalgam of characteristics close to their desired point. In this manner they “span” characteristics space. In Lancaster’s world, characteristics could be freely varied by combining goods in different proportions.
In his 1974 and 1986 articles Sherwin Rosen asked what would happen if you were limited in how you could move about through characteristics space. He pointed out that sometimes when buying a product with several underlying characteristics you could not just go out and span characteristics space by buying a bit of another product with the same characteristics but in different proportions. The reason was that sometimes when you buy something, you are selling something at the same time and are able to sell uniquely to one purchaser. Recombining goods to balance characteristics to suit your tastes is not possible. Rosen called such exchanges tied-sales.
The oldest kind of tied-sale in the world is marriage. Both parties to a marriage bring a bundle of characteristics to the bargaining table. Within the framework of economics each is selling himself or herself and buying something in the other partner simultaneously. If the characteristics balance each other the match may proceed. If not, then a dowry, or in the inverse case, a bride price may be required. Rosen called this cash emolument an “equalizing difference”.
What is special about tied-sales is that who you are can matter as much to the sale as how much you pay. This forces tied-sales markets to work on two levels. First there is a matching problem to be solved, so that people with the most desirable exchange of characteristics find each other. But should these characteristics not balance each other, then on a second level an equalizing difference of cash is required. Rosen showed that tied-sales could lead to the segregation of people by their types. Segregation has a bad name, and justly so. But Rosen also argued that the worst effects of segregation could be palliated by a market that resolved supply and demand in complicated tied-sale situations through the monetary payment of an equalizing difference.
Cyanide and gold
A SIMPLE BUT relevant example of how tied-sales trap people in characteristics space is that of a dirty job such as gold mining. Some mines use toxic cyanide to extract ore. Others use a simple non-toxic grinding and filtering method. Workers are all equally capable but differ in their tolerance for poisonous working conditions. Some mind quite a bit, others a bit less, and some not at all. Companies all differ in their costs of adopting the clean technology. Here is a labor market with a tied-sale. Workers sell their services to the mine, but at the same time the mine implicitly “sells” a certain level of cyanide toxins to the workers while buying their services. To entice workers to these sites, companies that use cyanide must offer a wage premium above that paid to miners working on clean sites. Rosen called this premium an “equalizing difference”. Workers are the implicit purchasers of the disamenity, but unlike a good that brings positive value to consumers, a disadvantage or hazard is something the worker must be explicitly paid to “consume”.
Most of those enticed to work at the dirty site will actually be paid more than what they would accept as minimum compensation because their distaste for pollution may be quite mild. Only the workers on the margin of indifference between clean and dirty jobs feel they get a compensation just equal to the disamenity they feel they suffer.
How much a mine is willing to pay workers to consume poison depends on how costly it is for the company to stop using cyanide and invest in clean technology. Some companies may find it too costly to operate a clean mine. Others may have an aptitude and skill for running a clean mine. Those firms for whom the wage premium is above the cost of cleaning will run clean mines and those for whom the wage premium is below cleaning costs will run dirty mines. As a result we have potentially different gold mines and workers. Some gold mines will decide to use cyanide techniques, and some miners will choose to work in the resulting poisoned environment. Other companies will decide to be clean and some workers will join them.
How does this sifting and segregation take place? Assume that workers all have the same skills so competition between them for jobs will ensure they all will get some base salary. At this base salary almost no one wants to work in a dirty job except the few workers for whom dirt is of no account. There is a surplus then of workers seeking clean jobs. To correct this imbalance firms with some dirt will bid up the premium. The premium will adjust until the group of high-tolerance workers are employed by the mines with high cleanup costs, and low-tolerance workers are employed in the clean mines, which end up selling cleanliness by not offering the wage premium. The premium helps to segregate workers according to their taste for pollution.
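A small simulation makes the sorting mechanism concrete. In the sketch below the distributions of worker distaste and firm cleanup costs are invented, each firm is assumed to hire one worker, and the premium is adjusted until the dirty-job market clears; tolerant workers end up in dirty mines, intolerant workers in clean ones, and the infra-marginal workers in dirty mines earn rents.

```python
# A minimal simulation of Rosen-style sorting on one disamenity, with invented
# distributions: each firm hires one worker; a firm stays "dirty" only if paying
# the premium is cheaper than cleaning up, and a worker takes a dirty job only
# if the premium covers his or her distaste for the toxins.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
cleanup_cost = rng.uniform(0, 20, n)     # firm's cost of running a clean mine
distaste = rng.uniform(0, 20, n)         # worker's minimum acceptable premium

def excess_supply(premium):
    """Workers willing to go dirty minus firms choosing to stay dirty."""
    willing_workers = np.sum(distaste <= premium)
    dirty_firms = np.sum(cleanup_cost > premium)
    return willing_workers - dirty_firms

# Bisect on the premium until the dirty-job market clears.
lo, hi = 0.0, 20.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if excess_supply(mid) < 0:
        lo = mid          # too few willing workers: the premium must rise
    else:
        hi = mid
premium = 0.5 * (lo + hi)

dirty_firms = cleanup_cost > premium
dirty_workers = distaste <= premium
rents = premium - distaste[dirty_workers]      # surplus of infra-marginal workers

print(f"equilibrium premium: {premium:.2f}")
print(f"dirty mines: {dirty_firms.sum()}  tolerant workers in them: {dirty_workers.sum()}")
print(f"average rent earned by workers in dirty mines: {rents.mean():.2f}")
```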
Technically, the level of segregation is efficient in the sense that it would be impossible to change the wage premium in such a way that any single firm or worker was made better off without making someone else worse off. Economists call such an outcome “Pareto efficient”. On an intuitive level, the segregation is efficient because workers who do not mind toxins are paired off with companies that have a hard time cleaning them up, and workers who do mind toxins are paired with companies that are good at cleaning them up. What in the end balances this sorting is the wage premium, which varies until everyone is matched up. Or you might use the word “segregated”. Segregation of tolerant workers with dirty firms and intolerant workers with clean companies through the wage mechanism gets the toxin issue out of the way and allows the underlying labor market to compensate workers for their productive contributions.
Rosen’s prime motivation for analyzing situations in which wages act as a balancing monetary component of a tied sale was somewhat obscure. He was interested primarily in showing how to interpret impact factors or “regression coefficients” arising from efforts to see the connection between labor market disamenities and wage premiums. In order to interpret the regression coefficient correctly as the wage premium all workers require to be just sufficiently compensated for a little bit more of the disamenity, workers basically had to all have the same distaste for it. That seems quite obvious. But Rosen went on to use his model to show how, when workers differ in their preferences, the economist could still draw meaning from the regression coefficient, as either an upper or lower bound on the equalizing difference.
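A toy version of that regression shows what the coefficient measures. In the sketch below all numbers are invented: wages are regressed on a dirty-job indicator, the coefficient recovers the wage gap between dirty and clean jobs, and only under the homogeneous-distaste assumption described above can that gap be read as the compensation every worker requires.

```python
# A sketch of the regression being interpreted: wages regressed on a dirty-job
# indicator, with invented numbers. When every worker has the same distaste,
# the coefficient on "dirty" equals the equalizing difference; with heterogeneous
# distastes it is only a bound on it.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
base_wage = 20.0
true_premium = 4.0                       # equalizing difference paid by dirty mines

dirty = rng.integers(0, 2, n)            # 1 if the job is in a cyanide mine
wage = base_wage + true_premium * dirty + rng.normal(0, 1.5, n)

# OLS of wage on an intercept and the dirty-job dummy.
X = np.column_stack([np.ones(n), dirty])
coef, *_ = np.linalg.lstsq(X, wage, rcond=None)

print(f"estimated base wage:    {coef[0]:.2f}")
print(f"estimated wage premium: {coef[1]:.2f}   (true value {true_premium})")
```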
While this might at first glance seem like the musings of an overspecialized economist, Rosen’s work was in fact of key importance to the field of cost-benefit analysis. It pointed the way to evaluating the benefits of government interventions that reduced the suffering from industrial pollution. Rosen’s method could also be used to evaluate how much people would pay to avoid bearing a small risk to their lives. This came to be misnamed the “value of life”, but despite the nomenclature it is of vital importance in evaluating the benefits of government infrastructure projects, such as road improvements, that make roads safer. His method of analysis also had a huge impact on understanding how labor markets, and any other markets involving tied sales, worked. These insights can be unusual and provocative and force us to reconsider common perceptions about how government can fruitfully intervene in tied-sales exchanges. Let us examine these situations more closely.
Equalizing differences and segregation
EVEN THOUGH THE labor market tries to cope with the tied-sale through a wage premium, it would be better for society if someone invented a costless way of eliminating the toxic by-products of gold mining, allowing the market for labor to proceed unhampered by the tied-sale.
I mean “society” in the following sense. Companies that were previously polluting would not have to pay a wage premium, while those that were clean would not have to pay cleanup costs. Not everyone would be happy with this change. Some “infra-marginal” workers in the polluting industry who had a high tolerance for pollution were earning more than they needed to be compensated to continue in their jobs. In fact, most workers in the toxic mines were better off with pollution than without it because the compensating wage premium rises until it just barely compensates the most reluctant worker to enter a toxic mine.
Anyone who has worked in some northern wasteland on an oil rig knows this. Provided you are hearty, job conditions do not matter much to you and you rejoice at the wage premium. That premium exists to draw in those extra workers the company needs but who have no stomach for the work. The premium the hearty worker receives is a pure bonus or “rent” as economists call it. The example illustrates why economists are so finicky about the distinction between margins and averages. The wage premium in the market adjusts until the last worker drawn into the toxic mines is “at the margin of indifference” between that and a clean job. Those “infra-marginal” workers already drawn to dirty mines have a higher average tolerance for pollution, and thus earn on average “rents” from the wage premium. While these workers would lose out from the disappearance of pollution, their loss would be exactly balanced by the gain of previously polluting companies who no longer have to pay a wage premium.
So far then, the exercise is a financial wash. The net gain to society comes from companies that were previously cleaning their mines. If the new technology allows costless elimination of pollution, then these companies would save their cleanup costs, and that saving would represent a net gain for society. I keep talking about “society” because economists are interested in how human behavior adds to the pile of real wealth in the world. While the wealth may belong to individuals, it still represents an accretion for the society in which they dwell and as such may be accessed by that society through taxation.
Barring the invention of a costless means of cleaning pollution, a wage premium that matches preferences for cleanliness with abilities to clean up toxic sites is a limited but elegant response to the problem of the tied-sale of work and pollution, for it allows demand and supply to equilibrate. Without the wage premium, firms with low cleanup costs and workers with high tolerance would not be able to exploit their abilities to clean and to tolerate. Clean and dirty jobs would now look the same to workers and companies. What might equilibrium look like in such a case? Well, if the wage cannot vary with the pollution, workers with low tolerance for toxins would keep knocking on the doors of businesses they knew to be efficient cleaners. They would lobby for a position because, short of that, they would have nothing to distinguish them from an identically skilled worker with a high tolerance for toxic waste. Lobbying is no guarantee of getting a job, but the effort invested would be like money spent buying lottery tickets. Similarly, companies with high cleaning costs would solicit high-tolerance workers to come work for them. It is possible to prove that an equilibrium with the same degree of segregation as in the wage-premium case would emerge. In other words, “things” would get “sorted out”.