The Apprentice Economist


by Filip Palda


  Here was a natural experiment in the sense that Argentina’s draft lottery was intended purely as a recruiting device and not as an experiment in the effect of military life on subsequent criminal behavior. The authors found convincing evidence that “participation in conscription increases the likelihood of developing a subsequent criminal record, particularly for crimes against property and white collar crimes … conscription has detrimental effects on future job market performance” (2011, 119).

  Then there is the case of US food aid to Africa. After the famous 1985 Live Aid concert that raised funds to send food to starving Ethiopians, no one dared question the usefulness of such gifts. Twenty or so years later professors Nathan Nunn and Nancy Qian from Harvard and Yale used a natural experiment to show that food aid could have negative side effects. They wrote, somewhat obscurely, that their “paper examines the effect of US food aid on conflict in recipient countries. To establish a causal relationship, we exploit time variation in food aid caused by fluctuations in US wheat production together with cross-sectional variation in a country’s tendency to receive any food aid from the United States” (2012, 1).

  For decades aid researchers had wondered whether food aid was given in response to conflicts in recipient countries or whether food aid caused conflicts. It was quite possible that there was a circular relationship between aid and conflict. Aid could enhance conflict, which could enhance aid. Most people call this a vicious circle. Economists call it a positive feedback relation, or a simultaneous equation relation. No one seemed to be able to disentangle one effect from the other. The ensuing ambiguity fuelled conflicts between researchers and policy-makers with opposing views on the merits of aid.

  To break out of the “causal loop” between aid and conflict, Nunn and Qian made use of the fact that in the US, when there is a bumper crop of wheat, by law some of the surplus is put aside to be sent to needy countries the following year. The legally mandated nature of this aid created an automatic mechanism that broke the causal loop. Chance fluctuations in weather may lead to a US bumper crop. Because chance is at work, this crop will have no relation to whether some conflict is taking place in some far-off country. This independently determined aid surplus is then sent off to Africa. Nunn and Qian were astute enough to track the surplus aid and then to associate it with conflicts in the recipient areas. They concluded that, “an increase in US food aid increases the incidence, onset and duration of civil conflicts in recipient countries … increasing food aid by ten percent increases the incidence of conflict by approximately 1.14 percentage-points” (3). Put differently, a ten per cent rise in food aid raises the likelihood of civil conflict by roughly one percentage point.
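  For readers curious about the mechanics, here is a minimal sketch of the instrumental-variables logic behind such a study. Everything in it is invented: the variable names, the numbers, and the simple linear model are illustrative assumptions, not Nunn and Qian’s code or data.

```python
# Sketch of the instrumental-variables idea: wheat harvests are driven by
# weather, so they shift aid but cannot themselves be caused by conflict.
import numpy as np

rng = np.random.default_rng(0)
n = 500

wheat = rng.normal(size=n)                  # chance, weather-driven harvests
propensity = rng.normal(size=n)             # unobserved local conflict conditions
aid = 0.8 * wheat + 0.5 * propensity + rng.normal(size=n)
conflict = 0.3 * aid + 1.0 * propensity + rng.normal(size=n)

# Naive regression of conflict on aid is contaminated by the causal loop:
naive = np.polyfit(aid, conflict, 1)[0]

# Two-stage least squares: first predict aid from wheat alone...
aid_hat = np.polyval(np.polyfit(wheat, aid, 1), wheat)
# ...then regress conflict on the wheat-driven part of aid only.
iv = np.polyfit(aid_hat, conflict, 1)[0]

print(f"naive estimate: {naive:.2f} (biased upward by the loop)")
print(f"IV estimate:    {iv:.2f} (close to the true 0.3)")
```

  The point of the two stages is that only the part of aid explained by the weather is used to estimate the effect on conflict, so the circular feedback from conflict back to aid is filtered out.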

  Computer experiments

  THE CREDIBILITY REVOLUTION in econometrics applies mainly to so-called microeconomic phenomena. Did a quota, assigned by lottery, that reserved the post of town council head in India for women lead to improved road and water quality? Does a subsidy to employers help get workers back into the market? Answering these questions in a credible manner depended on some variant of randomized experiments. But how do you conduct a randomized experiment when you want to see if a collapse of the housing market leads inevitably to a widespread economic depression? For macroeconomic phenomena, experimental methods are not helpful. In his 2010 essay on experimental methods, Nobel Prize winner Christopher Sims called the application of experimental methods to determining causality in macroeconomics “nonsense”.

  In the face of such intellectual broadsides some researchers retreated, quite literally, into their own imaginations. Their idea was to forget about establishing causality by using statistics and an experimental approach to data. Instead, they decided to build a mathematical model of the economy with pencil and paper. The model had to show how investment influenced national income, how interest rates varied with investment, how labor supply rose and fell in response to wages, and how consumption and savings and income changed. Above all, it had to be consistent with past economic reality. You could explore different scenarios with such a model. What if government lowered taxes on corporate profits? What if there arose an unexpected improvement in the technology for extracting petroleum? Finn Kydland and Edward Prescott (1982) outlined how such a Gedankenexperiment should proceed in their foundational article on what has since come to be known as “real business cycle theory.”

  The first step in building a model that would allow you to simulate the economy was that you had to be guided by some sense of economic history. If you look at economic growth in developed economies you see a long upward trend with many small deviations and then a few very large ups, which represent booms, and downs, which represent depressions. You also see that investment spending is much more volatile than personal consumption.

  These contrasting variations suggest that a mathematical model capturing the evolution of the economy over time requires two components. First, it should have some inner core which produces the baseline trend. This is, in fact, what the theory of equilibrium in a certain world provides. Most versions of equilibrium theory say that in a certain world there should be no booms or busts, or even minor deviations from the long-term upward trend in income. If they have perfect certainty, investors continue pouring money into machines until a saturation point is reached at which the rate of return from investing in “capital” is equal to the cost of an extra dollar of investment, the interest rate on borrowed money. Once an economy reaches this “steady state” its growth rate remains constant, and income grows at this steady rate forever. There are no booms or busts, simply convergence to a steady state.
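  One conventional way to state the saturation point, in standard growth-theory notation rather than anything taken from this book: if f(k) is the output produced from a stock of capital k, δ the rate at which machines wear out, and r the interest rate on borrowed money, investment stops expanding where

```latex
% Steady-state condition: the net return on one more unit of capital
% just equals the interest rate on borrowed money.
f'(k^*) - \delta = r
```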

  The second component of the Gedankenexperiment should add booms and busts to the baseline trend in growth. One way to get booms and busts is to spike the certainty model with so-called “random economic shocks” (though one can also generate cycles by assuming that preferences for consumption in different time periods are so tightly wound up with each other that their effects rebound on each other in a manner that creates waves).

  Kydland and Prescott did not invent this idea. In 1937 Eugen Slutsky discovered that by adding a randomly generated number to a simple “difference equation” relating future income to past income you could generate ups and downs resembling economic cycles. What Kydland and Prescott contributed was to insert these random shocks within the context of a model of consumers and producers optimizing their well-being and profits.
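  A few lines of Python convey the flavor of Slutsky’s discovery. The particular difference equation and its coefficients below are illustrative choices, not Slutsky’s own:

```python
# Slutsky's insight in miniature: feed random shocks into a simple difference
# equation relating income to its own past, and wave-like "cycles" appear.
import numpy as np

rng = np.random.default_rng(1)
T = 200
y = np.zeros(T)

for t in range(2, T):
    shock = rng.normal()                         # purely random disturbance
    y[t] = 1.5 * y[t - 1] - 0.8 * y[t - 2] + shock

# The series oscillates around its mean rather than wandering aimlessly:
crossings = np.sum(np.diff(np.sign(y - y.mean())) != 0)
print(f"mean crossings over {T} periods: {crossings}")
```

  The chosen coefficients give the equation complex roots inside the unit circle, which is what turns patternless shocks into damped, recurring waves.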

  In their model a new technological discovery might push up the productivity of investments. In response, firms quickly increase their investments above their long-term trends to take advantage of the new opportunities. Once these one-time new opportunities are exhausted, businesses once again lower their investment spending to the original long-term trend. Investment is fuelled, in part, by workers who also find extra cash in their pockets because the boom in business opportunities leads to an increased demand for their services. Because they do not like to live in either feast or famine, they save some of their unexpected cash windfall, which banks then lend out to investors. This is why consumption has a muted response to surprise changes in productivity while investment reacts much more suddenly. It does so in order to take advantage of fleeting opportunities. The triumph of the real-business cycle model was that when you dropped random shocks into it, trends consistent with the stylized facts of economic growth emerged. This model of the economy allowed Kydland and Prescott to simulate what would happen if government tried to stimulate the economy by lowering taxes or increasing spending.
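  The mechanism can be caricatured in code. The sketch below is not Kydland and Prescott’s optimizing model; it replaces their carefully derived decision rules with a crude consumption-smoothing rule of thumb and invented parameter values, but it shows how a persistent random productivity shock makes investment swing far more than consumption:

```python
# Stripped-down stochastic growth sketch (illustrative, not Kydland-Prescott):
# a persistent productivity shock drives output; households smooth
# consumption, so investment absorbs most of each surprise.
import numpy as np

rng = np.random.default_rng(2)
T, alpha, delta, rho = 200, 0.33, 0.10, 0.9

k = np.ones(T + 1)           # capital stock
z = np.zeros(T + 1)          # log productivity, a persistent AR(1) process
y = np.zeros(T)              # output
c = np.zeros(T)              # consumption
i = np.zeros(T)              # investment
c_smooth = None

for t in range(T):
    y[t] = np.exp(z[t]) * k[t] ** alpha          # output from capital
    target = 0.7 * y[t]                          # desired consumption share
    c_smooth = target if c_smooth is None else 0.9 * c_smooth + 0.1 * target
    c[t] = c_smooth                              # households smooth consumption
    i[t] = y[t] - c[t]                           # investment takes the residual
    k[t + 1] = (1 - delta) * k[t] + i[t]
    z[t + 1] = rho * z[t] + 0.02 * rng.normal()

vol = lambda x: np.std(np.diff(np.log(x)))       # volatility of growth rates
print(f"consumption volatility: {vol(c):.4f}")
print(f"investment volatility:  {vol(i):.4f}")
```

  Because consumption adjusts only gradually toward its target, every productivity surprise lands almost entirely on investment, reproducing the stylized fact that investment is far more volatile than consumption.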

  Yet an enormous question mark hangs over this approach to understanding how the economy works. The equations in the Kydland-Prescott model spit out their predictions based on two types of information fed into them. One type of information is the value that variables will take. A variable is just that. It varies. Taxes or investment are variables. Then there are things that do not vary, and they are called parameters. A parameter is a fixed number that translates changes in one variable into changes in another, for example, the impact of an increase in wealth on savings. If each dollar increase in wealth increases savings by half a dollar, the impact parameter is 50 per cent.

  Kydland and Prescott had to choose impact parameters such that their mathematical model was backwardly consistent with preceding economic trends. Put differently, if you had data from 1900 to 1980 on national income, investment and so on, then your model had to be able to generate these numbers.

  It turned out that many different combinations of impact parameters were consistent with past historical trends. When you picked a set of parameters you had “calibrated” to be consistent with the past, you were never quite sure whether they were the right ones. Naturally that influenced your ability to evaluate the future effect of changes in government policy. I say “evaluate” because the Kydland-Prescott framework was not really about prediction as most people understand it. It was about simulating the very narrow impacts of changes in the economic environment. To appreciate this we need to look more closely at what prediction in economics really means.
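  A toy example makes the calibration worry concrete. The “model” below is a single multiplication and the historical target is invented, but the punchline carries over to serious models: many different parameter pairs reproduce the same history.

```python
# Toy calibration problem: search for parameter pairs whose simulated
# long-run growth matches one "historical" moment. Many pairs qualify.
import numpy as np

def simulated_growth(saving_rate, productivity):
    """Long-run growth implied by a deliberately crude toy model."""
    return saving_rate * productivity

historical_growth = 0.02           # the moment the model must reproduce

matches = []
for s in np.linspace(0.05, 0.40, 36):
    for a in np.linspace(0.01, 0.40, 40):
        if abs(simulated_growth(s, a) - historical_growth) < 1e-3:
            matches.append((round(s, 3), round(a, 3)))

print(f"{len(matches)} parameter pairs fit the same history")
print(matches[:5])
```

  Every pair on the printed list is “calibrated” to the past, yet each implies a different answer to a question about the future, which is precisely the ambiguity described above.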

  What about prediction?

  ALL THIS DISCUSSION about statistics generally gets people interested in the question of predicting the future. One of the most annoying and embarrassing questions an economist can be asked is “so if you are so smart with your economics Ph.D. why aren’t you rich?” At the heart of this reproach is the sentiment that with all their fancy models of the economy and their high-powered data analysis, economists should be able to predict how stock prices will move, or when the economy will crash. At the very least, they should be able to tell us where society is heading in the long term.

  These assaults on economists are understandable. From the 1930s to the 1960s, data analysis in economics was dominated by scholars who either were originally physicists or had strong training in that discipline. The earliest and greatest triumph of physics was the ability to predict the course of the planets and even stars. That power is based in part on Newton’s first law, which says that an object in motion tends to stay in motion, thereby allowing one to trace a future trajectory based on the preceding trajectory.

  Yet anyone who has studied physics closely will tell you that, with the exception of rocket trajectories in the vacuum of space, prediction is a challenge. The best physical models of the atmosphere cannot predict the weather with any useful accuracy for longer than a week. What prediction really means in physics is that you can tell how an entity will react when acted upon by another entity. Prediction is based on a knowledge of how the components of the system interact. Then you can calculate how some components will change when others change.

  Predicting the effect on one variable of a change in another is also the essence of prediction in economics. This does not mean that a model which predicts behavior will tell you how people will behave in the future. Did you get that? Few people do, though the point is not difficult to grasp.

  Suppose you have a model which tells you that when the price of movie tickets rises people will go to fewer movies. You go back over the numbers and look to see if attendance fell when prices rose. If it did, then your model is said to be able to predict the effect of a rise in ticket prices on the level of attendance. But this is very different from claiming that the model can tell you how attendance will evolve over the next three years.
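  The distinction is easy to show with invented numbers. The little regression below can tell you what a price rise does to attendance; it cannot tell you what attendance will be in three years, because the other forces that move attendance are absent from the model:

```python
# Toy illustration (made-up numbers): a fitted relation "predicts" the effect
# of a price change without forecasting attendance itself.
import numpy as np

price      = np.array([8.0, 9.0, 10.0, 11.0, 12.0])    # past ticket prices
attendance = np.array([520, 480, 455, 410, 380])        # past admissions

slope, intercept = np.polyfit(price, attendance, 1)

# What the model CAN say: the effect of a proposed $1 price increase.
print(f"each $1 rise in price changes attendance by about {slope:.0f} visits")

# What it CANNOT say: attendance three years from now, because incomes,
# streaming services and tastes -- none of them in the model -- will move too.
```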

  Economic models are not crystal balls. They are a statement of relations between variables. This is why economic models are at their best when being used to analyze the result of a government intervention, such as the imposition of rent control. A ceiling on rent discourages landlords from providing rental properties because landlords conform to the economic prediction that supply falls when price falls. Rent control disturbs an economic system previously in equilibrium. Theory tells us how variables in the system, such as apartments for rent, will change when the system is disturbed.

  But ask an economist to tell you where the economy will be next year and you will draw a blank. The models may be sound, but the economist can only predict if he or she knows how the inputs into these models, such as interest rates, inflation, and technological change, will vary over the next year. These changes are disturbances to the economic system which neither the economist nor any other individual has the ability to predict systematically. The point is that we must understand that economic models “predict” in a very different manner from what people understand the term to mean. It is the prediction of change in a system when subject to a particular disruption.

  If this line of reasoning sounds like a dodge to you, then perhaps you might be convinced that some economic values cannot be predicted because of the very logic of the economic models used in the attempt to predict them. People anticipate future developments in the economy and this makes their current decisions necessarily inscrutable. Suppose someone could anticipate your every move and reaction to future economic developments. He or she could then exploit you. To thwart such exploitation humans have built into their decisions a certain degree of randomness. Game theorists call this a “mixed strategy”. Humans generate uncertainty and unpredictability of necessity, as a protective measure against exploitation by other humans. We are all squirting ink and muddying the waters, which vastly complicates the efforts of economists who wish to map the human heart, whilst mixing metaphors.

  Even when we are not playing games with others we absorb information in such a way as to invalidate the economist’s efforts at prediction as the case of stock markets shows. If people make the most of the available information, then the past trend in stock prices should be no guide to the future trend. If it were, then you would buy more of an upward rising stock, thereby pumping up its price in the present and by this anticipatory purchase wiping out the remaining anticipated increments to its price. Since past trends are of no use, the only valid information to act upon is that which nature reveals at its whim. News that the CEO will retire may precipitate a fall in the stock price. The reaction is immediate. And being immediate, it makes stock prices look random. New information is revealed randomly and is immediately folded into stock prices, making these resemble what Burton Malkiel described as a “random walk”.
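  The random-walk idea takes a few lines to simulate. In the sketch below the “news” is pure random noise, folded into the (logarithm of the) price the moment it arrives; the resulting returns are uncorrelated from one day to the next, exactly as the story requires:

```python
# Sketch of the "random walk": if news arrives at random and is priced in
# immediately, past returns carry no information about future returns.
import numpy as np

rng = np.random.default_rng(3)
news = rng.normal(0, 0.01, size=2000)     # random daily information shocks
log_price = np.cumsum(news)               # each shock folded in at once

returns = np.diff(log_price)
lagged_corr = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(f"correlation of today's return with yesterday's: {lagged_corr:.3f}")
# close to zero: yesterday's move tells you nothing about today's
```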

  With regard to government interventions, Robert E. Lucas Jr. showed in his celebrated 1976 essay that simply relying on past relations between, say, consumption and government spending would not tell you how consumers were going to react to new government attempts to stimulate the economy. Consumers based their decisions on how long they believed the government stimulus would last. The new government plan or “rule” for increased stimulus became part of the consumer’s decision problem. This invalidated any past relations that might have been established by econometricians between government spending and private consumption, because those past relations were based on a different government stimulus rule. More technically, the method of regression analysis suffered from “specification bias”: it did not take into account how consumers reacted to the new policy, and so failed to integrate the “parameters” of the government policy into the “decision problem” of consumers. Any estimated regression relation between consumption and government spending was therefore incomplete and potentially biased.

  Perhaps the only predictions we can bank upon are those relating to government interventions whose effects we measure through controlled experiments. The controlled experiment does not suffer from bias, but the lessons it teaches are circumscribed by the nature of the experiment. Discovering that lessons in crop rotation increase village agricultural yields does not mean that it is a good idea to extend all sorts of help to villages. Experiments tell you only what you ask of them. The lesson is that the predictive power of economics is closely related to the questions one asks. The more specific the question, the better the prediction, but the more specific the question, the less widely applicable is the lesson. In prediction as in all of statistics—and economics—trade-offs rear their unpleasant heads, reminding us that there are no free lunches.

  Summary

  STATISTICAL CONTROL CAN be summarized first in a word, then in a sentence. In a word, control means filtering. In a sentence, statistical control is a method for isolating the influence of a variable of interest, such as a government effort to help the unemployed, on a target variable, such as unemployment, by removing the contribution to unemployment of all other possible “control” variables.

  Control should be a part of every educated person’s mental arsenal because control is a concept that can help us easily uncover false ideas we encounter every day. If you read in the morning news that video games make children socially inept because a new study shows that one goes with the other, then the idea of control allows you to poke through the essential flabbiness of the stance. Video games may not cause social ineptitude, but rather, social ineptitude may lead children to play video games. To determine whether video games cause social ineptitude you would have to create two similar groups of children and allow one group to play video games and forbid the other from doing so. Then you would test them to see whether the gamers became less socially adept. In doing so, you would have eliminated all explanations for why gaming and ineptitude go together, with the exception of chance events. If you find the gaming group more socially inept by a small margin, you could use statistical analysis to tell you how likely it was that chance created this difference.
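  That last step is worth one small sketch. With invented social-skill scores for the two groups, a permutation test re-deals the group labels thousands of times and counts how often chance alone produces a gap as large as the observed one:

```python
# Permutation test with made-up scores: how often does chance alone produce
# a gap as large as the one observed between the two randomized groups?
import numpy as np

rng = np.random.default_rng(4)
gamers     = np.array([61, 58, 64, 55, 59, 62, 57, 60])   # hypothetical scores
non_gamers = np.array([63, 60, 66, 59, 61, 65, 58, 64])

observed_gap = non_gamers.mean() - gamers.mean()

pooled = np.concatenate([gamers, non_gamers])
n = len(gamers)
count = 0
for _ in range(10_000):
    shuffled = rng.permutation(pooled)            # re-deal the group labels
    gap = shuffled[n:].mean() - shuffled[:n].mean()
    if gap >= observed_gap:
        count += 1

print(f"observed gap: {observed_gap:.2f} points")
print(f"chance alone matches it in {count / 10_000:.1%} of reshuffles")
```

  If chance reproduces the gap only rarely, you have grounds to believe the gaming itself, and not luck of the draw, created the difference.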

 
