The Apprentice Economist


by Filip Palda


  To return to the lifetime consumption problem, the trick with investment is to choose among millions of potential income paths over which we must calculate the optimal consumption trajectory. It is like solving millions of permanent income hypothesis problems, one for each way you might vary your circumstances, and then trying to figure out which path is best. Few, then, will be surprised that finding optimal trajectories for whatever control variable you have at hand is a challenge that is difficult to meet head on. The head-on approach is to derive a so-called analytical or “closed-form” solution that tells you at any precise time how much capital you have accumulated, and how much you should be consuming and investing to maximize lifetime utility.

  Finding a closed-form solution to the optimal path of consumption and investment, sometimes called Ramsey’s Problem, is daunting and usually forces economists into making highly restrictive assumptions about some of the quantities involved. As Olivier Blanchard and Stanley Fischer put it in their classic 1989 macroeconomics textbook (283), “Can we go from these first-order conditions [equations of motion] to derive an explicit solution for consumption and saving? In general, we cannot, but for specific utility functions and assumptions about asset returns and labor income, we can.”

  Any time you restrict something that can move in a dynamic system, you greatly simplify the analysis, though at the cost of diminished realism. Not wishing to give up realism, economists had to make an intellectual trade and give something else up in return. Instead of giving us the precise levels of consumption and investment that an economy will follow, many of the more sophisticated models of inter-temporal optimization are satisfied to tell you, through a formula known variously as a differential equation or an equation of motion, how crucial variables of interest such as consumption and investment should change at any given moment. You don’t know where the horse will be at any time in the future, but at any given instant you can tell in what direction it is headed.
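  To make the idea concrete, here is one standard textbook form that such equations of motion take in the Ramsey-style problem discussed next. The notation is illustrative shorthand rather than the author’s: 1/θ is the willingness to substitute consumption over time, ρ the rate of impatience, δ the depreciation rate, and f(k) output produced from capital k.

```latex
\begin{align*}
\dot{k} &= f(k) - c - \delta k
  && \text{capital grows by whatever output is not consumed, net of depreciation} \\
\frac{\dot{c}}{c} &= \frac{1}{\theta}\left( f'(k) - \delta - \rho \right)
  && \text{consumption rises while the net return to capital exceeds impatience}
\end{align*}
```

Neither line reports the level of consumption or capital at any future date; each reports only the direction and speed of change from wherever the economy currently stands: the heading of the horse rather than its position.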

  This form of reasoning shows economists whether consumption and investment will converge to some steady point which is optimal for the economy. Such concepts were implicit in the work of the 1920s Cambridge mathematician Frank Ramsey. He derived equations of motion for an economy following the best path for maximizing well-being when consumption must be sacrificed today to invest and augment consumption later. His work was in turn extended in the 1960s by the American economist David Cass, and together their models showed how a free market economy could produce a “Pareto efficient” consumption path. Along such a path consumption and investment cannot be rearranged so as to make at least one person better off without hurting anyone else, the standard definition of efficiency in economics.

  The fall and rise of growth theory

  THOUGH THIS EFFICIENCY result was important to the first wave of researchers in the field, to subsequent students of economics the insights into human behavior and markets derived with such effort from integrating consumption and investment seemed limited, and perhaps even irrelevant. As a result, academic interest in this field largely stalled in the 1970s. Who, after all, really cared more than fleetingly if a theoretical consumption and investment path extending to infinity was Pareto optimal? Was not the problem of business cycles more important? How could optimal consumption and growth theory, with all its limiting analytical complications, tell us anything worthwhile about these cycles? And who wanted in any case to learn the maths needed to understand what was going on when a much simpler and more elegant approach to consumption was at hand in the permanent income, life-cycle models of Friedman and Modigliani, limited as they might be by the absence of a growth component?

  Interest was only revived during the 1970s and 1980s with the work of a wave of researchers who used newly developed mathematical techniques and computer technology to get a more realistic feel for what was going on when people engaged in complex intertemporal optimization decisions.

  The key mathematical technique was a revolutionary simplification of the calculus of variations known as “dynamic programming”. Under certain conditions it allowed the researcher to find an analytical solution, or at least a very good approximation to one, that illuminated the optimal path that consumption, investment and other control variables took.

  Dynamic programming, the invention of the mathematician Richard Bellman, was a new way of solving problems that the calculus of variations had trouble with, and one ideally suited to computers because it considered time in discrete periods such as this year and next year, rather than as a continuous flow (though Bellman also figured out how to apply his concepts to continuous time to solve similar problems of “optimal control”). Whereas the calculus of variations tried to pick the best path for some “control” variable, such as investment, over the whole planning horizon, dynamic programming broke the problem into a series of decisions made on the spot. Each decision weighed how the choice affected your wellbeing, or profits, or whatever else you were trying to optimize, in the present period against the benefits you could get in all subsequent periods by diverting resources to a future in which you had already devised an optimal plan.

  In dynamic programming you did not choose one path out of many but rather a “policy” on how to act in the current and all subsequent periods, a choice that could be cut off from what happened in all previous periods. Of course the old methods from the calculus of variations could still be applied, but getting the solution would be much more laborious. Provided that your value function, or objective, such as total lifetime happiness or utility, was “separable” in the sense that your anticipated utility from a later age did not affect your utility at an earlier age, the vast inter-temporal optimization problem could be broken up into small, forward-looking packets. You started by solving the problem for the last two periods of your planning horizon to get an optimal response to whatever resources previous decisions had left you. Then you stepped back a period and decided how to invest given that you already knew what your optimal response to current choices would be, and so forth. Thus you could chain a series of already solved mini-problems into a larger sequence of choices reaching all the way back to the first period of your problem. This produced breathtaking increases in the speed of solving problems of intertemporal optimization. The approach also meshed with Søren Kierkegaard’s view that “Life can only be understood backwards; but it must be lived forwards.”
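  As a rough sketch of that backward chaining, the short program below solves a tiny finite-horizon savings problem by starting at the end of the planning horizon and stepping back one period at a time. Every number in it (the horizon, the wealth grid, log utility, the interest rate and discount factor) is an illustrative assumption, not something taken from the text.

```python
import numpy as np

# A minimal backward-induction sketch of a finite-horizon savings problem.
# All parameters below are illustrative assumptions only.
T = 5                                  # planning horizon: periods 0..T-1
beta = 0.95                            # discount factor on future utility
r = 0.04                               # net return on savings
grid = np.linspace(0.1, 10.0, 200)     # possible wealth levels

def utility(c):
    return np.log(c)                   # "separable" per-period utility

# value[t, i]: best lifetime utility from period t onward with wealth grid[i]
# policy[t, i]: the consumption choice that achieves it
value = np.zeros((T + 1, grid.size))   # value after the horizon is zero
policy = np.zeros((T, grid.size))

for t in reversed(range(T)):           # last period first, then step back
    for i, w in enumerate(grid):
        c = np.linspace(1e-3, w, 100)              # feasible consumption today
        w_next = (w - c) * (1 + r)                 # wealth carried into t + 1
        # look up the already-solved value of behaving optimally from t + 1 on
        continuation = np.interp(w_next, grid, value[t + 1])
        candidates = utility(c) + beta * continuation
        best = int(np.argmax(candidates))
        value[t, i] = candidates[best]
        policy[t, i] = c[best]

# policy[0] now says what to consume today at any wealth level, taking as
# given that every later period will also be handled optimally.
print(round(policy[0, grid.size // 2], 3))
```

The key structural feature is that value[t] is computed entirely from value[t + 1]: each period’s choice looks only forward, exactly the “small, forward-looking packets” described above.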

  The dynamic programming approach did not work well if you tried to apply it to situations where the anticipated benefits of future choices had some influence on your present well-being. Then you would end up with “inconsistent” solutions. You would have arrived at a series of policies you would want to change in the future because, in deriving them, you had not taken into account what had happened in the earlier parts of the problem. You would have to keep “re-opening” your dynamic programming procedure until your solution converged to what you would have discovered had you simply adopted the more complex calculus of variations approach in the first place.

  I am grossly simplifying the condition needed for application of Bellman’s approach, but roughly speaking it was what Bellman called the “principle of optimality”. If the principle held then dynamic programming problems could each be solved as an interlocking sequence of backwardly deduced problems.
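  In the notation most textbooks use (again my shorthand, not the author’s), the principle lets the whole problem be written recursively as a Bellman equation,

```latex
V_t(x) \;=\; \max_{u}\; \Bigl\{ F(x, u) + \beta \, V_{t+1}\bigl( g(x, u) \bigr) \Bigr\},
```

where x is the current state (wealth, say, or the capital stock), u the control (consumption or investment), F the payoff earned right now, g the law of motion carrying today’s choice into tomorrow’s state, and V_{t+1} the already-solved value of behaving optimally from tomorrow onward. Solving for today’s best u requires nothing about the past except the state x it has left you with.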

  Alpha Chiang (1992, 22) explained that “if you chop off the first arc from an optimal sequence of arcs, the remaining abridged sequence must still be optimal in its own right.” In economic terms, what this meant was that if government was trying to devise some optimal long-term policy of subsidies to industry, then the policy in mid-course would not change from what had been planned at the start. Namely, if you “chopped off” the first half of the policy period, Chiang’s first arc, then the plan stayed the same, or, in Chiang’s words, “the remaining abridged sequence” would still be optimal.

  Nobel laureate Thomas Sargent produced the most comprehensive summary when he wrote (1997, 19), “Thus as time advances there is no incentive to depart from the original plan. This self-enforcing character of optimal policies is known as ‘Bellman’s principle of optimality’”. If you used dynamic programming and the principle of optimality did not hold, then your policy at any given moment would be “inconsistent”, meaning that if you drew up a lifetime plan, at every moment you would be tempted to reconsider your plans.

  While this may seem to be mathematical navel-gazing of the most self-indulgent sort, understanding the conditions under which the principle of optimality would hold became a key ingredient in the debate about the role of government in the economy. To appreciate the force of these ideas we need first to understand the intellectual debates that preceded them.

  Statistics and the social engineer

  RECIPES FOR INTER-TEMPORAL optimization are peppered with evocative terms. “Dynamic programming”, “policy function”, and “optimal control” conjure images of engineers guiding society with the help of mathematical equations. The imagery is apt. The inter-temporal analysis of economic phenomena has drawn engineers and physicists into economics. But beyond the technical talents required, dynamic mathematics attracts a particular sort of individual with a zest for control and order.

  Those who used these tools in economics focused at first on the best way for people to arrange their consumption over time. They then shifted their attention to the best means by which governments could intervene to redirect consumption and investment should these have stalled or gone astray, at least according to the best minds in government planning offices. These minds conceived of an economy guided in large part by the use of methods of dynamic optimization. As in most stories of good intentions, this one ends, if not on the road to hell, then at least in frustration at a paradox in the theory of government intervention.

  This paradox, called time-inconsistency, has its roots in the 1930s when Jan Tinbergen was inspired by Keynes to map out the structure of the economy. With such a map in hand, it seemed that government could wage campaigns against whatever injustice or inefficiency threatened the integrity of society. More specifically, Tinbergen wanted to know if there was some way to make a statistical link between government interventions such as a tax increase or a spending increase, and unemployment, inflation, and national wealth. With an understanding of these links you could start to build optimal plans for the economy.

  In the West, the notion that government could fine-tune an economy was prevalent at the time. Ideas for government intervention were being energized by the greatest experiment in social planning ever conceived. Russia was thought to be using central government control to engineer an ideal society. But western governments could not suit up commissars in leather trench coats and give them the discretion to put the fear of the gulag into the workforce. They needed a more subtle approach based on an understanding of the relationships between an economy’s inputs and outputs.

  Social engineers such as Tinbergen and his students in “econometrics” were to guide the process. Tinbergen’s notion was that if you could work out the basic structure of the economy, such as the manner in which income influences consumption, or money growth influences inflation, and how that influences unemployment, then government could change the money supply, or its own consumption, to stimulate private economic activity. The way in which you worked out the structure was by looking at how these quantities moved together, or “co-varied”, over time.

  Surprisingly, Keynes opposed this intellectual crusade. In a caustic 1939 critique of Tinbergen he argued that there were far too many forces influencing economic aggregates for econometricians ever to be able to disentangle and identify them. Keynes wrote mockingly (1939, 560), “Am I right in thinking that the method of multiple correlation analysis essentially depends on the economist having furnished, not merely a list of the significant causes, which is correct so far as it goes, but a complete list?” Nor might the effects of independent forces on, say, national income growth be constant.

  Perhaps more tellingly, Keynes raised the question of how to calculate the effect of profit on investment when investment may itself influence profit. Observing that investment doubles whenever profits double does not necessarily mean that rising profits cause investment to rise. Perhaps it is rising investment that increases profits. Or perhaps each has a positive influence on the other, so that the causality is difficult to sift out by simply looking at how the two variables move together. As Keynes wrote (1939, 561), “What happens if the phenomenon under investigation itself reacts on the factors by which we are explaining it? For example, when he investigates the fluctuations of investment, Prof. Tinbergen makes them depend on the fluctuations of profit. But what happens if the fluctuations of profit partly depend (as, indeed, they clearly do) on the fluctuations of investment? Prof. Tinbergen mentions the difficulty in a general way in a footnote to p. 17, where he says, without further discussion, that ‘one has to be careful.’ But is he?” Had the master turned on his apprentice? Perhaps.

  Keynes’ critique of Tinbergen is remarkable not simply because it outlined the statistical and conceptual problems that eventually shook the Tinbergen project, but also because Keynes was taking pot-shots at one of his most fervent and talented disciples. Perhaps this intellectual honesty is one of the reasons that Milton Friedman held Keynes in such high regard that in his television series Free to Choose, he considered Keynes’ early death a disaster. Only Keynes had the intellect and force of character to make his disciples aware of the need for some skepticism of his theories.

  Milton Friedman shared with Keynes a distrust of the analysis of economic time series. There were tricks you could use to make almost any two series appear correlated. In this spirit, econometrician David Hendry’s 1980 paper on nonsense correlation showed how you could easily play with time-series data to get an almost perfect correlation between monthly precipitation and economic growth (394-395). What few people realize is how closely Keynes and Friedman were allied intellectually in their critique of the Tinbergen project. Robert Leeson brought this into public consciousness in his 2000 book, where he writes on page 6, “Keynes and Friedman were equally perceptive about the dubious nature of mechanical econometrics and equally doubtful that such practices could resolve economic disagreements. Later, contrary to common perceptions, Tinbergen came to accept much of Keynes’ Critique and Keynes did not revise his objections to econometrics.”
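  Hendry’s own calculation is not reproduced here, but the flavour of such nonsense correlations is easy to convey: two entirely unrelated trending series will routinely show a correlation near one, as in this illustrative sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 480                                   # say, 40 years of monthly observations

# Two independent trending series built by cumulating random shocks; they
# stand in for something like cumulative rainfall and an output or price index.
rainfall_like = np.cumsum(rng.normal(0.5, 1.0, n))
economy_like = np.cumsum(rng.normal(0.5, 1.0, n))

# Both drift upward, so their correlation comes out close to +1 even though
# neither series has anything to do with the other.
print(np.corrcoef(rainfall_like, economy_like)[0, 1])
```

The point is not that rainfall drives growth but that shared trends alone can manufacture an impressive-looking correlation, which is precisely the kind of artefact Keynes and Friedman distrusted.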

  Despite these misgivings, the Tinbergen project of large-scale econometric modeling of the economy barreled ahead until the 1960s. Then, Edmund Phelps and Milton Friedman started noticing that these models were doing a questionable job of predicting the effect of government-induced inflation on unemployment. Inflation could cheapen the cost of labor if wages were written into long-term contracts. An hourly wage of $20 falls to $10 in real terms if the price level doubles, because the real purchasing power of a dollar halves. This would be a boon to employers, who would hire more labor and produce more output. But if workers understood the government’s game, then they could build inflation clauses into their contracts and thwart the intervention. Just because econometric research had found that inflation in the past seemed to be associated with a reduction in unemployment, this did not mean that a government could blithely manipulate inflation to decrease unemployment. The story can be told differently to implicate inflation in the deception of firms instead of workers, but the results are the same.

  Econometric models that did not take into account the manner in which people anticipated government action and its consequences were questionable as guides for the fine-tuning of the economy. A metaphor for this problem might be that of the donkey and the carrot. The “donkeymetrician” who has extensively studied the barnyard and its menagerie has noted a statistical relationship between the presence of carrots in a trough and the movement of donkeys towards that trough. Thinking he knows the association between carrots and donkey displacement, he harnesses a donkey to a plough and attaches a device to the donkey that dangles before it a carrot. At first the donkey is intrigued and plods forward dragging his load, but noticing the effort of pulling the plough and failing to gain recompense for his march towards the carrot, he reverts to his initial stubbornly passive state. By his mulish reactivity the donkey has thwarted the prescriptions of the barnyard statistician.

  Robert E. Lucas Jr. sharpened these insights in a 1976 paper on econometric policy evaluation. As mentioned earlier, if you wanted to predict the results of a government intervention on the economy, you not only had to know the basic relations between such macroeconomic variables as income and consumption, but you also had to know what people thought about those interventions.

  If a government was spending more today, then you had to know whether people thought the increase was temporary or permanent to judge how they would react. Econometric policy evaluation had to take into account not just the basic relations between macroeconomic variables, but also the manner in which people would react to government interventions. Because the parameters, or guideposts, of relations between macroeconomic variables were sensitive not just to economic fundamentals but also to economic policy, these parameters would change whenever the way of doing government business changed. That meant government had to reformulate its policies in light of these new relations between economic variables, which in turn meant that expectations changed again and government would be forced back to the drawing board.

  Eine kleine time paradox

  PERHAPS THIS APPROACH might converge to a stable equilibrium where the predictive models were consistent with expectations, but by then policy could have been modified to the point where it was far from the original vision. Policy conclusions built on mechanistic Tinbergen models that did not account for people’s reactions to policy could be misleading and, if taken to their logical limit, could produce policies greatly mutated from their original form.

 
