by Daniel Bell
In a less direct but equally important way, the changing relation between theory and empiricism is reflected in the formulation of government policy, particularly in the management of the economy. During the Great Depression of the 1930s, almost every government floundered, with no confident notion of what to do. In Germany in 1930, the socialist economists who determined government policy insisted that the depression would have to “run its course,” meaning that the “overproduction” which caused it, by their Marxist reasoning, would be sopped up. In England, there was a similar sense of hopelessness. Tom Jones, a confidant of Stanley Baldwin and a member of the Unemployment Assistance Board, noted in a letter to Abraham Flexner on March 1, 1934: “On the home front we have favourable if slight evidence of improved trade, but nothing that will make any dent in the unemployment figures. It is slowly but surely being realized by more and more that the great majority of these will never work again, and people like Lindsay of Balliol, T.J., and that ilk, are facing up to big and permanent developments of these occupational and training centres.”20
In the United States, Franklin D. Roosevelt tinkered with a wide variety of programs. Through the National Recovery Administration he set up an elaborate set of price-fixing and regulatory codes that resembled those of a corporate state. On the advice of George Warren, he manipulated the gold content of the dollar in order to raise the price level. To do something for the idle, he began a large campaign of public works. Few of these policies derived from any comprehensive theory about economic recovery; there was none at hand. As Rexford Tugwell, one of Roosevelt’s economic advisors, later observed, Roosevelt simply was trying one “magical formula” after another in the hope of finding some combination that would get the economy moving.21
It was largely through the joining of theory and policy that a better understanding of economic management was achieved. Keynes provided the theoretical justification for the intervention of government into the economy as the means of bridging the gap between saving and investment.22 The work of Kuznets, Hicks, and others in macro-economics gave government policy a firm framework through the creation of a system of national economic accounts—the aggregations of economic data and the fitting of such components as investment and consumption into product accounts and income accounts—so that one could measure the level of economic activity and decide which sectors needed government intervention.
The other major revolution in economics has been the attempt to use, for policy purposes, an increasingly rigorous, mathematically formalized body of economic theory, derived from the general equilibrium theory of Walras and developed in the last three decades by Leontief, Tinbergen, Frisch, and Samuelson.23 In the past, these concepts and tools—production functions, consumption functions, time preferences, and discounting—though powerful as abstractions, were remote from empirical content because there was no appropriate quantitative data for testing and applying this body of theory.24
The development of modern economics, in this respect, has been possible because of the computer. Computers have provided the bridge between the body of formal theory and the large data bases of recent years; out of this has come modern econometrics and the policy orientation of economics.25 One major area has been the models of interdependencies among industries, such as the input-output matrices developed by Wassily Leontief, which simplify the general equilibrium system of Walras and show, empirically, the transactions between industries, or sectors, or regions. The input-output matrix of the American economy is a grid of 81 industries, from Footwear and other Leather Products (1) to Scrap, Used, and Secondhand Goods (81), grouped into the productive, distributive, and service sectors of the economy. A dollar-flow table shows the distribution of the output of any one industry to each (or any) of the other 80 sectors. The input-output matrix shows the mix and proportions of inputs (from each or several industries) which go into a specific unit of output (in dollar value or physical production terms). An inverse matrix shows the indirect demand generated by a product as well as the direct demand. Thus, one can trace the effect of final consumer demand, say for automobiles, on the amount (or value) of iron ore, even though the automobile industry buys no iron ore directly. Or one can see what proportion of iron ore, as a raw material, goes into such final products as autos, ships, buildings, and the like. In this way, one can chart the changes in the nature of final demands in terms of the differential effects on each sector of the economy.26 Input-output tables are now the basic tools for national economic planning, and they have been applied in regional planning, through computerized models, to test the effect on trade of changes in population distributions.
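The mechanics of the inverse matrix can be sketched in a few lines. The three-sector economy, its sector names, and its coefficients below are hypothetical illustrations, not figures from the 81-industry table described above; the point is only how the Leontief inverse turns final demand into total output, indirect demand included.

```python
import numpy as np

# Hypothetical 3-sector technical-coefficients matrix.
# A[i, j] = dollars of input from sector i needed per dollar of output of sector j.
A = np.array([
    [0.1, 0.3, 0.0],   # mining -> (mining, manufacturing, services)
    [0.2, 0.2, 0.1],   # manufacturing
    [0.1, 0.2, 0.2],   # services
])

# Final consumer demand, in dollars, for each sector's output.
d = np.array([10.0, 50.0, 30.0])

# The Leontief inverse (I - A)^(-1) converts final demand into total output.
# It captures indirect demand: mining must produce for consumers AND for
# the other sectors that use its output as an input.
L = np.linalg.inv(np.eye(3) - A)
x = L @ d

# Total output exceeds final demand in every sector, because each sector
# must also supply the inter-industry purchases of the others.
print(x)
```

Note that `x` satisfies `x = d + A @ x`: total output equals final demand plus what the industries buy from one another, which is exactly the bookkeeping the dollar-flow table records.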
The large econometric models of the economy, such as the Brookings model discussed earlier, allow one to do economic forecasting, while the existence of such computer models now enables economists to do policy “experiments,” such as the work of Fromm and Taubman in simulating eight different combinations of fiscal and monetary policy for the period 1960–1962, in order to see which policy might have been the most effective.27 With these tools one can test different theories to see whether it is now possible to do “fine tuning” of the economy.
It would be technocratic to assume that the managing of an economy is only a technical offshoot of a theoretical model. The overriding considerations are political, and set the frames of decision. Yet the economic models indicate the boundaries of constraint within which one can operate, and they can specify the consequences of alternative political choices.28 The crucial point is that economic policy formulations, though not an exact art, now derive from theory, and often must find justification in theory. The fact that a Nixon administration in 1972 could casually accept the concept of a “full employment budget,” which sets a level of government expenditures as if there were full utilization of resources (thus automatically accepting deficit financing) is itself a measure of the degree of economic sophistication that government has acquired in the past thirty years.
The joining of science, technology, and economics in recent years is symbolized by the phrase “research and development” (R & D). Out of this have come the science-based industries (computers, electronics, optics, polymers) which increasingly dominate the manufacturing sector of the society and which provide the lead, in product cycles, for the advanced industrial societies. But these science-based industries, unlike industries which arose in the nineteenth century, are primarily dependent on theoretical work prior to production. The computer would not exist without the work in solid-state physics initiated forty years ago by Felix Bloch. The laser came directly out of I.I. Rabi’s research thirty years ago on molecular optical beams. (One can say, without being overly facile, that U.S. Steel is the paradigmatic corporation of the first third of the twentieth century, General Motors of the second third of the century, and IBM of the final third. The contrasting attitudes of the corporations toward research and development are a measure of these changes.)
What is true of technology and economics is true, albeit differentially, of all modes of knowledge: the advances in a field become increasingly dependent on the primacy of theoretical work, which codifies what is known and points the way to empirical confirmation. In effect, theoretical knowledge increasingly becomes the strategic resource, the axial principle, of a society. And the university, research organizations, and intellectual institutions, where theoretical knowledge is codified and enriched, become the axial structures of the emergent society.
The planning of technology. With the new modes of technological forecasting, my fourth criterion, the post-industrial societies may be able to reach a new dimension of societal change, the planning and control of technological growth.
Modern industrial economies became possible when societies were able to create new institutional mechanisms to build up savings (through banks, insurance companies, equity capital through the stock market, and government levies, i.e. loans or taxes) and to use this money for investment. The ability consistently to re-invest annually at least 10 percent of GNP became the basis of what W.W. Rostow has called the “take-off” point for economic growth. But a modern society, in order to avoid stagnation or “maturity” (however that vague word is defined), has had to open up new technological frontiers in order to maintain productivity and higher standards of living. If societies become more dependent on technology and new innovation, then a hazardous “indeterminacy” is introduced into the system. (Marx argued that a capitalist economy had to expand or die. Later Marxists, such as Lenin or Rosa Luxemburg, assumed that such expansion necessarily had to be geographical; hence the theory of imperialism. But the greater measure of expansion has been capital-intensive or technological.) Without new technology, how can growth be maintained? The development of new forecasting and “mapping techniques” makes possible a novel phase in economic history—the conscious, planned advance of technological change, and therefore the reduction of indeterminacy about the economic future. (Whether this can actually be done is a pregnant question, discussed in Chapter 3.)
But technological advance, as we have learned, has deleterious side effects, with second-order and third-order consequences that are often overlooked and certainly unintended. The increasing use of cheap fertilizers was one of the elements that created the revolution in agricultural productivity, but the run-off of nitrates into the rivers has been one of the worst sources of pollution. The introduction of DDT as a pesticide saved many crops, but also destroyed wildlife and birds. In automobiles, the gasoline engine was more effective than steam, but it has smogged the air. The point is that the introduction of technology was uncontrolled, and its initiators were interested only in single-order effects.
Yet none of this has to be. The mechanisms of control are available as well. As a number of studies by a panel of the National Academy of Sciences have shown, if these technologies had been “assessed” before they were introduced, alternative technologies or arrangements could have been considered. As the study group reported:
The panel believes that in some cases an injection of the broadened criteria urged here might have led, or might in the future lead, to the selection or encouragement of different technologies or at least modified ones—functional alternatives with lower “social costs” (though not necessarily lower total costs). For example, bioenvironmental rather than primarily chemical devices might have been used to control agricultural pests, or there might have been design alternatives to the purely chemical means of enhancing engine efficiency, or mass transit alternatives to further reliance upon the private automobile.29
Technology assessment is feasible. What it requires is a political mechanism that will allow such studies to be made and set up criteria for the regulation of new technologies.30 (This question is elaborated in Chapter 4.)
The rise of a new intellectual technology. “The greatest invention of the nineteenth century,” Alfred North Whitehead wrote, “was the invention of the method of invention. A new method entered into life. In order to understand our epoch, we can neglect all the details of change, such as railways, telegraphs, radios, spinning machines, synthetic dyes. We must concentrate on the method itself; that is the real novelty, which has broken up the foundations of the old civilization.”31
In the same spirit, one can say that the methodological promise of the second half of the twentieth century is the management of organized complexity (the complexity of large organizations and systems, the complexity of theory with a large number of variables), the identification and implementation of strategies for rational choice in games against nature and games between persons, and the development of a new intellectual technology which, by the end of the century, may be as salient in human affairs as machine technology has been for the past century and a half.
In the eighteenth and nineteenth centuries, scientists learned how to handle two-variable problems: the relationship of force to distance in objects, of pressure and volume in gases, of current versus voltage in electricity. With some minor extensions to three or four variables, these are the bedrock for most modern technology. Such objects as telephones, radio, automobile, airplane, and turbine are, as Warren Weaver puts it, problems of “complex simplicity.”32 Most of the models of nineteenth- and early-twentieth-century social science paralleled these simple interdependencies: capital and labor (as fixed and variable capital in the Marxist system; as production functions in neo-classical economics), supply and demand, balance of power, balance of trade. As closed, opposed systems, to use Albert Wohlstetter’s formulation, they are analytically most attractive, and they simplify a complex world.
In the progression of science, the next problems dealt with were not those of a small number of interdependent variables, but the ordering of gross numbers: the motion of molecules in statistical mechanics, the rates of life expectancies in actuarial tables, the distribution of heredities in population genetics. In the social sciences, these became the problems of the “average man”—the distributions of intelligence, the rates of social mobility, and the like. These are, in Warren Weaver’s term, problems of “disorganized complexity,” but their solutions were made possible by notable advances in probability theory and statistics which could specify the results in chance terms.
The major intellectual and sociological problems of the post-industrial society are, to continue Weaver’s metaphor, those of “organized complexity”—the management of large-scale systems, with large numbers of interacting variables, which have to be coordinated to achieve specific goals. It is the hubris of the modern systems theorist that the techniques for managing these systems are now available.
Since 1940, there has been a remarkable efflorescence of new fields whose results apply to problems of organized complexity: information theory, cybernetics, decision theory, game theory, utility theory, stochastic processes. From these have come specific techniques, such as linear programming, statistical decision theory, Markov chain applications, Monte Carlo randomizing, and minimax solutions, which are used to predict alternative optimal outcomes of different choices in strategy situations. Behind all this is the development in mathematics of what Jagjit Singh calls “comprehensive numeracy.”33 Average properties, linear relationships, and the absence of feedback are simplifications that were used earlier to make the mathematics manually tractable. The calculus is superbly suited to problems of a few variables and rates of change. But the problems of organized complexity have to be described in probabilities—the calculable consequences of alternative choices, which introduce constraints either of conflict or cooperation—and to solve them one must go beyond classical mathematics. Since 1940, the advances in probability theory (once intuitive and now rigorous and axiomatic), sophisticated set theory, and game and decision theory have made further advances in application theoretically possible.
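One of these techniques can be made concrete in a few lines. The sketch below illustrates Monte Carlo randomizing—approximating a quantity by averaging over many randomized trials rather than deriving it analytically. The pi-estimation problem is a standard textbook illustration chosen for brevity, not an example drawn from the text.

```python
import random

random.seed(0)  # fix the random sequence so the run is reproducible

# Estimate pi by sampling random points in the unit square and counting
# the fraction that fall inside the quarter circle of radius 1. That
# fraction approximates pi/4, the ratio of the two areas.
trials = 100_000
inside = sum(
    1 for _ in range(trials)
    if random.random() ** 2 + random.random() ** 2 <= 1.0
)
pi_estimate = 4 * inside / trials
print(pi_estimate)  # close to 3.14159 for large trial counts
```

The same machinery—replace the quarter circle with a queueing network, an inventory policy, or a combination of fiscal and monetary settings—is what makes the policy “experiments” described above computationally feasible.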
I have called the applications of these new developments “intellectual technology” for two reasons. Technology, as Harvey Brooks defines it, “is the use of scientific knowledge to specify ways of doing things in a reproducible manner.”34 In this sense, the organization of a hospital or an international trade system is a social technology, as the automobile or a numerically controlled tool is a machine technology. An intellectual technology is the substitution of algorithms (problem-solving rules) for intuitive judgments. These algorithms may be embodied in an automatic machine or a computer program or a set of instructions based on some statistical or mathematical formula; the statistical and logical techniques that are used in dealing with “organized complexity” are efforts to formalize a set of decision rules. The second reason is that without the computer, the new mathematical tools would have been primarily of intellectual interest, or used, in Anatol Rapoport’s phrase, with “very low resolving power.” The chain of multiple calculations that can be readily made, the multivariate analyses that keep track of the detailed interactions of many variables, the simultaneous solution of several hundred equations—these feats, which are the foundation of comprehensive numeracy, are possible only with a tool of intellectual technology, the computer.
What is distinctive about the new intellectual technology is its effort to define rational action and to identify the means of achieving it. All situations involve constraints (costs, for example) and contrasting alternatives. And all action takes place under conditions of certainty, risk, or uncertainty. Certainty exists when the constraints are fixed and known. Risk means that a set of possible outcomes is known and the probabilities for each outcome can be stated. Uncertainty is the case when the set of possible outcomes can be stipulated, but the probabilities are completely unknown. Further, situations can be defined as “games against nature,” in which the constraints are environmental, or “games between persons,” in which each person’s course of action is necessarily shaped by the reciprocal judgments of the others’ intentions.35 In all these situations, the desirable action is a strategy that leads to the optimal or “best” solution; i.e. one which either maximizes the outcome or, depending upon the assessment of the risks and uncertainties, tries to minimize the losses. Rationality can be defined as judging, between two alternatives, which one is capable of yielding that preferred outcome.36
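The distinction between decision under risk and decision under uncertainty can be sketched directly. The strategies, states of nature, and payoff numbers below are hypothetical; the sketch shows only the two decision rules—maximize expected payoff when probabilities are known, and fall back on the maximin (worst-case) rule when they are not.

```python
# Hypothetical payoff table for a "game against nature":
# each strategy's payoff under each of three states of nature (A, B, C).
payoffs = {
    "expand":   [90, 30, -20],
    "maintain": [50, 40, 10],
    "contract": [20, 25, 15],
}

# Under RISK the probabilities of the states are known, so the rational
# rule is to maximize the expected payoff.
probs = [0.5, 0.3, 0.2]
expected = {
    s: sum(p * v for p, v in zip(probs, vals))
    for s, vals in payoffs.items()
}
best_under_risk = max(expected, key=expected.get)

# Under UNCERTAINTY the probabilities are unknown, so one hedging rule
# is maximin: pick the strategy whose worst case is least bad,
# minimizing the maximum possible loss.
best_under_uncertainty = max(payoffs, key=lambda s: min(payoffs[s]))

print(best_under_risk, best_under_uncertainty)  # → expand contract
```

The two rules disagree here by construction: the aggressive strategy has the best expected payoff but the worst downside, which is precisely why the assessment of risk and uncertainty, and not the payoff table alone, determines what counts as the rational choice.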
Intellectual technology makes its most ambitious claims in systems analysis. A system, in this sense, is any set of reciprocal relationships in which a variation in the character (or numerical value) of one of the elements will have determinate—and possibly measurable—consequences for all the others in the system. A human organism is a determinate system; a work-group whose members are engaged in specialized tasks for a common objective is a goal-setting system; a pattern of bombers and bases forms a variable system; the economy as a whole is a loose system.