by Daniel Bell
With what “state of confidence” can we accept the method and these predictions? The major difficulty lies not in any single prediction but in the lack of defined contexts. Each prediction is made as a single instance, isolated from the others, though all participants easily recognize that the realization of any one prediction is not only dependent on many others but, even more, is dependent on the state of the nation itself. The implicit premise underlying all these predictions is that the context of the United States and the world will not change. But the social systems and the relationships between them are bound to change, and these changes, more than the technical feasibility of any of the individual breakthroughs, will determine the possibility of these breakthroughs being realized. In short, if forecasting is to advance, it has to be within a system context that specifies the major social, political, and economic relationships that will obtain at any given time. In the Rand use of the Delphi technique, what we are given is a set of possibilities, but the way in which these possibilities are combined depends upon the system in which they are embedded. And the art—or science—of forecasting can be extended only when we are able to advance in the creation of models of the social system itself.
Matrices and contexts. The effort to provide an orderly way of looking at possibilities underlies most of the work now going on under the headings of “morphological research” and “relevance trees.” In principle these amount to a systematic effort to explore all possible solutions of a large-scale problem. What is novel is the means of ordering and the use of mathematical techniques to assign values to the different parameters of the problem.57
The morphological method was developed by Fritz Zwicky, the Swiss astronomer at the Mount Wilson and Mount Palomar observatories, while he was engaged in rocket research at the Aerojet Engineering Corporation in California.58
In his early work on rockets and jet engines, Zwicky constructed a morphological box setting forth the different combinations of eleven classes of variables, each class with its own range of possibilities (e.g., propellant state would be a class, with gaseous, liquid, and solid as the three possibilities; motion another class, with translatory, rotatory, and oscillatory as the possibilities); the box yielded a total of 25,344 possible kinds of engines. An earlier evaluation, in 1943, on the basis of fewer parameters, derived 576 possibilities, which included, however, the then-secret German pulse-jet-powered V-1 aerial bomb and the liquid-fueled V-2 rocket, at a time when Professor Lindemann, Churchill’s scientific advisor, failed to recognize the potential of the V-2, even when he was shown photographs, because he rejected the idea of liquid propellants.
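The combinatorics can be made concrete with a minimal sketch of a Zwicky-style morphological box. The parameter classes and options below are illustrative stand-ins, not Zwicky's actual eleven classes; the point is only that the candidate designs are the Cartesian product of the classes, which is why the counts multiply so quickly.

```python
# A minimal sketch of a morphological box: every combination of one option
# per parameter class is a candidate design. The classes and options below
# are illustrative, not Zwicky's actual eleven classes.
from itertools import product

classes = {
    "propellant_state": ["gaseous", "liquid", "solid"],
    "motion": ["translatory", "rotatory", "oscillatory"],
    "thrust_source": ["internal", "external"],
    "oxidizer": ["air-breathing", "self-contained"],
}

# Each element of the Cartesian product is one candidate engine concept.
designs = list(product(*classes.values()))
print(len(designs))  # 3 * 3 * 2 * 2 = 36 candidate concepts

# With eleven classes the product multiplies quickly; Zwicky's full box
# yielded 25,344 combinations to be screened for feasibility.
for design in designs[:3]:
    print(dict(zip(classes.keys(), design)))
```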
As Jantsch has observed, “the full-scale application of [the morphological scheme] as it has been practiced by Zwicky in rocket and jet fuel development, apparently has had considerable success and was decisive in producing an unbiased approach in the early stages.” The idea of morphological charts, as Jantsch points out, is used by a number of companies to map out, or even to block, possible future inventions by trying to patent, in a rather abstract way, combinations of basic parameters. (“For example, one could observe a ‘rush’ for patents which would fit into hitherto unpatented fields of a coolant/moderator chart of nuclear reactors.”)59
The need to relate forecasting to specific objectives at different levels has given rise to the idea of “relevance trees,” sometimes simply called decision trees. The concept of a relevance tree was first proposed by Churchman and Ackoff in 1957,60 and it too is an ordering device, like a company organization chart, for mapping out the program elements of a task and relating them to specific objectives. What is novel, again, is the effort to assign weights and scores to the different functional subsystems in order to see which patterned combinations provide the best payoffs.
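A minimal sketch may make the weighting idea concrete. The convention used here (each node's weight expresses its relevance to its parent, and a task's overall score is the product of the weights along its path to the root) is one common way of scoring such trees, not necessarily Churchman and Ackoff's exact procedure; the objectives, tasks, and weights are invented for illustration.

```python
# A minimal sketch of scoring a relevance tree. Sibling weights are meant
# to sum to 1.0; a leaf task's overall relevance is the product of the
# weights on its path to the root. All names and weights are illustrative.

tree = {
    "national objective": {
        "weight": 1.0,
        "children": {
            "propulsion program": {
                "weight": 0.6,
                "children": {
                    "new fuel research": {"weight": 0.7, "children": {}},
                    "engine redesign": {"weight": 0.3, "children": {}},
                },
            },
            "guidance program": {"weight": 0.4, "children": {}},
        },
    }
}

def leaf_scores(nodes, path_weight=1.0):
    """Return {leaf_task: relevance} by multiplying weights along each path."""
    scores = {}
    for name, node in nodes.items():
        w = path_weight * node["weight"]
        if node["children"]:
            scores.update(leaf_scores(node["children"], w))
        else:
            scores[name] = w
    return scores

# Ranking the leaf tasks shows which combinations promise the best payoff.
for task, score in sorted(leaf_scores(tree).items(), key=lambda kv: -kv[1]):
    print(f"{task}: {score:.2f}")
# new fuel research: 0.42, guidance program: 0.40, engine redesign: 0.18
```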
While the “relevance tree” itself is simply a mapping device, the forecasting arises in the effort to deal with the unfolding of problems and new technologies over a five-, ten-, or fifteen-year sequence. North American Aviation’s Autonetics division in Anaheim, California, has a “tree” called SCORE (Select Concrete Objectives for Research Emphasis) to relate objectives five to fifteen years ahead to specific strategies. The most prominent example of a decision tree is the Planning-Programming-Budgeting System (PPBS) used by the Defense Department.
Norbert Wiener once defined a system as “organized complexity.” But when one has, as in the NASA Tree, 301 tasks, 195 systems, 786 sub-systems, and 687 functional elements, the job of keeping track of these, of evaluating performance, of calculating the effect of new technologies in one system on all the others, is quite obviously an awesome one. What these matrices and morphology schemes attempt to do, then, is to provide some charts for these relationships.
Diffusion times. Paul Samuelson has made the fundamental observation that the output that can be obtained from a given stock of factors “depends on the state of technology” existing at the time.61 Some knowledge of the direction and spread of technologies therefore is crucial for the survival of any enterprise. But the important point is that technological forecasting rarely predicts, nor can it predict, specific inventions. Inventions, like political events, are subject to surprise, and often represent an imaginative breakthrough by the investigator. No one predicted the transistor of Shockley or the laser of Townes. Most technological forecasts assume invention—this is the crucial point—and then go on to predict the rate of extension through new escalations as in the case of the envelope curves, or the rate of diffusion as the new invention spreads throughout an industry. Our chief economic method of technological forecasting is the rate of diffusion.
What is true is that the rate at which technology is diffused through our economy has accelerated somewhat in the past seventy-five years, and this is one measure of the popular conception of the increase in the rate of change. A study by Frank Lynn for the President’s Commission on Automation reported:
• The average lapsed time between the initial discovery of a new technological innovation and the recognition of its commercial potential decreased from thirty years, for technological innovations introduced during the early part of this century (1880-1919), to sixteen years, for innovations introduced during the post-World War I period, and to nine years for the post-World War II period.
• The time required to translate a basic technical discovery into a commercial product or process decreased from seven to five years during the sixty-to-seventy year time period investigated.62
In effect, the incubation time for new products has decreased drastically, principally because of the growth of research and development; but the marketing time, while shorter, has not decreased so substantially. What does stand out is Lynn’s conclusion, stated “with reasonable confidence,” that “those technological innovations which will have a significant impact on our economy and society during the next five years have already been introduced as commercial products, and those technological innovations that will have a significant social and economic impact during the 1970-1975 period are now in a readily identifiable state of commercial development.” It is on this basis that social and technological planning is possible.
Although that is a general conclusion, innovation and diffusion do vary considerably by sector and industry. In 1961, Edwin Mansfield studied how rapidly the use of twelve innovations spread from enterprise to enterprise in four industries—bituminous coal, iron and steel, brewing, and railroads. From the date of the first successful commercial application, it took twenty years or more for all major firms to install centralized traffic control, car retarders, by-product coke ovens, and continuous annealing. Only in the case of the pallet-loading machine, the tin container, and continuous mining did it take ten years or less for their installation by all the major firms. In his study, Lynn concluded that the rate of diffusion for technological innovations with consumer applications is nearly twice that of innovations with industrial applications.
These studies have been ex post facto. Some efforts have been made to forecast the rate and direction of the diffusion of technology. Everett M. Rogers, in his Diffusion of Innovations,63 uses historical diffusion curves (dollar volume and number of units in use are the measures) plotted against time in order to identify characteristic curve shapes. Mansfield has constructed a simple model64 built largely around one central idea: the probability that a firm will introduce a new technique increases with the proportion of firms already using it and the profitability of doing so, but decreases with the size of the investment required. So far, these are all experimental.
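Mansfield's central idea lends itself to a simple simulation. The sketch below is only a schematic rendering of that idea, not his fitted econometric model; the coefficients, inputs, and the particular hazard function are illustrative assumptions.

```python
# A minimal sketch of Mansfield's central idea: a hold-out firm's chance of
# adopting rises with the fraction of firms already using the innovation and
# with its profitability, and falls with the size of the required investment.
# Coefficients and inputs are illustrative, not Mansfield's estimated values.

def diffuse(n_firms, profitability, investment, years,
            b=1.0, c=0.5, initial_adopters=1):
    adopters = float(initial_adopters)
    history = [adopters]
    for _ in range(years):
        # Per-year adoption hazard for each firm not yet using the technique.
        hazard = (b * profitability - c * investment) * adopters / n_firms
        hazard = max(0.0, min(1.0, hazard))
        adopters += (n_firms - adopters) * hazard
        history.append(adopters)
    return history

# A profitable, cheap innovation traces the familiar S-shaped (logistic)
# path; raising the investment term stretches the curve over more years.
for year, count in enumerate(diffuse(30, profitability=1.5,
                                     investment=1.0, years=10)):
    print(f"year {year}: {count / 30:.0%} of firms")
```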
With the question of diffusion one passes over from technological to economic and social forecasting, for the spread of a new invention or product clearly depends not only on its technical efficiency but on its cost, its appeal to consumers, its social costs, its by-products, and the like; the introduction of any new invention thus depends upon the constraints of the economy, the policies of government, and the values and social attitudes of the customers.
Technology, in a sense, is a game against nature, in which man’s effort to wrest the secrets from nature comes up largely against the character of physical laws and man’s ingenuity in mapping those hidden paths. But economic and social life is a game between persons in which forecasting has to deal with variable strategies, dispositions, and expectations, as individuals seek, either cooperatively or antagonistically, to increase their individual advantage.
All of this takes place within social limits which it is the forecaster’s task to define. No large-scale society changes with the flick of a wrist or the twist of a rhetorical phrase. There are the constraints of nature (weather and resources), of established customs, habits, and institutions, and simply of the recalcitrance of large numbers. Those, for example, who made sweeping predictions about the radical impact of automation, based on a few spectacular examples, forgot the simple fact that even when a new industry, such as data processing or numerical control, is introduced, the impact of industries whose sales quickly mount even to several billion dollars is small compared to an economy which annually generates a trillion dollars in goods.
The outer limit of a society is its economic growth rate, and theorists such as Robert M. Solow of MIT (whose development of an economic growth model, now called, with heavy responsibility, the Ricardo-Marx-Solow model, is one of the accomplishments of contemporary economics) have argued that each economy has its “natural” growth rate, which is compounded of the rate of population increase and the rate of technological progress (the latter being defined as the rate of productivity, the rate of new inventions, the improvement in quality of organization, education, etc.).65 Because of existing institutional arrangements (patterns of capital mobilization and spending, proportions of income used by consumers, etc.) and the large magnitudes of manpower, resources, and GNP in a society, even the revolutionary introduction of new technologies (as in agriculture) will not increase the total productivity rate markedly. Some societies have a higher growth rate than others because of a later start and the effort to catch up. For short periods, advanced economies can speed up their growth rate within limits, but a shift in the “production function” toward a greater utilization of capital leads later to replacement costs, lower marginal efficiency, and a flattening out of the rise until the “natural” rate is resumed. According to studies by Edward Denison, for example, the “natural” rate of growth of the U.S. economy, because of institutional arrangements and technology, is about 3 percent a year.66 Eventually—at least logically, though perhaps not sociologically—to the extent that technology, as part of the common fund of knowledge, is available to all societies, the rates of increase of all economies may tend to even out. But within any appreciable frame of time, the limit that frames the forecasts of any economist or sociologist is the growth rate of an economy—which determines what is available for social use—and this is the baseline for any social forecasting.
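Read as growth accounting, the “natural” rate in this passage is simply additive, as the following worked illustration shows. The decomposition is the standard reading of the Solow framework; the figures are illustrative, chosen only to match Denison's roughly 3 percent, not taken from his studies.

```latex
% The "natural" growth rate as the sum of the rate of population (labor
% force) increase n and the rate of technological progress \lambda.
% Illustrative figures only.
g_{\text{natural}} = n + \lambda
  \approx 1\%\ (\text{labor force}) + 2\%\ (\text{productivity})
  = 3\%\ \text{per year}
```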
We have said that technology is one axis of the post-industrial society; the other axis is knowledge as a fundamental resource. Knowledge and technology are embodied in social institutions and represented by persons. In short, we can talk of a knowledge society. What are its dimensions?
The Structure of the Knowledge Society
The post-industrial society, it is clear, is a knowledge society in a double sense: first, the sources of innovation are increasingly derivative from research and development (and more directly, there is a new relation between science and technology because of the centrality of theoretical knowledge); second, the weight of the society—measured by a larger proportion of Gross National Product and a larger share of employment—is increasingly in the knowledge field.
Fritz Machlup, in his heroic effort to compute the proportion of GNP devoted to the production and distribution of knowledge, estimated that in 1958 about 29 percent of the existing GNP, or $136,436 million, was spent for knowledge.67 (This total as distributed is shown in Table 3-1.) The definitions Machlup employs, however, are broad indeed. Education, for example, includes education in the home, on the job, and in the church. Communication media include all commercial printing, stationery, and office supplies. Information machines include musical instruments, signalling devices, and typewriters. Information services include moneys spent for securities brokers, real-estate agents, and the like. To that extent, the figure of 29 percent of GNP, which has been widely quoted, is quite misleading. Especially because of student attacks on Clark Kerr, who used this figure in The Uses of the University,68 the phrases “knowledge industry” and “knowledge factory” have acquired a derogatory connotation.
Any meaningful figure about the “knowledge society” would be much smaller. The calculation would have to be restricted largely to research (the development side of R & D has been devoted largely to missiles and space, and is disproportionate to the total), higher education, and the production of knowledge, as I have defined it, as an intellectual property, which involves valid new knowledge and its dissemination. If one takes education alone, defining it more narrowly than Machlup as direct expenditures for public and private schools, the singular fact is that the proportion of GNP spent on education in 1969 was more than double that of 1949. In 1949, the expenditure on education was 3.4 percent of GNP (in 1939, it was 3.5 percent, and in 1929, it was 3.1 percent). In 1969, the figure had risen to 7.5 percent. This doubling can be taken as one indicator of the importance of education. (Other, limited indicators can be seen in the tables in the following sub-sections.)
TABLE 3-1
Distribution of Proportion of Gross National Product
Spent on Knowledge, 1958
SOURCE: Data from Fritz Machlup, The Production and Distribution of Knowledge in the United States, © 1962 by Princeton University Press, pp. 360-361, arranged in tabular form, by permission.
DIMENSIONS OF THE KNOWLEDGE CLASS69
In the Republic of Plato, knowledge was vouchsafed only to one class, the philosophers, while the rest of the city was divided between warriors (guardians) and artisans. In the Scientific City of the future there are already foreshadowed three classes: the creative elite of scientists and the top professional administrators (can one call them the “new clerisy,” in Coleridge’s term?); the middle class of engineers and the professorate; and the proletariat of technicians, junior faculty, and teaching assistants.
The metaphor can be carried too far, yet there is already an extraordinary differentiation within the knowledge society, and the divisions are not always most fruitfully explored along traditional class lines of hierarchy and dominance, important as these may be. There are other sociological differences as well. In the social structure of the knowledge society there is, for example, the deep and growing split between the technical intelligentsia, who are committed to functional rationality and technocratic modes of operation, and the literary intellectuals, who have become increasingly apocalyptic, hedonistic, and nihilistic. There are divisions between professional administrators and technical specialists, which sometimes result in dual structures of authority—for example, in hospitals and research laboratories. In the universities there are divisions between deans and faculty, and within the faculty between research and teaching. In the world of art there are complex relations between museum directors, curators, magazines, critics, dealers, patrons, and artists. The performing arts have different striations. Any further exploration of the knowledge class would have to examine in detail these varying patterns of vertical stratification and differentiation.
Conventionally, in social structure analysis, we begin with population. The gross figures are startling indeed. If one assumes, as does Abraham Moles, that by 1972, 5 percent of the population of the “advanced” countries and 3 percent of the total world population would be involved in intellectual work, then the global population of the Scientific City of tomorrow would number a hundred million persons!70
World comparisons are difficult, since we have no figures for the past; the figure just cited was intended only to indicate the change in scale that the growth of an intellectual class has produced. One of our tasks, however, is to provide baselines for the future, and here we shall restrict ourselves to American data, and to the census categories that allow us to make some comparisons over time and some projections for the future.
The chief census category we are concerned with is that of “professional and technical persons.” Between 1947 (the baseline after World War II) and 1964, the employment of professional and technical workers in the United States more than doubled, rising from about 3.8 million to over 8.5 million workers. By 1975, manpower requirements for this occupational group are expected to rise by more than half, to 13.2 million persons. If one assumes a total labor force at that time of 88.7 million, then the professional and technical group would make up 14.9 percent of the working population. If one adds, as well, an estimated 9.2 million managers, officials, and proprietors, then the total group would make up 25.3 percent of the working population. In effect, one out of every four persons would have had two to four years of college—the educational average for the group—and this 25.3 percent would comprise the educated class of the country.71
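The percentages follow directly from the figures given in the passage, as a quick check confirms:

```latex
% Shares of the projected 1975 labor force, computed from the text's figures.
\frac{13.2}{88.7} \approx 0.149 \;(14.9\%\ \text{professional and technical});
\qquad
\frac{13.2 + 9.2}{88.7} = \frac{22.4}{88.7} \approx 0.253 \;(25.3\%\ \text{including managers})
```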