Tales of a New America
The race with Japan, rather than being a race for trade dominance, could be a contest for adding value to a broadly expanding world productive system. In this type of race, the faster both contestants move ahead, the more likely it is that both will win, along with the other inhabitants of the globe. As each competitor seeks to add ever more value to what it sells worldwide, the planet’s wealth is enhanced. New products meet people’s wants at ever lower costs.
If America could choose, it would do well to choose this contest for adding value over either the arms race with the Soviets or the mercantilist race with Japan. So far we have been able to avoid making such a choice. Until recently, we were able to perform well in all contests simultaneously, because all depended on our technological prowess. The complex weapons systems of the arms race drew upon the same body of knowledge and skills as the complex technologies of the competitiveness race. By increasing our technological capabilities in arms, we became more effective commercial competitors. But this is becoming less the case. The arms race with the Soviets is exacerbating a mercantilist race with Japan, and simultaneously jeopardizing our ability to add value to the goods and services we sell worldwide. This in turn is undermining our defensive capacities. The result is another application of the Boomerang Principle.
2
Americans have always loved technology. Nobody commands more respect than the maker of a beautiful machine; nobody draws more envy than its possessor. In the two decades following World War II, Americans led the world in technology. We developed or radically improved precision gadgets such as semiconductors, microprocessors, lasers, sensing devices, and the software that tells the gadgets how to perform. High technologies became the building blocks both for advanced weapons systems and for commercial products.
Most advances in technology evolve from what has come before. The dramatic discovery or groundbreaking invention may win a prize and bring fame to its progenitor. But such breakthroughs generally have less practical effect than the gradual accretion of improvements in existing technologies. Scientists, engineers, and tinkerers of all sorts mostly progress by applying their understanding of current technologies to new problems. They rearrange old solutions in new ways, make incremental improvements in previous ways of doing things, and try out new variations on old themes. Experience—the breadth and depth of familiarity with the technological endeavor—is what determines the technological fecundity of a culture. Our ingenuity in the postwar period, so crucial to our arms and our products, was the heritage of previous technological experience. So it is now, and will continue to be in the future. Thus a great deal hinges on the extent to which, and the ends toward which, we develop and apply our scientific and technical resources.
America’s postwar technological leadership was in part a product of commercial research and development, as our companies struggled to present consumers with ever more sophisticated televisions, refrigerators, automobiles, and other necessities of American life. But much of the experience that drove continued progress—the more important portion—originated in collective efforts to do wildly ambitious things, such as build an atom bomb, construct intercontinental ballistic missiles, and get men to the moon. Large projects like these would never be undertaken by individual companies; they are too risky and expensive. The knowledge they generate, moreover, inevitably leaks out to others who have not shared in the financing. Only governments have the farsightedness or the foolishness to take on such endeavors. Consider the National Defense Education Act, the space shuttle, and the billions of dollars the government has poured into semiconductors, jet engines, composite materials, computers, robots, and advanced manufacturing systems. In the late 1950s, for example, IBM developed magnetic core memories and the solid-state calculating machine—precursors to the modern computer—while under contract to the government. The government remained for some time the major purchaser of computers, providing the only commercial motive for continued development. When Fairchild and Texas Instruments invented the integrated circuit in the late 1950s, few commercial purchasers could afford the $120 price. But as the government bought millions of the chips for missile guidance systems and the moon project, the industry learned how to make them much more cheaply, and commercial applications blossomed.1 In any other nation, this governmental function would be called “technology development,” and openly understood as a legitimate activity. Here, such public initiatives undertaken with the goal of economic advance per se would be ideologically suspect. We have preferred to act as if we were trying to do something else; the chips and the robots come as a pleasant bonus.
For much of the postwar era, then, our technological prowess granted us a comfortable lead in both the military and economic contests. Improvements in the technologies of the arms race (and the closely related moon race) yielded experience that helped us add commercial value to what we sold to the rest of the world.
3
In 1986 the United States spent more on research and development than Britain, France, West Germany, and Japan combined. Ironically, in an antigovernment era, the direction of high technology in America increasingly was being planned and executed by the public sector. That year the United States government funded almost 50 percent of all research undertaken by American companies, or about twice the proportion of private research funded by the Japanese government. The U.S. government paid for two thirds of all the basic research in the country. The voices that so readily decry any hint of central government planning were, in this case, conspicuously and lamentably silent.2
The great bulk of government-funded research was sponsored by the Department of Defense, particularly once the great military buildup got underway. The perceived Soviet threat provoked a call to action reminiscent of that which had launched the space program after Sputnik. In the late 1970s, only about half of federally sponsored research was defense-related; by 1986 it was 80 percent. Research and development had become the fastest growing major category in the defense budget.3
Beginning in the mid-1980s a dominant factor in American technology development, and potentially the largest research and development project in our history, was the Strategic Defense Initiative, or “Star Wars.” This was the Reagan administration’s plan for throwing up a shield around the United States composed of ground- and space-based weapons that would destroy any intercontinental ballistic missile heading our way. The amount initially budgeted for Star Wars research was not that high relative to the nation’s total research budget—the entire effort was estimated to cost $26 billion over five years—but it covered the cutting edge. Measured by the numbers of scientists and engineers required, and by the technological challenges involved, Star Wars promised to be more significant than the Manhattan Project or the Apollo moon program.
The military effects of Star Wars were distant, contingent, and in the opinion of many experts highly debatable as the program got underway. But few knowledgeable observers disputed that its technological and economic effects would be profound. The technology used to create X-ray laser weapons could be adapted to super-microscopes or machines for unblocking arteries; the know-how garnered in designing particle accelerators could be applied to irradiating food products; sensors for tracking enemy rockets could be used for commercial optics and radar. Spinoffs and applications as yet unimaginable could create whole new generations of telecommunications and computer-related products that could underpin information-processing systems in the next century. Our European allies understood these implications. They were skeptical that Star Wars would work as advertised, that it could ever scour the skies clean of Soviet rockets, and they were anxious that the program would escalate the arms race. But they were seduced into joining the effort by the prospect of picking up expertise in the technologies involved, technologies that would be important to their future economic competitiveness.4
The Pentagon’s high-technology policy was also being driven by a worry more immediate than the prospect of a Soviet attack: the fact that Japan was clearly outpacing us in technologies that would be critical to America’s capacity to wage war in an era of advanced electronics. Lest this nation become dependent on Japan’s high-tech mastery, the Pentagon hastened to protect its sources. The Department of Defense set in motion an all-out mercantilist campaign to maintain leadership in these militarily significant technologies. The areas that the Pentagon targeted for development were precisely those anointed “industries of the future” by Japan’s Ministry of International Trade and Industry (MITI). Both MITI and the Defense Department launched parallel projects on very-large-scale integrated circuits, fiber optics, new materials like polymers and composites, supercomputers, and complex software. Japan was somewhat baffled as we denounced industrial targeting while enthusiastically engaging in it. But we insisted we were racing against the other guy; it was more or less a matter of coincidence that Japan shared the same track.
4
By the mid-1980s, however, the race courses had begun to diverge. The strands of technological development central to defense and industry no longer coincided so closely, and differences in final goals (MITI, concerned about commercial competitiveness; the Pentagon, concerned about industrial prowess only where relevant to weaponry) began leading to different results. America had a finite number of technical experts and laboratories, the supply of which grew only modestly in response to the surge in Pentagon dollars. Once that limited supply was absorbed, military research came at the expense of commercial research. Our fixation on technology to thwart the Soviets started to undermine our efforts to create commercial high-technology products that could compete successfully with the Japanese. The divergence was becoming clear in several ways:
First, the development and marketing of new commercial products is stimulated by domestic competition, which forces firms to improve their performance and aggressively seek foreign outlets. Although MITI allowed firms to cooperate on specific basic research projects, it rigidly enforced competition in the application of these technologies and the marketing of the products that resulted. The Pentagon was unconcerned about competition within American industry. Between 1981 and 1984—during the height of the most recent military buildup—America’s top five arms contractors increased their share of total Pentagon contracts from 18 percent to 22 percent; the top twenty-five, from 44 percent to 52 percent.5 In 1985 over 65 percent of the dollar volume of U.S. defense contracts was awarded without competitive bidding.6 Most military projects called for such highly specialized items and services that the only contractors capable of supplying them were those that had worked on the same or related projects before. Even where competitive bidding occurred, the bids were often rendered meaningless by routine cost overruns. The Pentagon traditionally has been most comfortable with large, stable contractors insulated from the uncertainties of competition.
Second, creating new products successfully requires long lead times, during which firms can refine innovations, organize production, and make sure they have adequate capital, labor, and productive capacity to meet anticipated demand. Many MITI projects have spanned a decade or more. But by the 1980s most Pentagon programs were subject to relatively sudden changes in politics and in perceptions of national security needs. The precipitous rise in U.S. defense spending beginning in 1981 created bottlenecks in the production of some key subcomponents and capital goods, and shortages of engineers and scientists. In 1985 unfulfilled defense orders totaled over $100 billion—up 20 percent from 1984. And there was a shortage of an estimated 30,000 skilled machinists.7 Under these constraints, commercial applications took second place.
Third, technological competitiveness requires that innovations be transferable to commercial uses at relatively low cost. MITI has seen to it that new technologies are diffused rapidly into the economy and incorporated into commercial products. But the exquisitely sophisticated designs required by Star Wars and its related ventures—precision-guided warheads, advanced missile-tracking equipment, and sensing devices—would not be as transferable to commercial uses as were the relatively more primitive technologies, like the first integrated circuit, produced during the defense and aerospace programs of the late 1950s and 1960s. Indeed, it was because commercial technologies were diverging from military specifications that the Defense Department had opted to set up a parallel research and development system to ensure the availability of precisely the right customized gadgets. Rather than encourage American commercial development, defense spending on emerging high technologies was starting to have the opposite effect, diverting U.S. scientists and engineers away from commercial applications.
Fourth, new technologies generally become commercially important only when they become cheap to produce. Nearly every major advance of modern times lay economically dormant for a time after its development because it was too costly to produce for a wide market. The effort to bring down production costs can be as lengthy and difficult, and fully as important, as the initial development of a new technology. But the Pentagon is notably unconcerned about costs. It wants its high-technology devices to do what they are supposed to do, and damn the expense. Thus the innovation process is truncated; to the extent the military is involved, progress ceases before the cost-reducing stage is reached. Most big contractors with the Pentagon are paid on a “cost-plus” basis—with profits rising in step with how much they spend producing a particular item. This formula does not inspire grand campaigns for production efficiency.
The Pentagon drew much unwelcome attention in the mid-1980s for its willingness to pay large sums for, among other items, an esoteric toilet. Such extravagance is not, as many charged at the time, the product of bureaucratic lethargy or indifference alone. It is due to something more insidious. The gadgets that the Pentagon commissioned had to be designed exactly for the complicated weapons systems in which they would fit; any small deviation, and the larger project might fail. There were no economies of scale in producing such devices. If specifications were sufficiently standardized for cheap, large-scale production, then anyone—including the Soviets, or people willing to sell to the Soviets—could make the components. Both parties to Pentagon contracts were eager to avoid that result—the Pentagon, because standard parts might imperil national security; contractors, because standardization and competition would most certainly imperil their balance sheets.
Consumers, on the other hand, prefer to buy things cheaply. Consumers do not like to be limited to a single supplier who is guaranteed a profit no matter how inefficient his production process. This is why we have capitalism. Firms making products in competitive markets must worry about their costs of production. The Japanese have mastered the art of embedding complex technologies in standardized products; American producers of complex technologies—many of them lured into somnolence by Pentagon contracts—have not.
Fifth, and finally, commercial success requires that producers be exposed to stiff international competition, so that they are forced (under penalty of extinction) to pay careful attention to what consumers around the world want, and constantly innovate in that direction. But the Pentagon increasingly has sheltered American companies from the risks of global competition. To avoid the possibility that we would become too dependent on foreign producers, the Pentagon buys American versions. (“Buy American” provisions, written into most procurement regulations, stipulate that American products be chosen so long as their prices are no more than 50 percent higher than those of foreign sources.) In recent years, whenever an American industry has been threatened by foreign competition—even if the industry (like footwear) is only tangentially involved in defense work—it has sounded the same alarm: Protect us, or the nation’s defense will be imperiled.
Apart from occasional meddling by Congress or the press, the life of a Pentagon contractor is not a harried one. The Japanese pose no threat. There is little possibility that a competitor’s innovations will render a product obsolete. Profits are guaranteed. Once sampled, such a life is almost irresistible, like a narcotic drug. By 1986 America’s preeminent technology companies were exchanging the inconveniences of commerce for this more comfortable existence. Honeywell gave up on commercial computers and began concentrating on items like torpedoes and defense navigation devices. TRW was shifting out of bearings and tools, where foreign competition was stiff, and into defense electronics (in 1980 defense contracts provided 30 percent of its profits; by 1985, 50 percent). General Motors had acquired Hughes Aircraft, a major defense contractor, and was working on a wide variety of military contracts. Sixty percent of Westinghouse’s profits came from defense electronics. And by acquiring RCA, GE had become America’s sixth largest defense contractor, with 20 percent of its sales being to the military (in 1980, military sales had accounted for no more than 10 percent of either firm’s income).8 America’s technology companies were busily withdrawing from the race with Japan.
5
There was also the matter of secrecy.
Experience in developing new technologies must be shared if it is to be put to use. Few scientists, engineers, or tinkerers have all the experience they need, firsthand. High technologies are sufficiently complex that those who seek to improve them must work with others close by, typically in teams. Even the teams must branch out to other teams, so that ideas and insights can be exchanged informally and constantly across a wider network. This ongoing form of exchange is what builds a Silicon Valley or a Route 128 around Boston. It generates social wealth.
What must be shared is not “information,” in the sense of specific data or designs. It is more like gossip—a continuous discussion about ongoing activities. The value of sharing comes in the resulting accumulation of different approaches and results. It comes in the development of common understandings about what technologies might “work” to solve what sorts of problems, and in wider insights into what current technologies can do and how they can be improved and adapted. Through such informal sharing of experiences, scientists and engineers learn from one another how they might do things differently next time. By contrast, mere “information,” in the form of specific data or blueprints, is relatively useless for designing future generations of technology. It may solve an immediate technological problem, but it does not provide experience for solving the next one. It does not teach.