by Tom Clancy
If an engine has to compress a lot of air, then the pressure increase is distributed, or spread out, over a large volume. By reducing the amount of air flowing into the compressor, more work can be done on a smaller volume, which means a greater pressure increase. This is good. Then the designers increased the rotational speed of the compressor. With the compressor stages spinning around faster, more work is done on the air, and this again means a greater pressure increase. This is better. The bypass duct was relatively easy to incorporate into an engine design, but unfortunately, a faster spinning compressor proved to be far more difficult.
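Both measures attack the same number, the compressor's overall pressure ratio, which compounds stage by stage. As a minimal worked example (the stage ratios here are purely illustrative, not figures for any particular engine), an N-stage compressor whose stages each multiply the pressure by a factor π delivers

\[
\Pi_{\text{overall}} \;=\; \prod_{i=1}^{N}\pi_i \;\approx\; \pi^{N},
\qquad 1.25^{10} \approx 9.3, \qquad 1.35^{10} \approx 20.1.
\]

A seemingly modest increase in the work done per stage, from 1.25 to 1.35, more than doubles the overall pressure ratio of the same ten stages.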
There were three major problems:
1. Getting more work out of the turbine so that it could drive the compressor at higher speeds.
2. Preventing the compressor blades from stalling when rotated at the higher speeds.
3. Reducing the weight of the compressor so that the centrifugal stresses would not exceed the mechanical strength of the alloys used in the compressor blades.
Each problem was a formidable technological challenge in its own right, and mastering all three took some serious engineering ingenuity.
Getting more work out of a turbine is basically a metallurgy problem: To produce the hotter gases needed to spin the turbine wheels faster, the engine must run hotter. In addition, if the turbine’s weight can be reduced, more useful work can be extracted from the hot gases. Both improvements require a stronger, more heat-resistant metal alloy, and developing such an alloy is a difficult quest. In working with metals, you rarely find high strength and high heat resistance in the same material. The solution was found not only in the particular alloy chosen for the turbine blades, but also in the manufacturing technique.
Traditionally, turbine blades have been constructed from nickel-based alloys. These are very resistant to high temperatures and have great mechanical strength. Unfortunately, even the best nickel-based alloys melt around 2,100° to 2,200°F/1,148° to 1,204°C. For turbojets like the J79, in which the combustion section exit temperature is only about 1,800°F/982°C, this is good enough; the temperature of the first stage turbine blades can be kept well below their melting point. But high bypass turbofans have combustion exit temperatures in the neighborhood of 2,500°F/1,371°C. Such heat turns the best nickel-based turbine blade into slag in a few seconds. Even before the blades reached their melting point, they would become pliable, like Silly Putty. Stretched by centrifugal forces, they would quickly come into contact with the stationary turbine case. Bad news.
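A quick check of those numbers, using the standard temperature conversion

\[
T_{^\circ\mathrm{C}} \;=\; \tfrac{5}{9}\,\bigl(T_{^\circ\mathrm{F}} - 32\bigr),
\]

shows the scale of the problem: a 2,500°F (1,371°C) gas stream runs 300° to 400°F (roughly 170° to 220°C) hotter than the point at which even the best nickel-based alloys turn to liquid.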
Nickel-based alloys still remain the best material for turbine blades. So improvements in strength and heat resistance depend on the blade manufacturing process. The manufacturing technology that had the greatest effect on turbine blade performance was single-crystal casting.
Single-crystal casting is a process in which a molten turbine blade is carefully cooled so that the metallic structure of the blade forms a single crystal. Most metallic objects have a crystalline structure. For example, you can sometimes see the crystal boundaries on the zinc coating of new galvanized steel cans, or on old brass doorknobs etched by years of wear. When metal objects are cast, the crystals in the metal form randomly due to uneven cooling. Metal objects usually break or fracture along the boundaries of crystal structures. To melt a crystalline object, the heat energy must break down the bonds that hold the crystals together. The bigger the crystals, the more energy it takes. If these crystalline boundaries can be eliminated entirely, a cast metal object can have very high strength and heat resistance, qualities highly desirable in a turbine blade.
The first step in forming a single crystal structure is to precisely control the cooling process. In turbine blade manufacturing, this is done by very slowly withdrawing the mold from an induction furnace. This works like your microwave oven at home, only a lot hotter. Controlled cooling by itself, however, will not produce a single crystalline structure. For that you also need a “structural filter.”
So the molten nickel alloy is poured into the turbine blade mold, which is mounted on a cold plate in an induction furnace. When the mold is filled, the mold/cold plate package is slowly retracted from the furnace. Immediately, multiple crystal structures begin to form in a crystal “starter block” at the bottom of the mold. But because the cold plate is withdrawn vertically, the crystals can only grow toward the top of the starter block. At the top of the block is a very narrow passage that is shaped like a pig’s curly tail. This pigtail coil is the structural filter, and it is only wide enough for one crystal structure to travel through. When the single crystal structure reaches the root of the turbine blade, it spreads out and solidifies as the blade mold is slowly withdrawn from the furnace. Once it is completely cooled, the turbine blade will be a single crystal of metal with no structural boundaries to weaken it. It now only requires final machining and polishing to make it ready for use.
A cutaway of the molding process for a modern turbofan engine turbine blade.
Jack Ryan Enterprises, Ltd., by Laura Alpher
While single-crystal turbine blades are very strong and heat resistant, they would still melt if directly exposed to the hot gases from the combustion of a turbofan engine. To keep molten turbine wheels from dribbling out the back end of the engine, a blanket of cool air from the compressor is spread over the turbine blades. This is possible because complex air passages and air bleed holes can be cast directly into the turbine blades. These bleed holes form a protective film of air, which keeps the turbine blades from coming into direct contact with the exhaust gases, while simultaneously allowing the turbine blades to extract work from those gases. Earlier non-single-crystal turbine blade designs had very simple cooling passages and bleed holes that were machined out by lasers or electron beams, and didn’t provide as much thermal protection.
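Turbine designers quantify what that film of air buys with a film-cooling effectiveness parameter; the relation below is standard, though the numbers plugged into it here are purely illustrative:

\[
\eta \;=\; \frac{T_{\text{gas}} - T_{\text{wall}}}{T_{\text{gas}} - T_{\text{coolant}}}
\quad\Longrightarrow\quad
T_{\text{wall}} \;=\; T_{\text{gas}} - \eta\,\bigl(T_{\text{gas}} - T_{\text{coolant}}\bigr).
\]

If the gas stream is at 2,500°F, the compressor bleed air at 1,200°F, and the film achieves an effectiveness of 0.6, the blade surface sits near 1,720°F, comfortably below the alloy’s melting range.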
A cutaway of the Pratt & Whitney F100-PW-229 turbofan engine.
Jack Ryan Enterprises, Ltd., by Laura Alpher
Thanks to single-crystal casting technologies, the turbine sections of turbofans not only operate at higher pressures and temperatures than turbojets, but are smaller, lighter, and more reliable. For example, a quick comparison between the J79 and the F100 shows that the turbine section that drives the compressor has shrunk from three large stages to two smaller ones.
The remaining problems resulting from a turbofan’s higher pressure ratio include preventing the compressor blades from stalling at higher rotational speeds, and reducing the compressor section’s weight. Weight is particularly critical, since every extra pound/kilogram has to be compensated for by the aircraft’s designers. Fortunately, the solution to compressor stalling also reduces the compressor’s overall weight.
Consider the problem: As the rotational speed of the compressor increases, so too does the speed of the airflow. At some point the airspeed becomes so high that a shock wave forms and the compressor “stalls.” This is very similar to what happened to many early straight-wing jet and rocket-powered aircraft when they went supersonic. As the aircraft exceeded the speed of sound, a shock wave (a virtual “wall” of air) formed which caused the wing to undergo “shock stall” and lose all lift. In an engine, excessive shock-induced drag stalls the airflow and the compressor is unable to push the air any further. In aircraft design, the remedy for shock stall was to sweep the wings back. The same solution works for turbofan engine compressor blades. Sweeping back the compressor blades not only avoids shock stalling, but allows the blades to do more work on the air because they are moving faster. This raises the pressure ratio. Since these higher-speed, swept-back compressor blades are much more efficient in compacting air, a smaller number of compressor stages is required to achieve a desired pressure ratio. A smaller number of stages means a reduction in the overall weight of the compressor and the engine itself. Again, comparing the J79 and the F100, we can see an overall reduction in the number of compressor stages from seventeen in the J79 to thirteen for the F100 (or really only ten if we exclude the fan section). Compressor weight has also been reduced through the use of titanium alloys in about half of the stages towards the front of the engine. Although titanium is lighter than nickel alloys, it cannot be used further aft than the midsection of the compressor (due to heat-resistance limits of titanium alloys), so heavier steel alloys are used in the remaining stages. Still, there is a significant weight saving from the use of titanium where it is applicable, and the current generation of fighting turbofan engines has greatly benefited as a result.
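The arithmetic behind the stage-count reduction is simple compounding. Assuming, purely for illustration, an overall pressure ratio of 25:1 (the real figures for these engines differ), the required per-stage ratio is

\[
\pi_{\text{stage}} \;=\; \Pi^{1/N},
\qquad 25^{1/17} \approx 1.21, \qquad 25^{1/10} \approx 1.38.
\]

Each of the F100’s faster, swept-back stages does roughly the work that one and a half of the J79’s stages did.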
Once the problems with faster-spinning compressors were solved, turbofan engines generally replaced the turbojet as the propulsion plant of choice for high-performance military aircraft. Their superior thrust made them a natural choice for the new generation of high-performance aircraft like the F-15 and F-16 that came on-line in the mid-1970s.
The latest version of the Pratt & Whitney F100 family, the F100-PW-229, is generally considered to be the best fighter engine in the world today. It is capable of delivering over 29,000 lb./13,181.8 kg. of thrust in afterburner, as well as providing improved fuel economy in dry-thrust ranges. Although it’s not the first turbofan engine used in a fighter design (the F-111A was fitted with the Pratt & Whitney TF30), the F100 engine was the first true “fighting” turbofan, and is the propulsion plant for all of the F-15-series aircraft and the majority of the F-16 fleet as well. The F100 engine first flew in July 1972 in the first prototype F-15; and by February 1975, the Eagle had established eight world records for rapid climbing, streaking past the records held by the turbojet-powered F-4 Phantom and the Soviet MiG-25 Foxbat.
The improvement in fuel economy at subsonic speeds came about because the smaller quantity of higher-pressure air entering the combustion chamber mixed better with the fuel and burned more completely. Since the fuel burns more efficiently, turbofans have about 20% lower specific fuel consumption at subsonic speeds; and as an added bonus they do not produce as much smoke as a turbojet. This was a major tactical improvement. In Vietnam, the F-4 Phantom II usually announced its presence by the plumes of smoke belching from its twin J79 turbojets.
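The yardstick here is thrust specific fuel consumption, the fuel flow needed per unit of thrust:

\[
\text{TSFC} \;=\; \frac{\dot m_{\text{fuel}}}{F}.
\]

With illustrative round numbers, if a turbojet burns 0.85 lb. of fuel per hour for each pound of thrust at cruise, a 20% improvement brings that to about 0.68, and at a 10,000-lb. thrust setting that is 1,700 lb. of fuel saved every hour.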
Another significant improvement in fuel economy and overall engine performance came with the development of an advanced electronic-control system called Full-Authority Digital Engine Control or FADEC. FADEC replaced the old hydromechanical control system found on turbojets, responding faster and more precisely to changes that the engine experiences in flight. Factors that FADEC monitors include aircraft angle of attack, air pressure, air temperature, and airspeed. Since FADEC can monitor considerably more parameters than a hydromechanical system, it is constantly fine-tuning the engine to maximize its performance.
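The sketch below suggests, in greatly simplified form, the kind of closed-loop logic a digital engine control runs. Every name, limit, and gain in it is invented for illustration; real FADEC software is certified, redundant, and vastly more elaborate.

```python
# Toy FADEC-style control loop (illustrative only; all names,
# limits, and gains are invented, not taken from any real engine).
from dataclasses import dataclass

@dataclass
class SensorData:
    angle_of_attack: float  # degrees
    air_pressure: float     # kPa, ambient
    air_temp: float         # kelvin (monitored; unused in this toy)
    airspeed: float         # Mach (monitored; unused in this toy)

def target_fuel_flow(throttle: float, s: SensorData) -> float:
    """Map pilot demand plus flight conditions to a fuel-flow target."""
    base = throttle * 100.0          # nominal full-throttle flow, kg/min
    base *= s.air_pressure / 101.3   # thinner air supports less combustion
    if s.angle_of_attack > 25.0:     # back off near a made-up AoA limit
        base *= 0.9                  # to protect the inlet airflow
    return base

def fadec_step(flow: float, throttle: float, s: SensorData,
               gain: float = 0.2) -> float:
    """One control cycle: ease toward the target instead of jumping,
    so the engine neither overshoots nor surges."""
    return flow + gain * (target_fuel_flow(throttle, s) - flow)

# Example: engine settling toward a new throttle setting at altitude.
s = SensorData(angle_of_attack=5.0, air_pressure=60.0,
               air_temp=240.0, airspeed=0.9)
flow = 30.0
for _ in range(5):
    flow = fadec_step(flow, throttle=0.8, s=s)
print(f"fuel flow after 5 cycles: {flow:.1f} kg/min")
```

The point of the loop, constant small corrections driven by many sensed parameters at once, is exactly what a hydromechanical governor cannot match.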
Not everything about a fighting turbofan engine is an improvement over a turbojet. For instance, the afterburner of a turbofan actually consumes far more fuel (about 25% more) than its counterpart on a turbojet. Because so much of the air entering a turbofan goes through the bypass duct, the afterburner receives a larger supply of oxygen-rich air. With the greater amount of oxygen available for combustion, more fuel can be sprayed into the afterburner to produce even more thrust. For turbofan engines, the afterburner provides about a 65% increase in thrust (compared with 50% for a turbojet). The good news is that aircraft equipped with fighting turbofans don’t need to use afterburners as often. The latest version of the F100 produces as much thrust without afterburner as the J79 does with it. Now, an F-15C still needs the afterburner to sustain supersonic flight, but it can cruise at high subsonic speeds, loaded with external fuel tanks and missiles, without using this fuel-guzzling feature.
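The published round numbers for the F100-PW-229 bear that 65% figure out, give or take (the dry-thrust figure here is approximate):

\[
\frac{29{,}000\ \text{lb. (afterburner)}}{17{,}800\ \text{lb. (dry)}} \;\approx\; 1.63,
\]

about a 63% boost from lighting the burner.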
Presently, all high-performance fighters are essentially subsonic aircraft, with the ability to make short supersonic dashes through the use of afterburners. But the USAF’s next-generation Advanced Tactical Fighter (ATF) will be required to sustain cruise speeds above Mach 1.5 (at altitude) without the use of its afterburners. The only way this can be done is to have the core (the compressor, combustor, and turbine sections) of a turbofan produce more thrust than even the current-generation fighting turbofans. With the help of advanced computer-modeling techniques, called computational fluid dynamics, the compressor and turbine blades of the new engine are shorter, thicker, and more twisted than those in the F100. Thus, the F119-PW-100, the engine chosen for the new F-22 fighter (winner of the ATF competition), has fewer stages in the compressor and the turbine (three stages in the fan, six in the compressor, and two in the turbine). Even with these changes, supersonic cruise could not be achieved. To get the needed thrust, the bypass ratio had to be further reduced, and more air sent through the core of the engine.
The F119 engine on the F-22 is technically a low-bypass turbofan, with only about 15% to 20% of the air going down the bypass duct. Now, this low bypass ratio seems to conflict with all I’ve said about the advantages of high-bypass turbofans. However, a high-bypass-ratio turbofan is designed to give good performance at subsonic speeds! For supersonic cruising, the best engine must be more like a turbojet. With its low bypass ratio, the F119 engine is almost a pure turbojet, with only enough air sent down the bypass duct to provide for the cooling and combustion (oxygen) requirements of the afterburner. During test runs in 1990 and 1991, the YF-22 prototype was able to sustain Mach 1.58 at altitude without using its afterburners. The tremendous advantage of maintaining supersonic speeds without the afterburner, coupled with thrust-vectored exhaust nozzles, will give the F-22 significantly enhanced maneuvering characteristics over even the nimble F-16 Block 50/52, equipped with the -229 version of the F100. Thrust vectoring is the use of steerable nozzles or vanes to deflect part of the engine exhaust in a desired direction. This allows the aircraft to change its direction, or flight attitude, with less use of its control surfaces (ailerons, rudder), which induce a lot of drag. The Rolls-Royce Pegasus engine, which enables the AV-8 Harrier to land and take off from a tennis court, is the best-known example of thrust vectoring.
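In the designer’s usual terms, the bypass ratio (BPR) is the mass of air flowing through the duct divided by the mass flowing through the core, so “15% to 20% down the duct” works out to

\[
\text{BPR} \;=\; \frac{\dot m_{\text{bypass}}}{\dot m_{\text{core}}}
\;=\; \frac{0.15}{0.85}\ \text{to}\ \frac{0.20}{0.80}
\;\approx\; 0.18\ \text{to}\ 0.25,
\]

a far cry from the ratios of 5:1 or more found on commercial high-bypass engines.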
Where engine technology will go from here is anyone’s guess. One of the major challenges that has faced designers for decades is to produce powerplants that can make Short Takeoff/Vertical Landing (STOVL) tactical aircraft a practical reality. The AV-8B Harrier II is a wonderful tool for the U.S. Marines, but the weight of its Pegasus powerplant limits it to short-range, subsonic flights. Perhaps the next-generation engine being developed under the Joint Advanced Strike Technology (JAST) program will provide the answer to this quest. Whatever happens, though, engine designers will always hold the key for those who “feel the need for speed . . .”
STEALTH
Stealth is a good Anglo-Saxon word, derived from the same root as the verb “steal,” in the sense of “stealing” up on your foe to surprise him. When eyes and ears were the only sensors, camouflage and careful, muffled steps (don’t break any twigs, and I’ll flog the first legionary whose armor clanks!) were the way to sneak up on the enemy. The ninja warriors of medieval Japan were masters of stealth, using the cover of night, black suits, and silent methods of infiltrating castles and killing sentries to earn a legendary reputation for mystical invisibility. Submarines use the ocean to conceal their movements, and no high-technology sensor has yet managed to render the ocean transparent.
For aircraft, radar and infrared are the sensors that represent the greatest threat. Let’s consider radar first. The acronym RADAR first came into the military vocabulary during World War II. The term stands for Radio Detection and Ranging, and the technology significantly enhanced the ability of a land-based warning outpost, ship, or aircraft to detect enemy units. A transmitter generates a pulse of electromagnetic energy, which is fed to an antenna via a switching circuit. The antenna forms the pulse into a concentrated beam, which can be steered. If a target lies within the beam, some of the pulse’s energy is absorbed, and a very small amount is reflected back to the radar antenna. The switching circuit then routes the returning pulse from the antenna to a receiver, which amplifies the signal and extracts the important tactical information (target bearing and range). This information is displayed on a screen, where a human can see the target’s position, guess where it is going, and try to make tactical decisions. A big object that reflects lots of energy back toward the antenna shows up as a big, bright blip on the screen. A very small object that reflects very little energy may not show up at all.
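How small that reflected amount is can be put on paper with the standard radar range equation. For a radar transmitting and receiving on the same antenna,

\[
P_r \;=\; \frac{P_t\,G^2\,\lambda^2\,\sigma}{(4\pi)^3\,R^4},
\]

where \(P_t\) is the transmitted power, \(G\) the antenna gain, \(\lambda\) the wavelength, \(\sigma\) the target’s radar cross section, and \(R\) the range. The return falls off as the fourth power of range, which is why radars need powerful transmitters and very sensitive receivers, and why shrinking \(\sigma\) pays off so handsomely for the target.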
There are two stealth techniques to defeat radar: shaping, to reduce an object’s “radar cross section” (RCS), and coating the object with radar-absorbing materials (RAM). When radar was in its infancy in World War II, both sides experimented with these techniques. The Germans were particularly successful. By 1943, the Germans were applying two different types of RAM coatings, called Jaumann and Wesch absorbers, to their U-boat snorkel masts to reduce their detectability by airborne radar. Although the RAM reduced the radar-detection range of a snorkel mast from about 8 miles/14.6 km. to 1 mile/1.8 km., the coatings didn’t adhere well to the snorkel masts after prolonged immersion in seawater. Meanwhile, the Luftwaffe was investigating radar-defeating airframe shapes. In 1943, two German brothers named Horten designed a jet-propelled flying wing, quite similar in appearance to the USAF B-2 bomber. Tail surfaces and sharp breaks between wing and fuselage increase a plane’s radar cross section, so an all-wing airplane is an ideal stealth shape, as well as an efficient design. A prototype aircraft, designated the Ho IX V-2, first flew in 1944, but crashed in the spring of 1945 after a test flight. Due to Allied advances on both fronts, the program was stopped. The remarkable work on reducing aircraft signatures done by German engineers in the early-to-mid-1940s would not be taken up again in earnest until 1958, when Lockheed’s Skunk Works began work on the A-12, the forerunner of the SR-71 Blackbird.
A comparison of the radar reflectivity of three angled surfaces.
Jack Ryan Enterprises, Ltd., by Laura Alpher
As with any other active sensor, a radar’s performance is highly dependent on how much of the transmitted energy is reflected by the target back toward the receiving antenna. A lot of energy, and the operator sees a big blip. Less energy, and the operator sees a small blip. The measure of this reflected energy, the radar cross section (RCS) of the target, is expressed as an area, usually in square meters (one square meter is about 10.8 square feet). This measurement is, however, somewhat misleading: RCS can’t be determined by simply calculating the target area facing the radar. RCS is a complex characteristic that depends on the cross-sectional area of the target (geometric cross section), how well the target reflects radar energy (material reflectivity), and how much of the reflected energy travels back toward the radar antenna (directivity). To lower an aircraft’s RCS, designers must reduce these factors as much as they can without degrading the aircraft’s ability to carry out its mission. It should be said that such features are not easily slapped onto an existing airframe; they are fundamental to the plane’s design from the first drawing. Thus the need for designed-to-purpose stealth structures.
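Those three factors multiply together, and the reason designers chase every last bit of reduction is that detection range depends only weakly on RCS. Setting the received power in the radar range equation above equal to the receiver’s detection threshold and solving for range gives

\[
R_{\max} \;\propto\; \sigma^{1/4},
\]

so a sixteenfold cut in RCS buys only a halving of detection range, and shrinking a 10-square-meter target to 0.001 square meters (a factor of 10,000) shortens the radar’s reach by just a factor of ten. That is why stealth shaping and coatings have to be so aggressive to matter.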