TSR2

by Damien Burke


  On 21 January 1964 Olympus 320X 22213 was into the second of two 24hr tests to clear engine accessories (gear box, fuel pumps, and so on) when the No. 2 bearing failed. This was the intermediate LP shaft bearing situated at the rear of the LP compressor. (Many months later, further investigation would reveal that the No. 3 bearing at the front of the HP compressor had also failed.) On 21 February an Olympus 320X was undergoing flight clearance trials in No. 2 Cell at the NGTE. Reheat had just been engaged when there was a loud bang and the engine spewed a stream of shattered turbine blades from the exhaust. The first-stage turbine blades, cast from G.64, had fallen victim to fatigue caused by flutter (as they had done on three previous occasions). Forged Nimonic blades solved the problem.

  In March another engine ingested some bolts from its own casing, which cracked all of the HP turbine rotor blades. The bolts had vibrated loose, so improvements to the casing had to be made, initially with measures to lock the bolts in place. Later, measures were taken to avoid the vibration mode that was causing them to loosen in the first place. On 8 April 1964 engine No. 22220, scheduled to be the fifth flight engine, failed its two-hour pre-delivery test run just one minute short of the test’s completion when the No. 7 bearing registered an excessive overheat condition. On strip-down and examination it was found that there had been a bearing oil fire, and that the HP shaft had bent as a result. This engine had already incorporated modifications to deal with previous bearing fires in the same location, there having been five previous incidents of a similar nature. A further No. 7 bearing fire incident in May was to follow before a successful improvement was made to venting in this area.

  By June 1964, with BSEL now suffering from the financial penalties imposed by missing many contractual guarantee points, engines had been installed in the first TSR2 airframe, XR219. After dealing with the multitude of small problems to be expected with a first-time installation of new engines in a new airframe (of which more later), several hours of ground running were carried out. However, an unusual engine note and hammering noises heard on several reheat runs on 5 June 1964, along with overstress indications on the port engine (the only one with a strain gauge fitted to the LP shaft), required further investigation, and both engines were removed and returned to BSEL for examination. The port engine’s LP shaft had indeed been overstressed, and was bent. Another disastrous LP shaft failure had been narrowly avoided.

  Replacement engines with the strengthened and damped LP shafts, as previously installed on engine 2203 after the Vulcan FTB incident, were delivered and installed into XR219, but BSEL could not clear them for reheat running until the test results from engine No. 2203 were available. To add to the engine’s litany of problems, cracks were being found in the turbine entry ducts, so the material used there had to be changed and thickened. All existing engines were given shorter lives in the meantime.

  Matching engine to aircraft

  Unlike previous Olympus installations, which were effectively underslung units with wrap-around doors as on the Vulcan, the TSR2’s engines were to be installed in a pair of fully enclosed tunnels with access only from the rear. The aircraft’s installation railing would match up with rollers provided on the engine casing, and the engine had three attachment points: a single trunnion at the top of the casing, which held the thrust bearing through which the engine’s thrust load was transmitted to the airframe, and two steadying points in the pitching and rolling planes to accommodate the inevitable movement, expansion and contraction of the engine in use.

  This proved to be a major problem on the development-batch airframes, with the space available in the tunnels being continually reduced as the aircraft’s development proceeded. Bristol Siddeley Engines, already running into its own problems and under pressure, was keen to keep its customer happy, and tended to acquiesce to most requests. When BAC asked for the engine to be reduced in diameter, BSEL even said yes to this, though it meant an expensive redesign, with the initial batch of bench-test engines being unrepresentative of production examples. In addition, the space provided in the aircraft for an engine accessories bay had looked just sufficient to begin with, but as development proceeded the bay became ever more stuffed with components and pipework, to the point where some items that needed to be accessible were now obscured by others. The need for additional instrumentation and associated plumbing on development-batch aircraft exacerbated the situation.

  The engine installation. This illustration gives some idea of the tight fit of the engine within the airframe. BAE Systems via Brooklands Museum

  Trial installation of a mockup engine into a mockup TSR2 at Weybridge. Installation of a real engine in a real aircraft proved to be somewhat trickier. BAE Systems via Brooklands Museum

  Work to improve the accessories bay layout meant further expensive redesign of existing components, such as reducing the size of the main gearbox, but every improvement was more than matched by a reduction in available space caused by other design changes, especially the switch to double-walled fuel and oil pipes introduced by the MoA and BAC, which enlarged all these pipes and their associated unions. To ease the accessories-bay congestion temporarily for the development-batch aircraft, BAC was working on re-routeing pipes in the bay to give optimum use of space (probably at the expense of accessibility), but the preferred solution was to move the large CSDS unit into a new bay aft of the accessories bay. This would incur a penalty in the form of an 80gal (360L) reduction in fuel capacity, and would also require a change in the configuration of the airbrakes. A further suggestion was to bulge the aircraft in the area of the accessories to give more room, but the RAF did not think much of this idea with its attendant increase in drag and reduction in performance. Nor did it like the frankly bizarre idea of locating the accessory drives in an external pod underneath the bay. On balance, the RAF preferred the solution of adding a ventral ‘spine’ between the engines to house certain items, as this would create less drag and could also double as a locating point for the emergency arrester hook it wanted.

  This wide-angle shot of XR220’s partly depopulated starboard engine accessories bay gives a very good idea of the congestion in this bay, and why it proved to be such a problem area. Damien Burke

  The gaping maw of XR220’s port engine tunnel. Visible running along the top and bottom of the tunnel are the guide rails (the bottom rail being offset to the right) and holes in the tunnel walls for the reheat controls and, in the far distance, the engine accessories. Despite best efforts, no two engine tunnels were of exactly the same dimensions, and individual aircraft gauges would have been needed to measure tunnels for suitability before the installation of any particular ECU was attempted. Damien Burke

  By February 1964 the situation in the accessories bay had become critical. The RAF declared the current bay unacceptable, and said none of BAC’s solutions to improve it were acceptable either. A request was made to revert to single-walled pipes (which the MoA would take until July just to discuss), and even so it was not expected that any improvement to the bay would be available until at least the tenth airframe (the first pre-production aircraft). A meeting at the MoA on 6 March resulted in BAC being instructed to investigate urgently some more feasible layouts than previously suggested. That investigation was completed in the second week of May, and this time the preferred solution was a small enlargement of the bay at its forward end, cutting an arch into an existing fuselage frame and relocating a water pump and LP fuel cock higher in the fuselage. Around 14in (35cm) more room would be available, and the CSDS could be moved to the forward end of the bay.

  Furthermore, originally there were twenty-seven connections from the aircraft to the basic engine-change unit (ECU), but by January 1962 there were forty-five (sixty-nine counting the jetpipe and reheat unit). This gives some idea of how many additional pipes and wires needed to be disconnected to enable an engine change. This was a serious safety concern, as many connections had to be made without visual access, with subsequent connections blocking access to previous ones. There would be no way to check many of the connections without running the engines, and even then small air, fuel or hydraulic leaks could go undetected during low-rpm ground-running. The first indication of poor workmanship during engine installation could be the fire warning lights illuminating in the cockpit during a flight. Many of the electrical connections were also single lines sprouting from apparently random points on the engine, instead of being consolidated into fewer wiring harnesses with connector blocks, as BAC preferred.

  Physically getting an engine into the aircraft was also problematic. Installing the first engine in the first airframe took over a week of hard work. Rather than sliding smoothly into place, the engine had to be jockeyed into position a fraction of an inch at a time over days of frustrating and painful work, with frequent minor rotations of the entire unit to clear obstructions as they impeded further progress. On the first aircraft, XR219, things were further complicated by the port engine tunnel not being perfectly circular; thus a great deal of extra work was necessary to match the port engine jetpipe shroud with the rear fairing.

  An Olympus 320 ECU on display at the RAF Museum Cosford; the casing has been replaced by Perspex to reveal the HP and LP compressors. Damien Burke

  The Olympus 320 reheat unit. At 184in (4.67m) in length, this was longer but far simpler than the ECU. At the far left of the unit can be seen the clamps for attaching it to the ECU. Further right are the jack housings for the reheat nozzle actuators and, at far right, the nozzle area itself. The nozzle was moved back and forth by the jacks, the petals inside being pressed against the nozzle sides by the exhaust flow and thus giving variable nozzle diameter. Damien Burke

  The business end of the reheat unit. Running around the inner wall are the rollers for the variable-position petals; deep within the unit are the three concentric rings of the flame holders. Damien Burke

  The overall clearance between engine and engine tunnel had by now dropped to a mere third of an inch, a tolerance that BAC was confident would prove satisfactory in service, but which the RAF thought unacceptable. The specified 3hr engine-change time could not be met even under ideal conditions with trained and experienced personnel, as the RAF’s resident project officer had predicted in 1962. Instead BAC initially quoted a figure of 10hr 50min, reducing this to 8hr 15min when pressured to improve. Exasperated by the situation, the RAF wisely decided this probably meant 12hr in practice, and accepted that as the best that could be managed without a hugely expensive rethink of the whole question of engine installation. It is noteworthy that most military aircraft that followed the TSR2 have had rather more sensible engine installations; the SEPECAT Jaguar and the Tornado, for example, have much simpler and far more accessible engine bays.

  Flying the engine

  In March 1964 the engines were still at such an immature stage of development that multiple limitations were placed on the flight batch of engines. This was particularly serious, as the aircraft’s first flight had at that point been expected to take place in May. The limitations included:

  Intake Pressure (P1)

  A 20psi (1.40kg/sq cm) limit was in place here, expected to be raised to 27psi (1.89kg/sq cm) for the first flight but specified at 33psi (2.32kg/sq cm). This would limit airspeed, particularly at low altitude and in low ambient temperatures, when the air is denser; a rough worked example of the effect follows the list of limitations.

  Intake Temperature (T1)

  A 70°C limit (expected to be brought up to 100° for the first flight, but specified at 146° with short periods at 180°) was in place because of insufficient turbine cooling. Bearing fires could be expected at higher temperatures, with resulting LP or HP shaft damage and possible catastrophic turbine disc failure.

  Compressor Delivery Pressure (P3)

  A 200psi (14.06kg/sq cm) limit (specified at 300psi (21.09kg/sq cm)) was in place because of a weak engine-to-jetpipe joint, which would further limit airspeed, to as low as 320kt (370mph; 590km/h) at low altitude in typical conditions with full power. The joint was made by a single metal strap wrapped around it and bolted into place at a single point, something BAC felt was entirely unsound in an area of great heat and vibration. An improved interim design, with four latches added to the strap to keep it in place, had been created, but would not be available on the initial four flight-batch engines. A fully bolted joint would be embodied in production engines.

  G

  Although 7.3g was specified as the maximum the engine would need to cope with, problems with compressor blades rubbing against the engine casing reduced this to a mere 3.5g; above that figure, depending on altitude, the g loading could bring the blades into contact with the casing.

  Water Injection

  Thrust boosting by water injection had not been cleared for flight use.

  Turbine Entry Temperature (T4)

  Limited to 1,180K (specified at 1,240K) to enable ‘continuous’ use of reheat.

  Reheat Temperature

  Limited to 1,850K (specified at 2,000K) owing to cracking of the corrugated reheat liner under test; this would reduce the available reheat thrust.

  Reheat Control

  A failure of the aircraft’s AC electrical supply, leaving only DC power available, would leave the pilot unable to control reheat, as reheat control was available only on AC power. An engine failure on top of this would leave the aircraft unable to maintain height when heavy with fuel and in the approach configuration, nor could it climb away with flaps down. An emergency DC reheat control was far in the future, so as a minimum measure BAC’s chief test pilot insisted on a fuel jettison system being available so that the aircraft’s weight could be reduced in a hurry if need be, and on limiting fuel loads to keep the AUW below the critical 70,000lb (31,750kg) mark (thus limiting flight duration to below 40min).

  Engine Life

  Limited to 25hr airborne (8hr in reheat) and 40hr ground running.
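  To put the intake pressure (P1) limit in perspective, the following is a minimal illustrative calculation, not drawn from TSR2 or Olympus data: it assumes ISA sea-level static conditions and perfect intake pressure recovery, and uses the standard isentropic relation between Mach number and total pressure to estimate the true airspeed at which each P1 figure would be reached. Colder, denser air lowers the local speed of sound, so a given true airspeed corresponds to a higher Mach number and a higher ram pressure, which is why the limit bit hardest at low level in cold conditions.

```python
# Illustrative only: estimates the true airspeed at which a given intake total
# pressure (P1) limit is reached, assuming ISA sea-level static conditions and
# perfect (100%) intake pressure recovery. Not based on TSR2/Olympus data.
import math

GAMMA = 1.4            # ratio of specific heats for air
R = 287.05             # specific gas constant for air, J/(kg.K)
PSI_PER_PA = 1.0 / 6894.76

def mach_for_total_pressure(p_total_psi, p_static_psi):
    """Invert the isentropic relation p0/ps = (1 + 0.2*M^2)**3.5."""
    ratio = p_total_psi / p_static_psi
    return math.sqrt(((ratio ** (1.0 / 3.5)) - 1.0) / 0.2)

def tas_knots(mach, temp_k):
    a = math.sqrt(GAMMA * R * temp_k)   # local speed of sound, m/s
    return mach * a * 1.9438            # m/s -> knots

p_static = 101325.0 * PSI_PER_PA        # ISA sea-level static pressure, ~14.7 psi
temp = 288.15                           # ISA sea-level temperature, K

for limit in (20.0, 27.0, 33.0):        # the interim, first-flight and specified P1 figures
    m = mach_for_total_pressure(limit, p_static)
    print(f"P1 limit {limit:4.1f} psi -> reached at about M {m:.2f} "
          f"({tas_knots(m, temp):.0f} kt TAS at sea level, ISA)")
```

  On these assumptions the interim 20psi figure is reached at roughly M 0.68, around 450kt at sea level, which illustrates why it was considered so restrictive for low-level work.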

  All of these limitations combined to reduce the available flight envelope for early flights, and the life limitations also seriously reduced the possible intensity of the test-flying programme. An overall drop in engine performance in the order of 8 per cent could be expected from the limitations, which BAC was prepared to tolerate for a very few early flights. The major effect would be a lengthened take-off run (in the order of 10 per cent more runway being needed), with the aircraft also restricted to short flights.
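  As a very rough first-order check on those figures, one can assume the take-off ground run varies inversely with the mean accelerating force, that is net thrust minus a resistance term for drag and rolling friction; the resistance fraction used below is an assumed round number, not TSR2 data. On that basis an 8 per cent thrust shortfall lengthens the run by roughly 10 per cent, consistent with the figure quoted.

```python
# Very rough, illustrative check: with roughly 8% less thrust, how much longer
# does the take-off ground run become? Assumes constant mean acceleration
# proportional to (net thrust - resistance); the resistance term (drag plus
# rolling friction) is an assumed round number, not TSR2 data.
def takeoff_run_ratio(thrust_loss, resistance_fraction=0.15):
    """Ground run scales as 1 / (T - R), with R expressed as a fraction of full thrust."""
    full = 1.0 - resistance_fraction
    degraded = (1.0 - thrust_loss) - resistance_fraction
    return full / degraded

ratio = takeoff_run_ratio(0.08)
print(f"Approximate increase in take-off run: {100 * (ratio - 1):.0f}%")
# ~10% with these assumptions, in line with the figure quoted above.
```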

  The seemingly never-ending stream of often catastrophic problems delayed the engine’s clearance for flight, but it did at least mean that some of the early limitations could be eased, though the first flight was pushed further and further away from the May date. The various catastrophic failures had finally pushed BAC into insisting that some form of blade containment be added to the engine to prevent any further major failures sending blades scything through the surrounding aircraft structure. As that structure included fuel tanks, an in-flight failure involving the expulsion of a disc or individual blades would almost certainly have resulted in the loss of the aircraft. Although BSEL began working on this, there was no chance that such containment would be available for early test flights.

  Engine control and fuel systems functional layout. BAE Systems via Brooklands Museum

  Reheat fuel system functional layout. A variable fuel-flow restrictor was to be fitted to this system to avoid the overfuelling, reheat combustion instability and associated LP shaft damage that would result if any particular manifold malfunctioned. BAE Systems via Brooklands Museum

  By July the aircraft still did not have engines cleared for flight, and after 50hr 44min running in a test cell, on 24 July 1964 engine 2203 provided just the results that nobody wanted. The LP driveshaft ruptured once again and the first-stage LP turbine wheel was ejected, causing much excitement in the test cell. The cell’s thick steel walls were deeply gouged in many places, and the engine casing and surrounding pipe work were wrecked. Clearly, stiffening the LP shaft alone had not been enough to cure the problem, and now the aircraft’s first flight was effectively out of the question because the engines simply could not be relied upon.

  Intensive investigations at NGTE and BSEL finally found that the LP driveshaft failures were brought on by a bell-mode vibration. Effectively, the shaft was resonating under certain conditions, alternately being squashed into a flattened oval shape in the horizontal and then the vertical direction (and ringing like a bell; hence the name). This had not been experienced on previous marks of Olympus, as they had a shorter LP driveshaft with differing resonance characteristics. The conditions under which this vibration occurred had been unclear, but after the ‘hammering’ incident BSEL had turned to the reheat fuelling system for its next avenue of investigation. The indications were that one of the three reheat fuel manifolds had malfunctioned, forcing extra fuel into the other two. This overfuelling during reheat running was resulting in pressure fluctuations (the hammering noise that observers had noted had been a particularly severe manifestation of this), and the torsional frequency of the LP shaft was in resonance with the jetpipe pressure fluctuation. The jetpipe was effectively ringing the LP shaft’s bell, to dramatic effect.
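  The mechanism described above, a periodic forcing driving a structural mode at or near its natural frequency, can be sketched with a generic forced-oscillator calculation. The figures below are arbitrary and purely illustrative; they do not represent the Olympus LP shaft or its jetpipe, but they show how sharply the response grows once the forcing frequency coincides with the natural frequency.

```python
# Generic illustration of resonance: steady-state amplitude of a lightly damped
# oscillator driven at various frequencies. Numbers are arbitrary, chosen only
# to show the sharp peak when the forcing frequency matches the natural
# frequency; they do not represent the Olympus LP shaft or jetpipe.
import math

def steady_state_amplitude(forcing_freq, natural_freq, damping_ratio, force=1.0):
    """Amplitude of x'' + 2*z*wn*x' + wn^2*x = F*cos(w*t) once transients die away."""
    w, wn, z = forcing_freq, natural_freq, damping_ratio
    return force / math.sqrt((wn**2 - w**2)**2 + (2.0 * z * wn * w)**2)

wn = 100.0       # arbitrary natural frequency, rad/s
zeta = 0.02      # light structural damping (assumed)
static = steady_state_amplitude(0.0, wn, zeta)   # static (zero-frequency) deflection

for w in (50.0, 90.0, 99.0, 100.0, 101.0, 110.0, 150.0):
    amp = steady_state_amplitude(w, wn, zeta)
    print(f"forcing at {w:5.1f} rad/s -> relative amplitude {amp / static:6.1f}x static")
```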

  The solution was to introduce a fuel-flow restrictor to the reheat unit to stop overfuelling from occurring should there be a manifold malfunction. The first interim fix was a fixed fuel-flow restrictor in each manifold, which would introduce a number of limitations to the engines, including the loss of any reheat functionality above 13,000ft (3,900m). This was a crude temporary fix, and risked long-term damage to the fuel pumps from back-pressure. Unfortunately it would take until at least September to add the restrictors to the existing engines. The final solution was being worked on by Dowty, and consisted of a fully variable fuel-flow restrictor in each reheat fuel manifold, so that a malfunction in any particular manifold would not result in overfuelling of the others. However, the improved flow-control system would not be developed and tested until early 1965, and the first flight could not wait that long.

 
