Flawless Execution


by James D. Murphy


  Every cause has a human component. The members of the squadron don’t yet know why this pilot ran out of fuel, but they do know how. The pilot failed to monitor his fuel state according to the brief. Now they need a why. Yes, airplanes are going to run out of fuel, but if they run out of fuel when they’re not supposed to, there was a human error involved. Think about it. But why? Why the error?

  Only on the rarest of occasions will the “why” of a problem come down to personal negligence—a single person doing a terrible job. People errors are the exception, not the rule. More often than not, organizational processes, organizational behavior, and system failures are the true culprits. Look at this in macro terms. An individual may have made an active human error, but what specific organizational process, organizational behavior, or system failure contributed to that error? What was the cause of that error? The answer is found through what we call root cause analysis.

  Root causes are big things, systemic things. They’re usually hard to see—and often not analyzed—because they’re largely latent issues. Something about the culture of the organization or the training or the attitude may have contributed to the cause. These things are hard to see. To find them, we use two approaches: We look for a possible breakdown in the Flawless Execution cycle, or, we use an analysis tool we call LOTCD.

  Using the example of the F-15 that ran out of gas, the squadron would analyze the error as shown below.

  Was the root cause:

  Leadership?

  Organization?

  Teamwork?

  Communication?

  Discipline?

  Leadership

  Let’s say that in this pilot’s squadron the leadership subtly created a maverick attitude about fuel management. In fact, the squadron commander always came back with a warning light blinking in his cockpit, indicating low fuel. Other pilots picked up on this, which led subconsciously to a practice of stretching their fuel—in fact, it led to a feeling that they weren’t tough enough if they didn’t stretch their fuel. Countless pilots in the squadron landed with the warning light blinking and the female computer voice known as Bitchin’ Betty saying “low fuel” in their helmets. Over time, well… they became numb to it.

  In time, this leadership error came home to roost. Maybe bad weather delayed the mission. Maybe it was an imperceptible matter of minutes—minutes! Whatever the circumstance, the cause was leadership. The pilot heard the warning signal for low fuel, was probably conditioned to ignore it, and, with horror, finally realized that the sudden deceleration of his jet was terminal—he was out of fuel. It wasn’t the pilot; it was an organizational issue. In this case, they’d probably have to get rid of the commander.

  Organization

  The flight leader briefed the team late (he was disorganized) and didn’t cover fuel contingencies. In turn, the flight members stepped to their jets late, which caused a late takeoff. Trying to make up time in the air, the leader took off with full power, causing everyone behind him to tap into their afterburners just to catch up. This in turn caused the trailing flight members to have less fuel than the leader. The last pilot was doomed from the start—he started his mission low on fuel. He had no chance. Not his fault. The disorganized flight leader triggered a hopeless chain of events.

  Teamwork

  Let’s say that this was a flight of four aircraft. Unforgivable. There were three other pilots who could have helped get the flight organized. Any of the pilots could have spoken up on the radio to alert the leader of an unsafe situation developing around fuel. We call this Cockpit Resource Management, or CRM. Pilots on a mission have to act as a team.

  Communication

  This is obvious. Anyone could have broadcast a fuel state.

  Discipline

  At any time, any of the flight members could have said “Stop.” They all launched with the same fuel; they were all getting low. But none of the pilots had the discipline to say, “Abort, low fuel.” Discipline.

  Perhaps the cause can’t be found in LOTCD. Then what? Now is the time to search for causes against the Flawless Execution cycle. Was the error a breakdown in one of the four phases?

  Was there a flaw in one or more of the Six Steps to Mission Planning?

  Was the brief confusing or imprecise?

  Was task saturation an issue? Were checklists, cross-checks, or mutual support used to mitigate task saturation?

  Debriefs? Had there been a lack of prior debriefs or effective debriefs preceding the mission?

  Analyzing the Flawless Execution cycle can be invaluable. Maybe not everyone understood the mission objective, or the threats, or what kind of assets we had in play to help us win. Maybe we failed to identify all of our available resources, or maybe we didn’t spend enough time on step six, the contingencies. What went wrong? Let’s say there were five SAM sites around a target—a SAM trap, as they call it. But let’s also say that they briefed only one SAM site around the target. That is a deadly difference. The pilots can neutralize one SAM site, but a SAM trap? SAM traps are incredibly deadly. They can’t notch away to break the lock of one missile without being acquired by another. They turn away from one, are acquired by the second, and turn away only to be acquired by the third. So they ran into a SAM trap and lost an F-15. They briefed a solo SAM site; they ran into a trap. That is a catastrophic failure in the intelligence briefing, not a pilot error. That’s the why of the error on this mission. The cycle broke down.

  Capturing Data

  Now is the time to pay close attention to how you capture data. It does matter how you write data on the whiteboards. (See the accompanying figure, Recurring Root Causes.)

  Here is how fighter pilot squadrons do it: First, put just one success and one error on each whiteboard. Next, write down the active human error—the how. Next, use LOTCD and/or the Flawless Execution cycle to identify possible root causes—the why. When asking why, brainstorm multiple root causes. Unlike the cause or the how, in the root cause dissection you will find multiple possibilities. (Under cause there will be only one.)

  Now you have a room filled with whiteboards standing shoulder to shoulder; errors and successes arrayed left to right, one to a board.

  Step back and look at your boards. Notice any patterns?

  RECURRING ROOT CAUSES

  The Wheels Fell Off

  Understandably, when a wheel came off a car during a test drive on a company track, the quality control department of this major auto manufacturer asked a lot of questions. How on earth did a wheel fall off? They determined the active human error was that the lug nut was improperly tightened. But, as they dug deeper, they kept asking why. Why was the lug nut not tight enough? Because the tool was used at the wrong setting. Why was that? Because the person who usually did that kind of work was sick and someone covered for him. Why was that? Because the person who was sick had not communicated that he would be sick and a backup was not properly trained. How could that not have been anticipated? The final answer was that there was not a standard operating procedure for that step on the production line.

  Rarely is an error caused by a person. People are usually little more than the unwitting symptoms of organizational problems.

  L: LESSONS LEARNED

  Now it’s time to look for patterns. In other words, we’re looking for a prominent or recurring root cause that bridges together several errors or successes. It doesn’t happen all the time. But if we find such a thing, take note: We have a significant problem (or opportunity). If we have an opportunity, we identify a way to get the message across the organization. If we have a problem, we identify a fix. The fix is what we call a lesson learned.

  A lesson learned is not something small. Yes, there are plenty of glitches that surface in a mission debrief, and you learn from your analysis of these glitches and help each other, but a lesson learned is bigger than that. A lesson learned comes out of a pattern of recurring root causes. Here’s a simple test. How do you know if you have a lesson learned? Ask yourself this: Should it be disseminated across the entire organization? Should the entire company (or department) change the way it does things because of the problem we’ve identified? If your answer is yes, you likely have a lesson learned.

  As an example, let’s look at a major sales call. Let’s assume that a salesman is having trouble closing the deal. That’s a legitimate issue, and you can help that salesperson improve. But it’s worse than that. Let’s say three or four salespeople had trouble closing. Guess what? That’s a recurring root cause. That’s a major problem. The entire company needs to do something fast. Maybe it’s time to retrain everyone on closing skills. Maybe the company needs to create a checklist on the proper steps to closing a sale. Either way, the problem was a root cause problem and the fix goes up the organization to the CEO and down the organization to the training department. That is a lesson learned. One salesman is just one salesman; two identical problems with two different salesmen means there is a recurring cause. The proposed fix—retraining the sales force on closing skills—is a lesson learned.

  Another example: the NFL. We know that after every game, the officials debrief. Let’s say that after one particularly hard game they questioned how they were calling pass interference. Okay. But let’s say that across the entire league there is a pattern of officiating errors related to pass interference. That’s different. No longer is it one official or one bad day on the gridiron. Now it’s a recurring root cause, and that requires a fix. The fix? Maybe they need something called an instant replay system.

  Here’s the message: A lesson learned is the result of a pattern of data points identified from the debrief that identify the root causes of an error that’s being repeated and repeated and repeated. The fix has to be a change in the organizational processes, the organization’s behavior, or the system. Think of this as something that affects your entire business silo, or your division (or you!) on a macro level. Identify what needs to be changed throughout the system in order to preclude future execution errors. That is a lesson learned.

  A note of caution: Don’t overdo it. How often can a mistake be so fundamental that the entire organization has to change? Not often. So how do you know if your mission team’s lesson learned is a symptom of a bigger problem? In fighter aviation, several flight leaders fly missions at the same time. Lessons learned on one mission may be identical to those on another mission. Flight leaders let their commanding officer know what these are, and he or she compares the lessons learned to those of other squadrons. If there is a true, systemic root cause, it becomes an organizational issue.

  Lessons learned are systemic issues.

  A lesson learned is always turned into a process.

  The process is always communicated as a precise series of steps—or actions—to take.

  T: TRANSFER LESSONS LEARNED THROUGHOUT YOUR ORGANIZATION

  Always tell people what you’ve learned. It is not good enough to simply identify a lesson learned. You have to communicate it throughout the organization. The specific fix you recommend needs to be clearly written so that others within your organization can understand the issue and benefit from the solution even if they were not there. You have to get the lesson learned out of your isolated debrief and into the veins of the company. You have to transfer knowledge quickly and help accelerate everyone’s learning experience.

  Let’s go back in history. In Vietnam, if a fighter pilot could survive his first ten missions, there was a good chance he would survive 100 missions and go home to his family. But the first ten missions were tough—most of the pilots lost were lost inside of ten missions. To survive long enough to go home, a pilot first had to get through those initial ten missions.

  As it happened, some squadrons were more successful in those first ten missions than others. What the Air Force discovered was that some squadrons did the full-on plan-brief-execute-debrief process, but some did not. Those that did kept more pilots alive than those that didn’t. It was learned that not only was debriefing vitally important but that communicating the lessons learned—accelerating the learning curve—was enough to give a three-mission pilot the tools and skills of a thirty-mission pilot. It was all about survival. And it was that simple.

  Write out a lesson learned as if your fellow fighter pilot were sick that day and had no idea of what went on during the mission or in the debrief that followed. Presume the reader is a rookie. It has to be that transparent.

  Be specific. I can’t fly my jet by a lesson learned called organization. I need a step-by-step process to implement that lesson learned when I fly tomorrow.

  In Business

  At Afterburner, we want learning experiences transferred throughout our organization as quickly as possible. We have teams on the move every day of the week. Thus, after every seminar, no matter where they are, be it in Australia, the United States, or Europe, our main speakers and facilitators debrief following the STEALTH process. We post small lessons learned on our company intranet, and everyone in the company logs on daily to update themselves.

  At the end of the year, the lessons learned are evaluated by a group charged only with the job of evaluating input from the teams. Those that are seen as true lessons learned are incorporated into our standards guide.

  H: HIGH NOTE—POSITIVE SUMMATION

  You have to end the debrief on a high note. Just as tone was set in the beginning of the debrief, the leader must set the tone again at the end. Debriefs are tough love; end it with love.

  After dissecting a mission, admitting errors, and underscoring successes, you have to end the debrief with something positive. Did you take out the target? Say so. Despite a number of problems, did the mission end as briefed? Say so. Even if you ended the mission with a moderate fiasco, at the very least you can point out the fact that the debrief process is positive and that it’s helping the group execute better in a rapidly changing environment. Always end the debrief with an honest, positive assessment of the team’s execution. Nothing contributes to failure more than hang-dog faces. Send your pilots out on an up-note.

  COMPLETING THE CYCLE

  How does one put debriefs into proper perspective? As I said, it is at once our critique, our training, our intelligence gathering, the transfer mechanism for lessons learned, the catalyst to accelerate learning experiences, the foundation for the next mission brief—it’s all these and more, with endless eddies spinning off into other disciplines. Lessons learned get passed up or down the execution engine from the associate level to the leadership ranks or down to the training level by keeping the managers in the loop after each mission cycle. This high tempo gives the managers time to make adjustments in their plans on an almost daily basis using the feedback from their direct reports.

  CHAPTER 16

  Standards

  At the beginning of this book I was quick to define what I meant by Flawless Execution. Is any mission entirely flawless? Not at all. In fact, as I said, you give up a lot of points on the way to winning a basketball game—and every mission in my F-15 had a glitch, however small. But each mission improved the next because of the Air Force’s unique processes for accelerating our learning experiences. In the aggregate, we started to outperform the competition—outfly the enemy—because our process had one unmovable aiming point, which was, of course, Flawless Execution.

  But what happens when the plan breaks down? What happens when we’re not there? Let’s face it, we can’t control everything. We can’t be everywhere at once, and we sure can’t do it all ourselves. The answer is standards. When the plan breaks down, you need standards to fall back on. If you’re not around, if you’ve been shot down, if bad weather has killed all the phone lines, you have to know in your gut that your people have at the very least a strong set of standards to keep them performing their mission. You have to be able to rely on a minimum level of execution no matter how inexperienced a person is or how bad a hand-off might be or how incommunicado you’ve become. You have to be able to rely on a minimum level of execution based on standards alone.

  Now, let’s dispose of some common confusion. Standards are not the same as compliance. Often, when I introduce this concept to companies, I hear back: “Murph, we have rigid standards—FDA food labeling, EEOC, emission standards. We’ve got standards!” No you don’t. You’re confusing standards with compliance. Compliance is part of your environment.

  Equally true, standards do include, but mean a lot more than, dress and etiquette. At our company we indeed have dress and etiquette standards. Our manuals spell out what everyone wears when they’re traveling and when they’re training, down to the color of their T-shirt under their flight suit (it’s black). We also have etiquette standards. We answer our phones within three rings and always greet the caller professionally. But inside the Flawless Execution Model, standards are different things. They are the things that allow you to keep the mission alive even if, like Apollo 13, you’ve had a tank explode, you’ve lost your electricity, and it’s freezing cold in the capsule.
