
Labyrinth: The Art of Decision-Making


by Pawel Motyl


  One crucial paragraph of the CAIB final report shows how these different aspects combined to create an ideal environment for bad decision-making:

  The organizational causes of this accident are rooted in the Space Shuttle Program’s history and culture, including the original compromises that were required to gain approval for the Shuttle Program, subsequent years of resource constraints, fluctuating priorities, schedule pressures, mischaracterizations of the Shuttle as operational rather than developmental, and lack of an agreed national vision. Cultural traits and organizational practices detrimental to safety and reliability were allowed to develop, including: reliance on past success as a substitute for sound engineering practices [... ] organizational barriers which prevented effective communication of critical safety information [... and] lack of integrated management across program elements. 15

  In one of his statements, General Duane Deal expressed his concerns for the future of NASA, among them the Agency’s reluctance to implement change, which had typified the organization in earlier years and was deeply rooted in its culture:

  History shows that NASA often ignores strong recommendations; without a culture change, it is overly optimistic to believe NASA will tackle something relegated to an “observation” when it has a record of ignoring recommendations. 16

  In the following years, though, NASA showed that General Deal was wrong on this count and that his concerns were unfounded. After the Columbia tragedy, the Agency opted for full transparency in its activities, including communicating about the ongoing process of cultural change in the organization. The aim of the change was simple: to maintain the positive aspects of functioning as a commercial organization while restoring the cultural attitudes and behaviors from the Apollo 13 era and, as a result, radically improving flight safety levels. These cultural changes were to be accompanied by important structural changes in response to the CAIB recommendations. An initial review of the areas requiring change was conducted by an external group led by Al Diaz, director of the Goddard Space Flight Center. Diaz’s group highlighted seven key issues: leadership, learning, communication, processes and rules, technical capabilities, organizational structure, and risk management.

  In order to fully understand the prevailing situation in the Agency, as well as the views of employees and contractors, NASA also turned to the consulting firm Behavioral Science Technology Solutions (BST), requesting that they conduct a detailed analysis of the organizational culture and employees’ opinions, and also check how NASA’s four core values—safety, integrity, teamwork, and excellence—were being interpreted. BST was also expected to propose specific organizational changes for the years 2004–09.

  The results of the project, published in the consultants’ final report, were like a bucket of cold water. It turned out that, in the eyes of employees, the organization’s longtime values had been warped by the pressures of functioning as a commercial operation. Employees admitted, for example, that safety was a frequent topic within the organization, but conversations about it tended to be abstract, with little focus on specific solutions. In other words, there was a lot of talk, but no walk. Similarly, integrity was treated as a purely theoretical matter that didn’t translate into other areas of the Agency’s operations—improving management quality, shaping decision-making processes, or improving the flow of information. It’s hardly surprising, then, that open communication suffered: while people felt comfortable talking to colleagues of equal status, those above them in the hierarchy discouraged openness, especially when it meant talking about inconvenient truths—problems and risks.

  The communication problem appeared to be ubiquitous, as the flow of information between the Agency’s management and individual centers (including the Langley and Ames research centers, the Goddard Space Flight Center, the Kennedy Space Center, and the Johnson Space Center) was universally assessed as very poor. Even a value such as “teamwork” turned out to be meaningless. NASA’s employees were quite unambiguous on this point: “We’re not treated like experts. The organization doesn’t value or respect us.” In this way, the BST experts revealed a dangerous schism: NASA employees felt engaged in their work, but not engaged with the Agency. This led to a dwindling sense of responsibility for the ultimate success of any undertaking, summed up in the attitude of “I have to do my job well (and be engaged in the process), but I’m not interested in the wider perspective. I’ll just keep my head down and let others worry about everything else.” The situation was even worse in the support functions, where there was absolutely no broader vision or understanding of those functions’ influence on the end results.

  A further problem was the lack of a precise definition of the role of contractors in decision-making. According to NASA’s technical staff, subcontractors and their opinions were key from the point of view of mission safety. Unfortunately, the Agency’s managers had never worked out any structures to involve representatives of outside firms in analytical processes; whenever they did become involved, it was on an ad hoc basis. The opinions of contractors were only considered when they suited the purposes of mission controllers, which is precisely what happened in the case of the now-infamous dialogue with Morton-Thiokol the day before the Challenger launch.

  The BST experts, like the CAIB members, singled out the organization’s history as the direct cause of the situation, since it had shaped the prevailing organizational culture and decision-making methods, and they criticized its leadership (or lack thereof in many areas). The fundamental accusations leveled against NASA were essentially twofold. First, lower-level employees were being placed under increasing time pressure, which left them less and less room for situational analysis and safety, and those who did voice concerns found themselves having to prove that a problem existed, rather than the burden falling on others to show that it didn’t. Second, the leaders were failing to follow NASA’s own internal procedures.

  The BST experts recommended that the Agency conduct a long-term, wide-scale cultural transformation program, including acting to resume shuttle flights, which had been suspended following the Columbia disaster. The overriding goal, though, was to carry out a process that seemed impossible, not only to those inside NASA, but also to even the most casual observer: to radically change the attitudes of NASA employees, especially those in management positions.

  NASA implemented the majority of the BST recommendations, including rigorous training for Agency managers in risk management and decision-making under uncertainty, 360-degree feedback for key mission management personnel, a long-term leadership development program, and a system of employee assessment based on behavioral competencies. Additionally, the Agency’s subcontractors, who had previously been kept at a distance, were engaged in much of the decision-making. Inside the organization, a range of structural changes was introduced, reflecting the increased emphasis on mission safety.

  Engineers were promoted to much higher positions in the hierarchy, which also reinforced the newly prioritized emphasis on safety. This led to deeper analysis of events within NASA and a better flow of information between units. The heads of Safety and Mission Assurance, Programs Analysis and Evaluation, and Programs and Institutional Integration, as well as the chief engineer, all reported directly to the Office of the Administrator. These changes radically elevated the status of expert circles, allowing them to participate in discussions to a far greater degree and exert greater influence over decision-makers.

  BST also introduced five rules of operation to guide NASA’s employees and contractors, describing the organizational culture the Agency was aiming for:

  Open and clear communication is encouraged and modeled. People at every level of the organization must be committed to the free and unobstructed flow of information up and down within the organization. This means having the courage to question assumptions, and the willingness to ask even seemingly obvious questions, to listen actively and be ready to learn. It describes a value for shared inquiry that is unimpeded by concern about “looking bad.” [... ] Open and clear communication means that people feel free from intimidation or retribution in raising issues. [... ]

  Rigorously informed judgment is the sole basis for decision-making. Robust processes for analysis, judgment and decision-making must be flawlessly executed without cognitive bias. The only basis for confidence is properly understood data that meet safety and reliability criteria. [... ] Cognitive bias is understood by decision-makers and leaders are committed to eliminating it as a source of influence on decision-making. Decisions are based on scientifically grounded assessment of risk.

  Personal responsibility is taken by each individual. Each individual is responsible for upholding a safety-supporting culture in what we do and how we do it. Each individual feels a sense of duty, responsibility, and ownership for the safety of every mission in which he or she is involved, and acts accordingly. It is unacceptable to assume that someone else will handle your issues or questions. Each individual is fully engaged in the pursuit of long-term and short-term success, of which safety is an integral part. [... ]

  Integrated technical and managerial competence is our shared value. We require excellence in every aspect of our work. We hold that optimal safety follows from integrated technical and managerial competence. Mission success is accomplished by integrating all aspects of program management: safety, engineering, cost and schedule, across functional and organizational lines.

  Individual accountability is the basis for high reliability. Mission safety results from actions, not just words. Our credibility is built on the consistency between our words and our actions. Procedures, values, objectives and plans are only worthwhile to the extent that they can be reliably executed. We will set new standards of flawless execution in both our management practices and our technical work. Each individual will be accountable for performing to that standard. 17

  The multifaceted nature of the actions taken by NASA can be best summarized in this short paragraph from the BST materials:

  In order to achieve cultural change of this magnitude across a large, decentralized, geographically dispersed agency, perseverance and strong support from senior agency leadership will be required. Cultural effects are systemic and enterprise-wide; accordingly, cultural transformation requires a systemic, enterprise-wide approach.

  Specifically, senior management alignment, focus, openness, teamwork and discipline will be required in ways that have perhaps never before been fully contemplated. Changes will be required in many deeply-embedded organizational systems and processes. Leadership attitudes, beliefs and behaviors will need to change in very significant ways, and sound management practices will be more important than ever. 18

  After various initiatives were launched, BST carried out a follow-up review, which showed a clear change in the organizational culture of NASA in the desired direction.

  In the first two and a half years after the Columbia catastrophe, NASA also implemented almost all of the fifteen changes recommended in the CAIB’s final report as a condition for the resumption of shuttle flights. Twelve of the changes were either fully implemented or carried out in a way that exceeded the experts’ expectations; as for the remaining three, NASA admitted that, due to time constraints, it hadn’t achieved much. On July 26, 2005, the shuttle Discovery was given permission to launch on the first mission since the tragedy. It successfully and safely completed a two-week flight.

  The cultural changes were also noticed by the person who, during the Columbia mission, had battled most fiercely for further analyses to assess the degree of danger to the crew. Rodney Rocha, who led the group of DAT engineers at the time, said in a 2005 interview with the New York Times, “We have a voice. Engineering is more independent than ever, more assertive than ever, and more in your face.” 19

  The change of leader at NASA was also not without significance. On April 13, 2005, Michael Douglas Griffin replaced Sean O’Keefe as administrator; his priority was to redefine NASA’s vision and give the Agency a long-term goal. It was a highly significant shift in priorities, because since the Moon landings there had been no such vision, and the organization’s operations had become increasingly dominated by a commercial-style focus on small details, causing it to lose sight of the bigger picture. Griffin’s successors, especially Charles F. Bolden Jr., who was administrator from July 17, 2009, to January 20, 2017, sustained this strategy.

  Rule #11

  Never stop shaping the organizational culture. It can be your greatest ally, or your worst enemy, in making the right decisions.

  NASA’s history shows how vital a role context plays in the quality of the decisions taken by those in power. In earlier chapters, we looked at techniques for improving decision-making processes at the individual and team levels, but we must always remember the third level, which may, in fact, be the most important: the organizational culture within which decision-makers operate and the pressures of the environment they are subjected to. The case of NASA offers a tragic example of how not to behave after a disaster. Even though the Rogers Commission formulated a series of recommendations following the Challenger tragedy, the organizational culture of the Agency didn’t change at all. A perfectly conducted RCA is meaningless if we don’t then consistently implement and monitor its recommendations over the following years. The end result will be the same as it was for NASA: ignoring black swans, failing to apply an inquiry approach, being buoyed up by past successes, relying on faulty teamwork, and lacking leadership will inevitably lead to another disaster.

  One client I had the pleasure of working for as a consultant was struggling with a similar problem. Founded many years earlier, the company had become the market leader in its industry sector, which led to rapid but uncontrolled growth with no equivalent development of the organization itself. The head count went up, new units were created, new systems were introduced, and procedures expanded, as the board believed that ever-increasing reporting was the key to maintaining the rapid upward trajectory. With time, though, the first cracks started to appear in this plan, although they didn’t compromise the company’s competitive position. The level of internal conflict, however, grew steadily, more and more often requiring intervention at the highest levels and swallowing up the time of the already busy management. Another issue was the gradual emergence of cliques within the organization, as every unit started to fight for its own interests and security. As one of the support function managers put it to me, “There was a hell of a mess, and we all started to lose our bearings. When things began to look dangerous, we all just covered our own backs. We started to be cautious about what information we passed on. As a team, we avoided forming unambiguous opinions. You cover yourself by leaving room for interpretation, and the interpretation is done by someone else. Then you say it was their fault.” With time, this attitude affected not only teams, but individuals, too, who acted to minimize their own personal risk. It isn’t difficult to anticipate the effect this had on decision-making. It became harder and harder to get a clear, concrete opinion on which to base a business decision. The company suffered a classic diffusion of responsibility: everyone focused purely on their own job as a damage limitation measure. At the same time, a number of “orphans” appeared—problems nobody was looking at, because the procedures, structures, or orders didn’t make clear whose wards they were. Everyone at the lower levels could see them, but no one said anything, because not only were they “minding their own business,” they also assumed that someone else would look after the orphans.

  This is, unfortunately, a typical human phenomenon, and not one restricted to business. A tragic example of collective irresponsibility occurred on March 13, 1964, in Queens, New York.

  Twenty-eight-year-old Kitty Genovese worked as a manager in one of the district’s bars; she typically finished work late at night. She rented an apartment on Austin Street in Kew Gardens, a peaceful, middle-class area. On the morning of March 13, Genovese parked her car about 30 yards from the entrance to her building. She never reached her apartment, though, as she was attacked en route by Winston Moseley, a twenty-nine-year-old with no criminal record. Moseley stabbed Genovese twice and then fled because of her screams. It later transpired that a total of thirty-eight people in Kew Gardens had heard her screaming. None of them called the police. A semiconscious Genovese lay on the street where she had fallen. Nine minutes later, Moseley returned and attacked her again. This time, too, he fled, startled by a shout from one of the neighbors, Robert Mozer, who yelled out of his window, “Leave that woman alone!” but did nothing else. Still no one called the police. Moseley returned again, this time raping the dying Genovese, stabbing her one last time, and stealing a small amount of money. Only after this third attack did another neighbor, Karl Ross, call the police, who arrived at the scene barely two minutes later. For Kitty Genovese it was too little, too late—she died in the ambulance on the way to the hospital.

  Two weeks later, Martin Gansberg published a shocking article in the New York Times, in which he described the night’s events and accused the neighbors of indifference. Genovese’s ordeal had lasted a full thirty-five minutes, and the murderer had returned twice. If the police had been called after the first attack, she would have lived. The neighbors the journalist spoke to explained their lack of reaction in various ways: “I thought someone else would call,” “I didn’t want to get involved,” “I was tired”... Karl Ross had actually phoned a friend in Nassau County to ask what he should do before he finally called the police. 20

  Although the expression “callous indifference” is hard to resist in this context, the problem was a result of the bystander effect (also known as bystander apathy), a term coined by social psychologists Bibb Latané and John Darley. In a situation where numerous people observe a disturbing incident, the feeling of individual responsibility becomes divided and diminished, with everyone thinking “someone else will deal with it.” On March 13, 1964, in New York, for over half an hour, thirty-eight people thought precisely that, and as a result a young woman died. It’s no accident that on self-defense courses today, women are instructed not to shout an impersonal “Help!” but to address a specific person in the crowd: “Please help me, sir! Sir, you, the one in the red jacket!” It’s harder to resist such a direct appeal.

 
