Human Error


by James Reason


  As can be seen from the preceding review, there has been a gradual change in the flavour of cognitive theorising. During the last few years, a small number of cognitive psychologists together with a newer breed styling themselves as cognitive scientists and cognitive engineers have begun producing tentative ‘global’ performance models which outline, in deliberately general terms, the principal characteristics of the human information-processing system. This development has gone relatively unremarked since, for the most part, these ‘broad brush’ models have appeared in chapters, books and technical reports, often of an applied nature, rather than in the major psychological journals.

  So that global performance models should not be judged by too restricted a view of what makes a ‘good’ theory, it is worth spelling out some of the major differences in form and purpose between these comprehensive accounts of the cognitive system and the domain-specific models derived from conventional laboratory research (see Table 2.1). For simplicity, I will distinguish them by the terms framework and local models.

  Table 2.1. A comparison of the principal characteristics of local and framework theories.

  Local theories | Framework theories
  Analyse | Synthesise
  Predictive | Descriptive
  Refutable | Subject to paradigm shifts
  Emphasise theoretical differences | Focus on agreements between theories
  Laboratory-based | Derived from natural history and clinical observations
  In the natural science tradition | In the engineering and clinical traditions
  Use experimental techniques to establish causal relations | Make extensive naturalistic observations to derive useful working approximations
  Research strategy: set up experiments to test between theoretical contenders | Research strategy: design studies to identify the conditions that limit generalisations

  At another time or within another discipline, these variations in theoretical style would not need to be discussed, but I am aware that to many experimentalists the ‘working approximations’ embodied in broad models are of little interest (or worse) because they fail to discriminate between the available theoretical alternatives. Global assertions that could have emerged from a wide range of theoretical positions are unlikely to hold much appeal for those whose skills lie mainly in devising ways of testing between such positions. For them, the fact that many or all current theories could generate a particular assertion makes it vacuous. But it is precisely this consensual aspect that attracts the framework theorist.

  For additional commentaries upon the distinctions between local and framework models, the reader is directed to Neisser (1976), Card, Moran and Newell (1983), Broadbent (1984), Reason (1984a), and Norman and Draper (1986). The main point to be made by Table 2.1 is that although broad models have not been fashionable in the ‘new’ cognitive psychology, they are nevertheless in keeping with a number of honourable psychological traditions. It is also important to remind ourselves that modern psychology is a hybrid pioneered by refugees from many different disciplines with many distinct modes of enquiry. The long-standing dominance of ‘binarism’ sometimes causes us to feel unnecessarily apologetic about employing other legitimate means of investigating and representing cognition.

  5. Conclusion: A working framework for human error

  This chapter has tried to show some of the major influences upon the theoretical arguments to be presented later in this book. The review is now concluded, and it is time to take stock. The purpose of this concluding section is to draw out some of the basic assumptions about human cognition that collectively constitute the point of departure for the chapters that follow.

  Is there such a thing as a ‘typical’ framework for cognition in the late 1980s? Notwithstanding their differences with regard to structure, processing and representation, many contemporary models contain some important areas of common ground (see also Reason, 1988).

  5.1. Control modes

  Most frameworks make a distinction, either explicitly or implicitly, between (a) controlled or conscious processing and (b) automatic or unconscious processing. Cognitive activities are guided by a complex interplay between these two modes of cognitive control, which are discussed under the headings of the attentional and schematic modes.

  5.2. Cognitive structures

  For the purposes of theorising about error mechanisms it is convenient to distinguish two structural features of human cognition: the workspace or working memory and the knowledge base. The former is identified with the attentional control mode, the latter with the schematic control mode.

  5.3. The attentional mode

  The attentional control mode—closely identified with working memory and consciousness—is limited, sequential, slow, effortful and difficult to sustain for more than brief periods. It can be thought of as a highly restricted ‘workspace’ into which selected sensory inputs as well as the products of parallel search processes (carried out by the schematic mode) are delivered, and within which powerful computational operators (subsumed under the general heading of inference) are brought to bear upon a very limited number of discrete informational elements in a largely voluntary and conscious manner.

  As the result of the ‘work’ performed upon them, these discrete informational elements may be transformed, extended or recombined. This ‘work’ is usually seen as deriving its ‘energy’ from a strictly finite pool of attentional resources.

  These resource limitations confer the important benefit of selectivity, since several high-level activities are potentially available to the conscious ‘workspace’ at any one time. Operations within this limited workspace are largely freed from the immediate constraints of time and place. Much of this mode’s work is concerned with setting future goals, with selecting the means to achieve them, with monitoring progress towards these objectives and with the detection and recovery of errors.
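  To make these capacity limits concrete, the workspace might be caricatured as a small serial buffer. This is a minimal sketch of my own devising: the capacity figure and the displacement discipline are illustrative assumptions, not claims about the architecture.

```python
from collections import deque

class Workspace:
    """Toy model of the attentional 'workspace': it holds only a
    few discrete elements at once and operates on them serially."""

    def __init__(self, capacity=4):              # assumed small capacity
        self.items = deque(maxlen=capacity)      # new entries displace the oldest

    def attend(self, element):
        """Admit a selected sensory input or a product of the
        schematic mode's parallel search."""
        self.items.append(element)

    def operate(self, inference):
        """Apply an effortful computational operator to each held
        element in turn -- slow, sequential, resource-limited."""
        return [inference(element) for element in self.items]
```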

  5.4. The schematic control mode

  The cognitive system is extremely good at modelling and internalising the useful regularities of the past and then reapplying them whenever their ‘calling conditions’ are supplied by intentional activity or by the environment. The minutiae of mental life are governed by a vast community of specialised processors (schemata), each an ‘expert’ on some recurrent aspect of the world, and each operating over brief timespans in response to very specific triggering conditions (activators). This schematic control mode can process familiar information rapidly, in parallel and without conscious effort. There are no known limits either to the number of schemata that may be stored or to the duration of their retention. By itself, however, this schematic mode is relatively ineffective in the face of change.

  Each schema may be brought into play by a particular set of environmental signals that match aspects of the knowledge structure’s attributes or by ‘descriptions’ passed on from other task-related processors (see Norman & Bobrow, 1979). Since most human activities proceed roughly according to plan, it must also be assumed that the collective activation of these individual schemata is additionally orchestrated by the outputs from the workspace. It is presumed that schema activation follows automatically upon the receipt of matching ‘calling conditions’ produced as the result of workspace activity. It is also assumed that where several schemata might be partially matched by this ‘top-down’ activation, the conflict (posed by the limited capacity of the workspace) is generally resolved in favour of contextually-appropriate, high-frequency knowledge units. For this ‘frequency-gambling’ to occur, it is necessary that each schema should be tagged according to the approximate frequency of its prior employment (see Hasher & Zacks, 1984).
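  The frequency-gambling heuristic can be illustrated in a few lines of code. The schema names, cue sets and frequency tags below are invented for the purpose; the sketch assumes that degree of match and frequency of prior use are the only factors in play.

```python
from dataclasses import dataclass

@dataclass
class Schema:
    name: str
    calling_conditions: set      # cues that match this schema's attributes
    frequency: int               # approximate tally of prior employment

def frequency_gamble(schemata, cues):
    """Return the schema that best matches the current cues; among
    equally good partial matches, gamble on the one employed most
    often in the past (its frequency tag)."""
    def match(schema):
        return len(schema.calling_conditions & cues)
    best = max(schemata, key=lambda s: (match(s), s.frequency))
    return best if match(best) > 0 else None

# A highly practised routine outcompetes a rarer one sharing the same cues.
drive_to_work = Schema("drive to work", {"car", "morning", "main road"}, 900)
drive_to_dentist = Schema("drive to dentist", {"car", "morning", "main road"}, 3)
print(frequency_gamble([drive_to_work, drive_to_dentist],
                       {"car", "morning", "main road"}).name)
# -> "drive to work": the familiar schema captures the action
```

  When the available cues underdetermine the choice, the familiar routine wins by default, a pattern that reappears in the ‘strong-but-wrong’ error forms discussed below.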

  5.5. Activation

  Although their framework model is prototypical in many respects, Shiffrin and Schneider (1977) were clearly wrong in asserting that the long-term store (the knowledge base) contains nodes that “are normally passive and inactive.” Many cognitive theorists believe that the only way to explain the appearance of unintended yet perfectly coherent action slips is to assume (a) that specialist processors are not ‘switched off’ when out of use, but remain in varying states of activation; and (b) that they can receive this activation from sources other than the conscious workspace. Schemata require a certain threshold level of activation to call them into operation. The various sources of this activation can be divided into two broad classes: specific and general activators.

  5.5.1. Specific activators

  These activators bring a given schema into play at a particular time. Of these, intentional activity is clearly the most important. Plans constitute ‘descriptions’ of intended actions. For adults, these descriptions usually comprise a set of brief jottings on the mental scratchpad. The more frequently a particular set of actions is performed, the less detailed are the descriptions that need to be provided by the higher levels. But this steady devolution of control to schemata carries a penalty. To change an established routine of action or thought requires a positive intervention by the attentional control mode. The omission of this intervention in moments of preoccupation or distraction is the most common cause of absent-minded slips of action (Reason, 1979).

  5.5.2. General activators

  These activators provide background activation to schemata, irrespective of the current intentional state. Of these, frequency of prior use is probably the most influential. The more often a particular schema is put to work, the less it requires in the way of intentional activation. Quite often, contextual cueing is all that is needed to trigger it, particularly in very familiar environments. Other factors include recency and features shared with other schemata. It is also clear that emotional factors can play a significant part in activating specific groups of knowledge structures.
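  As a rough caricature only (the additive form, the weights and the threshold are assumptions made for illustration, not part of the theory), the joint effect of specific and general activators might be expressed as a threshold unit:

```python
def schema_fires(intentional, frequency_of_use, recency,
                 contextual_cues, emotional_loading,
                 weights=(1.0, 0.6, 0.3, 0.5, 0.4)):
    """Toy additive model: a schema is called into operation when
    its specific activation (intentional) plus its general background
    activation (frequency, recency, context, affect) exceeds a
    threshold. All inputs are notional 0-1 quantities."""
    w_int, w_freq, w_rec, w_ctx, w_emo = weights
    total = (w_int * intentional + w_freq * frequency_of_use +
             w_rec * recency + w_ctx * contextual_cues +
             w_emo * emotional_loading)
    return total >= 1.0   # arbitrary trigger threshold

# A highly practised schema in a very familiar context can fire with
# no intentional activation at all -- the absent-minded slip.
print(schema_fires(intentional=0.0, frequency_of_use=0.9,
                   recency=0.8, contextual_cues=0.9,
                   emotional_loading=0.0))   # -> True
```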

  The factors involved in the specification of cognitive activity will be considered further in Chapter 4. There, it will be argued that error forms arise primarily as the result of cognitive underspecification. This can take many different forms, but its consequences are remarkably uniform; the cognitive system tends to default to contextually-appropriate, high-frequency responses. In the next chapter, however, we will look more closely at the issue of error types.

  3 Performance levels and error types

  * * *

  The purpose of this chapter is to provide a conceptual framework—the generic error-modelling system (GEMS)—within which to locate the origins of the basic human error types. This structure is derived in large part from Rasmussen’s skill-rule-knowledge classification of human performance (outlined in Chapter 2), and yields three basic error types:

  skill-based slips (and lapses)

  rule-based mistakes

  knowledge-based mistakes

  In particular, GEMS seeks to integrate two hitherto distinct areas of error research: (a) slips and lapses, in which actions deviate from current intention due to execution failures and/or storage failures (see Reason, 1979, 1984a, b; Reason & Mycielska, 1982; Norman, 1981; Norman & Shallice, 1980); and (b) mistakes, in which the actions may run according to plan, but where the plan is inadequate to achieve its desired outcome (Simon, 1957, 1983; Wason & Johnson-Laird, 1972; Rasmussen & Jensen, 1974; Nisbett & Ross, 1980; Rouse, 1981; Hunt & Rouse, 1984; Kahneman, Slovic & Tversky, 1982; Evans, 1983).

  The chapter begins by explaining why the simple slips/mistakes distinction (outlined in Chapter 1) is not sufficient to capture all of the basic error types. The evidence demands that mistakes be divided into at least two kinds: rule-based mistakes and knowledge-based mistakes. The three error types (skill-based slips and lapses, rule-based mistakes and knowledge-based mistakes) may be differentiated by a variety of processing, representational and task-related factors, as discussed in Section 2. Next, the GEMS framework is outlined and the differences in the cognitive origins of the three error types are explained, together with their switching mechanisms. The final part of the chapter looks in more detail at the failure modes possible at each of these levels and what factors may shape the resultant errors.

  1. Why the slips-mistakes dichotomy is not enough

  The distinction made in Chapter 1 between execution failures (slips and lapses) and planning failures (mistakes) was a useful first approximation, and can be justified on several counts. The dichotomy falls naturally out of the working definition of error; planned actions may fail to achieve their desired outcome either because the actions did not go as planned or because the plan itself was deficient. It corresponds to meaningful differences in the level of cognitive operation implicated in error production; mistakes occur at the level of intention formation, whereas slips and lapses are associated with failures at the more subordinate levels of action selection, execution and intention storage. As a consequence, mistakes are likely to be more aetiologically complex than slips and lapses.
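  Before examining its limitations, this first approximation can be set out as a simple decision procedure. The sketch is hypothetical; it merely restates the working definitions above in executable form.

```python
def classify_error(plan_adequate, executed_as_planned, intention_retained=True):
    """First-approximation dichotomy from Chapter 1: failures of
    planning are mistakes; failures of execution or of intention
    storage are slips and lapses."""
    if not plan_adequate:
        return "mistake"            # intention formation was faulty
    if not intention_retained:
        return "lapse"              # storage failure (e.g. forgetting a step)
    if not executed_as_planned:
        return "slip"               # execution failure
    return "correct performance"

print(classify_error(plan_adequate=True, executed_as_planned=False))  # -> "slip"
print(classify_error(plan_adequate=False, executed_as_planned=True))  # -> "mistake"
```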

  On this basis, it is tempting to argue that slips and mistakes originate from quite different cognitive mechanisms. Slips could be said to stem from the unintended activation of largely automatic procedural routines (associated primarily with inappropriate attentional monitoring), whereas mistakes arise from failures of the higher-order cognitive processes involved in judging the available information, setting objectives and deciding upon the means to achieve them. But if this were true, one would expect slips and mistakes to take quite different forms. And that is not the case. Both slips and mistakes can take ‘strong-but-wrong’ forms, where the erroneous behaviour is more in keeping with past practice than the current circumstances demand.

  There is also a further difficulty. Certain well-documented errors fall between the simple slip and mistake categories. They possess properties common to both. This problem is illustrated by the following errors committed by nuclear power plant (NPP) operators during five separate emergencies (Kemeny, 1979; Pew, Miller & Feehrer, 1981; Woods, 1982; NUREG, 1985; Collier & Davies, 1986; USSR State Committee on the Utilization of Atomic Energy, 1986).

  (a) Oyster Creek (1979): An operator, intending to close pump discharge valves A and E, inadvertently closed B and C also. All natural circulation to the core area was shut off.

  (b) Davis-Besse (1985): An operator, wishing to initiate the steam and feedwater rupture control system manually, inadvertently pressed the wrong two buttons on the control panel, and failed to realise the error.

  (c) Oyster Creek (1979): The operators mistook the annulus level (160.8 inches) for the water level within the shroud. The two levels are usually the same. But on this occasion, the shroud level was only 56 inches above the fuel elements (due to the valve-closing error described above). Although the low water level alarm sounded 3 minutes into the event and continued to sound, the error was not discovered until 30 minutes later.

  (d) Three Mile Island (1979): The operators did not recognise that the relief valve on the pressurizer was stuck open. The panel display indicated that the relief valve switch was selected closed. They took this to indicate that the valve was shut, even though this switch only activated the opening and shutting mechanism. They did not consider the possibility that this mechanism could have (and actually had) failed independently and that a stuck-open valve could not be revealed by the selector display on the control panel.

  (e) Ginna (1982): The operators, intending to depressurize the reactor coolant system, used the wrong strategy with regard to the power-operated relief valve (PORV). They cycled it open and shut, and the valve stuck open on the fourth occasion.

  (f) Chernobyl (1986): Although a previous operator error had reduced reactor power to well below 10 per cent of maximum, and despite strict safety procedures prohibiting any operation below 20 per cent of maximum power, the combined team of operators and electrical engineers continued with the planned test programme. This and the subsequent violations of safety procedures resulted in a double explosion within the core that breached the containment, releasing a large amount of radioactive material into the atmosphere.


  Errors (a) and (b) are clearly slips of action. The intentions were appropriate enough, but the actions were not executed as planned. Similarly, errors (e) and (f) can be categorized fairly unambiguously as mistakes; the operators’ actions went as planned, but the plans were inadequate to achieve safe plant conditions. But errors (c) and (d) do not readily fit into either category. They contain some of the elements of mistakes in that they involved improper appraisals of the system state; yet they also show sliplike features in that ‘strong-but-wrong’ interpretations were selected. These errors can perhaps best be described as arising from the application of inappropriate diagnostic rules, of the kind: if (situation X prevails) then (system state Y exists). In both cases, rules that had proved their worth in the past yielded wrong answers in these extremely unusual emergency conditions.
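  The diagnostic-rule notion can be sketched as a toy production system. The rule contents and strength values below are invented to echo the Oyster Creek example, and ‘strength’ stands for a rule’s record of past success; none of this is drawn from the incident reports themselves.

```python
# Toy production system: rules of the form
#   if (situation X prevails) then (system state Y exists).
rules = [
    {"if": "annulus level normal", "then": "shroud level normal", "strength": 0.95},
    {"if": "annulus level normal", "then": "shroud level low",    "strength": 0.05},
]

def diagnose(situation):
    """Among rules whose condition matches the observed situation,
    apply the strongest -- the one most often right in the past."""
    matching = [r for r in rules if r["if"] == situation]
    return max(matching, key=lambda r: r["strength"])["then"] if matching else None

# Under the rare conditions of the emergency the two levels had
# diverged, so the strong-but-wrong rule yields a false diagnosis.
print(diagnose("annulus level normal"))   # -> "shroud level normal"
```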

  One way of resolving these problems is to differentiate two kinds of mistake: rule-based mistakes and knowledge-based mistakes. Such a distinction is in accord with the symptomatic and topographic categories discussed in the previous chapter. It also allows us to identify three distinct error types, each associated with one of the Rasmussen performance levels. These are summarised in Table 3.1, and the grounds for their discrimination are considered in the next section.

  2. Distinguishing three error types

 
