Human Error

by James Reason


  (a) Insufficient consideration of processes in time: Subjects were more interested in the way things are now than in considering how they had developed over previous years. For example, they concentrated on the amount of money currently in the city treasury without regard for the ups and downs of its previous financial fortunes.

  (b) Difficulties in dealing with exponential development: Processes that develop exponentially have great significance for systems in either growth or decline, yet subjects appeared to have no intuitive feeling for them. When asked to gauge such processes, they almost invariably underestimated their rate of change and were constantly surprised at their outcomes. This means, for example, that they had virtually no appreciation of what was meant by, say, a 6 per cent annual growth in the number of cars registered to citizens.
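The scale of this misjudgement is easy to demonstrate numerically. The sketch below (the starting figure of 100,000 cars is invented for illustration; only the 6 per cent rate comes from the text) shows how compound growth at that rate roughly doubles a quantity every 12 years, far outpacing the linear trend most people intuitively extrapolate.

```python
# Compound growth at 6 per cent per year roughly doubles a quantity
# every 12 years -- far faster than linear intuition suggests.
def grow(initial, rate, years):
    """Value of `initial` after `years` of compound growth at `rate`."""
    return initial * (1 + rate) ** years

cars = 100_000          # hypothetical starting number of registered cars
for year in (0, 12, 24, 36):
    print(year, round(grow(cars, 0.06, year)))
```

After 36 years the fleet is more than eight times its original size, whereas a linear reading of "6 per cent a year" would predict only about a threefold increase.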

  (c) Thinking in causal series instead of in causal nets: When dealing with complex systems, people have a marked tendency to think in linear sequences. They are sensitive to the main effects of their actions upon the path to an immediate goal, but remain unaware of their side effects upon the remainder of the system. In a highly interactive, tightly coupled system, the consequences of actions radiate outwards like ripples in a pool, but people can only ‘see’ their influences within the narrow sector of their current concern (see also Rasmussen, 1986; Brehmer, 1987).

  Among the poor performers, two maladaptive styles were especially noteworthy:

  (a) Thematic vagabonding: This involves flitting from issue to issue quickly, treating each one superficially. No one theme is pursued to its natural conclusion. In some cases, subjects pick up topics previously abandoned, apparently forgetting their earlier attempts. Doerner (1987, p. 101) interpreted this as escape behaviour: “Whenever subjects have difficulties dealing with a topic, they leave it alone, so that they don’t have to face their own helplessness more than necessary.”

  (b) Encysting: On the surface, this seems to be the exact opposite of vagabonding. Topics are lingered over and small details (e.g., school meals) attended to lovingly. Other more important issues are disregarded. In reality, however, both vagabonding and encysting are mediated by the same underlying tendencies: bounded rationality, a poor self-assessment and a desire to escape from the evidence of one’s own inadequacy.

  Of particular interest was the way subjects behaved when things went badly wrong. Critical situations provoked what Doerner has termed an intellectual emergency reaction, geared to produce rapid responses. Overall, this could be characterised as a reduction in intellectual level: thinking reduces to reflexive behaviour. There is a marked diminution in phases of self-reflection in which subjects pause to evaluate their progress and previous actions. Planful thinking degrades into the production of disconnected and increasingly stereotyped actions.

  The experience of repeated failure on the part of poor performers brings with it a further set of ‘pathologies’. They take greater risks, apparently driven by the need to master the situation at any price. There is a marked increase in their willingness to bend the rules: whether a particular course of action involves violating some acceptable practice becomes subordinate to the achievement of the immediate goal. Their hypotheses become increasingly more global in character: all phenomena are attributed to a single cause. Doerner (1987, p. 107) states: “Such reductive hypotheses are very attractive for the simple reason that they reduce insecurity with one stroke and encourage the feeling that things are understood.”

  Confirmation biases become more marked with the experience of failure. In the beginning of the test run, all subjects looked for confirming rather than disconfirming evidence. Later, however, the good subjects would adopt strategies designed to provoke the refutation of their current hypotheses. Poor subjects, on the other hand, grew increasingly more single-minded in their search for confirmation.

  6.11. Problems of diagnosis in everyday situations

  In a recent and highly ingenious set of studies, Groenewegen and Wagenaar (1988) investigated people’s ability to diagnose problems in real-life situations at the knowledge-based level of performance. The subjects were asked to perform two kinds of task: they diagnosed what had gone wrong in some everyday problem situation or they identified the symptoms for which some explanation was needed.

  A number of recurrent difficulties were observed. People were not at all good at the diagnosis task. On their first attempts, only 28 per cent of the subjects produced complete diagnoses. The major difficulty appeared to lie in the identification of symptoms rather than in the ability to generate plausible event scenarios. Part of the trouble arose from people’s efforts to search for symptoms and to create possible event scenarios at the same time. They identify a few symptoms and then use these as a basis for generating explanatory stories. In so doing, they are frequently unaware that some of the symptoms they have incorporated into their scenarios do not require an explanation. The process is one of continual interchange between observed symptoms and story elements and the two become increasingly confused.

  Groenewegen and Wagenaar reasoned that if the initial symptom identification was the main problem, diagnoses should be significantly improved by providing support for the diagnostic phase of the task. They did this in two ways. In one case, they gave subjects a list of the relevant symptoms. This increased the number of correct diagnoses from 28 to 48 per cent. The second type of support involved directed questions that forced people to make explicit how their initial diagnoses explained the symptoms supplied by the experimenters. Inducing this ‘active verification’ frame of mind increased the number of correct diagnoses to 69 per cent. Those who were not helped by this type of support failed to see that their diagnoses conflicted with the facts of the situation as originally described.

  The root of the problem in everyday diagnoses appears to be located in the complex interaction between two logical reasoning tasks. One serves to identify critical symptoms and those factual elements of the presented situation needing an explanation. The other is concerned with verifying whether the symptoms have been explained and whether the supplied situational factors are compatible with the favoured explanatory scenario. Difficulties arise not because people lack the necessary creativity to generate scenarios (they can do this well enough), but because they fail to apply strictly logical thinking both to the initial facts and to the products of scenario generation. This, as we have seen earlier, is a familiar theme in accounts of knowledge-based processing.

  7. Summary and conclusions

  This chapter began by presenting evidence for the existence of three basic error types: skill-based slips and lapses, rule-based mistakes and knowledge-based mistakes. It was argued that these could be variously differentiated along several dimensions: type of activity, focus of attention, predominant control mode, predictability, relative abundance, the influence of situational factors, detectability, mode of error detection and relationship to change.

  The possible origins of these basic error types were then located within the Generic Error Modelling System, or GEMS. Errors at the skill-based level were attributed mainly to monitoring failures. Most usually, these involved inattention: the omission of a high-level check upon behaviour at some critical point beyond which routine actions branched towards a number of possible end states. The failure to bring the conscious workspace ‘into the loop’ at these critical points generally causes actions to run, by default, along the most frequently travelled route when the current intention is otherwise. Slips and lapses also arise from overattention: when a high-level enquiry is made as to the progress of an ongoing action sequence, and the current position is assessed as either being further along or not as far as it actually is.

  Mistakes at the rule-based and knowledge-based levels are associated with problem solving. A key feature of GEMS is the assertion that problem solvers always confront an unplanned-for change by first establishing (often at a largely automatic pattern-matching level) whether or not the local indications have been encountered before. If the pattern is recognised—and there are powerful forces at work to establish some kind of match—a previously established if (condition) then (action) rule-based solution is applied. Only when this relatively effortless pattern-matching and rule-applying procedure fails to provide an adequate solution will they move to the more laborious mode of making inferences from knowledge-based mental models of the problem space and, from these, go on to formulate and try out various remedial possibilities.
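This pattern-match-first, reason-later ordering can be rendered as a minimal control-flow sketch. The rule store, the matching test and the knowledge-based fallback below are illustrative placeholders, not part of Reason's formulation.

```python
# A toy sketch of the GEMS ordering: stored if-(condition)-then-(action)
# rules are tried first; effortful knowledge-based inference is invoked
# only when no familiar pattern matches.
def gems_respond(indications, rules, knowledge_based_reasoning):
    """Apply the first matching stored rule; otherwise fall back to
    reasoning from a mental model of the problem space."""
    for condition, action in rules:
        if condition(indications):        # local pattern recognised?
            return action                 # apply the stored solution
    # no stored pattern fits: move to the laborious knowledge-based mode
    return knowledge_based_reasoning(indications)

rules = [(lambda signs: "alarm" in signs, "run emergency checklist")]
print(gems_respond({"alarm"}, rules, lambda signs: "derive a novel plan"))
```

The sketch also captures why rule-based solutions dominate in practice: the loop over familiar rules is cheap and is always consulted first, so the costly fallback is reached only by elimination.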

  It was argued that rule-based mistakes fell into two broad categories: those associated with the misapplication of good rules (i.e., rules of proven worth) and those due to the application of bad rules. In the case of the former, several factors conspire to yield strong-but-wrong rules: exceptions to general rules are truly exceptional, countersigns may be missed in a mass of incoming data or explained away, and higher-level, more general rules will be stronger than more specific ones. Bad rules, on the other hand, can arise from encoding difficulties or from deficiencies in the action component. The latter were considered under three headings: wrong rules, inelegant or clumsy rules and inadvisable rules.

  Knowledge-based mistakes have their roots in two aspects of human cognition: bounded rationality and the fact that knowledge relevant to the problem space is nearly always incomplete and often inaccurate. Several specific ‘pathologies’ associated with knowledge-based problem solving were then considered: selecting the wrong features of the problem space, being insensitive to the absence of relevant elements, confirmation bias, overconfidence, biased reviewing of plan construction, illusory correlation, halo effects, and problems with causality, with complexity and with diagnosis in everyday life.

  By now, we have reviewed a wide variety of data relating to the more systematic varieties of human error. We have also attempted to integrate the basic error types within a broad theoretical framework, the generic error-modelling system. In the next chapter, we will examine in more detail how cognitive operations are specified and why it is that various forms of underspecification lead to contextually-appropriate, high-frequency error forms. In particular, we will seek to link the widespread occurrence of these forms to the ‘computational primitives’ of the cognitive system: similarity-matching and frequency-gambling.

  4 Cognitive underspecification and error forms

  * * *

  In Chapter 1, a distinction was made between error types and error forms. Error types are differentiated according to the performance levels at which they occur. Error forms, on the other hand, are pervasive varieties of fallibility that are evident at all performance levels. Their ubiquity indicates that they are rooted in universal processes that influence the entire spectrum of cognitive activities.

  The view advanced in this chapter is that error forms are shaped primarily by two factors: similarity and frequency. These, in turn, have their origins in the automatic retrieval processes—similarity-matching and frequency-gambling—by which knowledge structures are located and their products delivered to consciousness (thoughts, words, images, etc.) or to the outside world (action, speech or gesture). It is also argued that the more cognitive operations are in some way underspecified, the more likely it is that error forms will be shaped by the frequency-gambling heuristic.

  If the study of human error is to make a useful contribution to the safety and efficiency of hazardous technologies, it must be able to offer their designers and operators some workable generalizations regarding the information-handling properties of a system’s human participants (see Card, Moran & Newell, 1983). This chapter explores the generality of one such approximation:

  When cognitive operations are underspecified, they tend to default to contextually appropriate, high-frequency responses.

  Exactly what information is missing from a sufficient specification, or which controlling agency fails to provide it, will vary with the nature of the cognitive activity being performed. The crucial point is that notwithstanding these possible varieties of underspecification, their consequences are remarkably uniform: what emerge are perceptions, words, recollections, thoughts and actions that recognisably belong to an individual’s well-established repertoire for a given situation. Or, to put it another way, the more often a cognitive routine achieves a successful outcome in relation to a particular context, the more likely it is to reappear in conditions of incomplete specification.
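The underspecification default described above can be sketched as a simple retrieval rule. Everything in the sketch (the cue, the candidate routines and their frequency counts) is hypothetical; it illustrates only the principle that an ambiguous specification is resolved in favour of the highest-frequency contextually appropriate candidate.

```python
# When a cue fails to single out one candidate, 'gamble' on the
# candidate with the highest frequency of past use in this context.
def resolve(cue, candidates, frequency):
    """Return the matching candidate; if the cue is underspecified and
    several match, default to the most frequently used one."""
    matches = [c for c in candidates if cue in c]
    return max(matches, key=lambda c: frequency[c])

freq = {"make tea": 500, "make coffee": 40}   # well-worn vs rare routine
print(resolve("make", ["make tea", "make coffee"], freq))     # underspecified
print(resolve("coffee", ["make tea", "make coffee"], freq))   # fully specified
```

Note that the gamble only shows itself when the specification is incomplete: a fully specific cue retrieves the rare routine without difficulty, which is why frequency biasing is usually invisible in well-attended performance.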

  Figure 4.1. The two modes of cognitive control: attentional control associated with working memory (or the conscious workspace) and schematic control derived from the knowledge base (long-term memory).

  Frequency biasing gives predictable shape to human errors in a wide variety of activities and situations (see Norman, 1981; Reason & Mycielska, 1982; Rasmussen, 1982). The psychological literature is replete with terms to describe this pervasive error form: ‘conventionalization’ (Bartlett, 1932), ‘sophisticated guessing’ (Solomon & Postman, 1952), ‘fragment theory’ (Neisser, 1967), ‘response bias’ (Broadbent, 1967), ‘strong associate substitution’ (Chapman & Chapman, 1973), ‘inert stereotype’ (Luria, 1973), ‘banalization’ (Timpanaro, 1976), ‘strong habit intrusions’ (Reason, 1979) and ‘capture errors’ (Norman, 1981). But irrespective of whether or not the consequences are erroneous, this tendency to ‘gamble’ in favour of high-frequency alternatives when control statements are imprecise is generally an adaptive strategy for dealing with a world that contains a great deal of regularity as well as a large measure of uncertainty.

  1. The specification of mental operations

  Correct performance in any sphere of mental activity is achieved by activating the right schemata in the right order at the right time. Cognitive processes receive their guidance from a complex interaction between the conscious workspace and the schematic knowledge base, as summarised in Figure 4.1 (see also Chapter 2). The former specifies the strategic direction and redirection of action (both internal and external), while the latter provides the fine-grained tactical control.

  Schemata require a certain threshold level of activation to call them into operation. The various sources of this activation can be divided into two broad classes: specific and general activators (see Figure 4.2).

  Figure 4.2. The combined influence of specific and general schema activators. Schemata are brought into play by both sets of activators, but only the specific activators are directly related to the current intention.

  1.1. Specific activators

  Specific activators bring a given schema into play at a particular time. Of these, intentional activity is clearly the most important. Plans constitute ‘descriptions’ of intended actions. For adults, these descriptions usually comprise a set of brief jottings on the mental scratchpad (e.g., “Go to the post office and buy some stamps.”). There is no need to fill in the ‘small print’ of each detailed operation; these low-level control statements are already present within the constituent schemata. The more frequently a particular set of actions is performed, the less detailed are the descriptions that need to be provided by the higher levels. But this steady devolution of control to schemata carries a penalty. To change an established routine of action or thought requires a positive intervention by the attentional control mode. The omission of this intervention in moments of preoccupation or distraction is the most common cause of absent-minded slips of action (see Chapter 3).

  1.2. General activators

  A number of general factors provide background activation to schemata irrespective of the current intentional state or context: recency, frequency, attributes shared with other schemata and emotional or motivational factors (see Norman & Shallice, 1980; Reason, 1984a). Of these, frequency of prior use is probably the most influential. The more often a particular schema is put to work, the less it requires in the way of intentional activation. Quite often, contextual cueing is all that is needed to trigger it, particularly in very familiar environments. The issue of priming by active schemata possessing common elements will be considered in detail in Section 1.4.
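The trade-off between specific and general activators lends itself to a toy threshold model in the spirit of Figure 4.2. The weights, the threshold and the numeric scales below are invented purely for illustration; the only claim carried over from the text is that frequently used schemata need less intentional activation to fire.

```python
# A schema fires when specific (intentional) activation plus general
# background activation (frequency of use, contextual cueing) crosses
# a threshold. All numbers here are illustrative inventions.
def schema_active(intentional, frequency_of_use, contextual_cueing,
                  threshold=1.0):
    """The more frequently used a schema, the less intentional
    activation it needs to reach its firing threshold."""
    general = 0.01 * frequency_of_use + contextual_cueing
    return intentional + general >= threshold

# A highly practised routine fires on contextual cueing alone...
print(schema_active(intentional=0.0, frequency_of_use=80, contextual_cueing=0.3))
# ...while a rarely used one still needs deliberate, attentional push.
print(schema_active(intentional=0.0, frequency_of_use=10, contextual_cueing=0.3))
```

The same arithmetic shows the penalty noted in Section 1.1: suppressing a well-practised schema in favour of a rare one requires an active attentional contribution, and omitting that contribution lets the strong habit fire by default.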

  1.3. Specifications are context-dependent

  As Bobrow and Norman (1975, p. 133) pointed out, descriptions are context-dependent. “We suggest that descriptions are normally formed to be unambiguous within the context in which they were first used. That is, a description defines a memory schema relative to a context.”

  Even the most ill-formed or high-flown intention must boil down to a selection of ways and means if it is to be considered at all seriously. Sooner or later, the planner must move down the abstraction hierarchy (Rasmussen, 1984) from a vague statement of some desired future state to a detailed review of possible resources and situations. And as soon as that occurs, the intentional activity becomes context-dependent. When a context is identified, the range of possible schema candidates is greatly restricted. Whenever a person moves from one physical location to another, either in action or in thought, the schemata within the contextual frame change accordingly. Comparatively little in the way of additional specification is needed to retrieve the appropriate schemata once this contextual frame has been established.

 
