
Human Error


by James Reason


  My sincere thanks are due to John Senders and Ann Crichton-Harris for organising, sponsoring and hosting the First Human Error Conference at Columbia Falls, Maine, in 1980. This gathering, together with Three Mile Island, did much to set the error ‘ball’ rolling. In addition to his own distinguished contributions to the field, John Senders is also one of the great impresarios of human error. In 1983, together with Neville Moray, he organised a second great error event (sponsored by NATO and the Rockefeller Foundation) at Bellagio on Lake Como. These two meetings have played a great part in giving a sense of identity to human error research and in establishing personal contacts between otherwise scattered (both in geography and across disciplines) members of the reliability community.

  I have also benefited a great deal from conversations and correspondence with the following, listed in no particular order: Bernard Baars, Donald and Margaret Broadbent, David Woods, Neville Moray, Alan Swain, Tim Shallice, Ezra Krendel, Duane McCruer, John Wreathall, Ed Dougherty, Joe Fragola, Don Schurman, Alan Baddeley, Tony Sanford, Donald Taylor, Douglas Herrmann, Erik Hollnagel, Bill Rouse, Todd LaPorte, Veronique de Keyser, Jacques Leplat, Maurice de Montmollin, Keith Duncan, Lisanne Bainbridge, Trevor Kletz, Zvi Lanir, Baruch Fischhoff, Beth Loftus, Michael Frese, Antonio Rizzo, Leena Norros, George Apostolakis, Henning Andersen, Ron Westrum, Paul Brown, Abigail Sellen and Barry Turner. A special debt is owed to David Embrey for introducing me to the world of high technology, and for supplying so much excellent incident and event data, and to Deborah Lucas, once my research student and research assistant, now a distinguished ‘error person’ in her own right.

  Closer to home, I must gratefully acknowledge the help and stimulation I have had from my colleagues at the University of Manchester: Sebastian Halliday, Graham Hitch, Tony Manstead, Andrew Mayes and Stephen Stradling; from my research associates: Alan Fish, Janis Williamson, James Baxter and Karen Campbell; from my research students: Philip Marsden, Richard Shotton and Gill Brown; and successive generations of finalists who collected data for me and for their undergraduate projects, especially Victoria Horrocks, Sarah Bailey, Caroline Mackintosh and Karen Feingold.

  Still closer to home, I must thank my wife, Rea. It is conventional to express gratitude to spouses for their patience and forbearance. Mine probably did me a far greater service by not indulging writer’s tantrums and not allowing me to dodge my share of the household chores; she, after all, had a book of her own to write. More to the point, she gave willingly of her services where they were most needed: as an informed editor and as an eagle-eyed proofreader, for which I am truly thankful.

  Finally, I am grateful to the Economic and Social Research Council (or the Social Science Research Council, as it was then more suitably called) for two research grants awarded between 1978 and 1983. The first supported diary and questionnaire studies of everyday errors; the second, a personal research grant, gave me half-time freedom from teaching for two years and allowed me to carry out most of the library research for this book.

  James Reason

  1 The nature of error

  * * *

  Just over 60 years ago, Spearman (1928) grumbled that “crammed as psychological writings are, and must needs be, with allusions to errors in an incidental manner, they hardly ever arrive at considering these profoundly, or even systematically.” Even at the time, Spearman’s lament was not altogether justified (see Chapter 2); but if he were around today, he would find still less cause for complaint. The past decade has seen a rapid increase in what might loosely be called ‘studies of errors for their own sake’.

  The most obvious impetus for this renewed interest has been a growing public concern over the terrible cost of human error: the Tenerife runway collision in 1977, Three Mile Island two years later, the Bhopal methyl isocyanate tragedy in 1984, the Challenger and Chernobyl disasters of 1986, the capsize of the Herald of Free Enterprise, the King’s Cross tube station fire in 1987 and the Piper Alpha oil platform explosion in 1988. There is nothing new about tragic accidents caused by human error; but in the past, the injurious consequences were usually confined to the immediate vicinity of the disaster. Now, the nature and the scale of certain potentially hazardous technologies, especially nuclear power plants, means that human errors can have adverse effects upon whole continents over several generations.

  Aside from these world events, from the mid-1970s onwards theoretical and methodological developments within cognitive psychology have also acted to make errors a proper study in their own right. Not only must more effective methods of predicting and reducing dangerous errors emerge from a better understanding of mental processes, it has also become increasingly apparent that such theorising, if it is to provide an adequate picture of cognitive control processes, must explain not only correct performance, but also the more predictable varieties of human fallibility. Far from being rooted in irrational or maladaptive tendencies, these recurrent error forms have their origins in fundamentally useful psychological processes. Ernst Mach (1905) put it well: “Knowledge and error flow from the same mental sources, only success can tell the one from the other.”

  A central thesis of this book is that the relatively limited number of ways in which errors actually manifest themselves is inextricably bound up with the ‘computational primitives’ by which stored knowledge structures are selected and retrieved in response to current situational demands. And it is just these processes that confer upon human cognition its most conspicuous advantage over other computational devices: the remarkable ability to simplify complex informational tasks.

  1. The cognitive ‘balance sheet’

  Correct performance and systematic errors are two sides of the same coin. Or, perhaps more aptly, they are two sides of the same cognitive ‘balance sheet’. Each entry on the asset side carries a corresponding debit. Thus, automaticity (the delegation of control to low-level specialists) makes slips, or actions-not-as-planned, inevitable. The resource limitations of the conscious ‘workspace’, while essential for focusing computationally-powerful operators upon particular aspects of the world, contribute to informational overload and data loss. A knowledge base that contains specialised ‘theories’ rather than isolated facts preserves meaningfulness, but renders us liable to confirmation bias. An extraordinarily rapid retrieval system, capable of locating relevant items within a virtually unlimited knowledge base, leads our interpretations of the present and anticipations of the future to be shaped too much by the matching regularities of the past. Considerations such as these make it clear that a broadly-based analysis of recurrent error forms is essential to achieving a proper understanding of the largely hidden processes that govern human thought and action.

  2. Errors take a limited number of forms

  On the face of it, the odds against error-free performance seem overwhelmingly high. There is usually only one way of performing a task correctly, or, at best, very few; but each step in a planned sequence of actions or thoughts provides an opportunity to stray along a multitude of unintended or inappropriate pathways. Think of boiling an egg. At what stages and in how many ways can even this relatively simple operation be bungled? The list of possibilities is very long. Thoughts such as these make it appear highly unlikely that we could ever adequately chart the varieties of human error.

  Fortunately, the reality is different. Human error is neither as abundant nor as varied as its vast potential might suggest. Not only are errors much rarer than correct actions, they also tend to take a surprisingly limited number of forms (surprising, that is, when set against their possible variety). Moreover, errors appear in very similar guises across a wide range of mental activities. Thus, it is possible to identify comparable error forms in action, speech, perception, recall, recognition, judgement, problem solving, decision making, concept formation and the like. The ubiquity of these recurrent error forms demands the formulation of more global theories of cognitive control than are usually derived from laboratory experiments. Of necessity, such experiments focus upon very restricted aspects of mental function in rather artificial settings.

  Far from leading down countless unconnected or divergent pathways, the quest for the more predictable varieties of human error is one that continually draws the searcher inwards to the common theoretical heartland of consciousness, attention, working memory and the vast repository of knowledge structures with which they interact. And it is with these theoretical issues that the first half of this book is primarily concerned.

  Figure 1.1. Target patterns of ten shots fired by two riflemen. A’s pattern exhibits no constant error, but rather large variable errors. B’s pattern shows a large constant error, but small variable errors (from Chapanis, 1951).

  3. Variable and constant errors

  Although it may be possible to accept that errors are neither as numerous nor as varied as they might first appear, the idea of a predictable error is a much harder one to swallow. If errors were indeed predictable, we would surely take steps to avoid them. Yet they still occur. So what is a predictable error?

  Consider the two targets shown in Figure 1.1 (taken from Chapanis, 1951). Each shows a pattern of ten shots, one fired by rifleman A, the other by rifleman B. A placed his shots around the bull’s eye, but the grouping is poor. B’s shots fell into a tight cluster, but at some distance from the bull’s eye.

  These patterns allow us to distinguish between two types of error: variable and constant errors. A’s pattern exhibits no constant error, only a rather large amount of variable error. B shows the reverse: a large constant error, but small variable error. In this example, the variability is revealed by the spread of the individual shots, and provides an indication of the rifleman’s consistency of shooting. The constant error, on the other hand, is given by the distance between the group average and the centre of the target.
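The two components can be computed directly from shot coordinates. The sketch below is not from the book: the coordinate values are invented to echo riflemen A and B, and taking the mean distance of shots from the group centre as the measure of spread is just one of several reasonable choices.

```python
import math

def error_components(shots, target=(0.0, 0.0)):
    """Split a shot pattern into constant and variable error.

    shots: list of (x, y) impact points; target: bull's-eye coordinates.
    Returns (constant_error, variable_error):
      - constant error: distance from the group's mean point to the target
      - variable error: mean distance of the shots from the group's mean point
    """
    n = len(shots)
    mean_x = sum(x for x, _ in shots) / n
    mean_y = sum(y for _, y in shots) / n
    constant = math.hypot(mean_x - target[0], mean_y - target[1])
    variable = sum(math.hypot(x - mean_x, y - mean_y) for x, y in shots) / n
    return constant, variable

# Hypothetical patterns in the spirit of Figure 1.1:
a_shots = [(-2.0, 1.5), (1.8, -2.2), (0.4, 2.1), (-1.9, -1.4), (1.7, 0.0)]  # centred but scattered
b_shots = [(3.0, 3.1), (3.2, 2.9), (2.9, 3.0), (3.1, 3.2), (3.0, 2.8)]      # tight but off-centre
```

Run on these invented patterns, A shows a near-zero constant error with a large variable error, and B the reverse, mirroring the verbal description above.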

  What do these patterns tell us about the relative merits of these two individuals? If we relied only on their respective scores, then A would appear the better shot, achieving a total of 88 to B’s 61. But it is obvious from the groupings that this is not the case. A more acceptable view would be that A is a rather unsteady shot with accurately aligned sights, while B is an expert marksman whose sights are out of true.

  It is also evident that the errors of these two marksmen differ considerably in their degree of predictability. Given another ten shots each, with B still aiming at the target’s centre and his sights still unadjusted, we could say with a high degree of confidence whereabouts his shots would fall; but the variability of A’s shooting makes such a confident forecast impossible. The difference is very clear: in B’s case, we have a theory that will account for the precise nature of his constant error, namely, that he is an excellent shot with biased sights. But our theory in A’s case, that he has accurate sights but a shaky hand, is not one that would permit a precise prediction of where his shots will fall. We can anticipate the poor grouping and have some idea of its spread, but that is all.

  The lesson of this simple example is that the accuracy of error prediction depends very largely on the extent to which the factors giving rise to the errors are understood. This requires a theory which relates the three major elements in the production of an error: the nature of the task and its environmental circumstances, the mechanisms governing performance and the nature of the individual. An adequate theory, therefore, is one that enables us to forecast both the conditions under which an error will occur, and the particular form that it will take.

  For most errors, our understanding of the complex interaction between these various causal factors is, and is always likely to be, imperfect and incomplete. Consequently, most error predictions will be probabilistic rather than precise. Thus, they are liable to take the form: “Given this task to perform under these circumstances, this type of person will probably make errors at around this point, and they are likely to be of this variety,” rather than be of the kind: “Person X will make this particular error at such-and-such a time in such-and-such a place.” Nevertheless, predictions of this latter sort can be made in regard to certain types of error when they are deliberately elicited within a controlled laboratory environment. This is especially true of many perceptual illusions. Not only can we predict them with near certainty (given an intact sensory system), we can also forecast with considerable accuracy how their experience will vary with different experimental manipulations. But these are exceptions.

  The more usual type of prediction is illustrated by the following example. It can be forecast with near certainty that during next January the banks will return a large number of cheques with this year’s date on them. We cannot necessarily predict the exact number of the misdated cheques (although such information probably exists for previous years, so the approximate number could be estimated), nor can we say precisely who will make this error, or on which day. But we do know that such strong habit intrusions are among the most common of all error forms; that dating a cheque, being a largely routinised activity (at least with respect to the year), is particularly susceptible to absent-minded deviations of this kind; and that the early part of the year is the period in which these slips are most likely to happen. Such qualitative predictions may seem merely banal, but they are nonetheless powerful. Moreover, the regular recurrence of this error form is extremely revealing of the covert processes controlling practised activities.

  4. Intentions, actions and consequences

  The notions of intention and error are inseparable. Any attempt at defining human error or classifying its forms must begin with a consideration of the varieties of intentional behaviour.

  One psychologically useful way of distinguishing between the different kinds of intentional behaviour is on the basis of yes-no answers to three questions regarding a given sequence of actions (Figure 1.2):

  Were the actions directed by some prior intention?

  Did the actions proceed as planned?

  Did they achieve their desired end?

  Notice that all of these questions are capable of being answered. In contrast to issues like basic motivation or detailed execution, the nature of the prior intentions, knowledge of whether or not the subsequent actions deviated from them and an appreciation of their success or failure are potentially available to consciousness. Indeed, one of the primary functions of consciousness is to alert us to departures of action from intention (Mandler, 1975; 1985) and, though less immediately, to the likelihood that the planned actions currently underway will not achieve their desired goal.

  The notion of intention comprises two elements: (a) an expression of the end-state to be attained, and (b) an indication of the means by which it is to be achieved. Both elements may vary widely in their degree of specificity. For most everyday actions, prior intentions or plans consist of little more than a series of verbal tags and mental images. With the repetition of an action sequence, fewer and fewer ‘intentional tags’ come to stand for increasingly larger amounts of detailed movement. The more routine the activity, the fewer the number of low-level control statements required to specify it. In novel activities, however, we are aware of the need to ‘talk ourselves through’ the actions. Under these circumstances, our activities are guided by the effortful yet computationally-powerful investment of conscious attention.

  Figure 1.2. Algorithm for distinguishing the varieties of intentional behaviour. The three main categories are non-intentional behaviour, unintentional behaviour (slips and lapses) and intentional but mistaken behaviour.
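The decision tree described in the Figure 1.2 caption can be sketched as a small function. The category names follow the caption; the particular ordering of the three questions, and the label for the fully successful case, are assumptions of this sketch rather than details taken from the figure itself.

```python
def classify_behaviour(prior_intention, as_planned, achieved_goal):
    """Sketch of the Figure 1.2 algorithm (question order assumed).

    prior_intention: were the actions directed by some prior intention?
    as_planned:      did the actions proceed as planned?
    achieved_goal:   did they achieve their desired end?
    """
    if not prior_intention:
        # No plan existed to deviate from: spontaneous or involuntary action.
        return "non-intentional behaviour"
    if not as_planned:
        # The plan may have been adequate, but execution went astray.
        return "unintentional behaviour (slip or lapse)"
    if not achieved_goal:
        # Execution followed the plan, but the plan itself was inadequate.
        return "intentional but mistaken behaviour"
    return "successful intentional behaviour"
```

This makes the core distinction of the book concrete: slips and lapses are failures of execution, whereas mistakes are failures of planning.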

  4.1. Distinguishing prior intention and intentional action

  Searle (1980, p. 52) made an important distinction between ‘prior intentions’ and ‘intentions in action’: “All intentional actions have intentions in action but not all intentional actions have prior intentions.” Actions without prior intentions fall into two broad classes: intentional and non-intentional actions.

  4.1.1. Intentional actions without prior intention

  Searle (1980) gives two instances of intentional actions without prior intention: spontaneous and subsidiary actions. Someone might hit another on the spur of the moment without forming any prior intention. In this case, the intention resides only in the action itself, as Searle (1980, p. 52) states: “the action and the intention are inseparable.” Similarly, in executing well-practised action sequences, only the ‘major headings’ are likely to be specified in the prior intention (e.g., “I will drive to the office”). We do not, indeed cannot, consciously fill in the ‘small print’ of each component operation in advance (i.e., opening the car door, sitting down, putting on the seat belt, inserting the ignition key, starting the engine, etc.). For such subsidiary actions, writes Searle (1980, p. 52), “I have an intention, but no prior intentions” (see also Reason & Mycielska, 1982, p. 9).

  4.1.2. Non-intentional or involuntary actions

  Textbooks of jurisprudence and criminal law are replete with accounts of intentionless behaviour (Hart, 1968; Smith & Hogan, 1973). As Hart (1968, p. 114) put it: “All civilized penal systems make liability to punishment for at any rate serious crime dependent not merely on the fact that the person to be punished has done the outward act of the crime, but on his having done it in a certain frame of mind or will.” A crime thus has two elements: the actus reus and the mens rea. To prove criminal liability, it must be shown not only that the consequences of the criminal act were intended, but also that the act itself was committed voluntarily.

  The defence of ‘automatism’ rests on demonstrating the absence of “a vital link between mind and body” (Smith & Hogan, 1973, p. 35). In such instances, “the movements of the human body seem more like the movements of an inanimate thing than the actions of a person. Someone unconscious in a fit of epilepsy, hits out in a spasm and hurts another; or someone, suddenly stung by a bee, in his agony drops and breaks a plate he is holding” (Hart, 1968, pp. 91-92).

 
