
  Human Error

  James Reason

  Department of Psychology

  University of Manchester

  CAMBRIDGE UNIVERSITY PRESS

  Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore,

  São Paulo, Delhi, Dubai, Tokyo, Mexico City

  Cambridge University Press

  32 Avenue of the Americas, New York, NY 10013-2473, USA

  www.cambridge.org

  Information on this title: www.cambridge.org/9780521306690

  © Cambridge University Press 1990

  This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

  First published 1990

  20th printing 2009

  A catalog record for this publication is available from the British Library.

  ISBN 978-0-521-30669-0 Hardback

  ISBN 978-0-521-31419-0 Paperback

  Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate. Information regarding prices, travel timetables, and other factual information given in this work is correct at the time of first printing but Cambridge University Press does not guarantee the accuracy of such information thereafter.

  To Jens Rasmussen

  Contents

  * * *

  Preface

  1. The nature of error

  2. Studies of human error

  3. Performance levels and error types

  4. Cognitive underspecification and error forms

  5. A design for a fallible machine

  6. The detection of errors

  7. Latent errors and systems disasters

  8. Assessing and reducing the human error risk

  Appendix

  References

  Name index

  Subject index

  Preface

  * * *

  Human error is a very large subject, quite as extensive as that covered by the term human performance. But these daunting proportions can be reduced in at least two ways. The topic can be treated in a broad but shallow fashion, aiming at a wide though superficial coverage of many well-documented error types. Or, an attempt can be made to carve out a narrow but relatively deep slice, trading comprehensiveness for a chance to get at some of the more general principles of error production. I have tried to achieve the latter.

  The book is written with a mixed readership in mind: cognitive psychologists, human factors professionals, safety managers and reliability engineers — and, of course, their students. As far as possible, I have tried to make both the theoretical and the practical aspects of the book accessible to all. In other words, it presumes little in the way of prior specialist knowledge of either kind. Although some familiarity with the way psychologists think, write and handle evidence is clearly an advantage, it is not a necessary qualification for tackling the book. Nor, for that matter, should an unfamiliarity with high-technology systems deter psychologists from reading the last two chapters.

  Errors mean different things to different people. For cognitive theorists, they offer important clues to the covert control processes underlying routine human action. To applied practitioners, they remain the main threat to the safe operation of high-risk technologies. Whereas the theoreticians like to collect, cultivate and categorise errors, practitioners are more interested in their elimination and, where this fails, in containing their adverse effects by error-tolerant designs. It is hoped that this book offers something useful to both camps.

  The shape of the book

  The book is divided into three parts. The first two chapters introduce the basic ideas, methods, research traditions and background studies. They set the scene for the book as a whole.

  Chapter 1 discusses the nature of error, makes a preliminary identification of its major categories and considers the various techniques by which it has been investigated.

  Chapter 2 outlines the human error studies that have been most influential in shaping the arguments presented later in the book. It distinguishes two traditions of research: the natural science and engineering (or cognitive science) approaches. The former is characterized by its restricted focus upon well-defined, manipulable phenomena and their explanation by ‘local’ theories whose predictive differences are, potentially at least, resolvable by experimentation. This tradition has provided the basis of much of what we know about the resource limitations of human cognition. The engineering approach, on the other hand, is more concerned with framing working generalisations than with the finer shades of theoretical difference. It synthesises rather than analyses and formulates broadly based theoretical frameworks rather than limited, data-bound models. The more theoretical aspects of the subsequent chapters are very much in this latter tradition.

  The middle section of the book, comprising Chapters 3 to 5, presents a view of the basic error mechanisms and especially those processes that give recurrent forms to a wide variety of error types. Whereas error types are rooted in the cognitive stages involved in conceiving and then carrying out an action sequence (i.e., planning, storage and execution), error forms have their origins in the universal processes that select and retrieve pre-packaged knowledge structures from long-term storage.

  Chapter 3 describes a generic error-modelling system (GEMS) which permits the identification of three basic error types: skill-based slips and lapses, rule-based mistakes and knowledge-based mistakes. These three types may be distinguished on the basis of several dimensions: activity, attentional focus, control mode, relative predictability, abundance in relation to opportunity, situational influences, ease of detection and relationship to change. Most of the chapter is taken up with describing the various failure modes evident at the skill-based, rule-based and knowledge-based levels of performance.

  Chapter 4 introduces the concept of cognitive underspecification. Cognitive operations may be underspecified in a variety of ways, but the consequences are remarkably uniform: the cognitive system tends to ‘default’ to contextually appropriate, high-frequency responses. Error forms, it is argued, are shaped primarily by two factors: similarity and frequency. These, in turn, originate in the automatic retrieval processes by which knowledge structures are located and their products delivered either to consciousness (thoughts, words, images, etc.) or to the outside world (actions, speech, gestures, etc.). There are two processes involved: similarity-matching, by which appropriate knowledge attributes are matched to the current calling conditions on a like-to-like basis; and frequency-gambling, by which conflicts between partially matched knowledge structures are resolved in favour of the more frequently employed items. Both of these processes, but especially the latter, come into increasing prominence when cognitive operations are insufficiently specified. Underspecification, though highly variable in its origins, can be rendered down to two functionally equivalent states: insufficient calling conditions to locate a unique knowledge item and incomplete knowledge (i.e., some of the ‘facts’ associated with a particular knowledge structure — or set of structures — are missing). Both states will increase the natural tendency of the cognitive system to output high-frequency responses and this gives recognisable form to many error types. Evidence drawn from a wide range of cognitive activities is presented in support of these assertions.
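
  The two retrieval heuristics can be made concrete with a small sketch. The following fragment is purely illustrative and is not drawn from the book; the data structure, the item names and the scoring rule are invented assumptions. It shows one way a store of knowledge items might be searched by attribute overlap with the current calling conditions (similarity-matching), with conflicts between partially matched items resolved in favour of the more frequently used one (frequency-gambling).

      # Illustrative sketch only; all names and data here are hypothetical.
      from dataclasses import dataclass

      @dataclass
      class KnowledgeItem:
          name: str
          attributes: frozenset   # features the item can be matched on
          use_frequency: int      # how often the item has been retrieved before

      def retrieve(calling_conditions, store):
          """Similarity-matching: score each item by its overlap with the calling
          conditions. Frequency-gambling: when several items are only partially
          specified (equal overlap), the more frequently used item wins."""
          def overlap(item):
              return len(item.attributes & calling_conditions)
          return max(store, key=lambda item: (overlap(item), item.use_frequency))

      # With underspecified calling conditions, both items below match equally
      # well, so the high-frequency ('habitual') item is returned by default.
      store = [
          KnowledgeItem("habitual route home", frozenset({"driving", "homeward"}), 500),
          KnowledgeItem("detour via the shop", frozenset({"driving", "homeward", "errand"}), 5),
      ]
      print(retrieve(frozenset({"driving", "homeward"}), store).name)  # habitual route home

  Adding the missing calling condition ("errand") would give the low-frequency item the better match and so override the habitual default, which is the sense in which fuller specification suppresses the high-frequency error form.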

  Chapter 5 attempts to express these ideas more precisely in both a notional and a computational form. It addresses the question: What kind of information-handling machine could operate correctly for most of the time, but also produce the occasional wrong responses characteristic of human behaviour? The description of the fallible machine is in two parts: first in a notional, non-programmatic form, then in a suite of computer programs that seek to model how human subjects, of varying degrees of ignorance, give answers to general knowledge questions relating to the lives of U.S. presidents. The output of this model is then compared to the responses of human subjects.

  The final section of the book focuses upon the consequences of human error: error detection, accident contribution and remedial measures.

  Chapter 6 reviews the relatively sparse empirical evidence bearing upon the important issues of error detection and error correction. Although error correction mechanisms are little understood, there are grounds for arguing that their effectiveness is inversely related to their position within the cognitive control hierarchy. Low-level (and largely hard-wired) postural correcting mechanisms work extremely well. Attentional processes involved in monitoring the actual execution of action plans are reasonably successful in detecting unintended deviations (i.e., slips and lapses). But even higher-level processes concerned with making these plans are relatively insensitive to actual or potential straying from some adequate path towards the desired goal (mistakes). The relative efficiency of these error-detection mechanisms depends crucially upon the immediacy and the adequacy of feedback information. The quality of this feedback is increasingly degraded as one moves up the control levels.

  Chapter 7 considers the human contribution to accidents in complex, high-risk technologies. An important distinction is made between active errors and latent errors. The former, usually associated with the performance of ‘front-line’ operators (pilots, control room crews, and the like), have an immediate impact upon the system. The latter, most often generated by those at the ‘blunt end’ of the system (designers, high-level decision makers, construction crews, managers, etc.), may lie dormant for a long time, only making their presence felt when they combine with other ‘resident pathogens’ and local triggering events to breach the system’s defences. Close examination of six case studies — Three Mile Island, Bhopal, Challenger, Chernobyl, the Herald of Free Enterprise and the King’s Cross underground fire — indicates that latent rather than active failures now pose the greatest threat to the safety of high-technology systems. Such a view is amply borne out by more recent disasters such as the Piper Alpha explosion, the shooting down of the Iranian Airbus by the U.S.S. Vincennes, the Clapham Junction and Purley rail crashes and the Hillsborough football stadium catastrophe.

  The book ends with a consideration of the various techniques, either in current use or in prospect, to assess and reduce the risks associated with human error. Chapter 8 begins with a critical review of probabilistic risk assessment (PRA) and its associated human reliability assessment (HRA) techniques. It then considers some of the more speculative measures for error reduction: eliminating error affordances, intelligent decision support systems, memory aids, error management and ecological interface design. In conclusion, the chapter traces the shifting preoccupations of reliability specialists: an initial concern with defending against component failures, then an increasing awareness of the damaging potential of active human errors, and now, in the last few years, a growing realisation that the prime causes of accidents are often present within systems long before an accident sequence begins.

  The final note is a rather pessimistic one. Engineered safety devices are proof against most single failures, both human and mechanical. As yet, however, there are no guaranteed technological defences against either the insidious build-up of latent failures within the organisational and managerial spheres or their adverse (and often unforeseeable) conjunction with various local triggers. While cognitive psychology can tell us something about an individual’s potential for error, it has very little to say about how these individual tendencies interact within complex groupings of people working in high-risk systems. And it is these collective failures that represent the major residual hazard.

  Some conspicuous omissions

  Although it has featured prominently in the literature, relatively little special attention has been given in this book to the relationship between errors and stress. This omission was made for two reasons.

  First, while there are a small number of ‘ecologically valid’ studies (Ronan, 1953; Grinker & Spiegel, 1963; Berkun, 1964; Marshall, 1978) indicating that high levels of stress can, and often do, increase the likelihood of error, it is also clear that stress is neither a necessary nor a sufficient condition for the occurrence of cognitive failure.

  Recent investigations (Broadbent, Cooper, FitzGerald & Parkes, 1982; Broadbent, Broadbent & Jones, 1986) have suggested that a more interesting question is not so much “Why does stress promote error?” but rather “Why is a relatively marked personal proneness to cognitive failures associated with increased vulnerability to stress?” The second reason for omitting any specific treatment of stress is that this important relationship between error proneness and stress vulnerability has been considered at length elsewhere (Reason, 1988d).

  The existence of this recent publication (a chapter in the Handbook of Life Stress, Cognition and Health, 1988) also explains why questionnaire studies of error proneness receive only a summary mention in this book (see Chapter 1). There is a further reason for not dealing in any detail with the general issue of individual differences here. Although it is well known that factors such as age and pathology play an important part in error production, there is little compelling evidence to suggest that these individual factors yield unique error types. Rather, they produce an exaggerated liability to the pervasive error forms whose varieties and origins are already treated extensively in Chapters 4 and 5.

  A skimmer’s guide

  Cognitive psychologists with an interest in theory are urged to read the first six chapters. If inclined, they could then go on to skim the remaining two chapters. Much of this material will be unfamiliar to them since little of it has appeared in the conventional cognitive literature.

  Those with more practical concerns (and less interest in cognitive psychology) can afford to be more selective without losing too much of the thread. After reading Chapter 1, they could skip to the conclusions of Chapter 2. Chapter 3 contains some human factors material as well as the generic error-modelling system, and is therefore worth rather more than a glance. Practitioners might find much of Chapter 4 rather too academic for their tastes, but they could read the first few pages and the concluding remarks in order to get a sense of the general argument. The first part of Chapter 5 presents a fairly concise summary of the basic theory; whether they read to the end depends on their interest (or faith) in computer modelling. The remaining three chapters, and particularly the last two, were written specifically for those with applied leanings and should not be skipped by them.

  Acknowledgements

  Jens Rasmussen, to whom this book is dedicated, has had a profound influence on the ideas expressed here, both through his writings and as a result of the many fruitful meetings he has convened (and generously hosted over the years) at the Risø National Laboratory. His skill-rule-knowledge framework has justifiably become a ‘market standard’ for the human reliability community the world over. I hope I have done it justice here.

  A great debt is owed to Don Norman for his intellectual stimulation, his long-standing encouragement of this work and for his hospitality during my brief spell in La Jolla. We got into the ‘error business’ at about the same time, but I always seem to find myself trailing several ideas behind him. This is especially apparent after the recent publication of his excellent book, The Psychology of Everyday Things, upon which I have preyed extensively here.

  Berndt Brehmer, with whom I have spent many pleasant and productive days both in Manchester and in various foreign parts, was kind enough to read and comment upon an early version of the manuscript. It is because of his wise advice that readers are spared a lengthy and self-indulgent chapter covering the history of error from Plato onwards. But I was also considerably heartened by his encouraging remarks on the remainder.

  I am greatly indebted to Dietrich Dörner for showing me (along with Berndt Brehmer) that it was possible to study complex and dynamic problem-solving tasks in a rigorous fashion without losing any of their real-world richness, and for his kind hospitality on many occasions. These visits not only allowed me to meet many of his distinguished colleagues, they also provided an introduction to the diversity and excitement of the ‘new’ German psychology. Unfettered by the more sterile aspects of Anglo-American experimentalism and in tune with broader philosophical influences than British Empiricism, its practitioners have been making a spirited attack on many of the affective and motivational issues avoided by those who regard human cognition primarily as an information-processing device.

  Much is owed to two sets of collaborators. I wish to thank Carlo Cacciabue, Giuseppe Mancini, Ugo Bersini, Françoise Decortis and Michel Masson at the CEC Joint Research Centre, Ispra, where we have been attempting to model the behaviour of nuclear plant operators in emergency conditions, along lines similar to those described in Chapter 5. And special thanks are due to Carlo Cacciabue for instructing me so patiently in the mysteries of nuclear engineering. (And, while on this subject, let me also thank John Harris of the Simon Engineering Laboratories for the same service.) I must also express my great appreciation to Willem Wagenaar, Patrick Hudson and Jop Groeneweg, all of the University of Leiden, who have done much to clarify my thinking about ‘resident pathogens’ and accident causation during our joint project for Shell Internationale Petroleum Maatschappij.

 
