Human Error


by James Reason


  In both sections of the questionnaire, the test material was designed to vary both the number of supplied retrieval cues and the ‘set sizes’ covered by each cue (i.e., the number of listed presidents who fitted each ‘calling condition’). It was assumed that cue specificity would increase as a function of cue number and decrease with set size.

  Taken together, the findings from both Sections A and B provided qualified support for the notion that a decrease in search specificity, whether due to impoverished knowledge or imprecise cueing, leads to an increase in the employment of the frequency-gambling strategy and a corresponding diminution in the use of similarity-matching. It was also evident that people were unaware of this process. Thus, low-knowledge subjects, when asked how they made their choices, said they were guessing. In reality, however, their selections were heavily influenced by presidential salience, or, as operationally defined here, by a president’s frequency of encounter in the world.
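
  By way of illustration only, the trade-off can be sketched in a few lines of Python. The candidate names, attributes and frequency weights below are invented, not taken from the questionnaire data; the sketch shows only the qualitative prediction that when the supplied cues become too sparse to discriminate, the ranking collapses onto frequency of encounter.

    # Illustrative sketch: how sparse cueing lets frequency dominate retrieval.
    # All names, attributes and frequency weights below are invented examples.

    CANDIDATES = {
        # name: (attributes, relative frequency of encounter)
        "Lincoln":  ({"bearded", "assassinated", "19th_century"}, 9),
        "Garfield": ({"bearded", "assassinated", "19th_century"}, 2),
        "Kennedy":  ({"assassinated", "20th_century"}, 8),
    }

    def rank(cues):
        """Score candidates by cue overlap (similarity-matching),
        breaking ties by frequency of encounter (frequency-gambling)."""
        def score(item):
            name, (attrs, freq) = item
            return (len(cues & attrs), freq)
        return [name for name, _ in sorted(CANDIDATES.items(),
                                           key=score, reverse=True)]

    # Well-specified search: three cues still separate candidates by similarity.
    print(rank({"bearded", "assassinated", "19th_century"}))
    # Under-specified search: one vague cue fits everyone, so the ordering is
    # decided almost entirely by frequency of encounter (Kennedy overtakes Garfield).
    print(rank({"assassinated"}))
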

  These three studies confirmed the predictions of the underspecification hypothesis, at least with regard to the retrieval of knowledge relating to American presidents. It will, of course, be necessary to explore a much wider range of knowledge domains before we can claim that we are dealing with a universal knowledge retrieval phenomenon. Nevertheless, when the results of these ‘presidential’ studies are considered alongside those summarised in the earlier part of the chapter, a strong case can be made for the ubiquity of the underspecification principle in human cognitive processing.

  7. Concluding remarks

  This chapter has presented evidence to support the generalization that the cognitive system is disposed to select contextually appropriate, high-frequency responses in conditions of underspecification, and that this tendency gives predictable form to a wide variety of errors. There is little new in this assertion (see Thorndike, 1911). What is more unusual is that the evidence has been drawn from a broad range of cognitive activities.

  Howell (1973) offered two reasons to explain why the general significance of frequency in cognition has been largely unappreciated. The first is that most investigators who were interested in frequency have preferred not to deal in cognitive concepts. The second is that “frequency has become tied to the particular vehicle by which it is conveyed (e.g., words, numbers, lights, etc.) and the particular paradigm in which it occurs (e.g., paired-associate learning, decision making, information transmission)” (Howell, 1973, p. 44).

  The revival of the schema concept in the mid-1970s (due in large part to developments in artificial intelligence) has created a theoretical climate in which it is possible to regard frequency in a less paradigm-bound fashion. Common to the schema concept, in all its many contemporary guises (scripts, frames, personae, etc.), is the notion of high-level knowledge structures that contain informational ‘slots’ or variables. Each slot will only accept information of a particular kind. When external sources fail to provide data to fill them, they take on ‘default assignments’, where these are the most frequent (or stereotypical) instances in a given context.
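
  The slot-and-default idea can be captured in a minimal sketch. The schema, slot names and default values below are invented for the example; the only point carried over from the text is the general mechanism: when external sources supply no datum for a slot, the schema falls back on the most frequent (stereotypical) filler for that context.

    # Illustrative schema whose 'slots' fall back on default assignments:
    # the defaults stand in for the most frequent (stereotypical) fillers.

    class Schema:
        def __init__(self, name, defaults):
            self.name = name
            self.defaults = defaults          # slot -> most frequent filler

        def instantiate(self, **supplied):
            """Fill each slot from external data if available,
            otherwise take the default assignment."""
            return {slot: supplied.get(slot, default)
                    for slot, default in self.defaults.items()}

    restaurant = Schema("restaurant", {
        "payment": "pay after eating",        # stereotypical default
        "seating": "shown to a table",
        "menu":    "printed menu",
    })

    # External input specifies only the payment slot; the rest take defaults.
    print(restaurant.instantiate(payment="pay at the counter"))
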

  By its nature, frequency is intimately bound up with many other processing and representational factors, of which ‘connectedness’ is probably the most important. How can we be sure that frequency rather than the degree of association with other schemata is more important in determining ‘default assignments’? The short answer is that we can never be certain, and this probably does not matter. Frequency and connectedness are inextricably linked. The more often a particular object or event is encountered, the more opportunity it has to form episodic and semantic linkages with other items. Just as all roads lead to Rome, so all—or nearly all—associative connections within a given context are likely to lead to the most frequently-employed schema. Such a view is implicit in recent computer models of parallel distributed processing (see McClelland & Rumelhart, 1985; Rumelhart & McClelland, 1986; Norman, 1985). But irrespective of whether it is frequency-logging or connectional weighting that is the more fundamental factor, the functional consequences are likely to be very similar.
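
  The claim that frequency and connectedness are inextricably linked can be illustrated with a toy co-occurrence count (the event stream below is invented): the more often an item is encountered, the more distinct associative links it accumulates, so a frequency count and a connection count tend to pick out much the same items.

    # Toy demonstration: frequent items accumulate more associative links.
    from collections import Counter, defaultdict
    from itertools import combinations

    # Invented stream of co-occurring items (each tuple is one 'episode').
    episodes = [("coffee", "cup"), ("coffee", "milk"), ("coffee", "desk"),
                ("cup", "saucer"), ("coffee", "spoon"), ("milk", "cup")]

    frequency = Counter()
    links = defaultdict(set)
    for episode in episodes:
        frequency.update(episode)
        for a, b in combinations(episode, 2):
            links[a].add(b)
            links[b].add(a)

    for item in frequency:
        print(item, "frequency:", frequency[item],
              "distinct links:", len(links[item]))
    # 'coffee' is both the most frequent item and the most connected one.
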

  In recent years, a number of authors have made strong cases for the largely automatic encoding of event frequency (see Hasher & Zacks, 1984), and for its privileged representation in memory (Hintzman & Block, 1971; Hintzman, 1976; Hintzman, Nozawa & Irmscher, 1982). These observations prompt the question: If frequency information is so important, what is it good for? This chapter has attempted to provide one very general answer.

  5 A design for a fallible machine

  * * *

  This chapter sketches out one possible answer to the following question: What kind of information-handling device could operate correctly for most of the time, but also produce the occasional wrong responses characteristic of human behaviour? Of special interest are those error forms that recur so often that any adequate model of human action must explain not only correct performance, but also these more predictable varieties of fallibility.

  Most of the component parts of this ‘machine’ have been discussed at earlier points in this book. The purpose of this chapter is to assemble them in a concise and internally consistent fashion.

  It is called a ‘fallible machine’ rather than a theoretical framework because it is expressed in a potentially computable form. That is, it borrows from Artificial Intelligence (AI) the aim of making an information-handling machine do “the sorts of things that are done by human minds” (Boden, 1987, p. 48). As Boden (1987, p. 48) indicates, the advantages of this approach are twofold: “First, it enables one to express richly structured psychological theories in a rigorous fashion (for everything in the program has to be precisely specified, and all its operations have to be made explicit); and secondly, it forces one to suggest specific hypotheses about precisely how a psychological change can come about.”

  The description of the ‘fallible machine’ is in two parts. In the first seven sections of the chapter, it is presented in a notional, nonprogrammatic form. The remaining sections consider how these ideas may be embodied in a suite of computer programs designed to model the cognitive processes involved in the retrieval of incomplete knowledge. More specifically, these programs are intended to emulate (a) the way our human subjects generated exemplars of the category ‘American presidents’, and (b) their responses to Section B of the presidential quiz (see Chapter 4).

  1. The structural components of the ‘machine’

  The ‘machine’ has two principal components: working memory (WM) and the knowledge base (KB). WM is subdivided into two parts: focal WM and peripheral WM. The primary interconnections between these components are shown in Figure 5.1.

  These two aspects of the memory system communicate with the outside world via the input function (IF) and the output function (OF). The IF comprises an array of specialised sensors whose activity is fed into peripheral WM. The OF consists of a set of effectors for transforming stored instructions into speech or motor action, and for directing sensory orientation. The OF acts upon outputs from the knowledge base. There are also feedback loops connecting the output and input functions (not shown in Figure 5.1).

  Figure 5.1. The principal structural components of the fallible machine and their interconnections: peripheral and focal working memory, the knowledge base and its associated buffer store.
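
  As a way of keeping these pieces straight, the skeleton below restates the architecture of Figure 5.1 as data structures. It is a notational convenience rather than the author's program: the class and field names are invented, and the connections simply mirror the prose (the IF feeds peripheral WM, peripheral WM gates focal WM, focal WM slices drop into the KB buffer, and KB products drive the OF).

    # Skeleton of the machine's principal components, mirroring Figure 5.1.
    # Class and field names are invented; only the connections follow the text.
    from dataclasses import dataclass, field

    @dataclass
    class InputFunction:        # IF: specialised sensors feeding peripheral WM
        sensors: list = field(default_factory=list)

    @dataclass
    class OutputFunction:       # OF: effectors acting on knowledge-base outputs
        effectors: list = field(default_factory=list)

    @dataclass
    class PeripheralWM:         # PWM: holds candidate inputs, gates access to FWM
        candidates: list = field(default_factory=list)

    @dataclass
    class FocalWM:              # FWM: limited-capacity workspace, 2-3 elements per cycle
        elements: list = field(default_factory=list)
        capacity: int = 3

    @dataclass
    class KnowledgeBase:        # KB: vast store of knowledge units plus buffer store
        units: dict = field(default_factory=dict)
        buffer: list = field(default_factory=list)   # recent FWM 'slices'

    @dataclass
    class FallibleMachine:
        input_fn: InputFunction = field(default_factory=InputFunction)
        output_fn: OutputFunction = field(default_factory=OutputFunction)
        pwm: PeripheralWM = field(default_factory=PeripheralWM)
        fwm: FocalWM = field(default_factory=FocalWM)
        kb: KnowledgeBase = field(default_factory=KnowledgeBase)

  The feedback loops between the output and input functions, like those omitted from Figure 5.1, are omitted here as well.
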

  2. The functional properties of the parts

  2.1. Focal working memory (FWM)

  This is a limited capacity ‘workspace’ that receives inputs continuously from both the outside world (sensory inputs) and the knowledge base. It has a cycle time of a few milliseconds, and each cycle contains around two or three discrete informational elements. During a run of consecutive cycles, these elements may be transformed, extended or recombined as the result of ‘work’ performed upon them by powerful operators that function only within this highly restricted domain.

  A useful image for FWM is that of a slicer. Information, comprising elements from both the sensory inputs and the KB, is cut into ‘slices’ (corresponding to the few milliseconds of cycle time) that are then dropped into the buffer store of the KB. The width of these ‘slices’ may vary according to the type of ‘work’ that is done upon them during their transition through FWM.
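
  A hedged sketch of one such cycle follows (a scheme of my own, not the author's code): a handful of elements is taken from the pending inputs, worked on by whatever operator is currently applied, and the resulting slice is dropped into the KB buffer.

    # One notional FWM cycle: take 2-3 elements, work on them, drop the 'slice'
    # into the knowledge-base buffer. The operator here is a stand-in.

    FWM_CAPACITY = 3

    def fwm_cycle(pending_elements, kb_buffer, operator=lambda xs: xs):
        """Process one slice of information through focal working memory."""
        slice_ = pending_elements[:FWM_CAPACITY]      # limited capacity per cycle
        worked = operator(slice_)                     # transform/extend/recombine
        kb_buffer.append(worked)                      # slice drops into the KB buffer
        return pending_elements[FWM_CAPACITY:]        # remainder waits for later cycles

    buffer = []
    remaining = ["black", "bird", "croaks", "tower", "raven"]
    while remaining:
        remaining = fwm_cycle(remaining, buffer, operator=sorted)
    print(buffer)   # [['bird', 'black', 'croaks'], ['raven', 'tower']]
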

  2.2. Peripheral working memory (PWM)

  The primary function of PWM is to govern access to FWM. It receives inputs directly from the IF and KB, and holds this information briefly while a selection is being made. Only a very small proportion of the contents of peripheral WM reaches focal WM. Access to FWM is decided according to a variety of prioritising principles.

  2.2.1. Visual dominance

  Visual information has priority access to FWM at all times. So long as the visual sensors are working, their outputs will constitute the ‘foreground’ of FWM activity.

  This dominance of vision among the sensory modalities has been demonstrated in many situations. Studies of divided attention show that when a light and a tone are presented simultaneously, the light is likely to be detected first (Colavita, 1974; Posner, Nissen & Klein, 1976).

  Visual prepotence is perhaps revealed most dramatically in conditions of sensory rearrangement in which the normally harmonious relations between the spatial senses are deliberately distorted so that visual information is at odds with the inputs from the vestibular and muscle-skin-joint systems (see Reason, 1974). Uniform movement of large parts of the visual scene is usually an accompaniment of self-motion, and this is the way the brain interprets such large-scale movements, even when the immediate environment is being moved and the body is actually stationary. Notwithstanding the fact that these visual inputs are at variance with the veridical information derived from the other spatial senses, they still come to dominate the perception of self-motion. Ernst Mach, for example, described one form of these visually-derived illusions of self-motion (or vection): “If we stand on a bridge, and look at the water flowing beneath, we usually feel ourselves at rest, whilst the water seems in motion. Prolonged looking at the water, however, commonly has for its result to make the bridge with the observer and its surroundings suddenly seem to move in the direction opposed to that of the water, whilst the water itself assumes the appearance of standing still” (cited by James, 1880). Similar effects can be obtained from the movement of a train on the next track whilst our own is stationary, or from looking up at scudding clouds over the edge of a house or tree.

  2.2.2. Change detection

  Inputs indicating a step change in the conditions of the outside world have privileged access to FWM. All sensory systems are biased to accentuate the differences in the immediate environment and to attenuate its constant features. In short, the nervous system is essentially a change-detector, and this general principle appears to hold good throughout the animal kingdom for all sensory modalities.

  2.2.3. Coherence principle

  Access to FWM is biased to favour information that corresponds to its current contents. This principle preserves the consistency of successive FWM elements. In this way, FWM maintains a coherent ‘picture’ of the world. The coherence principle also contributes to confirmation bias: the tendency to hold on to initial hypotheses in the face of contradictory evidence (see Johnson-Laird & Wason, 1977).

  2.2.4. Activation principle

  Access to FWM of informational elements from the KB is determined by the level of activation of the knowledge units from which they originate (the concept of activation and the factors that determine it will be discussed below). The higher this level of activation, the greater the chances of admission to FWM.

  2.2.5. Dedicated processors

  The PWM also contains dedicated ‘slave systems’ such as the articulatory loop and the visuo-spatial scratch pad (see Baddeley & Hitch, 1974; Hitch, 1980). These provide limited ‘holding facilities’ for specialised information currently being processed by FWM.
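
  The prioritising principles set out in 2.2.1 to 2.2.4 (visual dominance, change detection, coherence and activation) can be combined into a single gate, as in the sketch below. The scoring weights are arbitrary illustrations of my own; the text does not specify how the principles trade off against one another.

    # Illustrative gate on FWM access. Weights are arbitrary; the text only says
    # that vision, change, coherence with current contents and unit activation
    # all bias selection.

    def fwm_priority(candidate, fwm_contents):
        score = 0.0
        if candidate.get("modality") == "visual":
            score += 2.0                              # visual dominance
        if candidate.get("is_change"):
            score += 1.5                              # change detection
        if candidate.get("topic") in fwm_contents:
            score += 1.0                              # coherence with current 'picture'
        score += candidate.get("activation", 0.0)     # activation of the source unit
        return score

    def select_for_fwm(pwm_candidates, fwm_contents, capacity=3):
        """Admit only the few highest-priority candidates to focal WM."""
        ranked = sorted(pwm_candidates,
                        key=lambda c: fwm_priority(c, fwm_contents),
                        reverse=True)
        return ranked[:capacity]

    pwm = [
        {"topic": "kettle", "modality": "auditory", "is_change": True},
        {"topic": "page",   "modality": "visual",   "is_change": False},
        {"topic": "coffee", "modality": "internal", "activation": 0.4},
    ]
    print(select_for_fwm(pwm, fwm_contents={"page", "coffee"}, capacity=2))
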

  2.3. Knowledge base

  This is a vast repository of knowledge units. These originate, in the first instance, from WM activity. The KB is effectively unlimited in both its storage capacity and the length of time for which knowledge units may be stored. However, it has relatively little in the way of intrinsic organisation; it is more like a tip than a library. Any outward appearance of categorical organisation is due to the way the retrieval system operates, not to any hierarchical structure or modularity within the knowledge base itself (see Kahneman & Miller, 1986). What organisation it possesses derives from shared context or co-occurrence. That is, FWM slices that repeatedly recur in consistent sequences (e.g., routine actions, arithmetical procedures, logical operators) tend to be compiled into procedural knowledge units.

  A similar kind of process operates for declarative knowledge units. Individual knowledge slices become compiled into larger units as a consequence of their continual recycling through focal WM. Compilation of declarative knowledge is achieved by forging (within FWM) ‘is-a’, ‘has-a’ and ‘means’ linkages between shared-element slices. Consider how the machine might develop a knowledge unit representing, say, a raven. At some point, the machine will have in FWM a visual image of a largish, black-feathered creature together with the supplied information ‘is-a raven’. In the same way, further connections will be made with ‘is-a bird’, ‘is-a member of the crow family’, ‘is-a feathered vertebrate’, ‘is-to-be-found-on Tower Green’, ‘is-the title of a poem by Edgar Allan Poe’, and so on. To the extent that these various slices share the label ‘is-a raven’, they will be compiled as a discrete knowledge unit. In this way, the KB develops a ‘unitised’ and distributed organisation rather than a categorical one (see McClelland & Rumelhart, 1985). Each unit is a ‘content-addressable’ knowledge file (see Alba & Hasher, 1983).
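
  The compilation step can be rendered as a grouping of slices by a shared label. The slice contents below are invented; the only point carried over from the text is that slices sharing the label ‘is-a raven’ are gathered into a single content-addressable unit.

    # Illustrative compilation of declarative slices into a knowledge unit.
    # Slices that share a label ('raven') are gathered into one unit.
    from collections import defaultdict

    slices = [
        {"label": "raven", "relation": "is-a",  "value": "bird"},
        {"label": "raven", "relation": "is-a",  "value": "member of the crow family"},
        {"label": "raven", "relation": "found", "value": "Tower Green"},
        {"label": "raven", "relation": "title", "value": "poem by Edgar Allan Poe"},
        {"label": "oak",   "relation": "is-a",  "value": "tree"},
    ]

    def compile_units(fwm_slices):
        """Group recycled FWM slices into content-addressable knowledge units."""
        units = defaultdict(list)
        for s in fwm_slices:
            units[s["label"]].append((s["relation"], s["value"]))
        return dict(units)

    knowledge_base = compile_units(slices)
    print(knowledge_base["raven"])   # the 'raven' unit holds all raven-labelled facts
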

  3. The dynamics of the system: Activation

  It is now time to assemble these various components into a working machine, and imbue it with some driving force. This is supplied by the notion of activation.

  Each knowledge unit within the KB, whether declarative or procedural, has a modifiable level of activation. When this level exceeds a given threshold, the knowledge unit will emit a product. These products may be instructions for action, words or images, depending upon the character of the unit. They are delivered either to the effectors (if procedural units) or to PWM. It is important to emphasise that it is the products of these units and not the units themselves that are so despatched.
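
  The triggering rule can be written as a simple threshold test. The figures (threshold and activation values) are arbitrary; the essential points from the text are that it is the unit's product, not the unit itself, that is released, and that the product goes either to the effectors or to PWM according to the unit's kind.

    # Threshold rule for knowledge units: when activation exceeds the threshold,
    # the unit emits its product (an action instruction, word or image), routed
    # to the effectors if procedural, or to peripheral WM if declarative.

    THRESHOLD = 1.0                       # arbitrary illustrative value

    class KnowledgeUnit:
        def __init__(self, name, kind, product):
            self.name = name
            self.kind = kind              # "procedural" or "declarative"
            self.product = product
            self.activation = 0.0

        def maybe_fire(self, effectors, pwm):
            if self.activation > THRESHOLD:
                target = effectors if self.kind == "procedural" else pwm
                target.append(self.product)   # the product, not the unit, is despatched

    effectors, pwm = [], []
    unit = KnowledgeUnit("make-tea", "procedural", "fill kettle")
    unit.activation = 1.3
    unit.maybe_fire(effectors, pwm)
    print(effectors)                      # ['fill kettle']
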

  Knowledge units receive their activational charge from two principal sources, the one obtained from activity within FWM (to be described in a moment) and the other deriving from a number of nonspecific sources. These two sources will be labelled specific and general activators.

  3.1. Specific activators

  These activators give the machine its purposive character and are the most influential of all the triggering factors. The mechanism is very simple. The most recent ‘run’ of FWM slices is held briefly in a buffer store (see Figure 5.1). All stored knowledge units possessing attributes that correspond to those held in the buffer will increase their activation by an amount related to the goodness of match. The closer the match, the greater will be the received activation. In short, specific activators operate on the basis of graded similarity-matching.
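
  A minimal rendering of graded similarity-matching follows, under the assumption (mine, for concreteness) that both the buffered slices and each unit's attributes can be treated as sets: the activation a unit receives is proportional to the overlap.

    # Graded similarity-matching against the buffered FWM slices: the closer a
    # unit's attributes match the buffer contents, the more activation it gains.
    # Treating slices and attributes as sets is an assumption made for brevity.

    def specific_activation(buffer_elements, unit_attributes, gain=1.0):
        """Return activation in proportion to the goodness of match (0..gain)."""
        if not unit_attributes:
            return 0.0
        overlap = len(buffer_elements & unit_attributes)
        return gain * overlap / len(unit_attributes)

    buffer = {"black", "bird", "tower"}
    print(specific_activation(buffer, {"black", "bird", "croaks"}))   # 2/3 match
    print(specific_activation(buffer, {"green", "leaf"}))             # no match -> 0.0
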

  3.2. General activators

  These allow knowledge units to ‘fire off’ without continual direction from the FWM. Perhaps the single most important general activator derives from the frequency of prior use. The more often a particular unit, or related set of units, has been triggered in the past, the greater is its ‘background level’ of activation, and the less additional activation is necessary to call it into play. As a consequence of the activation principle, the products of well-used units will have an advantage in the competition for FWM access.
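
  A matching sketch of the frequency-derived ‘background level’ (again with invented constants): past use raises a unit's resting activation, so a well-used unit needs less specific activation to cross the same threshold.

    # Frequency of prior use as a general activator: each past triggering nudges
    # a unit's resting ('background') activation upward, so well-used units need
    # less specific activation to reach threshold. Constants are illustrative.
    import math

    THRESHOLD = 1.0

    def background_activation(times_used, scale=0.25):
        """Resting activation grows (with diminishing returns) with prior use."""
        return scale * math.log1p(times_used)

    def fires(times_used, specific_input):
        return background_activation(times_used) + specific_input > THRESHOLD

    # With the same weak cue, the frequently used unit fires; the rare one does not.
    print(fires(times_used=50, specific_input=0.1))   # True  (high-frequency unit)
    print(fires(times_used=1,  specific_input=0.1))   # False (rarely used unit)
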

  Whereas the activational resources supplying stored knowledge units are virtually unrestricted, the attentional activators serving both parts of working memory (but especially FWM) are drawn from a strictly limited ‘pool’. Certain types of directed processing make heavy demands upon this limited attentional resource. When this happens, control falls by default to active knowledge units, which are likely to be contextually-appropriate, high-frequency units. This, together with the coherence principle, gives the machine its characteristic conservatism.

  4. Retrieval mechanisms

  For the most part, the basic dynamics of the retrieval process have already been described above. However, a number of points need to be elaborated further. The machine has three mechanisms for bringing the products of stored knowledge units into FWM. Two of them, similarity-matching and frequency-gambling, constitute the computational primitives of the system as a whole and operate in a parallel, distributed and automatic fashion within the KB. The third retrieval mechanism, serial or directed search (analogous to human inference), derives from the sophisticated processing capabilities of FWM. Within this restricted workspace, through which informational elements must be processed slowly and sequentially, speed, effortlessness and unlimited capacity have been sacrificed in favour of selectivity, coherence and computational power.
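
  The contrast between the two automatic primitives and directed search can also be drawn in code (the knowledge-base entries and constraint format below are my own): similarity-matching and frequency-gambling amount to cheap, parallel operations over the whole KB, whereas directed search applies explicit constraints one candidate at a time within the limited workspace.

    # Contrast between the automatic retrieval primitives and directed search.
    # The knowledge-base entries and constraints below are invented examples.

    KB = {
        "Lincoln":    {"attrs": {"bearded", "assassinated"},  "freq": 9},
        "Washington": {"attrs": {"first", "18th_century"},    "freq": 10},
        "Polk":       {"attrs": {"19th_century", "one_term"}, "freq": 1},
    }

    def automatic_retrieval(cues):
        """Similarity-matching plus frequency-gambling: fast, parallel, no inference."""
        return max(KB, key=lambda name: (len(cues & KB[name]["attrs"]),
                                         KB[name]["freq"]))

    def directed_search(constraints):
        """Serial, effortful search: test explicit constraints candidate by candidate."""
        for name, entry in KB.items():
            if all(constraint(entry) for constraint in constraints):
                return name
        return None

    print(automatic_retrieval({"assassinated"}))                    # Lincoln
    print(directed_search([lambda e: "one_term" in e["attrs"],
                           lambda e: e["freq"] < 5]))               # Polk

  The directed route is the only one that can reject a strong but wrong candidate; the automatic routes simply deliver whatever matches best and is encountered most often.
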

 
