by James Reason
In applying these notions to text comprehension, Rumelhart was concerned with the interaction between ‘top-down’ schema-based knowledge and ‘bottom-up’ text information. If a reader arrives at the schema intended by the author, then the text has been correctly comprehended. If the reader can find no schema to accept the textual information, the text is not understood. And if the reader finds a schema other than the one intended by the author, the text is misinterpreted.
Other modern variants of the schema notion include scripts (Abelson, 1976), plans (Neisser, 1976), prototypes (Cantor & Mischel, 1977) and personae (Nisbett & Ross, 1980). A script, for example, is a structure that represents a familiar episode or scenario, such as visiting the dentist or going to a restaurant. Good accounts of the current state of schema theory have been provided by Taylor and Crocker (1981), Hastie (1981) and Fiske and Taylor (1984).
The current view of schemata, then, is as higher-order, generic cognitive structures that underlie all aspects of human knowledge and skill. Although their processing lies beyond the direct reach of awareness, their products—words, images, feelings and actions—are available to consciousness. Their encoding and representational functions include lending structure to perceptual experience and determining what information will be encoded into or retrieved from memory. Their inferential and interpretative functions go beyond the given information, allowing us to supply missing data within sensory or recalled information.
As Taylor and Crocker (1981) pointed out, “virtually any of the properties of schematic function that are useful under some circumstances will be liabilities under others. Like all gamblers, cognitive gamblers sometimes lose.” Systematic errors can arise (a) from fitting the data to the wrong schema, (b) from employing the correct schema too enthusiastically so that gaps in the stimulus configuration are filled with best guesses rather than available sensory data and (c) from relying too heavily upon active or salient schemata. Most of these schematic error tendencies can be explained by a single principle: a schema only contains evidence of how a particular recollection or sensory input should appear. It has no representation of what it should not look like.
3.2. Norman and Shallice’s attention to action model
The Norman-Shallice model (Norman & Shallice, 1980) represents a family of action theories (Norman, 1981; Reason & Mycielska, 1982; Reason, 1984a) that, although they embrace aspects of the experimental literature, are derived mainly from natural history and clinical observations of cognitive failures. They start from the belief that an adequate theory of human action must account not only for correct performance, but also for the more predictable varieties of human error. Systematic error forms and correct performance are seen as two sides of the same theoretical coin.
The challenge for error-based theories of human action is to specify a control system that not only allows for the relative autonomy of well-established motor programs (as indicated by the error data), but also acknowledges that most of our actions nevertheless go according to plan. The Norman-Shallice model achieves this by the provision of two kinds of control structures: (a) horizontal threads, each one comprising a self-sufficient strand of specialized processing structures (schemas); and (b) vertical threads, which interact with the horizontal threads to provide the means by which attentional or motivational factors can modulate the schema activation values. Horizontal threads govern habitual activities without the need for moment-to-moment attentional control, receiving their triggering conditions from environmental input or from previously active schemas. Higher-level attentional processes come into play, via the vertical threads, in novel or critical conditions when currently active schemas are insufficient to achieve the current goal. They add to or subtract from schema activation levels to modify ongoing action. Motivational variables also influence schema activation along the vertical threads, but are assumed to work over much longer time periods than the attentional resources.
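The model is stated verbally; the following minimal sketch is one way of making the interplay of the two kinds of thread concrete. All of the names, weights and the threshold are illustrative assumptions rather than part of the Norman-Shallice formulation: each schema carries an activation value, horizontal threads raise it through triggering conditions, vertical threads add to or subtract from it attentionally, motivational influences push in the same way but more slowly, and the most active schema above threshold takes control of action.

```python
# Illustrative sketch only: the numbers, names and the selection rule are
# assumptions for exposition, not part of Norman and Shallice (1980).

from dataclasses import dataclass


@dataclass
class Schema:
    name: str
    activation: float = 0.0


class AttentionToAction:
    def __init__(self, schemas, threshold=1.0):
        self.schemas = {s.name: s for s in schemas}
        self.threshold = threshold

    def horizontal_update(self, triggers):
        """Horizontal threads: environmental or schema-based triggering
        conditions raise activation without attentional involvement."""
        for name, strength in triggers.items():
            self.schemas[name].activation += strength

    def vertical_update(self, attentional, motivational, dt=1.0):
        """Vertical threads: attention adds to or subtracts from activation
        quickly; motivation pushes the same way but over longer periods."""
        for name, schema in self.schemas.items():
            schema.activation += attentional.get(name, 0.0)
            schema.activation += motivational.get(name, 0.0) * 0.1 * dt

    def selected(self):
        """The most active schema above threshold controls ongoing action."""
        best = max(self.schemas.values(), key=lambda s: s.activation)
        return best if best.activation >= self.threshold else None


model = AttentionToAction([Schema("drive_straight_home"), Schema("stop_at_shop")])
model.horizontal_update({"drive_straight_home": 1.2})   # strong habitual trigger
model.vertical_update({"stop_at_shop": 1.5}, {})        # deliberate attentional boost
print(model.selected().name)                            # stop_at_shop
```

Under this reading, the familiar strong-habit slips occur when the attentional boost is withheld or withdrawn and the habitually triggered schema wins by default.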
3.3. The decline of normative theories
Until the early 1970s, research into human judgement and inference had a markedly rationalist bias. That is, it assumed that the mental processes involved in these activities could be understood, albeit with minor deviations due to human frailty, in terms of normative theories describing optimal strategies (see Kahneman, Slovic & Tversky, 1982, for a more thorough discussion of this approach). Errors were attributed either to irrationality or to unawareness on the part of the perceiver. Thus, human beings were assumed to make decisions according to Subjective Expected Utility Theory, to draw inferences from evidence in accordance with logical principles and to make uncertain judgements in the manner of ‘intuitive scientists’ employing statistical decision theory or Bayes Theorem. Something of the spirit of these times can be gained from the following quotations:
In general, [our] results indicate that probability theory and statistics can be used as the basis for psychological models that integrate and account for human performance in a wide range of inferential tasks. (Peterson & Beach, 1967)
Man, by and large, follows the correct Bayesian rule [in estimating subjective probabilities], but fails to appreciate the full impact of the evidence, and is therefore conservative. (Edwards, 1968)
In the late 1960s and early 1970s, this view came under vigorous attack from a number of quarters. The work of three groups, in particular, proved ultimately fatal to the optimizing, rationalist or normative view of human cognition: Herbert Simon and his collaborators at Carnegie-Mellon, Wason and Johnson-Laird in Britain and Tversky and Kahneman, two Israelis then recently transplanted to North America.
3.3.1. Bounded rationality and ‘satisficing’
In the 1950s and 1960s, psychological research into decision making “took its marching orders from standard American economics, which assumes that people always know what they want and choose the optimal course of action for getting it” (Fischhoff, 1986). Simon (1956, 1957, 1983) and his collaborators (Cyert & March, 1963) were among the first to chart how and why the cognitive reality departs from this formalised ideal when people set about choosing between alternatives.
Subjective Expected Utility Theory (SEU) is an extremely elegant formal machine, devised by economists and mathematical statisticians to guide human decision making. Simon (1983, p. 13) described it as “a beautiful object deserving a prominent place in Plato’s heaven of ideas.” SEU makes four basic assumptions about decision makers.
(a) That they have a clearly defined utility function which allows them to assign a cardinal number as an index of their preference for each of a range of future outcomes.
(b) That they possess a clear and exhaustive view of the possible alternative strategies open to them.
(c) That they can create a consistent joint probability distribution of scenarios for the future associated with each strategy.
(d) That they will (or ought to) choose between alternatives and/or possible strategies in order to maximise their subjective expected utility.
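Stated compactly (in standard textbook notation rather than Simon's own), assumptions (a) to (d) amount to requiring the decision maker to choose the act that maximises expected utility over all future states:

\[
SEU(a) = \sum_{s} p(s \mid a)\, u\bigl(o(a, s)\bigr), \qquad a^{*} = \arg\max_{a \in A} SEU(a),
\]

where A is the exhaustive set of alternative strategies (b), p(· | a) the consistent joint probability distribution over future scenarios (c), u the cardinal utility function (a), and o(a, s) the outcome of strategy a under scenario s. It is against this full specification that Simon set the realities of human choice.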
As a few moments' introspection might reveal, flesh and blood decision making falls a long way short of this ideal in a number of significant respects. Whereas SEU assumes that decision makers have an undisturbed view of all possible scenarios of action, in reality human decision making is almost invariably focused upon specific matters (e.g., buying a car) that from a personal perspective are seen as largely independent of other kinds of choice (e.g., buying a house, selecting a meal from a menu).
The formal theory requires that the decision maker comprehends the entire range of possible alternatives, both now and in the future; but the actuality is that human beings, even when engaged in important decisions, do not work out detailed future scenarios, each complete with conditional probability distributions. Rather, the decision maker is likely to contemplate only a few of the available alternatives. Moreover, there is a wealth of evidence to show that when people consider action options, they often neglect seemingly obvious candidates. In addition, they are relatively insensitive to the number and importance of these omitted alternatives (Fischhoff, Slovic & Lichtenstein, 1978). The considered possibilities are often ill-defined and not ‘thought through’. This imprecision makes it difficult for decision makers to evaluate their choices subsequently, since they are unable to recollect exactly what they did and why. Such reconstructions of the decision-making process are further complicated by hindsight bias, the ‘knew it all along’ effect, or the tendency to exaggerate in hindsight what was actually known prior to the choice point (Fischhoff, 1975).
SEU assumes a well-defined set of subjective values that are consistent across all aspects of the world. Again, the reality is markedly different. Subjective utilities vary from one type of decision to the next. As Simon (1983, p. 18) put it: “particular decision domains will evoke particular values, and great inconsistencies in choice may result from fluctuating attention.” In short, human decision making is severely constrained by its ‘keyhole’ view of the problem space, or what Simon (1957, p. 198) has dubbed bounded rationality:
The capacity of the human mind for formulating and solving complex problems is very small compared with the size of the problems whose solution is required for objectively rational behaviour in the real world—or even for a reasonable approximation of such objective rationality.
This fundamental limitation upon human information processing gives rise to satisficing behaviour, the tendency to settle for satisfactory rather than optimal courses of action. This is true both for individual and for collective decision making. As Cyert and March (1963) demonstrated, organisational planners are inclined to compromise their goal setting by choosing minimal objectives rather than those likely to yield the best possible outcome. We will return to these notions in the next chapter.
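The difference between optimising and satisficing can be put in a few lines of Python; the options, ratings and aspiration level below are invented purely for illustration. The optimiser must evaluate every alternative before choosing, whereas the satisficer accepts the first alternative that clears its aspiration level and looks no further.

```python
# Illustration only: the utilities and the aspiration level are arbitrary.

def optimise(options, utility):
    """SEU-style choice: evaluate every alternative, then pick the best."""
    return max(options, key=utility)

def satisfice(options, utility, aspiration):
    """Simon-style choice: take the first alternative that is 'good enough'."""
    for option in options:
        if utility(option) >= aspiration:
            return option
    return None  # no satisfactory alternative was encountered

cars = ["hatchback", "saloon", "estate", "coupe"]
rating = {"hatchback": 6, "saloon": 8, "estate": 7, "coupe": 9}.get

print(optimise(cars, rating))       # 'coupe'  - requires all four evaluations
print(satisfice(cars, rating, 7))   # 'saloon' - stops after two evaluations
```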
3.3.2. Imperfect rationality
While the greater part of the cognitive world of the 1960s and early 1970s was tinkering with the nuts and bolts of short-term memory and selective attention, Wason and his students were tackling the thorny problem of how people draw explicit conclusions from evidence. In particular, they were fascinated by the fact that so many highly intelligent people, when presented with simple deductive problems, almost invariably got them wrong. And these mistakes were nearly always of a particular kind. This once lonely furrow has now yielded a rich harvest of texts documenting the various pathologies of human reasoning (Wason & Johnson-Laird, 1972; Johnson-Laird & Wason, 1977; Evans, 1983).
Many of their observations were in close accordance with Bacon’s (1620) Idols of the Tribe: they observed that while people are happy to deal with affirmative statements, they find it exceedingly difficult to understand negative statements (“it is the peculiar and perpetual error of the human intellect to be more moved and excited by affirmatives than by negatives”), and they show an often overwhelming tendency to verify generalisations rather than falsify them (“The human understanding when it has once adopted an opinion draws all things else to support and agree with it”). Most of these errors could be explained by one general principle: “Whenever two different items, or classes, can be matched in a one-to-one fashion, then the process of inference is readily made, whether it be logically valid or invalid” (Wason & Johnson-Laird, 1972). In short, reasoning is governed more by similarity-matching than by logic. We will return to this point several times in later chapters.
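Wason's best-known demonstration of these tendencies, the four-card selection task, makes the matching bias concrete. The sketch below uses the standard card faces (E, K, 4, 7) and the rule ‘if a card has a vowel on one side, it has an even number on the other’; it simply computes which cards could logically falsify the rule, the very choices most subjects fail to make in favour of the cards that match the terms of the rule.

```python
# Illustration of the four-card selection task (Wason & Johnson-Laird, 1972);
# the card faces and the rule are the standard textbook versions.

def could_falsify(visible_face: str) -> bool:
    """A card can falsify 'if vowel, then even number' only if its hidden
    side might violate the rule: a visible vowel (back could be odd) or a
    visible odd number (back could be a vowel)."""
    if visible_face.isalpha():
        return visible_face.lower() in "aeiou"
    return int(visible_face) % 2 == 1

cards = ["E", "K", "4", "7"]
print([card for card in cards if could_falsify(card)])   # ['E', '7']
# Most subjects choose E and 4 instead, matching the items named in the rule
# and seeking confirmation rather than falsification.
```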
3.3.3. Judgemental heuristics and biases
Tversky and Kahneman (1974) directed their initial assault against Ward Edwards’s claim that people made uncertain judgements on the basis of Bayesian principles. Bayes Theorem, as applied for example to the case of a physician judging whether a patient’s breast lump is malignant or not, integrates three types of evidence: the prior or background information (e.g., experiential or estimated base rates influencing the doctor’s subjective estimate of malignancy), the specific evidence concerning an individual case (e.g., the results of a mammography test) and the known screening power of that test (e.g., its ‘false alarm’ rate). On the basis of a series of ingenious studies, Tversky and Kahneman concluded: “In his evaluation of evidence, man is apparently not a conservative Bayesian: he is not a Bayesian at all.”
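For the malignancy example, Bayes Theorem combines the three kinds of evidence as follows; the numerical values used here are purely illustrative and are not taken from the studies cited. Writing M for malignancy, + for a positive mammogram, P(M) for the base rate, P(+ | M) for the test's hit rate and P(+ | ¬M) for its false-alarm rate:

\[
P(M \mid +) = \frac{P(+ \mid M)\,P(M)}{P(+ \mid M)\,P(M) + P(+ \mid \neg M)\,P(\neg M)}.
\]

With, say, P(M) = 0.01, P(+ | M) = 0.8 and P(+ | ¬M) = 0.1, this yields P(M | +) = 0.008 / (0.008 + 0.099) ≈ 0.07, a far lower figure than intuitive judgements typically produce, since such judgements tend to track the hit rate and neglect the base rate.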
Instead, they argued that when making judgements concerning the likelihood of uncertain events, “people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgemental operations. In general, these heuristics are quite useful, but sometimes lead to severe and systematic errors” (Tversky & Kahneman, 1974). Two heuristics, in particular, exert powerful effects in a wide range of judgemental tasks: the representativeness heuristic (like causes like) and the availability heuristic (things are judged more frequent the more readily they spring to mind). These too will be considered again in later chapters.
3.4. Reluctant rationality
Whereas the mistakes of bounded rationality arise primarily from the limitations of the conscious ‘workspace’ and those of imperfect rationality from an overreliance on simplifying heuristics operating within the schematic knowledge base, the mistakes of reluctant rationality stem from human cognition’s unwillingness to engage in the laborious yet computationally powerful processes involved in analytic reasoning.
As William James (1908) pointed out, these attention-demanding processes are very difficult to sustain. He was writing about the difficulties of maintaining concentration upon a boring topic; but his description also holds good for the pursuit of any novel line of thought:
our mind tends to wander, [and] we have to bring back our attention every now and then by using distinct pulses of effort, which revivify the topic for a moment, the mind then running on for a certain number of seconds or minutes with spontaneous interest, until again some intercurrent idea captures it and takes it off. (James, 1908, p. 101)
This difficulty of holding a mental course is closely bound up with what Bruner and his associates (Bruner et al., 1956) termed cognitive strain. This arises whenever a mental course must be steered against rather than with the prevailing currents of habit or desire. It is the cost associated with the struggle to assimilate new information.
In a now classic series of studies, Bruner and his colleagues identified some of the strategies people adopt in order to minimize cognitive strain in ‘attention-intensive’ tasks such as problem solving and concept attainment. These strategies may be either efficient or inefficient, depending upon the task demands. For example, when attempting to distinguish exemplars from non-exemplars of a category, people resorted to the criterion of verisimilitude. That is, they tended to prefer cues that had proved useful in the past, and thus had the ‘look of truth’ about them, regardless of their present utility. This was part of a more general strategy called persistence-forecasting.
Though it continues to spring surprises, our world contains a high degree of regularity. In this respect, persistence-forecasting is an extremely adaptive way of applying prepackaged solutions to recurring problems. “What we lose in terms of efficiency or elegance of strategies employed for testing familiar hypotheses, we probably gain back by virtue of the fact that in most things persistence-forecasting does far better for us with less effort than most other forms of problem solution. It is only in unconventional or unusual situations that such an approach proves costly” (Bruner et al., 1956, p. 112).
In summary, reluctant rationality—the avoidance of cognitive strain—is likely to lead to an excessive reliance on what appear to be familiar cues and to an overready application of well-tried problem solutions. It therefore functions, as does imperfect rationality, to direct our thoughts along well-trodden rather than new pathways. And, like bounded rationality (to which it is closely related), it restricts potentially profitable explorations of the problem configuration.
3.5. Irrationality and the cognitive ‘backlash’
Part of the interest in these less-than-perfect varieties of human rationality arose as a cognitive ‘backlash’ to earlier motivational interpretations of human fallibility. This work has shown that many of the sources of human error lie not in the darker irrational aspects of our nature, but in “honest, unemotional thought processes” (Fischhoff, 1986). So what is there left for irrationality to explain? Some theorists, notably Nisbett and Ross (1980), would claim almost nothing. Even the most apparently irrational tendencies like racial prejudice can be explained by nonmotivational factors such as the fundamental attribution error, people’s readiness to overattribute the behaviour of others to dispositional causes, thus ignoring the influence of situational factors such as role or context (see Fiske & Taylor, 1984, for a detailed account of this phenomenon).
Given that even psychologists can fall prey to such fundamental errors and so impute more irrationality to humankind than is justified, are there sufficient grounds for abandoning the notion of irrational mistakes altogether? While the swing away from what was once a catch-all category is a move in the right direction, it would be unwise to allow the pendulum of psychological fashion to swing too far towards the ‘cognitive-only’ extreme. There still remain a number of well-documented human aberrations that cannot be readily explained by cognitive biases alone.
For example, Janis’s (1972) account of the groupthink syndrome clearly shows that group dynamics can introduce genuine irrationality into the planning process. How else could one account for the way in which small, elite policy-making groups (e.g., the architects of the Cuban Bay of Pigs fiasco in 1961) conspire to repress adverse indications or the excessive confidence shown by these planners in the rightness of their decisions? Indeed, a definition of irrational behaviour must include something like the wilful suppression of information indicating that a particular course of action could only end in disaster. In this respect, Kennedy’s advisers behaved no less irrationally than the would-be bird men who jump off high buildings with wings attached to their arms. The tragedy of the Somme in 1916 or the fall of Singapore in 1942 suggest similar processes operating among groups of military planners. Such blunders, as Dixon (1976) demonstrated in his analysis of military incompetence, require the involvement of both motivational and cognitive explanations. We will return to this question in Chapter 7 when discussing the distinction between errors and violations.