Content and Consciousness


by Daniel C. Dennett


  I can imagine very bizarre outcomes of imaginary actions. I can imagine, for example, picking up a teacup and moulding it in a twinkling into a live rabbit. The fact that, in problem solving by pondering, these unusual outcomes are excluded from the imagination – even though I may imagine an outcome which is not right – needs explanation. Imagining a course of action does not include the outcome automatically if there is anything new or puzzling involved. The mere fact that imagination is neither a direct transcription of something earlier experienced (in which case it might be stored and then rerun like a film) nor a completely disconnected sequence of ‘imagery’ must mean that it proceeds in a regulated way, guided by stored information on experience in general. Although we cannot see or introspect this general knowledge working to guide our imaginings, we can infer its existence from what we are aware1 of when we imagine.

  The fact that we have no introspective access to these internal operations, whatever they are, is obscured by the fact that we do have introspective access to some operations, or, to put it better: while engaged in problem solving we are aware1 of a series of things prior to arriving at a conclusion, and we can often, on the basis of this awareness1, divide our problem solving into a sequence of operations or steps. Reaching a conclusion is something that happens, that occurs before or after other things occur, and figuring out the answer to a problem takes time. When one is asked how one figured out the answer, one can often give a list of steps, e.g., ‘first I divided both sides by two, and then I saw that the left side was a prime …’. What one is doing when one reports these steps is by no means obvious. Are the operations reported in some way atomic, or can they be analysed into further operations? Suppose I tell you I first divided eight by two. If you then ask how I did this, I will be left speechless: the operation had no introspectible parts for me, but does that mean it could have no further parts, discoverable by some other sort of analysis?

  Our inability to analyse introspectively our own problem-solving activities below a certain level of simplicity strongly suggests an analogy with certain sorts of problem-solving computer programmes. In the field of ‘computer simulation of cognitive processes’,6 the professed object is to get a computer to solve a problem ‘the same way’ a human being solves it, and the typical procedure is to construct one’s computer programme so that it prints out a play-by-play of its operations in the course of searching for a solution to a problem. This ‘machine trace’ or ‘programme trace’ is then compared with the ‘protocol statements’ of a human subject or subjects describing their own efforts to solve the same problem. The machine trace of the solution of a particular problem may report many steps – particularly the mindless trial-and-error sort of series often called ‘brute force’ computing – that do not appear in the subject’s protocol, and in fact are explicitly denied by the subject, e.g., ‘I certainly didn’t methodically check each piece on the chessboard before concluding my rook was unguarded.’ What can be concluded from these dissimilarities? Some critics have wanted to conclude that this is evidence that the computer and the subject are using very different methods, or that their computations involve different processes, but this does not follow. In the case of the computer, there is a certain limit to the depth of analysis of the print-out, determined by the language of the print-out. Ordinarily the print-out is in a high-order language rather than in basic machine language, and hence the computer is unequipped to report the truly atomic steps of its computations, the opening and closing of ‘logic gates’, for example. Is there a similar limit to the depth of analysis in the human protocol? It is tempting to suppose that when the subject, on introspection, finds that addition of single digits or some other operation is simply unanalysable for him – an atomic operation lacking introspectible parts – he has reached the limit of analysis determined by the ‘language’ in which he is programmed for these particular tasks. It would not follow from this supposition that addition of digits is, for human beings, an unanalysable atomic process rather than the complex amalgam of operations it is for a digital computer, but only that it is, for people, an unanalysable activity: they are not aware1 of any deeper operations. The human print-out capacity in this case might just not go deep enough to reveal the ‘brute force’ computing being done in the brain.
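  To make the point concrete, here is a minimal sketch (in Python; it is no part of the text, and every name and rule in it is invented for illustration) of how a programme’s ‘print-out’ can be pitched far above the brute-force steps it actually performs: the piece-by-piece search leaves no record, and the trace reports only the high-level conclusion a protocol statement would mention.

```python
# Toy illustration (invented names): the 'trace' a programme emits can sit far
# above the brute-force steps it actually performs.

def defends(board, from_square, target_square):
    # Stand-in for real move generation: pretend friendly (upper-case) pieces
    # defend squares exactly one file away.
    return board[from_square].isupper() and abs(ord(from_square[0]) - ord(target_square[0])) == 1

def rook_is_unguarded(board, rook_square, trace):
    """Report whether the rook on rook_square is defended by any friendly piece."""
    for square, _piece in board.items():                       # methodical piece-by-piece search
        if square != rook_square and defends(board, square, rook_square):
            return False                                        # none of this search reaches the trace
    trace.append("I saw at once that my rook was unguarded")    # the only 'print-out' is high-level
    return True

board = {"a1": "R", "c2": "P", "e5": "p"}   # 'R': my rook, 'P': my pawn, 'p': opponent's pawn
trace = []
rook_is_unguarded(board, "a1", trace)
print(trace)   # ['I saw at once that my rook was unguarded'] – the search itself leaves no record
```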

  The point becomes clearer when one considers the problem of ‘intuition’. Intuition is often contrasted by the workers in the field of computer simulation with brute force methods of solution, and the simulators are somewhat in the dark about how one could even begin to build intuition into a programme. The subject’s protocol to the effect that he just ‘caught on in a flash’ is seen by some to be a stymying indication, but such a protocol could be a case of ‘print-out’ in a language far removed from the basic operations. A quixotic but illuminating exercise would be to programme a computer to solve certain problems without providing any print-out capacity except for the standard phrase, accompanying each solution: ‘It just came to me, that’s all’. Would this not be building intuition into a programme? Intuition, after all, is not a particular method of deduction or induction; to speak of intuition is to deny that one knows how one arrived at the answer, and the truth of this denial is compatible with one’s having arrived at the answer by any method or process at all, including ‘unconscious’ brute force computing. Psychologists will never discover a hidden process with the characteristic hallmarks of human intuition, because intuition has no hallmarks.
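  The quixotic exercise is in fact easy to carry out. The sketch below (Python again, purely illustrative, with an arbitrary brute-force task standing in for whatever problem one likes) solves its problem by exhaustive trial and error, yet its only print-out capacity is the stock phrase.

```python
# A sketch of the exercise described above: a solver with no print-out capacity
# beyond the standard phrase. The task (finding a factor by trial division) is
# arbitrary; the point is that the brute-force search leaves no protocol behind.

def solve_by_intuition(n):
    """Return a non-trivial factor of n, saying nothing about how it was found."""
    for candidate in range(2, n):             # exhaustive, utterly 'unintuitive' trial and error
        if n % candidate == 0:
            print("It just came to me, that's all.")
            return candidate
    print("It just came to me, that's all.")  # even failure gets the same report
    return None                               # n was prime

print(solve_by_intuition(8633))               # prints the phrase, then 89
```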

  The analogy between introspective protocol statements and machine trace print-outs is illuminating, but imperfect. The link between the internal operations of information processing and human introspection is much looser than that between any computer programme so far developed and its machine trace. For one thing, as we noticed in Chapter VI, a human introspector can be enticed into speculating, and the line between introspection and speculation is often hard to discern. When the chess player says something like ‘it was the asymmetry of his bishops that gave me the clue’ he is no longer just recounting what ‘went through his head’, but putting a fallible interpretation on it. Moreover, the access a person has to the information processing he is doing varies from time to time. Consider the following brace of examples. In case A I walk into the kitchen, pick up an apple and bite into it. When asked why, I remark with surprise ‘Oh! I wasn’t really aware that I had picked up the apple at all. I don’t know why I did.’ In case B I walk into the kitchen, see the apple, say to myself: ‘That is a nice apple I have there, and I won’t spoil my supper, and I like apples, so I think I’ll just pick it up and eat it.’ Here, when I am asked about my action, I have quite an elaborate protocol to present. But in both cases we can be sure that approximately the same information processing went on, including a lot that did not enter into my protocol in case B. In both cases I would not have picked up the apple had I been in someone else’s house, nor would I have bitten into a raw egg, nor would I have eaten the apple had I known it was time for dinner. It follows that either the appropriateness of my behaviour is an immense coincidence or a great deal of information must have been processed of which I can give no account in the protocol, e.g., that apples are not poisonous, that it is socially acceptable to eat apples before dark, and so on virtually ad infinitum. This is not to say that all this information need have been processed at this moment, but that earlier processing has prepared me for the appropriate processing I now perform.

  Information need not come to the fore, need not cross the awareness line, for it to contribute to the producing of a conclusion or an inference. If I have stored the information that tomorrow is Friday and I see on the calendar that on Friday we are dining out, I can say almost immediately that tomorrow we are dining out, without running through the argument in my head. But I cannot do this just as soon as I see the calendar; the information from the calendar must be brought to the storage and operational areas, whatever they may be, which produce the ‘conclusion’ that I can say. Whatever one wants to call these subconscious productions of new information, their operation is essentially logical and they must occur if behaviour control is not sheer magic.
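  Such a subconscious production of new information can be modelled, for what the comparison is worth, as a trivial forward-chaining step over stored propositions. The sketch below (Python, with a representation invented for the occasion) illustrates only the logical shape of the operation, not a claim about how the brain stores anything.

```python
# Illustration only: the 'essentially logical' operation sketched above, rendered
# as a trivial forward-chaining step over stored propositions. The representation
# (plain strings and a rule triple) is invented for the example.

stored = {"tomorrow is Friday"}    # information already in storage
rules = [("tomorrow is Friday", "on Friday we are dining out", "tomorrow we are dining out")]

def absorb(new_fact, stored, rules):
    """Store a new piece of information and return whatever conclusions it licenses."""
    stored.add(new_fact)
    return [conclusion for p, q, conclusion in rules if p in stored and q in stored]

# Seeing the calendar supplies the second premiss; the 'conclusion' is produced
# without any argument being run through in the head.
print(absorb("on Friday we are dining out", stored, rules))   # ['tomorrow we are dining out']
```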

  These operations must occur in animals as well as human beings. Consider the behaviour of certain low-nesting birds that feign a broken wing when a fox appears in order to lead it away from the nest where the unprotected chicks are. The bird’s behaviour may well be only an ‘unreasoning’ tropism, a rigid, inherited routine, but it would not work, and hence would never have become genetically established, if the fox could not act rationally, unless, of course, the fox’s behaviour is also pure tropism and the entire performance is a stately, ritual dance instinctively performed by hungry predator and alarmed bird, with no benefit accruing to the predator. One might be tempted to adorn the fox’s behaviour and internal cerebral activity with the postulated ‘mental process’: ‘I like to eat birds, therefore I like to eat limping birds; I cannot catch flying birds, but this bird is not flying; it is limping, therefore …’, but this is silly. In the first place, dumb animals have no language and hence cannot be aware1 of such thoughts, and in the second place it would be most bizarre even for a person to go through such a tortuous bit of rehearsing to himself. The thinking of the thoughts, the saying of the words, is not what is necessary, but still the verbal formulae do exhibit, incompletely and vaguely, what must be going on in some internal operations.

  In saying these operations are logical, one must be careful not to suppose that the operations are cases of rigorous, foolproof deduction, governed by the ‘laws of logic’. In computers, initial design and subsequent careful programming can ensure that no operations occur that are not sanctioned by the laws of mathematics and logic, but apparently the organization of the brain is not similarly designed. We can jump to false conclusions and miscalculate arithmetical problems. In particular, there is no need to suppose that the ‘logic gates’ of a digital computer have their counterparts in the brain, at the ‘machine language’ level. It is at the gross level of solving problems, plotting trajectories, or generating prime numbers that a computer’s operations can simulate, to some degree, the activities of human beings, and the fact that they can do this goes some way towards showing that the operations that make up the gross activities of the computer are similar to the operations ‘behind’ human problem solving, but there need not be any binary system, for example, discoverable in the brain. Our ability to ‘follow’ the rules of logic in processing information need not be due to any inherited structure ensuring sound, consistent information processing (one thinks here of innate knowledge of a priori principles); we may develop our logical acumen inductively, as part of the development of appropriate afferent-efferent coordination. Part of the way things are is the way things logically are, and if our behaviour is to be appropriate to the way things are, it must be produced along logically sound lines.7

  Should we call this internal information processing reasoning, or thinking, or are there some other phenomena that better fit our intuitions? If we prefer to heed the ordinary notion that reasoning is a matter of conscious acts of the mind, a better way to define reasoning would be as awareness1 of an argument sequence leading to a conclusion. The decision is parallel to the decision on whether ‘aware1’ or ‘aware2’ is the notion of awareness. Is introspective access or felicity of behaviour to be the benchmark of reasoning? Consider a mathematician who does a problem in his head without even saying the steps to himself, and when we ask him how he did it, he says ‘I just knew’. Should we say he did the problem without thinking? He can tie his shoe without thinking, so why not solve the problem without thinking? Tying his shoe requires some information processing to go on, and so does solving the problem, and if we decide, implausibly, that this is what deserves the name thinking, then, of course, mute animals can think. If, on the other hand, we restrict thinking to something like ‘consciously reasoning with concepts’, then animals cannot think, since they cannot be aware1 of anything, but also people can do many quite intellectual things without thinking.

  In the latter sense of ‘think’ we can think enthymematically; in the former sense we cannot. When I say ‘It costs only a pound so it can’t be a real antique’, I leave out many steps in the argument; I do not mention information that must have contributed somehow to the production of my conclusion – information about the shrewdness of antique dealers, the law of supply and demand, the going rates for antiques. If I did not know these contributing facts, if they were not stored in me somehow, I would not have been able to arrive at this conclusion. It does not follow from this that the logical steps we write down when we present a formal argument rather than an enthymeme are parallel to distinct operations or events in the brain, but only that the information (including ‘supposed’ information and misinformation) used in each step must have contributed to the organization that produced the conclusion. Writing out the logical steps rigorously is thus not being a biographer of any mental or cerebral events, even if the brain does, on a particular occasion, operate rigorously. Anscombe says, ‘… if Aristotle’s account of the practical syllogism were supposed to describe actual mental processes, it would in general be quite absurd. The interest of the account is that it describes an order which is there whenever actions are done with intentions …’8 An order which is where? It is not an order which there is in our ‘conscious thoughts’ for we need not think them, and this is what Anscombe must mean by saying the account of the practical syllogism does not describe ‘actual mental processes’. Where the order is is in the Intentional characterization of the brain as an information processor, but this need not be a sequential ordering of events and operations.

  XX REASONS AND CAUSES

  We use our reasoning powers not only to solve puzzles but also in what Aristotle called practical reasoning, to guide and determine our actions. In recounting our reasoning, then, we are not always telling how we got a certain solution or conclusion, but often why we decided to do whatever we are doing. The how and why questions can be seen to merge in our ordinary discourse, as when someone asks why I think my answer is the correct solution to a problem, and I respond by telling him how I derived it. This practice of asking and giving one’s reasons plays a central role in our notions of action and responsibility, and indeed in our notion of a person; a person performs actions, and is aware of them and his reasons for them, while bodies (to which sub-personal accounts are appropriate) only undergo motions. The role of reason-giving will be examined in detail in Chapter IX; first our capacity to engage in the practice must be examined. We have seen that often, when a person is asked for a ‘protocol’, his account drifts imperceptibly away from pure introspection into speculation, as he is tempted to interpret that of which he was aware1 instead of just recalling it. This tendency produces significant confusions in our notion of reason-giving.

  The practice of asking a man for his reasons is accompanied and explained by the doctrine that a man is the best authority on his own reasons, and even perhaps a logically insuperable authority. What accompanies the notion of insuperable authority in turn is the notion of infallible access. Does a man have infallible access to his own reasons? The infallibility discovered and explained in Chapter V was only an infallibility of expression of that of which one was aware1, and not at all an infallibility of detection of inner processes, events or causes. If reasoning, then, is a process effective in determining our actions, the sort of infallible access described in Chapter V will not suffice to give us knowledge of our reasons that is immune to error.

  The capacity for awareness1 provides for a sort of knowledge that is immune to error, which I shall call non-inferential knowledge. One is aware1 of, and thus knows non-inferentially, what information one is in receipt of – but not whether this information is true or false. That is, one knows non-inferentially that one seems to see a man approaching, or seems to have a bent knee, for signals (veridical or not) to that effect cross the awareness line. We cannot ‘misidentify’ the signal (e.g., as signalling an elbow itch rather than a bent knee), but we can go on to interpret the signal as veridical, and then room for more than merely verbal error is introduced. The case of pain is interesting in that a report of pain has, as it were, a built-in ‘seems-to’ operator. When one is aware1 that one has a pain in the foot, the signal to that effect cannot be misidentified and amounts to having a pain in the foot. If it is veridical then one has an injury in one’s foot, but if it is not veridical one still has the pain. To have a pain is to seem to have an injury, so the idiom ‘I seem to have a pain’ contains a redundant and meaningless disclaimer; it amounts to saying ‘I seem to seem to have an injury’.

  When one has non-inferential knowledge of a pain, or of seeming to see a man approaching, one can have inferential knowledge of an injury, or of a man approaching. In some cases the inference is a conscious one. That is, one is first aware1 that one seems to see a man, and then is aware1 that since one seems to see a man most likely there is a man. This happens only in the most unusual circumstances – when, for instance, one is expecting an optical illusion, or suspicious that one may be hallucinating, or just engaging in a thought experiment for philosophical ends. More usually the inference is subconscious, a fait accompli that involves no thinking (one is aware1 of no argument). The distinction, then, is logical; it distinguishes one evidential status from another. It should not be confused with a psychological distinction between inferences we happen to have made consciously, and things we know (regardless of evidential status) without having made any conscious inference. Inferential knowledge is knowledge where there is logical room for an inference, and hence room for more than just verbal error.

 
