
Content and Consciousness


by Daniel C. Dennett


  Projecting the structure of language in this way into the brain and postulating hierarchical assemblies for controlling the production of utterances is, of course, a rather empty trick. No actual mechanisms for doing this work and no discrete anatomical hierarchies are being proposed, but still this projection can provide a fruitful way of looking at things, of ordering the experimental and everyday observations we have about verbal behaviour. For example, we observe errors in verbal behaviour and can now assign the malfunctions responsible for these to specific locales on the map of hierarchies, even though this does not in any real sense tell us where these malfunctions occur in the brain. Stuttering is the sort of mistake we believe to be closest to the muscles, although we are prepared to find much deeper causes for it; lisping and saying ‘twee’ for ‘tree’ would be assigned to fairly stable bits of misprogramming at the phonemic level. Above this would be the malfunctions responsible for spoonerisms and other mistakes in phonemic sequence, then mispronunciations and, still higher, strictly verbal and grammatical errors, including both ‘Freudian slips’ and more permanently entrenched misuses of words and malaprops. The arguments of Chomsky and others would place responsibility for errors in syntactical ordering still higher, since the evidence suggests that the determination of syntactic structure is prior to word choice. Above this level there are only the solecisms that are not, strictly speaking, linguistic errors at all, like saying ‘Oops, dammit’ instead of ‘Please excuse me’. The particular difficulties that aphasics have in finding the word ‘on the tip of their tongues’ during post-stroke recovery (and in Penfield’s remarkable cases of electrically induced partial aphasia8) would be caused by malfunction at the level of oratio recta implementation of oratio obliqua commands.

  Introducing the possibility of inhibition at the various levels, we can formulate a plausible sketch of what it is to talk to oneself, and to think. Sometimes when we say we are thinking we mean in a very strong sense that we are talking to ourselves: whatever is going on is expressed in full sentences, in definite words in a definite order, even in a particular ‘tone of voice’ with particular emphases, and the thinking of these thoughts takes just about as long as saying the words aloud would take. But sometimes our thoughts are not like this; sometimes they are swift, somehow not quite formulated into particular words, and in no tone of voice. In the former case one is tempted to agree with Ryle’s description of thinking: one is talking without moving one’s lips. One can talk, or one can whisper, or one can just move one’s lips; or, one can eliminate even the lip movements and whatever is left is this type of thinking.9 Then one might describe the latter as further eliminating the formulation of temporal sequences of particular words. Something of temporal succession remains but it is not the same as the easily clockable sequence of words in the former case. The physicalist supposition to go along with this ‘introspective’ account is that when one is talking to oneself the situation differs from when one is talking out loud in that the last-rank efferent impulses are inhibited, and that when the efferent activity is inhibited at the level of oratio recta commands, the swifter form of thinking is going on.

  Such a view is plausible, I believe, but it does not lead me to propose an identity of thoughts with these brain processes, even with these brain processes assigned a certain content strikingly like the content we would normally assign to a thought. For immediately the objections would arise that no mechanism has been proposed to make me aware of these neural activities (and I certainly am not aware of these neural activities, while I am aware of my thoughts), and in any event the content of the activities is not at all a discriminable characteristic of them, such as I might be able to ‘intuit’, but merely an artificial determination made by some observing neurologist. These objections betray, I believe, a fundamental misunderstanding of the problem, but they do hold against such a naïve identity theory, which betrays the same misunderstanding. There is a lot more to be done before any sort of an answer should be attempted to the question of what a thought is, and if such a hypothesis about the organization of linguistic behaviour controls is likely to be a part of the answer, there is no reason yet to propose any identities. The hypothesis does give us a very general, Intentionally characterized model of the organization of a ‘speech centre’, which is what we need for the elaboration of our perceiving machine.

  Suppose that the art of making neural net stimulus analysing mechanisms had advanced to such a state that it was feasible, and desirable for some reason, to make a ‘perceiving machine’. Its sense organs could be television cameras (two for binocular overlap), and the output from these cameras could be recoded in any regular way to fit the input requirements of an immense neural net analyser which then fed its output into a ‘speech centre’ computer. The speech centre computer would be programmed to transform the output of the analyser into printed English ‘reports’, like ‘I see a man approaching’.

  It might be worth mentioning that there would be no need for television screens in this machine. Setting up the screens and then monitoring them with some device would simply postpone the activity of the analyser. Since a television output, unlike the output of the eye, is in the form of a sequential stream rather than a simultaneous multi-channel barrage, it would probably be advisable to ‘spread’ the sequence of impulses reporting each complete scanning of the television camera image by time-lags over a bank of inputs in the analyser, so that single scannings are fed in simultaneously, but this is a point of engineering, and not a logical requirement. Similarly, if one did arrange in this way for spreading the television output over an array of inputs, there could be only reasons of engineering (e.g., economy of wiring) for having the array reproduce the image in the camera. Since nothing will be looking at (or photographing) the arrays (no little man in our perceiving machine), there is no need for the pattern of inputs to produce any image or topological analogue of the sense organs’ image. Any stable spreading system could be used.

  The analyser would eventually produce outputs to which one would have to assign significance – by the arduous procedure of checking the multitudinous outputs against the vast variety of scenes set before the cameras and finding regularities between descriptions of the scene and outputs. The trick would then be to programme the ‘speech centre’ computer to take over this job and produce English sentences describing the scenes presented as a transformation of the analyser’s output. Such a task is out of the question at present, but it is plausible to assume that the efficient way of programming the speech centre computer would be to organize it along the hierarchical lines described above. Probably the only remotely feasible way to achieve this would be to build in certain ‘learning’ capacities in the speech centre computer and ‘teach’ it to produce (true) English sentences. The ‘perceiving machine’ that resulted from all this miraculous expertise would, of course, be a pale copy of a human perceiver, since no provision would be made for it to use its ‘perceptions’ for any purpose other than as the basis for verbal reports, nor would the machine be given the capacities to lie about its view, to decide to talk about some other subject, to ask questions, etc. It would simplemindedly reel off reports of what it saw – giving almost Skinnerian verbal responses to its visual stimuli. But it would share one crucial feature with human perceivers: it could not be mistaken about its ‘mental’ states.

  Once such a machine were operational, in what ways could its reports be fallible? First, it would be fairly easy to trick the machine. Presenting it with a moving dummy could result in the report ‘I see a man approaching’, or, for example, the television outputs produced when a man was approaching could be recorded and then fed into the analyser at some time when there was no man approaching, producing something like an hallucination in the machine, or one might say one had ‘hypnotized’ the machine. In these cases the analyser would issue in the same output as for veridical ‘perceptions’ of men approaching. Aside from such trickery there might be malfunctions in the television system or the analyser. This could be guarded against by redundancy measures, but would still be possible, we can suppose.

  Such a malfunction or bit of trickery would result in an analyser output that was mistaken relative to the outside scene. This mistaken output would be expressed, in a false English sentence, by the speech centre computer unless it too made a mistake. Feedback loops and redundancy in the speech centre computer would be designed to correct malfunctions before the actual print-out, or if the malfunction took place in the last rank – the actual printing – such typographical errors could be erased and corrected by further feedback loops. If feedback loops failed to correct speech centre malfunctions there could be ‘verbal’ errors in the final report, but if ‘verbal errors’ are discounted or corrected, whatever analyser output does enter the speech centre will be correctly expressed relative to the rules of language programmed into the computer. Disallowing misuse of language and ‘slips of the tongue’ there is no room for mistakes to occur in the expression of the analyser’s output. That output, whether right or wrong relative to the actual sensory scene, cannot help but be correctly expressed if feedback corrects all verbal errors. Our machine, like any machine, can malfunction at any point inside, but all the possible malfunctions sort themselves into two kinds, depending on where they occur. Uncorrected errors that occur prior to speech centre input are all errors in afferent analysis; any such error will ensure that the ultimate output of analysis (the input to the speech centre) is mistaken relative to what was being analysed: the outside world. Any uncorrected error occurring after initial input into the speech centre will result in a verbal slip, a mistake in expression. Put another way, errors prior to speech centre input make for errors in what is to be expressed by the speech centre, but if the speech centre functions properly, the only possibility of error is relative to the outside world. The key word is ‘expressed’. The perceiving machine as a whole can be said to make reports describing its external ‘visual’ environment, but it does not report or describe the output of the analyser, since that output is not a replica of what is outside, but a report of sorts itself. The speech centre part of our machine does not examine or analyse its input in order to determine its qualities or even its similarities and dissimilarities with other inputs, but rather produces English sentences as expressions of its input. The infallibility, barring verbal slips, of the ‘reports’ of the analyser output is due to the criterion of identity for such output states. What makes an output the output it is is what it goes on to produce in the speech centre, barring correctible speech centre errors, so an output is precisely what it is ‘taken to be’ by the speech centre, regardless of its qualities and characteristics in any physical realization.

  There are, then, two kinds of errors the machine can make. It can be wrong in its analysis of what is actually before its ‘eyes’ (as in illusions, hallucinations and other misidentifications), and it can make only verbal errors in ‘uttering’ its reports of perception, but it cannot misidentify the output which comes from the analyser, which is the same logical state as the speech centre input. In other words, it cannot be mistaken about that which it seems to see. Suppose that instead of making its reports in the form ‘I see a man approaching’ it always wrote ‘I seem to see …’, or ‘it is just as if I were seeing …’. Reports in this form would disavow responsibility for fraudulent input or mistakes in the analyser and hence would be infallible barring only correctable verbal errors.
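  The division of labour described here can be caricatured in a few lines of code. The following toy sketch is not anything the text proposes, and all the names in it are hypothetical illustrations; it merely models a pipeline in which the speech centre expresses whatever output the analyser hands it, so that a tricked analyser yields a report false of the world but still a faultless expression of that output.

```python
# Toy sketch of the perceiving machine's two error kinds (illustrative only).
# The analyser can be wrong about the world; the speech centre, barring
# verbal slips, cannot be wrong about the analyser output it expresses.

def analyser(scene: str) -> str:
    """Afferent analysis: a moving dummy fools it into the same output
    as a real man approaching (the 'trickery' case)."""
    if scene in ("man approaching", "moving dummy"):
        return "man-approaching"
    return "nothing"

def speech_centre(output: str) -> str:
    """Expresses its input; it does not examine or describe it."""
    if output == "man-approaching":
        return "I seem to see a man approaching"
    return "I seem to see nothing"

# Veridical perception and trickery produce identical analyser outputs,
# hence identical reports: the report misdescribes the world in the dummy
# case, yet perfectly expresses the analyser output in both.
report_real = speech_centre(analyser("man approaching"))
report_fake = speech_centre(analyser("moving dummy"))
assert report_real == report_fake
```

The point of the sketch is only structural: any mismatch with the world enters before the speech centre, so once reports are read in the ‘I seem to see …’ idiom, nothing downstream is left to be mistaken about.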

  This should not be taken to mean that the change in the form of words changes what is going on; the switch from the ‘I see …’ idiom to the ‘I seem to see …’ idiom does not ensure that a particular thing is being done (e.g., a report is being made about output rather than about the outside world), but that what is being done is to be interpreted in a certain way. Whatever the form of words, whatever the sequence of printed symbols, what is printed will be an expression of the analyser output; the form of words is just being used as an indicator that one is to discount discrepancies between output and outside world. One could just as well leave reports in the ‘I see …’ form and attach a small sign to the machine, ‘Not Responsible for Fraudulent Input or Errors in Input Analysis’. Carried over to the case of human utterance this point becomes: the immunity to error has nothing to do with the execution of any personal action. An account of a man’s intention, or of what he thinks he is doing, plays no role in explaining introspective certainty; with whatever intention an utterance is made (considered on the personal level), on the sub-personal level it will be an expression of the input into the human speech centre (which receives its input from more sources than just perceptual analysers), and as such it is immune from error relative to the outside scene. In fact, of course, when we intend our utterances to be immune in this way, when we intend, that is, that others judge them in this light, we frame our expressions in the ‘I seem to see …’ idiom. In using this idiom a person is not intentionally expressing the input of his speech centre, for he has no notion of speech centre input at all, most likely; what accounts for the immunity to error is nothing the person does – no personal action, intentional or otherwise – but what is going on in his brain.10

  Using the notion of content ascription, and staying firmly on the sub-personal level of explanation, we can say that a sentence uttered is not a description of a cerebral event, but rather the expression of the event’s content, which, after all, may be itself a description – of the visual field, for example. As an expression, it is subject to verbal errors, but not to misdescription or misidentification.11 He who reads this sentence aloud is not uttering a description of the marks on the page, and although he may make a verbal slip, he cannot commit a factual error, since he is not reporting or describing; for instance, lisping while reading aloud is not saying that there are th’s on the page when in fact there are s’s. The content of an event, or of the logical state of which a physical state is the realization, is not a matter of intrinsic physical characteristics or qualities that could be reported or described, but of functional capacities, including the functional capacity to initiate (barring malfunction) just the utterance or class of utterances that would be said to express the content in some language. Thus the Intentional characterization of an event or state – identifying it, that is, as the event or state having a certain content – fixes its identity in almost the same way as a machine-table description fixes the identity of a logical state. The difference is that an Intentional characterization only alludes to or suggests what a machine-table characterization determines completely: the further succession of states.

  It is now even more tempting than before to identify thoughts and other mental events with certain other things, say Intentionally characterized brain processes given a certain functional location or logical states of the cerebral Turing machine, but still I will resist the temptation. The argument that deflects me from this course is in some respects silly, but can be given enough force to satisfy me that there is no gain in proposing an identity and some danger of confusion. One could argue against the identity that the mental experience or thought is what is reported when the content of a certain cerebral state is expressed. For it is admitted that it is not the cerebral state that is reported and we do say that we report our thoughts and inner experiences, so a thought, being what-is-reported, cannot very well be identical with what-is-expressed. (We do, of course, also talk of people expressing their thoughts, and this takes some if not all of the wind out of the sails of this argument. But we do not express a neural event or state; we express its content, which is hardly the sort of thing one would go to the trouble of identifying with a thought.) Starting from the position that thoughts, being what-is-reported, cannot be identified with anything in the sub-personal story, it would be poor philosophy to argue further that there must really be something, the thought, that is reported when it is true that I am reporting my thoughts. On that argument our perceiving machine would have to have thoughts, or at least thought-analogues, as well, and we have not instructed the engineers to put thoughts in our perceiving machine. There is no entity in the perceiving machine, and by analogy, in the human brain, that would be well referred to by the expression ‘that which is infallibly reported by the final output expression’, and this is the very best of reasons for viewing this expression and its mate, ‘thought’, as non-referential. On the personal, mental language level we still have a variety of dead-end truths, such as the truth that people just can tell what they are thinking, and the truth that what they report are their thoughts. These are truths that deserve to be fused, and then the fact that there should be such truths can be explained at another level, where people, thoughts, experiences and introspective reports are simply not part of the subject matter.

  6

  AWARENESS AND CONSCIOUSNESS

  XIV THE ORDINARY WORDS

  The account of introspective certainty given in the last chapter is the first step in a theory of consciousness or awareness. The infallible reporter in the mind has evaporated, to be replaced at a different level of explanation by the notion of a speech-producing system which is invulnerable to reportorial errors just because it does not ascertain and does not report. This is just a first step, however, for there are more aspects of consciousness than just perceptual consciousness, and more things we do with speech than just make sincere reports of our experience.

  This chapter will be an examination of our concepts of consciousness and awareness with a view not merely to cataloguing confusions and differences in our ordinary terms but also to proposing several artificial reforms in these terms. It is fairly common practice to use ‘consciousness’ and ‘awareness’ as if they were clearly synonymous terms, or at least terms with unproblematic meaning, but I shall argue that these concepts, as revealed in the tangled skein of accepted and dubious usage, are an unhappy conglomerate of a number of separable concepts and that the only way to bring some order and manageability to the task of formulating a theory of consciousness and awareness is to coin some artificial terms to reflect the various functions of the ordinary terms.

 
