
The Science of Language


by Noam Chomsky


  Theory-general conceptions of simplicity continue, as they have for centuries, to guide the scientist's (not a child's mind's) construction of theories of various sorts for various domains, including, of course, the linguist's theories of linguistic phenomena. And in the theory-general domain, it is hardly likely that nature has provided the scientist with an automatic selection device that does the job of coming up with a good theory of whatever phenomena the scientist aims to describe and explain. We do seem to have something; we have what Descartes called “the light of nature,” and what Chomsky calls the “science-forming capacity.” It is a gift given to humans alone, so far as we know, although not a gift to be attributed to God, as Descartes suggested. It is somehow written into our bio-physical-computational natures. Because of this capacity, we can exercise what Peirce called “abduction” and what contemporary philosophers are fond of calling “inference to the best explanation.” It is very unlike other kinds of inference; it is more like good guessing. Probably some internal operation that seeks simplicity of some sort or sorts is a part of it. In any case, with it and other mental contributors, we miraculously but typically manage to converge on what counts as the better or improved description or explanation for some set of phenomena.

  Appendix X: Hume on the missing shade of blue and related matters

  Getting a better way to look at Hume's missing shade of blue problem requires abandoning Hume's very strong empiricist principles. His color problem, and the more general issue of novel experience and novel judgment, can only be dealt with satisfactorily by appealing (as Chomsky suggests) to theories of the internal systems that yield the kinds of ways we can cognize, and the limits that those systems set. It is clear that Hume was aware of the general issue; as indicated, he recognized that the mind can (and does) understand and make judgments about novel moral circumstances. It is also clear that he recognized that the limits of our capacities to understand and experience must be set by internally (and – while he did not like the fact – innately) determined ‘instincts.’ Further, he thought that understanding how these instincts work lay outside human powers. But on that, he was quite obviously wrong. Through computational sciences of the mind, they are now coming to be understood. As Chomsky emphasizes elsewhere in our discussions, one of the aims of cognitive science is coming to understand the natures of these cognitive instincts.

  Note that modern theories of color and other visual artifacts (borders, shading, depth . . .) assume that they are products of internal machinery that both yields and, by having specific ranges and domains, sets limits on what can be sensed by the human visual system. (Bees can and do respond to photon input in the ultraviolet energy range with what we must assume are some kinds of internal visual representation, presumably something like colors. We, however, cannot respond to, nor produce [‘represent’] colors as a result of this kind of stimulation.)1 While these systems are not recursive in the way that the language system is and do not yield discrete infinities of output, it is still plausible to speak of the color system of the human mind ‘generating’ colors and other visual artifacts by relying on algorithms that have no ‘gaps’ of the sort Hume pointed to in their ranges, nor in their output domains. They have no problems with novel input and pose no puzzles about how a novel output could be produced. Hume's specific puzzle supposes novel output without novel input, of course, but it is not at all clear how he could even pose his puzzle in actual cases. One reason, as we now know, is that the human visual system is capable of producing between 7.5 and 10 million different colors – that is, yielding discriminable combinations of hue, brightness, and saturation. What, then, would count as a Humean unique color? How would it be specified or individuated without a well-developed theory of what the human visual system can produce? How would one decide whether a person's system was or was not producing that specific color when presented with an array of closely matched stimuli? How does one take into account fatigue, color blindness, accommodation, etc.? 
Hume would now be able to answer these questions, but, unfortunately for his empiricist views and his reluctance to believe that one can investigate mental instincts, he could answer them – and pose them in a reasonable way – only because a plausible theory of the internal instinct that yields colors is in place. Given the theory and its success, one can even ask how seriously we should take philosophical thought experiments like Hume's. Surely at this stage, the existing theory counts as a better guide to which questions about what we can see and experience are reasonable to continue to raise.

  Hume's insight that our cognitive efforts are largely a matter of instinct now has a great deal of evidence in its favor. Existing theories of vision and language indicate that he was on the right track in pointing to instinct as the source of our mental operations – pointing, that is, to the automatically developing biophysical machinery that makes discrimination in various modalities, and judgment, possible. The point generalizes: we can engage in science at all only because we can rely on some kind of ‘instinct’ that offers us what Peirce labeled “abduction.” Chomsky makes this point in the main text.

  There are other lessons in this detour into color. An obvious one is that the internalist approach – approaching the issue of what the human cognitive system can or cannot do, in this and likely other cases, by looking inside the head and constructing a theory of how a system operates and develops – is supported by points like these. Another is that theory, with the abstraction and simplification that are characteristic of and likely necessary to theory-construction, trumps lists and compilations of ‘raw data.’ Data can be understood – and, as the color case indicates, really only gathered – when a theory that is making progress is in place. Further, as with language, so too with color: commonsense views of both not only can be, but are in fact, misleading. If you want to know what a language or color is – or rather, want an objective and theoretically viable conception of a language or a color – look to how the best extant theories individuate them. In the case of a language, look to how to specify an I-language; in the case of color, look to triples of hue, brightness, and saturation – or rather, to the internalist theory that makes these the dimensions along which colors vary.
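  The idea that a color is individuated by a triple along the dimensions of hue, saturation, and brightness can be made concrete with a small sketch. The HSV color space of Python's standard `colorsys` module is used here merely as one common computational stand-in for those three dimensions (with “value” approximating brightness); the function name `individuate` is illustrative, not a term from any theory discussed in the text.

```python
# A minimal sketch: individuating a color by a (hue, saturation, brightness) triple.
# HSV, via the standard-library colorsys module, is one conventional computational
# approximation of the three dimensions named in the text; 'individuate' is our label.
import colorsys

def individuate(r, g, b):
    """Map an RGB stimulus value (each component in [0, 1]) to an HSV triple."""
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return round(h, 3), round(s, 3), round(v, 3)

print(individuate(1.0, 0.0, 0.0))  # pure red: hue 0.0, fully saturated, full brightness
print(individuate(0.5, 0.5, 0.5))  # mid grey: zero saturation, half brightness
```

Two stimuli count as the ‘same color,’ on this way of individuating, exactly when they map to the same triple – which is the kind of theoretically grounded criterion a Humean thought experiment about a ‘missing shade’ would need.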

  1 No one should assume that there is a one-to-one matching of color experiences and spectral inputs. In fact, one of the most convincing reasons to construct a theory of the internal operations of the visual system is that they seem to modify and ‘add’ a great deal to the input, to the extent that one should abandon the idea that vision somehow accurately ‘represents’ input and its distal cause/source. For example, three ‘monochromatic’ (single-wavelength) light sources of any of a range of different wavelengths (so long as they provide light within the normal input range of each of the three human cone systems) can be varied in their intensity and combined to produce experience of any spectral color. Three fixed wavelengths, any color. If so, at some risk of confusion, we can say that colors are in the head. They are because they are ‘produced’ there. This commits one to something like a ‘projectivist’ view according to which what and how we see owes a great deal to what the mind contributes to our vision. Chomsky's view of the roles of human concepts and the language faculty is a version of projectivism. The ways in which we experience are due to a large extent to the ways in which we can experience, and these depend essentially on what our various internal systems provide in the way of ‘content,’ much of which must be innate.
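  The metamer fact in this footnote – three fixed-wavelength primaries, varied only in intensity, matching any spectral color – amounts to solving a small linear system: find the primary intensities that reproduce a target light's excitation of the three cone types. The sketch below illustrates only the arithmetic; the cone-sensitivity numbers are invented for the example, not measured human values.

```python
# A sketch of trichromatic matching: choose intensities for three fixed primaries so
# that the L, M, and S cones are excited exactly as they would be by a target light.
# The matrix entries below are made-up illustrative values, not real cone data.

def det3(m):
    """Determinant of a 3x3 matrix given as a list of three rows."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def match_intensities(cones, target):
    """Solve cones @ x = target by Cramer's rule; x gives the primary intensities."""
    d = det3(cones)
    intensities = []
    for j in range(3):
        mj = [row[:] for row in cones]        # copy, then swap in the target column
        for i in range(3):
            mj[i][j] = target[i]
        intensities.append(det3(mj) / d)
    return intensities

# Rows: L, M, S cone response per unit intensity of each of the three primaries.
CONES = [[0.80, 0.30, 0.05],
         [0.40, 0.70, 0.10],
         [0.02, 0.10, 0.90]]

# Cone excitations produced by some target light (constructed here so that the
# mixture with intensities 1.0, 0.5, 0.2 reproduces them exactly).
target = [0.96, 0.77, 0.25]
print(match_intensities(CONES, target))  # approximately [1.0, 0.5, 0.2]
```

The point of the sketch is the footnote's: the match is fixed entirely by internal cone responses, not by any resemblance between the mixed light and the target's spectrum – which is why two physically different lights can be the same color.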

  Appendix XI: Syntax, semantics, pragmatics, non-Chomskyan and Chomskyan

  The proposal offered here will puzzle many. Chomsky proposes treating semantics as a variety of syntax. Or, to put it a different way: a theory of linguistic meaning is a theory of what is in the head (and of how it can configure experience). In fact, as other appendices such as VI point out, his view is stronger still: what is called “linguistic semantics” or “formal semantics” is syntax, which is for Chomsky the study of (linguistic) symbols inside the head, intensionally (theoretically) described and explained. Semantics in the traditional referential sense probably does not exist. Reference – a form of human action – appears to be beyond the reach of science.

  By way of background, and to settle on the relevant ways to understand the terms “syntax,” “semantics,” and “pragmatics,” I review Charles Morris's (1938) now more-or-less standard distinction between syntax, semantics, and pragmatics, and then take up Chomsky's modifications of it. Focusing first on the standard view, and especially on contemporary understandings of it, allows us to see why Chomsky's proposal seems surprising, and also allows me to highlight the modifications Chomsky proposes.

  Morris offered his distinctions – derived to a large extent from distinctions Charles Sanders Peirce advanced before him and that Carnap and others were advancing in the 1920s and 1930s – as a contribution to what he thought of as the scientific study of signs or symbols. He suggested that syntax be understood as the study of what might be called the intrinsic properties of signs, those that are internal to them. This could include at least some relational properties, such as ‘after’ said of a sign that follows another, where some ordering is specified (temporal, left–right . . .). Sometimes sets of signs and their relevant properties are both stated (lists, etc.) and created – as with the syntactic items found in formal logical systems. Semantics is the study of how such symbols relate to ‘things’ and sets of things. Semantics focuses, then, on syntax and some set of objects and their states. It appears to be a two-term relationship, although Frege and others made it a three-term relationship between sign, sense, and object(s). Pragmatics includes still another entity, the speaker. Pragmatics deals with the use of signs by a speaker to deal with ‘things.’ Morris simply assumed that the signs he had in mind are marks on a page (orthography) or perhaps sounds thought of as somehow ‘out there.’ This is a common assumption among logicians and others who invent and employ symbol systems. Natural language symbols, of course, are in the head.

  Formal logic, and logicians’ thoughts about it and its aim, played an important role in shaping many researchers’ views of signs and how they operate. Consider a formally defined set of symbols such as those that appear in first-order predicate logic. A first-order logic text stipulates that various kinds of marks appearing in the textbook – for example, capital and small letters in some font or another (a, b, c, . . . P, Q, R . . .), parentheses, perhaps some specially created marks such as ├, ≡, or the familiar tilde (~) for a negation operator – constitute the syntax of the calculus. The usual aim is to make the semantically relevant roles of the stipulated signs clear and explicit: some logic text, for example, might stipulate that the complex sign ‘(x)Fx’ is to be read as a universal quantifier ‘(x)’ appearing before the predicate sign ‘F’ and a variable sign ‘x,’ the ‘Fx’ constituting an “open sentence” with a variable, and the whole, with the quantifier, a proposition/statement to the effect that F is a property that all individuals x have. Generally speaking, the signs chosen are arbitrary and the reasons for their choices are transparent: logicians do not care much about the niceties of syntax, so long as the stipulated elements are ‘perspicuous’ in doing their job. Their job is to aid semantics as logicians conceive it. Logicians put together some syntax that can highlight the properties and relations that they take to be semantically important. They are primarily interested in truth and reference and the truth-preservingness of various inferences and argument structures. The signs in terms of which a calculation is carried out are designed to help ensure explicitness and provide a way to avoid ambiguities. The users of signs are typically ignored.
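  The stipulative character of a logic text's syntax and semantics can be sketched in a few lines of code. Nothing below comes from any particular textbook: the class and function names (`Pred`, `Forall`, `evaluate`) are invented for illustration, and only the universally quantified case is shown. The point is the division of labor the text describes: the ‘signs’ are arbitrary chosen structures, and the semantics is a stipulated mapping from those signs onto a domain of objects.

```python
# A minimal sketch of the syntax/semantics split in a logic text: syntax is a chosen
# set of sign-structures; semantics maps them onto a domain of objects. All names here
# (Pred, Forall, evaluate) are illustrative inventions, not from any textbook.
from dataclasses import dataclass

@dataclass
class Pred:          # an "open sentence" Fx: a predicate letter applied to a variable
    letter: str
    var: str

@dataclass
class Forall:        # '(x)Fx': a universal quantifier prefixed to an open sentence
    var: str
    body: Pred

def evaluate(formula, domain, interpretation):
    """Truth in a model: the stipulated relation between signs and sets of things."""
    if isinstance(formula, Forall):
        extension = interpretation[formula.body.letter]  # the set assigned to 'F'
        return all(individual in extension for individual in domain)
    raise ValueError("only the universally quantified case is sketched here")

# '(x)Fx' is true iff every individual in the domain is in the extension of F.
model_domain = {1, 2, 3}
print(evaluate(Forall("x", Pred("F", "x")), model_domain, {"F": {1, 2, 3}}))  # True
print(evaluate(Forall("x", Pred("F", "x")), model_domain, {"F": {1, 2}}))     # False
```

Note that the marks themselves do no work: any other symbols would serve, so long as the mapping to the domain is explicit and unambiguous – exactly the attitude toward syntax attributed to logicians above.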

  With this kind of focus, the standard view of semantics becomes the study of signs syntactically characterized, but conceived of as items outside the head, and treated in terms of their (supposed) relations to things and circumstances ‘outside’ the sign, usually thought of as things and circumstances in the world or perhaps in a model. Semantic discussion generally, then, focuses on what traditionally have been called matters of truth (for sentences) and reference (for terms), thus on aspects of what philosophers call “intentionality.”

  As mentioned in another connection, many who do natural language semantics work within a picture of how a semantic theory should be constructed introduced by Gottlob Frege's efforts, at the end of the nineteenth century and into the twentieth, to construct a semantics for mathematics. Frege introduced a third element besides sign and circumstances and things (for him, ‘entities’ in a world of mathematical abstract entities). He introduced what he called “senses,” conceived of as mediating between words and things. His reason for introducing them depended on the observation that what he called “proper names” (which in his “On Sense and Reference” included any singular term – a term that refers to a single entity – and thus included definite descriptions too) could have the same referent while differing in meaning, or what he called “sense.” For example, the singular terms “the morning star” and “the evening star” are both said to refer to Venus, but to differ in meaning, or sense. A mathematical example might note that “√9” and “3” have the same referent, but again differ in sense. Frege viewed a sense as an abstract object. Others turned it into a psychological entity (Fodor 1998) or into a function from a sign to a referent, offering mildly different views about what that function might be, and what it involves. Introducing senses complicated matters a bit, but the primary focus remained as before: semantics studies a relation (perhaps mediated, perhaps not) between word and ‘things’ – perhaps abstract, perhaps concrete.

  Again, as remarked earlier, Frege himself seems to have had serious doubts about applying his picture of a semantics for mathematics to natural languages. It is easy enough to see why. He assumed that within a community, a sign expresses a unique sense (no ambiguity can be allowed), and that each sense “determines” a unique referent (in the case of a sentence, a truth value). Ignoring senses, semantics-as-usual assumes a sign–thing relationship of some determinate sort. Nothing like these one–one mappings of sign to referent(s) is met in the uses of natural languages, even though the conditions for such mappings are reasonably closely honored in the practices of mathematicians.

  It is a very different matter with Chomsky's view of natural language syntax and semantics. The basic assumptions about semantics outlined above remain more or less in place, but only with strict qualifications. Natural language syntax deals with properties internal to ‘signs,’ as usually assumed. However, these signs are inside the head, and their syntax is not the syntax of formally constructed systems. Signs have something to do with meaning; but meaning turns out to be non-relational, and meanings and the signs themselves remain inside the head – even ‘inside’ the sign, in the form of semantic features. Clearly, the study of signs is not the study of marks on a page (nor of those supposed entities, public linguistic sounds), but of items in the mind. Their study is not the trivial orthographic study of chosen marks on a page and their invented combinatory operations, nor – for that matter – the study of chosen sets of binary codings in a machine ‘language.’ Rather, their syntactic study is a form of naturalistic study of the varieties of state/event that figure in linguistic computations – that is, of anything involved in yielding sound–meaning pairs. Such study reveals that the relevant kinds of signs are found in human heads. Perhaps there are aspects of them in the heads of other organisms, as discussed earlier, but there must also be distinctively human phonological and semantic features of ‘words’ and/or other elements. Neither phonological/phonetic nor ‘semantic’ features are referential. The standard view of the semantic study of signs invites thinking in terms of intentionality, so that a sign is a sign of something, and saying what the ‘semantic value’ or ‘content’ of a sign is amounts to saying what its “referent” is, where the referent is distinct from the sign. Chomsky holds instead that it is not only perfectly possible to specify what he calls the meaning of a sign without introducing anything like standard semantics but that, given all the evidence against the existence of anything like a referential semantics for human languages (see the text and Appendix VI), it is the only way to proceed.

  A plausible way to conceive of what he proposes is to say that he adopts something like Frege's notion of a sense, but denies that the sense or meaning of a sign/expression is a separate entity – perhaps an abstract entity, as Frege seems to have believed. A sense is instead intrinsic to the sign/expression itself; the sign is located inside the head; and the sign and its sense can be studied by naturalistic scientific means. A sign is in fact a mental entity – a state/event located in the head that figures in a linguistic computation and provides ‘information’ with its features to other systems. And meaning or ‘semantic’ features, plus phonological features (and perhaps formal ones), are not just features of internal linguistic signs; they constitute such internal signs: semantic features are partially constitutive of lexical items. They are the features that, as a result of a linguistic derivation/computation, end up at the “semantic interface” (SEM or, sometimes, “LF” for “logical form”), where they constitute the semantic ‘information’ – what can be called the “internal” or “intrinsic content” – of a sentence/expression. Or, to put it in a Fregean way: they serve as modes of presentation.

 
