The Science of Language


by Noam Chomsky


  One possible way of making sense of how there might be some overlap (as it appeared in the article) but also very considerable difference is to begin by noting that so far as one can tell, human conceptual resources differ little from those of other primates until about age 1, but after that diverge. Some claims made by Elizabeth Spelke (2003, 2004, 2007) and her colleagues suggest this. If such a suggestion is on the right track, we must ask just how human conceptual resources come to diverge – what mechanisms are at work and what triggers or activates them. I doubt anyone has a really good idea, but here is one possibility: perhaps a distinctively human conceptual acquisition mechanism of some sort comes ‘online’ at this age, perhaps sooner. Perhaps that system incorporates a version of Merge, which we have seen on independent grounds to be distinctively human. Perhaps Merge assembles more primitive ‘parts’ that are shared with other creatures. Or perhaps a mechanism that ‘manufactures’ distinctive human concepts was in place before the introduction of language; that would suit Cudworth, among others. But this is speculation.

  Independently of a specific hypothesis, however, it is plausible to speak of mechanisms, and not something like acculturation, as a way to make sense of divergence. There is good reason to think that uniquely human conceptual resources are shared across the whole human population, without regard to culture and environment, and that word acquisition (that is, the pairing of sounds and meanings in lexical items in a way that makes them available to the computational system) is both effortless and automatic – which requires that the conceptual and ‘sound’ resources be operational before association, and even before speech production (to explain comprehension before production). In effect, the sound and meaning (conceptual) resources relevant to language (its articulation and understanding) must be innate, and all that is required for lexical acquisition is the pairing of readily activated concepts and sounds. The issues are taken up in more detail in Appendix V.

  Page 26, On reference; limiting the study (including meaning) to syntax

  Chomsky sees the study of linguistically expressed meaning and sound as a form of syntax. The basic idea is that syntax is the study of the ‘intrinsic’ properties and combinatory principles of ‘signs.’ If you think, as he does, of words and sentences not as what comes from people's mouths (not sounds at all, but frequency- and amplitude-modulated ‘signals’) or marks on a piece of paper or other medium (orthographic marks, for example, and surely irrelevant), but as mental states or events, the study of linguistic syntax becomes the study of the intrinsic properties of what a mentalistic theory of language takes as its subject matter. Syntax becomes, then, a study of linguistic mental states/events in the mind – their properties, how they are ‘computed,’ and what they do at interfaces. Chomsky's approach to language is not just syntactic, but internalist. Granting that, it is still useful to keep in mind that much of the study of language in linguistics so far, and the primary focus of much of Chomsky's own work, is syntax in a narrower sense, where it is understood as the study of the basic (or core) computational (lexical + combinatorial) mechanism. To distinguish this kind of study from other forms of syntactic study, it is often called “narrow syntax”; it includes the study of morphology, phonology, and linguistic meaning where these are thought to be included in the core language computational system and what it yields at its ‘interfaces’ with other mental systems. Core syntax corresponds to what in Hauser, Chomsky and Fitch (2002) is called the “FLN” (“faculty of language, narrow”). “FLB” (faculty of language, broad) includes FLN and the various ‘performance’ systems that constitute the perceptual, articulatory, and “conceptual-intentional” systems on the other side of the core system's interfaces.

  Just how far does linguistic syntax extend? One way to answer is to point out – reasonably, given the areas where naturalistic/scientific research into language has proven successful so far – that one reaches a point in investigating a phenomenon where, while it remains focused on the mind, it ceases to be a study of what is occurring in the language faculty itself (in that mental system, narrowly or broadly conceived). Fodor's study of meaning passes that point; he places a substantial part of the study of linguistically expressed meaning in what he calls a “Language of Thought” (LOT). His concepts go even further afield; they go outside the mind too, which takes them out of the reach of syntax, however broadly conceived; see Appendix VI. Chomsky does not take even the first of Fodor's steps, and for good reason: fobbing off the work of the study of linguistically expressed meaning/semantics onto other systems in the head (relations to things ‘outside’ are excluded from the subject matter of theories by the internalist for other reasons) complicates matters, and apparently unnecessarily. It requires adding to such a theory a detailed account of another system, and of the precise relations (element to element, presumably) between the language system and the other. That is one of Chomsky's points in the discussion immediately below. Chomsky's much more austere account of linguistically expressed meanings locates them in the states, elements, operations, and growth of the core language faculty. And as it happens, that kind of study is possible; see Appendices V and VI. As for studies of the rest of the mind and of relations between the components of mind: these can be included in syntax in a still broader sense, so long as that kind of study excludes from its subject matter things ‘outside’ (whether abstract, as Frege's senses and numbers were, or ‘concrete’), and relations to these, if any.

  Page 27, On Chomsky's view of meaning and interpretation

  The analogy between phonology/phonetics and (internalist) semantics is developed in detail in Chomsky (2000: 175–183). There and elsewhere, he speaks of the language faculty, through the ‘meaning’ (semantic) information it offers at SEM and the complex of resources it brings to that interface, providing ways in which human minds can understand and – where this is an issue (as in cases of perception and thought about the world) – configure our experience and thought about the world. Specifically, he says that “the weakest plausible assumption about the LF [SEM] interface is that the semantic properties of the interface focus attention on selected aspects of the world as it is taken to be by other cognitive systems, and provide intricate and highly specialized perspectives from which to view them, crucially involving human interests and concerns even in the simplest cases” (2000: 125). By “other cognitive systems,” he presumably means – for perception – vision, facial configuration, taste, audition, and the like and – more generally – imagination and other auxiliary systems. The precise status and role of the semantic ‘information’ provided at SEM is unclear. He offers an example that helps a bit, one where the linguistically expressed concept HOUSE (which we can think of as a component of the complex of semantic information offered in a sentence at SEM) plays a role. He continues, “In the case of ‘I painted my house brown’ the semantic features impose an analysis in terms of specific properties of intended design and use, a designated exterior, and indeed far more intricacy.” His point can be put in terms of the kinds of assumptions humans are likely to develop, and the coherence of the discourse and stories that they would be likely to accept, given the conceptual resources that this sentence brings to bear. We would assume that the brown paint went on the outside of the house, for example, and if the sentence appeared in a longish story about how to get to my house, we would be entitled to be upset if the brown paint went on the inside walls, because we would be expecting the outside to be brown. Moreover, with the exception of the realtor's “Do you want to see this house?” (where ‘see’ is read as inspect/take a look at), one cannot see a house while on the inside. HOUSE – sometimes called a “container word” – in sentential complexes like the one here focuses attention on exterior surfaces, not interior ones. Another illustration is found in Locke's discussion of the concept of a person, which is mentioned as the discussion continues immediately below: in effect, the concept PERSON assigns to the persons to which it is used to refer a notion of personal identity of a complex and legally and morally relevant sort. It includes psychic continuity, and it underwrites assigning responsibility to people for acts committed and promises made – and so on. In this connection, Locke says that the concept of a person is a “forensic” one: it is one that is designed for understanding people as agents with commitments (promises and contracts) and responsibilities to meet them.

  Any identity conditions on things to which a person refers that are imposed by the concepts expressed at SEM by natural languages – and those that come about as results of complex interactions between language and other cognitive systems – are unlikely to be well defined and characterizable independent of context, unlike what one would be likely to find – or rather, what scientists strive to maintain – in the concepts for objects and events of mathematics and the natural sciences. Chomsky's brief discussion of the traditional Ship of Theseus thought experiment later (pp. 125–6) illustrates this. When Theseus rebuilds his wooden ship by replacing over time one plank and beam after another and throwing the discarded planks and beams in the dump where his neighbor gathers them and builds a ship that has the same planks and beams in the same configuration as the original ship, we do not assume that the ship built with the discarded parts is Theseus’, even though its composition is the same as the one with which he began. That is because ships and other artifacts portrayed in this kind of way have ownership by specific individuals or quasi-persons such as corporations built into their descriptions. Under other descriptions, telling different stories, we are not sure, or we have different intuitions.

  Philosophers in recent years have constructed thought experiments in which persons – or their bodies or their minds – are fissioned or fused and placed in varying circumstances to explore intuitions about when “we” would say that person P at time t is the same or different from person P’ at a different time. Nothing is settled: intuitions can be pushed one way or another and someone can be persuaded to give a firm answer at one time under one story, and persuaded in another way with a different story. This should be no surprise; commonsense concepts are rich and flexibly used, but still have limitations. The richness and complexity of the commonsense concepts expressed in our natural languages allow them to serve human interests and reach reasonable resolutions to practical problems in a variety of circumstances. But not all: there is no reason to think that a commonsense concept should be able to offer answers to every question posed. There is clear evidence of this in the obvious failures found in trying to put commonsense concepts to use in the sciences. However, they reveal their limits in other ways too, such as the thought experiments mentioned above. There is no disadvantage in any of this. Because of their richness and complexity, commonsense concepts can support the extraordinary degree of flexibility displayed in their application by persons when they use language – a flexibility that has proven very advantageous in the practical domain, although not at all in the scientific and mathematical. And while they yield answers to only some kinds of questions and not others – because, presumably, they are innate products of acquisition mechanisms ‘devoted’ to yielding what they can – this too is an advantage. For if innate, they are readily available even to the young and can thereby enable the child to quickly develop an understanding of people and their actions and things and what they can be expected to do.

  The illustrations of the richness and complexity of commonsense concepts – and Chomsky offers many in his writings – do not say how the conceptual/meaning resources brought to SEM ‘work.’ A compelling naturalistic answer requires a science, one that does not exist and, for various reasons explored elsewhere in the discussion, may never exist. Given the extent to which externalist intuition can distort and mislead an account of perception and thought ‘about the world,’ however, it is worthwhile developing an internalist alternative picture of how they ‘do their job.’ Chomsky makes some suggestions when he speaks of (internal) sentences yielding “perspectives,” cognitive ‘tools’ with which a person can comprehend the world as portrayed by other mental systems. I add some suggestions in Appendix XII.

  Page 28, On what the “semantic interface” provides

  That is, the language system provides what its ‘design’ (a term that must be treated cautiously, as indicated on pages 50ff.) allows for with regard to the use of language in thought, understanding, and the like. To scotch a possible misunderstanding: does language (the language system) assert and declare? No. It offers the opportunity to do so; it provides the means for individuals to express an assertion – as we would say in the commonsense domain. This is, I think, the way in which one should understand Hinzen's (2007) view of a syntactic, internalist approach to truth.

  Chapter 3

  Page 30, Chomsky on representation, computational theories, and truth-indications

  When Chomsky calls his derivational theory of linguistic syntax a “computational theory” and offers by means of it a compositional account of linguistic meaning that is not only internalist, but focused on operations in the language faculty, it is obvious that he is not – unlike Rey and Fodor – adopting a computational theory of a re-presentationalist sort. This point is connected to his effort to avoid reference (and truth) in constructing an account of linguistically expressed meanings.

  That said, he does say of SEMs – of language's contributions at the “conceptual-intentional” interface – that they can be seen as offering “truth-indications.” The relevant quotation appears in Chomsky (1996), immediately after he points out that what he calls the “referentialist thesis” (that words like water refer by some kind of ‘natural’ relationship to a substance ‘out there’) must be rejected, because “language doesn’t work that way.” It does not, we have seen, because it is people who make reference to things; language does not ‘directly refer.’ Indeed, even if someone does use a word to refer, and succeeds for an audience in doing so, no referential relationship comes to be established in a way that makes it of interest to the empirical and naturalistic science of language. What he has to say about truth and truth conditions seems to parallel this:

  We cannot assume that statements (let alone sentences) have truth conditions. At most they can have something more complex: “truth-indications,” in some sense. The issue is not “open texture” or “family resemblance” in the Wittgensteinian sense. Nor does the conclusion lend any weight to the belief that semantics is “holistic” in the Quinean sense that semantic properties are assigned to the whole array of words, not to each individually. Each of these familiar pictures of the nature of meaning seems partially correct, but only partially. There is good evidence that words have intrinsic features of sound, form, and meaning; but also open texture, which allows their meanings to be extended and sharpened in certain ways; and also holistic properties that allow some mutual adjustment. The intrinsic properties suffice to establish certain formal relations among expressions, interpreted as rhyme, entailment, and in other ways by the performance systems associated with [the] language faculty. Among the intrinsic semantic relations that seem well established on empirical grounds are analytic connections between expressions, a subclass of no special significance for the study of natural language semantics, though perhaps of independent interest in the different context of the concerns of modern philosophy. Only perhaps, because it is not clear that human language has much to do with these, or that they capture what was of traditional interest. (1996: 52)

  In brief compass, Chomsky dismisses major features of contemporary philosophy of language as of little or no relevance to the science of language and the science of natural language meaning in particular. He also emphasizes that linguistic meaning (of a sort that can be investigated by the science of language) is intrinsic to expressions themselves and sufficient to establish certain ‘formal relations’ (in effect, those pointed to earlier in the discussion where “relational” expressions were mentioned, and again in the next section); the study of these relations is a “shadow” of syntax. As for open texture, here is his explanation (p.c. January 2009): “By saying that expressions – say ‘river’ – have open texture, I just mean that their intrinsic linguistic properties do not in themselves determine all circumstances of appropriate use to refer. Such complex human actions as referring can and do take into account all sorts of other features of human life. For example whether I’d call something a river or a creek depends on complex historical and cultural factors. If the Jordan river happened to be displaced, precisely as it is, in central Texas, people would call it a creek (often a dry creek, since most of its water has been diverted into the Israeli water carrier).” This does not deny that it is possible to introduce a technical sense of ‘refer’ for a version of model theory; see in this regard Chomsky (1986, 2000), and particularly the discussion of ‘relation R’ in the latter. But reference in this sense is stipulative. Similar points can be made about truth-in-a-model.

  Chapter 4

  Page 33, On human concepts, native and artifact, and theories of them

  Two remarks. First, as indicated in a comment of mine above, the distinction between truth conditions and truth-indications is an important one, reflecting as it does the idea that the natures of the concepts (semantic information) that the language faculty makes available to “conceptual-intentional” systems at the SEM interface do not determine the ways in which these linguistically expressed concepts can – much less, should – be employed by persons, or even arguably by ‘other systems,’ given that the relations between language and them are likely not determinate. The linguistically expressed concepts do, however, ‘instruct’ in their distinctive ways those other systems on the other side of the interface, and by extension, they provide ‘indications’ for how they can be used by people. Note in this respect that English speakers use HOUSE differently from (say) HOME. The differences in the ways in which we use these concepts indicate something about the natures of the concepts themselves – thus, the kinds of ‘instruction’ that they give. Nevertheless, nothing like determination is relevant here. Nor, of course, should anyone be tempted by the idea that the natures of the concepts themselves are fixed by the ways in which they happen to be used – contra the popular “conceptual role” view of (linguistically expressed) concepts found explicitly in the work of Sellars and others, and implicitly among many more. Rather, they have the natures that they do because that is the way they developed/grew in that individual, assuming that the set of concepts that can develop, and the possible ways in which they can develop, are more or less fixed. Being fixed, they might be finite in number, but depending upon how they develop, the issue of how many of them there might be remains open.

 
