Continuing, what is the status of current terms such as ABSTRACT? One can think of these as provisional theoretical terms. Think of them as descriptive in the way indicated: they describe not things in the world, but ways of understanding. In addition, conceive of them as rather like dispositional terms: like the dispositional term “soluble” said of salt, they do not themselves explain how semantic features ‘work,’ but describe by way of a discernible ‘result’ – salt dissolves when placed in water, and ABSTRACT yields an understanding of an ‘object’ as abstract. When hearing “George is writing a book on hydrodynamics and it will break your bookshelf,” you ‘see’ George's book as at first abstract, and then concrete. (Presumably, your LI book ‘contains’ both ABSTRACT and CONCRETE.) Looked at in this way, terms such as ABSTRACT are provisional theoretical terms that, as part of a naturalistic theoretical effort, have something like the status of dispositional terms: in the vocabulary of an advanced science of semantic features, they could be replaced by terms that describe what a semantic feature is and – with the aid of the theory – explain how such features ‘work,’ how they and lexical items are acquired, and the like.
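A minimal sketch may help fix the idea. Purely for illustration – the feature labels and the selection step below are invented stand-ins, not anyone's worked-out theory – one can model an LI's feature package as a set that ‘contains’ both ABSTRACT and CONCRETE, with ‘other systems’ selecting between them clause by clause:

```python
# Illustrative only: the feature labels are invented stand-ins, not a theory.
BOOK = {"ARTIFACT", "INFO", "ABSTRACT", "CONCRETE"}  # one LI, one feature package

def understood_as(li_features, foregrounded):
    """Return the features a clause foregrounds; the package itself stays
    intact, supplying whichever aspect the occasion calls for."""
    return li_features & foregrounded

# "George is writing a book on hydrodynamics and it will break your bookshelf"
print(understood_as(BOOK, {"ABSTRACT"}))   # first clause: the book as abstract
print(understood_as(BOOK, {"CONCRETE"}))   # second clause: the book as concrete
```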
Color science offers an analogy of sorts. When we (as we say) “see a green patch” (or better, I suspect, to capture the adverbial character: “sense greenly patchly,” awful though that sounds), the greenness is actually contributed by our visual system/mind in response (usually) to photon impingements on our arrayed retinal cones. Our minds, through the operations of the visual system, configure shape-, location-, and color-sensation in colored ways, a specific green being one of those ways. Any specific green and its contribution to ‘color experience’ are captured in a complex of theoretical terms – hue, brightness, and saturation – where these are scalars in a theory of the operations of the visual system, and the theory shows how and why specific arrays of firing rates of retinal cones, subjected to various forms of calculation, yield triples with a specific set of HBS values for a specific ‘point’ in a retinotopic ‘visual space.’ A specific set of values of hue, brightness, and saturation describes and explains ‘how a person sees in a colored way’ on a specific occasion. The analogy to semantic features should be clear, but it is a limited one. For one thing, language, unlike vision, often – perhaps the great majority of the time – operates ‘offline’: it does not depend heavily, as vision does, on stimulation from other systems in the head or on ‘signals’ from the environment. Vision can go offline too: it seems to do so in imagination and dreams, but presumably in such cases this is due to internal stimulation, and the degree to which vision can go offline is nothing like what one can, and routinely does, find with language. Another disanalogy: language is what Chomsky calls a “knowledge” system; vision is not. LIs store semantic and phonological information that – especially in the former case – configures how we understand ourselves, our actions, and our world(s), not just a specific form of fully internal ‘sensory content.’
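By way of illustration only – the opponent-channel weights below are placeholders of my own devising, not the empirically fitted values a genuine theory of the visual system would supply – a toy calculation shows the general shape of the claim: cone firing rates in, an HBS triple for one retinotopic ‘point’ out:

```python
import math

def hbs_from_cones(l_rate, m_rate, s_rate):
    """Toy sketch: map three cone firing rates to a (hue, brightness,
    saturation) triple for one 'point' in visual space. The weights are
    illustrative placeholders, not empirically fitted values."""
    brightness = l_rate + m_rate                   # achromatic channel
    red_green = l_rate - m_rate                    # chromatic opponent channel
    blue_yellow = s_rate - (l_rate + m_rate) / 2   # chromatic opponent channel
    hue = math.degrees(math.atan2(blue_yellow, red_green)) % 360
    saturation = math.hypot(red_green, blue_yellow)
    return hue, brightness, saturation

print(hbs_from_cones(0.2, 0.7, 0.4))  # one occasion of 'seeing greenly'
```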
Notice that a view of this sort avoids any need to somehow link ‘words’ to semantic values, properties, elements in a language of thought, or anything else of the sort. Because of that, there is an immediate advantage: there is no need to add to a theory of linguistically expressed meanings an account of what meanings are, no need for a theory that explains how the link to elements in a LOT takes place, and no need to tie or “lock” (Fodor's term) elements in a LOT to properties ‘out there.’ Issues of acquisition are located where they belong: in an account of semantic features, where they ‘come from,’ and how they come to be assembled. Because they are located there, it becomes much easier to understand how linguistically expressed concepts could come to be so readily acquired and accessible to anyone with the right ‘equipment’ in their heads: assuming that the features are universal, and so too the assembly mechanism, the fact that a few clues usually suffice to yield a reasonable grasp of a concept that one has not needed before becomes much easier to deal with. Further, one gets what Chomsky takes to be an advantage: a parallel between the way(s) in which naturalistic theories of language deal with phonological and phonetic features and the ways in which those features ‘work.’ The parallel is useful in indicating to convinced externalists that dropping the myths of referentialism and representationalism does not make it impossible for humans to communicate.
There are serious issues to resolve on the way to a theory of lexical semantic features and how they are acquired and do their work. One is whether the concepts that play a role at SEM are “underspecified,” allowing for ‘filling out’ by other systems. Or are they perhaps over-specified, so that some pruning is required at SEM? Another, related issue is whether there will prove to be a need to assign some uniquely identifying feature to a specific concept's feature set in order to distinguish that set from the set for another concept. If such a feature were required, one could ask why it alone could not serve to individuate a concept. A third is whether, during the derivation of a sentential expression, one can allow for insertion (or deletion) of features. Chomsky seems to think so (2000: 175 f.): he notes that LIs such as who and nobody yield restricted quantifier constructions at SEM, and that other LIs such as chase and persuade seem to demand a form of lexical composition in which a causal action element (for persuade, “cause to intend”) and a resultative state (x intends . . .) are composed in the course of a derivation.4 Nevertheless, he remarks of “simple words” (2000: 175) that it is plausible to speak of their features simply being ‘transported’ intact to SEMs. Reading “simple words” as something like candidates for morphological lexical stems of open class words (not formal elements such as “of,” “to,” plus TNS . . .), the point seems to be that what might be called “morphological roots” such as house and real (each meaning the relevant cluster of semantic features, represented by HOUSE and REAL) are neither composed nor decomposed during the course of a derivation/sentential computation. I assume so in what follows. A view of sentential derivation that essentially builds this idea into a picture of the morphological and syntactic operations that ‘take place’ in the course of a derivation is available in Borer (2005). In part to keep lexical semantic roots together as “packages” of semantic features, I adopt her picture (see McGilvray 2010).
There is a good reason to do this, assuming it is not incompatible with the facts, so far as they are known. The reason lies in an argument that Fodor (1998) employed to reject the view that stereotypes could serve the purposes of meaning compositionality in the construction of sentences. For example, while most people have stereotypes for MALE (as used of humans) and for BUTLER, combining these stereotypes is very unlikely to yield a stereotypical male butler as the meaning of the two put together. More generally, Fodor argues against all ‘decomposed’ accounts of concepts, with the exception of necessary and sufficient conditions – but, if there were such things, they would serve their purposes, so far as he is concerned, only because they are assumed to determine their denotations, which for Fodor (as the “contents” of concepts, on his understanding of them) are ‘atomic.’ It is these denotations that are supposed to do the work of compositionality. There is, however, a much simpler alternative account that remains entirely within syntax and morphology (inside the core of the language faculty) and does not require moving to properties of things ‘out there.’ It consists merely of pointing to the fact that the conceptual packages associated with morphological stems remain intact until they reach SEM. That is all the ‘atomicity’ that is required. Taking this route places on morphology and syntax the burden of describing and explaining how and why a package comes to be nominalized, verbalized, or made into an adjective, why and how a specific nominal comes to be assigned a role as agent, and so on. The results of computation will be grammatical in some intuitive sense, although there is no guarantee that they will be readily interpretable. That is, however, a harmless result: we humans use what we can, and overproduction actually serves the interests of the creative aspect of language use.
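A sketch of the alternative, under the stated assumption that root packages travel intact: the derivation may wrap a package in categorial and thematic structure, but it never reaches inside. (The class names and feature labels below are my illustrative inventions, not Borer's or Chomsky's formalism.)

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Root:
    """A morphological root: its semantic-feature package, frozen so the
    derivation can neither compose nor decompose it."""
    name: str
    features: frozenset

@dataclass(frozen=True)
class Node:
    """Derivational structure wraps a root with a category and a thematic
    role -- structure built around the package, never inside it."""
    category: str   # e.g. 'N' or 'V', assigned in the derivation
    role: str       # e.g. 'agent' or 'theme', assigned in the derivation
    root: Root

HOUSE = Root("house", frozenset({"SHELTER", "ARTIFACT", "SUITABLE-MATERIALS"}))
sem_node = Node("N", "theme", HOUSE)             # nominalized in the derivation
assert sem_node.root.features == HOUSE.features  # 'atomicity': package intact
```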
Some views of causal verbs such as persuade and build might demand that what appears at one or more SEMs is a syntactically joined expression that includes (say) CAUSE and INTEND (for persuade). While this requires treating causal verbs as complex, it is a harmless result too. In fact, it has the advantage of making it apparent that some analytic truths are underwritten by syntax, without recourse to pragmatic or like considerations. Fodor and Lepore (2002 and elsewhere) have objections to this kind of move, but they are, I think, avoidable.
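To fix the idea – with crude tuples standing in for syntactic structure, an illustration of my own rather than any worked-out proposal – the analytic entailment can be read directly off the structure that syntax builds:

```python
# Crude stand-in: nested tuples for the structure a causal verb yields at SEM.
def persuade_sem(agent, patient, action):
    """PERSUADE composed in the derivation as CAUSE plus INTEND."""
    return ("CAUSE", agent, ("INTEND", patient, action))

sem = persuade_sem("John", "Mary", "go to the movies")
# If the CAUSE clause holds, the embedded INTEND state holds: the
# entailment is underwritten by structure, not by pragmatics.
entailed = sem[2]
print(entailed)   # ('INTEND', 'Mary', 'go to the movies')
```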
As for over- or underspecification: that will have to await a fuller account of the theory than can be offered now. There are, however, some considerations that argue in favor of overspecification. Take metaphorical interpretation, a very common phenomenon in natural language use. A plausible account of metaphor holds that in interpretation, one employs a form of pruning that applies one or a few semantic features of an LI to characterize something else. To take a simple example, consider the sentence John is being a pig said in a context in which 7-year-old John is at a table eating pizza. To characterize John as a pig is to take – likely in this case – the feature GREEDY from PIG and apply it to JOHN. Pruning of semantic features, where necessary, can perhaps be made the responsibility of ‘other systems’ on the other side of SEM and, ultimately, of the user and the interpreter (or perhaps just ‘the user,’ abandoning the work of other systems, for they would have to be far too context-sensitive to yield the kinds of results that a computational theory could possibly capture). For a procedure like this to make sense at all, one needs at SEM at least a rich set of features. Arguably, much the same holds for what are often called literal interpretations, where – again – there may be need for pruning.
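A sketch of the pruning idea, with the feature sets and the relevance judgment both invented for illustration (relevance, on the account above, being supplied from beyond SEM – by ‘other systems’ or by the user):

```python
# Illustrative feature sets; the labels are invented stand-ins.
PIG = {"ANIMAL", "GREEDY", "MESSY", "FOUR-LEGGED"}
JOHN = {"PERSON", "CHILD"}

def metaphorical_reading(target, source, relevant):
    """Prune the source concept to its discourse-relevant features and
    apply just those features to the target concept."""
    return target | (source & relevant)

# "John is being a pig," said of a 7-year-old eating pizza at the table:
print(metaphorical_reading(JOHN, PIG, {"GREEDY"}))
# {'PERSON', 'CHILD', 'GREEDY'} -- GREEDY taken from PIG, applied to JOHN
```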
There are several efforts to say what the features are, with varying success. A notable one is found in Pustejovsky (1995). There are, I think, some problems with the computational framework that Pustejovsky adopts (see McGilvray 2001), but the feature descriptors – thought of as contributions to a theory of concepts – are quite extensive, and many are illuminating. Nevertheless, it is likely that there is a long way to go – assuming, at least, that linguistically expressed concepts are specified sufficiently fully to distinguish any one from any other. As for explanatory adequacy – a solution to Plato's Problem for lexical concept acquisition, for example – one must seek that too. By way of a start, one can look to evidence gathered by Kathy Hirsh-Pasek and Roberta Golinkoff (1996), Lila Gleitman (e.g., Gleitman & Fisher 2005), and others concerning the course of development of concepts and lexical items in children, including – in the case of Hirsh-Pasek and Golinkoff – pre-linguistic infants. The issue of concept acquisition is of course distinct, in part, from the issue of lexical acquisition. For it is obvious that children have (or are able quickly to develop) at least some versions of PERSON and action concepts such as GIVE, EAT, and so on, plus TREE, WASH, and TRUCK, at a very early age: they appear to understand many things said in their presence before they are able to articulate, and they clearly have an extremely early capacity to discriminate at least some things from others. Perhaps they do not have BELIEF and KNOW before they can articulate such concepts in language; perhaps, in effect, one needs at least some capacity to articulate before being able to develop such concepts. These are open issues. At least it is clear, however, that children do develop some – remarkably many – concepts quickly, and that some of these seem already to have at least some of the characteristics of our (adult) conceptual schemes. Thus, as with the concept PERSON, the child's concept DONKEY must have a feature amounting to something like “psychic continuity.” As the responses of Chomsky's grandchildren to a story discussed in the main text reveal, the story donkey turned into a stone remains a donkey, and the same donkey, even though it now has the appearance of a stone. This indicates not only that the feature ‘has psychic continuity’ must be innate, but also that there must be some mental ‘mechanism’ that includes this feature in a child's concepts of humans, donkeys, and no doubt other creatures.
V.2 Are human concepts unique?
Having rejected empiricist views of concepts because they have nothing to recommend them, and having dismissed the externalist misinterpretations of concepts found in Fodor's view, let us turn to Chomsky's claim that human concepts are somehow unique, or different from those found with other organisms. Is he correct? Lacking a reasonably well-developed theory of concepts/MOPs, one must look elsewhere for reasons to hold this. First, is there a case for human concepts being unique? Intuition does suggest differences between human concepts and what we can reasonably attribute to animals. It seems unlikely that a chimp has the concepts WATER, RIVER, and HIGHWAY, for example, at least in the forms we do. Perhaps a chimp can be trained to respond to cases of water, rivers, and highways in some way taken to meet a criterion of success established by one experimenter or another, but it does not follow either that the chimp has what we have, or that it acquires these concepts in the way we do – if the chimp has them at all. Moreover, on other grounds, it is very unlikely that chimps have or can ever develop NUMBER, GOD, or even RIDGE or VALE. So there is a prima facie case for at least some distinctive human concepts.
That case is considerably improved by what Gallistel (1990) and several others interested in the topic of animal communication say about the natures of animal concepts – or at least about their use and, by implication, about their natures. Animal concepts are, so far as anyone can tell, referential in a way that human concepts (at least, those expressed in our natural languages, which is for our purposes all of them) are not. They seem to be ‘tied’ to dealing with an organism's environment. Assuming so, in what does the difference consist? Exploring this issue gives us an opportunity to think about how one might construct a good theory of MOPs or internal concepts.
One possibility, mentioned in the discussion, is that our linguistically expressed concepts differ from those available to other creatures in use or application. Perhaps, then, we have concepts identical to those available to chimps and bonobos, to the extent that there is overlap – for we need not suppose that we have exactly what they have, or vice versa. The difference, on this line of argument, lies rather in the fact that chimps and bonobos do not have language, and so lack at least some of the capacities we have because our language system can run ‘offline’ – essential for speculation and for wondering about what might happen if something having nothing to do with current circumstances were to take place. On this view, through the flexibility of use to which its resources can be put, language allows us to ‘entertain’ complex (sententially expressed) concepts out of context, whereas chimps and bonobos are constrained to apply concepts in context – and not, obviously, concepts that are structured as a result of linguistic composition. As the discussion indicates, Chomsky rejects this explanation. If there are differences, they lie in the natures of the concepts, not the uses to which they are put.
Our uses of linguistically expressed concepts do, of course, provide evidence for or against differences in concepts. For example, one reason for thinking that our concepts differ from those available to other creatures is that ours provide support for the multiple uses to which they are put, including metaphor – which seems to require the capacity to ‘take apart’ concepts and apply only some discourse-relevant parts. Another reason lies in animal concept use: if Gallistel and others are correct, it is very plausible that whatever an ape employs when it employs some analogue of our concept HOUSE is something directly mobilized by some feature, or group of features, that the ape's relevant sensory system(s) yield. The concept's features and character will by their nature be devoted to yielding quick recognition and performance. It will lack not only features added to a concept in the course of a linguistic computation/derivation (for apes do not have language), but also non-sensory abstract features such as SHELTER, ARTIFACT, and SUITABLE MATERIALS that – as Aristotle and many others have noted – regularly figure in our concept HOUSE. I return to that, for it is compelling. First, though, I need to address a potentially misleading form of animal–human conceptual comparison.
It appears that at least some of our concepts differ from those available to animals in their internal structures. An interesting case is presented by PERSUADE and other causal verbs, verbs whose associated concepts have plausibly been held to provide support for one variety of entailment, yielding analytic truths that are presumably unavailable to apes. If it is true that John persuades Mary to go to the movies, then at that point she intends to go; whether she eventually does go is another matter. It is not obvious, however, that this point gives us a persuasive way to describe the differences between an ape's concepts and ours. According to a plausible and much-discussed hypothesis (for a contrary position, see Fodor and Lepore 2002), entailments like this (assuming that “John persuades Mary” is taken to be true) depend on structure induced by the syntactic compositional operations of the language faculty. If that were the case, PERSUADE would turn out to amount to CAUSE to INTEND. And if so, our linguistically expressed PERSUADE would not be an ‘atomic’ concept, as are HOUSE, GROUSE, and RIDGE. Rather, it would have the character it does because of the morphosyntactic operations of the language faculty. This suggests that if there is to be a comparison of the ‘natures’ of animal concepts with ours, it is best to discount the contributions of morphology and syntax to concepts as they appear at language's semantic interface, where morphosyntax has contributed both internal and sentential structure. This point is illustrated, I think, in some of Paul Pietroski's recent work on matters of semantic structure and its interpretation.