The Science of Language


by Noam Chomsky


  Could it be after all that the difference between our concepts and those available to animals – especially apes, including chimps and bonobos – is entirely due to contributions of the language faculty? Paul Pietroski (2008) develops a version of this option, although not – I argue – one that addresses apparent differences in the natures of ‘atomic’ concepts. He suggests that the differences lie in the ‘adicity’ requirements of the language faculty at its semantic interface. The adicity of a concept is the number of arguments it takes: RUN seems to require one argument (John ran), so it has adicity −1 (it needs an argument with value +1 to ‘satisfy’ it); GIVE might seem to require three arguments, and if it does, it has adicity −3. Specifically, Pietroski adopts a variation of Donald Davidson's idea that the semantics of sentences should be expressed in terms of a conjunction of monadic predicates – that is, predicates with adicity of −1, and no other. In Pietroski's terms (avoiding all but the most primitive logical notation for the benefit of the general reader unfamiliar with it), John buttered the toast amounts to: there is an [event] e, BUTTERING (e) [read this as “e is a buttering”]; there is an x, AGENT (x), CALLED-JOHN (x); there is a y, THEME [patient] (y), TOAST (y). According to this account, “buttered,” which appears to have adicity −2 (to require two arguments), is coerced to have adicity −1 (to require a single argument), and “John,” which appears to have adicity +1 (to be an argument with a value +1 that ‘satisfies’ a predicate with adicity −1), is coerced to something like the form called John. (This is reasonable on independent grounds, such as cases where one says that there are several Johns at the party.) In effect, then, the name “John,” when placed in the language faculty's computational system, gets adicity −1 instead.
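The analysis just sketched can be written out in explicit logical form. The rendering below follows the passage's deliberately simplified monadic notation; note that standard neo-Davidsonian treatments instead relate each participant to the event with dyadic thematic predicates such as AGENT(e, x).

```latex
% "John buttered the toast" as a conjunction of monadic predicates,
% each of adicity -1, under existential closure
\exists e\, \exists x\, \exists y\; [\,
    \textsc{buttering}(e) \;\wedge\;
    \textsc{agent}(x) \wedge \textsc{called-john}(x) \;\wedge\;
    \textsc{theme}(y) \wedge \textsc{toast}(y) \,]
```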

  There are several advantages to this “neo-Davidsonian” approach. One is that it seems to coordinate with at least some of the aims of Chomsky's Minimalist Program and its view of linguistic computation/derivation. Another is that it offers a very appealing account of some entailments that follow from overall SEM structure (or perhaps the structure of some form of representation on the ‘other side’ of SEM, in one interpretational system or another): from John buttered the toast quickly, it follows that he buttered. But as noted, it does not seem to address the prima facie difference in the natures of concepts noted above. It is unlikely that the difference between our linguistically expressed BUTTER and some chimp's BUTTER-like concept (assuming that there is such) consists solely in a difference in adicity. On Pietroski's view, BUTTER in application or use – that is, for humans, as it appears at the semantic interface as understood by Pietroski – has adicity −1, for by hypothesis that is what it is assigned by the operations of the language faculty that lead up to it. Because apes lack language and the resources it provides, however, there is no reason to say this of the adicity-in-application of BUTTER for an ape, whatever that might be. The difference in question, then, appears to be due solely to the operations of the morphosyntactic system, which determines the adicity of a concept that – as in this case – is assigned the status of verb and given one argument place to suit Pietroski's view of interpretation and what it underwrites. Since the account relies essentially on the fact that we have language and apes do not, it does not speak to the issue of whether a chimp has what we have when we have the ‘atomic’ concept BUTTER.
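The adverbial entailment mentioned above falls out as simple conjunct-dropping under existential closure: the logical form of the modified sentence contains the unmodified form as a subconjunction. A sketch, again in the passage's simplified monadic notation (the abbreviation A is mine):

```latex
% Abbreviate the participant conjuncts of the earlier analysis as
%   A := AGENT(x) ^ CALLED-JOHN(x) ^ THEME(y) ^ TOAST(y).
% "John buttered the toast quickly" adds only a conjunct QUICK(e):
\exists e\,\exists x\,\exists y\,[\,\textsc{buttering}(e)
    \wedge A \wedge \textsc{quick}(e)\,]
\;\Rightarrow\;
\exists e\,\exists x\,\exists y\,[\,\textsc{buttering}(e) \wedge A\,]
% Dropping a conjunct is first-order valid, so the adverbially
% modified sentence entails the unmodified one: John buttered.
```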
Generally speaking, Pietroski's discussion of differences between animal and human concepts focuses on adicity alone, and does not really touch the issue of what a concept ‘is’ – of what its ‘intrinsic content’ or inner nature is, and how to capture this nature. It steers around the issue of what concepts are – an issue that might be investigated by looking at what they amount to in pre-computation ‘atomic’ form, where they might be described as a cluster of semantic features that as a package represent the ‘meaning’ contribution of a person's lexical item. It focuses instead on concepts as they appear (are constituted at? are ‘called upon’? are ‘fetched’?) at the language faculty's semantic interface. Because of this, it loses an opportunity to look for what counts as differences in concepts at the ‘atomic’ level – in the way a human's lexical conceptual store might differ from an ape's. And also because of this, it raises doubts, I believe, about whether Pietroski or anyone else is warranted in assuming that our concepts are in fact (adicity and other contributions of morphology and syntax aside) identical or even similar to those available to other primates. There is, of course, a difference between us and apes. That is not in question: they do not have the computational system of language – Merge and linguistic formal features in particular. However, that difference does not address the issue in question here.

  If looking to differences in use and to contributions of morphology and syntax does not speak to the matter, and the language faculty imposes no obvious processing-specific requirements on the intrinsic features of the concepts it can take into account, another place to look for a way to describe and explain the prima facie differences is a distinctively human conceptual acquisition device. Might humans have such a device, procedure, or system? Associative stories of the sort offered by empiricists over the ages (for a contemporary version, see Prinz 2002) are little help; they amount to an endorsement of a generalized learning procedure that neither speaks to poverty-of-the-stimulus observations (infants with complex concepts, among other facts) nor offers a specific proposal concerning a mechanism – crucial, if one is to offer a theory at all. Their stories about generalized learning procedures are not made precise, nor – where efforts of a sort are made – are they relevant. Pointing at connectionist learning stories does not help unless there is real reason to think that is the way human minds actually work, which infant concepts acquired with virtually no ‘input,’ among other things, deny. And so it appears that their explanations of human–animal differences (bigger, more complex brains, more powerful hypothesis formation and testing procedures, ‘scaled up’ operations, communal training procedures, etc.) are just forms of handwaving.

  What, however, about appealing to a concept-acquisition mechanism that depends on a procedure that there is good reason to think only humans have? Specifically, could there be a concept-acquisition device that employs Merge, or at least some version of it? This seems promising: on independent grounds, Merge is unique to humans. However, the suggestion faces barriers. For one thing, it challenges an assumption basic to Chomsky's minimalist reading of evolution; on that reading, our human concepts must be in place before Merge and language's computational system are introduced. If this seems to rule out an appeal to Merge, there is a possible variant: perhaps the concepts in place at the introduction of Merge are those shared to an extent with some other primates, and the introduction of Merge not only provided for the construction of new and distinctively human ones, but also allowed for modifications in existing ones. That again looks promising, but it has other problems. Merge in its usual external and internal forms introduces hierarchies (unless there is another explanation for them), movement, and the like. There is no obvious need for these in dealing with concepts themselves, grammatically complex concepts such as PERSUADE aside. Perhaps there is a need for a distinction between the core features of a concept and more peripheral ones. Perhaps, for example, PERSON and DONKEY will have something like PSYCHIC CONTINUITY among their core features, but need not have BIPEDAL or QUADRUPEDAL. However, that does not appear to be a difference in hierarchy. It might even be an artifact of the way(s) in which the word person is used in majority environments, which would be irrelevant to a Merge-based account. Pair Merge, on the other hand – or something like it that provides for a form of adjunction – could provide aid here.
Because it abandons hierarchical structure and movement/copying, it has promise, assuming it could operate over features and allow for something that looks rather like concatenation of features to produce distinctive clusters, perhaps expandable to allow for additional experience. However, it has problems too. For one thing, if it yields something like adjunction (e.g., the big bad ugly mean nasty . . . guy), it depends on a single-element ‘host’ (here, “guy”) to which adjoined elements are attached, and it is not at all clear what that single element could be: lexical phonological elements will not do, and if there are ‘central’ features, they must by hypothesis allow for complexity. For another, it is more descriptive than explanatory: it does not help make sense of how concepts seem to develop automatically in ways that are (for core features at least) uniform across the human population, yielding conceptual packages that appear to be virtually ‘designed’ for the uses to which they can be put. And finally, it is hard to see why a procedure like the one discussed would be unavailable to animals (which also appear to have innate concepts, however different they might be), so the appeal to the human-uniqueness of the combinatory procedure fails to make sense of why human concepts are unique. That suggests that looking to uniquely human conceptual-package acquisition mechanisms to make sense of why human concepts are different is the wrong strategy. Unless there is some naturalistically based combinatory procedure other than Merge that is demonstrably unique to humans – which at the moment does not look plausible – perhaps we should look elsewhere.

  Keep in mind that there is nothing obviously wrong with assuming that human concepts are complex and composed in some way; that assumption cannot, as indicated, be ruled out on Fodor's grounds. It is also independently plausible because (ignoring for good reasons Fodor's (1998, 2008) very speculative and externalist-driven accounts of how ‘atomic’ concepts could be acquired) composition offers the only viable acquisition alternative. If so, let us assume some kind of ‘guided’ compositional clustering operation that, so far as we know, could be shared with animals. Then let us look elsewhere for an explanation of the uniqueness of human concepts. One plausible line of inquiry is looking to the features that make up human conceptual capacities (+/−ABSTRACT, POLITY, INSTITUTION, and so on) and asking whether at least some of them are likely to be duplicated in animals’ concepts. It is difficult to be confident when speaking of the conceptual capacities of animals, but there is, I think, reason to doubt that they have such features or that – if they do – they are capable of employing what they have. While humans might describe and think of a troop of Hamadryas baboons as having a single form of male-dominant ‘government’ in their social system, it is unlikely that the baboons themselves would think of their form of social organization at all, much less think of it as one of a range of possible forms of political/social organization – authoritarian patriarchal tribal hierarchy, cooperative democratic system, plutocracy, matriarchal statist-capitalist economy . . . Olive baboons are of their natures matriarchal; Hamadryas baboons are definitely not. And even if a troop of Hamadryas baboons should through loss of dominant males become matriarchal, it is not as if the remainder of the troop deliberated whether to become so, and chose to.
It appears that they have nothing like the capacity for abstractness afforded routinely in our notions of social institution, or for that matter classes of fruit that include a wide range of different species. Nor could Hamadryas or olive baboons or any other nonhuman primate think of their organization and the territory over which they have hegemony as we do. Where we can think of London as a territory and set of buildings or as an institution that could move to another region, nothing in ape behaviors or communicative efforts exhibits this ability to adopt either, or both, ways of thinking. Nor likely could any think of their territories in the following way: “London [the volume of air in its region] is polluted” or “London [its voting population] is voting Conservative this time.” Their concepts for their organization (assuming they have such) and for the territories over which they have hegemony just do not allow for this, nor would either be seen as a species of more general cases (POLITY?) that would invite speculation about whether they could re-organize in a different way, and plan to do so if they decide to. Further, and perhaps most important, if an ape should have or ever develop a concept analogous to our RIVER – say, RIVER-B (‘RIVER-for-a-variety-of-baboon’) – its concept's features would very likely be restricted to those that can readily be extracted from sensory input, and its use would be restricted to meeting current demands, not allowing speculation about what one can expect to find in particular forms of geographic terrain. In a similar vein, it is hard to imagine a chimp developing a homologue to human concepts such as JOE SIXPACK, SILLIEST IRISHMAN, or – for that matter – SILLY and IRISHMAN.
In addition, on at least some plausible views of the lexicon and the meaning-relevant information it contains, mental lexicons must provide in some manner what are called “functional features,” such as TNS for tense (thought of syntactically and structurally) and several others that play roles in the composition of sentential concepts. These, clearly, are not in an ape's repertoire, and they certainly count as ‘abstract.’

  The scope of animal concepts appears to be restricted in the ways that animal communication studies of the sort found in the work of Gallistel and others indicate. To emphasize points made above: their conceptual features do not permit them to refer to the class of fruits, to forms of social institution, to rivers as channels with liquids that flow (distinct from creeks, streams, rivulets, etc.), to creatures such as humans, donkeys, and even ghosts and spirits with psychic continuity, to doors as both solid objects and apertures, and so on. Rather, their concepts appear to involve ways of gathering and organizing sensory inputs, not abstract notions such as INSTITUTION, PSYCHIC CONTINUITY, and the like that have dominant roles in human concepts. No doubt they have something like a ‘theory of mind’ and can respond to the actions of conspecifics in ways that mirror their own action (and deceit, etc.) strategies and routines. However, there is no obvious reason to assume that they understand a conspecific in terms of its executing action plans (projects), deliberating what to do next, and the like. That requires symbol systems that provide for ways of organizing concepts in the ways humans can, given language. Do they think? Why not? We say computers do, and it is apparent that little but usage of the commonsense term “think” turns on that. But can they think in the articulated ways provided by boundless numbers of sententially organized concepts? No. Their lack of Merge indicates that.

  Another line of inquiry – suggested obliquely above – notes that human linguistically expressed conceptual packages allow for the operations of affixation in morphology, and for dissection when they appear at a semantic interface in a compositional sentential structure. The concept FRUIT expressed by the relevant morphological root gets different treatments when subjected to morphological variation: one gets fruity (which makes the associated concept abstract and adjectival), fruitful (dispositional notion), fruitiness (abstract again), fruit (verbal), refruit (produce fruit again), etc. So far as I know, no other creature has concepts that provide for the relevant kinds of morphosyntactic ‘fiddling.’

  As for dissection: when one encounters sentences such as Tom is a pig (where Tom is an 8-year-old child), the circumstance of use and the structure of the sentence that predicates being a pig of Tom require for interpretation that one focus on a (usually small) subset of the features commonly taken to be piggish, treating these as the ones ‘called for’ by a specific state of Tom. If he is wolfing (another metaphor) down (still another) pizza, GREEDY is likely to be one of the features dissected from the others and employed in this circumstance. Human languages and the concepts that they express provide for this kind of dissection, and the creativity in use routinely exhibited in metaphor depends on it. Perhaps animals have complexes of features for PIG. It is unlikely, though, that they have GREEDY (an abstract notion applied to more than pigs) or that their cognitive systems are equipped to easily dissect one part of their PIG concepts from others and apply that part to a situation, as is common with constructions that call for metaphorical readings. I assume that dissection applies only at an interpretational interface, SEM. Until that point, as indicated, a lexical item's semantic features can be thought of as carried along in a derivation as an atomic ‘package.’ Arguably, however, an animal's concepts remain functionally atomic all the way through whatever kinds of cognitive operations are performed on them. What is known about animal communication systems, and about the limited degree of flexibility in their behaviors, environments, and organization, suggests this.

  The last two lines of inquiry, and to a degree even the first, point to the fact that the human conceptual materials contained in mental lexicons have properties that might be contributed by, but are certainly exploited by, the compositional operations of a uniquely human language faculty. Were these properties of human conceptual materials ‘there’ before the introduction of Merge, were they instead invented anew only once the system came into place, or rather do they consist in ‘adaptations’ of prior conceptual materials to a compositional system? I do not attempt to answer that question: I know of no way to decide it one way or another, or to find evidence for a particular proposal. Clearly, however, the concepts humans express in their languages – or at least, many of them – are unique to humans.

  I should mention one endless class of concepts that plausibly does depend on Merge. Apes and other creatures lack recursion – at least, in the form found with language. If they lack that, then – as Chomsky suggests – they lack natural numbers. So NATURAL NUMBER and 53, 914, etc. are all concepts unavailable to other creatures.5 There is plenty of evidence of this. While many organisms have an approximate quantity system, and their approximations respect Weber's Law (as do very young children's), only humans with a partially developed language system have the capacity to enumerate (assuming that they employ it: for discussion, see p. 30). Only humans have the recursive capacity required to develop and employ a number system such as that found in the natural number sequence. Specifically, many organisms can reliably and in short order distinguish sets of objects with 30 members from those with 15, and with accuracy that decreases in accordance with Weber's Law, sets of 20 from 15, 18 from 15, and so on. However, only humans can reliably distinguish a set with 16 from one with 15 members. They must count in order to do so, employing recursion when they do. The work of Elizabeth Spelke, Marc Hauser, Susan Carey, Randy Gallistel, and some of their colleagues and students offers insight and resources on this and some related issues.
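Weber's Law, as invoked above, says that discriminability of two approximate quantities depends on their ratio, not their absolute difference. A brief statement of the standard formulation (the Weber fraction k varies by species and by age; the worked numbers repeat the passage's own examples):

```latex
% Weber's Law: numerosities n_1 < n_2 are reliably discriminable by
% the approximate quantity system only when their ratio exceeds a
% threshold fixed by the Weber fraction k:
\frac{n_2 - n_1}{n_1} > k
\qquad\text{equivalently}\qquad
\frac{n_2}{n_1} > 1 + k
% Hence 30 vs. 15 (ratio 2.0) is easy, 18 vs. 15 (1.2) is harder,
% and 16 vs. 15 (~1.07) falls below threshold for most such systems;
% only exact, recursion-based counting separates the last pair.
```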

 
