The Science of Language
Supplemental material from interview 20 January 2009
JM: I'll switch to what you called “semantic information” in a lecture in 2007 at MIT on the perfection of the language system and elsewhere. You mentioned that at the semantic interface (SEM) of the language faculty, you got two kinds of semantic information, one concerning argument structure that you assumed to be due to external Merge, and another kind of information concerning topic, scope, and new information – matters like these – that you assumed to be due to internal Merge.
NC: Well, pretty closely. There are arguments to the contrary, such as Norbert Hornstein's theory of control, which says that you pick up theta roles. So I don't want to suggest that it's a closed question by any means, but if you adopt a god-like point of view, you sort of expect that if you're going to have two different kinds of Merge, they should be doing different things. I don't have proof. But the data seem to suggest that it's pretty close to true, so close to true that it seems too much of an accident. The standard cases for argument structure are from external Merge, and the standard cases of discourse orientation and stuff like that are from internal Merge.
JM: It's a very different kind of information.
NC: It's very different, and if we knew enough about animal thought, I suspect that we would find that the external Merge parts may even be in some measure common to primates. You can probably find things like actor-action schema with monkeys. But they can't do very much with it; it's like some kind of reflection of things that they perceive. You see it in terms of Cudworth-style properties, Gestalt properties, causal relations; it's a way of perceiving.
JM: Events with n-adic properties – taking various numbers of arguments, and the like.
NC: Yes, that kind of thing. And that may just be what external Merge gives you. On the other hand, there's another kind of Merge around, and if it's used, it's going to be used for other properties. Descriptively, it breaks down pretty closely to basic thematic structure on the one hand, and discourse orientation, information structure, scopal properties, and so on, on the other.
JM: It looks like pragmatic information . . .
NC: After all, the interface is semantic-pragmatic.[C]
There is a lot of discussion these days of Dan Everett's work with a Brazilian language, Pirahã – it's described in the New Yorker, among other places. David Pesetsky has a long paper on it with a couple of other linguists [(Nevins, Pesetsky, Rodrigues 2007)], and according to them, it's just like other languages. It's gotten into the philosophical literature too. Some smart people – a very good English philosopher wrote a paper about it. It's embarrassingly bad. He argues that this shows that it undermines Universal Grammar, because it shows that language isn't based on recursion. Well, if Everett were right, it would show that Pirahã doesn't use the resources that Universal Grammar makes available. But that's as if you found a tribe of people somewhere who crawled instead of walking. They see other people crawl, so they crawl. It doesn't show that you can't walk. It doesn't show that you're not genetically programmed to walk [and do walk, if you get the relevant kind of input that triggers it and are not otherwise disabled]. What Everett claims probably isn't true anyway, but even if it were, that just means this language has limited lexical resources and is not using internal Merge. Well, maybe not: Chinese doesn't use it for question-formation. English doesn't use a lot of things; it doesn't use Baker's polysynthesis option. No language uses all the options that are available.
3 Representation and computation
JM: Continuing in the same vein, your understanding of computation seems to differ from the philosophically favored notion where it is understood as tied in with a representational theory of mind. Computation there is understood to be something like the operations of a problem-solving device that operates over symbols understood in traditional (not your) semantic terms, in terms of relationships of items inside the head that represent things outside in the world.
NC: The term “representation” is used in a kind of technical sense in the philosophical literature which I think basically comes back to the theory of ideas. You know there's something out there and the impression of it becomes an idea, and then there's a relation – so, say, in Jerry Fodor's representational theory of mind – there's a causal relation between the cat over there and the concept cat in your language of thought. And Kripke, Putnam, Burge have a picture roughly like that.
JM: Well, it's more than just causal – I mean, for Fodor, it really is a semantic relationship . . .
NC: Yes, but it is causal [in that something ‘out there’ causes the formation of an internal representation which is your ‘idea of’ what causes it]. I mean, that's how you get the connection. There is some causal relation, and then, yes, it sets up the semantic relation of reference. And there is a factual question as to whether any of that happens. Obviously there's some causal relation between what's outside in the world and what's in our head. But it does not follow that there's a symbol–object relationship, [something like the reverse of the causal one]. And the big problem with that approach is – what's the object? Well, here we're back to studying lexical concepts and it was pretty clear by the seventeenth and eighteenth centuries that there wasn't going to be a relation like that, even for the simplest concepts. We just individuate things in different ways.
Locke's discussion of personal identity is a famous example of how we just don't individuate things that way; [we, or rather, our minds, produce the concept PERSON]. That goes back to Aristotle and form and matter, but then it's very much extended in the seventeenth century; and then it kind of dropped. As far as I know, after Hume it virtually disappears from the literature. And now – these days – we're back to a kind of neo-scholastic picture of word–thing relations. That's why you have books called Word and Object [by W.V.O. Quine] and that sort of thing. But there's no reason to believe that that relation exists. So yes, the representational theories of mind are bound to a concept of representation that has historical origins but has no particular merits as far as I know.
JM: I asked in part because, when you read works of people like Georges Rey, he seems to assume that when Turing speaks of computation, he was essentially committed to something like a representational account.
NC: I don't see where that comes from – I don't see any evidence for that in Turing. That's the way Turing is interpreted by Rey, by Fodor, and by others. But I don't see any textual basis for that. In fact, I don't think Turing even thought about the problem. Nothing in what I've read, at least. You can add that if you like to Turing; but it's not there. Now Georges Rey in particular has carried out a very intensive search of the literature to find uses of the word ‘representation’ in my work and elsewhere, and consistently misinterprets them, in my opinion [see Rey's contribution and Chomsky's reply in Hornstein & Antony (2003)]. If you look at the literature on cognitive science and neurology and so on and so forth, people are constantly talking about internal representations. But they don't mean that there's a connection between what's inside and some mind-independent entity. The term “internal representation” just means that something's inside. And when you add this philosophical tradition to it, yes, you get funny conclusions – in fact, pointless ones. But if we learned anything from graduate school when we were reading the late Wittgenstein, it's that that's a traditional philosophical error. If you want to understand how a cognitive neuroscientist or a linguist is using the word representation, you've got to look at how they're using it, not add a philosophical tradition to it. [To return to an earlier point,] take phonetic representation – which is the standard, the traditional linguistic term from which all the others come. Nobody thinks that an element in a syllable in IPA [International Phonetic Alphabet] picks out a mind-independent entity in the world. If it's called a phonetic representation, that's just to say that there's something going on in the head.[C]
4 More on human concepts
JM: We had spoken earlier about the distinctiveness of human concepts, and I'd like to get a bit clearer about what that amounts to. I take it that, at least in part, it has to do with the fact that human beings, when they use their concepts – unlike many animals – do not in fact use them in circumstances in which there is some sort of direct application of the concept to immediate circumstances or situations.
NC: Well, as far as anyone knows – maybe we don't know enough about other animals – what has been described in the animal literature is that every action (local, or whatever) is connected by what Descartes would have called a machine to either an internal state or an external event that is triggering it. You can have just an internal state – so the animal emits a particular cry [or other form of behavior] ‘saying’ something like “It's me” or “I'm here,” or a threat: something like “Keep away from me,” or maybe a mating cry. [You find this] all the way down to insects. Or else there is a reaction to some sort of external event; you get a chicken that's looking up and sees something that we interpret as “There's a bird of prey” – even though no one knows what the chicken is doing. It appears that everything is like that, to the extent – as mentioned before – that Randy Gallistel (1990) in his review introduction to a volume on animal communication suggests that for every animal down to insects, whatever internal representation there is, it is one-to-one associated with an organism-independent external event, or internal event. That's plainly not true of human language. So if [what he claims] is in any way near to being true of animals, there is a very sharp divide there.
JM: That's a sharp divide with regard to what might be called the “use” or application of relevant types of concepts, but I take it that it's got to be more than that . . .
NC: Well, it's their natures. Whatever the nature of HOUSE, or LONDON, ARISTOTLE, or WATER is – whatever their internal representation is – it's just not connected to mind-independent external events, or to internal states. It's basically a version of Descartes's point, which seems accurate enough.
JM: OK, so it's not connected to the use of the concepts, nor is it connected . . .
NC: Or the thought. Is it something about their nature, or something about their use? Their use depends on their nature. We use HOUSE differently from how we use BOOK; that's because there's something different about HOUSE and BOOK. So I don't see how one can make a useful distinction . . .
JM: There's a very considerable mismatch, in any case, between whatever features human concepts have and whatever types of things and properties in the world that might or might not be ‘out there’ – even though we might use some of these concepts to apply to those things . . .
NC: Yes, in fact the relation seems to me to be in some respects similar to the sound side of language[, as I mentioned before]. There's an internal representation, æ, but there's no human-independent physical event that æ is associated with. It can come out in all sorts of ways . . .
JM: So for concepts it follows, I take it, that only a creature with a similar kind of mind can in fact comprehend what a human being is saying when he or she says something and expresses the concepts that that person has . . .
NC: So when you teach a dog commands, it's reacting to something, but not your concepts . . .
JM: OK, good. I'd like to question you then in a bit more detail about what might be thought of as relevant types of theories that one might explore with regard to concepts. Does it make sense to say that there are such things as atomic concepts? I'm not suggesting that they have to be atomic in the way that Jerry Fodor thinks they must be – because of course for him they're semantically defined over a class of identical properties . . .
NC: External . . .
JM: External properties, yes.
NC: I just don't see how that is going to work, because I don't see any way to individuate them mind-independently. But I don't see any alternative to assuming that there are atomic ones. Either they're all atomic, in which case there are atomic ones, or there is some way of combining them. I don't really have any idea of what an alternative would be. If they exist, there are atomic ones. It seems a point of logic.
JM: I wonder if the view that there must be atomic concepts doesn't have about the same status as something like Newton's assumption that there have to be corpuscles because that's just the way we think . . .
NC: That's correct . . . there have to be corpuscles. It's just that Newton had the wrong ones. Every form of physics assumes that there are some things that are elementary, even if it's strings. The things that the world is made up of, including our internal natures, our minds – either those things are composite, or they're not. If they're not composite, they're atomic. So there are corpuscles.
JM: Is there work in linguistics now being done that's at least getting closer to becoming clearer about what the nature of those atomic entities is?
NC: Yes, but the work that is being done – and it's interesting work – is almost entirely on relational concepts. There's a huge literature on telic verbs, etc. – on things that are related to syntax. How do events play a role, how about agents, states . . .? Davidsonian kind of stuff. But it's relational.
The concerns of philosophers working on philosophy of language and of linguists working on semantics are almost complementary. Nobody in linguistics works on the meaning of WATER, TREE, HOUSE, and so on; they work on LOAD, FILL, and BEGIN – mostly verbal concepts.
JM: The contributions of some philosophers working in formal semantics can be seen – as you've pointed out in other places – as a contribution to syntax.
NC: For example, Davidsonian-type work . . .
JM: Exactly . . .
NC: whatever one thinks of it, it is a contribution to the syntax of the meaning side of language. But contrary to the view of some Davidsonians and others, it's completely internal, so far as I can see. You can tie it to truth conditions, or rather truth-indications, of some kind; it enters into deciding whether statements are true. But so do a million other things.[C]
5 Reflections on the study of language
JM: You used to draw a distinction between the language faculty narrowly conceived and the language faculty more broadly conceived, where it might include some performance systems. Is that distinction understood in that way still plausible?
NC: We're assuming – it's not a certainty – but we're basically adopting the Aristotelian framework that there's sound and meaning and something connecting them. So just starting with that as a crude approximation, there is a sensory-motor system for externalization and there is a conceptual system that involves thought and action, and these are, at least in part, language-independent – internal, but language-independent. The broad faculty of language includes those and whatever interconnects them. And then the narrow faculty of language is whatever interconnects them. Whatever interconnects them is what we call syntax, ‘semantics’ [in the above sense, not the usual one], phonology, morphology . . ., and the assumption is that the faculty narrowly conceived yields the infinite variety of expressions that provide information which is used by the two interfaces. Beyond that, the sensory-motor system – which is the easier one to study, and probably the peripheral one (in fact, it's pretty much external to language) – does what it does. And when we look at the conceptual system, we're looking at human action, which is much too complicated a topic to study. You can try to pick pieces out of it in the way Galileo hoped to with inclined planes, and maybe we'll come up with something, with luck. But no matter what you do, that's still going to connect it with the way people refer to things, talk about the world, ask questions and – more or less in [John] Austin style – perform speech acts, which is going to be extremely hard to get anywhere with. If you want, it's pragmatics, as it's understood in the traditional framework [that distinguishes syntax, semantics, and pragmatics].1
All of these conceptual distinctions just last. Very interesting questions arise as to just where the boundaries are. As soon as you begin to get into the real way it works in detail, I think there's persuasive – never conclusive, but very persuasive – evidence that the connecting system really is based on some merge-like operation, so that it's compositional to the core. It's building up pieces and then transferring them over to the interfaces and interpreting. So everything is compositional, or cyclic in linguistic terms. Then what you would expect from a well-functioning system is that there are constraints on memory load, which means that when you send something over the interface, you process it and forget about it; you don't have to re-process it. Then you go on to the next stage, and you don't re-process that. Well, that seems to work pretty well and to give lots of good empirical results.
But there is a problem. The problem is that there are global properties. So, for example, on the sound side, prosodic properties are global. Whether the intonation of the sentence is going to rise or fall at the end depends on the complementizer with which it begins. So if it's going to be a question that begins with, say, “who” or “what,” that's going to determine a lot about the whole prosody of the sentence. And for this and other reasons it's a global property; it's not built up piece by piece. Similarly, on the semantic side, things like variable binding or Condition C of binding theory are plainly global. Well, what does that mean? One thing it may mean is that these systems – like, say, prosody and binding theory – which we have thought of as being narrow syntax, could be outside the language faculty entirely. We're not given the architecture in advance. And we know that, somehow, there's a homunculus out there who's using the entire sound and entire meaning – that's the way we think and talk. It could be that that point where all the information is going to be gathered, that is where the global properties apply. And some of these global properties are situation-related, like what you decide to do depends on what you know you're talking about, what background information you're using, etc. But that's available to the homunculus; it's not going to be in the language faculty. The language faculty is kind of like the digestive system, it grinds away and produces stuff that we use. So we don't really know what the boundaries are. But you might discover them. You might discover them in ways like these.[C]