The Science of Language

by Noam Chomsky


  In fact, we might discover that the whole idea of an interface is wrong. Take, say, the sound side, which is easier to think about because we have some information about it. It's universally assumed – this goes back to the beginning of the subject – that the internal language constructs some kind of narrow phonetic representation which is then interpreted by the sensory-motor system; it's said in different ways, but it always comes down to this. Well, it's not a logical necessity. It could be that in the course of generating the sound side of an utterance, you send pieces over to the sensory-motor system long before you send other pieces over. So there wouldn't be a phonetic interface. You can make up a system that works like that, and we don't know that language doesn't. It's just taken for granted that it doesn't because the simplest assumption is that there's one interface. But the fact that it's the first thing that comes to mind doesn't make it true. So it could be that our conception of the architecture is just a first guess. It is not necessarily wrong, but most first guesses are. Take a look at the history of the advanced sciences. No matter how well established they seemed, they almost always turned out to be wrong.

  JM: True, but their construction has often been guided by the intuition that simplicity of structure is crucial; and you get [at least partial] success when you follow that particular lead.

  NC: No one knows why, but that has been a guiding intuition. In fact, that's sort of the core of the Galilean conception of science. That's what guided me. And in biology, that's what guided people like Turing in his effort to place the study of biology in the physics and chemistry departments.

  JM: History, free action, and accident mess things up and are beyond the scope of natural science. Did you think when you began all this that linguistics might become more and more like a physical science?

  NC: I'm kind of torn. I mean, I did believe what I was taught [by Zellig Harris and my other instructors]; a nicely brought-up Jewish boy does. But it made less and less sense. By the late forties I was working kind of on my own and thinking maybe it – the idea that the study of language is a natural science – was a personal problem. It wasn't until the early 1950s that I began to think that the personal problem made some sense; and I began to talk about it. So it was kind of a difficult process to go through. And then, of course[, I had a long way to go]. For years, when I thought I was doing generative grammar, I was actually taking stuff over from traditional grammar.

  1 Chomsky's point concerning pragmatics seems to be that it is very unlikely to be a naturalistic science (at least, as it is currently understood), even though one might find systematic aspects of the ways in which people use language. See Appendix VI.

  6 Parameters, canalization, innateness, Universal Grammar

  JM: Still in the vein we've been talking about, I'd like to ask about linguistic development (language growth) in the individual. You've employed the concept of – or at least alluded to the concept of – canalization, C. H. Waddington's term from about fifty or sixty years ago, and suggested that the linguistic development of the child is like canalization. Can parameters be understood as a way of capturing canalization?

  NC: Canalization sounds like the right idea, but as far as I know, there are not a lot of empirical applications for it in biology.

  With regard to parameters, there are some basic questions that have to be answered. One question is: why isn't there only a single language? Why do languages vary at all? So suppose this mutation – the great leap forward – took place; why didn't it fix the language exactly? We don't know what the parameters are, but whatever they are, why is it these, and not those? So those questions have got to come up, but they are really at the edge of research. There's a conceivable answer in terms of optimal efficiency – efficiency of computation. That answer could be something like this, although no one's proposed it; it's really speculation. To the extent that biology yields a single language, that increases the genetic load: you have to have more genetic information to determine a single language than you do to allow for a variety of languages. So there's kind of a saving in having languages not be too minimal [too narrowly fixed in advance]. On the other hand, it makes acquisition much harder: it's easier to acquire a minimal [largely predetermined] language. And it could be that there's a mathematical solution to this problem of simultaneous optimization: how can you optimize these two conflicting factors? It would be a nice problem; but you can't formulate it.
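  [As a purely hypothetical illustration of the kind of trade-off gestured at here – not a formulation Chomsky or anyone else has proposed, since he notes the real problem can't yet be formulated – one can sketch two conflicting cost factors in a few lines of Python; every function and constant below is invented:]

```python
# Invented toy: the fewer parameters the genome leaves open, the more it
# must encode (genetic load); the more it leaves open, the more the learner
# must fix from experience (acquisition cost). The quadratic forms and
# constants are arbitrary, chosen only to give an interior optimum.

def genetic_load(open_params: int, total: int = 30) -> int:
    """Cost of genetically specifying whatever the grammar doesn't leave open."""
    return (total - open_params) ** 2

def acquisition_cost(open_params: int) -> int:
    """Cost to the learner of fixing each open parameter from data."""
    return 4 * open_params ** 2

def combined_cost(open_params: int) -> int:
    return genetic_load(open_params) + acquisition_cost(open_params)

# With these made-up costs the optimum falls strictly between "everything
# genetically fixed" (0 open parameters) and "everything open" (30).
best = min(range(31), key=combined_cost)
print(best, combined_cost(best))  # -> 6 720
```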

  And there are other speculations around; you've read Mark Baker's book (Atoms of Language), haven't you?

  JM: Yes, I have.

  NC: . . . well, there's this nice idea that parameters are there so we can deceive each other . . .

  JM: . . . and use that option in wartime.[C]

  NC: Of course, the understanding of what parameters are is too rudimentary to try to get a principled answer. But those questions are going to arise.

  Take phonology. It's generally assumed – plausibly, but not with any direct evidence – that the mapping from the narrow syntax to the semantic interface is uniform. There are lots of theories about it; but everyone's theory is that this is the way it works for every language – which is not unreasonable, since you have only very limited evidence for it. The narrow syntax looks uniform up to parameters. On the other hand, the mapping to the sound side varies all over the place. It is very complex; it doesn't seem to have any of the nice computational properties of the rest of the system. And the question is why. Well, again, there is a conceivable snowflake-style answer, namely, that whatever the phonology is, it's the optimal solution to a problem that came along somewhere in the evolution of language – how to externalize this internal system, and to externalize it through the sensory-motor apparatus. You had this internal system of thought that may have been there for thousands of years and somewhere along the line you externalize it; well, maybe the best way to do it is a mess. That would be the nicest answer, although it's a strange thought for me. And you can think of long-term questions like that all along the line.

  JM: Would optimization be required for the conceptual-intentional case?

  NC: That is really a puzzle. So, why do our concepts always have this invariant, curious property that they conform to our “cognoscitive powers,” to use Ralph Cudworth's terminology, not to the nature of the world? It's really strange. And it seems to be completely independent. There are no sensible origins, selectional advantages, nothing . . .

  JM: You've often emphasized the importance of poverty of stimulus facts with respect to knowledge of all aspects of language – structural, phonological-phonetic, and conceptual or meaning-related. You have pointed out that the facts demand explanation, and that the theory of Universal Grammar is an hypothesis, perhaps the only viable one, that explains these particular facts. Could you speak to what – given the current understanding of UG and of computation in it – the innateness of these domains amounts to?

  NC: First of all, I should say – I see this now clearly in retrospect – that it was a tactical mistake to bring up the issue of the poverty of the stimulus. The reason is that it makes it look as if it's only about language, but it's a universal property of growth. The fact that we have arms and legs is a poverty of stimulus property – nutrition didn't determine them. So any aspect of growth – physical, cognitive, whatever – is going to have poverty of stimulus issues. And, at least in the sciences – it's not God, or something – it's universally assumed that it has to do with genetic endowment. So presumably the case of language has to do with genetic endowment. That's Universal Grammar as it has often been conceived.

  Now actually, that's wrong, because it's not due to genetic endowment; it's due to genetic endowment plus laws of the way the world works. Nobody knows how it works, but it's taken for granted by serious biologists in the mainstream that some kinds of developmental constraints or architectural factors play a crucial role in growth, and also in evolution – in both forms of development. Some notion of evolution and growth, which in genetic cases aren't so far apart – they're going to play a role. So you really have two factors to consider – or rather, three factors. Experience is going to make some choices. Universal Grammar or genetic endowment will set constraints. And the developmental constraints – which are independent of language and may be independent of biology – they'll play some role in determining the course of growth. The problem is to sort out the consequences of those factors.

  Well, what's Universal Grammar? It's anybody's best theory about what language is at this point. I can make my own guesses. There's the question of lexical items – where they come from. That's a huge issue. Among the properties of lexical items, I suspect, are the parameters. So they're probably lexical, and probably in a small part of the lexicon. Apart from that, there's the construction of expressions. It looks more and more as if you can eliminate everything except the constraint of Merge. Then you go on to sharpen it. It's a fact – a clear fact – that the syntactic objects you construct have some information in them relevant to further computation. Well, optimally, that information would be found in an easily discoverable, single element, which would be, technically, its label. The labels are going to have to come out of the lexicon and be carried forward through the computation; and they should contain, optimally, all the information relevant for further computation. Well, that means for external Merge, it's going to involve selectional properties – so, where does this thing fit the next thing that comes along? For internal Merge, what it looks like – kind of what you would expect in that domain – is that it's the probe that finds the input to internal Merge and sticks it at the edge because you don't want to tamper with it, just rearrange. Well, that carries you pretty far, and it takes you off to features; what are they, where do they come from, and so on . . .[C]
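  [A minimal hypothetical sketch of the bare mechanics described here, assuming an invented Python representation of syntactic objects as sets; labels, probes, and features – the parts carrying the selectional information Chomsky mentions – are deliberately left out:]

```python
# Invented sketch: Merge as binary set formation over lexical items.
# External Merge combines two independent objects; internal Merge re-merges
# a subterm at the edge, leaving the original occurrence untouched
# (the "no tampering" point - movement as copying, not surgery).

from typing import Union

SynObj = Union[str, frozenset]

def merge(x: SynObj, y: SynObj) -> frozenset:
    """External Merge: combine two independent syntactic objects."""
    return frozenset({x, y})

def contains(obj: SynObj, target: SynObj) -> bool:
    """Does target occur anywhere inside obj?"""
    if obj == target:
        return True
    return isinstance(obj, frozenset) and any(contains(p, target) for p in obj)

def internal_merge(obj: frozenset, subterm: SynObj) -> frozenset:
    """Internal Merge: copy a subterm of obj to obj's edge."""
    assert contains(obj, subterm), "internal Merge re-merges material from within"
    return frozenset({subterm, obj})

# A crude picture of wh-movement: {what, {John, {ate, what}}}
vp = merge("ate", "what")                  # external Merge
clause = merge("John", vp)                 # external Merge
question = internal_merge(clause, "what")  # copy "what" to the edge
print(question)
```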

  JM: Noam, that's all the time for today. Thank you very much . . .

  JM: [Discussion continues] To pick up on an issue from the last session, we had been discussing innateness and I think we had come to an understanding to the effect that with lexical concepts we have no clear idea of what it means for them to be innate, but they are.

  NC: Part of the reason for that – for not knowing what it is for them to be innate – is that we don't have much idea what they are.

  JM: Yes. Going further down the list into areas where we have a bit more confidence that we know what is going on, we had come to understand that with regard to structural features the best way to understand innateness now is probably largely in terms of Merge, that is, a conception of language that focuses on the idea that most of the structure of language is somehow due to this introduction of Merge some fifty or sixty thousand years ago. Is that plausible?

  NC: Well, that is very plausible. How much of language that accounts for we don't really know – basically, finding that out is the Minimalist Program: how much is accounted for by this one innovation? On Merge itself, every theory agrees; if you have a system with infinitely many hierarchically organized expressions, you have Merge or something equivalent, at the very least, whatever the formulation is. We just take for granted that Merge came along somewhere, and you can more or less time it. Then the question is, given that and the external conditions that language has to meet – interface conditions and independent properties of organisms, or maybe beyond organisms (physical laws and so on and so forth) – how much of language is determined by them? That's a research question – a lot more so than I would have guessed ten years or so ago.

  JM: OK, continuing on the list, what about phonological and phonetic features and properties?

  NC: Well, there's no doubt that there's a specific array of them and you can't just make up any one. And they are plainly attuned to the sensory-motor apparatus[; they meet interface conditions without, of course, being ‘about’ them]. In fact, the same is true if you use a different modality like sign: what you do is attuned to the sensory-motor apparatus; it [sign] doesn't use phonetic features, but some counterpart. The same kinds of questions arise about them as about lexical concepts. It's just that they – the phonetic features – are easier to study. Not that it's easy. Here at MIT there has been half a century of serious work with high-tech equipment trying to figure out what they are, so it doesn't come easily; but at least it's a much more easily formulable problem. Also, on the sensory-motor side, you can imagine comparative evolutionary evidence. On the lexical-semantic side, you can't even think of any comparative evidence that works. But [on the sensory-motor side] other organisms have sensory-motor systems; they're very much like ours, it appears. So you might be able to trace origins. That's the usual hard problem with evolutionary theory. So far as we know, most of those are precursors of language. It's possible that there's adaptation of the sensory-motor system to language – that's likely – but just what it is is very hard to say.

  JM: Is there evolutionary evidence from other primates for sensory-motor systems, or primarily from other creatures?

  NC: Other primates? Well, they have tongues and ears, and so on, but it's . . .

  JM: Not the famous dropped larynx.

  NC: Well, they don't have the dropped larynx, but other organisms do – it's been found in deer, I think (Fitch & Reby 2001); but that doesn't seem critical. It's not very clear what difference it would make. You wouldn't be able to pronounce some sounds but you'd be able to pronounce others. But humans learn language and use it freely with highly defective sensory-motor systems, or no control of the sensory-motor system at all. That's one of the things that Eric Lenneberg found – discovered, actually – fifty years ago. [He discovered] that children with dysarthria [no control over their articulatory systems] – children who were thought not to have language by the people raising them, training them, etc. – did have it. He discovered this by standing behind them and saying something and noticing their reactions. There's more recent work. So you don't require it [an intact sensory-motor system] – in fact you don't even have to use it; sign language doesn't use it – so it's very hard to see that there could be any argument from sensory-motor evidence against developing language. But also the system seems to have been around for hundreds of thousands of years, as far as we can tell from fossil evidence. But there's no indication of anything like language use or the whole set of cognitive capacities that appear to have developed along with it.

  Think about it in plain evolutionary terms. Somewhere along the line a mutation took place that led to the rewiring of the brain to give you Merge. That everyone should accept, whether they like to say it or not. Well, the most parsimonious assumption is that that's all that happened. It's probably not that [alone]; but we have no evidence against it. So unless there's some evidence to the contrary, we sort of keep to that and see how far we can go. Well, mutations take place in an individual, not in a society, so what must have happened at some point is that that mutation took place in one person and then it would be transferred to offspring, or some offspring, at least. It was a pretty small breeding group. So it could be that if it gave a selectional advantage, they'd dominate the breeding group pretty soon, maybe in a few generations. This could all be done without any communication. It gives you the ability to think, to construct complex thoughts, to plan, to interpret . . . It's hard to imagine that that wouldn't yield a selectional advantage, so it could be that over some fairly short time, throughout this breeding group, the capacity to think was well embedded. The use of it to communicate could have come later. Furthermore, it looks peripheral: as far as we can see from studying language, it doesn't seem to affect the structure of language very much. And it does appear to be largely modality-independent. [No doubt] there are advantages to sound over sight – you can use it in the dark and it goes around corners – things like that. But it's quite possible that it's just a later development that came along, and it may not have had much effect on the structure of language.[C]
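  [To make the spread through a breeding group concrete, here is a toy calculation – invented numbers and a textbook one-locus selection model, nothing Chomsky cites, and it ignores drift (a real new mutation is often simply lost by chance):]

```python
# Deterministic haploid selection toy: one carrier of the new variant in a
# small breeding group; s is its relative reproductive advantage. The
# standard one-locus update p' = p(1+s) / (1 + p*s) multiplies the odds
# p/(1-p) by (1+s) each generation.

def generations_to_dominate(group_size: int = 150, s: float = 0.1,
                            threshold: float = 0.9) -> int:
    p = 1.0 / group_size   # one carrier to start
    gens = 0
    while p < threshold:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

# Even a 10% advantage takes on the order of 75 generations to reach 90%
# frequency from a single carrier - very fast on evolutionary timescales,
# though more than a literal handful of generations.
print(generations_to_dominate())  # -> 76
```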

  JM: Really modality-independent? It's clearly bimodal . . .

  NC: Well, at least bimodal. But we just don't know how many modalities you can use. We don't have a well-developed sense of smell, so we probably can't do much with that. You can do it with touch. I don't know if people can learn Braille as a first language. It's conceivable . . .

  No, actually, there is some evidence for this. Not a ton of it, but there have been studies – actually Carol [Chomsky's wife] was working on this at MIT with people most of whom had had meningitis around age 1 or 2, somewhere around there – and who had lost all modalities except touch. They were blind and deaf – they could speak; they had an articulatory apparatus – but they were blind and deaf. There's a method [the Tadoma method] of teaching them language by putting the hand on the face. So if you're one of those patients, you could put your hand on the face kind of like this – I think the thumb is on the vocal cords and the fingers are around the mouth – and they had an amazing capacity for language.

  This is a group at MIT that was working on sensory aids, but Carol was working on [the project] as a linguist to see how much they know. And she had to do pretty sophisticated tests on them – tag questions and things like that – to get to a point where they didn't [seem to] have the whole system [of language] in their heads. They get along fine – nobody would notice that there's a language defect. They have to have constant retraining too, though: they don't get any sensory feedback, so they lose their articulatory capacities, and then they have to be constantly retrained to do that. For example, their prize patient was a tool and die maker in Iowa somewhere. He got here by himself. He had a card which he would show people if he was lost and needed directions – he'd show [it to] them and [it would] say, “May I put my hand on your face,” explaining why. He could get around – got here all right, lived with his wife who was also blind and deaf. The only problem they had was locating each other. So they had a system of vibrators [installed] around the house that they could use to locate each other. But the point is that [he had] a capacity for language that you really had to test to find deficiencies – you wouldn't notice it in ordinary interaction.

 
