The Science of Language


by Noam Chomsky

Just recently I started reading the records of some of the conferences back in the sixties and seventies. The participants were mostly rising young biologists, a few neurophysiologists, some linguists, a few others. And these kinds of questions kept arising – someone would say, well, what are the specific properties of this system that make it unlike other systems? And all we could do was list a complicated set of principles which are so different [from each other] and so complex that there is no conceivable way that they could have evolved: it was just out of the question.

  Furthermore, beyond the comparative question, there is another question lurking, which is right at the edge for biology currently – it is the one that Kauffman is interested in. That question is, why do biological systems have these properties – why these properties, and not other properties? It was recognized to be a problem back around Darwin's time. Thomas Huxley recognized it – that there could be a lot of different kinds of life forms, including human ones, but maybe nature just somehow allows human types and some other types – maybe nature imposes constraints on possible life forms. This has remained a fringe issue in biology: it has to be true, but it's hard to study. [Alan] Turing (1992), for example, devoted a large part of his life to his work on morphogenesis. It is some of the main work he did – not just that on the nature of computation – and it was an effort to show that if you ever managed to understand anything really critical about biology, you'd belong to the chemistry or physics department, with some loose ends left over for the history department – that is, for selectional views of evolution. Even natural selection – this is perfectly well understood, it's obvious from the logic of it – even natural selection alone cannot do anything; it has to work within some kind of prescribed channel of physical and chemical possibilities, and that has to be a restrictive channel. You can't have any biological success unless only certain kinds of things can happen, and not others. Well, by now this is sort of understood for primitive things. Nobody thinks, for instance, that mitosis [the process of cellular DNA duplication, leading to division] produces spheres and not cubes because of natural selection; there are physical reasons for that. Or take, say, the use of polyhedra as construction materials – whether it's the shells of viruses, or bee honeycombs. The physical reasons for that are understood, so you don't need selectional reasons. The question is, how far does it go?

  The basic questions of what is specific to language really have to do with issues that go beyond those of explanatory adequacy [that is, with dealing with Plato's Problem, or explaining the poverty of the stimulus facts for language acquisition]. So if you could achieve explanatory adequacy – if you could say, “Here's Universal Grammar [UG], feed experience into it, and you get an I-language” – that's a start in the biology of language, but it's only a start.[C] The next step would be, well, why does UG have the properties that it has? That's the basic question. Well, one possibility is just one thing after another – a set of historical accidents, asteroids hitting the earth, or whatever. In that case, it's essentially unexplainable; it is not rooted in nature, but in accident and history. But there is another possibility, which is not unreasonable, given what we know about human evolution. It seems that the language system developed quite suddenly. If so, a long process of historical accident is ruled out, and we can begin to look for an explanation elsewhere – perhaps, as Turing thought, in chemistry or physics.

  The standard image in evolutionary biology – the reason why biologists think that finding something perfect doesn't make any sense – is that you're looking at things over a long period of evolutionary history. And there are, of course, lots of instances of what François Jacob calls “bricolage,” or tinkering; at any particular point, nature does the best it can with what is at hand. You get paths in evolution that get stuck up here and go on from there; they don't start over and go somewhere else. And so you do end up with what look like very complicated things that you might have done better if you had had a chance to engineer them from the start. That may be because we don't understand them. Maybe Turing was right; maybe they become this way because they have to. But at least it makes some sense to have that image if you have a long evolutionary development. On the other hand, if something happened pretty fast, it doesn't make any sense to take that image seriously.

  For a while, it did not seem as if the evolution of language could have happened very quickly. The only approach that seemed to make any sense of language was that UG [or the biological endowment we have that allows us to acquire a language] is a pretty intricate system with highly specific principles that have no analogue anywhere else in the world. And that leads to the end of any discussion of the central problems of the biology of language – what's specific to it, how did it get there? The reason for that was the tie between the theory – between the format for linguistic theory – and the problem of acquisition. Everyone's picture – mine too – was that UG gives something like a format for possible grammars and some sort of technique for choosing the best of them, given some data. But for that to work, the format has to be highly restrictive. You can't leave a lot of options open and, to make it highly restrictive, it seems as though it has to be highly articulated and very complex. So you're stuck with a highly articulated and highly specific theory of Universal Grammar, basically for acquisition reasons. Well, along comes the Principles and Parameters (P&P) approach; it took shape around the early eighties. It doesn't solve the problem [of saying what is distinctive to language and how it got there], but it eliminates the main conceptual barrier to solving it. The big point about the P&P approach is that it dissociates the format for grammar from acquisition. Acquisition according to this approach is just going to be a matter of fixing parameters – picking up what are (probably) lexical properties – and undoubtedly lexical properties are picked up from experience, so here was another way in which acquisition was dissociated from the format.

  Well, if all of that is dissociated from the principles part of UG, then there is no longer any conceptual reason why they have to be extremely intricate and specific. So you can begin to raise the question, well, have we just been wrong about their complexity and high level of articulation? Can we show that they really are simple? That's where the Minimalist Program begins. We can ask the question that was always lurking but that we could not handle, because of the need to solve the acquisition problem. With the dissociation of acquisition from the structure of language – primarily through the choice of parameters – we can at least address these questions. After the early 1980s, I started just about every class I taught by saying, “Let's see if language is perfect.” We'd try to see if it was perfect, and it didn't work; we'd end up with another kind of complexity. And in fact [pursuing that issue] didn't get very far until about the early 1990s, and then, at that point, things started to come together. We began to see how you could take the latest [theoretical understanding of the] technology and develop a fundamental explanation of it, and so on. One of the things – oddly enough – that was the last to be noticed, around 2000, was that displacement [movement] is necessary. That looked like the biggest problem – why displacement? The right answer – that it's just internal Merge – strikes you in the face once you look at it in the right way.

  JM: Didn't the story used to be that it was there to meet interface conditions – constraints on the core language system that are imposed by the systems with which language must ‘communicate’?

  NC: Well, it turns out that it does meet interface conditions; but that's there anyhow. There have to be interface conditions; [the question we could now answer was] the biggest problem – why use displacement to meet them? Why not use indices, or something? Every system [has to] meet those conditions, but does it with different technology. Well now, thinking it through, it turns out that transformational grammar is the optimal method for meeting those conditions, because it's there for free.

  JM: . . . when thought of as internal and external Merge . . .

  NC: Yes, that comes for free, unless you stipulate that one of them doesn't happen.
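
  [An illustrative sketch, not part of the conversation: Merge can be modeled as the bare operation Merge(X, Y) = {X, Y}. The Python toy below is an editorial illustration only; the function names (merge, terms) and the sample lexical items are invented for this sketch. “External” Merge combines two separate objects; “internal” Merge re-merges an object already contained in the structure, which is all that displacement amounts to on this view.]

      # Toy model: syntactic objects are nested (frozen) sets.
      def merge(x, y):
          """Merge(X, Y) = {X, Y}; the same operation covers external and internal Merge."""
          return frozenset([x, y])

      def terms(so):
          """Return every term (subpart) of a syntactic object, including itself."""
          result = {so}
          if isinstance(so, frozenset):
              for part in so:
                  result |= terms(part)
          return result

      # External Merge: combine two separate objects.
      vp = merge("read", "what")       # {read, what}
      cp = merge("C", vp)              # {C, {read, what}}

      # Internal Merge: re-merge something already inside the structure.
      # "what" is a term of cp, so merging it again yields displacement:
      # the wh-phrase appears at the edge while its original position remains.
      assert "what" in terms(cp)
      question = merge("what", cp)     # {what, {C, {read, what}}}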

  JM: OK, and this helps make sense of why Merge – thus recursion in the form we employ it in language (and probably mathematics) – is available to human beings alone.[C] Is this all that is needed to make sense of what is distinctive about human language, then, that we have Merge? I can assume, on at least some grounds, that other species have conceptual capacities . . .

  NC: But see, that's questionable. On the sensory-motor [interface] side, it's probably true. There might be some adaptations for language, but not very much. Take, say, the bones of the middle ear. They happen to be beautifully designed for interpreting language, but apparently they got to the ear from the reptilian jaw by some mechanical process of skull expansion that happened, say, 60 million years ago. So that is something that just happened. The articulatory-motor apparatus is somewhat different from that of other primates, but most of the properties of the articulatory system are found elsewhere, and if monkeys or apes had the human capacity for language, they could have used whatever sensory-motor systems they have for externalization, much as native human signers do. Furthermore, it seems to have been available to hominids in our line for hundreds of thousands of years before it was used for language. So it doesn't seem as if there were any particular innovations there.

  On the conceptual side, it's totally different. Maybe we don't know the right things, but everything that is known about animal thought and animal minds is that the analogues to concepts – or whatever we attribute to them – do happen to have a reference-like relation to things. So there is something like a word-object relation. Every particular monkey call is associated with a particular internal state, such as “hungry,” or a particular external state, such as “There are leaves moving up there, so run away.”

  JM: As Descartes suggested.

  NC: Yes. That looks true of animal systems, so much so that the survey of animal communication by Randy Gallistel (1990) just gives it as a principle. Animal communication is based on the principle that internal symbols have a one-to-one relation to some external event or an internal state. But that is simply false for human language – totally. Our concepts are just not like that. Aristotle noticed it; but in the seventeenth century it became a vocation. Take, say, the chapter on persons – chapter 27 – that Locke added to An Essay Concerning Human Understanding. He realizes very well that a person is not an object. It's got something to do with psychic continuity. He goes into thought experiments: if two identical-looking people have the same thoughts, is there one person, or two people? And every concept you look at is like that. So they seem completely different from animal concepts.[C]

  In fact, we only have a superficial understanding of what they are. It was mainly in the seventeenth century that this was investigated. Hume later recognized that these are just mental constructions evoked somehow by external properties. And then the subject kind of tails off and there's very little that happens. By the nineteenth century, it gets absorbed into Fregean reference-style theories, and then on to modern philosophy of language and mind, which I think is just off the wall on this matter.

  . . . But to get back to your question, I think you're facing the fact that the human conceptual system looks as though it has nothing analogous in the animal world. The question arises as to where animal concepts came from, and there are ways to study that. But the origin of the human conceptual apparatus seems quite mysterious for now.

  JM: What about the idea that the capacity to engage in thought – that is, thought apart from the circumstances that might prompt or stimulate thoughts – might have come about as a result of the introduction of the language system too?

  NC: The only reason for doubting it is that it seems about the same among groups that separated about fifty thousand years ago. So unless there's some parallel cultural development – which is imaginable – it looks as if it was sitting there somehow. So if you ask a New Guinea native to tell you what a person is, for example, or a river . . . [you'll get an answer like the one you would give.] Furthermore, infants have it [thought]. That's the most striking aspect – that they didn't learn it [and yet its internal content is rich and intricate, and – as mentioned – beyond the reach of the Oxford English Dictionary].

  Take children's stories; they're based on these principles. I read my grandchildren stories. If they like a story, they want it read ten thousand times. One story that they like is about a donkey that somebody has turned into a rock. The rest of the story is about the little donkey trying to tell its parents that it's a baby donkey, although it's obviously a rock. Something or other happens at the end, and it's a baby donkey again. But every kid, no matter how young, knows that that rock is a donkey, that it's not a rock. It's a donkey because it's got psychic continuity, and so on. That can't be just developed from language, or from experience.

  JM: Well, what about something like distributed morphology? It might be plausible that at least some conceptual structure – say, the difference between a noun and a verb – is directly due to language as such. Is that plausible?

  NC: Depends on what you mean by it. Take the notion of a donkey again. It is a linguistic notion; it's a notion that enters into thought. So it's a lexical item and it's a concept. Are they different? Take, say, Jerry Fodor's notion of the language of thought. What do we know about the language of thought? All we know about it is that it's English. If it's somebody in East Africa who has thoughts, it's Swahili. We have no independent notion of what it is; in fact, we have no reason to believe that there's any difference between lexical items and concepts. It's true that other cultures will break things up a little differently, but the differences are pretty slight. The basic properties are just identical. When I give examples in class, like river, and run these odd thought experiments [concerning the identities of rivers – what a person is willing to call a river, or the same river – of the sort you find in my work], it doesn't matter much which language background anyone comes from; they all recognize it in the same way in fundamental respects. Every infant does. So, somehow, these things are there. They show up in language; whether they are ‘there’ independently of language, we have no way of knowing. We don't have any way of studying them – or very few ways, at least.

  We can study some things about conceptual development apart from language, but they have to do with other things, such as perception of motion, stability of objects, things like that. It's interesting, but pretty superficial as compared with whatever those concepts are. So the question of whether it came from language seems beyond our capacities for investigation; we can't understand infant thought very far beyond that.

  But then the question is, where did it come from? You can imagine how a genetic mutation might have given Merge, but how does it give our concept of psychic identity as the defining property of entities? Or many other such properties quite remote from experience.

  JM: I've sometimes speculated about whether or not lexical concepts might be in some way or another generative. It seems plausible on the face of it – it offers some ways of understanding it.

  NC: The ones that have been best studied are not the ones we have been talking about – the ones that are [sometimes] used [by us] to refer to the world, [such as WATER and RIVER,] but the relational ones, such as the temporal[ly relational] ones – stative versus active verbs[, for example] – or relational concepts, concepts involving motion, the analogies between space and time, and so on. There is a fair amount of interesting descriptive work [done on these]. But these are the parts of the semantic apparatus that are fairly closely syntactically related, so [in studying them] you're really studying a relational system that has something of a syntactic character.

  The point where it becomes an impasse is when you ask, how is any of this used to talk about the world – the traditional question of semantics. Just about everything that is done – let's suppose everything – in formal semantics or linguistic semantics or theory of aspect, and so on, is almost all internal [and syntactic in the broad sense]. It would work the same if there weren't any world. So you might as well put the brain in a vat, or whatever. And then the question comes along, well look, we use these to talk about the world; how do we do it? Here, I think, philosophers and linguists and others who are in the modern intellectual tradition are caught in a kind of trap, namely, the trap that assumes that there is a reference relation.[C]

  I've found it useful and have tried to convince others – without success – to think of it on an analogy with phonology. The same question arises. All the work in phonology is internal [to the mind/brain]. You do assume that narrow phonetics gives some kind of instructions to the articulatory and auditory system – or whatever system you're using for externalization. But that's outside of the faculty of language. It's so crazy that nobody suggests that there is a sound–symbol relation; nobody thinks that the symbol æ, let's say (“a” in cat), picks out some mind-external object. You could play the game that philosophers do; you could say that there's a four-dimensional construct of motions of molecules that is the phonetic value of æ. And then æ picks that out, and when I say æ (or perhaps cat) you understand it because it refers to the same four-dimensional construct. That's so insane that no one – well, almost no one, as you know – does it. What actually happens – this is well understood – is that you give instructions to, say, your articulatory apparatus, which converts them into motions of molecules in different ways in different circumstances, depending on whether you have a sore throat or not, or whether you're screaming, or whatever. And somebody else interprets it if they are close enough to you in their internal language and their conception of the world and understanding of circumstances, and so on; to that extent, they can interpret what you are saying. It's a more-or-less affair. Everyone assumes that that is the way the sound side of language works.

  So why shouldn't the meaning side of language work like that: no semantics at all – that is, no reference relation – just syntactic instructions to the conceptual apparatus which then acts? Now – once you're in the conceptual apparatus and action – you're in the domain of human action. And whatever the complexities of human action are, the apparatus – sort of – thinks about them in a certain way. And other people who are more or less like us or think of themselves in the same way, or put themselves in our shoes, get a passably good understanding of what we're trying to say. It doesn't seem that there's any more than that.[C]

 
