The Science of Language


by Noam Chomsky


  JM: To press a point of simplicity for a moment: you've remarkably shown that there's a very considerable degree of simplicity in the faculty itself – in what might be taken to be distinctively linguistic aspects of the faculty of language. Would you expect that kind of simplicity in whatever third factor contributions are going to be required to make sense of growth of language in a child?

  NC: To the extent that they're real, then yes – to the extent that they contribute to growth. So how does a child get to know the subjacency condition [which restricts movement of a constituent to crossing a single bounding node]? Well, to the extent that that follows from some principle of efficient computation, it'll just come about in the same way as cell division comes about in terms of spheres. It won't be because it's genetically determined, or because of experience; it's because that's the way the world works.

  JM: What do you say to someone who comes along and says that the cost of introducing so much simplicity into the faculty of language is having to in the long run deal with other factors outside of the faculty of language that contribute to the growth of language, and also consists, in part, at least, of pushing into another area whatever kinds of global considerations might be relevant to not only language itself, but its use?

  NC: I don't understand why that should be considered a cost; it's a benefit.

  JM: OK; for the linguist interested in producing a good theory, that's plausible.

  NC: In the first place, the question of cost and benefit doesn't arise; it's either true or it isn't. If it is true – to the extent that it's true – it's a source of gratification that carries the study of language to a higher level. Sooner or later, we expect it to be integrated with the whole of science – maybe in ways that haven't been envisioned. So maybe it'll be integrated with the study of insect navigation some day; if so, it's all to the good.

  JM: Inclusiveness: is it still around?[C]

  NC: Yes; it's a natural principle of economy, I think. Plainly, to the extent that language is a system in which the computation just involves rearrangement of what you've already got, it's simpler than if the system adds new things. If it adds new things, it's only specific to language. Therefore, it's more complex; therefore, you don't want it, unless you can prove that it's there. At least, the burden of proof is on assuming you need to add new things. So inclusiveness is basically the null hypothesis. It says language is just what the world determines, given the initial fact that you're going to have a recursive procedure. If you're going to have a recursive procedure, the best possible system would be one in which everything else follows from optimal computation – we're very far from showing that, but insofar as you can show that anything works that way, that's a success. What you're showing here is a property of language that does not have to be attributed to genetic endowment. It's just like the discovery that polyhedra are the construction materials. That means you don't have to look for the genetic coding that tells you why animals such as bees are going to build nests in the form of polyhedra; it's just the way they're going to do it.

  JM: Inclusiveness used to depend to a large extent upon the lexicon as the source of the kind of ‘information’ to be taken into account in a computation; does the lexicon still have the important role that it used to have?

  NC: Unless there's something more primitive than the lexicon. The lexicon is a complicated notion; you're fudging lots of issues. What about compound nouns, and idioms, and what kinds of constructive procedures go on in developing the lexicon – the kind of thing that Kenneth Hale was playing with? So ‘lexicon’ is kind of a cover for a big mass of problems. But if there's one aspect of language that is unavoidable, it's that in any language, there's some assembly of the possible properties of the language – features, which just means linguistic properties. So there's some process of assembly of the features and, then, no more access to the features, except for what has already been assembled. That seems like an overwhelmingly and massively supported property of language, and an extremely natural one from the point of view of computation, or use. So you're going to have to have some kind of lexicon, but what it will be, what its internal structure will be, how morphology fits into it, how compounding fits in, where idioms come in – all of those problems are still sitting there.

  JM: Merge – the basic computational principle: how far down does it go?

  NC: Whatever the lexical atoms are, they have to be put together, and the easiest way for them to be put together is for some process to just form the object that consists of them. That's Merge. If you need more than that, then ok, there's more – and anything more will be specific to language.
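  The operation NC describes can be sketched as bare set formation. The following Python toy is purely illustrative (the function name and the example lexical items are invented for exposition, not drawn from the text): Merge takes two objects already constructed and forms the object consisting of exactly them, adding nothing new – which is also what the inclusiveness condition discussed above demands.

```python
# Illustrative sketch only: Merge as bare set formation.
# The function name and example items are invented for exposition.

def merge(x, y):
    """Form the unordered object {x, y} from two existing objects."""
    return frozenset([x, y])

# Lexical atoms are the only starting material.
dp = merge("the", "book")   # {the, book}
vp = merge("read", dp)      # {read, {the, book}}

assert dp == frozenset(["the", "book"])
assert vp == frozenset(["read", dp])
```

  Nothing beyond a rearrangement of the inputs is introduced at any step; anything more would, as NC says, be specific to language.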

  JM: So in principle, von Humboldt might have been right, that the lexicon is not this – I think his term was “completed, inert mass” . . .

  NC: . . . but something created . . .

  JM: . . . something created and put together. But if it's put together, is it put together on an occasion, or is there some sort of storage involved?

  NC: It's got to be storage. We can make up new words, but it's peripheral to the language [system's core computational operations].[C]

  As for Humboldt, in fact, I think that when he was talking about the energeia and the lexicon, I think he was actually referring to usage. In fact, almost all the time, when he talks about infinite use of finite means, he doesn't mean what we mean – infinite generation – he means use; so, it's part of your life.

  JM: But he did recognize that use depended rather heavily upon systems that underlie it, and that effectively supported and provided the opportunity for the use to . . .

  NC: . . . that's where it fades off into obscurity. I think now that the way that I and others have quoted him has been a bit misleading, in that it sounds as if he's a precursor of generative grammar, where perhaps instead he's really a precursor of the study of language use as being unbounded, creative, and so on – in a sense, coming straight out of the Cartesian tradition, because that's what Descartes was talking about. But the whole idea that you can somehow distinguish an internal competence that is already infinite from the use of it is a very hard notion to grasp. In fact, maybe the person who came closest to it that I've found is neither Humboldt nor Descartes, but [A.W.] Schlegel in those strange remarks that he made about poetry [see Chomsky, 1966/2002/2009]. But it was kind of groping around in an area that there was no way of understanding, because the whole idea of a recursive infinity just didn't exist.

  JM: But didn't Humboldt distinguish . . . he did have a distinction between what he called the Form of language and its character, and that seems to track something like a distinction between competence and use . . .

  NC: It's hard to know what he meant by it. When you read through it, you can see it was just groping through a maze that you can't make any sense of until you at least distinguish, somehow, competence from performance. And that requires having the notion of a recursive procedure and an internal capacity that is ‘there’ and already infinite, and can be used in all the sorts of ways he was talking about. Until you at least begin to make those distinctions, you can't do much except grope in the wilderness.

  JM: But that idea was around, as you've pointed out. John Mikhail pointed it out in Hume; it was around in the seventeenth and eighteenth centuries . . .

  NC: . . . something was around. What Hume says, and what John noticed, is that you have an infinite number of responsibilities and duties, so there has to be some procedure that determines them; there has to be some kind of system. But notice again that it's a system of usage – it determines usage. It's not that there's a class of duties characterized in a finite manner in your brain. It's true it has to be that; but that wasn't what he was talking about. You could say it's around in Euclid, in some sense. The idea of a finite axiom system sort of incorporates the idea; but it was never clearly articulated.

  JM: And that notion really only had its beginning with your work in the fifties – so far as anyone can tell, in any case?

  NC: Well, as it applies t
o language. But the idea was by then already sort of finished; you had Church's thesis, and concepts of algorithm and recursive procedure were already well understood before him. You could just then sort of apply it to biological systems, language being the obvious case.

  10 On the intellectual ailments of some scientists

  JM: You mentioned that some who would be scientists are too data-oriented – that they are unwilling to abstract and idealize in the ways needed in order to simplify and construct a science. Is the phenomenon that you've quite often remarked upon – that even distinguished chemists [and other scientists] early in the twentieth century wanted to deny that their theoretical work had anything more than an instrumental value – is this an aspect of what happens in studying human languages, that we wish to talk about what people say, and the circumstances in which they say it – their behavior – and not the principles and systems that underlie it and make it possible?

  NC: There's some connection there, but it seems to me a different issue. There was a strong Machian tradition in the sciences, which was that if you can't see it, it's not there – [that theoretical principles] are just some hypotheses you're making up to make your computations work better. This was true in physics too, as in chemistry, late into the 1920s. Poincaré, for example, dismissed molecules and said that the only reason we talk about them is that we know the game of billiards, but there's no basis for them – you can't see them, they're just a useful hypothesis for computing things. And this goes on into the 1920s – leading scientists were saying that Kekulé's structural chemistry or Bohr's atom were simply modes of computation. And their reason was an interesting one: you couldn't reduce [them] to physics. I've quoted Russell in 1929. Russell knew the sciences quite well, and he says that chemical laws cannot at present be reduced to physical laws – the assumption being that the normal course of science is to reduce to physical laws. But as long as they aren't [reduced], it's not real science. Well, we know what happened; they were never reduced. Physics underwent a radical change, and it was unified with a virtually unchanged chemistry. Well, at that point, it was recognized – well actually, it was never explicitly recognized, it was just tacitly understood – that the entire discussion of the last century or so was crazy; it was just sort of forgotten and nobody talked about it anymore. And the history was forgotten. But it's an interesting history.

  I've been trying (vainly) to convince philosophers of mind for years that their own discussions today are almost a repetition of what was going on in the natural sciences not that many years ago – up until the 1930s – and we should learn something from that.[C] And that is not the only case in the history of science; there are many such cases. The classic moment in the history of science is such a case – Newton. Newton himself considered his proposals [primarily, the inverse square law of gravitation] as absurd, because they could not be reduced to physics – namely, the mechanical philosophy [of Descartes], which [he and many others thought] was obviously true. So he regarded his proposals as an absurdity that no sensible person could accept. Yet we have to accept them, because they seem to be true. And that was extremely puzzling to him, and he spent the rest of his life trying to find some way out of it, and so did later scientists. But what he actually showed – and in retrospect, it's understood, while forgetting the history, unfortunately – he showed that the truth of the world is not reducible to what was called “physics,” and physics had to be abandoned and revised. That's the classic moment in the history of science, and it goes on and on like that. The quantum-theoretic interpretation of the chemical bond was another such development. Why should we expect the study of the mental aspects of the world to somehow break from the history of the sciences? Maybe it will, but there's no particular reason to expect it.

  JM: What about that odd phenomenon of behaviorism? Part of the motivation for it clearly had to do with the first of these factors you have been talking about: behaviorists offered to people in power some kind of legitimacy because they portrayed themselves – or wanted to portray themselves – as experts and scientists . . .

  NC: . . . and benign too. We're controlling people's behavior for their own good – kind of like Mill.

  JM: Precisely. But another part of the behaviorist rhetoric was of course their Machian effort to stick to the observable.

  NC: It's a strange view of science that is not held in the core of the natural sciences anymore, but once was – that science is the study of data. In fact, the whole concept [of behaviorism] is very interesting. In the 1950s, all of the fields of social science and psychology were behavioral science; [and] as soon as you see the word, [you know] something's wrong. Behavior is data – like meter-readings are data in physics. But physics isn't meter-reading science. I mean, you're looking at the data to see if you can find evidence, and evidence is a relational concept; it's evidence for something. So what you're looking for is evidence for some theory that will explain the data – and explain new data and give you some insight into what's happening, and so on. If you just keep to the data, you're not doing science, whatever else you're doing. Behavioral science is, in principle, keeping to the data; so you just know that there's something wrong with it – or should know. But it is based on a concept of science that was prevalent even in the core physical sciences for a long time. In the late nineteenth century, physics was regarded by physicists – leading physicists – as mostly a science of measurement and of correlations among measured quantities – pressure and the like – and general relations about them, a position that reached its sophisticated form in Mach.

  JM: What about recent forms of that, found in connectionism and the like?

  NC: They're manifestations of it, I think. Somehow, we've got to start from the simplest thing we understand – like a neural connection – and make up some story that will account for everything. It's like corpuscularian physics in the seventeenth century, which made similar assumptions. People like Boyle and Newton and others recognized, plausibly, that there must be some elementary building blocks of matter – corpuscles – and they must be like the bricks out of which you build buildings. So we'll assume that. And then they try to show how you can account for everything in terms of different arrangements of the corpuscles.

  Nowadays, Newton's concern for alchemy is regarded as some sort of aberration, but it was not; it was very rational. It's perfectly correct that if nature consists of simple building blocks, differently arranged, you should be able to turn lead into gold. You just have to figure out how to do it; nothing irrational about that. In fact, in some sense, he's right; there are elementary things – not what he was talking about – but, yes, something like that. And connectionism seems to me about at the level of corpuscularianism in physics. Do we have any reason to believe that by taking these few things that we think – probably falsely – that we understand, and building up a complex structure from them, we're going to find anything? Well, maybe, but it's highly unlikely. Furthermore, if you take a look at the core things they're looking at, like connections between neurons, they're far more complex. They're abstracting radically from the physical reality, and who knows if the abstractions are going in the right direction? But, like any other proposal, you evaluate it in terms of its theoretical achievements and empirical consequences. It happens to be quite easy in this case, because they're almost nonexistent.[C]

  JM: There's a new growth in connectionism that amounts to trying to pursue topics that they think of as evolutionary. I guess there was always that sort of connection between certain views of evolution and behaviorism . . .

  NC: Skinner, for example, was very explicit about it. He pointed out, and he was right, that the logic of radical behaviorism was about the same as the logic of a pure form of selectionism that no serious biologist could pay attention to, but which is [a form of] popular biology – selection takes any path. And parts of it get put in behaviorist terms: the right paths get reinforced and extended, and so on. It's like a sixth grade version of the theory of evolution. It can't possibly be right. But he was correct in pointing out that the logic of behaviorism is like that [of naïve adaptationism], as did Quine. They're both correct that they're similar, and both wrong – for the same reasons.

  11 The place of language in the mind

  JM: To get back to business . . . can we talk about the place of language in the mind?

  NC: OK.

  JM: It's not a peripheral system; you've mentioned that it has some of the characteristics of a central system. What do you mean by that?

 
