
The World Philosophy Made


by Scott Soames


  CHAPTER 7

  THE SCIENCE OF LANGUAGE

  The emergence of the scientific study of natural languages: Chomsky’s conception of linguistic theory and the role of syntax in connecting sound with meaning; origins in philosophical logic of the empirical science of meaning in natural language; early models of linguistically encoded information; the conception of meaning as truth conditions; the cognitive breakthrough: information as types of cognition; the current challenge: understanding how varying contextual information mixes with fixed linguistic meaning in communication; applications in legal interpretation.

  Nothing about human beings is more central to who we are and how we differ from other animals than our extraordinary command of language. It is language that allows us to communicate with those in the past, and in the future, as well as with those with whom we are now in contact. Nor is communication the whole story. Language is also the vehicle for our most complex and wide-ranging thoughts. Without it we would scarcely have any thoughts about things we have never encountered. We would know little history and have limited ability to project ourselves beyond the present. Our mental universe would be tiny.

  Human language is enormously complex, consisting of interacting subsystems governing (i) the production and perception of the finely discriminated sounds of spoken language, (ii) the processes of word-formation, (iii) the syntactic principles by which sequences of words are organized into sentences, (iv) the interpretation of sentences on the basis of our understanding of their parts, and (v) the dynamic flow of information carried by discourses consisting of many sentences. Each subsystem is the subject of an advancing subdiscipline of the emerging science of natural human language. The subdisciplines corresponding to (iv) and (v)—the study of the principles governing the interpretation of sentences (used in various contexts) and of those governing the flow of information in discourses made up of many sentences—are the youngest subdisciplines of linguistics, and those most closely related to philosophical developments in the last century.

  The conception of an empirical theory of a natural language (such as English) as an integrated theory encompassing all these subsystems grew out of the work of Noam Chomsky, who, between 1955 and 1965, laid down philosophical foundations of the emerging science of language in The Logical Structure of Linguistic Theory, Syntactic Structures, and Aspects of the Theory of Syntax.1 In these works he set out his syntactic theory of sentence structure, his semantic theory of how sentence structure relates to meaning, and his cognitive theory of internalized rule-following responsible for our linguistic competence.

  A syntactic theory of English—a.k.a. a grammar—was taken to be a set of formal rules generating all and only the strings of English words that are (well-formed) sentences, and breaking each sentence into a hierarchical structure of phrases capturing how the sentence is understood. Such a theory predicts which strings are, or would be, recognized and used as sentences by speakers, and which strings aren’t, or wouldn’t be, so used and recognized. Since the number of English sentences containing 20 words or fewer has been estimated to be roughly 10³⁰—compare, for example, the number, 3.15 × 10⁹, of seconds in a century—the task of producing a predictively correct grammar is far from trivial.2

  In The Logical Structure of Linguistic Theory and Syntactic Structures, Chomsky used two sets of rules: context-free phrase structure rules, which generated an initial set of word-sequences with hierarchical structures, and transformations, which mapped such structures onto other such structures. In Aspects of the Theory of Syntax, he added constraints on their operation. First, the phrase structure rules generated grammatical deep structures. Then, obligatory and optional transformations were applied—the former being those that must be applied to any hierarchical structure satisfying their application conditions, the latter being those that could, but need not, be applied to such structures. The end result was to be the generation of all and only the sentences of the language.
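
  To make the two kinds of rules concrete, here is a minimal sketch in Python, not Chomsky’s formalism: a handful of invented phrase structure rules and a tiny lexicon generate hierarchical deep structures, and a single invented optional transformation (auxiliary fronting) maps each structure onto a corresponding question structure.

```python
# Invented toy grammar, lexicon, and transformation: a sketch of the two rule
# types described above, not Chomsky's actual rules.
import itertools

PHRASE_STRUCTURE_RULES = {
    "S":  [["NP", "Aux", "VP"]],   # S  -> NP Aux VP
    "NP": [["Det", "N"]],          # NP -> Det N
    "VP": [["V", "NP"]],           # VP -> V NP
}

LEXICON = {
    "Det": ["the"],
    "N":   ["student", "book"],
    "Aux": ["will"],
    "V":   ["read"],
}

def expand(symbol):
    """Yield every hierarchical structure the phrase structure rules assign to symbol."""
    if symbol in LEXICON:                              # lexical category
        for word in LEXICON[symbol]:
            yield (symbol, word)
        return
    for rhs in PHRASE_STRUCTURE_RULES[symbol]:         # phrasal category
        for children in itertools.product(*[list(expand(s)) for s in rhs]):
            yield (symbol, *children)

def question_transformation(deep_structure):
    """Optional transformation: front the auxiliary to form a yes/no question."""
    s, np, aux, vp = deep_structure
    return (s, aux, np, vp)

def words(tree):
    """Read the word string off a hierarchical structure."""
    if len(tree) == 2 and isinstance(tree[1], str):
        return [tree[1]]
    return [w for child in tree[1:] for w in words(child)]

for deep in expand("S"):
    print(" ".join(words(deep)), "/", " ".join(words(question_transformation(deep))))
    # e.g. "the student will read the book / will the student read the book"
```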

  The key point for our purposes is that each sentence was associated with a surface structure, the hierarchical structure representing the words as spoken, plus a deep structure, representing the meaning of the sentence. The former was the input to the phonological component of the grammar, which precisely described the sequence of sounds making up utterances of the sentence. The latter was the input to the semantic component of the grammar, the job of which was to assign meanings to sentences. Hence, it was thought, a Chomskian generative grammar of a natural language would explain how produced and perceived sounds were connected to the meanings of sentences in speakers’ minds.

  Chomsky’s mentalistic conception of these theories was both inspiring and contentious. Positing a rich system of linguistic universals—commonalities in the grammars of all languages of human communities—he took linguistics to be a nonexperimental branch of cognitive psychology describing not only syntactic, semantic, and phonological rules existing in the minds of speakers, but also a rich innate system allowing children to learn the highly complex systems that linguists were discovering natural languages to be. This was inspiring because it seemed to be a new way to study one of the central aspects of human cognition. It was contentious because the ability to use a set of formal rules to produce outputs—sentences—that match or approximate those of ordinary speakers does not guarantee that the mechanisms used by speakers match or closely approximate those used by linguists. Although the issue remains contentious, there is little doubt that the formal and abstract study of language by today’s linguists is capable of informing, and being informed by, genuinely experimental cognitive psychology.3

  The Chomskian revolution in linguistics, which set the study of natural language on a new path, was the product of his deeply philosophical reconceptualization of the subject together with his technical and scientific prowess. This set the stage for more strictly philosophical contributions to the emerging science of language. As noted, an integrated theory of a natural language connects, via the syntax of a language, the sounds of spoken utterances with the meanings extracted from them by speakers. How theorists conceptualize this connection has evolved since Chomsky’s 1965 discussion in Aspects of the Theory of Syntax. Typically, it still involves an input—often called the logical form of the sentence—to the semantic component of a theory, and an output representing its meaning, which combines with contextual factors to generate the assertive and other communicative content of the utterance. However, this raised a problem. The tradition in linguistics did not provide useful conceptions of what meaning is, or how it is possible to study it. Thus, linguists looked to philosophy. What they found was a logic-based approach to meaning, to which linguists and philosophers have been jointly contributing for the past half century.

  The contemporary science of linguistic meaning, linguistic semantics, grew out of developments in logic starting with Frege’s invention of modern logic in the late nineteenth century, and continuing through Russell’s elaboration and application of that work in the early twentieth century. By the mid 1930s, Gödel, Tarski, Church, and Turing had established the scope and limits of the new logic, and its independence as a discipline. As we have seen, one aspect of any modern system of logic is a technique for interpreting its sentences. That technique almost immediately became a template for studying meanings of declarative sentences of natural languages. Although imperatives and interrogatives received (and continue to receive) less attention, it has usually been assumed that their interpretations can be made to parallel correct accounts of declaratives.

  The central interpretive idea is that language, like perception and thought generally, is representational. The things represented are whatever we are perceiving, thinking, or talking about. The ways they are represented are the properties our words, perceptions, or thoughts ascribe to them. Our visual experience represents things we see as having various characteristics—e.g., as being red, or round. Our nonlinguistic thoughts represent a greater range of things as having a wider variety of properties, while our linguistically expressed thoughts vastly expand our representational capacities. Whenever we represent anything as being any way, either the thing is the way it is represented to be, or it isn’t. If it is, the representation—the thought, sentence, or perceptual experience—is true or veridical.

  Perceptions and nonlinguistic thoughts are cognitions of a certain sort; linguistically encoded thoughts are a special kind of cognition, while sentences are cultural artifacts—cognitive tools created by communities. Because the same sequence of words or sounds could, in principle, mean different things in different communities, the meaning of a sentence in the language of a community is determined by the community’s conventions governing its use. In the simplest case, there is a convention stipulating that a certain name ‘N’ is used to refer to a particular man, John, another convention stipulating that ‘H’ is used to represent individuals as being hungry, and a third convention from which it follows that ‘N is H’ is used to represent John as being hungry. To understand such a sentence is to know the properties it represents things as having, and so what they must be like for the sentence to be true. To know the meaning of a constituent of a sentence—e.g., a name, a predicate, or a clause—is to know what it contributes to the meanings of sentences in which it occurs.

  But what are meanings? The meaning of our example ‘N is H’ represents John as being hungry, and so determines that the sentence is true if and only if John is hungry. In general, the meaning of a declarative sentence is a piece of information (or misinformation) about how things are. These pieces of information are called propositions. Since knowing the meaning of a sentence involves knowing the meanings of its parts, the proposition expressed by a use of a sentence must incorporate the meanings of its syntactic constituents, which, in our example, are the man John (which is the meaning of the name ‘N’) and the property being hungry (which is the meaning of the predicate ‘H’). Finally, propositions are the things we assert, believe, deny, or doubt, as well as being the objects of a host of related cognitive attitudes. They are transmitted from one agent to another in the communicative exchange of information.
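
  The picture can be sketched in miniature. In the toy code below (every name, predicate, and fact is stipulated for illustration), the conventions pair ‘N’ with John and ‘H’ with the property of being hungry; the sentence ‘N is H’ then expresses a structured proposition whose constituents are that man and that property, and the proposition is true just in case John has the property.

```python
# Toy rendering of the structured-proposition picture; every name, property,
# and fact below is a stipulated stand-in, not a real semantic theory.
from typing import Callable, NamedTuple

class Proposition(NamedTuple):
    """A structured proposition: an individual plus the property ascribed to it."""
    subject: object
    ascribed: Callable[[object], bool]

HUNGRY_PEOPLE = {"John"}                          # stipulated fact about the toy world
NAMES = {"N": "John"}                             # convention: 'N' refers to John
PREDICATES = {"H": lambda x: x in HUNGRY_PEOPLE}  # convention: 'H' ascribes being hungry

def proposition_expressed(sentence: str) -> Proposition:
    """'<name> is <predicate>' expresses the proposition that the name's referent
    has the property the predicate ascribes."""
    name, _, predicate = sentence.split()
    return Proposition(NAMES[name], PREDICATES[predicate])

def is_true(p: Proposition) -> bool:
    # True iff the subject is the way the proposition represents it as being.
    return p.ascribed(p.subject)

print(is_true(proposition_expressed("N is H")))   # True: John is (stipulated to be) hungry
```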

  To convert these informal ideas into a framework for studying language, one must (a) identify meanings of sub-sentential expressions, (b) articulate how they combine to form meanings of complex expressions, (c) show that the resulting propositions expressed by sentences represent things as being certain ways, and are true if and only if those things are the ways they are represented to be, and (d) explain the cognitive attitudes—belief, knowledge, and the like—that agents bear to propositions, keeping in mind that knowledge and belief, though central to language use, are not restricted to language-using agents.

  This last task—explaining how cognitive attitudes relate agents to propositions—highlights the magnitude of the challenge we face. The task is to explain and identify what thoughts are, to distinguish different kinds of thoughts (in humans and nonhumans alike), and to explain how thoughts are transmitted. We will never understand what human beings are, how we differ from other cognitive agents, or the relationship between mind and body, until we have met this challenge. We have only just begun to do so by developing real sciences of language, mind, and information.

  Although great progress has been made, the tasks (a)–(d) sketched above have not yet been definitively completed for a single natural human language. In part, this is due to the complexity of natural language. But it is also due to the conceptual unclarity of central linguistic notions, most notably that of a proposition, or piece of information. Though Frege and Russell had reasonable ideas about how to identify meanings of sub-sentential expressions, and how to make progress in calculating the truth conditions of sentences on the basis of the meanings and referents of their parts, they had trouble with propositions.

  Taking propositions to be abstract Platonic structures “grasped” or “entertained” by minds in a primitive and indefinable way, they couldn’t explain what made propositions representational, or what was involved in believing them, or bringing them before one’s mind.4 In retrospect, one might charitably view their abstract structures as theoreticians’ models or placeholders that would someday be traded for real things. Although that did eventually happen, Frege-Russell propositions weren’t seen as mere models for much of the twentieth century, nor did Frege and Russell take them to be such. Their flawed proposals about propositions were early attempts to identify the real things.5

  Their problem stemmed from an inversion of proper explanatory priorities. Instead of taking minds to be sources of representation—when they perceive, imagine, or think of things as being certain ways—and deriving representational propositions from them, Frege and the early Russell started from the other end. Taking purely abstract structures, assumed to be independently representational, they took minds to represent by passively perceiving those structures in the mind’s eye. Having started this way, they could only fail.6

  By 1910 Russell had changed his mind, grasping the basic truth that minds are the source of representation and that other things represent only by standing in the right relations to minds. Unfortunately, he didn’t see that this insight could, in fact, be used to construct a new conception of propositions, eliminating the intractable problems of his earlier view.7 Hence, he rejected the idea that any class of things could be what propositions had been purported to be—namely, meanings of sentences, bearers of truth and falsity, and things asserted, believed, known. Wittgenstein’s Tractatus Logico-Philosophicus, published in 1922, seemed to put the final nail in that coffin.

  Rejecting the Frege-Russell conception of propositions, Wittgenstein denied that any entities were sentence meanings. There were, of course, things that represented objects as being certain ways, and so were true when the objects were that way, and false otherwise. He even called them “propositions.” But he took them to be meaningful sentences themselves—or, perhaps better, uses of them. After this, propositions—thought of as complex nonlinguistic entities expressed by meaningful sentences—came, for decades, to be regarded as creatures of darkness. As was the case with Russell, however, this blanket rejection was ironic, since we can now see how close Wittgenstein was to laying the foundation for a cognitively realistic conception of propositions that has finally won adherents in the first decade and a half of the twenty-first century.8

  With propositions temporarily out of the way, the development of the scientific study of meaning was focused on the relationship between meaning and truth. Because meaningful declarative sentences represent things in the world as being certain ways, it was thought that we can study their meanings by studying what would make them true. This was done by constructing models of the world and checking to see which sentences were true in which models. The models were those descending from Tarski’s notion of truth in a model, discussed in chapter 6. Following his work, it had become commonplace to view an interpreted logical language as the result of adding an intended Tarskian model (interpretation) and theory of truth in a model to an uninterpreted logical calculus, thereby assigning truth conditions to every sentence. Using truth theories to endow uninterpreted sentences with truth conditions, and hence meaning, encouraged the idea that truth theories could also be used to describe the meanings of already meaningful English sentences, if we are clever enough to discern the logical scaffolding underlying those sentences. Thus began the attempt to build an empirical science of linguistic meaning in natural language by extending and applying the logical techniques of formal (or logical) semantics.9
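
  The following is a drastically simplified sketch of truth in a model, with the domain, interpretation, and sentence forms all invented for illustration (and with quantification reduced to its simplest monadic case, leaving out Tarski’s apparatus of variables and assignments).

```python
# A minimal sketch of Tarski-style "truth in a model" for a tiny invented
# logical language. The model supplies a domain of objects and an
# interpretation of the nonlogical vocabulary; the truth definition then
# fixes, for every sentence, the conditions under which it is true there.

MODEL = {
    "domain": {"Florence", "Pisa"},
    "names": {"f": "Florence", "p": "Pisa"},
    "predicates": {"Beautiful": {"Florence", "Pisa"}, "Capital": set()},
}

def true_in(sentence, model):
    """Recursive truth-in-a-model for atoms, negation, conjunction, and
    (simplified, monadic) universal quantification."""
    op = sentence[0]
    if op == "atom":                    # ("atom", "Beautiful", "f")
        _, pred, name = sentence
        return model["names"][name] in model["predicates"][pred]
    if op == "not":                     # ("not", S)
        return not true_in(sentence[1], model)
    if op == "and":                     # ("and", S1, S2)
        return true_in(sentence[1], model) and true_in(sentence[2], model)
    if op == "every":                   # ("every", "Beautiful"): everything is Beautiful
        _, pred = sentence
        return all(obj in model["predicates"][pred] for obj in model["domain"])
    raise ValueError(f"unknown sentence form: {sentence}")

print(true_in(("atom", "Beautiful", "f"), MODEL))                  # True
print(true_in(("and", ("atom", "Beautiful", "f"),
                      ("not", ("atom", "Capital", "p"))), MODEL))  # True
print(true_in(("every", "Beautiful"), MODEL))                      # True
```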

  By 1940, “classical logic” (descending from Frege) was beginning to inspire specialized extensions. One was modal logic, which introduced an operator it is necessarily true that—which when prefixed to a logically true sentence produces a truth. Since this operator was defined in terms of truth “at,” “in,” or “according to” model-like elements, logical models for modal calculi had to contain such elements, dubbed “possible world-states,” thought of as ways the world could be. This strengthened the idea that for a (declarative) sentence S to be meaningful is for S to represent the world as being a certain way, which is to impose conditions the world must satisfy for S to be true. Henceforth meaning was to be studied by using the syntactic structure of sentences plus the representational contents of their parts to derive the truth conditions of sentences. For example, a semantic theory for Italian is expected to derive the statement ‘Firenze è una bella città’ is true at a possible world-state w if and only if at w, Florence is a beautiful city—which is a technical way of stating necessary and sufficient conditions for the world to conform to the way the Italian sentence represents it to be.
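
  The Italian example can be rendered as a toy calculation, with the world-states and the facts assigned to them stipulated for illustration: the sentence’s truth conditions are given by a function from world-states to truth values, and the necessity operator checks truth at every world-state.

```python
# Stipulated world-states and facts, for illustration only: the sentence's
# truth conditions are a function from world-states to truth values, and
# "it is necessarily true that S" is true iff S is true at every world-state.

WORLD_STATES = {
    "w_actual": {"beautiful_cities": {"Florence", "Venice"}},
    "w_other":  {"beautiful_cities": {"Venice"}},
}

def firenze_e_una_bella_citta(world) -> bool:
    """Truth conditions of the Italian sentence: true at w iff, at w,
    Florence is a beautiful city."""
    return "Florence" in WORLD_STATES[world]["beautiful_cities"]

def necessarily(truth_conditions) -> bool:
    """'It is necessarily true that S': S is true at every world-state."""
    return all(truth_conditions(w) for w in WORLD_STATES)

print(firenze_e_una_bella_citta("w_actual"))   # True at the actual world-state
print(firenze_e_una_bella_citta("w_other"))    # False at an alternative world-state
print(necessarily(firenze_e_una_bella_citta))  # False: not true at every world-state
```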

  Since to learn these conditions is to learn something approximating the meaning of the sentence, one who did so would acquire a rudimentary communicative competence in Italian. If one learned a theory that derived a similar statement of truth conditions for every Italian sentence, one would have acquired a more extensive competence—perhaps, it was thought, enough to be counted as understanding Italian. By 1960, theorists reasoning in this way thought they might come to understand what meaning is and how information is linguistically encoded.

  Since then, philosophers and theoretical linguists have expanded the framework to cover large fragments of natural languages. Their research program started with the logical constructions recognized in classical logic, augmented by the operators it is necessarily true that, it could have been true that, and if it had been true that such-and-such, then it would have been true that so-and-so, plus similar operators involving time and tense. Gradually, more natural-language constructions, including comparatives, adverbial modifiers, adverbs of quantification (‘usually’, ‘always’), intensional transitive verbs (like ‘worship’ and ‘look for’), indexicals (like ‘I’, ‘now’, ‘you’, and ‘today’), demonstrative words and phrases (like ‘these’, ‘those’, and ‘that F’), and propositional attitude verbs (such as ‘believe’, ‘expect’, and ‘know’) were added to the language fragments under investigation. At each stage, a language fragment for which we already had a truth theory was expanded to include more features found in natural language. As the research program advanced, the fragments of which we had a good truth-theoretic grasp became more fully natural language–like. Although one may doubt that all aspects of natural language can be squeezed into this logic-based paradigm, the prospects of extending the results so far achieved justify optimism that we still have more to learn from pursuing this strategy.
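
  As one illustration of how such additions work, indexicals like ‘I’ and ‘now’ are standardly handled (following David Kaplan) by letting a sentence’s fixed meaning map a context of use (supplying a speaker, a time, and a world-state) to a content that is then evaluated for truth at world-states. The sketch below stipulates all contexts and facts for illustration.

```python
# A Kaplan-style toy treatment of the indexical sentence 'I am hungry now':
# its fixed meaning (character) maps a context of use to a content, which is
# then evaluated at world-states. All contexts and facts are stipulated.
from dataclasses import dataclass

@dataclass
class Context:
    speaker: str
    time: str
    world: str

# Stipulated facts: who is hungry at which times, at which world-states.
HUNGRY_AT = {
    "w1": {("Ann", "noon")},   # at w1, Ann is hungry at noon
    "w2": set(),               # at w2, no one is hungry
}

def i_am_hungry_now(context: Context):
    """Character of 'I am hungry now': given a context, return the content,
    a function from world-states to truth values."""
    speaker, time = context.speaker, context.time
    return lambda world: (speaker, time) in HUNGRY_AT[world]

ctx = Context(speaker="Ann", time="noon", world="w1")
content = i_am_hungry_now(ctx)
print(content(ctx.world))   # True: at w1, Ann is hungry at noon
print(content("w2"))        # False: the same content, evaluated at another world-state
```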

 
