Syntactical and Semantical Categories

The basis for any theory of syntactical categories is the linguistic fact that in all natural languages there are strings of (one or more) words which are mutually interchangeable in all well-formed contexts salva beneformatione, that is, with well-formedness (grammaticality, syntactical correctness) being preserved in the interchange, and that there are innumerable other strings which do not stand in this relation. Any theory of semantical categories rests on a similar fact, with "well-formed" replaced by "meaningful" or "semantically correct," and beneformatione by significatione.

The relation between well formed and meaningful is, in general, complex, and neither term is simply reducible to the other. The English expression "Colorless green ideas sleep furiously" (to use an example given by Noam Chomsky) is, at least prima facie, syntactically well formed. Yet it is semantically meaningless, even though certain meanings can be assigned to it by special conventions or in special contexts. In contrast, many everyday utterances are syntactically ill formed (because of false starts, repetitions, and the like) but semantically perfectly meaningful, again at least prima facie.

Chomsky and his followers have recently stressed that for natural languages well-formedness and meaningfulness are mutually irreducible, but this view has not gone unchallenged. For constructed language systems, particularly those meant to serve as languages of science, it has generally been assumed that the notions of well-formedness and meaningfulness coincide.

Since the time of Aristotle it has been customary among philosophers to explain the linguistic facts about interchangeability by resort to ontological assumptions. Certain strings of words, it is said, are not well formed (or meaningful) because the entities denoted by the substrings (the meanings, denotata, etc., of these substrings) do not fit together. Edmund Husserl, one of the authors who dealt most explicitly with interchangeability, coined the term meaning categories (Bedeutungskategorien). He maintained that we determine whether or not two expressions belong to the same meaning category, or whether or not two meanings fit together, by "apodictic evidence." But his examples and terminology, for instance the use of the expression "adjectival matter" (adjektivische Materie), indicate that his apodictic evidence was nothing more than a sort of unsophisticated grammatical intuition, which he hypostatized as insights into the realm of meanings.

Husserl certainly deserves great credit for distinguishing between nonsense (Unsinn) and "countersense" (Widersinn), or, in modern terms, between strings that violate rules of formation and strings that are refutable by the rules of deduction. But he is also responsible for the initiation of a fateful tradition in the treatment of semantical (and syntactical) categories. This tradition assumes, sometimes without even noticing the problematic status of the assumption, more often with only the flimsiest justification, that if two strings are interchangeable in some one context salva beneformatione, they must be so in all contexts.

This entry will discuss the chief modern contributions to the theory of syntactical and semantical categories. It will first outline the achievements of the Polish logician Stanisław Leśniewski and his pupil Kazimierz Ajdukiewicz. It will then evaluate the contributions by Rudolf Carnap and, in particular, stress the added flexibility gained by his decision not to adhere to Leśniewski's "main principle." Finally, the synthesis by Yehoshua Bar-Hillel of the insights of Ajdukiewicz and Carnap into a theory of syntactical categories and the demonstration by Chomsky of the essential inadequacy of categorial grammars for a description of the syntactical structure of natural languages will be mentioned.

Leśniewski

In 1921, Leśniewski made an attempt to simplify Bertrand Russell's ramified theory of types but was not satisfied with the outcome. A type theory, however simplified and otherwise improved, remained for him an "inadequate palliative." He therefore began, the following year, to develop a theory of semantical categories that had greater appeal to his intuitive insights into the syntactical and semantical structure of "proper" language. For this purpose he turned from Russell to Husserl, of whose teachings he had learned from his teacher and Husserl's pupil, Kazimierz Twardowski, and, in particular, to Husserl's conception of meaning categories. As a prototype of a proper language, to which his theory of semantical categories was to be applied, Leśniewski constructed the canonical language L. Husserl's tacit assumption that if two strings are interchangeable in some one context salva beneformatione, they must be so in all contexts was elevated to the rank of the "main principle of semantical categories." Today Leśniewski's term semantical categories must be regarded as a misnomer, since the categorization was based on purely syntactical considerations. At the time, however, Leśniewski, like many other authors, believed that well-formedness and meaningfulness are completely coextensive for any proper language.

According to Leśniewski, each string, whether a single word or a whole phrase, of a proper language, and hence of his canonical language L, belongs to at most one category out of an infinitely extensible complex hierarchy. Strings are understood as tokens rather than as types. Moreover, two equiform tokens may well belong to different categories. This homonymy, however, never leads to ambiguity, since in any well-formed formula the context always uniquely determines the category of the particular token. In fact, Leśniewski exploited this homonymy for systematic analogy, with an effect similar to that obtained by Russell's exploitation of the typical ambiguity of strings (qua types).

Leśniewski excluded from the hierarchy only strings outside a sentential context, terms inside quantifiers binding variables, and parentheses and other punctuation signs. Defined constants were automatically assigned to categories by means of "introductory theses," as Leśniewski called those object-language sentences which, in his view, served to introduce new terms into an existing language. He gave rigid directives for the formation of introductory theses, assignment to a category being valid only after these theses were specified. The constructive relativity thus introduced was intended to take the place of the order restrictions by which Russell had sought to avoid the semantical antinomies.

In his canonical language Leśniewski worked with two basic categories, "sentences" and "nominals," and a potential infinity of functor categories. He admitted only indicative sentences; interrogatives, imperatives, hortatives, and the like were excluded. He explicitly rejected any categorial distinction between proper names and common nouns or between empty, uniquely denoting, and multiply denoting nominal phrases, although he later drew these distinctions on another basis. In the notation subsequently devised by Ajdukiewicz the category, say, of the sentential negation sign (that is, of a functor which, from a sentence as argument, forms a complex expression itself belonging to the category of sentences) is denoted by its "index" "s/s." The denominator of this "fraction" indicates the category of the argument and the numerator that of the resulting string. The index of such binary connectives as the conjunction sign is "s/ss." With "n" as the category index of nominals, "n/n" is assigned to "attributive adjectives" (but also to "nominal negators" such as "non-____"), "s/n" to "predicative intransitive verbs," "s/nn" to "predicative transitive verbs," "s/n//s/n" to certain kinds of "verbal adverbs," etc.
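The arithmetic of these indexes can be made concrete in a short modern sketch (Python; the names Functor and apply_functor and the chosen constants are this sketch's own, not part of Ajdukiewicz's notation): a category is either basic or a functor from argument categories to a result category.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Functor:
        result: object        # the category of the resulting string
        args: tuple           # the categories of the arguments, in order

    S, N = "s", "n"
    NEG  = Functor(S, (S,))       # s/s      sentential negation
    CONJ = Functor(S, (S, S))     # s/ss     binary connective
    ADJ  = Functor(N, (N,))       # n/n      attributive adjective
    IV   = Functor(S, (N,))       # s/n      predicative intransitive verb
    TV   = Functor(S, (N, N))     # s/nn     predicative transitive verb
    ADV  = Functor(IV, (IV,))     # s/n//s/n verbal adverb

    def apply_functor(functor, arguments):
        """Return the result category if the arguments fit, else fail."""
        if isinstance(functor, Functor) and tuple(arguments) == functor.args:
            return functor.result
        raise TypeError("category mismatch")

    assert apply_functor(TV, [N, N]) == S    # transitive verb + two nominals: a sentence
    assert apply_functor(ADV, [IV]) == IV    # adverb + intransitive verb: again an s/n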

Ajdukiewicz

With the help of this notation Ajdukiewicz was able to formulate, in 1935, an algorithm for the determination of the syntactical structure of any given string in certain languages and, in particular, of its "syntactical connexity," that is, its well-formedness. These languages had to embody, among other conditions, the Polish notation, in which functors always precede their arguments (thereby freeing parentheses from their customary duty as scope signals and making them available for other duties), and had to be "monotectonic," in H. B. Curry's later terminology, that is, to allow just one structure for each well-formed formula. These conditions of course excluded the natural languages from coming under Ajdukiewicz's algorithm.

To illustrate, let

    Afagbc

be a string in a given language fulfilling the above conditions. Let "n" be the index of "a," "b," and "c," let "s/n" be the index of "f," let "s/nn" be the index of "g," and let "s/ss" be the index of "A." The index string corresponding to the given string is, then,

    A      f      a    g      b    c
    s/ss   s/n    n    s/nn   n    n

Let the only rule of operation be the following: replace an index of the form α/β, immediately followed by the index string β, by α (where α and β are any index or string of indexes), always applying the rule as far "left" as possible. One then arrives in two steps at the "exponent" "s," thus verifying that the given string is a sentence with the "parsing" (A(fa)(gbc)). The whole derivation can be pictured as follows, each line recording one left-to-right pass of the rule:

    s/ss   s/n    n    s/nn   n    n
    s/ss   s           s
    s

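The procedure is mechanical enough to be sketched in a few lines of modern code (Python; representing each index as a pair of a result index and a tuple of argument indexes is this sketch's own choice, and the lexicon is the example above):

    # Each index is a pair (result, arguments); a basic index has no arguments.
    S = ("s", ())                      # sentence
    N = ("n", ())                      # nominal
    INDEX = {"A": (S, (S, S)),         # s/ss
             "f": (S, (N,)),           # s/n
             "g": (S, (N, N)),         # s/nn
             "a": N, "b": N, "c": N}

    def exponent(indexes):
        """Cancel, as far left as possible, a functor index followed by
        its argument indexes, until no further cancellation applies."""
        seq = list(indexes)
        reduced = True
        while reduced:
            reduced = False
            for i, (result, args) in enumerate(seq):
                k = len(args)
                if k and tuple(seq[i + 1 : i + 1 + k]) == args:
                    seq[i : i + 1 + k] = [result]
                    reduced = True
                    break
        return seq

    assert exponent(INDEX[w] for w in "Afagbc") == [S]   # exponent "s": a sentence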
In 1951, Bar-Hillel adapted Ajdukiewicz's notation to natural languages by taking into account the facts that in such languages arguments can stand on both sides of the functor, that each element, whether word, morpheme, or other appropriate atom in some linguistic scheme, can be assigned to more than one category, and that many well-formed expressions will turn out to be syntactically ambiguous or to have more than one structural description. These changes greatly increased the linguistic importance of the theory of syntactical categories and initiated the study of a new type of grammars, the so-called categorial grammars.
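In the bidirectional notation, a functor category records whether it expects its argument on the right or on the left (commonly written α/β and β\α in later work). The following is a minimal sketch under that assumption, with an invented three-word lexicon; a realistic categorial parser would also have to try alternative category assignments and reduction orders, since words may carry several categories and strings may be ambiguous:

    # "/" seeks its argument to the right, "\" to the left; "s", "n" are basic.
    LEXICON = {"John": "n",
               "Mary": "n",
               "loves": ("/", ("\\", "s", "n"), "n")}    # (n\s)/n

    def step(cats):
        """Perform one (leftmost) cancellation, or return None."""
        for i in range(len(cats) - 1):
            left, right = cats[i], cats[i + 1]
            if isinstance(left, tuple) and left[0] == "/" and left[2] == right:
                return cats[:i] + [left[1]] + cats[i + 2:]
            if isinstance(right, tuple) and right[0] == "\\" and right[2] == left:
                return cats[:i] + [right[1]] + cats[i + 2:]
        return None

    def is_sentence(words):
        cats = [LEXICON[w] for w in words]
        while len(cats) > 1:
            cats = step(cats)
            if cats is None:
                return False
        return cats == ["s"]

    assert is_sentence("John loves Mary".split())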

Ajdukiewicz never questioned the validity of Leśniewski's main principle. Neither did Alfred Tarski at first. It was taken for granted in the main body of Tarski's famous 1935 paper, "Der Wahrheitsbegriff in den formalisierten Sprachen" ("The Concept of Truth in Formalized Languages"), whose Polish original dates from 1931. The appendix to this paper voiced some doubts as to its intuitive appeal, but these doubts probably derived more from a growing preference for set-theoretical logics over type-theoretical ones than from straight linguistic considerations.

Carnap

Rudolf Carnap, in Der logische Aufbau der Welt (1928), had few misgivings about applying the simple theory of types to natural languages. Like Russell, he made a halfhearted attempt to provide a quasilinguistic justification for the type hierarchy, and his notion of "spheres" (Sphären) occupies a position approximately midway between Russell's types and Leśniewski's semantical categories. Carnap's explanation of certain philosophical pseudoproblems as based on a "confusion of spheres" (Sphärenvermengung) antedates Gilbert Ryle's discussion of "category mistakes" in his Concept of Mind (London, 1949) by more than twenty years. Both explanations rest on an uncritical implicit adherence to the "main principle," even though Leśniewski's formulation was not known to Carnap at the time he wrote his book, probably because Leśniewski's publications prior to 1929 were all in Russian or Polish. Conversely, neither Leśniewski nor Ajdukiewicz nor Tarski cites Carnap's book in his pertinent articles, and Ryle, in his book, mentions none of these publications.

Carnap was apparently the first logician to use the term syntactical categories, in 1932. At that time he believed that all logical problems could be treated adequately as syntactical problems, in the broad sense he gave the term.

He was also the first to free himself from the main principle. It eventually occurred to him that this principle embodied an arbitrary restriction on freedom of expression. Any attempt to impose this restriction on natural languages resulted in an intolerable and self-defeating proliferation of homonymies (beyond the tolerable "typical" ambiguities), similar to the outcome of the attempt by Russell and some of his followers to impose type-theoretical restrictions on natural languages. In some cases it sounded rather natural to invoke equivocation (which is, of course, a "nontypical" ambiguity), in the tradition of Aristotle, who used this notion to explain the deviancy of "The musical note and the knife are sharp." But in innumerable other cases there were no independent reasons for such invocation, and the induced artificialities exploded the whole structure. Very strong reasons would seem to be required, for instance, to assign the string "I am thinking of" to a different type or syntactical category each time the string following it belonged to a different type or category; for one may have after "I am thinking of" such varied strings as "you," "freedom," "the theory of syntactical categories," and "the world going to pieces."

In 1934, in Logische Syntax der Sprache, Carnap took implicit account of the possibility that two strings might be interchangeable in some contexts but not in all. He coined the term "related" for this relation and used "isogenous" for the relation of total interchangeability. Languages in which all strings are either pairwise isogenous or unrelated have, in this respect, a particularly simple structure. But there is no reason to assume that natural languages will exhibit this particularly simple structure. In fact, observing the main principle becomes a nuisance even for rich constructed language systems; as Carnap showed, the principle is not observed in some of the better-known calculi (perhaps contrary to the intention of their creators), with no real harm done.

Bar-Hillel and Chomsky

The relation "related" is clearly reflexive and symmetrical; hence, it is a similarity relation. The relation "isogenous" is, in addition, transitive; hence, it is an equivalence relation. Starting from these two relations, Bar-Hillel, in 1947, developed a theory of syntactical categories, illustrated by a series of model languages, all of which were, in a certain natural sense, sublanguages of English. In 1954, Chomsky developed a more powerful theory by taking into account, in addition, relations between the linguistic environments of the strings compared.

Recently, primarily owing to the insights of Chomsky and coming as a surprise to most workers in the field, it has become clear that interchangeability in context cannot by itself serve as the basic relation of an adequate grammar for natural languages. It may play this role for a number of constructed languages, and it certainly does so, for example, in the case of the standard propositional calculi. More exactly, it provides a satisfactory basis for what have become known as "phrase-structure languages," or what Curry calls "concatenative systems."

A phrase-structure language is a language (a set of sentences) determined by a phrase-structure grammar, the grammar being regarded as a device for generating or recursively enumerating a subset of the set of all strings over a given vocabulary. A phrase-structure grammar, rigorously defined, is an ordered quadruple ⟨V, T, P, S⟩, where V is a finite vocabulary, T (the terminal vocabulary) is a subset of V, P is a finite set of productions of the form X → x (where X is a string over V − T, the auxiliary vocabulary, and x is a string over V consisting of at least one word), and S (the initial string) is a distinguished element of V − T. Any terminal string (string over T) that can be obtained from S by a finite number of applications of the productions is a sentence. When the X's in all the productions consist of only one word, the grammar is called a context-free, or simple, phrase-structure grammar.
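The definition transcribes almost verbatim into code. A minimal sketch (the toy grammar is invented for illustration) enumerates the sentences generated by a context-free phrase-structure grammar by leftmost rewriting; since this particular grammar is not recursive, the enumeration terminates, whereas a general grammar would require a breadth-first enumeration:

    # The quadruple <V, T, P, S> of a toy context-free phrase-structure grammar.
    V = {"S", "NP", "VP", "N", "Vt", "John", "Mary", "loves"}
    T = {"John", "Mary", "loves"}                    # terminal vocabulary
    P = [("S", ["NP", "VP"]), ("NP", ["N"]),         # productions X -> x
         ("VP", ["Vt", "NP"]), ("N", ["John"]),
         ("N", ["Mary"]), ("Vt", ["loves"])]
    START = "S"

    def sentences(string=(START,)):
        """Recursively enumerate the terminal strings derivable from S."""
        string = list(string)
        auxiliary = [i for i, w in enumerate(string) if w in V - T]
        if not auxiliary:
            yield " ".join(string)
            return
        i = auxiliary[0]                             # rewrite the leftmost
        for lhs, rhs in P:
            if lhs == string[i]:
                yield from sentences(string[:i] + rhs + string[i + 1:])

    print(sorted(set(sentences())))
    # ['John loves John', 'John loves Mary', 'Mary loves John', 'Mary loves Mary']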

Interchangeability in context seems also to be adequate for describing the surface structure of all English sentences but not for describing their "deep structure." It is powerful enough to enable us to analyze correctly the sentence "John loves Mary" (S) as a concatenate of a noun phrase (NP), consisting in this particularly simple illustration of a single noun (N), and a verb phrase (VP), consisting of a transitive verb (Vt) and another noun phrase itself consisting of a noun. Two customary representations of this analysis are the "labeled bracketing,"
(S (NP (N John)) (VP (Vt loves) (NP (N Mary)))),
and the "inverted tree,"

(both representations are simplified for present purposes). Interchangeability in context is likewise powerful enough to provide "Mary is loved by John" with the correct structure,
(S (NP (N Mary)) (VP (PassVt is (Vt love)-ed by) (NP (N John)))).

However, these analyses will not exhibit the syntactically (and semantically) decisive fact that "Mary is loved by John" stands in a very specific syntactical relation to "John loves Mary," namely that the former is the passive of the latter. No grammar can be regarded as adequate that does not, in one way or another, account for this fact. Transformational grammars, originated by Zellig Harris and considerably refined by Chomsky and his associates, appear to be in a better position to describe the deep structures of these sentences and of innumerable others. Such grammars adequately account for the relation between the active and passive sentences and explain the fact that one intuitively feels "John" to be in some sense the subject of "Mary is loved by John," a feeling often expressed by saying that "John," though not the "grammatical" subject, is still the "logical" subject of the sentence. Transformational analysis shows that "John," though indeed not the subject in the surface structure of the given sentence, is the subject of another, underlying sentence of which the given sentence is a transform.
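The kernel of the idea can be caricatured as a structure-dependent mapping from tree to tree. The following sketch is purely illustrative, not Harris's or Chomsky's formalism, and it deliberately glosses over morphology (loves/loved):

    # A tree is a tuple (label, child, ...); words are plain strings.
    ACTIVE = ("S",
              ("NP", ("N", "John")),
              ("VP", ("Vt", "loves"), ("NP", ("N", "Mary"))))

    def passivize(tree):
        """Map (S NP1 (VP Vt NP2)) onto (S NP2 (VP (PassVt is Vt -ed by) NP1))."""
        _, np1, vp = tree
        _, vt, np2 = vp
        return ("S", np2, ("VP", ("PassVt", "is", vt, "-ed", "by"), np1))

    print(passivize(ACTIVE))
    # ('S', ('NP', ('N', 'Mary')),
    #  ('VP', ('PassVt', 'is', ('Vt', 'loves'), '-ed', 'by'),
    #   ('NP', ('N', 'John'))))

On this picture, "John" occupies the subject position of the underlying active tree, which is what the transformational account of the "logical" subject amounts to.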

It has recently been proved that categorial grammars and context-free phrase-structure grammars are equivalent, at least in the weak sense of generating the same languages qua sets of sentences over a given vocabulary, though perhaps not always assigning the same structure(s) to each sentence. These sets can also be generated (or accepted) by certain kinds of automata, the so-called push-down store transducers. The connection that this and other results establish between algebraic linguistics and automata theory should be of considerable importance for any future philosophy of language.

Developments in the 1960s

The early 1960s witnessed a revival of interest in the semantical categorization of expressions in natural languages, mostly under the impact of the fresh ideas of Chomsky and his associates. The whole field of theoretical semantics of natural languages is still very much in the dark, with innumerable methodological and substantive problems unsolved and sometimes hardly well enough formulated to allow for serious attempts at their solution. However, there is now a tendency to include indexes of semantical categories in the lexicon part of a complete description of such languages. These indexes, after application of appropriate rules, determine whether a given string is meaningful and, if it is, what its meaning is in some paraphrase of standardized form or, if it is not, how it deviates from perfect meaningfulness. In addition to semantical category indexes there are morphological, inflectional, and syntactical category indexes that determine whether the given string is morphologically and syntactically completely well formed, that present its syntactical structure in some standardized form, or that indicate the ways in which it deviates from full well-formedness.
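Schematically, and with invented feature names in the spirit of the selectional machinery then being proposed, a lexicon entry might carry such indexes and a rule might check them:

    # Each entry: (syntactical category, semantical features or restrictions).
    LEXICON = {"ideas":  ("N",  {"abstract"}),
               "horses": ("N",  {"animate"}),
               "sleep":  ("Vi", {"subject": "animate"})}

    def meaningful(subject, verb):
        """Does 'subject verb' satisfy the verb's selectional index?"""
        _, subject_features = LEXICON[subject]
        _, restrictions = LEXICON[verb]
        return restrictions["subject"] in subject_features

    print(meaningful("horses", "sleep"))   # True
    print(meaningful("ideas", "sleep"))    # False: a semantically deviant string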

Whether at least some semantical categories can, or perhaps must, be considered in some sense universal (language-independent) is a question that, like its syntactical counterpart, is now growing out of the speculative stage, with the first testable contributions beginning to appear. Investigations by Uriel Weinreich (1966) have cast serious doubts on the possibility of making a clear distinction between syntactical and semantical categories. Should these doubts be confirmed, the whole problem of the relation between these two types of categories will have to be reexamined.

See also Categories; Semantics, History of; Type Theory.

Bibliography

Ajdukiewicz, Kazimierz. "Die syntaktische Konnexität." Studia Philosophica 1 (1935): 1-27.

Bar-Hillel, Yehoshua. Language and Information. Reading, MA: Addison-Wesley, 1964.

Carnap, Rudolf. Der logische Aufbau der Welt. 2nd ed. Hamburg: Meiner, 1961.

Carnap, Rudolf. Logische Syntax der Sprache. Vienna: Springer, 1934. Translated by Amethe Smeaton as The Logical Syntax of Language. New York: Harcourt Brace, 1937.

Chomsky, Noam. Aspects of the Theory of Syntax. Cambridge, MA: MIT Press, 1965.

Chomsky, Noam. "Formal Properties of Grammars." In Handbook of Mathematical Psychology, edited by R. Duncan Luce, R. R. Bush, and E. Galanter. Vol. II. New York: Wiley, 1963. Ch. 12.

Curry, H. B. Foundations of Mathematical Logic. New York: McGraw-Hill, 1963.

Husserl, Edmund. Logische Untersuchungen. 2nd ed., Vol. II. Halle: Niemeyer, 1913.

Luschei, E. C. The Logical Systems of Leśniewski. Amsterdam: North-Holland, 1962.

Suszko, Roman. "Syntactic Structure and Semantic Reference." Studia Logica 8 (1958): 213-244, and 9 (1960): 63-91.

Tarski, Alfred. Logic, Semantics, Metamathematics. Translated by J. H. Woodger. Oxford: Clarendon Press, 1956. Contains a translation of "Der Wahrheitsbegriff in den formalisierten Sprachen."

Weinreich, Uriel. "Explorations in Semantic Theory." In Current Trends in Linguistics, edited by Thomas A. Sebeok, Vol. III. The Hague: Mouton, 1966.

Yehoshua Bar-Hillel (1967)