Semantics


Semantics is the study of meaning. More specifically, semantics is concerned with the systematic assignment of meanings to the simple and complex expressions of a language. The best way to understand the field of semantics is to appreciate its development through the twentieth century. In what follows, that development is described. As will be seen, advances in semantics have been intimately tied to developments in logic and philosophical logic.

Though there were certainly important theories, or proto-theories, of the meanings of linguistic expressions prior to the seminal work of the mathematician and philosopher Gottlob Frege, in explaining what semantics is it is reasonable to begin with Frege's mature work. For Frege's work so altered the way language, meaning and logic are thought about that it is only a slight exaggeration to say that work prior to Frege has been rendered more or less irrelevant to how these things are currently understood.

In his pioneering work in logic Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens, which was published in 1879, Frege literally revolutionized the field. It is well beyond the scope of the present entry to describe Frege's achievements in this work. But it should be said that one of his most important contributions was to achieve for the first time a clear understanding of the semantic functioning of expressions of generality, such as 'every,' 'some' and so on. This made it possible to understand, again for the first time, how sentences containing multiple expressions of generality, such as 'Every skier loves some mountain,' manage to mean what they do. In a series of papers written in the late 1800s, Frege articulated a novel theory of meaning for languages that was to be very influential. These papers included "Function and Concept" (1891), "On Concept and Object" (1892) and most famously "On Sense and Reference" (1892).

Frege made a fundamental distinction between expressions that are unsaturated or incomplete and expressions that are complete. The former he called concept words (perhaps concept expressions would be better) and the latter he called proper names. A sentence like:

1. Frege runs.

can be split up into the part that is unsaturated, the concept word 'runs,' and the complete part, the proper name 'Frege.' All expressions, Frege thought, are associated with a sense and a reference. These both have some claim to be called the meaning of the expression in question, and so it is probably best to think of Frege as claiming that there are two components to the meaning of an expression. The referent of an expression can be thought of as the thing in the world the expression stands for. Thus, the referent of the proper name 'Frege' is Frege himself. And the referent of the concept word 'runs' is a concept, which Frege took to be a function from objects to truth values. So the referent of 'runs' maps an object o to the truth value true iff o runs; otherwise, it maps o to false. By contrast, the sense of an expression Frege thought of as a way or mode in which the referent of the expression is presented. So perhaps Frege can be "presented" as the author of Begriffsschrift. Then the sense of the name 'Frege' is the descriptive condition the author of Begriffsschrift. It is perhaps more difficult to think of senses of concept words, but it helps to think of them as descriptive conditions that present the concept that is the referent in a certain way.

Now Frege thought that the sense of an expression determines its referent. So the sense of 'Frege' is a mode of presentation of Frege, a descriptive condition that Frege uniquely satisfies in virtue of which he is the referent of 'Frege.' Further, in understanding a linguistic expression, a competent speaker grasps its sense and realizes that it is the sense of the expression.

Of course complex linguistic expressions, such as 1 above, also have senses and references. Frege held that the sense of a complex expression is determined by the senses of its parts and how those parts are combined. (Principles of this general sort are called principles of compositionality, and so it could be said that Frege held a principle of compositionality for senses.) Indeed, Frege seems to have held the stronger view that the sense of a complex expression is literally built out of the senses of its parts. In the case of 1, its sense is the result of combining the sense of 'runs' and of 'Frege.' Frege believed that just as the expression 'runs' is unsaturated, so its sense too must be unsaturated or in need of completion. The sense of 'Frege,' by contrast, like the expression itself, is whole and complete (not in need of saturation). The sense of 1 is the result of the whole sense of 'Frege' saturating or completing the incomplete/unsaturated sense of 'runs.' It is the unsaturated sense of 'runs' that holds the sense of 1 together, and this is true generally for Frege. Frege called the sense of a declarative sentence like 1 a thought. Thus in "On Concept and Object" (p. 193) Frege writes:

For not all the parts of a thought can be complete; at least one must be unsaturated or predicative; otherwise they would not hold together.

Similarly, Frege held that the reference of a complex expression is determined by the references of its parts and how they are put together (i.e. he held a principle of compositionality for referents). In the case of 1, the referent is determined by taking the object that is the referent of 'Frege' and making it the argument of the function that 'runs' refers to. This function maps objects to the True or the False depending on whether they run or not. Thus, the result of making this object the argument of this function is either the True or the False. And whichever of these is the result of making the object the argument of the function is the referent of 1. So sentences have thoughts as senses and truth values (the True; the False) as referents.
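
To make the compositional picture vivid, here is a minimal sketch in Python (obviously not Frege's own apparatus): a concept word is modeled as a function from objects to truth values, and the referent of the sentence is computed by applying that function to the referent of the name. The particular objects and extension are invented for illustration.

```python
# A toy model of Frege's compositionality of reference. The concept word
# 'runs' refers to a function from objects to truth values; the sentence's
# referent (a truth value) results from applying it to the name's referent.

frege = "Frege"            # stand-in for the referent of the name 'Frege'
runners = {"Frege"}        # invented: the objects that run

def runs(o):
    """Referent of 'runs': maps object o to True iff o runs."""
    return o in runners

# Referent of 'Frege runs': apply the referent of 'runs' to the referent
# of 'Frege'. Here the result is the True.
print(runs(frege))  # True
```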

Concerning Frege's account of sentences containing quantifiers, expressions of generality such as 'every,' 'some' etc., consider the sentence

2. Every student runs.

The words 'student' and 'runs' are both concept words. Thus they have unsaturated senses and refer to concepts: functions from objects to truth values. Now Frege thought that a word like 'every' was doubly unsaturated. To form a whole/complete expression from it, it needs to be supplemented with two concept words ('student' and 'runs' in 2). The sense of 'every' is also doubly unsaturated. Thus the sense of 2 is a thought, a complete sense, that is the result of the senses of 'student' and 'runs' both saturating the doubly unsaturated sense of 'every' (in a certain order). By contrast, the referent of 'every' must be something that takes two concepts (those referred to by 'student' and 'runs' in 2) and yields a referent for the sentence. But as we have seen, a sentence's referent is a truth value. Thus the referent of 'every' must take two concepts and return a truth value. That is, its referent is a function from a pair of concepts to a truth value. In essence, 'every' refers to a function that maps the concepts A and B (in that order) to the True iff every object that A maps to the true, B maps to the true (i.e. iff every object that falls under A falls under B).
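
On this picture, 'every' denotes what would now be called a generalized quantifier. A minimal Python sketch, with an invented domain and invented extensions for the two concept words:

```python
# 'Every' as doubly unsaturated: its referent takes two concepts (functions
# from objects to truth values) and yields a truth value.

domain = ["ann", "bob", "cam"]                 # invented universe
student = lambda o: o in {"ann", "bob"}        # concept word 'student'
runs = lambda o: o in {"ann", "bob", "cam"}    # concept word 'runs'

def every(A, B):
    """True iff every object falling under A also falls under B."""
    return all(B(o) for o in domain if A(o))

# 'Every student runs' refers to the True iff every student runs.
print(every(student, runs))  # True
```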

Above it was mentioned that Frege thought that the referent of a complex expression was a function of the referents of its parts and how they are combined (compositionality of reference). Some examples seem to show that this is incorrect. Consider the following:

3. Chris believes that snow is white.

3a. Chris believes that Mt. Whitney is more than 14,000 feet high.

These sentences may well have different referents, that is, truth values. But the embedded sentences ('snow is white'; 'Mt. Whitney is more than 14,000 feet high') have the same referents (the True) and the other parts of the sentences have the same referents as well. But then it would seem that compositionality of reference would require that 3 and 3a have the same reference/truth value. Frege famously gets out of this apparent problem by claiming that 'believes' has the effect of shifting the referents of expressions embedded with respect to it. In 3 and 3a, the shifted referents of the embedded sentences are their usual senses. So in these environments, the sentences have different referents because they express different thoughts outside of contexts involving 'believes' and related devices.

Frege's doctrine of sense and reference constitutes a semantical theory of languages, because it claims that the meanings of linguistic expressions have these two components, and it gives an account of what the senses and referents of different kinds of linguistic expressions are.

Shortly after Frege had worked out his semantical theory of sense and reference, the English philosopher and mathematician Bertrand Russell was working out a theory of the meanings, or information contents, of sentences. While Frege had held that the thought expressed by a sentence, which captures the information the sentence encodes, consisted of senses, Russell (1903) held that the information encoded by a sentence was a proposition, where the constituents of propositions, far from being Fregean senses, were roughly (and for the most part) the things the proposition is about. Thus, whereas Frege held that 1 expressed a thought containing a mode of presentation of Frege and a mode of presentation of the concept of running, Russell held that the proposition expressed by 1 contained Frege himself and the concept of running (though Russell thought of concepts differently from the way Frege did). This contrast has more than historical significance, because current semanticists are classified as Fregean or Russellian depending on whether they hold that the information contents of sentences contain the things those information contents are about (objects, properties and relations: Russellian) or modes of presentation of the things those information contents are about (Fregean).

In the early part of the twentieth century, the philosophical movement known as Logical Positivism achieved dominance, especially among logically minded philosophers who might have been interested in semantics. The Positivists thought that much of traditional philosophy was literally nonsense. They applied the (pejorative) term "metaphysics" to what they viewed as such philosophical nonsense. The Positivists, and especially Rudolf Carnap, developed accounts of meaning according to which much of what had been written by philosophers was literally meaningless. The earliest and crudest Positivist account of meaning was formulated by Carnap (1932). On this view, the meaning of a word was given by first specifying the simplest sentence in which it could occur (its elementary sentence). Next, it must be stated how the word's elementary sentence could be verified. Any word not satisfying these two conditions was meaningless. Carnap held that many words used in traditional philosophy failed to meet these conditions and so were meaningless.

Carnap called philosophical statements (sentences) that on analysis fail to be meaningful pseudo-statements. Some philosophical statements are pseudo-statements, according to Carnap, because they contain meaningless terms as just described. But Carnap thought that there is another class of philosophical pseudo-statements. These are statements that are literally not well formed (Carnap gives Heidegger's "We know the nothing." as an example).

The downfall of the Positivists' theory of meaning was that it appeared to rule out certain scientifically important statements as meaningless. This was unacceptable to the Positivists themselves, who were self-consciously very scientifically minded. Carnap heroically altered and refined the Positivists' account of meaningfulness, but difficulties remained. Hempel (1950) is a good source for these developments.

At about the same time Carnap was formulating the Positivists' account of meaning, the Polish logician Alfred Tarski was involved in investigations that would change forever both logic and semantics. It had long been thought that meaning and truth were somehow intimately connected. Indeed, some remarks of Wittgenstein's in his Tractatus Logico-Philosophicus ("4.024. To understand a proposition means to know what is the case, if it is true.") had led many to believe that the meaning of a sentence was given by the conditions under which it would be true and false. However, the Positivists had been wary of the notion of truth. It seemed to them a dangerously metaphysical notion (which is why they "replaced" talk of truth with talk of being verified).

Against this background, Tarski showed that truth ('true sentence') could be rigorously defined for a variety of formal languages (languages, growing out of Frege's work in logic, explicitly formulated for the purpose of pursuing research in logic or to be used to precisely express mathematical or scientific theories). Though earlier papers in Polish and German contained the essential ideas, it was Tarski (1935) that alerted the philosophical world to Tarski's important new results.

Tarski himself despaired of giving a definition of true sentence of English (or any other naturally occurring language). He thought that the fact that such languages contain the means for talking about expressions of that very language and their semantic features (so English contains expressions like 'true sentence,' 'denotes,' 'names,' etc.) meant that paradoxes, such as the paradox of the liar, are formulable in such languages. In turn, Tarski thought that this meant that such languages were logically inconsistent and hence that there could be no correct definition of 'true sentence' for such languages.

Nonetheless, Tarski's work made the notion of truth once again philosophically and scientifically respectable. And it introduced the idea that an important element, perhaps the sole element, in providing a semantics for a language was to provide a rigorous assignment, to the sentences of the language, of the conditions under which they are true. (Tarski's 1935 paper for the most part gave definitions of true sentence for languages with fixed interpretations. The now more familiar notion of true sentence with respect to a model was introduced later. See Hodges [2001] for details.)

Carnap's Meaning and Necessity (1947) is arguably the first work that contemporary semanticists would recognize as a work in what is now considered to be semantics. Following Tarski, Carnap distinguishes the languages under study and for which he gives a semantics, object languages, from the languages in which the semantics for the object languages are stated, metalanguages. The object languages Carnap primarily considers are a standard first order language (S1), the result of adding 'N' ("a sign for logical necessity") to that language (S2), and ordinary English. Carnap does not give detailed descriptions of any of these languages, noting that the book

" is intended not so much to carry out exact analyses of exactly constructed systems as to state informally some considerations aimed at the discovery of concepts and methods suitable for semantical analysis" (p. 8).

The heart of Carnap's semantics for these languages is given by rules of designation for predicates and individual constants, rules of truth for sentences and rules of ranges for sentences. The rules of designation state the meanings of the predicates and individual constants using English as the metalanguage. So we have (p. 4):

's' is a symbolic translation of 'Walter Scott'

'Bx' of 'x is a biped'

The rules of truth simply provide a Tarski-style definition of truth for sentences of the language (the definition assumes fixed meanings given by the rules of designation for predicates and individual constants). In order to specify the rules of range, Carnap introduces the notion of a state-description. For a language, say S1, a state-description in S1 is a set that contains, for every atomic sentence of S1, either it or its negation, but not both; and it contains no other sentences. Carnap comments (p. 9):

…it [a state-description in S1] obviously gives a complete description of a possible state of the universe of individuals with respect to all properties and relations expressed by predicates of the system. Thus the state-descriptions represent Leibniz's possible worlds or Wittgenstein's possible states of affairs.

Next Carnap gives a recursive characterization of a sentence holding in a state-description. An atomic sentence holds in a state-description iff it is a member of it. A disjunction holds in it iff one of its disjuncts holds in it, etc. The characterization of holding in a state-description is designed to formally capture the intuitive idea of the sentence being true if the possible world represented by the state-description obtained (i.e. if all the sentences belonging to the state-description were true). Given a sentence S, Carnap calls the class of state-descriptions in which S holds its range. Thus the clauses in the characterization of holding in a state-description Carnap calls rules of ranges. Regarding these rules of ranges, Carnap writes (pp. 9–10):

By determining the ranges, they give, together with the rules of designation for the predicates and the individual constants, an interpretation for all sentences of S1, since to know the meaning of a sentence is to know in which of the possible cases it would be true and in which not, as Wittgenstein has pointed out.

Thus, Carnap regards the rules of ranges together with the rules of designation as giving the meaning of the sentences of S1 (the connection with truth and the rules of truth is that there is one state-description that describes the actual world, and a sentence is true iff it holds in that state-description).

Using these resources, Carnap defines his well-known L-concepts. We here concentrate on L-truth and L-equivalence. Before getting to that, we must say something about Carnap's notion of explication. Carnap believed that one of the main tasks for philosophers was to take a "vague or not quite exact" concept and replace it by a more exact concept that one had clearly characterized. This new concept, called by Carnap the explicatum of the old concept, was intended to be used to do the work the old concept was used to do. Carnap thought that the notion of L-truth was the explicatum of the vague notions of "logical or necessary or analytic truth" (p. 10).

A sentence is L-true in a semantical system (e.g. S1) iff it holds in every state-description in that system. Carnap regarded this as a precise characterization of Leibniz's idea that necessary or analytic or logical truths hold in all possible worlds. Next, Carnap defines the notion of L-equivalence for sentences, predicates and individual constants. Effectively, two names, predicates or sentences are L-equivalent (in a semantical system, e.g. S1) iff they have the same extension at every state-description in that system (so L-equivalent names must name the same individual at every state-description, L-equivalent predicates must be true of the same individuals at every state-description, etc.).
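
The machinery of state-descriptions, ranges and L-truth is easy to make computational. Here is a minimal sketch for a toy language with two atomic sentences; the encoding of formulas as nested tuples is our own convenience, not Carnap's notation:

```python
# State-descriptions, holding-in, range and L-truth for a toy language.
from itertools import product

atoms = ["p", "q"]

# Represent a state-description as a dict: atom -> True (the atom is in)
# or False (its negation is in). One dict per possible combination.
state_descriptions = [dict(zip(atoms, vals))
                      for vals in product([True, False], repeat=len(atoms))]

def holds(sentence, sd):
    """Carnap's recursive characterization of holding in a state-description."""
    op = sentence[0]
    if op == "atom":
        return sd[sentence[1]]
    if op == "not":
        return not holds(sentence[1], sd)
    if op == "or":
        return holds(sentence[1], sd) or holds(sentence[2], sd)

def range_of(sentence):
    """The range of a sentence: the state-descriptions in which it holds."""
    return [sd for sd in state_descriptions if holds(sentence, sd)]

def l_true(sentence):
    """L-truth: holding in every state-description."""
    return len(range_of(sentence)) == len(state_descriptions)

p = ("atom", "p")
print(l_true(("or", p, ("not", p))))  # True: 'p or not-p' is L-true
print(l_true(p))                      # False: 'p' holds in only some
```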

The importance of Carnap's notion of L-equivalence is that he uses it to sketch a semantics for belief ascriptions. In order to do this, Carnap extends his notion of L-equivalence in several ways. First, he extends it so that expressions of different "semantical systems" (roughly, formal languages) may be L-equivalent (in effect, expressions e of system 1 and e' of system 2 are L-equivalent just in case the semantical rules of the two systems together suffice to show that the expressions have the same extension, p. 57). Second, he extends the notion of L-equivalence to apply to sentential connectives, to variables (they are L-equivalent iff they have the same range of values) and to quantifiers (they are L-equivalent iff they are quantifiers of the same sort [universal, existential] whose variables are L-equivalent, p. 58). Third, he defines what it is for two expressions of the same or different semantical systems (again, roughly formal languages) to be intensionally isomorphic. Roughly, expressions are intensionally isomorphic just in case they are built up in the same way out of L-equivalent parts. With these tools in hand, Carnap writes (pp. 61–62):

It seems that the sentence 'John believes that D' in S [a fragment of English; see p. 53] can be interpreted by the following semantical sentence:

15-1. 'There is a sentence 𝔖ᵢ in the semantical system S' such that (a) 𝔖ᵢ is intensionally isomorphic to 'D' and (b) John is disposed to an affirmative response to 𝔖ᵢ.'

Though Carnap's semantics for belief ascriptions was criticized by Alonzo Church (1950), many philosophers were influenced by Carnap's idea that the objects of belief are structured entities built up in the same way out of entities with the same intensions. See, for example, Lewis (1970).

The final important feature of Meaning and Necessity was its semantic treatment of modality. Carnap begins his discussion of modality by mentioning the work of C. I. Lewis (presumably he had in mind especially Lewis and Langford [1932]) in constructing various systems of modal logic. As mentioned above, Carnap considered as an object of semantical investigation a language that was the first order predicate logic (S1) supplemented with the sign 'N' "for logical necessity." He called the resulting language S2. Syntactically, prefixing 'N' to a matrix (either a sentence or a formula with free variables) results in a matrix. A detailed discussion of Carnap's semantics for this modal language would go beyond the scope of the present entry. However, a couple of points are worth making. First, if we just consider the case in which 'N' fronts a sentence (a formula with no free variables) ϕ, to get the rules of range for S2 we would simply add to the rules of range of S1 the following:

N(ϕ) holds in every state-description if ϕ holds in every state-description; otherwise N(ϕ) holds in no state-description.

This is a consequence of Carnap's idea that 'N' is the sign for logical necessity, and the notion of L-truth is the explicatum of the vague notion of logical necessity. Thus a sentence fronted by 'N' should hold at a state-description iff the sentence it embeds holds at every state-description. But then if the sentence fronted by 'N' holds at a state-description, it holds at every state-description. Thus, the above.

But of course since 'N' could front a matrix with free variables, one could then attach a quantifier to the result. Letting '..u..' be a matrix containing the variable 'u' free, we get things like

(u)N(..u..)

That is, we get quantifying into the sign 'N' for logical necessity. However, Carnap's treatment here results in the above being equivalent to (indeed, L-equivalent to)

N(u)(..u..).

The important point, however, is that Carnap had sketched a semantics for quantified modal logic.

Though virtually all of the crucial analyses and explications in Meaning and Necessity were eventually significantly modified or rejected (the explication of "logical necessity" by the notion of L-truth, understood in terms of holding at all state-descriptions; the treatment of 'N,' the sign of "logical necessity"; and the semantics for belief ascriptions), the work was nonetheless very important in the development of semantics. It provided a glimpse of how to use techniques from logic to systematically assign semantic values to sentences of languages, and began the project of providing a rigorous semantics for recalcitrant constructions like sentences containing modal elements and verbs of propositional attitude.

In the 1950s and early 1960s Carnap's ideas on the semantic treatment of modal logic were refined and improved upon. The result was the now familiar "Kripke style" semantics for modal logic. Kripke's formulations will be discussed here, but it is important to understand that similar ideas were in the air (see Hintikka [1961], Kanger [1957], and Montague [1960a]). Though these works were in the first instance works in logic, as we will see, they had a profound effect on people who were beginning to think about formal semantics for natural languages.

We will concern ourselves with the specific formulations in Kripke (1963). What follows will be of necessity slightly technical. The reader who is not interested in such things can skip to the end of the technical discussion for informal remarks. Assume that we have a standard first order logic with sentential connectives ¬, & and □ (the first and third one-place, the second two-place), individual variables (with or without subscripts) x, y, z, …; n-place predicates Pⁿ, Qⁿ, … (0-place predicate letters are propositional variables ), and universal quantifier (for any variable xᵢ, (xᵢ)). A model structure is a triple ⟨G, K, R⟩, where K is a set, G ∈ K and R is a reflexive relation on K (i.e. for all H ∈ K, H R H). Intuitively, G is the "actual world" and the members of K are all the possible worlds. R is a relation between worlds and is usually now called the accessibility relation. Intuitively, if H R H′ (H′ is accessible from H), then what is true in H′ is possible in H. Again intuitively, the worlds accessible from a given world are those that are possible relative to it.

Putting conditions on R gives one model structures appropriate to different modal logics. If R is merely reflexive, as required above, we get an M model structure. If R is reflexive and transitive (i.e. for any H, H′, H″ ∈ K, if H R H′ and H′ R H″, then H R H″), we get an S4 model structure. Finally, if R is reflexive, transitive and symmetric (i.e. for any H, H′ ∈ K, if H R H′, then H′ R H), we get an S5 model structure. (It should be recalled that for Carnap, state-descriptions, which represented possible worlds, were each accessible from every other; in effect, this is because there was no accessibility relation between state-descriptions. Thus, translated into the present framework, Carnap's "models" would be S5 models. Also, in Kripke's semantics, possible worlds (members of K) are primitive; in Carnap's, of course, they are explicated as state-descriptions.) A quantificational model structure is a model structure ⟨G, K, R⟩ together with a function ψ that assigns to every H in K a set of individuals: the domain of H. Intuitively, this is the set of individuals existing in the possible world H. Of course, this allows different worlds (members of K) to have different domains of individuals. This formally captures the intuitive idea that some individuals that exist might not have existed, and that there might have been individuals that there aren't.
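
These conditions on R are straightforward to check mechanically. A minimal sketch, with an invented set of worlds and an invented relation:

```python
# Checking which kind of model structure (M, S4, S5) a relation R yields.

K = {"G", "H1", "H2"}
R = {("G", "G"), ("H1", "H1"), ("H2", "H2"), ("G", "H1"), ("H1", "H2")}

def reflexive(R, K):
    return all((H, H) in R for H in K)

def transitive(R):
    return all((A, D) in R for (A, B) in R for (C, D) in R if B == C)

def symmetric(R):
    return all((B, A) in R for (A, B) in R)

print(reflexive(R, K))  # True: at least an M model structure
print(transitive(R))    # False: G R H1 and H1 R H2, but not G R H2 (not S4)
print(symmetric(R))     # False: G R H1 but not H1 R G (not S5)
```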

Given a quantificational model structure, consider the set U which is the union of ψ(H) for all H in K. Intuitively, this is the set of all possible individuals (i.e. the set U of individuals such that any individual in the domain of any world is in U). Then Uⁿ is the set of all n-tuples whose elements are in U. A quantificational model on a quantificational model structure ⟨G, K, R⟩ is a function φ that maps a zero-place predicate and a member of K to T or F; and, for n > 0, an n-place predicate and a member of K to a subset of Uⁿ. We extend φ by induction to assign truth values to all formula/world pairs relative to a function assigning members of U to variables:

1. Propositional Variable: Let f be a function assigning elements of U to all individual variables. Let P be a propositional variable. Then for any H in K, φ(P, H) = T relative to f iff φ(P, H) = T; otherwise φ(P, H) = F relative to f.

2. Atomic: Let f be as in 1. For any H in K, φ(Pⁿx₁,…,xₙ, H) = T relative to f iff ⟨f(x₁), …, f(xₙ)⟩ ∈ φ(Pⁿ, H); otherwise φ(Pⁿx₁,…,xₙ, H) = F relative to f.

(Note that 2 allows that an atomic formula can have a truth value at a world relative to an assignment to its variables, where some or all of its variables get assigned things not in the domain of the world, since f assigns elements of U to free variables and φ assigns subsets of Uⁿ to Pⁿ!)

3. Truth functional connectives: Let f be as in 1. Let A and B be formulae. For any H in K, φ(A & B, H) = T relative to f iff φ(A, H) = T relative to f and φ(B, H) = T relative to f; otherwise φ(A & B, H) = F relative to f. (Similarly for ¬.)

4. Modal operator: Let f be as in 1. Let A be a formula. φ(□A, H) = T relative to f iff φ(A, H′) = T relative to f for all H′ ∈ K such that H R H′; otherwise φ(□A, H) = F relative to f.

(Note that according to 4, whether a formula □A is true at a world (relative to f) depends only on whether A is true at all worlds accessible from the original world.)

5. Quantifier: Let f be as in 1. Let A(x, y₁,…,yₙ) be a formula containing only the free variables x, y₁,…,yₙ. For any H in K, and any function g (assigning elements of U to free variables), suppose φ(A(x, y₁,…,yₙ), H) relative to g is defined. Then φ((x)A(x, y₁,…,yₙ), H) = T relative to f iff for every f′ such that f′(x) ∈ ψ(H) and f′ differs from f at most in that f′(x) is not f(x), φ(A(x, y₁,…,yₙ), H) = T relative to f′; otherwise, φ((x)A(x, y₁,…,yₙ), H) = F relative to f.

(As Kripke notes, that in 5 we consider only functions f′ such that f′(x) ∈ ψ(H) means that quantifiers range over only the objects that exist at the world where the quantified sentence is being evaluated.)
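
To fix ideas, here is a minimal executable sketch of clauses 2, 4 and 5 (plus negation) for an invented quantificational model: two worlds, world-relative domains ψ, and a one-place predicate P. The tuple encoding of formulas is ours; note that, per clause 5, the quantifier ranges only over ψ(H).

```python
# Evaluating formulas of quantified modal logic at worlds, Kripke-style.

K = {"G", "H"}
R = {("G", "G"), ("H", "H"), ("G", "H")}           # accessibility
psi = {"G": {"a"}, "H": {"a", "b"}}                # domains: b exists only in H
phi = {("P", "G"): {("a",)},                       # extension of P at each world
       ("P", "H"): {("a",), ("b",)}}

def true_at(formula, H, f):
    """Truth of a formula at world H relative to assignment f (clauses 2-5)."""
    op = formula[0]
    if op == "atom":                               # clause 2
        return (f[formula[2]],) in phi[(formula[1], H)]
    if op == "not":                                # negation (cf. clause 3)
        return not true_at(formula[1], H, f)
    if op == "box":                                # clause 4
        return all(true_at(formula[1], H2, f) for (H1, H2) in R if H1 == H)
    if op == "all":                                # clause 5
        var, body = formula[1], formula[2]
        return all(true_at(body, H, {**f, var: d}) for d in psi[H])

# '(x)P(x)' at G: only a exists there, and a is P at G.
print(true_at(("all", "x", ("atom", "P", "x")), "G", {}))           # True
# 'Box (x)P(x)' at G: '(x)P(x)' must hold at G and at H (both accessible).
print(true_at(("box", ("all", "x", ("atom", "P", "x"))), "G", {}))  # True
```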

Now having gone through Kripke's semantics for quantified modal logic in some detail, let us step back and ask why it was important in terms of thinking of the semantics of natural language. People like Richard Montague, whom we will discuss below, were clearly influenced in their thinking about the semantics of natural language by Kripke's semantics for modal logic (recall too that Montague [1960a] itself contained ideas related to Kripke's). Since at least Carnap's Meaning and Necessity (and perhaps before), philosophers had thought of sentences as semantically associated with propositions and of n-place predicates as semantically associated with n-place relations (properties being one-place relations). Further, they had thought of these propositions and relations as determining truth values and extensions for the sentences and predicates expressing them relative to a "possible world" (which, of course, Carnap represented by a state-description).

Now in Montague (1960b), it is suggested that an n-place relation just is a function from possible worlds to a set of n-tuples (intuitively, the set of n-tuples whose elements stand in the relation in question from the standpoint of the world in question); and that a proposition just is a function from possible worlds to truth values. Generalizing these ideas leads straightforwardly to the possible worlds semantics for natural languages discussed below. Further, Montague claims this way of understanding relations and propositions (which Montague calls predicates; one-place predicates, then, are properties, and zero-place predicates are propositions) is to be found for the first time in Kripke (1963). This, in turn, means that at least Montague saw the seeds of possible worlds semantics for natural languages in Kripke (1963).

This initially seems at least a little bit strange, since nowhere in Kripke (1963) does one find the identification of propositions with functions from possible worlds to truth values or relations with functions from possible worlds to sets of n-tuples. However, it is easy to see why a logician like Montague would see those ideas in Kripke (1963). Consider again a model on a quantificational model structure, forgetting for the moment about functions f that are assignments to free variables and about the fact that the domains of members of K can vary (essentially, this means we are considering a model on a propositional model structure). A model φ on a (M/S4/S5) model structure ⟨G, K, R⟩ assigns to a propositional variable (a zero-place predicate: an atomic formula without any variables) and a member of K either T or F. Now consider a particular propositional variable P. Consider the function fP defined as follows:

For any H in K, fP(H) = T iff φ(P, H) = T; otherwise fP(H) = F.

fP is a function from worlds to truth values and so can be thought of, à la Montague, as the proposition expressed by P (in the model φ on the model structure ⟨G, K, R⟩)! That is, propositions, understood as functions from worlds to truth values, are trivially definable using Kripke's models. Similar remarks apply to n-place relations, understood as functions from possible worlds to sets of n-tuples of individuals. It seems likely that this is why a logician like Montague would take Kripke to have introduced them. Montague, after making the attribution to Kripke, does add (p. 154): "…Kripke employs, however, a different terminology and has in mind somewhat different objectives."
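
The move is trivial to carry out. A minimal sketch, with invented model values:

```python
# Reading off the Montague-style proposition expressed by P from a model.

K = ["G", "H1", "H2"]
phi = {("P", "G"): True, ("P", "H1"): False, ("P", "H2"): True}

def intension(P):
    """The proposition expressed by P: a function from worlds to truth values."""
    return lambda H: phi[(P, H)]

f_P = intension("P")
print([f_P(H) for H in K])  # [True, False, True]
```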

These functions from worlds to truth values or sets of n-tuples are now generally called intensions. Their values at a world (truth values; sets of n-tuples) are generally called extensions (at worlds). The idea that the primary job of semantics is to assign to expressions of natural languages intensions and extensions of the appropriate sort very much took hold in the wake of work by Kripke and others in the semantics of modal logic.

With the resources Kripke and others had made available in hand, researchers thinking about the semantics of natural languages eagerly made use of them. Thus the late 1960s and early 1970s saw dizzying progress in natural language semantics as the techniques from modal logic were applied. Two works from that era that particularly capture the spirit of the times are Lewis (1970) and Montague (1973). The latter will be discussed here, since it is probably the most sophisticated and influential of the works of that period. The particular semantic phenomena Montague was concerned to understand were the workings of verbs of propositional attitude like 'believes,' the workings of intensional verbs like 'worships' and related phenomena (see p. 248 where Montague lists some of his concerns).

We saw above that both Frege and Carnap were also concerned with understanding the semantics of verbs like 'believes.' We are now in a position to say more about why such expressions attract the attention of semanticists. Consider the expression 'It is not the case' in sentences like

4. It is not the case that snow is white.

4a. It is not the case that Mt. Whitney is more than 14,000 feet high.

Whether a sentence fronted by 'It is not the case' is true or false depends only on the extension/truth value of the embedded sentence. Since both the embedded sentences are true, 4 and 4a are both false. Let's put this by saying that 'It is not the case that' creates extensional contexts. As we saw above, 'believes' doesn't create extensional contexts. 3 and 3a can differ in truth value even though the embedded sentences are both true. Let's say that 'believes' creates nonextensional contexts. The same is true of 'Necessarily.' The following differ in truth value even though the embedded sentences have the same extensions/truth values:

5. Necessarily, everything is identical to itself.

5a. Necessarily, Aristotle is a philosopher.

Finally, intensional verbs like 'worship' exhibit similar behavior and we could extend our characterization of creating nonextensional contexts so as to include such verbs. For even though 'Samuel Clemens' and 'Mark Twain' have the same extension (a certain individual), the following two sentences apparently may differ in extension/truth value:

6. Lori worships Samuel Clemens.

6a. Lori worships Mark Twain.

Now semanticists have been puzzled as to how to think of the semantics of expressions that create nonextensional contexts. But the work of Carnap and Kripke suggested the way to understand 'Necessarily.' In particular,

'Necessarily S' is true at a world w just in case the intension of S maps every world (accessible from w) to true.

In other words, whereas 'It is not the case' looks at the extension of the sentence it embeds to determine whether the entire sentence containing it is true, 'Necessarily' looks at the intension of the sentence it embeds to determine whether the entire sentence containing it is true. And given Kripke's semantics, intensions were well defined, respectable entities: functions from worlds to extensions. This made it appear to many that a semantics that assigned intensions to expressions could treat all expressions creating nonextensional contexts. Certainly, Montague had a version of this view.
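
The contrast can be put in a few lines of Python. In the following minimal sketch (the worlds, accessibility relation and the two intensions are invented), 'it is not the case' needs only the embedded sentence's extension at the world of evaluation, while 'necessarily' consults its whole intension:

```python
# Extensional vs. intension-sensitive sentence operators.

accessible = {"w1": ["w1", "w2"], "w2": ["w2"]}

# Intensions: functions from worlds to truth values.
everything_self_identical = lambda w: True           # true at every world
aristotle_philosopher = lambda w: w == "w1"          # true at w1 only

def it_is_not_the_case(intension, w):
    return not intension(w)                          # uses only the extension at w

def necessarily(intension, w):
    return all(intension(v) for v in accessible[w])  # uses the whole intension

# Both embedded sentences are true at w1 (same extension there), yet:
print(necessarily(everything_self_identical, "w1"))  # True
print(necessarily(aristotle_philosopher, "w1"))      # False
```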

As indicated above, Montague (1973) wanted to provide semantic treatments of verbs of propositional attitude such as 'believes,' intensional verbs such as 'worships,' and other phenomena. We will concentrate on these phenomena as well as Montague's treatment of quantification. Montague (1973) provides a syntax for a fragment of English. The fragment includes common nouns ('woman'; 'unicorn'), intransitive verbs (including 'run' and 'rise'), transitive verbs (including both intensional transitives and "normal" transitive verbs like 'love'), ordinary names and pronouns, adverbs (including 'rapidly' and 'allegedly'), prepositions, verbs of propositional attitude and modal sentence adverbs ("adsentences," such as 'necessarily'). The fragment allows the formation of relative clauses (though they employ the somewhat stilted 'such that,' so that we get things like 'man such that he loves Mary') and so complex noun phrases, as well as prepositional phrases and quantifier phrases ('Every woman such that she loves John'). Thus, Montague's syntactic fragment includes sentences like:

7. Every man loves a woman such that she loves him.

8. John seeks a unicorn.

9. John talks about a unicorn.

10. Mary believes that John finds a unicorn.

11. Mary believes that John finds a unicorn and he eats it.

It should be noted that many sentences of Montague's fragment had non-trivially different syntactic analyses: that is, distinct syntactic analyses that are interpreted differently semantically. So, for example, 8 above has an analysis on which 'a unicorn' is the constituent last added to the sentence and an analysis on which 'John' is the last constituent added. The latter has an interpretation on which it may be true even if there are no unicorns and so John is seeking no particular one. The former requires John to be seeking a particular unicorn. Thus, it is really syntactic analyses of sentences, and not the sentences themselves, that get semantic interpretations.

The next aspect of Montague's semantic treatment of his fragment of English is his intensional logic. Montague's intensional logic is typed. In particular, e and t are the basic types; and whenever a and b are types, ⟨a,b⟩ is a type. Finally, for any type a, ⟨s,a⟩ is a type. For each type, there will be both constants and variables of that type (and hence quantifiers of that type). The key to understanding the syntactic interactions of the expressions of various types is to know that if α is of type ⟨a,b⟩ and β is of type a, then α(β) is of type b. Interpretations assign expressions of the logic various denotations (relative to an assignment of values to variables). Expressions of type e get assigned individuals (possible individuals); expressions of type t get assigned truth values. Expressions of type ⟨a,b⟩ get assigned as denotations functions from denotations of type a to denotations of type b. Finally, expressions of type ⟨s,a⟩ get assigned functions from world/time pairs to denotations of type a ("an intension of a type a expression"). To take some examples, expressions of type ⟨e,t⟩ get assigned functions from individuals to truth values (the denotations can alternatively be thought of as sets of individuals: those that get mapped to true). Expressions of type ⟨s,e⟩ are assigned functions from world/time pairs to individuals. Such functions Montague called individual concepts. Expressions of type ⟨⟨s,e⟩,t⟩ are assigned functions from individual concepts to truth values (alternatively, sets of individual concepts). Expressions of type ⟨s,t⟩ are assigned functions from world/time pairs to truth values. As indicated above, Montague thought of these as propositions.
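
For the reader who finds code clearer than prose, here is a minimal sketch of the type system as Python type aliases. The rendering of world/time pairs as a concrete tuple type and all the names are our illustrative assumptions; Montague's types are of course abstract:

```python
# Montague's types rendered as Python type aliases.
from typing import Callable, Tuple

E = str                      # type e: (possible) individuals
T = bool                     # type t: truth values
Index = Tuple[str, int]      # a world/time pair, playing the role of s

# <a,b>: functions from type-a denotations to type-b denotations.
Et = Callable[[E], T]                                # type <e,t>
IndividualConcept = Callable[[Index], E]             # type <s,e>
Proposition = Callable[[Index], T]                   # type <s,t>
SetOfIndConcepts = Callable[[IndividualConcept], T]  # type <<s,e>,t>

# E.g. an intransitive verb denotes something of type <<s,e>,t>:
def walks(ic: IndividualConcept) -> T:
    return ic(("w0", 0)) in {"John", "Mary"}  # invented extension
```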

The way Montague provided a semantic interpretation of his syntactic fragment of English was to provide an algorithm for translating English sentences (really, syntactic analyses of English sentences) into his intensional logic. Then the interpretation of the English sentences was given by the interpretation of its translation in intensional logic. Recall again that sentences like 8 above can be true even if there are no unicorns. Thus, a verb like 'seeks' could not have as its denotation (really, its translation into intensional logic could not have as its denotation) a relation between individuals (or a function from individuals to a function from individuals to truth values).

In order to get the proper results, Montague decided to assign to common nouns and intransitive verbs as their denotations sets of individual concepts rather than sets of individuals. Verbs like 'believes' have as their denotations functions from propositions to sets of individual concepts. Since individual concepts essentially function as individuals in Montague's semantics (recall that common nouns like 'man' have as denotations sets of individual concepts), this treatment essentially amounts to holding that verbs of propositional attitude denote relations between individuals and propositions. Quantifiers such as 'Every man' denote sets of properties of individual concepts (properties of individual concepts being functions from world/time pairs to sets of individual concepts). Roughly, 'Every man walks' is true at a world and time ⟨w,t⟩ just in case the property of individual concepts that determines, at every world and time, the set of individual concepts denoted by 'walks' is in the set of properties of individual concepts denoted by 'Every man' at ⟨w,t⟩. 'Necessarily' denotes at a world/time ⟨w,t⟩ a set of propositions: those that are necessary at ⟨w,t⟩.

Finally, a transitive verb denotes a function from properties of properties of individual concepts (denotations of expressions of type ⟨s,⟨⟨s,⟨⟨s,e⟩,t⟩⟩,t⟩⟩: functions from world/time pairs to sets of properties of individual concepts) to sets of individual concepts. Again, recalling that individual concepts essentially stand in for individuals in Montague's framework, this means that transitive verbs in effect denote relations between individuals and properties of properties of individuals. Note that this means that for 8 to be true at a world/time pair ⟨w,t⟩ is for John to stand in a relation to the property of being a property possessed by a unicorn. This can be the case even if there are no unicorns.

Montague chose to treat all expressions of a given syntactic category the same way semantically. This means that transitive verbs like 'loves' get the odd denotation required by 'seeks' to get 8 right. But don't we want 'John loves Mary' to be true at a world/time pair iff the individual John stands in a relation to the individual Mary? Surely this shouldn't require instead that John stands in a relation to the property of being a property possessed by Mary. Where's the love (between individuals)? Montague essentially requires interpretations to make true meaning postulates for "ordinary" verbs like 'loves,' and these end up ensuring that 'John loves Mary' is true at ⟨w,t⟩ iff John and Mary themselves are properly related.
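
Here is a minimal sketch of the contrast, simplifying individual concepts to individuals and dropping the intensional layer: both verbs take a quantifier-style object (a function from properties to truth values), but a meaning-postulate-style definition reduces 'loves' to a first-order relation, while 'seeks' relates the subject to the quantifier itself. All extensions are invented.

```python
# 'Seeks' vs. 'loves' with quantifier-type objects (heavily simplified).

unicorns = []                                   # no unicorns exist

def a_unicorn(prop):
    """Quantifier 'a unicorn': true iff some unicorn has the property."""
    return any(prop(x) for x in unicorns)

seeking = {("John", a_unicorn)}                 # invented: who seeks what

def seeks(subject, quantifier):
    # Genuinely intensional: relates the subject to the quantifier itself,
    # so 'John seeks a unicorn' can be true though no unicorn exists.
    return (subject, quantifier) in seeking

loves_pairs = {("John", "Mary")}                # invented first-order relation

def loves(subject, quantifier):
    # Meaning-postulate style: reduce to a relation between individuals.
    return quantifier(lambda y: (subject, y) in loves_pairs)

print(seeks("John", a_unicorn))            # True, despite there being no unicorns
print(loves("John", lambda p: p("Mary")))  # True: John and Mary properly related
```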

Montague's semantic account here was very influential. He showed that the resources Kripke and others developed for the semantics of modal logic could be rigorously applied to natural languages, and arguably treat such recalcitrant expressions as 'believes,' 'necessarily,' and 'seeks.' Montague's basic approach was picked up by many philosophers and linguists and much work in semantics through the 1980s and beyond was conducted in this framework. Indeed, much work is still done in this and closely related frameworks.

At about the same time Montague was doing his pioneering work on formal semantics for natural languages, Donald Davidson was developing a very different approach to semantics. Davidson (1967) begins with the idea that a theory of meaning for a natural language must specify how the meaning of a sentence is determined by the meanings of the words in it, and presumably how they are combined (in other writings, Davidson puts the point by saying that the meaning of a sentence must be a function of a finite number of features of the sentence; presumably, one is its syntax). Davidson thought that only a theory of this sort could provide an explanation of the fact that, on the basis of mastering a finite vocabulary and a finite number of syntactic rules, we are able to understand a potentially infinite number of sentences. More specifically, Davidson thought a theory of meaning should comprise an axiomatized theory, with a finite number of axioms, that entails as theorems (an infinite number of) statements specifying the meaning of each sentence of the language. Davidson thought that grasping such a theory would allow one to understand all the sentences of the language. Further, as suggested above, such a theory would explain how creatures like us are capable of understanding an infinite number of sentences. It would only require us to grasp the axioms of the theory of meaning, which are finite in number.

It might be thought that the theorems of a theory of meaning of the sort discussed would be all true sentences of the form 's means m,' where 's' is replaced by a structural description of a sentence of the language and 'm' is replaced by a term referring to a meaning. Further, it might be thought that a theory would have such theorems in part by assigning meanings to the basic expressions of the language (such assignments being made by axioms). However, Davidson thinks that we have not a clue as to how to construct such a theory, mainly because we have no idea how the alleged meanings of simpler expressions combine to yield the meanings of the complex expressions of which they are parts. Thus, Davidson concludes, postulating meanings of expressions gets us nowhere in actually giving a theory of meaning for a language.

Davidson's counterproposal as to what a theory of meaning should be like is radical. A theory of meaning must consist of a finite number of axioms that entail for every sentence of the language a true sentence of the form 's is true iff p,' where 's' is replaced by some sort of description of a sentence of the language whose theory of meaning we are giving, and 'p' is replaced by some sentence. Henceforth, we will call such sentences T-sentences. Recalling our discussion of Tarski, the language we are giving a theory of meaning for is the object language and the theory of meaning is given in the metalanguage. Thus, the formulation just given requires the metalanguage to have some sort of (presumably standardized) description of each sentence of the object language (to replace 's'); if we imagine 'p' to be replaced by the very sentence that what replaces 's' describes (as Davidson sometimes supposes), the metalanguage must also contain the sentences of the object language. In short, Davidson held that to give a theory of meaning for a language is to give a Tarski-style truth definition for it.
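
To see how finitely many axioms can entail infinitely many T-sentences, consider a minimal sketch for a toy object language with two atomic sentences and 'and'; the language, its axioms and the stand-in truth conditions are all our invention, not Davidson's:

```python
# A Davidson-style finitely axiomatized truth theory for a toy language.

# Axioms for atomic sentences, e.g.: 'snow is white' is true iff snow is
# white. The lambdas stand in for the worldly conditions on the right side.
atomic_axioms = {
    "snow is white": lambda: True,
    "grass is green": lambda: True,
}

def is_true(s):
    """The recursive axiom for 'and' plus the atomic axioms derive, for each
    of infinitely many sentences, its truth condition."""
    if " and " in s:
        left, right = s.split(" and ", 1)
        return is_true(left) and is_true(right)
    return atomic_axioms[s]()

# The theory thus entails a T-sentence for each sentence, e.g.:
# 'snow is white and grass is green' is true iff snow is white and grass is green.
print(is_true("snow is white and grass is green"))  # True
```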

Tarski thought that a condition of adequacy for a theory of truth for a (in his case, formal) language L was that the theory has as consequences all sentences of the form 's is true (in L) iff p', where 's' is replaced by a structural description of a sentence of the object language and 'p' is replaced by a translation of it. Here Tarski clearly seemed to think that for one sentence to translate another is for them to share a meaning. However, in characterizing what is to replace 'p' in his T-sentences, Davidson cannot require 'p' to be replaced by a translation of the sentence the thing replacing 's' describes, assuming anyway that for one sentence to be a translation of another is for them to share the same meaning. For Davidson eschews meanings. After all, a theory of truth was supposed to be a theory of meaning; it would hardly do, then, to appeal to meanings in constructing one's theory of truth. Thus Davidson famously merely requires the T-sentences to be true. But this requirement is very weak, for 'iff' is truth functional in Davidson's T-sentences, and so the sentences require for their truth only that the two sides share a truth value. But then there is nothing in principle yet to prevent having a theory of truth for English that yields not:

12. 'Snow is white' is true (in English) iff snow is white.

but instead

13. 'Snow is white' is true (in English) iff grass is green.

After all, 13 is true! Davidson was aware of this consequence of his view, and explicitly discussed it. He claimed that by itself, the fact that a theory of truth yields 13 as a theorem instead of 12 doesn't cut against it. However, the theory has to get all the other T-sentences coming out true, and Davidson thought it was unlikely that it could do that and yield 13 as a theorem.

Of course, the picture sketched so far needs to be complicated to account for contextually sensitive expressions. It won't do to have as theorems of one's truth theory things such as:

14. 'I am hungry' is true (in English) iff I am hungry.

Davidson himself thought that the way to deal with this was to relativize truth to e.g. a speaker and a time (to handle tense). Others have suggested that a theory of truth for a language containing such contextually sensitive words must define truth for utterances of sentences. For example, see Weinstein (1974).

Further complications are required as well. Natural language contains devices not contained in the relatively austere formal languages for which Tarski showed how to define truth. Natural languages contain verbs of propositional attitude ('believes'), non-indicative sentences and other features. Davidson attempted to provide accounts of many such devices in other papers. Davidson (1968) for example takes up verbs of propositional attitude.

One sometimes hears model theoretic approaches to semantics contrasted with those that offer an absolute truth theory. The contrast is illustrated by comparing Montague and Davidson, since each is perhaps the paradigmatic case of one of these approaches. As we saw, Montague gives a semantics for English sentences by associating them with formulae of intensional logic. He then gives a semantics for the formulae of intensional logic. Now the latter includes a definition of truth relative to an interpretation (and other parameters as well). As discussed, expressions of Montague's intensional logic only have denotations (and intensions) relative to interpretations, which are also sometimes called models. Roughly, then, a model theoretic semantics is one that defines truth relative to models or interpretations. By contrast, as we have seen, Davidson wants a theory of truth simpliciter (actually, truth for L, but truth isn't relativized to models). Thus, Davidson's approach is sometimes called an absolute truth theory approach. I believe it is fair to say that most semanticists today use a model theoretic approach.

The 1960s and 1970s saw an explosion in the sort of model theoretic semantics pioneered by Montague, Lewis and others. Some of the important developments had to do with evolving notions of an index of evaluation. As we saw above, in Montague's intensional logic, expressions are assigned extensions/denotations at world/time pairs (under an interpretation relative to an assignment of values to variables; this will be suppressed in the present discussion for ease of exposition). In particular, formulae are assigned truth values at a pair of a world and time.

Since expressions of Montague's English fragment receive semantic interpretations by being given the interpretation assigned to the expressions of intensional logic they are translated into, exactly similar remarks apply to English expressions and sentences. We shall call these elements at which expressions are assigned extensions (in this case, world/time pairs) indices. (Terminology here varies: Montague called these things points of reference; Lewis [1970] called them indices, which is probably the most common term for them.) It should be obvious why sentences are assigned truth values at worlds. The reason Montague included times in his indices was that his intensional logic included tense operators in order that he could capture the rudimentary behavior of tense in English. Semantically, such operators work by shifting the time element of the index. Thus, where P is a past tense operator, φ a formula, w a world and t a time, Pφ is true at ⟨w,t⟩ iff φ is true at ⟨w,t′⟩ for some t′ prior to t. Similarly, modal operators shift the world element of the index: Necessarily φ is true at ⟨w,t⟩ iff φ is true at ⟨w′,t⟩ for all w′.
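
A minimal sketch of such index-shifting operators (not Montague's own formulation; the worlds, times and atomic valuation are invented):

```python
# Index semantics: evaluation at world/time pairs, with a past operator P
# shifting the time coordinate and a necessity operator N shifting the world.

worlds = ["w1", "w2"]
times = [0, 1, 2]
val = {("rain", w, t): (w == "w1" and t == 0) for w in worlds for t in times}

def true_at(formula, w, t):
    op = formula[0]
    if op == "atom":
        return val[(formula[1], w, t)]
    if op == "P":   # past: true iff the embedded formula is true at an earlier time
        return any(true_at(formula[1], w, t2) for t2 in times if t2 < t)
    if op == "N":   # necessity: true iff the embedded formula is true at all worlds
        return all(true_at(formula[1], w2, t) for w2 in worlds)

print(true_at(("P", ("atom", "rain")), "w1", 1))  # True: it rained at t = 0
print(true_at(("N", ("atom", "rain")), "w1", 0))  # False: no rain in w2
```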

So the truth values of formulae of Montague's intensional logic, and so of the English sentences they translate, depend on (or vary with) both a world and a time. Of course, it was noticed that the truth values of some English sentences vary with other features as well, such as who is speaking (if the sentence contains 'I'); who is being addressed (if the sentence contains 'you'); where the sentence is uttered (if the sentence contains 'here') and so on. A natural thought was to build into indices features for all such expressions, so that indices would contain all the features that go into determining extensions of expressions. Thus, indices would be n-tuples of a world, time, place, speaker, addressee and so on. Lewis (1970) is a good example of an "index semantics" with indices containing many features. However, a number of developments resulted in such approaches being abandoned or at least significantly modified.

Hans Kamp (1971) discovered that in a language with standard feature-of-index shifting tense operators and contextually sensitive expressions that are sensitive to that same feature, such as 'now,' one needs two temporal coordinates. The point can be illustrated using a sentence in which 'now' occurs embedded under e.g. a past tense operator (assume 'one week ago' is a past tense operator):

15. One week ago Sarah knew she would be in Dubrovnik now.

When this sentence is evaluated at an index, there must be a time in the index for 'one week ago' to shift. The embedded sentence ('Sarah knew she would be in Dubrovnik now') is then evaluated relative to an index whose time feature has been shifted back one week. But then if 'now' takes that time as its value, we predict that 15 means that one week ago Sarah knew she would be in Dubrovnik then. But the sentence doesn't mean that. So the index must contain a second time, in addition to the one shifted by 'one week ago,' that remains unshifted so that the embedded occurrence of 'now' can take it as its value.
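
Kamp's point is easy to see in a sketch. In the following toy implementation (the sentence encoding and facts are invented), the index carries both a shiftable evaluation time and an unshifted utterance time, and 'now' consults only the latter:

```python
# Double indexing: a shiftable evaluation time plus an unshifted utterance time.

times = list(range(15))
in_dubrovnik = {t: t >= 10 for t in times}   # Sarah in Dubrovnik from day 10 on

def true_at(formula, eval_time, utterance_time):
    op = formula[0]
    if op == "in_dubrovnik_now":
        # 'now' ignores the shifted evaluation time: it uses utterance time.
        return in_dubrovnik[utterance_time]
    if op == "one_week_ago":
        # Shifts the evaluation time back seven days; the utterance time
        # coordinate is left untouched.
        return true_at(formula[1], eval_time - 7, utterance_time)

# Evaluated (simplifying 15 drastically) at day 14: 'one week ago' shifts the
# evaluation time to day 7, but 'now' still picks up day 14.
print(true_at(("one_week_ago", ("in_dubrovnik_now",)), 14, 14))  # True
# With a single index, 'now' would be forced to denote the shifted time
# (day 7), wrongly yielding False here.
```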

Kamp's requirement of there being two time coordinates is sometimes called the requirement of double indexing. I emphasize again that the requirement stems from there being in the language an operator that shifts a certain feature (time, in our case) and a contextually sensitive expression that picks up as its value the same feature. The argument above given for double indexing of times, then, assumes that temporal expressions ('One week ago') are index shifting operators. Many, including the present author, doubt this claim. (See King [2003] for discussion.) But similar arguments (involving 'actual' and 'Necessarily') could be given for double indexing of worlds.

At any rate, on the basis of such considerations, it was thought that, minimally, one needed two indices, each of which contained (at least) a world and a time. However, it was Kaplan (1989) (written in the early 1970s and circulated for years in mimeograph form) that provided the proper theoretical understanding of double indexing. Kaplan forcefully argued that not only do we need two indices for the reasons Kamp suggested as well as others (see section VII of 'Demonstratives'), but we need to recognize that the indices are representing two very different things, with the result that we need to recognize two different kinds of semantic values. One index represents context of utterance. This is the index that provides values for contextually sensitive expressions such as 'I,' 'now,' 'here' and so on. The intuitive picture is that a sentence taken relative to a context of utterance has values assigned to such contextually sensitive expressions. This results in the sentence having a content, what is said by the sentence, taken in that context.

So if I utter 'I am hungry now' on June 12, 2006, the content of the sentence in that context, what I said in uttering it then, is that Jeff King is hungry on June 12, 2006. Now that very content can be evaluated at different circumstances of evaluation, which are what the other index represents. For simplicity, think of circumstances of evaluation as simply possible worlds. Then we can take the sentence 'I am hungry now' and consider its content relative to the context of utterance described above. That content, or proposition, can then be evaluated for truth or falsity at different circumstances of evaluation (possible worlds). It is true at worlds in which Jeff is hungry on June 12, 2006 and false at those where he is not.

This distinction between context and circumstance, which the two indices represent, gives rise to a distinction between two kinds of semantic value (here we confine ourselves to the semantic values associated with sentences). On the one hand, the sentence 'I am hungry now' has a meaning that is common to utterances of it regardless of speaker or time. It is this meaning that determines what the content of that sentence is taken relative to contexts with different speakers and times. So this meaning, which Kaplan called character, determines a function from contexts to propositional content or what is said. By contrast, there is a sense in which the sentence 'I am hungry now' uttered by me now and by Rebecca tomorrow means different things. This is because the sentence has different contents relative to those two contexts. So content is the other kind of semantic value had by sentences. Contents are true or false at worlds, so contents determine functions from worlds to truth values. In summary, character determines a function from context to content; content determines a function from worlds to truth values. Kaplan's distinction between context and circumstance and the corresponding distinction between character and content has been hugely influential and widely accepted.
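
The character/content distinction lends itself to a functional sketch. In the toy rendering below (an illustration only; the representations of contexts and worlds are assumptions of the example), the character of 'I am hungry now' is one function shared by all utterances, and the content it determines in a given context is a function from worlds to truth values.

```python
# Kaplan's two-stage picture in miniature: character maps a context to
# a content; content maps a world (circumstance) to a truth value.

from typing import Callable, Dict, NamedTuple, Set, Tuple

class Context(NamedTuple):
    speaker: str
    time: str

World = Dict[str, Set[Tuple[str, str]]]
Content = Callable[[World], bool]

def i_am_hungry_now(c: Context) -> Content:
    """Character of 'I am hungry now': fixed across contexts."""
    # The content in context c: that c.speaker is hungry at c.time.
    return lambda w: (c.speaker, c.time) in w["hungry"]

ctx = Context(speaker="Jeff King", time="June 12, 2006")
what_is_said = i_am_hungry_now(ctx)   # the proposition expressed in ctx

w1: World = {"hungry": {("Jeff King", "June 12, 2006")}}
w2: World = {"hungry": set()}
print(what_is_said(w1), what_is_said(w2))   # True False
```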

Another important feature of Kaplan's (1989) work is his argument that both demonstratives (contextually sensitive words whose use requires the speaker to do something like demonstrate (point at) who she is talking about: 'he,' 'she,' 'this,' 'that') and pure indexicals (contextually sensitive words that don't require such demonstrations: 'I,' 'today,' etc.) are devices of direct reference. If we think of contents of sentences, propositions, as structured entities having as constituents the individuals, properties and relations that are the contents (relative to a context) of the expressions in the sentence, a view Kaplan likes, we can understand the claim that indexicals and demonstratives directly refer as the claim that these expressions contribute to propositions (relative to a context) the individuals they refer to (in the context). Thus, when I say: 'I am hungry,' the indexical 'I' contributes me to the proposition expressed by that sentence in that context.

Historically, the importance of this direct reference account of indexicals and demonstratives is its anti-Fregean thrust. Recall that for Frege, expressions generally, even those that refer to individuals, contribute to propositions senses that pick out their references and not the references themselves. In claiming that indexicals and demonstratives contribute individuals to propositions rather than senses that pick out those individuals, Kaplan was proposing a radically anti-Fregean account of indexicals and demonstratives. Kaplan's arguments here complemented the anti-Fregean arguments of one of the most influential works in philosophy of language of the twentieth century: Saul Kripke's (1980) Naming and Necessity.

Among other things, Kripke (1980) provided powerful arguments against what he sometimes calls the description theory of names. On the description theory, names are held either to be synonymous with definite descriptions or (more weakly) to have their references fixed by definite descriptions. So, for example, 'Aristotle' might be thought to be synonymous with 'the teacher of Alexander,' and whoever satisfies this description is the referent of 'Aristotle.' Frege's view was thought to be a version of the description theory, since Frege seems to say that the sense of a proper name can be expressed by a definite description (Frege [1892a] note B), in which case the name and description would be synonymous. Kripke argued very compellingly that descriptions are neither synonymous with, nor determine the reference of, proper names. As to synonymy, Kripke pointed out that whereas

16. The teacher of Alexander taught Alexander.

expresses (nearly) a necessary truth,

17. Aristotle taught Alexander.

expresses a highly contingent truth. But if the name and description were synonymous, the two sentences should be synonymous, and so both should be contingent or both should be necessary. But they aren't. Indeed, the name and description seem to function very differently semantically. As Kripke famously noted, whether 17 is true at any possible world depends on the properties of Aristotle at that world. This is because 'Aristotle' is what Kripke called a rigid designator: the expression designates Aristotle at every world where he exists, and never designates any individual other than Aristotle. Hence evaluating the sentence at a world always requires us to check Aristotle's properties there. By contrast, 'the teacher of Alexander' presumably designates different individuals at different worlds, depending on who taught Alexander there. Thus, this expression is non-rigid.

As to descriptions determining the referents of names, Kripke adduced a number of considerations, but perhaps the most persuasive was the following. Consider a name and any description that allegedly fixes the referent of the name; say 'the man who proved the incompleteness of arithmetic' fixes the referent of 'Gödel.' If we imagine that in fact some man Schmidt satisfies the description, we do not conclude that he is the referent of 'Gödel.' Quite the contrary, we conclude that the referent of 'Gödel,' that is, Gödel, fails to satisfy the description. But then the description does not fix the referent of the name (i.e. the referent is not whoever satisfies the description).

The arguments of Kaplan (1989) and Kripke (1980), together with arguments given by Donnellan, Marcus, Putnam and others turned semantics in a very anti-Fregean direction from the 1970s on. This anti-Fregean strain as applied to singular terms is sometimes called the new theory of reference.

As we saw above, Kaplan claimed that indexicals and demonstratives were directly referential and contributed their referents (relative to a context) to the propositions expressed by sentences in which they occur (interestingly, this is not reflected in Kaplan's [1989] formal system, which makes use of unstructured propositions that have no constituents corresponding to the words in the sentences that express the propositions; but his informal remarks make clear his intent). By contrast, though Kripke (1980) argued against the description theory of names, he cautiously made no positive claims about what names contribute to propositions (the preface to Kripke [1980] makes clear that this caution was intended; see pp. 20–21). In a series of works in the 1980s, most famously Salmon (1986) and Soames (1987), Scott Soames and Nathan Salmon offered powerful arguments in favor of the view that names too were devices of direct reference and contributed only their bearers to propositions expressed by sentences in which they occur. Both Soames and Salmon defended the view that sentences (relative to contexts) express structured propositions, with names (and indexicals and demonstratives) contributing the individuals to which they refer to those propositions. Salmon and Soames also thought that attitude ascriptions such as the following:

18. Nathan believes that Mark Twain is an author.

assert that the subject (Nathan) stands in a certain relation (expressed by 'believes') to a structured proposition (expressed by the embedded sentence). If that is right and if names contribute only individuals to propositions expressed by sentences in which they occur, then (assuming a simple principle of compositionality) 18 expresses the same proposition as

19. Nathan believes that Sam Clemens is an author.

Thus, on the Soames-Salmon view, 18 and 19 cannot differ in truth value. Though this seems counterintuitive, Soames (1987) and Salmon (1986) offer spirited defenses of this result. Soames (1987) also offers extremely compelling arguments against the view that propositions are unstructured sets of worlds (or circumstances). Some version of the Soames/Salmon view is widely considered to be the standard direct reference view in semantics. Views such as theirs, which make use of structured propositions and endorse direct reference for names, demonstratives and indexicals, are often called Russellian.

About the same time the new theory of reference was becoming prominent, quite different developments were taking place in semantics. In pioneering work first presented in the late 1960s (as the William James Lectures at Harvard; later published in Grice [1989] as Essay 2), Paul Grice sought to give a (somewhat) systematic account of (as we would now put it) how the production of a sentence with a certain semantic content can convey further information beyond its semantic content. To give an example from Grice, suppose A and B are planning their itinerary for a trip to France and both know A wants to visit C if doing so wouldn't take them too far out of their way. They have the following exchange:

A: Where does C live?

B: Somewhere in the south of France.

Since both are aware that B offered less information than is required for the purposes at hand, and since B can be presumed to be attempting to cooperate with A, B conveys that she doesn't know where C lives, though this is no part of the semantic content of the sentence she uttered. Grice gave an account of how such information (not part of the semantic content of any sentence asserted) can be conveyed. The account depended on the claim that conversational participants are all obeying certain principles in engaging in conversation. The main idea, as illustrated above, is that conversational participants are trying in some way to be cooperative, and so to contribute to the conversation at a given point what is required given the purpose and direction of the conversation. Grice's central theoretical idea was that certain types of information exchange and certain types of regularities in conversations don't have purely semantic explanations. The study of how information gets conveyed that goes beyond the semantic content of the sentences uttered falls in the field of pragmatics (which is why, though Grice's work is extremely important, it hasn't been discussed more in an entry on semantics).

In a series of papers that (for our purposes anyway) culminated in Stalnaker (1978), Robert Stalnaker, consciously following Grice, was concerned with ways in which in conversations information can be conveyed that goes beyond the semantic contents of sentences uttered as a result of conversational participants obeying certain principles governing conversation. More specifically, Stalnaker developed an account of how context of utterance and semantic contents of sentences (relative to those contexts) produced in those contexts can mutually influence each other.

Of course, how context influences the semantic content of sentences relative to those contexts was already fairly well understood. As discussed above, for example, context supplies the semantic values relative to those contexts for contextually sensitive expressions such as 'I.' Stalnaker sought to understand how the content relative to a context of a sentence uttered can affect the context. Stalnaker began by introducing the notion of speaker presupposition. Stalnaker understood the proposition expressed by a sentence (relative to a context) to be a set of possible worlds (the set of worlds in which the sentence taken in that context is true). Very roughly, the propositions a speaker presupposes in a conversation are those whose truth he takes for granted and whose truth he thinks the other participants take for granted too.

Consider now the set of possible worlds that are compatible with the speaker's presuppositions (the set of worlds in which every presupposed proposition is true). Stalnaker calls this the context set, and it is for him a central feature of a context in which a conversation occurs. (Strictly, every participant in the conversation has his own context set, but we will assume that these are all the same; Stalnaker calls this a non-defective context.) The contents of sentences (relative to a context) affect the context in the following way: if a sentence is asserted and accepted, then any world in the context set in which the sentence (taken in that context) is false is eliminated from the context set. In short, (accepted) assertions function to reduce the size of the context set, or eliminate live options.
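
The update effect of assertion can be sketched in a few lines (a minimal illustration under the set-of-worlds conception of propositions; the four worlds and the sample proposition are made up for the example).

```python
# Stalnaker's picture in miniature: propositions are sets of worlds, and
# an accepted assertion intersects the context set with the proposition.

all_worlds = {"w1", "w2", "w3", "w4"}
context_set = set(all_worlds)        # no presuppositions yet in force

def accept(context_set, proposition):
    """Eliminate worlds at which the accepted assertion is false."""
    return context_set & proposition

it_is_raining = {"w1", "w2"}         # true at w1 and w2 only

context_set = accept(context_set, it_is_raining)
print(sorted(context_set))           # ['w1', 'w2']: live options reduced
```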

Stalnaker uses this idea to explain a variety of phenomena, including how the utterance of sentences with trivial semantic content (relative to a context) can nonetheless be informative. It is important to see that Stalnaker, like Grice, took his account here to be not part of semantics, but rather to be something that presupposed the semantics of sentences (taken in contexts). In short, like Grice's work, it was work in pragmatics. However, Stalnaker's ideas that the information conveyed by the utterance of multiple sentences in a discourse can go beyond anything countenanced by traditional semantics and that it is important to understand the dynamics of conversation to understand how information is conveyed influenced others, who went on to develop semantic theories that capture the dynamics of conversation. (Lewis [1979] was another important early influence to the same effect.)

In the early 1980s, Irene Heim (1982) and Hans Kamp (1981) independently arrived at very similar semantic accounts that were intended to apply to multi-sentence discourses. Kamp's view is called Discourse Representation Theory (DRT), and Heim's view is sometimes called that as well, or File Change Semantics (FCS). To take a simple example of the sort that DRT and FCS were designed to handle, consider a (short) discourse such as:

20. Alan owns a donkey. He beats it.

Using Kamp's formulation, the discourse representation structure (DRS) associated with the first sentence of 20 would (roughly) look as follows:

x1 x2

x1=Alan

donkey(x2)

x1 owns x2

After the utterance of the second sentence, the DRS associated with the entire discourse would look as follows, where we have simply added one more line (a condition) to the DRS for the first sentence of 20 (we assume that 'He' is anaphoric on 'Alan' and 'it' on 'a donkey'):

x1 x2

x1=Alan

donkey(x2)

x1 owns x2

x1 beats x2

Note that expressions like 'a donkey' introduce variables (called discourse referents) and predicates ('donkey') into DRSs, and not existential quantifiers. Again very roughly, this DRS (and hence the original discourse) is true in a model iff there is an assignment to the variables of the DRS that results in all its conditions being true in that model. It is the requirement that there be such an assignment that results in default existential quantification of free variables. So though indefinites like 'a donkey' are not existential quantifiers on this view, they have existential force (in this case, anyway) due to default existential quantification of free variables. Aside from the desire to apply semantics at the level of the discourse instead of the sentence, much of the motivation for DRT and FCS came from cases such as 20 (and others) in which a pronoun is anaphoric on another expression (see entry on anaphora).
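
The truth conditions just described can be rendered as a short search for a verifying assignment (a rough sketch of the idea only; the two-object model is invented, and Kamp's official embedding definitions are more general).

```python
# A DRS as discourse referents plus conditions; it is true in a model
# iff some assignment of objects to the referents verifies every
# condition. The search is what gives default existential quantification.

from itertools import product

model = {
    "domain": {"Alan", "Daisy"},
    "donkey": {"Daisy"},
    "owns":   {("Alan", "Daisy")},
    "beats":  {("Alan", "Daisy")},
}

# The DRS for discourse 20 above.
referents = ["x1", "x2"]
conditions = [
    lambda g: g["x1"] == "Alan",
    lambda g: g["x2"] in model["donkey"],
    lambda g: (g["x1"], g["x2"]) in model["owns"],
    lambda g: (g["x1"], g["x2"]) in model["beats"],
]

def drs_true(referents, conditions, model):
    """True iff some assignment to the referents verifies all conditions."""
    for values in product(model["domain"], repeat=len(referents)):
        g = dict(zip(referents, values))
        if all(cond(g) for cond in conditions):
            return True
    return False

print(drs_true(referents, conditions, model))   # True: x1=Alan, x2=Daisy
```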

DRT and FCS led directly to the development of other semantic accounts designed to capture the dynamics of conversation. In the paper that initiated what is now often called dynamic semantics, Groenendijk and Stokhof (1991) make clear that they see their account as a descendant of DRT, and throughout the paper they compare their Dynamic Predicate Logic (DPL) account to DRT. The basic idea of DPL is that instead of thinking of expressions as having "static" meanings, we should think of meanings as things that, given inputs, produce outputs. A bit more formally, think of the meanings (in models) of formulae of first order logic as given by the sets of assignments to variables that satisfy the formulae. So, for example, the meaning of 'Fx' in a model M is the set of all assignments that assign to 'x' something in the extension of 'F' in M. Dynamic logic claims instead that the meaning of a formula of first order logic is a set of pairs of assignments: the first, the input assignment; the second, the output assignment. For "externally dynamic" expressions (e.g. conjunction, existential quantifiers), these can differ, and the result is that interpreting these expressions can affect how subsequent expressions get interpreted. For since the output assignments can differ from the input assignments for these dynamic expressions, and since the output of these expressions may be the input to subsequent expressions, the interpretation of those subsequent expressions may be affected.
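
The input/output idea can be sketched compactly. The following is a minimal illustration under invented assumptions (a two-object domain and a single predicate), not Groenendijk and Stokhof's official definitions; it shows how a binding set up by an existential quantifier persists beyond its syntactic scope.

```python
# Dynamic Predicate Logic in miniature: a formula denotes a relation
# between input and output assignments (here, generators of outputs).

DOMAIN = {"a", "b"}
DONKEY = {"b"}

def exists(var):
    """Existential quantifier: resets var, so output differs from input."""
    def run(g):
        for d in DOMAIN:
            yield {**g, var: d}
    return run

def test(pred):
    """Atomic formula: passes an assignment through or blocks it."""
    def run(g):
        if pred(g):
            yield g
    return run

def conj(phi, psi):
    """Dynamic conjunction: phi's output assignments are psi's inputs."""
    def run(g):
        for h in phi(g):
            yield from psi(h)
    return run

# 'There is a donkey' conjoined with a later test on the same variable:
formula = conj(exists("x"), test(lambda g: g["x"] in DONKEY))
print(list(formula({})))   # [{'x': 'b'}]: the output carries the binding
```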

There is currently much research being done within the framework of dynamic semantics, particularly among linguists. Muskens, van Benthem and Visser (1997) provide a good general overview.

There are many important topics in semantics that could not be covered in the present article. These include the theory of generalized quantifiers, the semantics of conditionals, the semantics of non-declarative sentences, the semantics of metaphor and two dimensional semantics. Interested readers are encouraged to pursue these matters on their own.

See also Carnap, Rudolf; Conversational Implicature; Davidson, Donald; Frege, Gottlob; Grice, Herbert Paul; Heidegger, Martin; Hempel, Carl Gustav; Hintikka, Jaakko; Kaplan, David; Kripke, Saul; Lewis, Clarence Irving; Lewis, David; Logical Positivism; Marcus, Ruth Barcan; Meaning; Modality, Philosophy and Metaphysics of; Montague, Richard; Pragmatics; Putnam, Hilary; Reference; Russell, Bertrand Arthur William; Syntax; Tarski, Alfred; Wittgenstein, Ludwig Josef Johann.

Bibliography

Ayer, A.J., ed. Logical Positivism. Glencoe, IL: Free Press, 1959.

Beaney, Michael, ed. The Frege Reader. Oxford U.K.; Cambridge, MA: Blackwell, 1997.

Bennett, Michael. "Some Extensions of a Montague Fragment." PhD diss., UCLA, 1974.

Carnap, Rudolf. "The Elimination of Metaphysics through The Logical Analysis of Language" (1932). In Logical Positivism, edited by A.J. Ayer. Glencoe, IL: Free Press, 1959.

Carnap, Rudolf. Meaning and Necessity: A Study in Semantics and Modal Logic. Chicago: University of Chicago Press, 1947.

Church, Alonzo. "A Formulation of the Logic of Sense and Denotation." In Structure, Method, and Meaning: Essays in Honor of Henry M. Sheffer. New York: Liberal Arts Press, 1951.

Church, Alonzo. "On Carnap's Analysis of Statements of Assertion and Belief." Analysis 10 (1950): 9799.

Davidson, Donald. "On Saying That." Synthese 19 (1968): 130146

Davidson, Donald. "Truth and Meaning." Synthese 17 (1967): 304323.

Frege, Gottlob. "Function and Concept" (1891). In The Frege Reader. Oxford U.K.; Cambridge, MA: Blackwell, 1997.

Frege, Gottlob. "On Concept and Object" (1892). In The Frege Reader. Oxford U.K.; Cambridge, MA: Blackwell, 1997.

Frege, Gottlob. "On Sense and Reference" (1892). In The Frege Reader. Oxford, U.K.; Cambridge, MA: Blackwell, 1997.

Grice, Paul. Studies in the Ways of Words. Cambridge, MA: Harvard University Press, 1989.

Groenendijk, J., and M. Stokhof. "Dynamic Predicate Logic." Linguistics and Philosophy 14 (1991): 39–100.

Heim, Irene. "The Semantics of Definite and Indefinite Noun Phrases." Doctoral thesis, University of Massachusetts, 1982.

Hempel, Carl. "The Empiricist Criterion of Meaning" (1950). In Logical Positivism. Glencoe, IL: Free Press, 1959.

Hintikka, Jaakko. "Modality and Quantification." Theoria 27 (1961): 110–128.

Hodges, Wilfrid. "Tarski's Truth Definitions." The Stanford Encyclopedia of Philosophy, winter 2001 ed. Available from http://plato.stanford.edu/archives/win2001/entries/tarski-truth/.

Kamp, Hans. "Formal Properties of 'Now.'" Theoria 37 (1971): 227273

Kamp, Hans. "A Theory of Truth and Semantic Representation." In Formal Methods in the Study of Language, edited by J. Groenendijk and M. Stokhof. Amsterdam: Mathematical Centre, 1981.

Kanger, Stig. Provability in Logic. Stockholm: Almqvist & Wiksell, 1957.

Kaplan, David. "Demonstratives." In Themes from Kaplan, edited by Joseph Almog, John Perry, and Howard Wettstein. New York: Oxford University Press, 1989.

King, Jeffrey C. "Tense, Modality, and Semantic Values." Philosophical Perspectives 17 (2003): 195–245.

Kripke, Saul. "A Completeness Theorem in Modal Logic." The Journal of Symbolic Logic 24 (1) (1959): 114.

Kripke, Saul. "Semantical Considerations on Modal Logic" (1963). In Reference and Modality, edited by Leonard Linsky. London: Oxford University Press, 1971.

Kripke, Saul. Naming and Necessity (1972). Cambridge, MA: Harvard University Press, 1980.

Lewis, C. I., and C. H. Langford. Symbolic Logic. New York: Century, 1932.

Lewis, David. "General Semantics." Synthese 22 (1970): 1867.

Lewis, David. "Scorekeeping in a Language Game." Journal of Philosophical Logic 8 (1979): 339359.

Montague, Richard. "Logical Necessity, Physical Necessity, Ethics, and Quantifiers" (1960a). In Formal Philosophy; Selected Papers of Richard Montague, edited by Richmond Thomason. New Haven, CT: Yale University Press, 1974.

Montague, Richard. "On the Nature of Certain Philosophical Entities" (1960b). In Formal Philosophy; Selected Papers of Richard Montague, edited by Richmond Thomason. New Haven CT: Yale University Press, 1974.

Montague, Richard. "The Proper Treatment of Quantification in Ordinary English" (1973). In Formal Philosophy; Selected Papers of Richard Montague, edited by Richmond Thomason. New Haven, CT: Yale University Press, 1974.

Muskens, Reinhard, Johan van Benthem, and Albert Visser. "Dynamics." In Handbook of Logic and Language, edited by Johan van Benthem and Alice ter Meulen. Cambridge, MA: MIT Press, 1997.

Russell, Bertrand. Principles of Mathematics. 2nd ed. Cambridge, U.K.: Cambridge University Press, 1903.

Salmon, Nathan. Frege's Puzzle. Cambridge, MA: MIT Press, 1986.

Soames, Scott. "Direct Reference, Propositional Attitudes, and Semantic Content." Philosophical Topics 15 (1987): 47–87.

Stalnaker, Robert. "Assertion" (1978). In Context and Content: Essays on Intentionality in Speech and Thought. New York: Oxford University Press, 1999.

Tarski, Alfred. "Der Wahrheitsbegriff in den formalisierten Sprachen." Studia Philosophica 1 (1935): 261–405.

Weinstein, Scott. "Truth and Demonstratives." Noûs 8 (1974): 179–184.

Jeffrey C. King (2005)

SEMANTICS

The term "semantics" came into general use in many disciplines during the 20th century. The word was first coined and used by Michael Bréal in 1883 to designate the study of the laws that govern changes in meaning. It is popularly used to mean a study designed to improve human relations by an understanding of the ways in which words can mean different things to different persons because of their various emotional and experiential backgrounds. This is called general semantics by the followers of Alfred Korzybski (d. 1950) and pragmatics by Charles Morris.

According to Morris, semantics is one of the three branches of semiotics, the science of signs, which consists of syntactics, semantics, and pragmatics. Each of these branches can be theoretical or empirical. Linguistic semantics is descriptive and empirical, attempting to discover from history and from social science the laws that govern changes in the meaning and structure of words. Pure semantics, as a branch of logic, discusses relationships between expressions and the objects they denote, with special emphasis on the problems of truth, denotation, and meaning. It is derived from the Vienna Circle and developed in Cambridge, England, thence going to America with Rudolf Carnap and Alfred Tarski, its chief exponents (see logical positivism). Bertrand Russell, while one of the originators, has since disavowed its extreme theories. Willard V. O. Quine and P. F. Strawson are interested in the denotation of words, but they have not developed the formalized system of Tarski and Carnap. John G. Kemeny is associated with semantics through his interest in symbolic logic (see logic, symbolic). Linguistic analysis, while related to semantics in its neopositivist assumptions and in its insistence that all problems in philosophy are due to confusion over terms, does not attempt a formalized language but prefers to discuss the meaning of words as used in ordinary language.

This article deals with pure semantics, discussing its origins; its chief concepts, such as antinomy, metalanguage, truth, description, and meaning; and its relation to traditional logic.

Antinomy and Metalanguage. The relationship between language and the objects denoted by it has been studied as far back as the time of Aristotle. As a separate discipline, however, pure semantics can be said to have begun only in the early 20th century, when Russell, in a 1902 letter to Gottlob Frege, formulated his antinomy of the class of classes. Other antinomies, both linguistic and mathematical, were soon pointed out; these showed weaknesses in a natural language, such as English, when it is used to discuss itself. Accordingly, language came to be intensively studied not merely as an instrument for other disciplines, but as itself an object of research. This led to the invention of formalized languages. In order to talk about a natural or object language, a stronger language is needed, a metalanguage, which in turn can be discussed only by a still stronger metalanguage. An adequately formed system of semantics therefore requires a hierarchy of formalized languages. For example, English could be the metalanguage of a simple calculus language, or object language, in which one designates capital letters to represent adjectives and small letters to represent nouns; French could then be the metalanguage of English, or a stronger formalized language could be developed.

When it was seen that metaphysical difficulties arise from the improper use of words or from an inadequate understanding of their use, interest centered in syntax. Logical syntax, as understood by Carnap, abstracts from both the object denoted and the thinking subject and lays down rules for the formation and transformation of linguistic entities. It soon became evident, however, that such matters as truth, denotation, and meaning cannot be discussed in a science of syntax that abstracts from objects.

Truth, Denotation, and Connotation. Tarski, maintaining that truth as such can be considered without involvement in contradictions, based his notion of truth on denotation and satisfaction. A meaning of a word can be either the objects denoted by the word or the notion or idea given by the word. The denotative or extensional meaning of the word "man" is all the men who have ever lived. This is called Bedeutung by Frege, denotation by J. S. Mill, extension in traditional logic, and reference by Max Black. The idea of man is given as rational animal and this is called Sinn (sense) by Frege, connotation by Mill, comprehension in traditional logic, and sense by Black. Current usage distinguishes between "essence" (e.g., rational animal) and "connotation," the essence plus emotions usually associated with the word used to designate it. An extensional definition of the word "horse" would be the listing of all horses ever existing; an intensional definition would be the notion or meaning of the concept of horse, and would be declared nonexistent by neopositivists. Two words can have the same denotation, while the comprehension or intensional meaning may vary. For example, morning star and evening star have the same denotation, Venus, but the connotation varies slightly.

Tarski uses the semantic concepts of denotation, satisfaction, and definition when attempting to formulate the conditions under which a sentence may be said to be true. Thus, "the author of Waverley" denotes Scott and "Scott" satisfies the sentential function "X is the author of Waverley." To define, for Tarski, means to uniquely determine; thus, the equation 2x = 1 uniquely determines the number 1/2. The word "true" does not apply to such relations, but rather expresses a property of certain expressions, especially of sentences.

Tarski says that he uses "true" in the sense of Aristotelian metaphysics: "To say of what is that it is not or of what is not that it is, is false, while to say of what is that it is or of what is not that it is not is true." In realist terms this would be: "The truth of a sentence consists in its agreement with (or correspondence to) reality." As a condition for the term "true" to be adequate, he adds that all the equivalences of the form "X is true, if and only if P" must logically follow and be able to be asserted. When one fills in values for "X" and "P," one gets a definition of the truth of one sentence; the general definition would then be a union of all partial truths, and would constitute the semantic concept of truth. For example, the sentence "snow is white" is true if, and only if, snow is white. The first occurrence of "snow is white" is the name (suppositio materialis in traditional logic) of the sentence following the biconditional, which asserts a matter of fact (suppositio formalis in traditional logic). Therefore, Tarski's definition of truth is simply this: a sentence is true if it is satisfied by all objects; otherwise it is false.
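
In modern notation the condition and the definition just described are often summarized schematically as follows (a schematic rendering, not Tarski's own symbolism; "X" stands in for a name of the sentence that "P" abbreviates):

```latex
% Convention T: every instance of this schema must follow from an
% adequate definition of the term "true":
X \text{ is true if and only if } P.
% Tarski's resulting definition of truth, via satisfaction:
\text{a sentence is true} \iff \text{it is satisfied by all objects.}
```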

Analytical Truth. While the semantic concept of truth is based on denotation, satisfaction, and extension, the concept of analytical truth is based on connotation, intension, and meaning. According to Immanuel Kant, a sentence whose predicate is contained in the meaning of the subject, and is noncontradictory, is analytically true. The sentences "Bachelors are unmarried men" and "Man is a rational animal" are analytically true. Synthetic propositions, also called propositions of fact, differ from these in that they depend on information from the physical world. An example would be: "It is raining." The negation of this sentence, if it is in fact raining, is synthetically false.

A sentence valid in all models (negative, disjunctive, and universal) is analytically true; a sentence valid in no model is analytically false or self-contradictory. By the principle of the excluded middle, a sentence is either analytically true or not analytically true; but a sentence that is not analytically true is not necessarily analytically false. It could be synthetically true or synthetically false, since a synthetic sentence is valid in some but not in all models.

There has been much discussion of the concept of analytical truth, especially by Quine. He questions the existence of any analytic sentences and maintains that all sentences are synthetic, or that the differences between the two types are minimal. For example, "The earth is a flat surface" could have been considered an analytically true sentence in 1450, while after 1492 it came to be considered analytically false.

Ambiguity and Description. Two minor problems associated with denotation concern ambiguity and description. An ambiguous term can be clearly true of one object and clearly false of others. Ambiguity causes variation in the truth value of a sentence due to variation in the circumstances of its utterance.

The denotation of a description presents a different problem. According to Russell, descriptions do not function as names nor do they require a name. A description is true only if there is but one denoted object. Descriptions, however, may denote the same object while being factually different. For example, Paul VI is denoted by the two descriptions: "the successor of Pope John XXIII" and "the Archbishop of Milan in 1960." Both of these must be factually ascertained.

Russell maintains that a description such as "Franklin Delano Roosevelt was the 32nd President of the United States" is really an abbreviation of: "There was one and only one man who was the thirty-second president of the United States, and Franklin Delano Roosevelt was this man." The first part of this sentence would be false if there were no man, or more than one man, satisfying the denotation. Alonzo Church, on the other hand, holds that a description is about two concepts, not about the object to which they refer. Thus, in the example above, the two concepts about Pope Paul VI are related to each other rather than to the person of this pope. All are agreed that descriptions can never be analytically true but must be factually verified, and, therefore, can be only synthetically true.

Meaning of Meaning. Another problem discussed by semanticists is that of the meaning of meaning. Carnap seems to hold that language, like mathematics, can be arbitrarily imposed; thus, for him, words have meanings and objects have names (therefore meanings) that are decreed by men. In this view, there is no meaning except by convention. Others have formulated alternative explanations, as summarized in the following list:

  1. Meaning is the object of which the sign is the name; this is denotative meaning.
  2. Meaning is an intrinsic property of objects, e.g., Mill's connotation.
  3. Meaning is an ideal object, as for Plato, or, as Edmund Husserl puts it, an ideal entity manifesting itself in intentional acts owing to a direct eidetic intuition of the essence of the thing.
  4. Meaning is the relation between words, e.g., the lexical meaning obtained by getting a synonym of the word.
  5. Meaning consists in the reaction to the word. William James called it the practical consequences of a thing in our future experience. For Ivan Pavlov, meaning is a reflex conditioning of the human organism to the sign or signal. According to C. S. S. Peirce, "to develop its [the word's] meaning, we have simply to determine what habit it produces."
  6. Meaning is the set of operations man can perform with the object, as in P. W. Bridgman's operationalism.
  7. Meaning is the total personal reaction to a thing. This is somewhat like 5 but stresses the personal reaction. It is the theory of the general semanticists, who hold, as does B. L. Whorf, that language molds the worldview of a people rather than vice versa.
  8. A direct relationship exists between thought and object; and an indirect relationship between word and object. Only when a thinker makes use of words do they stand for anything or have meaning. Therefore words have the meanings of the persons who use them. This is the theory of C. K. Ogden and I. A. Richards.

Traditional scholastic logic does not discuss meaning as such, but its detailed treatment of first and second intentions is quite relevant to modern discussions of the problem of meaning (see concept; intentionality). In fact, the medieval problem of universals seems once again to be revived by semanticists. Since for the most part they eliminate cognitive elements and favor a behavioristic interpretation of language, they seem to join medieval nominalists in holding that words are mere sounds, or flatus vocis (see nominalism). Yet some object to such a characterization, since semanticists do recognize a kind of personal meaning.

Evaluation. Pure semantics is an effort to solve philosophical problems by making language more precise. It is a modern form of nominalism to the extent that it regards language as arbitrary and erects its theory of metalanguage on this base. For most semanticists, names have no intrinsic relation to the objects they denote. While seeing quite correctly that many philosophical difficulties arise over meanings of words, these thinkers universalize the diagnosis and conclude that all philosophical problems can be solved by establishing a formalized language. Problems that cannot be so solved they regard as pseudo-problems.

Despite this narrowness of viewpoint, semanticists have made important contributions in the field of analytical philosophy, and in those areas of philosophical thought related to modern science and its methodology. Their sentential calculi and formalized languages have been particularly valuable in research involving digital computers, and in setting up computations for decision making (see cybernetics).

The problem of truth and that of the relationship existing between words and objects and between objects and thought, however, remain perennial problems in philosophy. The semantic movement, Anglo-Saxon in origin and positivist in inspiration, has attempted to bring scientific accuracy to their solution. Philosophers from other lands and with other orientations question, with good reason, whether rigorous language or vocabulary can of themselves provide lasting solutions to these problems.

See Also: analysis and synthesis; analytical philosophy.

Bibliography: r. carnap, Introduction to Semantics (Cambridge 1942); Meaning and Necessity: A Study in Semantics and Modal Logic (2d ed. Chicago 1956). h. feigl and w. sellars, eds., Readings in Philosophical Analysis (New York 1949). l. linsky, ed., Semantics and the Philosophy of Language (Urbana, Ill. 1952). c. w. morris, Foundations of the Theory of Signs (Chicago 1938); Signs, Language and Behavior (New York 1946). c. k. ogden and i. a. richards, The Meaning of Meaning: A Study of the Influence of Language upon Thought and of the Science of Symbolism (New York 1949). c. s. s. peirce, "Logic as Semiotic; The Theory of Signs," The Philosophical Writings of Peirce, ed. j. buchler (New York 1955) 98–119; Values in a Universe of Chance, ed. p. p. wiener (Stanford 1958). w. v. o. quine, From a Logical Point of View (Cambridge, Mass. 1953); Word and Object (Cambridge 1960). b. russell, An Inquiry into Meaning and Truth (London 1940). a. tarski, Logic, Semantics, Metamathematics: Papers from 1923 to 1938, tr. j. h. woodger (Oxford 1956). s. ullmann, The Principles of Semantics (Glasgow 1951). m. w. hess, "The Semantic Question," New Scholasticism 23 (1949) 186–206. j. a. oesterle, "The Problem of Meaning," Thomist 6 (July 1943) 180–229; "Another Approach to the Problem of Meaning," ibid. 7 (April 1944) 233–263.

[m. gorman]

SEMANTICS The study of MEANING. The term has at least five linked senses: (1) Sometimes semasiology. In LINGUISTICS, the study of the meaning of words and sentences, their denotations, connotations, implications, and ambiguities. The three levels or components of a common model of language are phonology, syntax, and semantics. (2) In philosophy, the study of logical expression and of the principles that determine the truth or falsehood of sentences. (3) In SEMIOTICS, the study of signs and what they refer to, and of responses to those signs. (4) In general usage, interest in the meanings of words, including their denotations, connotations, implications, and ambiguities. (5) Informally and often pejoratively, the making of (pedantic and impractical) distinctions about the meaning and use of words.

Background

The attempt to formulate a science of signs dates from the late 19c, when the French linguist Michel Bréal published Essai de sémantique (1897). He was interested in the influence of usage on the evolution of words and wished to extend the philological study of language (largely based on text and form) to include meaning. The historical study of meaning, however, is not currently central to the work of semanticists: see SEMANTIC CHANGE. Present-day semantic theory has developed largely from the later theories of the Swiss linguist Ferdinand de Saussure, who emphasized synchronic system and not diachronic evolution. Post-Saussurean semantics is the study of meaning as a branch of linguistics, like GRAMMAR and phonology. In its widest sense, it is concerned both with relations within language (sense) and relations between language and the world (reference). Generally, sense relations are associated with the word or lexical item/lexeme and with a lexical structure; their study is known as structural or lexical semantics. REFERENCE is concerned with the meaning of words, sentences, etc., in terms of the world of experience: the situations to which they refer or in which they occur.

Semantic fields

One approach has been the theory of semantic fields, developed by J. Trier and W. Porzig in 1934. It attempts to deal with words as related and contrasting members of a set: for example, the meaning of English colour words like red and blue, which can be stated in terms of their relations in the colour spectrum, which in turn can be compared with the colour words of other languages. Thus, there is no precise equivalent of blue in Russian, which has two terms, goluboy and siniy, usually translated as ‘light blue’ and ‘dark blue’. In Russian, these are treated as distinct colours and not shades of one colour, as users of English might suppose from their translation.

Sense relations

In addition to semantic fields and lexical sets, a number of different types of SENSE relation have been identified, some traditional, some recent: (1) Hyponymy. Inclusion or class membership: tulip and rose are HYPONYMS of flower, which is their hyperonym or superordinate term. In its turn, flower is a hyponym of plant. In ordinary language, however, words can seldom be arranged within the kinds of strict classification found in zoology or botany. For example, there are arguments about whether rhubarb is a vegetable or a fruit, and whether the tomato is a fruit or a vegetable. (2) SYNONYMY. Sameness of meaning: large is a SYNONYM of big. It is often maintained that there are no true synonyms in a language, but always some difference, of variety (AmE fall, BrE autumn), style (polite gentleman, colloquial BrE chap), emotive meaning (general politician, appreciative statesman), collocation (rancid modifying only bacon or butter). Partial or near synonymy is common, as with adult, ripe, and mature. (3) Antonymy. Oppositeness of meaning. There are, however, several types of opposite: wide/narrow and old/young are gradable both explicitly (X is wider than Y, A is older than B) and implicitly (a wide band is narrower than a narrow road). Such pairs allow for intermediate stages (neither wide nor narrow) and are ANTONYMS proper. Male/female and alive/dead are not usually gradable and allow for no intermediate stage, except in expressions such as more dead than alive. Such pairs are complementaries. Buy/sell and husband/wife are relational opposites (X sells to Y and Y buys from X; only a husband can have a wife, and vice versa). Such pairs are converses. (4) POLYSEMY or multiple meaning. The existence of two or more meanings or senses to one word: for example, flight defined in at least six different ways: the power of flying; the act of flying; an air journey; a series (of steps); fleeing; a unit in an air force. (5) Homonymy. Words different in meaning but identical in form: mail armour, mail post. It is not always easy to distinguish homonymy and polysemy, and dictionaries rely partly on etymology to help maintain the distinction. Ear (of corn) and ear (the organ) are examples of homonymy, because etymologically the former derives from Old English éar (husk) while the latter derives from Old English éare (ear). See HOMONYM, -ONYM.

Componential analysis

An approach which makes use of semantic components was first used by anthropologists in the analysis of kinship terms. Componential analysis seeks to deal with sense relations by means of a single set of constructs. Lexical items are analysed in terms of semantic features or sense components: for example, such sets as man/woman, bull/cow, ram/ewe have the proportional relationships man : woman :: bull : cow :: ram : ewe. Here, the components [male]/[female] and [human]/[bovine]/[ovine] may account for all the differences of meaning. Generally, components are treated as binary opposites distinguished by pluses and minuses: for example, [+male]/[−male] or [+female]/[−female] rather than simply [male]/[female]. It has been argued that projection rules can combine the semantic features of individual words to generate the meaning of an entire sentence, and to account for ambiguity (as in The bill is large) and anomaly (as in *He painted the walls with silent paint). There are complexities where the features are not simply additive but arranged in hierarchical structure: for example, in the proposal to analyse kill as [cause] [to become] [not alive]. It is controversial whether there is a finite set of such universal semantic components accounting for all languages and whether the components have conceptual reality.
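
The proportional relationships can be modelled directly as bundles of binary features (a toy sketch; the two-feature inventory is an assumption of the example, and real proposals use richer components such as [bovine] and [ovine]).

```python
# Componential analysis as binary feature bundles: the proportion
# man : woman :: bull : cow falls out as a constant feature difference.

features = {
    "man":   {"human": True,  "male": True},
    "woman": {"human": True,  "male": False},
    "bull":  {"human": False, "male": True},
    "cow":   {"human": False, "male": False},
}

def difference(w1, w2):
    """The features on which two lexical items disagree."""
    return {f for f in features[w1] if features[w1][f] != features[w2][f]}

print(difference("man", "woman"))   # {'male'}
print(difference("bull", "cow"))    # {'male'}: the same proportion
print(difference("man", "bull"))    # {'human'}
```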

Semantics and grammar

The meaning of a sentence is generally assumed to be derived from the meaning of its words, but it can be argued that we usually interpret whole sentences and that the sentence, not the word, is the basic unit of meaning, the meaning of words being derived from the meaning of sentences. This view is implicit in the referential theories of meaning discussed below. A distinction has been made by the British linguist John Lyons between sentence meaning and utterance meaning: sentence meaning is concerned with ‘literal’ meaning determined by the grammatical and lexical elements, unaffected by the context or what the speaker ‘meant’ to say.

Utterance meaning includes: (1) Presupposition. The statement The king of France is bald presupposes that there is a king of France, and the statement I regret that Mary came presupposes that Mary did come, but I believe that Mary came does not. What is presupposed in this sense is not asserted by the speaker but is nevertheless understood by the hearer. (2) Implicature. A term associated with H. P. Grice. The statement It's hot in here may imply the need to open a window, I tried to telephone John yesterday would normally suggest that I failed, and I've finished my homework (as a reply to Have you finished your homework and put your books away?) would suggest that the books have not been put away. Implicature is concerned with the various inferences we can make without actually being told, and includes presupposition. (3) Prosodic features. The use of stress and tone, as when He SAW Mary this morning means that he did not avoid her or telephone her, in contrast with He saw MARY this morning, rather than or in addition to anyone else. (4) Speech acts. Associated with J. L. Austin (How to do Things with Words, 1962). When a ship is launched with the words I name this ship…, the usage is not a statement of fact but an action. Similarly, I declare this meeting closed is the act of closing that meeting. Such speech acts, called performatives, cannot be said to be true or false. The notion of speech act can be extended to more common types of speech function: questions, orders, requests, statements, etc., and it is instructive to note that what appears to be a question may actually be a request: for example, Can you pass the salt?, where it would be inappropriate, though true, to reply Yes, of course I can without taking any action.

Reference

The place of reference in semantics is controversial. A problem with word meaning in terms of reference is that though words for objects may seem to denote, or refer to, objects (as with stone and house), other words (abstract nouns, verbs, and prepositions, etc.) do not seem to refer to anything. Many words are quite vague in their reference, with no clear dividing line between them (hill/mountain, river/stream/brook), and may be used for sets of objects that are very different in appearance (dog and table covering a wide range of animals and pieces of furniture). Referential meaning (usually of words but also of sentences) is sometimes known as cognitive meaning, as opposed to emotive or evaluative meaning. In traditional terms, this is the difference between denotation and connotation. Since there are theoretical problems with the concept of referential meaning (which seems inapplicable to abstract nouns, verbs, etc.), some scholars prefer the terms cognitive and affective. Thus, the pairs horse/steed, statesman/politician, and hide/conceal may be said to have the same cognitive meanings, but different affective meanings.

Approaches to meaning

The American linguist Leonard Bloomfield regarded meaning as a weak point in language study and believed that it could be wholly stated in behaviourist terms. Following the Polish anthropologist Bronislaw Malinowski, the British linguist J. R. Firth argued that context of situation was an important level of linguistic analysis alongside syntax, collocation, morphology, phonology, and phonetics, all making a contribution to linguistic meaning in a very wide sense. However, there have been few attempts to make practical use of that concept. Many scholars have therefore excluded reference from semantics. Thus, in transformational-generative grammar, the semantic component is entirely stated in terms of sense or semantic components, as described above in terms of componential analysis. Others have argued for a truth-conditional approach to semantics, in which the meaning of bachelor as 'unmarried man' is shown by the fact that if X is an unmarried man is true, then X is a bachelor is also true.

Pragmatics

Every aspect of meaning which cannot be stated in truth-conditional terms is PRAGMATICS; the distinction is close to that of sentence and utterance meaning. But there are problems with this distinction and with the exclusion of reference. Thus, such deictic relationships as here/there and this/that, and words such as today and the personal pronouns, appear to contribute to sentence meaning, yet depend for their interpretation on reference, which varies according to the identity of speaker and hearer and the time and place of the utterance.

Conclusion

There can be no single, simple approach to the study of semantics, because there are many aspects of meaning both within language and in the relation between language and the world. The complexity of semantics reflects the complexity of the use of human language.

See AMBIGUITY, COMMUNICATION, CONNOTATION AND DENOTATION, CONTEXT, LANGUAGE CHANGE, LEVEL OF LANGUAGE, LEXICOGRAPHY, LOGIC, SIGN, SLANG, STRESS, SYMBOL, TONE.

SEMANTICS conveniently divides into two branches, the theory of designation and/or denotation and the theory of meaning. The former constitutes extensional, the latter intensional semantics. Both branches are thus parts of the modern trivium of syntax, semantics, and pragmatics, which is often called "logical semiotics" for short. Semiotics is in fact modern logic in full dress, and is thought by many, especially perhaps at Oxford University, to occupy a central place in the study of the liberal arts. Syntax is the theory of signs as such and how they are interrelated to form longer signs, phrases, sentences, texts, and so on. In semantics, signs are interrelated in one way or another with the objects for which they stand. And in pragmatics the user of language is brought in fundamentally, as well as the various relations that he or she bears to signs and combinations of signs in particular occasions of use.

Signs are often understood in a broader, nonlinguistic sense to allow for "natural" signs, human artifacts, and the like. Thus a weathercock is a sign that the wind is blowing in a certain direction, smoke is a sign of fire, a stop sign on the highway is a sign to the driver, and so on. The study of nonlinguistic signs harks back to the medieval period and in the nineteenth century was given a considerable boost by the work of the American philosopher C. S. Peirce. Even so, it has not yet achieved the exactitude of logical semiotics and, pending such a development, remains somewhat controversial.

Designation is the fundamental relation between a sign and what it stands for. In the theory of meaning, much more is taken into account. Thus, in Frege's famous example, the phrases "the morning star" and "the evening star" designate the same object, the planet Venus, but differ considerably in meaning. What is meaning? No easy answer is forthcoming. In any adequate theory of it, however, account should surely be taken of the contexts, linguistic and nonlinguistic alike, in which signs or expressions are used, including, where needed, reference to the user.

A detailed history of semantical concepts, and of the broader domain of semiotical concepts, has not yet been written. Especially important here is the material in book 2 of Augustine's On Christian Doctrine and book 4 of Peter Lombard's Book of Sentences that sustains the doctrine of sacramental theology even to the present day. The contributions of the Scholastic logicians also constitute a rich mine of material that has not yet been sufficiently studied from a modern point of view. Logical semiotics, including semantics, has an important role to play in the study of the languages of theology, both those of fundamental theory and of particular religions.

Bibliography

Bochenski, Joseph M. A History of Formal Logic. Notre Dame, Ind., 1961.

Deely, John. Introducing Semiotic: Its History and Doctrine. Bloomington, Ind., 1982.

Martin, R. M. Truth and Denotation: A Study in Semantical Theory. Chicago, 1958.

Martin, R. M. Semiotics and Linguistic Structure. Albany, N.Y., 1978.

New Sources

Bekkum, Wout Jac. van, Jan Houben, Ineke Sluiter, and Kees Versteegh, eds. The Emergence of Semantics in Four Linguistic Traditions: Hebrew, Sanskrit, Greek, Arabic. Amsterdam, 1997.

R. M. Martin (1987)

Revised Bibliography

se·man·tics / səˈmantiks/ • pl. n. [usu. treated as sing.] the branch of linguistics and logic concerned with meaning. There are a number of branches and subbranches of semantics, including formal semantics, which studies the logical aspects of meaning, such as sense, reference, implication, and logical form; lexical semantics, which studies word meanings and word relations; and conceptual semantics, which studies the cognitive structure of meaning. ∎ the meaning of a word, phrase, sentence, or text: such quibbling over semantics may seem petty stuff. DERIVATIVES: se·man·ti·cian / ˌsēmanˈtishən/ n. se·man·ti·cist n.

semantics That branch of the study of symbols which deals primarily with the development of the meaning of words. Sometimes viewed as a branch of linguistics, sometimes as a sister discipline, semantics attempts to study the attribution of meaning to words, and how these are combined to produce complex meaningful utterances; the nature of meaning itself; and the difficulties people experience when meaning is confused or distorted. Semantics is a background influence in areas such as ethnomethodology and post-structuralism. See also MEAD, G. H.; PIAGET, JEAN.

semantics That part of the definition of a language concerned with specifying the meaning or effect of a text that is constructed according to the syntax rules of the language. See also denotational semantics, operational semantics, axiomatic semantics, interpretation.

semantics Branch of linguistics and philosophy concerned with the study of meaning. In historical linguistics, it generally refers to the analysis of how the meanings of words change over time. In modern linguistics and philosophy, semantics seeks to assess the contribution of word-meaning to the meanings of phrases and sentences, and to comprehend the relationship among and between words and the things they refer to, or stand for.