Logic, Non-Classical
The purpose of this entry is to survey those modern logics that are often called "non-classical," classical logic being the theory of validity concerning truth functions and first-order quantifiers likely to be found in introductory textbooks of formal logic at the end of the twentieth century.
For the sake of uniformity I will give a model-theoretic account of the logics. All of the logics also have proof-theoretic characterizations, and in some cases (such as linear logic) these characterizations are somewhat more natural. I will not discuss combinatory logic, which is not so much a non-classical logic as it is a way of expressing inferences that may be deployed for both classical and non-classical logics. I will use A, B, … for arbitrary sentences; ∧, ∨, ¬, and →, for the standard conjunction, disjunction, negation, and conditional operators for whichever logic is at issue. "Iff" means "if and only if." For references see the last section of this article.
Extensions versus Rivals
An important distinction is that between those non-classical logics that take classical logic to be all right as far as it goes, but to need extension by the addition of new connectives, and those which take classical logic to be incorrect, even for the connectives it employs. Call the former extensions of classical logic, and the latter rivals. Thus modal logics, as now usually conceived, are extensions of classical logic. They agree with classical logic on the extensional connectives (and quantifiers if these are present) but augment them with modal operators. By contrast, intuitionist and relevant logics are more plausibly thought of as rivals. Thus A ∨¬A is valid in classical logic but not intuitionist logic, and A →(B →A ) is valid in classical logic but not relevant logic.
The distinction must be handled with care however. Modern modal logics can be formulated, not with the modal operators, but with the strict conditional, ⥽ (from which modal operators can be defined), as primitive; and A ⥽(B ⥽A ) is not valid. From this perspective modal logic is a rival to classical logic (which is the way it was originally intended). Similarly it is (arguably) possible to add a negation operator, $, to relevant logics which behaves as does classical negation. Classical logic is, then, just a part of this logic, identifying the classical ¬A and A →B with the relevant $A and $A ∨ B, respectively. From this perspective, in a relevant logic, → and ¬ are operators additional to the classical ones, and relevant logic is an extension of classical logic.
What these examples show is that whether or not something is an extension or a rival of classical logic is not a purely formal matter but a matter of how the logic is taken to be applied to informal reasoning. If, in a modal logic, one reads A ⥽ B as "if A then B " then the logic is a rival of classical logic. If one reads A →B as "if A then B " and A ⥽B as "necessarily, if A then B," it is an extension. If, in a relevant logic, one reads A →B as "if A then B," and ¬A as "it is not the case that A," the logic is a rival to classical logic; if one reads $A ∨ B as "if A then B " and $A as "it is not the case that A," it is an extension. (The examples also raise substantial philosophical issues. Thus both a relevant logician and an intuitionist are liable to deny that $ is a connective with any determinate meaning.)
Many-Valued Logics
A central feature of classical logic is its bivalence. Every sentence is exclusively either true (1) or false (0). In many-valued logics, normally thought of as rivals to classical logic, there are more than two semantic values. Truth-functionality is, however, maintained; thus the value of a compound formula is determined by the values of its components. Some of the semantic values are designated, and a valid inference is one in which, whenever the premises are designated, so is the conclusion.
A simple example of a many-valued logic is that in which there are three truth values, 1, i, 0; and the truth functions for the standard connectives may be depicted as follows:

 ¬
*1   0
 i   i
 0   1

 ∧   1   i   0
*1   1   i   0
 i   i   i   0
 0   0   0   0

 ∨   1   i   0
*1   1   1   1
 i   1   i   i
 0   1   i   0

 →   1   i   0
*1   1   i   0
 i   1   1   i
 0   1   1   1

The only designated value is 1 (which is what the asterisk indicates). This is the Łukasiewicz 3-valued logic, Ł3. If the middle value of the table for → is changed from 1 to i we get the Kleene 3-valued logic K 3. The standard interpretation for i in this logic is neither true nor false. If in addition i is added as a designated value, we get the paraconsistent logic LP. The standard interpretation for i in this is both true and false.
Ł3 can be generalized to a logic, Łn, with n values, for any finite n, and even to one with infinitely many values. Thus the continuum-valued Łukasiewicz logic, Łℵ, has as semantic values all real numbers between 0 and 1 (inclusive). Normally only 1 is designated. If we write the value of A as ν(A ), ν(A ∨B ) and ν(A ∧B ) are the maximum and minimum of ν(A ) and ν(B ), respectively; ν(¬A )=1-ν(A ); ν(A →B )=1 if ν(A )≤ν(B ) and ν(A →B )=1-(ν(A )-ν(B )) otherwise. Standardly the semantic values are thought of as degrees of truth (so that 1 is completely true ). Interpreted in this way Łℵ is one of a family of many-valued logics called fuzzy logics.
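The following is a minimal computational sketch (in Python; the function names and the choice of 0.5 for i are illustrative conventions, not part of any standard presentation) of the continuum-valued connectives just described. Ł3 is recovered by restricting the values to 0, 0.5, and 1.

```python
# Lukasiewicz continuum-valued connectives: semantic values are reals in [0, 1].
# L3 is the special case in which values are restricted to {0, 0.5, 1},
# reading the middle value i as 0.5.

def neg(a):
    return 1 - a                       # v(not-A) = 1 - v(A)

def conj(a, b):
    return min(a, b)                   # v(A and B) = min

def disj(a, b):
    return max(a, b)                   # v(A or B) = max

def cond(a, b):
    # v(A -> B) = 1 if v(A) <= v(B), and 1 - (v(A) - v(B)) otherwise
    return 1.0 if a <= b else 1 - (a - b)

def designated(a, designated_values=frozenset({1})):
    # Only 1 is designated in L3 and the continuum-valued logic; adding 0.5
    # (the value i) to the designated set turns L3 into the paraconsistent LP.
    return a in designated_values

# A or not-A is not a tautology: with v(A) = 0.5 it takes the undesignated value 0.5.
a = 0.5
print(disj(a, neg(a)))   # 0.5
```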
Modal Logics
Another family of non-classical logics maintains bivalence, but rejects truth-functionality. Modal logics augment the connectives of classical logic with the operators □ (it is necessarily the case) and ♢ (it is possibly the case). The truth-values of □A and ♢A depend on more than just the truth value of A.
Standard semantics for modal logics invoke a set of (possible) worlds, augmented with a binary relation, R. wRw′ means, intuitively, that from the state of affairs as it is at w, the state of affairs as it is at w′ is possible. (In first-order modal logics each world comes also with a domain of quantification.) The extensional connectives are given their usual truth conditions with respect to a world, but if we write the value of A at world w as νw (A ):
νw (□A )=1, iff for all w′ such that wRw′, νw ′(A )=1
νw (♢A )=1, iff for some w′ such that wRw′, νw ′(A )=1
Validity is defined in terms of truth preservation at all worlds. (This is for normal modal logics. Non-normal modal logics also have a class of non-normal worlds, at which the truth conditions of the modal operators are different.)
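As an illustration, here is a minimal sketch (in Python) of these truth conditions over a finite frame; the representation of formulas as nested tuples and all names are assumptions made for the example only.

```python
# A finite Kripke model: a set of worlds, an accessibility relation R
# (a set of pairs), and a valuation for the atoms at each world.
WORLDS = {'w1', 'w2'}
R = {('w1', 'w2')}                               # w1 "sees" only w2
VAL = {('p', 'w1'): False, ('p', 'w2'): True}

def holds(formula, w):
    """Evaluate a formula at world w.  Formulas are nested tuples,
    e.g. ('box', 'p') or ('not', ('diamond', 'p'))."""
    if isinstance(formula, str):                 # atomic sentence
        return VAL.get((formula, w), False)
    op = formula[0]
    if op == 'not':
        return not holds(formula[1], w)
    if op == 'and':
        return holds(formula[1], w) and holds(formula[2], w)
    if op == 'or':
        return holds(formula[1], w) or holds(formula[2], w)
    if op == 'box':        # true at w iff A is true at every world w sees
        return all(holds(formula[1], v) for (u, v) in R if u == w)
    if op == 'diamond':    # true at w iff A is true at some world w sees
        return any(holds(formula[1], v) for (u, v) in R if u == w)
    raise ValueError(op)

# R is not reflexive, so the T-principle box A -> A can fail:
print(holds(('box', 'p'), 'w1'))   # True: p holds at every world w1 sees
print(holds('p', 'w1'))            # False: yet p fails at w1 itself
```

The tense operators G and H discussed below can be handled in exactly the same way, using R for G and its converse for H.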
Different modal logics are obtained by putting constraints on R. If R is arbitrary we have the system K. If it is reflexive (validating □A →A ), we have T ; if transitivity is also required (validating □A →□□A ), we have S 4; if symmetry is added (validating A →□♢A ), we have S 5. (Alternatively, in this case, R may be universal: For all w and w′, wRw′.) If we have just the condition that every world is related to some world or other (validating □A →♢A ), we have D.
The notion of possibility is highly ambiguous (logical, physical, epistemic, etc.). Arguably, different constraints on R are appropriate for different notions.
Intensional Logics
World semantics have turned out to be one of the most versatile techniques in contemporary logic. Generally speaking, logics that have world-semantics are called intensional logics (and are normally thought of as extensions of classical logic). There are many of these in addition to standard modal logics.
□ may be interpreted as "it is known that", in which context it is usually written as K and the logic is called epistemic logic. (The most plausible epistemic logic is T.) It may be interpreted as "it is believed that," in which case it is usually written as B, and the logic is called doxastic logic. (Though even the logic K seems rather too strong here, except as an idealization to logically omniscient beings.) □ may be interpreted as "it is obligatory to bring it about that," in which case it is written as O, and the logic is called deontic logic. The standard deontic logic is D.
One can also interpret □ as "it is provable that." The best-known system in this regard is usually known as GL and called provability logic. This logic imposes just two constraints on the accessibility relation. One is transitivity; the other is that there are no infinite R -chains, that is, no sequences of the form w 0Rw 1, w 1Rw 2, w 2Rw 3, … This constraint verifies the principle □(□A →A )→□A, but not □A →A. The interest of this system lies in its close connection with the way that a provability predicate, Prov, works in standard systems of formal arithmetic. By Gödel's second incompleteness theorem, in such logics one cannot, in general, prove Prov (〈A 〉) → A (where 〈A 〉 is the numeral for the Gödel number of A ); but Löb's theorem assures us that if we can prove Prov (〈A 〉) → A we can prove A, and so Prov (〈A 〉). It is this idea that is captured in the characteristic principle of GL.
Another possibility is to interpret □ and ♢ as, respectively, 'it will always be the case that,' and 'it will be the case at some time that.' In this context the operators are normally written as G and F, and the logic is called tense logic. In the world-semantics for tense logics, worlds are thought of as times, and the accessibility relation, R, is interpreted as a temporal ordering. In these logics there are also past-tense operators: H and P ("it has always been the case that" and "it was the case at some time that," respectively). These are given the reverse truth conditions. Thus for example:
νw(HA )=1, iff for all w′ such that w′Rw, νw ′(A )=1
The past and future tense operators interact in characteristic ways (e.g., A →HFA is logically valid). The basic tense logic, Kt, is that obtained when R is arbitrary. As with modal logics, stronger systems are obtained by adding constraints on R, which can now represent the ideas that time is dense, has no last moment, and so on.
Of course it is not necessary to have just one family of intensional operators in a formal language: One can have, for example, modal and tense operators together. Each family will have its own accessibility relation, and these may interact in appropriate ways. Systems of logic with more than one family of modal operators are called multi-modal. One of the most important multi-modal logics is dynamic logic. In this there are operators of the form [α] and 〈α〉, each with its own accessibility relation, R α. In the semantics of dynamic logic, the worlds are thought of as states of affairs or of a computational device. The αs are thought of as (non-deterministic) actions or programs, and wR αw′ is interpreted to mean that starting in state w and performing the action α (or running the program α) can take one to the state w′. Thus [α] A (〈α〉A ) holds at state w, just if performing α at w will always (may sometimes) lead to a state in which A holds. The actions themselves are closed under certain operations. In particular, if α and β are actions, so are α;β (perform α and then perform β); α∪β (perform α or perform β, non-deterministically); α* (perform α some finite number of times, non-deterministically). There is also an operator, ? ("test whether"), which takes sentences into programs. The corresponding accessibility relations are: xR α;βy iff for some z, xR αz and zR βy ; xR α∪βy iff xR αy or xR βy ; xR α*y iff for some x0, x1, …, xn (with x=x0 and xn=y), x0Rαx1, x1Rαx2, …, xn-1Rαxn; xR A?y iff (x=y and νx(A)=1). Because of the * operator, dynamic logic can express the notion of finitude in a certain sense. This gives it some of the expressive strength of second-order logic.
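The clauses for the complex programs can be computed directly when the state space is finite. The following sketch (in Python; the particular states and the single atomic program are invented for illustration) builds the accessibility relations for α;β, α∪β, α*, and A? as operations on sets of pairs.

```python
# States of the device and the accessibility relation of an atomic program,
# both represented extensionally; the particular values are illustrative.
STATES = {0, 1, 2}
R_A = {(0, 1), (1, 2)}                # the atomic program 'a'

def compose(r, s):                    # R for "alpha; beta"
    return {(x, y) for (x, z1) in r for (z2, y) in s if z1 == z2}

def union(r, s):                      # R for "alpha U beta"
    return r | s

def star(r):                          # R for "alpha*": zero or more runs of alpha
    closure = {(x, x) for x in STATES}
    frontier = set(r)
    while frontier:
        closure |= frontier
        frontier = compose(frontier, r) - closure
    return closure

def test(holds_at):                   # R for "A?": loop on the states where A holds
    return {(x, x) for x in STATES if holds_at(x)}

def box(r, holds_at, x):              # [alpha]A at x: A holds wherever alpha can lead
    return all(holds_at(y) for (u, y) in r if u == x)

print(sorted(star(R_A)))                      # pairs reachable by running 'a' repeatedly
print(box(R_A, lambda s: s >= 1, 0))          # True:  one run of 'a' from state 0
print(box(star(R_A), lambda s: s >= 1, 0))    # False: zero runs leaves us at state 0
```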
Conditional Logics
Another family of logics of the intensional variety was triggered by some apparent counter-examples to the following inferences:
A →B ⊦ (A ∧C)→B
A →B, B →C ⊦ A →C
A →B ⊦ ¬B →¬A
which are valid for the material conditional. (For example: "If you strike this match it will light; hence if you strike this match and it is under water it will light.") Logics of the conditional that invalidate such principles are called conditional logics. Such logics add an intensional conditional operator, >, to the language. In the semantics there is an accessibility relation, RA, for every sentence, A (or one, RX, for every proposition, that is, set of worlds, X ). Intuitively wRAw′ iff w′ is a world at which A holds but which is, ceteris paribus, the same as w. The truth conditions for > are:
νw (A >B )=1 iff for all w′ such that wRA w′, νw ′(B )=1
The intuitive meaning of R motivates the following constraints:
if wRAw′ then νw ′(A )=1
if νw (A )=1, then wRAw
Stronger logics in the family are obtained by adding further constraints to the accessibility relations. A standard way of specifying these is in terms of "similarity spheres"—neighbourhoods of a world containing those worlds that have a certain degree of similarity to it.
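A minimal sketch of these truth conditions (in Python; the two worlds, the atoms, and the particular accessibility relations are all invented for the example) shows how strengthening the antecedent fails:

```python
# Two worlds and a valuation for three atoms; names are illustrative.
VAL = {
    'w0': {'strike': True, 'water': False, 'light': True},
    'w1': {'strike': True, 'water': True,  'light': False},
}

# Accessibility is indexed by the antecedent: R[ant][w] is the set of worlds
# at which ant holds but which are, ceteris paribus, the same as w.
R = {
    'strike':           {'w0': {'w0'}, 'w1': {'w1'}},
    'strike_and_water': {'w0': {'w1'}, 'w1': {'w1'}},
}

def conditional(antecedent, consequent, w):
    """A > B holds at w iff B holds at every antecedent-accessible world."""
    return all(VAL[v][consequent] for v in R[antecedent][w])

# "If you strike this match it will light" holds at w0 ...
print(conditional('strike', 'light', 'w0'))             # True
# ... but "if you strike it and it is under water it will light" does not:
print(conditional('strike_and_water', 'light', 'w0'))   # False
```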
The natural way of taking a conditional logic is as a rival to classical logic (giving a different account of the conditional). Some philosophers, however, distinguish between indicative conditionals and subjunctive/counterfactual conditionals. They take the indicative conditional to be the material conditional of classical logic, and > to be the subjunctive conditional. Looked at this way conditional logics can be thought of as extensions of classical logic.
Intuitionist Logic
There are a number of other important non-classical logics that, though not presented originally as intensional logics, can be given world semantics. One of these is intuitionist logic. This logic arose out of a critique of Platonism in the philosophy of mathematics. The idea is that one cannot define truth in mathematics in terms of correspondence with some objective realm, as in a traditional approach. Rather one has to define it in terms of what can be proved, where a proof is something that one can effectively recognize as such. Thus, semantically, one has to replace standard truth-conditions with proof-conditions, of the following kind:
A ∨B is provable when A is provable or B is provable.
¬A is provable when it is provable that there is no proof of A.
∃xA (x ) is provable when we can effectively find an object, n, such that A(n) is provable.
Note that in the case of negation we cannot say that ¬A is provable when A is not provable: We have no effective way of recognizing what is not provable; similarly, in the case of the existential quantifier, we cannot say that ∃xA (x ) is provable when there is some n such that A(n) is provable: we may have no effective way of knowing whether this obtains.
Proceeding in this way produces a logic that invalidates a number of the principles of inference that are valid in classical logic. Notable examples are: A ∨¬A, ¬¬A →A, ¬∀xA (x )→∃x ¬A (x ). For the first of these, there is no reason to suppose that for any A we can find a proof of A or a proof that there is no proof of A. For the last, the fact that we can show that there is no proof of ∀xA (x ) does not mean that we can effectively find an n such that A(n) can be proved.
In the world-semantics for intuitionist logic, interpretations have essentially the structure of an S 4 interpretation. The worlds are interpreted as states of information (things proved), and the accessibility relation represents the acquisition of new proofs. We also require that if νw(A )=1 and wRw′, νw′(A )=1 (no information is lost), and if x is in the domain of quantification of w and wRw′ then x is in the domain of quantification of w′ (no objects are lost). Corresponding to the provability conditions we have:
νw (A ∨B )=1 iff νw(A )=1 or νw(B )=1
νw (¬A )=1 iff for all w′ such that wRw′, νw′(A )=0
νw (∃xA (x ))=1 iff for some n in the domain of w, νw (A (n ))=1
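Here is a minimal sketch of the propositional part of this semantics (in Python; the two-state model and the tuple representation of formulas are invented for the example, and the clause used for →, not stated above, is the standard one: A →B holds at w iff B holds at every accessible state at which A holds).

```python
# A two-state Kripke model for propositional intuitionist logic.
WORLDS = {'w0', 'w1'}
R = {('w0', 'w0'), ('w0', 'w1'), ('w1', 'w1')}   # reflexive and transitive
VAL = {('p', 'w1')}                              # p is proved only at the later
                                                 # state w1 (heredity is respected)

def later(w):
    return {v for (u, v) in R if u == w}

def holds(formula, w):
    """Formulas as nested tuples, e.g. ('or', 'p', ('not', 'p'))."""
    if isinstance(formula, str):
        return (formula, w) in VAL
    op = formula[0]
    if op == 'and':
        return holds(formula[1], w) and holds(formula[2], w)
    if op == 'or':
        return holds(formula[1], w) or holds(formula[2], w)
    if op == 'not':      # true at w iff the negand fails at every accessible state
        return all(not holds(formula[1], v) for v in later(w))
    if op == 'imp':      # true at w iff B holds at every accessible state where A holds
        return all(holds(formula[2], v) for v in later(w) if holds(formula[1], v))
    raise ValueError(op)

# Excluded middle fails at w0: p is not yet proved there, but not refuted either.
print(holds(('or', 'p', ('not', 'p')), 'w0'))    # False
print(holds(('or', 'p', ('not', 'p')), 'w1'))    # True
```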
Unsurprisingly, given the above semantics, there is a translation of the language of intuitionism into quantified S 4 that preserves validity.
Another sort of semantics for intuitionism takes semantic values to be the open sets of some topology. If the value of A is x, the value of ¬A is the interior of the complement of x.
Relevant Logic
Another logic standardly thought of as a rival to classical logic is relevant (or relevance ) logic. This is motivated by the apparent incorrectness of classical validities such as: A →(B →B ), (A ∧¬A )→B. A (propositional) relevant logic is one in which, if A →B is a logical truth, A and B share a propositional parameter. There are a number of different kinds of relevant logic, but the most common has a world-semantics. The semantics differs in two major ways from the world semantics we have so far met.
First it adds to the possible worlds a class of logically impossible worlds. (Though validity is still defined in terms of truth-preservation over possible worlds.) In possible worlds the truth conditions of → are as for ⥽ in S 5:
νw (A →B )=1 iff for all w′ (possible and impossible) such that νw ′(A )=1, νw ′(B )=1
In impossible worlds the truth conditions are given differently, in such a way that logical laws such as B →B may fail at such worlds. This may be done in various ways, but the most versatile technique employs a three-place relation, S, on worlds. If w is impossible, we then have:
νw (A →B )=1 iff for all x,y such that Swxy, if νx (A )=1, νy (B )=1
This clause can be taken to state the truth conditions of → at all worlds, provided that we add the constraint that, for possible w, Swxy iff x =y. With no other constraints on S, this gives the basic (positive) relevant logic, B. Additional constraints on S give stronger logics in the family. Typical constraints are:
∃x (Sabx and Sxcd )⇒∃y (Sacy and Sbyd )
Sabc ⇒Sbac
Sabc ⇒∃x (Sabx and Sxbc )
Adding all three gives the (positive) relevant logic, R. Adding the first two gives RW, R minus Contraction (A →(A →B )⊦A →B ). The intuitive meaning of S is, at the time of this writing, philosophically moot.
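The following sketch (in Python; the two-world model, the arbitrary triple at the impossible world, and all names are invented for illustration) shows how the ternary clause lets B →B fail at an impossible world, and thereby blocks irrelevant validities such as A →(B →B ), while B →B itself remains true at the possible world.

```python
# Worlds: 'p' is possible, 'i' is logically impossible.
POSSIBLE = {'p'}
WORLDS = {'p', 'i'}
VAL = {('B', 'p'), ('A', 'i')}            # B holds at p; A holds only at i

# The ternary relation S.  At possible worlds the constraint is: Swxy iff x == y.
S = {(w, x, x) for w in POSSIBLE for x in WORLDS}
S |= {('i', 'p', 'i')}                    # one arbitrary triple at the impossible world

def holds(formula, w):
    if isinstance(formula, str):          # atomic sentence
        return (formula, w) in VAL
    _, a, b = formula                     # ('arrow', A, B)
    # A -> B at w: for all x, y such that Swxy, if A holds at x, B holds at y.
    return all(holds(b, y) for (u, x, y) in S if u == w and holds(a, x))

BB = ('arrow', 'B', 'B')
print(holds(BB, 'p'))                     # True:  B -> B holds at the possible world
print(holds(BB, 'i'))                     # False: but it fails at the impossible one,
print(holds(('arrow', 'A', BB), 'p'))     # False: which is why A -> (B -> B) is not valid
```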
The second novelty of the semantics is in its treatment of negation. It is necessary to arrange for worlds where A ∧¬A may hold. This may be done in a couple of ways. The first is to employ the Routley * operator. Each world, w, comes with a "mate," w * (subject to the constraint that w **=w, to give Double Negation). We then have:
νw (¬A )=1 iff νw *(A )=0
(If w =w *, this just delivers the classical truth conditions.) Alternatively, we may move to a four-valued logic in which the values at a world are true only, false only, both, neither ({1}, {0}, {1,0}, ∅). We then have:
1∈νw (¬A ) iff 0∈νw (A )
0∈νw (¬A ) iff 1∈νw (A )
The semantics of relevant logic can be extended to produce a (relevant) ceteris paribus conditional, >, of the kind found in conditional logics, by adding the appropriate binary accessibility relations.
Distribution-Free Logics
There are some logics in the family of relevant logics for which the principle of Distribution, A ∧(B ∨C)⊦(A ∧B )∨(A ∧C), fails. To achieve this the truth conditions for disjunction have to be changed. In an interpretation, let [A ] be the set of worlds at which A holds. Then the usual truth conditions for disjunction can be written:
νw (A ∨B )=1 iff w ∈[A ] ∪[B ]
To invalidate Distribution, the semantics are augmented by a closure operator, 𝕮, on sets of worlds, X, satisfying the following conditions:
X ⊆𝕮(X )
𝕮(𝕮(X ))=𝕮(X )
if X ⊆Y then 𝕮(X )⊆𝕮(Y )
The truth conditions of disjunction can now be given as:
νw (A ∨B )=1 iff w ∈𝕮([A ] ∪[B ])
Changing the truth conditions for disjunction in RW in this way (and using the Routley * for negation) gives linear logic (LL ). LL is usually formulated with some extra intensional connectives, especially an intensional conjunction and disjunction. These connectives can be present in standard relevant logics too. Intuitionist, relevant, and linear logics all belong to the family of substructural logics. Proof-theoretically, these logics can be obtained from a sequent-calculus for classical logic by weakening the structural rules (especially Weakening and Contraction).
Another logic in which distribution fails is quantum logic. The thought here is that it may be true (verifiable) of a particle that it has a position and one of a range of momenta, but each disjunct attributing to it that position and a particular momentum is false (unverifiable). The states of a quantum system are canonically thought of as members of a Hilbert space. In the world-semantics for quantum logic, the space of worlds is taken to be such a space, and sentences are assigned closed subsets of this. [A ∧B ] =[A ] ∩[B ], [A ∨B ] =𝕮([A ] ∪[B ] ), where 𝕮(X ) is the smallest closed space containing X ; and [ ¬A ] =[A ]⟂. X ⟂ is the space comprising all those states that are orthogonal to members of X. (It satisfies the conditions: X = X ⟂⟂, if X ⊆Y then Y ⟂⊆X ⟂, and X ∩X ⟂=∅.) In quantum logic A →B can be defined in various ways. Perhaps the most plausible is as ¬A ∨(A ∧B ). (The subspaces of a Hilbert space also have the structure of a partial Boolean algebra. Such an algebra is determined by a family of Boolean algebras collapsed under a certain equivalence relation, which is a congruence relation on the Boolean operators. Partial Boolean algebras can be used to provide a slightly different quantum logic.)
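The failure of Distribution can be seen in a small abstract model of this kind. In the following sketch (in Python; the three-world space and the family of "closed" sets are invented stand-ins for the closed subspaces of a Hilbert space, and the third proposition is called D only to avoid clashing with the name of the closure function), A ∧(B ∨C ) holds at a world at which (A ∧B )∨(A ∧C ) fails.

```python
from functools import reduce

# Three worlds and a family of "closed" sets: closed under intersection and
# containing the whole space (an abstract stand-in for closed subspaces).
WORLDS = frozenset({1, 2, 3})
CLOSED = [frozenset(), frozenset({1}), frozenset({2}), frozenset({3}), WORLDS]

def C(X):
    """The closure of X: the smallest closed set containing X."""
    return reduce(frozenset.intersection, (Y for Y in CLOSED if X <= Y), WORLDS)

# Propositions as closed sets of worlds.
A, B, D = frozenset({1}), frozenset({2}), frozenset({3})

def conj(X, Y): return X & Y            # [A and B] = [A] intersect [B]
def disj(X, Y): return C(X | Y)         # [A or B]  = C([A] union [B])

# A and (B or D) holds at world 1 ...
print(1 in conj(A, disj(B, D)))                # True
# ... but (A and B) or (A and D) does not, so Distribution fails:
print(1 in disj(conj(A, B), conj(A, D)))       # False
```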
Paraconsistent Logics
Before we turn to quantifiers there is one further kind of logic to be mentioned: paraconsistent logic. Paraconsistent logic is motivated by the thought that we often seem to have to reason sensibly from information, or about a situation, which is inconsistent. In such a case, the principle A,¬A ⊦B (ex falso quodlibet sequitur, Explosion), which is valid in classical logic, clearly makes a mess of things. A paraconsistent logic is precisely one where this principle fails.
There are many different families of paraconsistent logics—as many as there are ways of breaking Explosion. Indeed many of the techniques we have already met in this article can be used to construct a paraconsistent logic. The 3-valued logic LP is paraconsistent, as is the Łukasiewicz continuum-valued logic, provided we take the designated values to contain 0.5. The ways that negation is handled in relevant logic also produce paraconsistent logics, as long as validity is defined over a class of worlds in which A and ¬A may both hold. Another approach (discussive logic ) is to employ standard modal logic and to take A to hold in an interpretation iff A holds at some world of the interpretation. In this approach the principle of Adjunction (A,B ⊦ A ∧B ) will generally fail, since A and B may each hold at a world, whilst A ∧B may not. Another approach ("positive plus") is to take any standard positive (negation free) logic, and add a non-truth-functional negation—so that the values of A and ¬A are assigned independently. In these logics, the principle of Contraposition (A ↔B ⊦¬B ↔¬A ) will generally fail. Yet another is to dualise intuitionist logic. In particular one can take semantic values to be the closed sets in some topology. If the value of A is X, the value of ¬A is the closure of the complement of X.
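For instance, here is a minimal sketch (in Python; the representation of the LP values as subsets of {1, 0} is just one convenient convention) of how Explosion fails in the three-valued logic LP.

```python
# LP values as subsets of {1, 0}: true only, false only, or both.
BOTH, TRUE, FALSE = frozenset({1, 0}), frozenset({1}), frozenset({0})

def neg(a):
    return frozenset({1 - x for x in a})   # swaps truth and falsity, so "both" stays "both"

def designated(a):
    return 1 in a                          # designated iff at least true

# Explosion fails: take A to be both true and false, and B to be false only.
A, B = BOTH, FALSE
print(designated(A), designated(neg(A)))   # True True: both premises are designated
print(designated(B))                       # False: the conclusion is not
```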
Second-Order Quantification
We now turn to the issue of quantification. In classical logic there are quantifiers ∀ and ∃. These range over a domain of objects, and ∀xA (x ) [∃xA (x )] holds if every [some] object in the domain of quantification satisfies A (x ). All the propositional logics we have looked at may be extended to first-order logics with such quantifiers. Other non-classical logics may be obtained by adding to these (or replacing these with) different kinds of quantifiers.
Perhaps the most notable of these is second-order logic. In this there are bindable variables (X, Y, …) that can stand in the place where a monadic first-order predicate can stand and which range over sets of objects in the first-order domain—canonically all of them. (There can also be variables that range over the n -ary relations on that domain, for each n, as well as variables that range over n -place functions.) The second-order extension of classical logic is much stronger than the first-order version. It can provide for a categorical axiomatization of arithmetic and consequently is not itself axiomatizable.
Monadic second-order quantifiers can also be given a rather different interpretation, as plural quantifiers. The idea here is to interpret ∃X Xa not as "There is a set such that a is a member of it," but as "There are some things such that a is one of them." The proponents of plural quantification argue that such quantification is not committed to the existence of sets.
Other Sorts of Quantifiers
There are many other non-classical quantifiers. For example one can have a binary quantifier of the form Mx (A (x ),B (x )), "most A s are B s." This is true in a finite domain if more than half the things satisfying A (x ) satisfy B (x ). It is not reducible to a monadic quantifier plus a propositional connective.
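A minimal sketch of the finite-domain clause (in Python; the bird example is invented for illustration):

```python
def most(domain, A, B):
    """Mx(A(x), B(x)): more than half of the things satisfying A satisfy B."""
    a_things = [x for x in domain if A(x)]
    return 2 * sum(1 for x in a_things if B(x)) > len(a_things)

birds = ['sparrow', 'robin', 'emu']
flies = {'sparrow': True, 'robin': True, 'emu': False}
print(most(birds, lambda x: True, lambda x: flies[x]))   # True: most of these birds fly
```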
Another sort of quantifier is a cardinality quantifier. The quantifier "there exist exactly n things such that" can be defined in first-order logic with quantification and identity in a standard way. The quantifier "there is a countable number of things such that" (or its negation, "there is an uncountable number of things such that") cannot be so defined—let alone the quantifier "there are κ things such that," for an arbitrary cardinal, κ. Such quantifiers can be added, with the obvious semantics. These quantifiers extend the expressive power of the language towards that of second-order logic—and beyond.
Another kind of quantifier is the branching quantifier. When, in first-order logic, we write:
∀x 1∃y 1∀x 2∃y 2A (x 1,x 2,y 1,y 2)
y 2 is in the scope of x 1, and so its value depends on that of x 1. To express non-dependence one would normally need second-order quantification, thus:
∃f 1∀x 1∃f 2∀x 2A (x 1,x 2,f 1(x 1),f 2(x 2))
But we may express it equally by having the quantifiers non-linearly ordered, thus:

∀x 1∃y 1
                    A (x 1,x 2,y 1,y 2)
∀x 2∃y 2
As this would suggest, branching quantifiers have something of the power of second-order logic.
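Over a finite domain the second-order reading can be checked by brute force. The following sketch (in Python; the domain and the test formulas are invented for the example) searches for Skolem functions f1 and f2, each depending only on its own variable.

```python
from itertools import product

def henkin(domain, A):
    """The branching quantifier, read second-order style: there are functions
    f1 and f2 such that for all x1, x2, A(x1, x2, f1(x1), f2(x2)) holds."""
    dom = list(domain)
    for f1 in product(dom, repeat=len(dom)):        # f1 as a table of values
        for f2 in product(dom, repeat=len(dom)):    # f2 as a table of values
            if all(A(x1, x2, f1[i], f2[j])
                   for i, x1 in enumerate(dom)
                   for j, x2 in enumerate(dom)):
                return True
    return False

# "y1 copies x1 and y2 copies x2" can be met by independent choices ...
print(henkin({0, 1}, lambda x1, x2, y1, y2: y1 == x1 and y2 == x2))   # True
# ... but "y2 equals x1" cannot, since y2 may depend only on x2 (though the
# linearly ordered prefix displayed earlier would allow it):
print(henkin({0, 1}, lambda x1, x2, y1, y2: y2 == x1))                # False
```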
A quite different kind of quantifier is the substitutional quantifier. For this there is a certain class of names of the language, C. ΠxA (x ) [ΣxA (x )] holds iff for every [some] c ∈C, A (c ) holds. This is not the same as standard (objectual) quantification, since some objects in the domain may have no name in C ; but first-order substitutional quantifiers validate the same quantificational inferences as first-order objectual quantifiers. Note that the notion of substitutional quantification makes perfectly good sense for any syntactically well-defined class, including predicates (so we can have second-order substitutional quantification) or binary connectives (so that Σx (AxB ) can make perfectly good sense).
Finally in this category come free quantifiers. It is standard to interpret the domain of objects of quantification (at a world) as comprising the objects that exist (at that world). It is quite possible, however, to think of the domain as containing a bunch of objects, some of which exist, and some of which do not. Obviously this does not change the formal properties of the quantifiers. But if one thinks of the domain in this way one must obviously not read ∃x as 'there exists an x such that'; one has to read it simply as 'for some x '. Given this set-up, however, it makes sense to have existentially loaded quantifiers, ∀E and ∃E, such that ∀ExA (x ) [∃ExA (x )] holds (at a world) iff all [some] of the existent objects (at the world) satisfy A (x ). If there is a monadic existence predicate, E, these quantifiers can be defined in the obvious way, as (respectively): ∀x (Ex →A (x )) and ∃x (Ex ∧A (x )). Clearly, existentially loaded quantifiers will not satisfy some of the standard principles of quantification, such as ∀ExA (x )→A (c ), A (c )→∃ExA (x ) (since the object denoted by 'c ' may not exist). Some logics do not have the existentially unloaded quantifiers, just the loaded ones. These are usually called free logics.
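A minimal sketch of the loaded and unloaded quantifiers over a domain containing non-existent objects (in Python; the domain and predicates are invented for illustration):

```python
# A domain containing both existent and non-existent objects, with a monadic
# existence predicate E represented as a set.
DOMAIN = {'holmes', 'obama'}
E = {'obama'}                        # only obama exists
DETECTIVE = {'holmes'}

def some(pred):                      # unloaded: "for some x, A(x)"
    return any(pred(x) for x in DOMAIN)

def some_E(pred):                    # loaded: "for some existent x, A(x)"
    return any(pred(x) for x in DOMAIN if x in E)

is_detective = lambda x: x in DETECTIVE
print(some(is_detective))     # True:  something in the domain is a detective
print(some_E(is_detective))   # False: no existent thing is, so A(c) -> (loaded)
                              # "some x A(x)" fails for c = holmes
```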
Non-Monotonic Logics
It remains to say a word about one other kind of logic that is often categorized as non-classical. In all the logics we have been considering so far:
if Σ⊦A then Σ∪Δ⊦A
(where Σ and Δ are sets of formulas): Adding extra premises makes no difference. This is called monotonicity. Logics in which this principle fails are called non-monotonic logics. Non-monotonic inferences can be thought of as inferences that are made with certain default assumptions. Thus I am told that something is a bird, and I infer that it can fly. Since most birds fly this is a reasonable conclusion. If, however, I also learn that the bird weighs 20 kg. (and so is an emu or an ostrich), the conclusion is no longer a reasonable one.
There are many kinds of non-monotonic logics, depending on what kind of default assumption is implemented, but there is a common structure that covers many of them. Interpretations, I, of the language come with a strict partial ordering, ≻ (often called a preference ordering ). Intuitively, I 1≻I 2 means that the situation represented by I 1 is more normal (in whatever sense of normality is at issue) than that represented by I 2. (In particular cases it may be reasonable to suppose that ≻ has additional properties.) I is a most normal model of Σ iff every B ∈Σ holds in I, and there is no J ≻I for which this is true. A follows from Σ iff A holds in every most normal model of Σ. As is clear, a most normal model of Σ is not guaranteed to be a most normal model of Σ∪Δ. Hence monotonicity will fail. As might be expected there is a close connection between non-monotonic logics and conditional logics, in which the inference A →B ⊦(A ∧C )→B fails. Though non-monotonic logic has come to prominence in modern computational logic, it is just a novel and rigorous way of looking at the very traditional notion of non-deductive (inductive, ampliative) inference.
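The bird example can be worked through this structure directly. In the sketch below (Python; the three atoms, the particular defaults, and the violation-counting ordering are all invented for illustration), an interpretation is the more normal the fewer defaults it violates, and consequence is truth in all most normal models.

```python
from itertools import product

# Interpretations: truth-value assignments to three atoms.
ATOMS = ('bird', 'heavy', 'flies')
INTERPS = [dict(zip(ATOMS, v)) for v in product([True, False], repeat=3)]

# Default assumptions; an interpretation is the more normal the fewer it violates.
DEFAULTS = [
    lambda i: not i['bird'] or i['flies'],                       # birds fly
    lambda i: not (i['bird'] and i['heavy']) or not i['flies'],  # heavy birds do not
    lambda i: not i['bird'] or not i['heavy'],                   # birds are not heavy
]

def violations(i):
    return sum(1 for d in DEFAULTS if not d(i))

def more_normal(i, j):               # the preference ordering (a strict partial order)
    return violations(i) < violations(j)

def most_normal_models(premises):
    models = [i for i in INTERPS if all(p(i) for p in premises)]
    return [i for i in models if not any(more_normal(j, i) for j in models)]

def follows(premises, conclusion):
    return all(conclusion(i) for i in most_normal_models(premises))

bird  = lambda i: i['bird']
heavy = lambda i: i['heavy']
flies = lambda i: i['flies']

print(follows([bird], flies))          # True:  told only that it is a bird, infer it flies
print(follows([bird, heavy], flies))   # False: the extra premise defeats the inference
```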
History, Persons, References
We conclude this review of non-classical logics by putting the investigations discussed above in their historical context. References that may be consulted for further details are also given at the end of each paragraph. For a general introduction to propositional non-classical logics, see Priest (2001). Haack (1996) is a discussion of some of the philosophical issues raised by non-classical logics.
The first modern many-valued logics, the Łn family, were produced by Jan Łukasiewicz in the early 1920s. (Emil Post also produced some many-valued logics about the same time.) Łukasiewicz's major philosophical concern was Aristotle's argument for fatalism. In this context he suggested a many-valued analysis of modality. Logics of the both/neither kind were developed somewhat later. Canonical statements of K 3 and LP were given (respectively) by Stephen Kleene in the 1950s and Graham Priest in the 1970s. Łℵ was first published by Łukasiewicz and Alfred Tarski in 1930. The intensive investigation of fuzzy logics and their applications started in the 1970s. A notable player in this area was Lotfi Zadeh. (Rescher 1969, Urquhart 2001– , Hájek 1998, Yager and Zadeh 1992.)
Modern modal logics were created in an axiomatic form by Clarence Irving Lewis in the 1920s. Lewis's concern was the paradoxes of the material conditional, and he suggested the strict conditional as an improvement. Possible-world semantics for modal logics were produced by a number of people in the 1960s, but principally Saul Kripke. The semantics made possible the systematic investigation of the rich family of modal logics. (Bull and Segerberg 2001– , Garson 2001– , Hughes and Cresswell 1996.)
The idea that the techniques of modal logics could be applied to notions other than necessity and possibility occurred to a number of people around the middle of the twentieth century. Tense logics were created by Arthur Prior, epistemic and doxastic logic were produced by Jaakko Hintikka, and deontic logics by Georg Henrik von Wright. Investigations of provability logic were started in the 1970s by George Boolos and others. Dynamic logic was created by Vaughan Pratt and other logicians particularly interested in computation, including David Harel, in the 1970s. (van Benthem 1988, Burgess 2001– , Thomason 2001– , Meyer 2001– , Åqvist 2001– , Boolos 1993, Harel, Kozen, and Tiuryn 2001– .)
Conditional logics (with "sphere semantics") were proposed by David Lewis and Robert Stalnaker in the 1970s. They were formulated as multi-modal logics by Brian Chellas and Krister Segerberg a few years later (Harper, Stalnaker, and Pearce 1981, Nute and Cross 2001–).
The intuitionist critique of classical mathematics was started by Luitzen Egbertus Jan Brouwer in the early years of the twentieth century. This generated a novel kind of mathematics: intuitionist mathematics. Intuitionist logic, as such, was formulated by Arend Heyting and Andrei Kolmogorov in the 1920s. The intuitionist critique of mathematical realism was extended to realism in general by Michael Dummett in the 1970s (Dummett 1977, van Dalen 2001–).
Systems of relevant logic, in axiomatic form, came to prominence in the 1960s because of the work of Alan Anderson, Nuel Belnap and their students. World-semantics were produced by a number of people in the 1970s, but principally Richard Routley (later Sylvan) and Robert Meyer. The semantics made possible the investigation of the rich family of relevant logics. The four-valued semantics for negation is due to J. Michael Dunn (Dunn and Restall 2001– , Mares 2004).
Linear logic was produced by Jean-Yves Girard in the 1980s. Although many members of the class of sub-structural logics had been studied before, the fact that they could be viewed in a uniform proof-theoretic way was not appreciated until the late 1980s. The formulation of quantum logic in terms of Hilbert spaces is due, essentially, to Garrett Birkhoff and John von Neumann in the 1930s. The use of an abstract closure operator to give the semantics for non-distributive logics is due to Greg Restall. (Troelstra 1992, Restall 2000, Paoli 2002, Dalla Chiara and Giuntini 2001– , Hughes 1989).
The first paraconsistent logic (discussive logic) was published by Stanisław Jaśkowski in 1948. Other non-adjunctive logics were later developed in the 1970s by Peter Schotch and Raymond Jennings. Newton da Costa produced a number of different paraconsistent logics and applications, starting with positive-plus logics in the 1960s. The paraconsistent aspects of relevant logic were developed by Priest and Routley in the 1970s. (Priest, Routley and Norman 1989, Priest 2001, Carnielli et al. 2001, Mortensen 1995).
Second-order quantification goes back to the origins of classical logic in the work of Gottlob Frege and Bertrand Russell. Its unaxiomatizability put it somewhat out of fashion for a number of years, but it made a strong come-back in the last years of the twentieth century. The notion of plural quantification was made popular by George Boolos in the 1980s. (Shapiro 1991, 2001– ; Boolos 1984).
Quantifier phrases other than "some A " and "all A " are pervasive in natural language; and since Frege provided an analysis of the quantifier, many different kinds have been investigated by linguists and logicians. Branching quantifiers were proposed by Jaakko Hintikka in the 1970s. Substitutional quantification came to prominence in the 1960s, put there particularly in connection with quantification into the scope of modal operators by Ruth Barcan Marcus. It was treated with suspicion for a long time, but was eventually given a clean bill of health by Kripke. Free logics were first proposed in the 1960s, by Karel Lambert and others (van der Does and van Eijck 1996, Barwise 1979, Kripke 1976, Bencivenga 2001– ).
Non-monotonic logics started to appear in the logic/computer-science literature in the 1970s. There are many kinds. The fact that many of them could be seen as logics with normality orderings started to become clear in the 1980s (Shoham, 1988; Crocco, Fariñas del Cerro, and Herzig 1995; Brewka, Dix, and Konolige, 1997).
See also Aristotle; Brouwer, Luitzen Egbertus Jan; Combinatory Logic; Dummett, Michael Anthony Eardley; First-Order Logic; Frege, Gottlob; Fuzzy Logic; Gödel's Incompleteness Theorems; Hintikka, Jaakko; Intensional Logic; Intuitionism and Intuitionistic Logic; Kripke, Saul; Lewis, Clarence Irving; Lewis, David; Łukasiewicz, Jan; Many-Valued Logics; Modal Logic; Neumann, John von; Non-Monotonic Logic; Platonism and the Platonic Tradition; Prior, Arthur Norman; Provability Logic; Quantifiers in Natural Language; Quantum Logic and Probability; Russell, Bertrand Arthur William; Second-Order Logic; Semantics; Tarski, Alfred; Wright, Georg Henrik von.
Bibliography
Åqvist, Leonard. "Deontic Logic." In Gabbay and Guenthner (2001–), vol. 8.
Bencivenga, Ermanno. "Free Logic." In Gabbay and Guenthner (2001–), vol. 5.
van Benthem, Johan. A Manual of Intensional Logic. Stanford, CA: CSLI, 1988.
Barwise, Jon. "On Branching Quantifiers in English." Journal of Philosophical Logic 8 (1979): 47–80.
Boolos, George. The Logic of Provability. Cambridge, U.K.: Cambridge University Press, 1993.
Boolos, George. "To Be Is to Be the Value of a Variable (Or Some Values of Some Variables)." Journal of Philosophy 81 (1984): 430–449. Reprinted in Boolos's Logic, Logic, and Logic. Cambridge, MA: Harvard University Press, 1998.
Brewka, Gerhard, Jürgen Dix, and Kurt Konolige. Non-Monotonic Logic: An Overview. Stanford, CA: CSLI, 1997.
Bull, Robert A., and Krister Segerberg. "Basic Modal Logic." In Gabbay and Guenthner (2001–), vol. 3.*
Burgess, John P. "Basic Tense Logic." In Gabbay and Guenthner (2001–), vol. 7.*
Dalla Chiara, Maria Luisa, and Roberto Giuntini. "Quantum Logics." In Gabbay and Guenthner (2001–), vol. 6.*
Crocco, Gabriella, Luis Fariñas del Cerro, and Andreas Herzig. Conditionals: From Philosophy to Computer Science. Oxford: Oxford University Press, 1995.
van Dalen, Dirk. "Intuitionistic Logic." In Gabbay and Guenthner (2001–), vol. 5.*
van der Does, Jaap, and Jan van Eijck. "Basic Quantifier Theory." In Quantifiers, Logic, and Language, edited by Jaap van der Does and Jan van Eijck. Stanford, CA: CSLI, 1996.
Dummett, Michael. Elements of Intuitionism. Oxford: Oxford University Press, 1977.
Gabbay, Dov, and Franz Guenthner, eds. Handbook of Philosophical Logic. 2nd ed. Dordrecht: Kluwer Academic, 2001–. Articles marked * are (possibly) revised versions of chapters in the much shorter first edition, Dordrecht: Reidel, 1983–89.
Garson, James. "Quantification in Modal Logic." In Gabbay and Guenthner (2001–), vol. 3.*
Haack, Susan. Deviant Logic, Fuzzy Logic: Beyond the Formalism. Chicago: University of Chicago Press, 1996.
Hájek, Petr. Metamathematics of Fuzzy Logic. Dordrecht: Kluwer Academic, 1998.
Harper, William, Robert Stalnaker, and Glenn Pearce, eds. Ifs: Conditionals, Belief, Decision, Chance, and Time. Dordrecht: Kluwer Academic, 1981.
Harel, David, Dexter Kozen, and Jerzy Tiuryn. "Dynamic Logic." In Gabbay and Guenthner (2001–), vol. 4.*
Hughes, George, and Max Cresswell. A New Introduction to Modal Logic. London: Routledge, 1996.
Hughes, R. I. G. The Structure and Interpretation of Quantum Mechanics. Cambridge, MA: Harvard University Press, 1989.
Kripke, Saul. "Is There a Problem about Substitutional Quantification?" In Truth and Meaning, edited by Gareth Evans and John McDowell. Oxford: Oxford University Press, 1976.
Mares, Edwin. Relevant Logic. Cambridge, U.K.: Cambridge University Press, 2004.
Meyer, John Jules. "Modal Epistemic and Doxastic Logic." In Gabbay and Guenthner (2001–), vol. 10.
Mortensen, Chris. Inconsistent Mathematics. Dordrecht: Kluwer Academic, 1995.
Paoli, Francesco. Substructural Logics: A Primer. Dordrecht: Kluwer Academic, 2002.
Priest, Graham. Introduction to Non-Classical Logic. Cambridge, U.K.: Cambridge University Press, 2001.
Priest, Graham, Richard Routley, and Jean Norman. Paraconsistent Logic: Essays on the Inconsistent. Munich: Philosophia Verlag, 1989.
Rescher, Nicholas. Many-Valued Logic. New York: McGraw Hill, 1969.
Restall, Greg. An Introduction to Substructural Logics. London: Routledge, 2000.
Shapiro, Stewart. Foundations without Foundationalism: A Case for Second-Order Logic. Oxford: Oxford University Press, 1991.
Shoham, Yoav. Reasoning about Change. Cambridge, MA: MIT Press, 1988.
Thomason, Richmond H. "Combinations of Tense and Modality." In Gabbay and Guenthner (2001–), vol. 7.*
Troelstra, Anne. Lectures on Linear Logic. Stanford, CA: CSLI, 1992.
Urquhart, Alasdair. "Basic Many-Valued Logic." In Gabbay and Guenthner (2001–), vol. 2.*
Yager, Ronald R., and Lotfi A. Zadeh. An Introduction to Fuzzy Logic Applications in Intelligent Systems. Dordrecht: Kluwer Academic, 1992.
Graham Priest (2005)