Explanation
The three cardinal aims of science are prediction, control, and explanation, but the greatest of these is explanation. It is also the most inscrutable: Prediction aims at truth, and control at happiness, and insofar as one has some independent grasp of these notions, one can evaluate science's strategies of prediction and control from the outside. Explanation, by contrast, aims at scientific understanding, a good intrinsic to science, and therefore something that it seems one can only look to science itself to explicate.
Philosophers have wondered whether science might be better off abandoning the pursuit of explanation. Pierre Duhem (1954), among others, argued that explanatory knowledge would have to be a kind of knowledge so exalted as to be forever beyond the reach of ordinary scientific inquiry: it would have to be knowledge of the essential natures of things, something that neo-Kantians, empiricists, and level-headed practitioners of science could all agree was neither possible nor perhaps even desirable.
Everything changed when Carl Gustav Hempel formulated his deductive-nomological account of explanation. In accordance with the previous observation, that one's only clue to the nature of explanatory knowledge is science's own explanatory practice, Hempel proposed simply to describe what kind of things scientists tendered when they claimed to have an explanation, without asking whether such things were capable of providing true understanding. Since Hempel, the philosophy of scientific explanation has proceeded in this humble vein, seeming more like a sociology of scientific practice than an inquiry into a set of transcendent norms. In keeping with its mission as a branch of philosophy, however, the study of explanation pursues a particular kind of sociological knowledge: It is concerned almost exclusively with the ideal at which scientists take themselves to be aiming, and barely at all with the steps and missteps taken on the way to realizing the ideal.
As Hempel saw it, scientific explanation was of a piece with prediction, requiring the same resources and giving a similar kind of satisfaction. No doubt this modest view of the explanatory enterprise played a part in making the study of explanation acceptable in the climate of postwar empiricism. The story of explanation in the decades since Hempel's time, however, is an expansionist one. Over the years philosophers of explanation have gradually required more resources for, and made grander claims for the significance of, explanation's role in science. (For a comprehensive overview of the philosophy of explanation from 1948 to 1988, with a full bibliography, see Wesley C. Salmon [1990].)
The Deductive-Nomological Account
Hempel's deductive-nomological (DN) account (Hempel and Oppenheim 1948) is intended to capture the form of any deterministic scientific explanation of an individual event, such as the expansion of a particular metal bar when heated, the extinction of the dinosaurs, or the outbreak of the U.S. Civil War.
According to Hempel such an explanation is always a deductive derivation of the occurrence of the event to be explained from a set of true propositions including at least one statement of a scientific law. (The event to be explained is called the explanandum; the set of explaining statements is sometimes called the explanans.) In other words, a deterministic event explanation is always a sound, law-involving, deductive argument with the conclusion that the explanandum event occurred.
Intuitively, the premises of a DN explanation spell out the relevant initial and background conditions, and the laws governing the behavior of the system in which the explanandum occurred. For example, Hempel cites the following argument as a typical DN explanation of the event of a thermometer's mercury expanding when placed in hot water:
The (cool) sample of mercury was placed in hot water, heating it.
Mercury expands when heated.
Thus, the sample of mercury expanded.
Because the law or laws that must be cited in a DN explanation typically cover the pattern of behavior of which the explanandum is an instance, the DN account is sometimes referred to as the covering law account of explanation.
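Schematically, then, a DN explanation takes the familiar Hempel–Oppenheim form: a deduction of the explanandum from statements of particular antecedent conditions together with general laws. The following compact rendering of the schema is supplied here only as an illustration of that form:

```latex
% The covering-law (DN) schema: the explanandum E is deduced from
% antecedent conditions C_1, ..., C_k together with general laws L_1, ..., L_r.
\[
\underbrace{C_1, C_2, \ldots, C_k}_{\text{antecedent conditions}},\quad
\underbrace{L_1, L_2, \ldots, L_r}_{\text{general laws}}
\;\;\vdash\;\; E \quad \text{(description of the explanandum event)}
\]
```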
One can see that the DN account is not intended to give the form of probabilistic event explanations; Hempel offers a separate account of probabilistic explanation, which will be discussed later on. The explanation of phenomena other than events is, by contrast, apparently amenable to the DN approach. Hempel suggests that a scientific law can be explained, for example, much like an event, by deducing it from premises including at least one other law. However, he finds himself unable to make good on this proposal, for reasons connected to the relevance problem discussed in the next section.
Many scientific explanations of events and other phenomena undoubtedly have the form proposed by the DN account: They are logical derivations from laws and other information. Hempel does not entirely content himself, however, with answering questions of form. Taking one step beyond sociological humility, he advances a thesis as to why deductive, law-involving arguments should confer understanding: "[A DN explanation] shows that, given the particular circumstances and the laws in question, the occurrence of the phenomenon was to be expected; and it is in this sense that the explanation enables us to understand why the phenomenon occurred" (1965a, p. 337, emphasis in the original).
Scientific understanding, then, takes the form of retrospective expectation: One might say (loosely) that, whereas prediction is concerned with what one should expect in the future, explanation is concerned with what one should have expected in the past. Explanation is, then, put on a par with prediction and so made safe for empiricist philosophy of science. Hempel even goes so far as to say that the difference between explanation and prediction is merely pragmatic (Hempel and Oppenheim 1948), though the DN account does not in itself entail such a thesis.
Objections to the DN Account
Three kinds of objections to the DN account have been especially important for the subsequent development of the philosophy of explanation.
The first kind of objection, developed by Henry Kyburg, Salmon, and others, points to the DN theory's inability to account for judgments of explanatory relevance. The paradigm is the following argument, which satisfies all the DN account's criteria for a good explanation of the event of a particular teaspoon of salt's dissolving:
The teaspoon of salt was hexed (meaning that certain hand gestures were made over the salt).
The salt was placed in water.
All hexed salt dissolves when placed in water.
Thus, the salt dissolved.
The explanation appears to attribute the salt's dissolving in part to its being hexed, when in fact the hexing is irrelevant.
There are various responses to the counterexample that aim to preserve as much of the DN account as possible, for example, holding that the generalization about hexed salt is not a true law or imposing the requirement that a DN explanation use the most general law available.
Salmon's much less conservative reaction is to conclude that Hempel is wrong to think of explanation in terms of expectability, and therefore wrong to think of explanations as kinds of argument. The relation between the factors cited in an explanation and the explanandum itself, Salmon holds, is not epistemic but ontic; it should be a physical relevance relation—a relation of statistical relevance, he first proposes (1970), or a relation of causal relevance, as he later comes to believe (1984). The faulty explanation of the salt's dissolving is to be discarded, argues Salmon, not because of some formal or logical defect, but because it cites an event, the hexing of the salt, that fails to bear the appropriate relevance relation to the explanandum.
Hempel himself declines (early in his career, at least) to give a DN account of the explanation of laws because of a related problem. Kepler's laws may be derived from a single law that is simply the conjunction of Kepler's laws and Boyle's law. Such a derivation is clearly no explanation of Kepler's laws, writes Hempel, yet it satisfies the DN account's requirements: The premises are true and the argument is valid and law involving (Hempel and Oppenheim 1948).
The second important objection to the DN account is perhaps also the most famous. It shows, most philosophers would agree, that the DN account pays insufficient attention to the explanatory role of causal relations.
The height of a flagpole can be cited, along with the position of the sun and the law that light travels in straight lines, to explain the length of the flagpole's shadow. The DN account is well able to make sense of this explanation: It can be cast in the form of a sound, law-involving argument. However, now take this same argument and switch the premise stating the height of the flagpole with the premise stating the length of the shadow. One now has a sound, law-involving argument for the height of the flagpole that cites, among other things, the length of the shadow—thus, according to the DN account, one has an explanation of the height of the flagpole that cites, as an explainer, the length of the shadow. This consequence of the DN account—that the height of a flagpole can be explained by the length of its shadow—seems obviously wrong, and it is wrong, it seems, because a cause cannot be explained by its own effects.
A further famous example strongly suggests that effects can only be explained by their causes and the laws and background conditions in virtue of which they are causes. Suppose that the arrival of a certain kind of weather front is always followed by a storm and that a certain reading on a barometer is a sure sign that such a front has arrived. Then a barometer reading of this sort is always followed by a storm. The storm cannot be explained, however, by citing the barometer reading and that such readings are always followed by storms, though these two facts together satisfy the requirements of the DN account. A constant, robust correlation is not, it appears, enough for explanation. What is needed, as Salmon eventually concludes, is a causal relation.
At first Hempel resists the suggestion that facts about causation play any special role in explanation (e.g., see 1965a, §2.2). Over the years, however, due in part to the development of sophisticated empiricist accounts of causation, this has become a minority view.
The third class of objections to the DN account focuses on the account's requirements that every explanation cite a law and that (except in probabilistic explanation) the law or laws be strong enough to entail, given appropriate boundary conditions, the explanandum. One way to develop the objection is to point to everyday explanations that cite the cause of an event as its explanation, without mentioning any covering law, as when one cites a patch of ice on the road as the cause of a motorcycle accident.
More important for the study of explanation in science are varieties of explanation in which there is no prospect and no need for either the entailment or the probabilification of the explanandum. Perhaps the best example of all is Darwinian explanation, in which a trait T of some species is explained by pointing to the way in which T enhanced, directly or indirectly, the reproductive prospects of its possessor. Attempting to fit Darwinian explanation into the DN framework creates a host of problems, among which the most intractable is perhaps the following (Scriven 1959): For every trait that evolved because it benefited its possessors in some way, there are many other, equally valuable traits that did not evolve, perhaps because the right mutation did not occur, perhaps for more systematic reasons (e.g., the trait's evolution would have required a dramatic reconfiguration of the species' developmental pathways). To have a DN explanation of T, one would have to produce a deductive argument entailing that T, and none of the alternatives, evolved. In other words, one would have to be in a position to show that T had to evolve. Not only does this seem close to impossible but also it seems unnecessary for understanding the appearance of T. One can understand the course of evolution without retrospectively predicting its every twist and turn.
Hempel is aware of the problem with Darwinian explanation. His response is to argue that there is no such thing: Faced with a choice between the DN account and Darwinian explanation, one should opt for the former and consider Darwinian stories to be at best partial explanations of traits (Hempel 1965c). He advocates a similar deflationary treatment of functionalist explanation in sociology and of historical explanations that are not entailments.
The Inductive-Statistical Account
Hempel's (1965a, §3) account of the probabilistic explanation of events, the inductive-statistical (IS) account, in many ways parallels the DN account of deterministic event explanation. Like a DN explanation, an IS explanation is a law-involving argument giving good reason to expect that the explanandum event occurred. However, whereas a DN explanation is a deductive argument entailing the explanandum, an IS explanation is an inductive argument conferring high probability on the explanandum.
Hempel's example is the explanation of John Jones's swift recovery from a strep infection. The probability of a swift recovery without the administration of penicillin, Hempel supposes, is 0.1, while the probability with penicillin is 0.9. An argument citing Jones's infection, his treatment with penicillin, and the statistical generalization connecting penicillin treatment with recovery therefore confers a high probability on Jones's swift recovery; in the circumstances, one would expect Jones to recover swiftly. This inductive argument is sufficient, in Hempel's view, to explain the swift recovery.
Inductive soundness imposes one additional requirement with no parallel in deductive logic. Suppose one knows that Jones's strain of strep is resistant to penicillin. An inductive argument is said to be sound only if all relevant background knowledge is taken into account; consequently, an inductive argument for Jones's swift recovery must cite the infection's penicillin resistance. But once the new premise is added, the argument will no longer confer a high probability on its conclusion. This is what is wanted: There ought to be no inductive argument for swift recovery—one ought not to expect swift recovery—when the strep is known to be resistant.
Hempel imposes a similar requirement on IS explanations, which he calls the requirement of maximal specificity (for details, see Hempel 1965a, §3.4). In virtue of this requirement, it is not possible to explain Jones's swift recovery by citing treatment with penicillin when the infection is known to be penicillin resistant.
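To make the two requirements concrete, here is a toy sketch in which a candidate IS explanation is checked both for conferring high probability and for maximal specificity. The 0.9 and 0.1 figures for the penicillin and no-penicillin cases come from Hempel's example; the probability for the resistant case, the 0.5 threshold for "high" probability, and all names are assumptions introduced purely for illustration:

```python
# Toy check of Hempel's two IS requirements (illustrative only):
# (1) the cited reference class must confer high probability on the explanandum;
# (2) the cited class must be the most specific class the explainer knows of
#     (the requirement of maximal specificity).

# P(swift recovery | reference class). The 0.9 and 0.1 values follow Hempel's
# example; the value for the resistant case is an assumption.
PROB = {
    ("strep",): 0.1,
    ("strep", "penicillin"): 0.9,
    ("strep", "penicillin", "resistant"): 0.1,
}

def most_specific_class(known_facts):
    """The narrowest reference class constructible from what the explainer knows."""
    candidates = [c for c in PROB if set(c) <= set(known_facts)]
    return max(candidates, key=len)

def is_explanation_ok(cited_class, known_facts, threshold=0.5):
    """True if the cited class is maximally specific and confers high probability."""
    specific = most_specific_class(known_facts)
    return set(cited_class) == set(specific) and PROB[specific] >= threshold

# Penicillin explains the swift recovery when resistance is not known...
print(is_explanation_ok(("strep", "penicillin"), {"strep", "penicillin"}))               # True
# ...but not once the explainer knows the strain is penicillin resistant.
print(is_explanation_ok(("strep", "penicillin"), {"strep", "penicillin", "resistant"}))  # False
```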
As with the DN account of explanation, a number of objections to the IS account have exerted a strong influence on the subsequent development of the philosophical study of explanation. Versions of both the relevance and the causal objections apply to the IS account as well as to the DN account. Two other important criticisms will be briefly described here.
The first is the complaint that it is too much to ask that explanations confer high probability on their explananda. In many ways, this is the analogue of the third objection to the DN account mentioned earlier; in the same paper in which Michael Scriven (1959) expresses doubts about the existence of a DN treatment of Darwinian explanation, he describes the following example, best conceived of as an objection to the IS account. The probability that Jones contracts paresis, a form of tertiary syphilis that attacks the central nervous system, given that he has untreated secondary syphilis, is low. However, only syphilitics contract paresis. It seems reasonable to cite untreated syphilis, then, as explaining Jones's paresis, though the explanation confers only a low probability on the explanandum.
The proponent of the IS account is committed to rejecting such attempts at explanation, as Hempel does, arguing that in such cases one has only a partial explanation of why the patient contracted paresis. This is perhaps one of the most convincing of Hempel's defenses, but the paresis example is nevertheless widely regarded as posing a serious problem for the expectability approach to explanation.
A second objection to the IS account focuses on the requirement of maximal specificity. The requirement insists that all relevant background knowledge must be included in a probabilistic event explanation, but it does not require that relevant but unknown information be taken into account. In particular, if Jones's infection is penicillin resistant, but this fact is not known to the explainer, then the IS account deems the explainer's appeal to the administration of penicillin a perfectly good explanation of Jones's swift recovery.
As J. Alberto Coffa (1974) argues, this is surely not correct. If the infection is resistant to penicillin, then the administration of penicillin cannot explain the recovery, regardless of what the explainer does and does not know. The requirement of maximal specificity makes probabilistic explanation relative to the explainer's epistemic situation, then, in a way that it appears not to be. This objection hits right at the heart of the expectability conception of explanation, suggesting that explanation is not an epistemic matter in the least. A third objection that is applicable to many accounts of probabilistic explanation will be raised in the following discussion of the statistical relevance account.
The Statistical Relevance Account
In response to the DN account's relevance problem, Salmon suggests that the factors cited in an explanation must stand in a relation of statistical relevance (SR) to the explanandum. He does not intend this as a friendly amendment to Hempel's account, but as a radical reconceptualization of the nature of explanation: The function of an explanation, Salmon (1970) argues, is not to show that the explanandum was to be expected, but to describe factors—ideally, all the factors—statistically relevant to the occurrence of the explanandum.
From the beginning, statistical relevance is presented as an objective relation, that is, a relation holding independently of the explainer's background knowledge or other context. (Coffa's [1974] critique of the IS account, discussed earlier, discourages relativistic backsliding.) Salmon thus requires an account of probability that is both objective and broad enough to encompass any possible explanandum.
For breadth, he settles on frequentism, the view that the probability of an event type is equal to the frequency with which it occurs in a reference class of outcomes. For objectivity, he develops what he calls a homogeneity constraint on the reference classes that can be used as bases for explanatory probabilities. Such a constraint, he believes, is strong enough to determine a single, observer-independent probability distribution over any set of outcomes of interest. Salmon (1984) summarizes the theory of homogeneity; for further information, see the discussion of the reference class problem in the separate entry on probability and chance.
Statistical relevance is a comparative concept: To say that a factor A is statistically relevant to the occurrence of an event E is to say that the probability of E (or for the frequentist, of events of the same type as E ) in the presence of A is greater than the probability of E in the absence of A. Thus, the determination of a relevance relation requires not only a reference class—a class of outcomes all of which occurred in the presence of A —but a contrast class, a class of outcomes all of which occurred in the absence of A. The contrast class is not normally homogeneous. Thus for Salmon, the contrast probability must be a weighted sum of different homogeneous probabilities, each corresponding to a different way that A might have been absent, and giving the probability of E when A is absent in that way.
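A minimal numerical sketch, with invented weights and probabilities, may help fix ideas: statistical relevance compares the probability of E given A with a contrast probability obtained by averaging over the homogeneous ways in which A might have been absent:

```python
# Invented numbers illustrating Salmon's comparative notion of statistical
# relevance: A is relevant to E iff P(E | A) exceeds the contrast probability
# P(E | not-A), computed as a weighted sum over the homogeneous cells into
# which the A-absent cases are partitioned.

def contrast_probability(cells):
    """cells: list of (weight, P(E | A absent in this particular way)) pairs."""
    total = sum(w for w, _ in cells)
    return sum(w * p for w, p in cells) / total

p_e_given_a = 0.8                          # hypothetical
absent_cells = [(0.6, 0.3), (0.4, 0.5)]    # hypothetical homogeneous cells

p_e_given_not_a = contrast_probability(absent_cells)     # 0.38
print(p_e_given_a > p_e_given_not_a)                     # True: A is relevant to E
```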
Perhaps inevitably, Salmon arrives at the view that a full SR explanation is a complete table of relevance, describing not only factors that are present and statistically relevant to the explanandum but also factors that are absent but would have been statistically relevant if they had been present. He further adds to the table all the alternatives to the explanandum E with respect to which there existed homogeneous probabilities, and a list of all the factors that would have been relevant to these alternatives, had they occurred. Consequently, the information proffered in an SR explanation of an event E not only explains the actual occurrence of E but would also explain any occurrence of an event of the same type, even if different relevant factors were present, as well as the occurrence of any alternative to E.
As something of a corollary to this view, Salmon holds that negatively relevant factors—factors that lower the probability of the explanandum—are as explanatory as positively relevant factors and that all factors should be mentioned regardless of their degree of relevance. Salmon's not discriminating among these factors is perhaps best understood as follows. Seeing that a factor is statistically relevant to the explanandum is an explanatory end in itself. That the factor makes a particular kind of change—positive or negative, large or small—to the total probability of the explanandum would be important only if appreciating the value of the total probability were also an explanatory end. However, it is not; knowing which relevance relations hold is all that matters.
Four objections to the SR account are considered here. First, for all Salmon's justifications, an SR explanation seems to contain too much information. To explain E when A was absent, why is it necessary to know that, had A been present, it would have been relevant? Why is it further necessary to know what would have been relevant to the occurrence of some alternative to a type E event that did not in fact occur? This information does not appear to be directly relevant to the explanatory task at hand, that of explaining E itself.
Second, the SR account seems vulnerable to the causal objection to the DN account; it seems to hold that A is explanatorily relevant to E whenever A is correlated with E, when in fact it is necessary that A be a cause of E. The barometer reading is statistically relevant to the storm in the example described earlier, but it does not thereby explain the storm.
Salmon is well aware of this problem and proposes that only certain kinds of statistical relevance relations are explanatory, namely, those that survive a screening off test. A factor A that is correlated with E is screened off from E by another factor B if, conditional on B, A makes no difference to the probability of E (just as, for example, conditional on the presence of the front, the barometer reading makes no difference to the probability of the storm), but conditional on A, B does make a difference to the probability of E. When there is some B that screens off A from E, Salmon says that A is not genuinely statistically relevant to E. And A's relevance will indeed disappear in a relevance table that also cites B. Note that Salmon's treatment does not make an explicit appeal to causal facts. Whether all problems concerning the role of causation in explanation can be solved in this way is unclear.
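To make the screening-off test concrete, here is a small sketch with invented probabilities for the barometer/storm case:

```python
# Screening off, applied to the barometer (A), front (B), storm (E) example.
# B screens off A from E if, given B, adding A makes no difference to P(E),
# while, given A, adding B does make a difference. All values are invented.

def screens_off(p_e_given_ab, p_e_given_b, p_e_given_a):
    no_difference_given_b = abs(p_e_given_ab - p_e_given_b) < 1e-9
    difference_given_a = abs(p_e_given_ab - p_e_given_a) > 1e-9
    return no_difference_given_b and difference_given_a

# Hypothetically: P(storm | front) = P(storm | front & reading) = 0.9, but
# P(storm | reading alone) = 0.7, since readings occasionally occur without fronts.
print(screens_off(p_e_given_ab=0.9, p_e_given_b=0.9, p_e_given_a=0.7))  # True
```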
A third objection dogs all the probabilistic accounts of explanation to be considered in this entry. Suppose that I strap a small but unreliable bomb to one wheel of your car. The probability that the bomb detonates, flattening your tire, is 50 percent. The trigger fails, but you drive over a nail and your tire goes flat anyway. The bomb has increased the probability of the flat, but it plays no role in its explanation. (Does the presence of the nail screen off the presence of the bomb? No, if it is assumed that the nail's effect is, like that of the bomb, probabilistic.) Sometimes statistically relevant factors are explanatorily irrelevant. Fourth and finally, it is not easy to see how the SR account might be generalized to give an account of the explanation of phenomena other than events.
The Unification Account
Michael Friedman (1974) suggests that, while the logical empiricists' official account of explanation is the expectability account, they have an unofficial account, too, on which to explain a phenomenon is to see it as an instance of a broad pattern of similar phenomena. Hempel himself occasionally writes in this vein, "The understanding [explanation] conveys lies … in the insight that the explanandum fits into, or can be subsumed under, a system of uniformities represented by empirical laws or theoretical principles" (1965a, p. 488). Friedman formulates what he calls a unification account of explanation, a particularly global version of this conception of explanation as pattern subsumption, on which a phenomenon is explained by the system of subsuming laws that best unifies all the phenomena there are. Philip Kitcher (1981, 1989) amends and extends Friedman's account in various ways.
The unifying power of a theory is proportional, on both Friedman's and Kitcher's accounts, not only to the number of phenomena that can be subsumed under the theory but also to the simplicity of the theory. (Kitcher imposes some additional desiderata.) The theory that best unifies all the phenomena, then, might be said to yield the most for the fewest: the most derivable phenomena from the fewest basic principles. It is characteristic of the unificationist position to insist that only the most unifying theory has full explanatory power, but this view does not in itself preclude the possibility of partial explanation by more weakly unifying theories.
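As a deliberately crude illustration only (this is neither Friedman's nor Kitcher's actual measure, both of which are considerably more refined), one might picture unifying power as coverage divided by the "cost" of the stock of basic patterns:

```python
# A toy stand-in for the unificationist trade-off: more derivable phenomena
# from fewer, simpler basic patterns means greater unifying power.
# The formula and all numbers are invented for illustration.

def unifying_power(phenomena_derived, num_patterns, avg_pattern_complexity):
    return phenomena_derived / (num_patterns * avg_pattern_complexity)

# Two hypothetical systematizations of the same 100 phenomena:
print(unifying_power(100, num_patterns=3, avg_pattern_complexity=2.0))   # ~16.7
print(unifying_power(100, num_patterns=20, avg_pattern_complexity=1.5))  # ~3.3
```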
Why be a unificationist? Friedman suggests that the virtue of the most unifying theory is that it reduces to a minimum the number of fundamental incomprehensibilities, that is, unexplained explainers. Perhaps a more common justification for unificationism is that suggested by Hempel: To understand something is to fit it into a wider pattern. Add that the wider the pattern, the more powerful the explanation, and one is well on the way to unificationism.
Many of the virtues of the unification account stem from the great versatility of the pattern subsumption relation. A subsuming pattern need not be exceptionless, so not only probabilistic explanation but also other forms of nondeductive explanation fit the unification mold. Darwinian explanation, for example, can be seen as accounting for a trait by seeing it as part of a widespread pattern of adaptedness in the biological world—though Kitcher (1989, §5), for one, resists this view of evolutionary explanation, and indeed, argues that all explanations can be formulated as deductive arguments. More inclusively, Kitcher argues that unificationism supplies an effective account of mathematical, as well as scientific, explanation. For some further claimed advantages of the unification over the causal approach, see Kitcher (1989, §3).
Unificationism promises to give a powerful and subtle account of explanatory relevance. For example, an explanation of a teaspoon of salt's dissolving that cites the law "all hexed salt dissolves in water" is rejected as insufficiently unifying, because the law is both more complex than, and covers fewer phenomena than, the law "all salt dissolves in water." More interesting, the unificationist can give an account of why many of the low-level details of the implementation of biological, psychological, economic, and social mechanisms seem to be irrelevant to understanding those mechanisms' behavior; the details of such an account, however, have yet to be worked out (Kitcher 1984).
Two important classes of objections stand in the way of the unification approach to explanation. First is the familiar question concerning the role of causation in explanation. Can the unification account explain why explanation so often, perhaps always, seems to follow the direction of causation? One might think not: The explanation of a flagpole's height in terms of the length of its shadow seems to cite just as unifying a pattern as the explanation of shadow length in terms of pole height—the same pattern, in fact.
Kitcher (1981) takes up the challenge, arguing that the unification account reproduces the asymmetries in explanation usually put down to something causal. On his view, a unifying pattern is an argument pattern. Since arguments have a direction, the pattern in which the pole height explains the shadow length is distinguished from the pattern in which the length explains the height. The unifying power of each must, therefore, be assessed separately. To solve the problem, the correct comparison is not between the unifying power of these two argument patterns, but between the unifying power of the pattern that wrongly explains pole height in terms of shadow length and that of the pattern one usually cites to explain the height of a flagpole.
Kitcher calls this latter argument pattern an origin and development pattern and claims that it is instantiated by, and so subsumes, every account one gives of the properties of a thing that describes its origin and development, as when, for example, one tells the story of the construction and erection of the flagpole. The pattern is enormously general, then, and so easily wins the right to explain the height of the flagpole. Having argued, in effect, that unificationist explanation tends to proceed in the direction of causation, Kitcher then makes the dramatic claim that it is the order of explanation that determines the order of causation; one's causal beliefs depend on and reflect one's explanatory practice.
The second objection to explanatory unificationism is that it makes explanation an overly global matter. How one phenomenon is to be explained depends, according to the unificationist, on what best unifies all the other phenomena, therefore on what the other phenomena are. To many writers, it seems that finding an explanation does not require, even in principle, knowledge extending to all corners of the universe. A more moderate or local unificationism is possible, of course, but another natural place to look for locality is in the causal approach to explanation.
The Causal Approach
In 1965 Hempel could regard the idea that there is something causal to explanation over and above the exceptionless regularities cited by a DN explanation as lacking a "precise construal" (1965a, p. 353). Since that time philosophers have come to see claims about causal relations as having a rich empirical content that goes far beyond mere regularities and their instantiation (see Spirtes et al. [2000], though the tradition began well before Hempel made his remark, with Reichenbach [1956]). Even metaphysical empiricists, then, can agree that there is a distinctive causal approach to explanation. Thanks to the development of sophisticated but wholly empiricist accounts of causation (again beginning with Reichenbach), they can go further and in good conscience endorse the causal approach.
Strong arguments suggest that the causal approach is correct. The first and most persuasive is the equation of causal and explanatory direction suggested by the flagpole/shadow and barometer/storm examples. The second is the observation that a requirement of causal relevance between explainers and the explained will deal with the problem of the hexed salt and similar cases. The third is that one can give a cause for a phenomenon without being able to predict it. In those counterexamples to the DN and IS accounts where grounds insufficient for prediction nevertheless seem to be sufficient for explanation—the explanation of paresis by syphilis and of a trait's evolution by its conferring a certain benefit—the force of the explanation might well be thought to lie in the aptness of the cited cause. The causal approach is now dominant in the philosophy of explanation.
The most important divide within the causal approach concerns the nature of the causal relation called on to do the explanatory work. Salmon (1984) invokes a notion of causation close to fundamental physics and declares the explanation of an event to consist of the sum total of causal influences on the explanandum in this fundamental level sense.
Such an account, however, appears to count far too many events as explanatorily relevant. As Salmon concedes, though a baseball causally influences the window that it shatters, and so rightly counts as a part of the explanation of the shattering, so do the shouts of the ball players, which cause the window to vibrate even as it is struck by the ball. The shouts, too, then, will be counted on Salmon's approach as a part of the explanation of the shattering. However, they are surely (except perhaps in some unusual cases) irrelevant.
A popular response to this worry begins with the observation that, while it is correct to say that the ball caused the window to shatter, it is not correct to say that the shouts caused the window to shatter. Such locutions suggest that there is another kind of causal relation, distinct from Salmon's fundamental physical relation, that holds between the ball and the shattering but not between the shouts and the shattering.
How can it be that Salmon's relation holds between the shouts and the shattering but the new causal relation does not? One response is that Salmon's relation is based on a faulty theory of causation, but this is not the answer normally given. Rather, the new causal relation is understood as relating events at all levels, whereas Salmon's causality relates events only at the lowest level.
The high-level event of the shattering is the event that would have occurred no matter what the physical details of the shattering, that is, no matter which shards of glass flew where. The low-level event is the event individuated by all the shattering's physical details; this event only occurred, then, because the window shattered in exactly the way that it did. (Some writers call high-level events states of affairs or facts and hold that events proper are always low level.)
When one asks for an explanation of the shattering, one is normally asking for an explanation of the fact that the window shattered, not that it shattered in exactly the way it did. Thus, one asks for the causes of the high-level event, not the low-level event. Even though the low- and high-level events are coextensive in space and time, it seems that there are causes of the former that are not among the causes of the latter, namely, the events that determine, given that the shattering occurred, exactly how it occurred. These detail-determining events, because they are not causes of the explanandum, the shattering, do not explain it (for more on the potential for different causal relations between low- and high-level events, see Bennett 1988).
The idea, in short, is that there are many different levels of explananda, corresponding to different levels of eventhood, and different causal relations at all these different levels. Salmon's fundamental physical causation, then, is only one among many different levels of causation. Add this conception of causation as a multilevel relation to the causal approach to explanation, and one gets a theory on which the explainers of an event depend on the level of the event. (This level dependence of the explanation is also characteristic of the DN, IS, and SR accounts.)
The best-known multilevel theory of causation is the counterfactual account. If the shouting had not happened, the high-level shattering event would still have occurred, but because it would have happened in a different way, the low-level shattering event would not have occurred. Thus, the high-level shattering does not, whereas the low-level shattering does, counterfactually depend on the shouting. On a counterfactual approach to causation, this implies that the shouting is a cause of the low-level shattering but not the high-level shattering, and so, taking this multilevel relation as the explanatory causal relation, that the shouting does not explain the high-level shattering, even though—as its causation of the low-level shattering shows—it is connected causally to the shattering in Salmon's sense. For this approach to explanation, but based on a more sophisticated counterfactual account of causation, see David Lewis (1986); for a different though related multilevel approach, see James Woodward (2003).
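The two-level counterfactual test can be pictured with a toy model (entirely invented) in which the ball does the shattering and the shouts merely perturb the exact pattern of shards:

```python
# A toy model of the counterfactual test at two levels of eventhood: the
# high-level event is "the window shattered"; the low-level event is "the
# window shattered in exactly this pattern." All details are invented.

def outcome(ball_thrown, players_shout):
    """Return (high-level event occurred, low-level shatter pattern)."""
    shattered = ball_thrown                    # the ball alone does the shattering
    # The shouts perturb the glass, so they change the exact pattern of shards.
    pattern = ("jagged" if players_shout else "clean") if shattered else None
    return shattered, pattern

actual_high, actual_low = outcome(ball_thrown=True, players_shout=True)
counter_high, counter_low = outcome(ball_thrown=True, players_shout=False)

# Counterfactual dependence on the shouting:
print(actual_high != counter_high)  # False: the high-level shattering does not depend on it
print(actual_low != counter_low)    # True: the low-level shattering does depend on it
```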
An alternative to the multilevel approach is a two-factor approach to causal explanation, on which all explainers of an event must causally influence that event at the fundamental physical level, as prescribed by Salmon, but on which they must pass in addition a further test for explanatory relevance. Salmon (1997) himself suggests, late in his career, that the further test might be one of statistical relevance; only the causal influences that change the probability of an event explain the event. Michael Strevens (2004) suggests a different two-factor approach.
An advantage of the two-factor approach is the relatively modest demands it makes of the metaphysics of causation, transferring as it does much of the burden of determining explainers to the further test for relevance. What, then, to say about claims apparently stating the existence of high-level causal relations, such as "The ball's hitting the window, but not the players' shouting, caused the window to shatter"? Strevens (2004) suggests that locutions of this form are in fact causal-explanatory claims, asserting the explanatory relevance of certain causal influences (compare Kitcher's theory of causation mentioned earlier).
Despite the popularity of the causal approach, it is relatively undeveloped. For example, little has been written about the causal explanation of laws; it is usually said that they are explained by describing their underlying mechanisms, but not every law explicitly concerns causes and effects. Equally, not every event explanation appears to involve the delineation of causes. For examples of both kinds of worry, see Kitcher (1989, §3).
Work on the causal approach to probabilistic event explanation is more advanced. Two main currents can be distinguished in the literature. The first springs from the idea that probabilities themselves have the character of dispositions and are able to cause the events to which they are attached. The probability of one-half that a tossed coin lands heads, for example, is interpreted as a statistical disposition that causes the coin (in most cases) to land heads about one-half of the time (Fetzer 1981).
The second current flows from the idea that other events or states of affairs can cause events by making a difference to the probabilities of those events. This view is compatible with the dispositional view of probabilistic causality, but it is compatible also with its rejection. Paul Humphreys writes that "chance is literally nothing" (1989, p. 65) and so cannot cause anything itself, but that events nevertheless cause other events in an indeterministic world by making a difference to their probabilities. Because probability itself is impotent, Humphreys holds that the kind of difference a cause makes to the probability of its effect is irrelevant. It does not matter whether the change in probability is positive or negative, large or small (compare with the SR account). Whatever the change, the factor responsible for the change is a cause and so ought to be cited in an explanation of the effect.
Peter Railton (1978) offers an account of probabilistic explanation that makes room for both conceptions of the relation between probability and causation. On what Railton calls his DNP account, an event is explained by deriving its exact probability from the appropriate initial conditions, background conditions, and laws. Formally, a DNP explanation resembles, as its name suggests, a DN explanation, except that it is the probability of the explanandum, not the explanandum itself, that is deduced. In contrast to Hempel's IS account of probabilistic event explanation, the DNP account does not require a high probability for the explanandum, and because it asks for an accurate derivation of the exact probability, it requires, like the SR account, that an explanation cite all factors probabilistically relevant to the explanandum, whether known or unknown, and (though Railton does not give a criterion for relevance) no irrelevant factors. Perhaps most important of all, the DNP account is, unlike Hempel's various accounts, open to a causal interpretation: The factors that make a difference to the probability, and even the probability itself, can be considered causes of the explanandum, and the explanation successful precisely because it specifies these causes.
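Schematically, a DNP explanation might be reconstructed roughly as follows; this is a hedged paraphrase of the form Railton (1978) describes, not a quotation:

```latex
% A rough schematic of a DNP explanation of the chance event Ge:
\begin{align*}
&\text{(1)}\quad \forall x\,[Fx \rightarrow P(Gx) = p]
    && \text{a probabilistic law} \\
&\text{(2)}\quad Fe
    && \text{the particular case } e \text{ satisfies } F \\
&\text{(3)}\quad P(Ge) = p
    && \text{deduced from (1) and (2)} \\
&\text{(4)}\quad Ge
    && \text{addendum: } Ge \text{ in fact occurred}
\end{align*}
```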
An important lacuna in causal accounts of probabilistic explanation is a detailed treatment of probabilistic explanation in sciences such as statistical mechanics and evolutionary biology, where there is some possibility at least that the underlying processes producing the usual explananda are approximately deterministic. The consensus is to regard such explanations as not genuinely probabilistic; Railton (1981) suggests that they can be reinterpreted as reporting on the robustness of the underlying processes with respect to the event to be explained, that is, the processes' tendency to produce the same kind of outcome given a variety of initial and background conditions.
Other Issues
This entry will conclude with a brief sketch of some issues concerning scientific explanation not mentioned earlier. First is the question of pragmatics in explanation. Most writers hold that pragmatics affects the explanatory enterprise in only one, relatively minor, way: When an explanation is transmitted from one person to another, the act is subject to the usual pragmatics of communication. This position on pragmatics dovetails with the majority view that the explanatory facts are not essentially communicative; explanations exist independently of anyone's intention to explain anything to anyone else.
Both Bas C. van Fraassen (1980, chapter 5) and Peter Achinstein (1983) dissent from this majority, holding that there is no explanation without communication and finding in the pragmatics of communication an account of many facets of explanatory practice. However, this literature has yet to answer the question why science treats explanations as preexisting facts to be discovered, rather than as entities created in the act of communication.
Second, it is an open question whether there is a single standard for evaluating scientific explanations that has remained constant since the beginning of modern science, let alone for the entire history of human explanation. The accounts of explanation in this entry assume, of course, a positive answer, but most work on explanation lacks a substantial historical dimension.
A third issue is idealization in explanation: While almost every account of explanation surveyed earlier requires that explanations contain no false representations of reality, the practice of using idealized models in scientific explanation is widespread. These models deliberately misrepresent the nature of the systems they describe; the ideal gas model, for example, represents gas molecules as having zero volume, but despite this distortion of the facts, it is considered to explain certain behaviors of real gases. Some writers regard idealization as a temporary or practical measure, out of place in a perfected science. Strevens (2004) suggests that on both the unificationist and a certain causal approach to explanation idealizations can be seen as serving a genuine and enduring explanatory role.
See also Causation: Metaphysical Issues; Causation: Philosophical Problems in; Hempel, Carl Gustav; Laws, Scientific.
Bibliography
Achinstein, Peter. The Nature of Explanation. New York: Oxford University Press, 1983.
Bennett, Jonathan. Events and Their Names. Indianapolis, IN: Hackett, 1988.
Coffa, J. Alberto. "Hempel's Ambiguity." Synthese 28 (1974): 141–163.
Duhem, Pierre. The Aim and Structure of Physical Theory. Translated by Philip P. Wiener. Princeton, NJ: Princeton University Press, 1954.
Fetzer, James H. Scientific Knowledge: Causation, Explanation, and Corroboration. Dordrecht, Netherlands: D. Reidel, 1981.
Friedman, Michael. "Explanation and Scientific Understanding." Journal of Philosophy 71 (1974): 5–19.
Hempel, Carl G. "Aspects of Scientific Explanation." In his Aspects of Scientific Explanation and Other Essays in the Philosophy of Science, 331–496. New York: Free Press, 1965a.
Hempel, Carl G. Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. New York: Free Press, 1965b.
Hempel, Carl G. "The Logic of Functional Analysis." In his Aspects of Scientific Explanation and Other Essays in the Philosophy of Science, 297–330. New York: Free Press, 1965c. Revised version of a paper originally published in Symposium on Sociological Theory, edited by L. Gross (New York: Harper and Row, 1959).
Hempel, Carl G., and Paul Oppenheim. "Studies in the Logic of Explanation." Philosophy of Science 15 (1948): 135–175.
Humphreys, Paul. The Chances of Explanation: Causal Explanation in the Social, Medical, and Physical Sciences. Princeton, NJ: Princeton University Press, 1989.
Kitcher, Philip. "Explanatory Unification." Philosophy of Science 48 (1981): 507–531.
Kitcher, Philip. "Explanatory Unification and the Causal Structure of the World." In Minnesota Studies in the Philosophy of Science, vol. 13, Scientific Explanation, edited by P. Kitcher and Wesley C. Salmon, 410–505. Minneapolis: University of Minnesota Press, 1989.
Kitcher, Philip. "1953 and All That: A Tale of Two Sciences." Philosophical Review 93 (1984): 335–373.
Kyburg, Henry Ely. "Comment." Philosophy of Science 32 (1965): 147–151.
Lewis, David. "Causal Explanation." In Philosophical Papers. Vol. 2, 214–240. New York: Oxford University Press, 1986.
Railton, Peter. "A Deductive-Nomological Model of Probabilistic Explanation." Philosophy of Science 45 (1978): 206–226.
Railton, Peter. "Probability, Explanation, and Information." Synthese 48 (1981): 233–256.
Reichenbach, Hans. The Direction of Time. Edited by Maria Reichenbach. Berkeley: University of California Press, 1956.
Salmon, Wesley C. "Causality and Explanation: A Reply to Two Critiques." Philosophy of Science 64 (1997): 461–477.
Salmon, Wesley C. Explanation and the Causal Structure of the World. Princeton, NJ: Princeton University Press, 1984.
Salmon, Wesley C. Four Decades of Scientific Explanation. Minneapolis: University of Minnesota Press, 1990.
Salmon, Wesley C. "Statistical Explanation." In Statistical Explanation and Statistical Relevance, 29–87. Pittsburgh: University of Pittsburgh Press, 1970.
Scriven, Michael. "Explanation and Prediction in Evolutionary Theory." Science 130 (1959): 477–482.
Spirtes, Peter, Clark Glymour, and Richard Scheines. Causation, Prediction, and Search. 2nd ed. Cambridge, MA: MIT Press, 2000.
Strevens, Michael. "The Causal and Unification Accounts of Explanation Unified—Causally." Noûs 38 (2004): 154–176.
Van Fraassen, Bas C. The Scientific Image. New York: Oxford University Press, 1980.
Woodward, James. Making Things Happen: A Theory of Causal Explanation. New York: Oxford University Press, 2003.
Michael Strevens (2005)
Explanation
What causes one human being to kill another, not for anything the victim has done but simply because the victim belongs to a particular religious, ethnic, or communal group? Such behavior confounds rationality, and analysts are forced to focus either on identifying the broad macrophenomena and the structural-cultural factors that correlate with genocide or on specifying the psychological processes that might contribute to genocide.
The most frequently cited precipitating factors or facilitating conditions that correlate with genocide and ethnic violence are political unrest and economic upheavals. The Holocaust—certainly the best known genocide—is usually "explained" by reference to the political dislocations resulting from World War I, especially the ensuing breakup of political empires, the punitive Versailles Treaty, a weak Weimar Republic, and the economic depression that gripped the world but which was particularly acute in Germany. The breakup of the Ottoman Empire (which gave rise to the Armenian genocide) and the disintegration of Yugoslavia and the USSR (which was followed by ethnic cleansing in Bosnia) provide further illustrations of macro-events contributing to genocide.
Beyond this, genocide occurs most frequently in plural societies in which there are diverse racial, ethnic, and/or religious groups that exhibit persistent and pervasive communal cleavages. A strong overlap between such cleavages and political and socio-economic inequities, plus a history of conflict between the diverse groups, also encourages genocide and ethnic violence. Genocide rarely occurs in political regimes that are not totalitarian or authoritarian. This was evident during the Holocaust and in the recent genocides in the Balkans and Africa (Rwanda-Burundi, the Sudan). The isolation and secrecy that accompany totalitarian regimes that lack a free press are major contributors, enabling elites to manipulate internal tensions and turn them toward violence. Such structural-cultural factors form the foundation for another category of explanation.
Psychological Factors
The richest and most varied explanations of genocide are found at a more personal level, all focusing on the psychology of the genocidalist. The psychoanalysis of genocidal leaders such as Hitler has led some scholars, such as Alan Bullock, to focus attention on their tendency toward neurotic-psychopathic personalities. The argument here is that certain people have a deep-seated and psycho-pathological need that leads them toward genocide, either through the elite manipulation of masses or the actual, personal commission of genocide. Other scholars, including Theodor Adorno and Bob Altemeyer, focus on the extent to which an entire society can exhibit patterns of behavior, such as child-rearing or authority relations in school, that result in certain kinds of psychodynamics, such as the authoritarian personality, that encourage genocide.
Scholars such as Daniel J. Goldhagen still accept explanations of genocide that are painted in such broad cultural terms, but most social psychologists and historians, including Stanley Milgram and Christopher Browning, find the situation more complex, arguing that situational factors can turn even an ordinary person into a genocidalist. The fundamental assumption for these scholars is that there is a median personality around which a great deal of variance occurs. Analysts in this school focus on external stimuli and on understanding how situational or contextual effects can trigger genocide in ordinary people.
Studies of social cognition find all political behavior strongly influenced by how people think about themselves and the social world, especially how people select, remember, interpret, and use social information to make judgments and decisions. Attitudes, schemas, and social representations all offer ways in which the definition of social identities of self and others might be conceptualized, and provide the building blocks upon which more detailed theories of socio-political identity and prejudice are built. Such approaches include social role theories focusing on the "internalized role designations corresponding to the social location of persons" (Stryker, 1987, p. 84) and stress the shared behavioral expectations that become salient. Such explanations have been offered to explain the traditional "I was just following orders" excuse for genocide. Robert Jay Lifton's intriguing 1986 study of Nazi doctors turned the concept of social roles upside down by asking: How could doctors and health officials, dedicated to saving lives, utilize their knowledge to perfect killing? The answer—a desire to protect the German body politic from infestation by inferior and diseased untermenschen—suggests how traditional social roles can be utilized to lead people to genocide.
Other social psychologists focus more on the cognitive process of drawing boundaries and categorizing individuals in conflict situations. The social-identity and self-categorization literatures suggest that perceptions of competition for scarce resources reinforce in-group/out-group distinctions but are not necessary conditions for in-group favoritism and inter-group discrimination to occur. The social identity theory employed by Michael A. Hogg and Dominic Abrams, based on Henri Tajfel's "minimal group paradigm," has found that in situations of group decision making, people tend to favor their own membership group over out-groups, even when these groups are artificial laboratory constructs and competition for resources between groups is absent. Previous perspectives in group psychology, exemplified by the work of Muzafer Sherif, explained group differentiation in terms of real or perceived competition between in-groups and out-groups, but Tajfel's research suggests that the mere formation of otherwise meaningless groups may produce in-group favoritism. Tajfel argues that groups provide their members with positive self-esteem, and that group members are therefore motivated to enhance their image of the in-group in relation to relevant out-groups.
The Self-Categorization Theory of Group Formation
A 1987 study by John C. Turner and Michael Hogg suggests that the formation of psychological groups is driven by the cognitive elaboration of one's self-identity in comparison with others and implies mechanisms for the formation of political preferences. The salient level of self-categorization and the determination of which schemas and categories are evoked by a given political object or objects will interact to shape a person's political preferences in relation to that political object. The key assumptions of Turner's self-categorization theory of group formation suggest that self-categorizations are hierarchical. In other words, the category of "human being" functions as the most inclusive and superordinate group level, below which in-group/out-group categories based on social comparisons of gender and ethnicity or other dimensions form an intermediate level categorization, and there are subordinate level categories that distinguish individuals as unique.
Turner's framework assumes that the cognitive representation of the self is a multi-faceted affair, and that different portions of that self become salient in different contexts. The theory hypothesizes that factors enhancing the relevance of in-group/out-group categorizations increase the perceived identity between self and in-group members, thus depersonalizing individual self-perception on the stereotypical dimensions that define the relevant in-group membership. This makes the depersonalization of self-perceptions the critical process underlying group behavior, such as stereotyping, ethnocentrism, cooperation and altruism, emotional contagion, collective action, shared norms, and social influence processes.
Members of groups who are perceived as different from the self will tend to be seen in terms of stereotypes. Self-categorization theory builds upon social identity theory by arguing that the self-categorization with a cognitive representation of the group results in the depersonalization of self and the homogenization of both the in-group and the out-group, based on dimensions that reflect the prototypicality or stereotypicality of members of each group. Thousands of experiments underlying social identity theory—for instance, those conducted by A. Gagnon and R. Y. Bourhis—have consistently shown that individuals will identify with the in-group, support group norms, and derogate out-group members along stereotypical lines, even when there is no individual gain at stake. The introduction of "superordinate goals," which is posited as a solution by some realistic conflict theorists, can be seen instead as the cognitive reclassification of social identity by individuals into another social identity category.
This cognitive reclassification of groups may provide the key to ending genocide, prejudice, and ethnic violence; Serbs and Croats can think of themselves as Yugoslavs. Preliminary empirical work suggests cognitive categorization may affect all participants in genocide, not just genocidalists. Kristen Renwick Monroe's work on rescuers, published in 1996 and 2004, and James Glass's 1997 study of genocidalists have noted the importance of cognitive classifications during the Holocaust. A 1997 study by Lina Haddad Kreidie and Kristen Monroe found similar categorization and dehumanization in communal violence in the Middle East. Historical literature on slaves within the United States also points to the process of declassification and recategorization as critical before people feel justified in mistreating and eventually killing other human beings. This comparative work suggests that if we can declassify people, we also can reclassify them in an upward manner. The process, in other words, works both ways. Further work to determine how this recategorization process works may provide an answer to the implicit question underlying most analyses of genocide: How can it be stopped?
SEE ALSO Genocide; Philosophy
BIBLIOGRAPHY
Adorno, Theodor, et al. (1950). The Authoritarian Personality. New York: Harper and Row.
Altemeyer, Robert (1988). Enemies of Freedom: Understanding Right-Wing Authoritarianism. San Francisco, Calif.: Jossey-Bass.
Browning, Christopher (1992). Ordinary Men: Reserve Police Battalion 101 and the Final Solution in Poland. New York: Aaron Asher/HarperCollins.
Bullock, Alan (1991). Hitler and Stalin. London: HarperCollins.
Gagnon, A., and R. Y. Bourhis (1996). "Discrimination in the Minimal Group Paradigm: Social Identity or Self-Interest?" Personality and Social Psychology Bulletin 22(12):1289–1301.
Glass, James (1997). Life Unworthy of Life: Racial Phobia and Mass Murder in Hitler's Germany. New York: Basic Books.
Hogg, Michael A. (1992). The Social Psychology of Group Cohesiveness: From Attraction to Social Identity. New York: New York University Press.
Hogg, Michael A., and Dominic Abrams (1988). Social Identifications: A Social Psychology of Intergroup Relationships and Group Processes. New York: Routledge.
Kreidie, Lina Haddad, and Kristen Monroe (1997). "The Perspectives of Islamic Fundamentalists and the Limits of Rational Choice Theory." Political Psychology 18(1):19–43.
Lifton, Robert Jay (1986). The Nazi Doctors: Medical Killing and the Psychology of Genocide. New York: Basic Books.
Milgram, Stanley (1974). Obedience to Authority: An Experimental View. New York: Harper and Row.
Monroe, Kristen Renwick (1996). The Heart of Altruism: Perceptions of a Common Humanity. Princeton, N.J.: Princeton University Press.
Monroe, Kristen Renwick (2004). The Hand of Compassion: Portraits of Moral Choice during the Holocaust. Princeton, N.J.: Princeton University Press.
Sherif, Muzafer (1973). Groups in Harmony and Tension: An Integration of Studies on Intergroup Relations. New York: Octagon Books.
Stryker, Sheldon (1987). "Identity Theory: Developments and Extensions." In Society and Identity: Psychosocial Perspectives, ed. Krysia Yardley and Terry Honess. New York: Wiley.
Tajfel, Henri (1981). Human Groups and Social Categories: Studies in Social Psychology. New York: Cambridge University Press.
Tajfel, Henri, and John C. Turner (1979). "An Integrative Theory of Intergroup Conflict." In The Social Psychology of Intergroup Relations, ed. W. G. Austin and S. Worchel. Monterey, Calif.: Brooks/Cole.
Turner, John C., and Michael A. Hogg (1987). Rediscovering the Social Group: A Self-Categorization Theory. Oxford, U.K.: Basil Blackwell.
Kristen Renwick Monroe
Explanation
When one wants to understand something, one asks for an explanation. In principle, everything can be the object of explanation. Some explanations, such as in classification and interpretation, explain what something is: What is a whale? It is a mammal. Could you explain the movie Dr. Strangelove to me? It is about the Cold War. Some explanations explain how something works or how something is possible: How does the door open? Press the button. How is it possible that some children survive the cruelest experiences? There are adults who love them and care about them. Finally, some explanations explain why something happens: Why did the aircraft crash? Because one of its engines came loose. In all three cases, the explanation is supposed to yield knowledge. Not all why-questions, however, are requests for an explanation, and thus not every answer to a why-question yields knowledge. Some why-questions express the wish to find consolation (Why have you abandoned me?) or the wish to get rid of prejudices (Why should men and women not get the same salary for the same job?). In the science and religion discussion it is intensely debated whether religious answers to a question such as "Why is the universe so special and finely tuned for life?" are explanations, yielding knowledge, or whether they have other functions in the believer's life.
The covering law model
Explanations of why something is the case are quite often related to causation. Other explanations are functional or, as they are also called, teleological: the white fur of the polar bear, for example, is explained by its camouflage function. A common view is that such explanations are actually causal explanations, referring to past causes in evolution that led to the natural selection of the biological trait in question. Causal and functional (teleological) explanations are thus seen as two different variants of the so-called covering law model.
According to the covering law model, an explanation of an event consists in subsuming it under a causal law: All metals expand when heated; this rod is metallic and it was heated; therefore, it expanded. There are four conditions for such a scientific explanation (a schematic rendering of the whole argument follows the list):
- The explanandum (The rod has expanded.) has to follow logically from the explanans (All metals expand when heated; this rod is metallic and it was heated.). Only if the explanandum can be deduced from the initial conditions and the laws invoked is it genuinely explained; the same derivation also justifies the prediction of a similar event, even one that has not yet been observed.
- The causal laws invoked (All metals expand when heated.) have to be genuine laws and not merely all-statements (accidentally true generalizations). It is not an explanation to say: "All apples in this basket are red; this apple is from the basket; therefore, the apple is red."
- The explanans needs to have empirical content; it should be possible, at least in principle, to confirm or falsify the explanans through experience. Without this condition, putative explanations such as God's wrath as the cause of historical catastrophes could not be excluded from science.
- The explanans has to be true. If it were false, the material implication from the explanans to any explanandum whatsoever would be true for purely logical reasons, and the explanandum would not be genuinely explained.
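A minimal schematic rendering of the covering law (deductive-nomological) argument, using the metal-rod example above, may make the structure plainer; the labels L, C, and E are introduced here for convenience and are not the entry's own notation:

\[
\begin{array}{ll}
L_1: & \text{All metals expand when heated.} \\
C_1: & \text{This rod is metallic.} \\
C_2: & \text{This rod was heated.} \\
\hline
E: & \text{This rod expanded.}
\end{array}
\]

On this model the explanation succeeds exactly when E follows deductively from the laws and initial conditions above the line and the four conditions listed above are satisfied.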
Although the covering law model cannot be applied everywhere in its strict form, it is supposed to represent an ideal that at least all explanations in the natural sciences that answer why-questions ought to strive to attain. It satisfies one feature that one would expect of such explanations, namely, that it explain why a certain event occurred rather than another. By subsuming an event under causal laws, it is shown that the event had to occur. The price to be paid for this is determinism, which excludes the possibility of exceptions. One way of coping with this difficulty is to allow nondeterministic as well as deterministic explanations. Thus, the explanation of a patient's death from lung cancer may take the form of a statistical explanation referring to the frequency of death from lung cancer among heavy smokers. According to this variant of the covering law model, the explanandum is not deduced from the explanans; it is supposed to follow from it with a high inductive probability. The reference is not to exceptionless laws but to statistical regularities concerning events and to tendencies concerning human actions.
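The statistical variant just described can be sketched in a parallel way; in the following rendering, r stands for some suitably high relative frequency (the value is illustrative, not taken from the entry), and the conclusion is supported only inductively rather than deduced:

\[
\begin{array}{l}
\text{The frequency of death from lung cancer among heavy smokers is } r \text{ (with } r \text{ close to } 1\text{).} \\
\text{This patient smoked heavily.} \\
\hline
\text{This patient died of lung cancer.} \quad [r]
\end{array}
\]

The bracketed r marks the strength of the inductive support: unlike the deductive case, the premises make the explanandum highly probable but do not guarantee it.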
Rational reconstruction
Another model of explanation, proposed as an alternative to the covering law model in, for instance, historical research, consists in explaining an event as the result of human action by rational reconstruction. First, the event is shown to be an intentional act that the agent in question undertook in accordance with beliefs that seemed reasonable in the situation at issue. Second, a critical examination of whether the agent's beliefs were true or reasonable helps to explain why the action resulted in precisely that event. In some historical cases, for example, the fact that a belief about the enemy's strength was false may explain why an army was defeated in a certain battle.
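One common way of spelling out the first step is the practical-inference schema associated with Georg Henrik von Wright (listed in the bibliography below); it is offered here as a clarifying sketch rather than as something the entry itself states:

\[
\begin{array}{l}
\text{Agent } A \text{ intended to bring about } E. \\
A \text{ believed that, in the situation, doing } X \text{ was necessary to bring about } E. \\
\hline
\text{Therefore, } A \text{ set out to do } X.
\end{array}
\]

The second step then asks whether A's beliefs were in fact true or reasonable; where they were false, as in the battlefield example, that falsity helps explain why the action issued in an unintended outcome.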
Explanation in the science-religion dialogue
One frequently discussed question in theology and in the philosophy of religion concerns the relationship between scientific and religious explanations: whether they are on the same categorical level or belong to completely different domains. The difference between scientific and religious explanations is sometimes identified by pointing out that science gives rise to questions that go beyond its own power to answer, for instance, the question "Why is the universe so special and finely tuned for life?" Since this question is not formed within science, it cannot be answered scientifically. Instead, it is a metaphysical why-question, and so it can be answered, for instance, theistically by saying "because the universe is created by God, who wills that it be so." Since it is not always clear to what extent cosmological theories about the beginning of the universe are metaphysically laden, it is also not clear to what extent the theistic answer competes with scientific explanations or with the metaphysical aspects of some of these theories.
See also Causality, Primary and Secondary; Causation
Bibliography
Clayton, Philip D. Explanation from Physics to Theology: An Essay in Rationality and Religion. New Haven, Conn.: Yale University Press, 1989.
Collingwood, Robin George. The Idea of History, rev. edition. Oxford: Clarendon Press, 1993.
Dray, William H. Laws and Explanation in History. Oxford: Oxford University Press, 1957.
Gregersen, Niels Henrik, and van Huyssteen, J. Wentzel, eds. Rethinking Theology and Science: Six Models for the Current Dialogue. Grand Rapids, Mich.: Eerdmans, 1998.
Hempel, Carl G. Aspects of Scientific Explanation and Other Essays in the Philosophy of Science. New York: Free Press, 1965.
Polkinghorne, John Charlton. Quarks, Chaos, and Christianity: Questions to Science and Religion. London: Triangle, 1994.
van Fraassen, Bas C. The Scientific Image. Oxford: Clarendon Press, 1980.
von Wright, Georg Henrik. Explanation and Understanding. London: Routledge and Kegan Paul, 1971.
Eberhard Herrmann
explanation
ex·pla·na·tion /ˌekspləˈnāshən/ • n. a statement or account that makes something clear: the birth rate is central to any explanation of population trends. ∎ a reason or justification given for an action or belief: my application was rejected without explanation.