Rationality
Rationality in its ordinary sense is reasonableness. It requires justified beliefs and sensible goals as well as judicious decisions. Scholars study rationality in many ways and adopt diverse views about it.
Some theorists adopt a technical definition of rationality, according to which it is just maximization of utility. This definition is too narrow. It considers only adoption of means to reach ends, that is, instrumental rationality. It also evades a major normative question, namely, whether rationality requires maximization of utility. The definition simply stipulates an affirmative answer.
A traditional theory of the mind takes reason as a mental faculty. It characterizes humans as rational animals because they have the faculty of reason, whereas other animals lack that faculty. According to this tradition, any behavior resulting from reasoning is rational. This account of rationality sets the bar low. Most accounts hold that the products of reasoning must meet certain standards before they qualify as rational. A conclusion, for example, must fit the evidence to be rational. It is not rational simply because it results from an inference. Reasoning must be good to yield rational beliefs reliably.
For simplicity, some theorists take being rational to be the same as being self-interested. Being rational differs from being self-interested, however. Promoting self-interest means doing what is good for oneself. Doing what is good for others promotes their interests, not one’s own. Rationality may require some measure of self-interestedness but does not require exclusive attention to self-interest. It permits altruism, as Amartya Sen (1977) and Howard Rachlin (2002) explain.
Epistemologists treat justified belief. Under one interpretation, a justified belief is just a rational belief. However, other interpretations of justification are common because a conventional view takes knowledge to be true, justified belief. Making justification fit into that view of knowledge motivates taking justified belief to differ from rational belief. Children rationally believe many true propositions without having knowledge of them because the grounds for their beliefs do not amount to justification.
Rationality is a normative concept. Principles of rationality state how people should behave rather than how they behave. However, some fields assume that people are rational by and large and then use principles of rationality to describe and explain behavior. For instance, some economic theories assert that consumers make purchases that express their preferences. They take this as a fact about consumers’ behavior rather than as a norm for it. Psychologists seeking to infer a person’s beliefs and desires from the person’s behavior may assume that behavior maximizes utility. The assumption simplifies inference of beliefs and desires. Several prominent representation theorems show that if a person’s preferences concerning acts meet certain axioms, such as transitivity, then one may infer the person’s probability and utility assignments (given a choice of scale) from the person’s preferences, under the assumption that preferences concerning acts agree with their expected utilities. Richard Jeffrey ([1965] 1983) presents a theorem of this sort.
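As a toy illustration of this inverse direction, the following sketch (not Jeffrey's construction itself; the agent model and helper names are hypothetical) recovers a degree of belief from preferences between bets: the objective lottery chance at which the agent becomes indifferent to a bet on an event reveals the agent's probability for that event.

```python
# A toy inverse inference in the spirit of representation theorems; the
# agent model is hypothetical. If an agent holding a fixed stake is
# indifferent between "win if E" and a lottery that wins with objective
# chance x, then x reveals the agent's degree of belief in E.

def prefers_event_bet(agent_prob_E, lottery_chance):
    """True if the agent prefers the bet on E to the objective lottery."""
    return agent_prob_E > lottery_chance

def elicit_probability(agent_prob_E, tol=1e-6):
    """Binary-search the lottery chance at which the agent is indifferent."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if prefers_event_bet(agent_prob_E, mid):
            lo = mid    # agent still prefers the bet on E; raise the bar
        else:
            hi = mid
    return (lo + hi) / 2

print(round(elicit_probability(0.35), 3))   # recovers 0.35 from choices alone
```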
PHILOSOPHY AND RATIONALITY
Philosophy treats rationality because it is the most important normative concept besides morality. Understanding how a person should conduct her or his life requires a thorough understanding of rationality. Being a rational person requires being sufficiently rational in the various aspects of one’s life. Common principles of rationality attend to beliefs and desires and to the decisions they yield. Some principles evaluate character traits and emotions. They judge, for example, that some fears are rational and that others are irrational. Principles of rationality extend from individuals to groups. Committees may pass rational or irrational resolutions. Political philosophy evaluates social contracts for rationality. Mancur Olson (1965) investigates rational collective action and the conditions that foster it.
A traditional metaphysical question asks for the grounds of principles of rationality. What makes consistency a requirement of rationality? Are such principles grounded in conventions or in something more universal? A common answer claims that natural properties realize normative properties; consistency, for example, increases prospects for true beliefs.
A traditional practical question asks for reasons to be rational. A common answer is that being rational yields the best prospects (with respect to one’s evidence) for meeting one’s goals and so achieving a type of success. Decisions that maximize expected utility are more likely to be successful than decisions that do not maximize expected utility.
Some philosophers hope to derive principles of morality from principles of rationality. Kantians, for example, hold that a rational person acts in accord with moral principles. Hobbesians hold that the legitimacy of a government derives from the rationality of the social contract that creates it. Rawlsians hold that principles of justice emerge from a hypothetical social contract rational to adopt given ignorance of one’s position in society.
BELIEF AND INFERENCE
A general principle states that rationality is attainable. Its attainability follows from the familiar principle that “ought” implies “can.” Well-established principles of rationality also treat formation of belief and inference. Consistency is a noncontroversial requirement. Holding inconsistent beliefs is irrational unless some extenuating factor, such as the difficulty of spotting the inconsistency, provides an excuse. Perceptual beliefs are rational when the processes producing them are reliable. Vision in good light yields reliable judgments about the colors of objects. Logic describes in meticulous detail patterns of inference that are rational. For example, if one believes a conditional and believes its antecedent, then believing its consequent is a rational conclusion. Repeated application of rules of inference to prove theorems in logic requires sophistication that ordinary rationality does not demand. Rationality requires an ideal agent to believe each and every logical consequence of her or his beliefs. Its requirements for real people are less demanding. (For a sample of principles of rationality concerning belief, see Foley 1993; Halpern 2003; and Levi 2004.)
Rationality governs both deductive and inductive inference. Principles of statistical reasoning express principles of rational inductive inference. If one knows that an urn has exactly eighty red balls and twenty black balls, then it is rational to conclude that 80 percent is the probability that a random draw will yield a red ball. Given a statistical sample drawn from a population, principles of statistical inference attending to the size of the sample and other factors state reasonable conclusions about the whole population.
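A minimal sketch of both kinds of inference, assuming hypothetical sample numbers and the standard normal approximation for the interval:

```python
import math

# Direct inference: an urn with exactly 80 red and 20 black balls.
p_red = 80 / (80 + 20)       # probability that a random draw yields red
print(p_red)                 # 0.8

# Inverse inference from a sample (numbers hypothetical): estimate the
# population proportion with a rough 95 percent normal-approximation interval.
successes, n = 43, 50
p_hat = successes / n
half_width = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"{p_hat:.2f} ± {half_width:.2f}")   # 0.86 ± 0.10
```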
PREFERENCES
Preferences may arise from partial or complete consideration of relevant reasons. Common principles of rational preference apply to preferences held all things considered.
The principle of transitivity requires preferring A to C given that one prefers A to B and also prefers B to C. The principle of coherence requires having preferences among acts that may be represented as maximizing expected utility. The definition of preference affects the force of such principles. The ordinary sense of preference acknowledges the possibility of weakness of will and acting contrary to preferences. However, some theorists, for technical reasons, define preferences so that a person necessarily acts according to them; telling such a person to pick an option from the top of her or his preference ranking then has no normative force, since she or he does that by stipulation.
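A minimal check of the transitivity principle, with hypothetical strict preferences; the cyclic data below violate it:

```python
from itertools import permutations

# Hypothetical strict pairwise preferences forming an intransitive cycle.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def is_transitive(prefers, items):
    """True if x over y and y over z always imply x over z."""
    return all((x, z) in prefers
               for x, y, z in permutations(items, 3)
               if (x, y) in prefers and (y, z) in prefers)

print(is_transitive(prefers, {"A", "B", "C"}))   # False: the cycle violates it
```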
The principle of consumer sovereignty puts basic preferences beyond criticism. Some basic preferences are irrational, however. Forming preferences among ice cream flavors one has not tasted may be irrational. Having a pure time preference may be irrational. That is, it may be irrational to prefer the smaller of two goods just because it will arrive sooner than the larger good. Certainty of having the larger good if one waits for it is a strong reason for waiting.
The chief principle of rational decision making is to pick an option from the top of one’s preference ranking of options. If some options are gambles, a supplementary principle says to prefer one option to another option just in case its expected utility is higher than the expected utility of the other option. J. Howard Sobel (1994) and Paul Weirich (2001) analyze such principles of rational choice.
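A minimal sketch of these two principles together, with hypothetical gambles represented as probability-utility pairs:

```python
# Hypothetical options: each gamble is a list of (probability, utility) pairs.
gambles = {
    "safe":  [(1.0, 30.0)],
    "risky": [(0.8, 45.0), (0.2, 0.0)],
}

def expected_utility(gamble):
    """Probability-weighted average of the gamble's outcome utilities."""
    return sum(p * u for p, u in gamble)

# Pick from the top of the ranking that expected utilities induce.
best = max(gambles, key=lambda g: expected_utility(gambles[g]))
print(best, expected_utility(gambles[best]))   # risky 36.0
```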
DECISION MAKING
Rationality evaluates free acts that an agent fully controls. Decisions are in this category; so are acts such as taking a walk. Rationality evaluates acts an agent controls directly by comparing them with rivals and evaluates acts an agent controls indirectly by evaluating their components. An agent directly controls a decision, and so rationality evaluates it by comparing it with its rivals. An agent indirectly controls taking a walk, and so rationality evaluates it by evaluating its directly controlled components. The rationality of a series of acts, such as having dinner and going to a movie, depends on the rationality of its temporal components.
Game theory, expounded in classic texts by John von Neumann and Oskar Morgenstern (1944) and R. Duncan Luce and Howard Raiffa (1957), addresses decisions people make in contexts where the outcome of one person’s decision depends on the decisions that other people make. Strategic reasoning looks for combinations of decisions that form an equilibrium in the sense that each decision is rational given the other decisions. A common principle for such strategic situations recommends making a decision that is part of an equilibrium combination of decisions. Edward McClennen (1990), Robert Stalnaker (1998, 1999), Weirich (1998), Andrew Colman (2003), and Michael Bacharach (2006) conduct critical appraisals of principles of rationality widespread in game theory.
A principle of rationality may be controversial. A common pattern for controversy begins with a claim that in some cases thoughtful people fail to comply with the principle. Some respond that in those cases people are rational and the principle is faulty. Others respond that the principle is fine and people are irrational. Still others hold that people in the problem cases actually comply with the principle, contrary to the initial claim.
For example, Amos Tversky and Daniel Kahneman (1982) present cases in which people form judgments that fail to comply with the probability axioms. In their study a story describes a young woman as a political activist and a college graduate with a philosophy major. People asked to speculate about the woman’s current activities may put the probability of her being a feminist bank teller higher than the probability of her being a bank teller. This ignores the law that the probability of a conjunction cannot be higher than the probability of a conjunct. Some theorists may conclude that people are irrational in their probability judgments, others that people have in mind the probability that the woman is a feminist given that she is a bank teller rather than the probability of the conjunction that she is a feminist and is a bank teller. In this particular example, few dispute the law of probability concerning conjunctions.
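Hypothetical numbers make the two readings concrete: the conjunction can never be more probable than a conjunct, although the corresponding conditional probability may be high.

```python
# Hypothetical probabilities distinguishing the two readings in the text.
p_teller = 0.07
p_teller_and_feminist = 0.05

# The conjunction law: a conjunction is never more probable than a conjunct.
assert p_teller_and_feminist <= p_teller

# Yet the conditional probability may be high, which would explain the
# judgments without convicting people of violating the law.
p_feminist_given_teller = p_teller_and_feminist / p_teller
print(round(p_feminist_given_teller, 2))   # 0.71
```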
Kahneman and Tversky (1979) also present cases in which it seems that people fail to comply with the principle to maximize expected utility. A person may prefer a guaranteed $3,000 to a gamble that pays $4,000 with probability 80 percent and $0 with probability 20 percent. The same person may prefer a gamble that pays $4,000 with probability 20 percent and $0 with probability 80 percent to a gamble that pays $3,000 with probability 25 percent and $0 with probability 75 percent. Let U stand for utility. If the first preference agrees with expected utilities, it seems that U($3,000) > 0.80U($4,000). If the second preference agrees with expected utilities, it seems that 0.20U($4,000) > 0.25U($3,000) and hence, multiplying both sides by 4, that 0.80U($4,000) > U($3,000). Because the inequalities for the two preferences are inconsistent, it seems impossible that both preferences agree with expected utilities.
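A short check of this algebra, searching hypothetical utility values for a pair satisfying both inequalities:

```python
# Checking the algebra in the text with hypothetical utility values: no
# assignment to U($3,000) and U($4,000) satisfies both inequalities.
# Preference 1: U(3000) > 0.80 * U(4000)
# Preference 2: 0.20 * U(4000) > 0.25 * U(3000), i.e., 0.80 * U(4000) > U(3000)

def consistent(u3000, u4000):
    return u3000 > 0.80 * u4000 and 0.20 * u4000 > 0.25 * u3000

# A coarse grid search over candidate utilities finds no consistent pair.
found = any(consistent(a / 100, b / 100)
            for a in range(1, 201)
            for b in range(1, 201))
print(found)   # False
```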
One response is to reject the principle of expected utility maximization. Another response denies the rationality of having the pair of preferences. A third response claims that people care about factors besides monetary outcomes. They may, for instance, value certainty and the elimination of risk. Then the pair of preferences may agree with broadly based expected utilities without implying inconsistent inequalities.
RATIONAL CHOICE THEORY
Rational choice theory uses principles of rationality to explain behavior. The social and behavioral sciences and even literary interpretation employ it. Proponents claim that rational choice theory yields insightful analyses using simple principles of rational behavior. Critics claim that those simple principles are too austere to adequately characterize human behavior. This debate turns on the principles of rationality at issue. Some rational choice theorists may use only principles of instrumental rationality. In that case, evaluation of basic goals is missing. Other rational choice theorists use more comprehensive principles of rationality to extend the theory’s scope. They provide for principles that evaluate basic goals.
Various applications of rationality yield distinct types of rationality, such as bounded, procedural, and substantive rationality. Herbert Simon (1982) is famous for treating these types of rationality. Principles of bounded rationality set standards for people and other nonideal agents with limited cognitive power. Contrasting principles set high standards for ideal agents with unlimited cognitive power. Rationality may require ideal agents to maximize utility, whereas it requires real people to satisfice, that is, to adopt the first satisfactory option discovered. The principle to satisfice is a principle of procedural rationality because it recommends a procedure for making a decision and does not characterize the content of the decision it recommends. A substantive principle may recommend making a decision that maximizes utility. Whether a decision maximizes utility depends on its content. It depends on the prospects of acting according to the decision. Compliance with a substantive principle of rationality, such as utility maximization, may require a procedure that is more trouble than its outcome justifies. Spending hours to make a move in a chess game may sap the game’s fun. Sometimes thorough calculation is too costly, and one should make a quick decision. It may be sensible to adopt the first satisfactory course of action that comes to light instead of running through all options, calculating and comparing their utilities.
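A minimal sketch of a satisficing procedure in Simon’s sense, with hypothetical options, values, and aspiration level:

```python
# Satisficing: take the first option whose value meets an aspiration level,
# instead of evaluating and comparing all options. Names and numbers are
# hypothetical, echoing the chess example in the text.

def satisfice(options, value, aspiration):
    """Return the first option whose value reaches the aspiration level."""
    for option in options:
        if value(option) >= aspiration:
            return option          # stop searching: good enough
    return None                    # no satisfactory option found

moves = ["a4", "Nf3", "e4", "d4"]
strength = {"a4": 0.2, "Nf3": 0.7, "e4": 0.9, "d4": 0.8}.get
print(satisfice(moves, strength, aspiration=0.6))   # Nf3, not the best move e4
```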
An evaluator may apply a substantive principle to an act already performed. The principle judges the act without regard for the process that produced it. The principle of utility maximization gives an optimal option high marks whether it arose from a thorough or a hasty review of options. The principle evaluates the option adopted and not the method of its adoption. In contrast, an agent applies a procedural principle to discover an act to perform. A rational procedure may culminate in an act that is not optimal. Rationality does not require calculating in all cases. In many cases, weighing pros and cons, computing utilities, and comparing all options is not a rational way to make a decision—spontaneity may be appropriate. A rational decision procedure takes account of circumstances. Brian Skyrms (1990), Ariel Rubinstein (1998), Gerd Gigerenzer (2000), Gigerenzer and Reinhard Selten (2000), Weirich (2004), and John Pollock (2006) pursue these themes.
Principles of rationality vary in the scope of their evaluations of acts. Some principles evaluate a decision for instrumental rationality, taking for granted the beliefs and desires that generate it. Others evaluate the beliefs and desires along with the decision. Principles of rationality also adopt conditions. A principle may evaluate a decision, assuming unlimited time and cognitive resources for reaching it. Idealizations play a crucial role by generating an initial theory with simplified principles of rationality. Relaxing idealizations later leads to more general principles and to a more realistic theory.
Principles of conditional rationality also provide a way of putting aside mistakes. A person’s act may be rational given his or her beliefs, though his or her beliefs are mistaken and if corrected would support a different act. Evaluating his or her act for nonconditional rationality requires a complex assessment of the significance of the mistaken beliefs. Conditional rationality has an interesting structure resembling the structure of conditional probability. The rationality of an act given a background feature is not the rationality of the conditional that if the background feature holds then the act is performed. Nor is it the truth of the conditional that if the background feature holds then the act is rational.
Theoretical rationality treats belief formation, and practical rationality treats action. A theory of practical reasoning formulates rules of inference, leading to a conclusion that an act should be performed. It classifies reasons for acts and assesses their force. (For a survey, see Parfit 1984; Bratman 1987; Broome 2001; and Bittner 2001.)
Some arguments that degrees of belief should conform with the probability axioms point out that failure to comply leaves one open to a series of bets that guarantees a loss, that is, a Dutch book. This observation yields pragmatic reasons for compliance with the axioms. Some theorists hold that the probability axioms require a purely epistemic justification.
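A worked example of such a Dutch book, assuming hypothetical credences that violate the axioms:

```python
# A Dutch book against hypothetical credences of 0.6 in A and 0.6 in not-A,
# which violate the axioms because they sum to 1.2. The bookie sells two
# bets, each paying $1 if it wins, at prices the agent regards as fair.

credence_A, credence_not_A = 0.6, 0.6
stake = 1.0

cost = stake * credence_A + stake * credence_not_A   # agent pays $1.20 total
for a_is_true in (True, False):
    payoff = stake            # exactly one of the two bets pays off
    net = payoff - cost
    print("A" if a_is_true else "not A", "-> net:", round(net, 2))
# The agent loses $0.20 no matter how things turn out.
```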
Because the principle to maximize expected utility uses probability, one might hold that probability is not purely epistemic and that its axioms therefore do not need a purely epistemic justification. However, probability’s role in assessing an option’s prospects requires that it represent only the strength of evidence. If probability is sensitive to an agent’s goals, even cognitive goals, then using it to calculate an option’s expected utility counts the agent’s goals twice: once in calculating the utilities of the option’s possible outcomes and a second time in calculating the probabilities of those outcomes. So a purely epistemic justification of the probability axioms may be required precisely because of probability’s role in the calculation of an option’s expected utility, that is, because of its role as a guide to action.
Studies of rationality are multidisciplinary because several fields have a stake in their outcomes. Progress with theories of rationality is broadly rewarding, and many scholars are contributing.
SEE ALSO Altruism; Behavior, Self-Constrained; Collective Action; Economics, Experimental; Epistemology; Expected Utility Theory; Game Theory; Information, Economics of; Kant, Immanuel; Logic; Maximization; Minimization; Morality; Optimizing Behavior; Philosophy; Probability Theory; Psychology; Random Samples; Rawls, John; Risk; Sen, Amartya Kumar; Simon, Herbert A.; Social Contract; Theory of Mind; Utility, Von Neumann-Morgenstern
BIBLIOGRAPHY
Bacharach, Michael. 2006. Beyond Individual Choice: Teams and Frames in Game Theory. Eds. Natalie Gold and Robert Sugden. Princeton, NJ: Princeton University Press.
Bittner, Rüdiger. 2001. Doing Things for Reasons. Oxford: Oxford University Press.
Bratman, Michael. 1987. Intention, Plans, and Practical Reason. Cambridge, MA: Harvard University Press.
Broome, John. 2001. Normative Practical Reasoning. Proceedings of the Aristotelian Society 75 (supp.): 175-193.
Colman, Andrew. 2003. Cooperation, Psychological Game Theory, and Limitations of Rationality in Social Interaction. Behavioral and Brain Sciences 26: 139-198.
Foley, Richard. 1993. Working without a Net. New York: Oxford University Press.
Gigerenzer, Gerd. 2000. Adaptive Thinking: Rationality in the Real World. New York: Oxford University Press.
Gigerenzer, Gerd, and Reinhard Selten. 2000. Rethinking Rationality. In Bounded Rationality: The Adaptive Toolbox, eds. Gerd Gigerenzer and Reinhard Selten, 1-12. Cambridge, MA: MIT Press.
Halpern, Joseph. 2003. Reasoning about Uncertainty. Cambridge, MA: MIT Press.
Jeffrey, Richard. [1965] 1983. The Logic of Decision. 2nd ed. Chicago: University of Chicago Press.
Kahneman, Daniel, and Amos Tversky. 1979. Prospect Theory: An Analysis of Decision under Risk. Econometrica 47: 263-291.
Levi, Isaac. 2004. Mild Contraction: Evaluating Loss of Information due to Loss of Belief. Oxford: Clarendon.
Luce, R. Duncan, and Howard Raiffa. 1957. Games and Decisions: Introduction and Critical Survey. New York: Wiley.
McClennen, Edward. 1990. Rationality and Dynamic Choice: Foundational Explorations. Cambridge, U.K.: Cambridge University Press.
Mele, Alfred, and Piers Rawling, eds. 2004. The Oxford Handbook of Rationality. New York: Oxford University Press.
Olson, Mancur. 1965. The Logic of Collective Action: Public Goods and the Theory of Groups. Cambridge, MA: Harvard University Press.
Parfit, Derek. 1984. Reasons and Persons. Oxford: Clarendon.
Pollock, John. 2006. Thinking about Acting: Logical Foundations for Rational Decision Making. New York: Oxford University Press.
Rachlin, Howard. 2002. Altruism and Selfishness. Behavioral and Brain Sciences 25: 239-296.
Rescher, Nicholas. 1988. Rationality: A Philosophical Inquiry into the Nature and the Rationale of Reason. Oxford: Clarendon.
Resnik, Michael. 1987. Choices. Minneapolis: University of Minnesota Press.
Rubinstein, Ariel. 1998. Modeling Bounded Rationality. Cambridge, MA: MIT Press.
Sen, Amartya. 1977. Rational Fools. Philosophy and Public Affairs 6: 317-344.
Simon, Herbert. 1982. Behavioral Economics and Business Organization. Vol. 2 of Models of Bounded Rationality. Cambridge, MA: MIT Press.
Skyrms, Brian. 1990. The Dynamics of Rational Deliberation. Cambridge, MA: Harvard University Press.
Sobel, J. Howard. 1994. Taking Chances: Essays on Rational Choice. Cambridge, U.K.: Cambridge University Press.
Stalnaker, Robert. 1998. Belief Revision in Games: Forward and Backward Induction. Mathematical Social Sciences 36: 31-56.
Stalnaker, Robert. 1999. Knowledge, Belief, and Counterfactual Reasoning in Games. In The Logic of Strategy, eds. Cristina Bicchieri, Richard Jeffrey, and Brian Skyrms, 3-38. New York: Oxford University Press.
Tversky, Amos, and Daniel Kahneman. 1982. Judgments of and by Representativeness. In Judgment under Uncertainty: Heuristics and Biases, eds. Daniel Kahneman, Paul Slovic, and Amos Tversky, 84-98. Cambridge, U.K.: Cambridge University Press.
Von Neumann, John, and Oskar Morgenstern. 1944. Theory of Games and Economic Behavior. Princeton, NJ: Princeton University Press.
Weirich, Paul. 1998. Equilibrium and Rationality: Game Theory Revised by Decision Rules. Cambridge, U.K.: Cambridge University Press.
Weirich, Paul. 2001. Decision Space: Multidimensional Utility Analysis. Cambridge, U.K.: Cambridge University Press.
Weirich, Paul. 2004. Realistic Decision Theory: Rules for Nonideal Agents in Nonideal Circumstances. New York: Oxford University Press.
Paul Weirich
Rationality
Philosophers have, at least characteristically, aspired to possess "rationality" but have not thereby sought exactly the same thing. Portrayed vaguely, rationality is reasonableness, but not all philosophers take rationality as dependent on reasons; nor do all philosophers have a common understanding of reasons or of reasonableness. Some theorists consider rationality to obtain in cases that lack countervailing reasons against what has rationality; they thus countenance rationality as, in effect, a default status. In ordinary parlance, persons can have rationality; so, too, can beliefs, desires, intentions, and actions, among other things. The rationality appropriate to action is practical, whereas that characteristic of beliefs is, in the language of some philosophers, theoretical.
Many philosophers deem rationality as instrumental, as goal oriented. You have rationality, according to some of these philosophers, in virtue of doing your best, or at least doing what you appropriately think adequate, to achieve your goals. If ultimate goals are not themselves subject to assessments of rationality, then rationality is purely instrumental, in a manner associated with David Hume's position. Rationality, according to this view, is a minister without portfolio; it does not require any particular substantive goals of its own but consists rather in the proper pursuit of one's ultimate goals, whatever those goals happen to be. Many decision-theoretic and economic approaches to rationality are purely instrumentalist. If, however, ultimate goals are susceptible to rational assessment, as an Aristotelian tradition and a Kantian tradition maintain, then rationality is not purely instrumental. The latter two traditions regard certain rather specific (kinds of) goals, such as human well-being, as essential to rationality. Their substantialist approach to rationality lost considerable influence, however, with the rise of modern decision theory.
When relevant goals concern the acquisition of truth and the avoidance of falsehood, so-called epistemic rationality is at issue. Otherwise, some species of nonepistemic rationality is under consideration. One might individuate species of nonepistemic rationality by the kind of goal at hand: moral, prudential, political, economic, aesthetic, or some other. Some philosophers have invoked rationality "all things considered" to resolve conflicts arising from competing desires or species of rationality; even so, there are various approaches to rationality "all things considered" in circulation. The standards of rationality are not uniformly epistemic, then, but epistemic rationality can play a role even in what some call nonepistemic rationality. Regarding economic rationality, for instance, a person seeking such rationality will, at least under ordinary conditions, aspire to epistemically rational beliefs concerning what will achieve the relevant economic goals. Similar points apply to other species of nonepistemic rationality. A comprehensive account of rationality will characterize epistemic and nonepistemic rationality, as well as corresponding kinds of irrationality (e.g., weakness of will).
Taking rationality as deontological, some philosophers characterize rationality in terms of what is rationally obligatory and what is merely rationally permissible. If an action, for instance, is rationally obligatory, then one's failing to perform it will be irrational. Other philosophers opt for a nondeontological evaluative conception of rationality that concerns what is good (but not necessarily obligatory) from a certain evaluative standpoint. Some of the latter philosophers worry that, if beliefs and intentions are not voluntary, then they cannot be obligatory. Still other philosophers understand rationality in terms of what is praiseworthy, rather than blameworthy, from a certain evaluative standpoint. The familiar distinction between obligation, goodness, and praiseworthiness thus underlies three very general approaches to rationality.
Following Henry Sidgwick, William Frankena has distinguished four conceptions of rationality: (1) an egoistic conception implying that it is rational for one to be or do something if and only if this is conducive to one's own greatest happiness (e.g., one's own greatest pleasure or desire satisfaction); (2) a perfectionist conception entailing that it is rational for one to be or do something if and only if this is a means to or a part of one's moral or nonmoral perfection; (3) a utilitarian conception implying that it is rational for one to be or do something if and only if this is conducive to the greatest general good or welfare; and (4) an intuitionist conception implying that it is rational for one to be or do something if and only if this conforms to self-evident truths, intuited by reason, concerning what is appropriate. The history of philosophy represents, not only these conceptions of rationality, but also modified conceptions adding further necessary or sufficient conditions to one of (1)–(4).
Given an egoistic conception of rationality, one's being rational will allow for one's being immoral, if morality requires that one not give primacy to oneself over other people. Rationality and morality can then conflict. Such conflict is less obvious on a utilitarian conception of rationality. In fact, if morality is itself utilitarian in the way specified (as many philosophers hold), a utilitarian conception of rationality will disallow rational immorality. A perfectionist conception of rationality will preclude rational immorality only if the relevant perfection must be moral rather than nonmoral; achieving nonmoral perfection will, of course, not guarantee morality. As for an intuitionist conception of rationality, if the relevant self-evident truths do not concern what is morally appropriate, then rational immorality will be possible. An intuitionist conception will bar conflict between rationality and morality only if it requires conformity to all the self-evident truths about what is morally appropriate that are relevant to a situation or person. So, whether rationality and morality can conflict will depend, naturally enough, on the exact requirements of the conception of rationality at issue.
Richard Brandt has suggested that talk of what it would be rational to do functions to guide action both by recommending action and by making a normative claim that evaluates the available action relative to a standard. An important issue concerns what kind of strategy of using information to choose actions will enable one to achieve relevant goals as effectively as any other available strategy. Brandt has offered a distinctive constraint on such a strategy: A rational decision maker's preferences must be able to survive their being subjected to repeated vivid reflection on all relevant facts, including facts of logic. This constraint suggests what may be called (5) a relevant-information conception of rationality: Rationality is a matter of what would survive scrutiny by all relevant information.
A relevant-information conception of rationality depends, first, on a clear account of precisely when information is relevant and, second, on an account of why obviously irrational desires cannot survive scrutiny by all relevant information. Evidently, one could have a desire caused by obviously false beliefs arising just from wishful thinking, and this desire could survive a process of scrutiny by all relevant information where the underlying false beliefs are corrected. In any case, a relevant-information conception of rationality will preclude rational immorality only if it demands conformity to all relevant moral information.
The egoistic, perfectionist, utilitarian, and relevant-information conceptions of rationality are nonevidential in that they do not require one's having evidence that something is conducive to self-satisfaction, perfection, general welfare, or support from all relevant information. Many philosophers would thus fault those conceptions as insufficiently sensitive to the role of relevant evidence in rationality. If relevant evidence concerns epistemic rationality, we again see the apparent bearing of epistemic rationality on rationality in general. The latter bearing deserves more attention in contemporary work on nonepistemic rationality.
Philosophers currently divide over internalism and externalism about rationality. If rationality demands reasons of some sort or other, the dispute concerns two senses of talk of a person's having a reason to perform an action. An internalist construal of this talk implies that the person has some motive that will be advanced by the action. An externalist construal, in contrast, does not require that the person have a motive to be advanced by the action. Bernard Williams, among others, has suggested that any genuine reason for one's action must contribute to an explanation of one's action and that such a contribution to explanation must be a motivation for the action. He concludes that externalism about rationality is false, on the ground that external reasons do not contribute to explanation of action in the required manner. Externalism about rationality does allow that reasons fail to motivate, but this, according to externalists, is no defect whatever. Externalists distinguish between merely motivating reasons and justifying reasons, contending that only the latter are appropriate to rationality understood normatively; what is merely motivating in one's psychological set, in any case, need not be justifying. Perhaps, then, disputes between internalists and externalists will benefit from attention to the distinction between justifying and merely motivating reasons.
Modern decision theory assumes that, in satisfying certain consistency and completeness requirements, a person's preferences toward the possible outcomes of available actions will determine, at least in part, what actions are rational for that person by determining the personal utility of outcomes of those actions. In rational decision making under certainty one definitely knows the outcomes of available actions. In decision making under risk one can assign only various definite probabilities less than 1 to the outcomes of available actions. (Bayesians assume that the relevant probabilities are subjective in that they are determined by a decision maker's beliefs.) In decision making under uncertainty one lacks information about relevant states of the world and hence cannot assign even definite probabilities to the outcomes of available actions. Acknowledging that rationality is purely instrumental (and thus that even Adolf Hitler's Nazi objectives are not necessarily rationally flawed), Herbert Simon has faulted modern decision theory on the ground that humans rarely have available the facts, consistent preferences, and reasoning power required by standard decision theory. He contends that human rationality is "bounded" in that it does not require utility maximization or even consistency. Rather, it requires the application of a certain range of personal values (or preferences) to resolve fairly specific problems one faces, in a way that is satisfactory, rather than optimal, for one. Simon thus relies on actual human limitations to constrain his account of rationality.
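The sketch below contrasts the two cases with a hypothetical payoff table; maximin, choosing the act whose worst case is best, is a standard rule for decision under uncertainty, though the entry itself does not discuss it.

```python
# Decision under risk versus decision under uncertainty, with a hypothetical
# payoff table (acts by states). With probabilities in hand, expected utility
# applies; without them, a rule such as maximin can be used instead.

payoffs = {"act1": {"s1": 10.0, "s2": 0.0},
           "act2": {"s1": 4.0, "s2": 3.0}}

def by_expected_utility(probs):
    """Under risk: pick the act with the highest expected utility."""
    return max(payoffs,
               key=lambda a: sum(probs[s] * payoffs[a][s] for s in probs))

def by_maximin():
    """Under uncertainty: pick the act whose worst-case payoff is best."""
    return max(payoffs, key=lambda a: min(payoffs[a].values()))

print(by_expected_utility({"s1": 0.5, "s2": 0.5}))   # act1 (5.0 versus 3.5)
print(by_maximin())                                  # act2 (worst case 3.0)
```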
Contemporary theorists divide over the significance of human psychological limitations for an account of rationality. The controversy turns on how idealized principles for rationality should be. This raises the important issue of what exactly makes some principles of rationality true and others false. If principles of rationality are not just stipulative definitions, this issue merits more attention from philosophers than it has received. Neglect of this metaphilosophical issue leaves the theory of rationality as a subject of ongoing philosophical controversy.
See also Aristotle; Bayes, Bayes' Theorem, Bayesian Approach to Philosophy of Science; Decision Theory; Hume, David; Kant, Immanuel; Sidgwick, Henry; Utilitarianism.
Bibliography
Audi, Robert. Action, Intention and Reason. Ithaca, NY: Cornell University Press, 1993.
Audi, Robert. The Architecture of Reason: The Structure and Substance of Rationality. New York: Oxford University Press, 2001.
Benn, S. I., and G. W. Mortimore, eds. Rationality and the Social Sciences. London: Routledge and Kegan Paul, 1976.
Brandt, R. A Theory of the Good and the Right. Oxford: Clarendon Press, 1979.
Brown, Harold. Rationality. London: Routledge, 1988.
Elster, J., ed. Rational Choice. New York: New York University Press, 1986.
Foley, Richard. The Theory of Epistemic Rationality. Cambridge, MA: Harvard University Press, 1987.
Frankena, W. "Concepts of Rational Action in the History of Ethics." Social Theory and Practice 9 (1983): 165–197.
Fumerton, Richard. "Audi on Rationality: Background Beliefs, Arational Enjoyment, and the Rationality of Altruism." Philosophy and Phenomenological Research 67 (2003): 188–193.
Gauthier, D. Morals by Agreement. Oxford: Clarendon Press, 1986.
Harman, Gilbert. Change in View: Principles of Reasoning. Cambridge, MA: MIT Press, 1986.
Hollis, M., and S. Lukes, eds. Rationality and Relativism. Cambridge, MA: MIT Press, 1982.
Jeffrey, Richard. The Logic of Decision. 2nd ed. Chicago: University of Chicago Press, 1983.
Katz, Jerrold. Realistic Rationalism. Cambridge, MA: MIT Press, 1998.
Mele, A. Irrationality. New York: Oxford University Press, 1987.
Mele, A., and P. Rawling, eds. The Oxford Handbook of Rationality. Oxford: Oxford University Press, 2004.
Moser, P., ed. Rationality in Action. Cambridge, U.K.: Cambridge University Press, 1990.
Nozick, R. The Nature of Rationality. Princeton, NJ: Princeton University Press, 1993.
Rescher, N. Rationality. Oxford: Clarendon Press, 1988.
Sidgwick, H. The Methods of Ethics. 7th ed. London, 1907.
Simon, Herbert. Reason in Human Affairs. Stanford, CA: Stanford University Press, 1983.
Skyrms, Brian. The Dynamics of Rational Deliberation. Cambridge, MA: Harvard University Press, 1990.
Slote, M. Beyond Optimizing: A Study of Rational Choice. Cambridge, MA: Harvard University Press, 1989.
Williams, B. "Internal and External Reasons." In Rational Action, edited by R. Harrison. Cambridge, U.K.: Cambridge University Press, 1979.
Paul K. Moser (1996)
Bibliography updated by Benjamin Fiedor (2005)