Artificial Morality

Artificial morality is a research program for the construction of moral machines that is intended to advance the study of computational ethical mechanisms. The name is an intentional analogy to artificial intelligence (AI). Cognitive science has benefited from the attempt to implement intelligence in computational systems; it is hoped that moral science can be informed by building computational models of ethical mechanisms, agents, and environments. As in the case of AI, project goals range from the theoretical aim of using computer models to understand morality mechanistically to the practical aim of building better programs. Also in parallel with AI, artificial morality can adopt either an engineering or a scientific approach.


History

Modern philosophical speculation about moral mechanisms has roots in the work of the philosopher Thomas Hobbes (1588–1679). More recently, speculation about ways to implement moral behavior in computers extends back to Isaac Asimov's influential three laws of robotics (1950) and pioneer cyberneticist Warren McCulloch's 1965 sketch of levels of motivation in games. On the lighter side, Michael Frayn's The Tin Men (1965) is a parody of artificial morality that features an experimental test of altruism involving robots in life rafts. Although there has been fairly extensive work in this field broadly considered, it is an immature research area; a recent article calls itself a "Prolegomena" (Allen, Varner, and Zinser 2000). The following survey will help explain some of the goals and methods in this young field.


Ethics in the Abstract

Consider first the easiest goal: to understand ethics in the abstract context provided by computer programs. Robert Axelrod (1984) made a breakthrough in the field when he organized tournaments in which experts in decision and game theory submitted programmed agents to play a well-known game: the iterated prisoner's dilemma. That challenge entailed the basic computational assumption that everything relevant to such a player could be specified in a computer program. Although game-playing programs figured in the early history of artificial intelligence (for example, A. L. Samuel's [1959] checkers program), the prisoner's dilemma is a mixed-motive game that models morally significant social dilemmas such as the tragedy of the commons. In such situations the individually rational alternative (overfishing, say, or emitting more greenhouse gases) is morally defective because the outcome is worse for everyone.
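
The one-shot version of the game can be sketched in a few lines of code. The payoff values, move labels, and function names below are illustrative assumptions rather than details of Axelrod's tournament; the sketch simply shows the structure of the dilemma: defection is each player's best reply, yet mutual defection is worse for both than mutual cooperation.

```python
# One-shot prisoner's dilemma with conventional textbook payoffs.
# The numbers and names here are illustrative, not Axelrod's tournament values.
# 'C' = cooperate, 'D' = defect; PAYOFF[(my_move, other_move)] = my score.
PAYOFF = {
    ('C', 'C'): 3,  # reward for mutual cooperation
    ('C', 'D'): 0,  # sucker's payoff
    ('D', 'C'): 5,  # temptation to defect
    ('D', 'D'): 1,  # punishment for mutual defection
}

def best_reply(other_move):
    """Return the move that maximizes my payoff against a fixed move by the other player."""
    return max(['C', 'D'], key=lambda my_move: PAYOFF[(my_move, other_move)])

if __name__ == "__main__":
    # Defection is the best reply to either move, so it dominates ...
    print(best_reply('C'), best_reply('D'))              # -> D D
    # ... yet mutual defection leaves both players worse off than mutual cooperation.
    print(PAYOFF[('D', 'D')], "<", PAYOFF[('C', 'C')])   # -> 1 < 3
```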

These models have generated considerable interest in the question of how rational choice relates to ethics. By focusing on an abstract game, Axelrod was able to avoid trying to model full human moral decision making. Nonetheless, the iterated prisoner's dilemma is a hard problem: the strategy set is large, and good strategies must take account of the other players' strategies. Thus, unlike AI, which for much of its first generation focused on single agents, artificial morality began by focusing on a plurality of agents.


Ethics and Game Theory

One result of Axelrod's initiative was to unite ethics and game theory. Game theory provides simple models of hard problems for ethics, such as the prisoner's dilemma, and it forces expectations for ethics to be made explicit. Early work in this field (Danielson 1992) expected ethics to solve problems, such as cooperation in a one-play prisoner's dilemma, that game theory considers impossible. More recent work (Binmore 1994, Skyrms 1996) lowers the expectations for ethics. Consider Axelrod's recommendation of the strategy tit-for-tat on the strength of its relative success in his tournaments. Because the game is iterated, tit-for-tat is not irrationally cooperative. However, its success shows only that tit-for-tat is an equilibrium for this game; it is rational to play tit-for-tat if enough others do. But game theory specifies that many strategies, indeed infinitely many, are equilibria for the iterated prisoner's dilemma. Thus game theory shifts the ground of ethical discussion from a search for the single best principle or strategy to the more difficult task of selecting among many strategies, each of which is an equilibrium, that is to say, a feasible moral norm.
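
A minimal sketch of how such strategies are written as programs appears below; the payoff values and the ten-round match length are illustrative assumptions, not the settings of Axelrod's tournaments. Tit-for-tat cooperates on the first move and thereafter copies the opponent's previous move: paired with itself it sustains cooperation, while against an unconditional defector it is exploited only once.

```python
# Strategies as functions from the opponent's move history to a move.
# Payoffs and the ten-round match length are illustrative assumptions,
# not the settings of Axelrod's tournaments.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):
    """Cooperate on the first move, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    """Defect unconditionally."""
    return 'D'

def play_match(strategy_a, strategy_b, rounds=10):
    """Play an iterated match and return the two strategies' total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees only the other's past moves
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    print(play_match(tit_for_tat, tit_for_tat))    # (30, 30): cooperation is sustained
    print(play_match(tit_for_tat, always_defect))  # (9, 14): exploited once, then retaliates
```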


Artificial Evolution

Another result of Axelrod's work was to link ethics to the evolutionary branch of game theory and modeling. Axelrod established equilibria by running an evolutionary simulation (a form of the standard replicator dynamics) on the initial tournament results. His later work introduced agents whose strategies could be modified by mutation. Classic game theory and modern ethics share many assumptions that focus on a normative question: What should hyperrational, fully informed agents do, taking their own or everyone's interests into account, respectively? However, it sometimes is easier to discover which of many simpler, less well-informed agents will be selected for solving a problem, and evolution does not always select what rationality prescribes (Skyrms 1996). This change from attempting to discover the perfect agent to experimenting with a variety of agents is especially helpful for ethics, which has long been divided among partisans of different ethical paradigms. Evolutionary artificial morality promises to make it possible to test some of these differences. One benefit of combining evolution and simple programmed agents is that one can construct, for example, all possible agents as finite-state machines of a given complexity and use evolutionary techniques to test them (Binmore 1994). Another example is provided by Skyrms (1996), who ran evolutionary simulations in which agents bargain in ways characteristic of different approaches to ethics.
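
A minimal sketch of discrete-time replicator dynamics is given below, using the one-shot prisoner's dilemma and two fixed strategies; the payoffs, strategy set, and update rule are illustrative simplifications of the richer iterated-game simulations cited above. Each strategy's population share grows in proportion to its average payoff against the current population.

```python
# Discrete-time replicator dynamics over two fixed strategies in the one-shot
# prisoner's dilemma. Payoffs, strategy set, and update rule are illustrative
# simplifications; the simulations cited above use iterated games and mutation.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
STRATEGIES = ['C', 'D']

def replicator_step(shares):
    """One generation: each strategy's share grows in proportion to its average payoff."""
    fitness = {s: sum(shares[t] * PAYOFF[(s, t)] for t in STRATEGIES)
               for s in STRATEGIES}
    mean_fitness = sum(shares[s] * fitness[s] for s in STRATEGIES)
    return {s: shares[s] * fitness[s] / mean_fitness for s in STRATEGIES}

if __name__ == "__main__":
    shares = {'C': 0.9, 'D': 0.1}       # start with mostly cooperators
    for _ in range(50):
        shares = replicator_step(shares)
    print(shares)  # defection takes over the one-shot game
```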

A third effect of this research program is more directly ethical. A common result of experiments and simulations in artificial morality is to heighten the role of reciprocity and fairness at the expense of altruism. This shift is supported by human experiments as well as by theory. Experiments show that most subjects will carry out irrational threats to punish unfair actions. The theory that supports these results shows that altruism alone will not solve common social dilemmas.


Moral Engineering

The previous examples illustrate the simplest cases of what might more properly be called artificial moral engineering. In this area theorists are content to study simple agents in simple games that model social settings, in order to establish the field's basic proofs of concept: that moral behavior can be programmed and that ethically interesting situations can be modeled computationally.

At the other end of the engineering spectrum are those who try to build moral agents for more realistic settings: actual artificial agents operating on the Internet and in software more generally (Coleman 2001). This highlights the most immediate importance of artificial morality: "The risks posed by autonomous machines ignorantly or deliberately harming people and other sentient beings are great. The development of machines with enough intelligence to assess the effects of their actions on sentient beings and act accordingly may ultimately be the most important task faced by the designers of artificially intelligent automata" (Allen, Varner, and Zinser 2000, p. 251).

However, this survey of artificial moral engineering would be misleading if it did not note that a well-developed subfield of AI, multiagent systems, pursues aims that fall just short of this. In a successful multiagent system, computational agents without a common controller coordinate their activity and cooperate rather than conflict. No current multiagent system is ethically sophisticated enough to understand harm to humans, but the aims of the two fields clearly are convergent.


Moral Science

All this is engineering, not science. Artificial moral science adds the goal of realism: an effective ethical program might work in ways that shed no light on human ethics. (Compare cognitive engineering with cognitive science: the Deep Blue chess program is an achievement of cognitive engineering, but it tells us little about how people play chess.) The clearest cases of artificial moral science come from computational social scientists who test their models of social interaction with human experiments. For example, Peter Kollock (1998) runs experiments on human subjects to test a model in which moral agents achieve cooperation by perceiving social dilemmas in the more benign form of assurance games.
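
One way to see the transformation at issue is to compare payoff tables, as in the sketch below; the numbers are illustrative and are not drawn from Kollock's experiments. In a prisoner's dilemma, defection is each player's best reply whatever the other does, whereas in an assurance game cooperation is the best reply to cooperation, so agents who perceive the situation as an assurance game can sustain cooperation if they expect it of one another.

```python
# Comparing a prisoner's dilemma with an assurance (stag hunt) game.
# The payoff numbers are illustrative, not taken from Kollock's experiments.
PRISONERS_DILEMMA = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
ASSURANCE_GAME    = {('C', 'C'): 5, ('C', 'D'): 0, ('D', 'C'): 3, ('D', 'D'): 1}

def best_reply(payoff, other_move):
    """The move that maximizes my payoff, holding the other player's move fixed."""
    return max(['C', 'D'], key=lambda my_move: payoff[(my_move, other_move)])

if __name__ == "__main__":
    # Prisoner's dilemma: defection is the best reply to either move.
    print(best_reply(PRISONERS_DILEMMA, 'C'), best_reply(PRISONERS_DILEMMA, 'D'))  # D D
    # Assurance game: cooperation is the best reply to cooperation, so agents who
    # expect cooperation from one another can rationally sustain it.
    print(best_reply(ASSURANCE_GAME, 'C'), best_reply(ASSURANCE_GAME, 'D'))        # C D
```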

Finally, one benefit of the computational turn in ethics is the ability to embed theories in programs that provide other researchers with the tools needed to do further work. Again there is an analogy with artificial intelligence, many of whose early discoveries have been built into standard programming languages. In the case of artificial morality, academic computational toolkits such as Ascape and RePast allow researchers to construct experiments in "artificial societies" (Epstein and Axtell 1996). A related benefit of the computational approach to ethics is the development of a common language for problems and techniques, which encourages researchers from a range of disciplines, including philosophy, biology, computing science, and the social sciences, to share their results.


Computer Games

While the work discussed so far is academic research, some of the issues of artificial morality have already arisen in the real world. Consider computer games. First, some of the most popular games are closely related to the artificial-society research platforms discussed above. The bestselling SimCity computer game series is a popularized urban-planning simulator. The player can select policies favoring cars or transit, high or low taxes, spending on police or on education, but, crucially, cannot control directly what the simulated citizens do. Their responses are determined by the player's policy choices together with the values and dynamics programmed into the simulation. This serves as a reminder that artificial morality is subject to the main methodological criticism of all simulation: assumptions are embedded in a form that can make their identification and criticism difficult (Turkle 1995, Chapter 2).

Second, because computer games use AI to control opponents and other agents not directed by human players, they too raise issues of artificial morality. Consider the controversial case of the popular Grand Theft Auto series, in which the player can run over pedestrians or attack and kill prostitutes. The victims and bystanders barely react to these horrible acts. Such games illustrate what one might call "artificial amorality" and connect to criticisms that video and computer games "create a decontextualized microworld" (Provenzo 1991, p. 124) in which harmful acts do not have their normal social consequences.

Third, games and programmed agents on the Internet raise questions about which features of artificial characters lead people to classify them in morally relevant ways. Turkle (1995) shows how people adjust their category schemes to make a place for artificial agents they encounter that are "alive" or "real" in some but not all respects.


PETER DANIELSON

SEE ALSO Altruism; Artificial Intelligence; Artificiality; Game Theory; Robots and Robotics.

BIBLIOGRAPHY

Allen, Colin; Gary Varner; and Jason Zinser. (2000). "Prolegomena to Any Future Artificial Moral Agent." Journal of Experimental and Theoretical Artificial Intelligence 12: 251–261. A survey of problems to which artificial morality might apply.

Asimov, Isaac. (1950). I, Robot. New York: Fawcett. Classic science fiction source of the three laws of robotics.

Axelrod, Robert. (1984). The Evolution of Cooperation. New York: Basic Books. Accessible introduction to game-theoretic and evolutionary methods for studying the moral problem of cooperation.

Binmore, Ken. (1994). Game Theory and the Social Contract: Playing Fair. Cambridge, MA: MIT Press. A sophisticated use of game theory to criticize and extend artificial morality.

Coleman, Kari. (2001). "Android Arete: Toward a Virtue Ethic for Computational Agents." Ethics and Information Technology 3(4): 247–265.

Danielson, Peter. (1992). Artificial Morality: Virtuous Robots for Virtual Games. London: Routledge.

Danielson, Peter. (1998). Modeling Rationality, Morality, and Evolution. New York: Oxford University Press.

Danielson, Peter. (2002). "Competition among Cooperators: Altruism and Reciprocity." Proceedings of the National Academy of Sciences 99: 7237–7242.

Epstein, Joshua M., and Robert Axtell. (1996). Growing Artificial Societies: Social Science from the Bottom Up. Cambridge, MA: MIT Press. Introduction to agent-based modeling in the social sciences.

Frayn, Michael. (1965). The Tin Men. New York: Ace. A parody of artificial intelligence and artificial morality.

Kollock, Peter. (1998). "Transforming Social Dilemmas: Group Identity and Co-operation." In Modeling Rationality, Morality, and Evolution, ed. Peter Danielson. New York: Oxford University Press. Illustrates how experiments and game theory can be used together.

McCulloch, Warren Sturgis. (1965). "Toward Some Circuitry of Ethical Robots or an Observational Science of the Genesis of Social Evaluation in the Mind-Like Behavior of Artifacts." In his Embodiments of Mind. Cambridge, MA: MIT Press. Early speculation about the ethical potential of simple machines and game-like situations.

Provenzo, Eugene F. (1991). Video Kids: Making Sense of Nintendo. Cambridge, MA: Harvard University Press.

Samuel, A. L. (1959). "Some Studies in Machine Learning Using the Game of Checkers." IBM Journal of Research and Development 3: 210–229. Classic article on machine learning applied to the game of checkers.

Skyrms, Brian. (1996). Evolution of the Social Contract. Cambridge, UK, and New York: Cambridge University Press. Accessible introduction to evolutionary game theory and ethics.

Turkle, Sherry. (1995). Life on the Screen: Identity in the Age of the Internet. New York: Simon & Schuster.