Artificial Life
Artificial life is a cross-disciplinary field of research devoted to the study and creation of lifelike structures in various media (computational, biochemical, mechanical, or combinations of these). A central aim is to model and even realize emergent properties of life, such as self-reproduction, growth, development, evolution, learning, and adaptive behavior. Researchers of artificial life also hope to gain general insights about self-organizing systems, and to use the approaches and principles in technology development.
Evolution of research
The historical and theoretical roots of the field are manifold. These roots include:
- early attempts to imitate the behavior of humans and animals by the invention of mechanical automata in the sixteenth century;
- cybernetics as the study of general principles of informational control in machines and animals;
- computer science as theory and the idea of abstract equivalence between various ways to express the notion of computation, including physical instantiations of systems performing computations;
- John von Neumann's so-called self-reproducing cellular automata;
- computer science as a set of technical practices and computational architectures;
- artificial intelligence (AI)
- robotics;
- philosophy and system science notions of levels of organization, hierarchies, and emergence of new properties;
- non-linear science, such as the physics of complex systems and chaos theory; theoretical biology, including abstract theories of life processes; and
- evolutionary biology.
Despite the field's long history, the first international conference for artificial life was not held until 1987. The conference was organized by the computer scientist C. G. Langton, who sketched a future synthesis of the field's various roots and formulated important elements of a research program.
In the first five years after 1987, the research went through an exploratory phase in which it was not always clear by what criteria one could evaluate individual contributions, and some biologists were puzzled about what could falsify a specific piece of research. Later the field stabilized into clusters of research areas, each with its own models, questions, and works in progress. As in artificial intelligence research, some areas of artificial life research are mainly motivated by the attempt to develop more efficient technological applications by using biologically inspired principles. Examples of such applications include modeling architectures to simulate complex adaptive systems, as in traffic planning, and biologically inspired immune systems for computers. Other areas of research are driven by theoretical questions about the nature of emergence, the origin of life, and forms of self-organization, growth, and complexity.
The media of artificial life
Artificial life may be labeled software, hardware, or wetware, depending on the type of media researchers work with.
Software. Software artificial life is rooted in computer science and represents the idea that life is characterized by form, or forms of organization, rather than by its constituent material. Thus, "life" may be realized in some form (or media) other than carbon chemistry, such as in a computer's central processing unit, or in a network of computers, or as computer viruses spreading through the Internet. One can build a virtual ecosystem and let small component programs represent species of prey and predator organisms competing or cooperating for resources like food.
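Such a virtual ecosystem can be sketched in a few lines of code. The toy simulation below is purely illustrative: the reproduction, predation, and survival rates are invented for the example and are not drawn from any real system.

```python
import random

random.seed(0)  # deterministic for the example

def step(prey, predators):
    """One generation of a toy prey-predator ecosystem."""
    # Each prey reproduces with an (invented) probability of 0.25.
    births = sum(random.random() < 0.25 for _ in range(prey))
    # Each predator tries to catch one prey; the success rate is invented.
    catches = min(prey + births,
                  sum(random.random() < 0.6 for _ in range(predators)))
    # Predators that failed to catch prey starve; fed ones usually survive.
    survivors = sum(random.random() < 0.9 for _ in range(catches))
    return prey + births - catches, survivors

prey, predators = 200, 40
for generation in range(25):
    prey, predators = step(prey, predators)
print(prey, predators)
```

Running the loop shows the two populations rising and falling in response to each other, the kind of emergent dynamic such models are built to exhibit.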
The difference between this type of artificial life and ordinary scientific use of computer simulations is that, with the latter, the researcher attempts to create a model of a real biological system (e.g., fish populations of the Atlantic Ocean) and to base the description upon real data and established biological principles. The researcher tries to validate the model to make sure that it represents aspects of the real world. Conversely, an artificial life model represents biology in a more abstract sense; it is not a real system, but a virtual one, constructed for a specific purpose, such as investigating the efficiency of an evolutionary process of a Lamarckian type (based upon the inheritance of acquired characteristics) as opposed to Darwinian evolution (based upon natural selection among randomly produced variants). Such a biological system may not exist anywhere in the real universe. As Langton emphasized, artificial life investigates "the biology of the possible" to remedy one of the inadequacies of traditional biology, which is bound to investigate how life actually evolved on Earth, but cannot describe the borders between possible and impossible forms of biological processes. For example, an artificial life system might be used to determine whether it is only by historical accident that organisms on Earth have the universal genetic code that they have, or whether the code could have been different.
It has been much debated whether virtual life in computers is nothing but a model on a higher level of abstraction, or whether it is a form of genuine life, as some artificial life researchers maintain. In its computational version, this claim implies a form of Platonism whereby life is regarded as a radically medium-independent form of existence similar to futuristic scenarios of disembodied forms of cognition and AI that may be downloaded to robots. In this debate, classical philosophical issues about dualism, monism, materialism, and the nature of information are at stake, and there is no clear-cut demarcation between science, metaphysics, and issues of religion and ethics. If it really is possible to create genuine life "from scratch" in other media, the ethical concerns related to this research are intensified: In what sense can the human community be said to be in charge of creating life de novo by non-natural means?
Hardware. Hardware artificial life refers to small animal-like robots, usually called animats, that researchers build and use to study the design principles of autonomous systems or agents. The functionality of an agent (a collection of modules, each with its own domain of interaction or competence) is an emergent property of the intensive interaction of the system with its dynamic environment. The modules operate quasi-autonomously and are solely responsible for the sensing, modeling, computing or reasoning, and motor control that is necessary to achieve their specific competence. Direct coupling of perception to action is facilitated by the use of reasoning methods, which operate on representations that are close to the information of the sensors.
This approach states that to build a system that is intelligent it is necessary to have its representations grounded in the physical world. Representations do not need to be explicit and stable, but must be situated and "embodied." The robots are thus situated in a world; they do not deal with abstract descriptions, but with the environment that directly influences the behavior of the system. In addition, the robots have "bodies" and experience the world directly, so that their actions have an immediate feedback upon the robot's own sensations. Computer-simulated robots, on the other hand, may be "situated" in a virtual environment, but they are not embodied. Hardware artificial life has many industrial and military technological applications.
Wetware. Wetware artificial life comes closest to real biology. The scientific approach involves conducting experiments with populations of real organic macromolecules (combined in a liquid medium) in order to study their emergent self-organizing properties. An example is the artificial evolution of ribonucleic acid molecules (RNA) with specific catalytic properties. (This research may be useful in a medical context or may help shed light on the origin of life on Earth.) Research into RNA and similar scientific programs, however, often takes place in the areas of molecular biology, biochemistry and combinatorial chemistry, and other carbon-based chemistries. Such wetware research does not necessarily have a commitment to the idea, often assumed by researchers in software artificial life, that life is composed of medium-independent forms of existence.
Thus wetware artificial life is concerned with the study of self-organizing principles in "real chemistries." In theoretical biology, autopoiesis is a term for the specific kind of self-maintenance produced by networks of components producing their own components and the boundaries of the network in processes that resemble organizationally closed loops. Such systems have been created artificially by chemical components not known in living organisms.
Conclusion
Questions of theology are rarely discussed in artificial life research, but the very idea of a human researcher "playing God" by creating a virtual universe for doing experiments (in the computer or the test tube) with the laws of growth, development, and evolution shows that some motivation for scientific research may still be implicitly connected to religious metaphors and modes of thought.
See also Artificial Intelligence; Cybernetics; Cyborg; Information Technology; Playing God; Robotics; Technology
Bibliography
Adami, Christoph; Belew, Richard K.; Kitano, Hiroaki; and Taylor, Charles E., eds. Artificial Life VI: Proceedings of the Sixth International Conference on Artificial Life. Cambridge, Mass.: MIT Press, 1998.
Boden, Margaret A., ed. The Philosophy of Artificial Life. Oxford: Oxford University Press, 1996.
Emmeche, Claus. The Garden in the Machine: The Emerging Science of Artificial Life. Princeton, N.J.: Princeton University Press, 1994.
Helmreich, Stefan. Silicon Second Nature: Culturing Artificial Life in a Digital World. Berkeley and Los Angeles: University of California Press, 1998. Updated edition, 2000.
Langton, Christopher G., and Shimohara, Katsunori, eds. Artificial Life V: Proceedings of the Fifth International Workshop on the Synthesis and Simulation of Living Systems. Cambridge, Mass.: MIT Press, 1997.
Langton, Christopher G. Artificial Life, Vol. 6: Santa Fe Institute Studies in the Sciences of Complexity. Redwood City, Calif.: Addison-Wesley, 1989.
Morán, Federico; Moreno, Alvaro; Merelo, Juan Julián; and Chacón, Pablo, eds. Advances in Artificial Life. Lecture Notes in Artificial Intelligence 929. Berlin and New York: Springer, 1995.
Varela, Francisco J., and Bourgine, Paul, eds. Toward a Practice of Autonomous Systems. Cambridge, Mass.: MIT Press, 1992.
Claus Emmeche
Artificial Life
Artificial life (also known as "ALife") is an interdisciplinary study of life and lifelike processes by means of computer simulation and other methods. The goals of this activity include understanding and creating life and lifelike systems, and developing practical devices inspired by living systems. The study of artificial life aims to understand how life arises from non-life, to determine the potentials and limits of living systems, and to explain how life is connected to mind, machines, and culture.
The American computer scientist Christopher Langton coined the phrase "artificial life" in 1987, when he organized the first scientific conference explicitly devoted to this field. Before there were artificial life conferences, the simulation and synthesis of lifelike systems occurred in isolated pockets scattered across a variety of disciplines. The Hungarian-born physicist and mathematician John von Neumann (1903–1957) created the first artificial life model (without referring to it as such) in the 1940s. He produced a self-reproducing, computation-universal entity using cellular automata. Von Neumann was pursuing many of the questions that still drive artificial life today, such as understanding the spontaneous generation and evolution of complex adaptive structures.
Rather than modeling some existing living system, artificial life models are often intended to generate wholly new—and typically extremely simple—instances of lifelike phenomena. The simplest example of such a system is the so-called Game of Life devised by the British mathematician John Conway in the 1960s before the field of artificial life was conceived. Conway was trying to create a simple system that could generate complex self-organized structures.
The Game of Life is a two-state, two-dimensional cellular automaton. It takes place on a rectangular grid of cells, similar to a huge checkerboard. Time advances step by step. A cell's state at a given time is determined by its own previous state and the states of its eight neighboring cells, according to the following simple "birth-death" rule: a "dead" cell becomes "alive" if and only if exactly three neighbors were just "alive," and a "living" cell "dies" if and only if fewer than two, or more than three, neighbors were just "alive." When all of the cells in the system are simultaneously updated again and again, a rich variety of complicated behavior is created, and a complex zoo of dynamic structures can be identified and classified (blinkers, gliders, glider guns, logic switching circuits, etc.). It is even possible to construct a universal Turing machine in the Game of Life by cunningly arranging the initial configuration of living cells. In such constructions, gliders play the role of signals passed between components. Analyzing the computational potential of cellular automata on the basis of glider interactions has become a major direction of research. Like living systems, Conway's Game of Life exhibits a vivid hierarchy of dynamical self-organized structures. Its self-organization is not a representation of processes in the real world, but a wholly novel instance of this phenomenon.
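The birth-death rule above can be expressed compactly in code. The sketch below represents the grid as a set of live (x, y) coordinates, and exercises the rule with a "blinker," one of the simple oscillating structures mentioned above.

```python
from collections import Counter

def step(alive):
    """Apply Conway's birth-death rule to a set of live (x, y) cells."""
    # Count, for every cell, how many of its eight neighbors are alive.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in alive
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth: exactly three live neighbors.  Survival: two or three.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

blinker = {(0, 1), (1, 1), (2, 1)}   # a horizontal line of three cells
print(sorted(step(blinker)))         # flips to a vertical line of three
```

Applying `step` twice returns the blinker to its original configuration, its period-2 oscillation.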
To understand the interesting properties of living systems, von Neumann and Conway each used a constructive method. They created simple and abstract models that exhibited the kind of behavior they wanted to understand. Contemporary artificial life employs the same constructive methodology, often through the creation of computer models of living systems. This computer methodology has several virtues. Expressing a model in computer code requires precision and clarity, and it ensures that the mechanisms invoked in the model are feasible.
Artificial life is similar to artificial intelligence (AI). Both fields study natural phenomena through computational models, and most naturally occurring intelligent systems are, in fact, alive. Despite these similarities, AI and artificial life typically employ different modeling strategies. In most traditional artificial intelligence systems, events occur one by one (serially). A complicated, centralized controller typically makes decisions based on global information about all aspects of the system, and the controller's decisions have the potential to affect directly any aspect of the whole system.
This centralized, top-down architecture is quite unlike the structure of many natural living systems that exhibit complex autonomous behavior. Such systems are often parallel, distributed networks of relatively simple low-level "agents," all of which interact with each other simultaneously. Each agent's decisions are based on information about only its own local situation, which its actions in turn affect.
In similar fashion, artificial life characteristically constructs massively parallel, bottom-up-specified systems of simple local agents. The simultaneous low-level interactions among the agents are iterated again and again, and the aggregate behavior that emerges is observed. These are sometimes called "agent-based" or "individual-based" models, because the system's global behavior arises out of the local interactions among a large collection of "agents" or "individuals." This kind of bottom-up architecture with a population of autonomous agents that follow simple local rules is also characteristic of the connectionist (parallel distributed processing, neural networks) movement that swept through AI and cognitive science in the 1980s. In fact, the agents in many artificial life models are themselves controlled internally by simple neural networks.
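The bottom-up recipe described here (many simple agents, each updated synchronously from purely local information) can be sketched generically. In the toy example below, the majority-vote rule and the six-agent ring are invented placeholders for whatever local rule and interaction network a particular model uses.

```python
def run(agents, neighbors, local_rule, steps):
    """Synchronously apply a local rule to every agent at every step."""
    for _ in range(steps):
        # Every new state is computed from the OLD states: no agent sees
        # anything but its own state and its neighbors' states.
        agents = [local_rule(state, [agents[j] for j in neighbors[i]])
                  for i, state in enumerate(agents)]
    return agents

def majority(state, nbrs):
    """Adopt the majority state among an agent and its neighbors."""
    ones = state + sum(nbrs)
    return 1 if 2 * ones > len(nbrs) + 1 else 0

# Six agents on a ring, each seeing only its two immediate neighbors.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
print(run([1, 0, 1, 0, 1, 1], ring, majority, 5))
```

No agent is told the global tally, yet within a few steps the purely local votes settle into a global consensus, a miniature example of aggregate behavior emerging from local interaction.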
Computer simulation in artificial life plays the role that observation and experiment play in more conventional science. The complex self-organizing behavior of Conway's Game of Life would never have been discovered without computer simulations of thousands of generations for millions of sites. Simulation of large-scale complex systems is the single most crucial development that has enabled the field of artificial life to flourish.
Living systems exhibit a variety of useful properties such as robustness, flexibility, and automatic adaptability. Some artificial life research aims to go beyond mere simulation by constructing novel physical devices that exhibit and exploit lifelike properties. Some of this engineering activity also has a theoretical motivation on the grounds that a full appreciation of life's distinctive properties can come only by creating and studying real physical devices. This engineering activity includes the construction of evolving hardware, in which biologically inspired adaptive processes control the configuration of micro-electronic circuitry. Another example is biologically inspired robots, such as robotic controllers automatically designed by evolutionary algorithms.
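As a rough illustration of the kind of evolutionary algorithm used to tune such designs, the toy loop below evolves a bit string toward a simple fitness function. The population size, mutation rate, selection scheme, and the "OneMax" fitness function are all invented for the example; real controller evolution uses far richer genomes and evaluations.

```python
import random

random.seed(42)  # deterministic for the example

def evolve(fitness, length=20, pop_size=30, generations=60, mut=0.05):
    """Toy truncation-selection evolutionary loop (all parameters invented)."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]              # keep the fitter half
        children = [[bit ^ (random.random() < mut)  # flip bits with prob. mut
                     for bit in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# "OneMax": fitness is simply the number of 1s in the genome.
best = evolve(sum)
print(sum(best))
```

Because the fitter half of each generation is retained unchanged, the best fitness never decreases; random mutation supplies the variants that selection then accumulates.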
See also Artificial Intelligence; Biology; Computer Vision; Neural Networks; Robotics.
Mark A. Bedau
Bibliography
Bedau, Mark A., et al. "Open Problems in Artificial Life." Artificial Life 6 (2000): 363–376.
Berlekamp, Elwyn R., John H. Conway, and Richard K. Guy. Winning Ways for Your Mathematical Plays, vol. 2: Games in Particular. New York: Academic Press, 1982.
Boden, Margaret, ed. The Philosophy of Artificial Life. Oxford: Oxford University Press, 1996.
Kauffman, Stuart A. At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. New York: Oxford University Press, 1995.
Levy, Steven. Artificial Life: The Quest for a New Creation. New York: Pantheon, 1992.