Extinction, Human

Even a small risk of near-term human extinction (within one century, for example) should be taken seriously, considering the stakes. It is therefore remarkable that there has been so little systematic study of this topic–less than on the life habits of the dung fly. Some particular threats have been studied, however, and some thinkers have made attempts to synthesize what is known about the risks of human extinction. Because many of the dangers are hard to quantify, it is often necessary to fall back on informed subjective risk estimates.

Threats to near-term human survival include:

  • Nanotechnology disaster. Advanced molecular manufacturing will make it possible to construct bacterium-scale self-replicating mechanical robots that can feed on dirt or other organic matter, and that could be programmed to destroy the biosphere by eating, poisoning, or burning it, or by blocking out sunlight. This capability could be extremely dangerous, and countermeasures could take a long time to develop. While the possibility of global-scale nanotech accidents should not be ignored, deliberate misuse poses the gravest risk.
  • Nuclear holocaust. Current arsenals are probably insufficient to obliterate all human life, although it is hard to be certain of this because science has a poor understanding of secondary effects (such as impact on global climate–"nuclear winter"). Much larger arsenals may be created in future arms races.
  • Superbugs. Advanced biotechnology will almost certainly lead to better medicine, but it could also be used to create a super-pathogen. Increased urbanization and travel could also increase the risk posed by natural pandemics.
  • Evolution or re-engineering of the human species. While natural biological human evolution takes place on time-scales much longer than a hundred years, one can imagine that scientists will develop technologies, such as nanomedicine (medical interventions based on mature nanotechnology) or very advanced genetic engineering, that will enable them to re-design the human organism to such a degree that these new humans become what could arguably be classified as a different species. Alternatively, humans might be able to "upload" their minds into computers. Evolution in a population of "uploads" could happen on much shorter time-scales than biological evolution. (Whether human extinction in this sense would be a bad thing presumably depends on what humans become instead.) Further, some have suggested the possibility that humans are currently living in a simulated world within computers built by some super-advanced civilization. If so, then one possible risk is that these simulators will decide to terminate the simulation.
  • Artificial intelligence takeover. In this scenario, a badly programmed superhuman artificial intelligence is created and proceeds to destroy humanity.
  • Something unforeseen. Certainly scientists cannot anticipate all future risks; none of the risks listed here were known to people a hundred years ago.

Additionally, a number of lesser risks deserve mention: physics disasters–there have been speculations that high-energy physics experiments could knock the space nearest Earth out of a metastable vacuum state, and future developments in theoretical physics may reveal other disturbing possibilities; asteroid or comet impact–this is a small but real threat; runaway global warming–the warming effect would have to be very large to kill all humans; and annihilation in an encounter with an extraterrestrial civilization.

To estimate directly the probability that humanity survives the next century, statisticians would analyze the various specific failure modes, assign each a probability, and then subtract the sum of these disaster probabilities from one to obtain the survival probability. A complementary, indirect way of estimating this probability is to study relevant theoretical constraints. One such constraint is based on the Fermi paradox: Could the absence of any signs of extraterrestrial civilizations be due to the fact that nearly all civilizations reaching a sufficiently advanced stage develop some technology that causes their own destruction? Another is the highly controversial Doomsday argument, which purports to show that indexical information about humanity's current position in the sequence of all humans lends support to the hypothesis that the number of people living after us will not be significantly larger than the number who have lived before us. Others include the simulation argument mentioned above and studies of biases in risk estimation. It is possible that a "good story" bias shapes perceptions of risk; scenarios in which humankind suddenly and uncinematically becomes extinct may rarely be explored because they are boring, not because they are improbable.
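As a minimal illustration of the direct estimate described above (the per-threat probabilities used here are hypothetical placeholders, not figures drawn from the literature), suppose probabilities p_1, ..., p_n are assigned to the n failure modes and the disasters are treated as approximately non-overlapping:

% Direct estimate of the probability of surviving the coming century.
\[ P(\mathrm{survival}) \approx 1 - \sum_{i=1}^{n} p_i \]
% Hypothetical assignments: p_1 = 0.03 (nanotechnology), p_2 = 0.01 (nuclear war),
% p_3 = 0.02 (engineered pathogen), p_4 = 0.02 (artificial intelligence),
% p_5 = 0.02 (something unforeseen).
\[ P(\mathrm{survival}) \approx 1 - (0.03 + 0.01 + 0.02 + 0.02 + 0.02) = 0.90 \]
% If the risks are instead modeled as independent events, the product form
% P(survival) = \prod_i (1 - p_i) applies; for small p_i the two agree closely.

For small individual probabilities the simple subtraction and the product form nearly coincide, which is why the sum serves as a reasonable first approximation.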

The gravest near-term risks to human existence are of humanity's own making and involve present or anticipated technologies. Of course, it does not follow that trying to stop technological progress would make the world safer–it could well have the opposite effect. A sensible approach to risk reduction would involve increasing awareness and funding more research on "existential risks"; fostering peace and democracy to reduce the risks of future war; promoting development of protective technologies such as vaccines, anti-viral drugs, detectors, nanotech defense systems, and surveillance technologies; creating a comprehensive catalogue of threatening meteors and asteroids; and retarding the development and proliferation of dangerous applications and weapons of mass destruction. Another possible longer-term approach to risk reduction involves colonizing space. Proactive approaches must emphasize foresight: In managing existential risks, there is no opportunity to learn from mistakes.

Prospects for long-term human survival remain unclear. If humans begin to colonize other planets, it may become much harder to cause the extinction of the widely scattered species. Whether the physical laws of the universe will permit intelligent information processing to continue expanding forever is an open question. Scientists' current best understanding of the relevant physics seems to rule this out, but cosmology and quantum gravity still hold many mysteries. If humanity survives another hundred years, the species and its intelligent machine descendants may well have a very long time to look for possible loopholes.

See also: Disasters; Future Generations, Obligations to; Outer Space, Colonization of.

bibliography

Bostrom, Nick. 2002. Anthropic Bias: Observation Selection Effects in Science and Philosophy. New York: Routledge.

Drexler, K. Eric. 1985. Engines of Creation: The Coming Era of Nanotechnology. London: Fourth Estate.

Jaffe, Robert L., et al. 2000. "Review of Speculative 'Disaster Scenarios' at RHIC." Reviews of Modern Physics 72: 1125–1140.

Leslie, John. 1996. The End of the World: The Science and Ethics of Human Extinction. London: Routledge.

Morrison, David, et al. 1994. "The Impact Hazard." In Hazards Due to Comets and Asteroids, ed. T. Gehrels. Tucson: University of Arizona Press.

internet resources

Bostrom, Nick. 2002. "Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards." Journal of Evolution and Technology 9. <http://www.transhumanist.com/volume9/risks.html>.

Freitas, Robert A., Jr. 2000. "Some Limits to Global Ecophagy by Biovorous Nanoreplicators, with Public Policy Recommendations." Foresight Institute. Richardson, TX: Zyvex Preprint. <http://www.foresight.org/NanoRev/Ecophagy.html>.

Gubrud, Mark. 2000. "Nanotechnology and International Security." Foresight Institute, Fifth Foresight Conference on Molecular Nanotechnology. <http://www.foresight.org/Conferences/MNT05/Papers/Gubrud/index.html>.

Nick Bostrom
