Life Cycles of the Stars
Overview
Until the latter half of the nineteenth century, astronomy was principally concerned with the accurate description of the movements of planets and stars. However, developments in electromagnetic theories of light, along with the articulation of quantum and relativity theories at the start of the twentieth century, allowed astronomers to probe the inner workings of the stars. Of primary concern was the attempt to coherently explain the life cycle of the stars and to reconcile the predictions of newly advanced physical theories with astronomical observation. Profound questions regarding the birth and death of stars led to the stunning conclusion that, in a very real sense, humans are a product of stellar evolution.
Background
It is now known that the mass of a star determines its ultimate fate. More massive stars burn their fuel more quickly and lead shorter lives. These facts would have astonished astronomers working at the dawn of the twentieth century. At that time, understanding of the source of the heat and light generated by the Sun and stars was hindered by a lack of understanding of nuclear processes.
Based on Newtonian concepts of gravity, many astronomers understood that stars formed in clouds of gas and dust, termed nebulae, measured to be light-years across. These great molecular clouds, so thick they are often opaque in parts, teased astronomers. Decades later, after the development of quantum theory, astronomers were able to understand the energetics behind the "reddening" of light leaving stellar nurseries (reddened because the blue light is scattered more and the red light passes more directly through to the observer). Astronomers speculated that stars formed when regions of these clouds collapsed (termed gravitational clumping). As a region collapsed, the infall accelerated and the gas heated up to form a protostar, warmed by the energy released through gravitational contraction.
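A minimal numerical sketch illustrates the effect. It assumes idealized Rayleigh scattering, in which scattering efficiency scales as one over the fourth power of wavelength; scattering by real interstellar dust grains follows a shallower wavelength dependence, so the figure below overstates the contrast.

```python
# Why starlight emerging from a dusty cloud looks red: shorter (bluer)
# wavelengths are scattered out of the line of sight more efficiently.
# Assumes idealized Rayleigh scattering (efficiency ~ 1/wavelength**4);
# real interstellar dust follows a shallower wavelength dependence.

BLUE_NM = 450.0  # representative blue wavelength, nanometers
RED_NM = 650.0   # representative red wavelength, nanometers

ratio = (RED_NM / BLUE_NM) ** 4
print(f"Blue light scattered ~{ratio:.1f}x more strongly than red")
# -> about 4x under the Rayleigh assumption
```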
In the pre-atomic age, the source of heat in stars was a mystery to astronomers, who sought to reconcile apparent contradictions regarding how stars were able to maintain their size (i.e., not shrink as they burned fuel) while radiating the tremendous amounts of energy measured. Using conventional concepts of energy consumption and production, the Sun could be calculated to be less than a few thousand years old, an age consistent with then-dominant religious beliefs.
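A back-of-the-envelope sketch, using modern round values rather than the figures available at the time, shows the scale of the problem:

```python
# Rough estimate: how long could the Sun shine on chemical fuel alone?
# Modern rounded values are used for illustration; nineteenth-century
# estimates differed in detail but reached the same conclusion.

SOLAR_MASS = 2.0e30        # kg
SOLAR_LUMINOSITY = 3.8e26  # watts (joules radiated per second)
CHEMICAL_YIELD = 1.0e7     # J/kg, a generous yield for chemical combustion

total_energy = SOLAR_MASS * CHEMICAL_YIELD    # joules available
lifetime_s = total_energy / SOLAR_LUMINOSITY  # seconds of shining
lifetime_years = lifetime_s / (3600 * 24 * 365.25)

print(f"Chemical-burning lifetime: ~{lifetime_years:,.0f} years")
# -> on the order of a few thousand years, far too short to be credible
```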
In the early 1910s, Danish astronomer Ejnar Hertzsprung (1873-1967) and American astronomer Henry Norris Russell (1877-1957) independently developed what is now known as the Hertzsprung-Russell diagram. In the Hertzsprung-Russell diagram the spectral type (or, equivalently, color index or surface temperature) is placed along the horizontal axis and the absolute magnitude (or luminosity) along the vertical axis. Accordingly, stars are placed from top to bottom in order of increasing magnitude (decreasing brightness) and from right to left in order of increasing temperature and spectral class.
The relation of stellar color to brightness was a fundamental advance in modern astronomy. The correlation of color with true brightness eventually became the basis of a widely used method of deducing spectroscopic parallaxes, which allowed astronomers to estimate the distances of remote stars from Earth (distances to closer stars could be measured by geometric parallax).
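In modern terms the method rests on the distance modulus, m - M = 5 log10(d) - 5, where m is the apparent magnitude, M is the absolute magnitude inferred from the spectrum, and d is the distance in parsecs. A minimal sketch follows; the sample values are illustrative, not drawn from the article:

```python
def spectroscopic_distance_pc(apparent_mag: float, absolute_mag: float) -> float:
    """Distance in parsecs from the distance modulus m - M = 5*log10(d) - 5."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Illustrative example: a star whose spectrum implies absolute magnitude 4.8
# (Sun-like) but which appears at apparent magnitude 9.8.
print(f"{spectroscopic_distance_pc(9.8, 4.8):.0f} parsecs")  # -> 100 parsecs
```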
In the Hertzsprung-Russell diagram, main sequence stars (those later understood to be burning hydrogen as a nuclear fuel) form a prominent band running from extremely bright, hot stars in the upper left-hand corner of the diagram to faint, relatively cool stars in the lower right-hand corner. Because most stars are main sequence stars, most stars fall within this band. The Sun, for example, is a main sequence star that lies roughly in the middle of the diagram, among what are referred to as yellow dwarfs.
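For readers who wish to visualize the layout, the sketch below draws a purely schematic main sequence band. The axis conventions (temperature increasing leftward, luminosity upward) are those of the diagram itself, but the band's slope and values are illustrative assumptions, not catalog data; numpy and matplotlib are assumed available.

```python
import numpy as np
import matplotlib.pyplot as plt

# Schematic Hertzsprung-Russell diagram. Hotter stars sit to the LEFT
# (the temperature axis is inverted by convention); brighter stars sit
# toward the top. The diagonal band is an idealized main sequence.

temps = np.logspace(np.log10(3000), np.log10(30000), 200)  # kelvin
luminosity = (temps / 5800.0) ** 7  # illustrative band shape, solar units

plt.scatter(temps, luminosity, s=4)
plt.xscale("log")
plt.yscale("log")
plt.gca().invert_xaxis()  # temperature increases to the left
plt.xlabel("Surface temperature (K)")
plt.ylabel("Luminosity (solar units)")
plt.title("Schematic H-R diagram: the main sequence band")
plt.show()
```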
Russell attempted to explain the presence of giant stars as the result of large gravitational clumps. Stars, according to Russell, would move down the chart as they burned off mass as fuel: they would begin their life cycle as huge, cool red bodies and then undergo continual shrinkage as they heated. Although the Hertzsprung-Russell diagram was an important advance in understanding stellar evolution (and it remains highly useful to modern astronomers), Russell's reasoning behind the movements of stars on the diagram turned out to be exactly the opposite of the modern understanding of stellar evolution, which is based on an understanding of the Sun and stars as thermonuclear reactors.
Advances in quantum theory and improved models of atomic structure made it clear to early twentieth-century astronomers that deeper understanding of the life cycle of stars and of cosmological theories explaining the vastness of space was linked to advances in the understanding of the inner workings of the universe on an atomic scale. A complete understanding of the energetics of mass conversion in stars was provided by Albert Einstein's (1879-1955) special theory of relativity and his relation of mass to energy (Energy = mass times the speed of light squared, or E = mc²).
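A short sketch conveys the scale the relation implies for the Sun; the luminosity figure is a modern value, not one available to Einstein's contemporaries:

```python
# How much mass does the Sun convert into energy each second?
# From E = m*c**2, the mass equivalent of the radiated energy is m = E / c**2.
# Modern rounded values, for illustration only.

C = 3.0e8                  # speed of light, m/s
SOLAR_LUMINOSITY = 3.8e26  # energy radiated per second, watts

mass_per_second = SOLAR_LUMINOSITY / C**2  # kilograms converted each second
print(f"~{mass_per_second / 1e9:.1f} billion kg converted per second")
# -> roughly 4 billion kilograms (about 4 million metric tons) every second
```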
During the 1920s, based on the principles of quantum mechanics, British physicist Ralph H. Fowler (1889-1944) determined that, in contrast to the predictions of Russell, a white dwarf would become smaller as its mass increased.
Indian-born American astrophysicist Subrahmanyan Chandrasekhar (1910-1995) first classified the late stages of stellar evolution into supernovae, white dwarfs, and neutron stars, and predicted the conditions required for the formation of black holes, subsequently found in the latter half of the twentieth century. Prior to World War II, American physicist J. Robert Oppenheimer (1904-1967), who ultimately directed the Manhattan Project laboratory that built the first atomic bombs, made detailed calculations reconciling Chandrasekhar's predictions with general relativity theory.
Over time, as the mechanisms of atomic fission and fusion worked their way into astronomical theory, it became apparent that stars spend approximately 90% of their lives as main sequence stars before the fate dictated by their mass becomes manifest.
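A rough scaling sketch shows why mass sets the tempo: if luminosity grows with mass roughly as L ∝ M^3.5 (the textbook mass-luminosity relation, an assumption not stated in the article), then lifetime scales as fuel over burn rate, t ∝ M/L ∝ M^-2.5:

```python
# Why massive stars die young: lifetime ~ fuel / burn rate ~ M / L.
# Assumes the textbook main-sequence mass-luminosity relation L ~ M**3.5,
# giving t ~ M**-2.5. Values are rough illustrations.

SUN_LIFETIME_GYR = 10.0  # approximate main-sequence lifetime of the Sun

def main_sequence_lifetime_gyr(mass_solar: float) -> float:
    """Approximate main-sequence lifetime, in billions of years."""
    return SUN_LIFETIME_GYR * mass_solar ** -2.5

for mass in (0.5, 1.0, 10.0):
    print(f"{mass:>4} solar masses -> ~{main_sequence_lifetime_gyr(mass):.3g} Gyr")
# 0.5 -> ~56.6 Gyr, 1.0 -> 10 Gyr, 10.0 -> ~0.0316 Gyr (about 30 million years)
```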
Astronomers refined concepts regarding stellar birth. Eventually, when a protostar contracts enough, the rise in its core temperature triggers nuclear fusion, and the star becomes visible as it vaporizes the surrounding cocoon of gas and dust. Stars then spend the majority of their lives as main sequence stars, by definition burning hydrogen as their nuclear fuel.
It was the death of the stars, however, that provided the most captivating consequences.
Throughout the life of a star, a tense tug-of-war exists between the compressing force of the star's own gravity and the expanding pressure generated by nuclear reactions at its core. After cycles of swelling and contraction associated with the burning of progressively heavier nuclear fuels, the star eventually runs out of usable nuclear fuel. The spent star then contracts under the pull of its own gravity. The ultimate fate of any individual star is determined by the mass left after it blows away its outer layers during its paroxysmal death spasms.
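In modern notation (not the language of the period), this tug-of-war is captured by the equation of hydrostatic equilibrium, in which the outward pressure gradient balances the inward pull of gravity at every radius r:

$$ \frac{dP}{dr} = -\,\frac{G\,m(r)\,\rho(r)}{r^{2}} $$

Here P is the pressure, m(r) is the mass enclosed within radius r, and ρ(r) is the local density. When fusion can no longer maintain the pressure gradient, gravity wins and the star contracts.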
Low mass stars can fuse only hydrogen; when the hydrogen is used up, fusion stops, and the expended star shrinks to become a white dwarf.
Medium mass stars swell to become red giants, blowing off their outer layers as planetary nebulae before shrinking to white dwarfs. A star remnant of less than 1.44 times the mass of the Sun (termed the Chandrasekhar limit) collapses until the increasingly compacted electron clouds exert enough pressure to balance the collapsing gravitational force. Such stars become "white dwarfs," contracted to a radius of only a few thousand kilometers, roughly the size of a planet. This is the fate of our Sun.
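A minimal sketch, using rounded modern values, conveys the compaction involved:

```python
import math

# Mean density of a white dwarf: roughly one solar mass compressed into
# a planet-sized sphere. Rounded modern values, for illustration only.

SOLAR_MASS = 2.0e30  # kg
RADIUS_M = 6.0e6     # meters; "a few thousand kilometers," about Earth's size

volume = (4.0 / 3.0) * math.pi * RADIUS_M**3
density = SOLAR_MASS / volume
print(f"Mean density: ~{density:.1e} kg/m^3")
# -> ~2e9 kg/m^3, millions of times the density of water
```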
High mass stars can either undergo carbon detonation or additional fusion cycles that create and then consume increasingly heavy elements as nuclear fuel. Regardless, these fusion cycles can build elements only up to iron (the main product of silicon fusion). Eventually, as iron accumulates in the core, the core can exceed the Chandrasekhar limit of 1.44 times the mass of the Sun and collapse. This preliminary theoretical understanding paved the way for many of the discoveries of the second half of the twentieth century, when it was more fully understood that as electrons are driven into protons, neutrons are formed and energy is released as gamma rays and neutrinos. After blowing off its outer layers in a Type II supernova explosion, the remnant of the star forms a neutron star and/or pulsar (pulsars were discovered in the late 1960s).
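In modern notation, the process by which electrons are driven into protons during core collapse is electron capture (inverse beta decay):

$$ e^{-} + p \rightarrow n + \nu_{e} $$

The neutrinos produced stream out of the collapsing core, carrying away enormous amounts of energy.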
Although he did not completely understand the nuclear mechanisms (nor, of course, the more modern terminology applied to those concepts), Chandrasekhar's work allowed for the prediction that such neutron stars would be only a few kilometers in radius and that within such a neutron star the nuclear forces and the repulsion of the compressed atomic nuclei balanced the crushing force of gravity. With more massive stars, however, there was no known force in the universe that could withstand the gravitational collapse. Such extraordinary stars would continue their collapse to form a singularity—a star collapsed to a point of infinite density. According to general relativity, as such a star collapses its gravitational field warps space-time so intensely that not even light can escape, and a "black hole" forms.
Although the modern terminology presented here was not the language of early twentieth-century astronomers, German astronomer Karl Schwarzschild (1873-1916) made important early contributions toward understanding how space is geometrically warped around such a singularity according to Einstein's general relativity theory.
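In modern terms, Schwarzschild's solution defines the radius (now called the Schwarzschild radius, r_s = 2GM/c²) to which a mass must be compressed before even light is trapped. The short sketch below uses modern constants for illustration:

```python
# Schwarzschild radius r_s = 2*G*M / c**2: the radius within which
# nothing, not even light, can escape a mass M.
# Modern rounded constants, for illustration only.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 3.0e8            # speed of light, m/s
SOLAR_MASS = 2.0e30  # kg

def schwarzschild_radius_m(mass_kg: float) -> float:
    """Radius (meters) of the event horizon for a given mass."""
    return 2 * G * mass_kg / C**2

print(f"Sun: ~{schwarzschild_radius_m(SOLAR_MASS) / 1000:.1f} km")
# -> ~3 km: the Sun would have to shrink to this radius to become a black hole
```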
Impact
Several important concepts stemming from the study of stellar evolution have had an enormous impact on science and philosophy in general. Most importantly, the articulation of the stellar evolutionary cycle had a profound effect on the cosmological theories developed during the first half of the twentieth century, culminating in the Big Bang theory, first proposed by Russian physicist Alexander Friedmann (1888-1925) and Belgian astronomer Georges Lemaître (1894-1966) in the 1920s and subsequently modified by Russian-born American physicist George Gamow (1904-1968) in the 1940s.
The observations and theories regarding the evolution of stars meant that only hydrogen, helium, and perhaps a smattering of lithium were produced in the Big Bang. The heavier elements, including carbon, oxygen, and iron, were determined to have their genesis in the cores of increasingly massive dying stars. The energy released in the supernova explosions surrounding stellar death created shock waves that gave birth via fusion to still heavier elements and allowed the creation of radioactive isotopes.
The philosophical implications of this were as startling as the quantum and relativity theories underpinning the model. Essentially all of the elements heavier than hydrogen that make up everyday human existence were literally cooked, in a process termed nucleosynthesis, during the paroxysms of stellar death. Great supernova explosions scattered these elements across the cosmos.
By the mid-twentieth century, man could look into the night sky and realize that he is made of the dust of stars.
K. LEE LERNER
Further Reading
Hawking, Stephen. A Brief History of Time. New York: Bantam Books, 1988.
Hoyle, Fred. Astronomy. Garden City, New York: Doubleday, 1962.
Sagan, Carl. Cosmos. New York: Random House, 1980.
Trefil, James. Space, Time, Infinity. New York: Pantheon Books, 1985.