Stellar Life Cycle
Until the last half of the nineteenth century, astronomy was principally concerned with the accurate description of the movements of planets and stars. Developments in electromagnetic theories of light along with the articulation of quantum and relativity theories at the start of the twentieth century, however, allowed astronomers to probe the inner workings of the stars. Of primary concern was an attempt to coherently explain the life cycle of the stars and to reconcile the predictions of advances in physical theory with astronomical observation. Profound questions regarding the birth and death of stars led to the stunning conclusion that, in a very real sense, life itself was a product of stellar evolution.
It is now known that a star's mass determines its ultimate fate. More massive stars burn their fuel more quickly and lead shorter lives. These facts would have astonished astronomers working at the dawn of the twentieth century, when the source of the heat and light generated by the Sun and stars could not be accurately explained for lack of an understanding of nuclear processes.
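The inverse relation between mass and lifetime can be sketched numerically. The scaling below is a standard textbook approximation (main-sequence lifetime roughly proportional to mass to the -2.5 power, derived from the empirical mass-luminosity relation), not a precise model; the function name and the 10-billion-year solar normalization are illustrative choices.

```python
# Rough main-sequence lifetime scaling (illustrative sketch).
# With the approximate mass-luminosity relation L ~ M**3.5, the
# lifetime t ~ fuel/burn rate ~ M/L ~ M**-2.5, normalized to
# ~10 billion years (Gyr) for one solar mass.

def main_sequence_lifetime_gyr(mass_solar):
    """Estimate main-sequence lifetime in billions of years."""
    return 10.0 * mass_solar ** -2.5

print(main_sequence_lifetime_gyr(1.0))   # the Sun: ~10 Gyr
print(main_sequence_lifetime_gyr(10.0))  # a 10-solar-mass star: ~0.03 Gyr
print(main_sequence_lifetime_gyr(0.5))   # a red dwarf: ~57 Gyr
```

Under this rough scaling, a ten-solar-mass star exhausts its hydrogen in tens of millions of years, while a half-solar-mass star outlasts the current age of the universe.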
Based on Newtonian concepts of gravity, many astronomers understood that stars formed in clouds of gas and dust, termed nebulae, measured to be light-years across. These great molecular clouds, so thick they are often opaque in parts, teased astronomers. Decades later, after the development of quantum theory, astronomers were able to develop a better understanding of the energetics behind the "reddening" of light leaving stellar nurseries (reddened because blue light is scattered more, while red light passes more directly through to the observer). Astronomers speculated that stars formed when regions of these clouds collapsed (termed gravitational clumping). As a region collapsed, the infalling material accelerated and heated, forming a protostar heated by gravitational friction.
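The wavelength dependence of scattering can be illustrated with a simple calculation. The inverse-fourth-power (Rayleigh) law used here is a simplified stand-in chosen for illustration; real interstellar dust extinction follows a shallower wavelength dependence, but the qualitative conclusion, that blue light is scattered more strongly than red, is the same.

```python
# Simplified picture of reddening: shorter (bluer) wavelengths scatter
# more strongly than longer (redder) ones. Rayleigh scattering
# (strength ~ 1/wavelength**4) is used as an illustrative assumption.

def relative_scattering(wavelength_nm, reference_nm=550.0):
    """Scattering strength relative to a reference wavelength."""
    return (reference_nm / wavelength_nm) ** 4

blue, red = 450.0, 650.0
# Blue light is scattered several times more strongly than red,
# so the light that reaches the observer directly looks reddened.
print(relative_scattering(blue) / relative_scattering(red))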
In the pre-atomic age, the source of heat in stars eluded astronomers, who sought to reconcile seeming contradictions regarding how stars were able to maintain their size (i.e., not shrink as they burned fuel) while radiating the tremendous amounts of energy measured. Using conventional concepts of energy consumption and production, the Sun could be calculated to be less than a few thousand years old, an age in accord with dominant religious beliefs.
In 1913, Danish astronomer Ejnar Hertzsprung (1873–1967) and American astronomer Henry Norris Russell (1877–1957) independently developed what is now known as the Hertzsprung-Russell diagram. In the Hertzsprung-Russell diagram, the spectral type (or, equivalently, color index or surface temperature) is placed along the horizontal axis and the absolute magnitude (or luminosity) along the vertical axis. Accordingly, stars are assigned places top to bottom in order of increasing magnitude (decreasing brightness) and from right to left by increasing temperature and spectral class.
The relation of stellar color to brightness was a fundamental advance in modern astronomy. The correlation of color with true brightness eventually became the basis of a widely used method of deducing spectroscopic parallaxes, allowing astronomers to estimate how far distant stars lie from the Earth (distances to closer stars could be measured by geometrical parallax).
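The step from inferred true brightness to distance uses the distance modulus, a standard relation: m - M = 5 log10(d / 10 pc), where m is the apparent magnitude, M the absolute magnitude deduced from the star's spectral type, and d the distance in parsecs. A minimal sketch:

```python
# Spectroscopic parallax sketch: once a star's absolute magnitude M
# is inferred from its spectrum, the distance modulus
#   m - M = 5 * log10(d / 10 pc)
# yields the distance d in parsecs.

def distance_parsecs(apparent_mag, absolute_mag):
    """Distance in parsecs from the distance modulus."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A Sun-like star (absolute magnitude ~5) appearing at apparent
# magnitude 10 lies at 100 parsecs.
print(distance_parsecs(10.0, 5.0))  # 100.0
```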
On the Hertzsprung-Russell diagram, main-sequence stars (those later understood to be burning hydrogen as a nuclear fuel) form a prominent band running from extremely bright, hot stars in the upper left-hand corner to faint, relatively cooler stars in the lower right-hand corner. Because stars spend most of their lives burning hydrogen, most stars fall into this band. The Sun, for example, is a main-sequence star that lies roughly in the middle of the diagram among what are referred to as yellow dwarfs.
Russell attempted to explain the presence of giant stars as the result of large gravitational clumps. Stars, according to Russell, would move down the chart as they lost the mass burned as fuel: they began life as huge, cool red bodies and then underwent continual shrinkage as they heated. Although the Hertzsprung-Russell diagram was an important advance in understanding stellar evolution—and it remains highly useful to modern astronomers—Russell's reasoning behind the movements of stars on the diagram turned out to be exactly the opposite of the modern understanding of stellar evolution made possible by an understanding of the Sun and stars as thermonuclear reactors.
Advances in quantum theory and improved models of atomic structure made it clear to early twentieth-century astronomers that a deeper understanding of the life cycle of stars, and of cosmological theories explaining the vastness of space, would be forever tied to advances in understanding the inner workings of the universe on an atomic scale. A complete understanding of the energetics of mass conversion in stars was provided by Albert Einstein's (1879–1955) special theory of relativity and his relation of mass to energy (energy equals mass times the square of the speed of light, E = mc²).
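The scale of this mass-to-energy conversion is easy to compute directly from E = mc²; the figure for hydrogen fusion in the comment is the standard textbook value.

```python
# E = m * c**2: the energy released when mass is converted in
# stellar fusion. Fusing hydrogen into helium converts roughly
# 0.7% of the fuel's rest mass into energy (standard textbook figure).

C = 2.998e8  # speed of light, m/s

def mass_to_energy_joules(mass_kg):
    """Energy equivalent of a given mass, in joules."""
    return mass_kg * C ** 2

# Converting just 1 kg of mass releases about 9e16 joules.
print(mass_to_energy_joules(1.0))
```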
During the 1920s, based on the principles of quantum mechanics, British physicist Ralph H. Fowler (1889–1944) determined that, in contrast to the predictions of Russell, a white dwarf would become smaller as its mass increased.
Indian-born American astrophysicist Subrahmanyan Chandrasekhar (1910–1995) first articulated the evolution of stars into supernovae, white dwarfs, and neutron stars, and predicted the conditions required for the formation of black holes, which were subsequently found in the latter half of the twentieth century. Before World War II intervened, American physicist J. Robert Oppenheimer (1904–1967), who ultimately directed the Manhattan Project laboratory that built the first atomic bombs, made detailed calculations reconciling Chandrasekhar's predictions with general relativity theory.
In the decade that followed, as the mechanisms of atomic fission and fusion worked their way into astronomical theory, it became apparent that stars spend approximately ninety percent of their lives as main sequence stars before the fate dictated by their mass becomes manifest.
Astronomers refined concepts regarding stellar birth. Eventually, when a protostar contracts enough, the increase in its temperature triggers nuclear fusion, and the star becomes visible as it vaporizes the surrounding cocoon. Stars then spend the majority of their lives as main-sequence stars, by definition burning hydrogen as their nuclear fuel.
It was the death of stars, however, that provided the most captivating consequences. Throughout the life of a star, a tug-of-war exists between the compressing force of the star's own gravity and the expanding pressures generated by nuclear reactions at its core. After cycles of swelling and contraction associated with the burning of progressively heavier nuclear fuels, the star eventually runs out of usable nuclear fuel. The spent star then contracts under the pull of its own gravity. The ultimate fate of any individual star is determined by the mass left after it blows away its outer layers during its paroxysmal death spasms.
Low-mass stars can fuse only hydrogen, and when the hydrogen is used up, fusion stops. The expended star shrinks to become a white dwarf.
Medium-mass stars swell to become red giants, blowing off planetary nebulae in massive explosions before shrinking to white dwarfs. A star remnant less than 1.44 times the mass of the Sun (termed the Chandrasekhar limit) collapses until the increasingly compacted electron clouds exert enough pressure to balance the collapsing gravitational force. Such stars become "white dwarfs," contracted to a radius of only a few thousand kilometers, roughly the size of a planet. This is the fate of the Sun.
High-mass stars can either undergo carbon detonation or additional fusion cycles that create and then consume increasingly heavier elements as nuclear fuel. Regardless, the fusion cycles can only use elements up to iron (the main product of silicon fusion). Eventually, as iron accumulates in the core, the core can exceed the Chandrasekhar limit of 1.44 times the mass of the Sun and collapse. These preliminary theoretical understandings paved the way for many of the discoveries of the second half of the twentieth century, when it was more fully understood that as electrons are driven into protons, neutrons are formed and energy is released as gamma rays and neutrinos. After blowing off its outer layers in a supernova explosion (type II), the remnant of the star forms a neutron star, observable in some cases as a pulsar (pulsars were discovered in the late 1960s).
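The mass thresholds described above can be collected into a simple classification sketch. The Chandrasekhar limit is the value quoted in this article; the upper limit for neutron stars (the Tolman-Oppenheimer-Volkoff limit) is only known approximately, and the ~3-solar-mass figure below is a commonly quoted rough value, not a precise constant.

```python
# Sketch of remnant fates by leftover mass (after the outer layers
# are shed). Thresholds are the commonly quoted approximate values.

CHANDRASEKHAR_LIMIT = 1.44  # solar masses
NEUTRON_STAR_LIMIT = 3.0    # solar masses; approximate (TOV limit)

def remnant_fate(remnant_mass_solar):
    """Classify a stellar remnant by its mass in solar masses."""
    if remnant_mass_solar < CHANDRASEKHAR_LIMIT:
        return "white dwarf"
    elif remnant_mass_solar < NEUTRON_STAR_LIMIT:
        return "neutron star"
    else:
        return "black hole"

print(remnant_fate(1.0))  # the Sun's eventual remnant: white dwarf
print(remnant_fate(2.0))  # neutron star
print(remnant_fate(5.0))  # black hole
```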
Although Chandrasekhar did not completely understand the nuclear mechanisms (nor, of course, use the more modern terminology applied to those concepts), his work allowed the prediction that such neutron stars would be only a few kilometers in radius and that, within such a star, the nuclear forces and the repulsion of the compressed atomic nuclei would balance the crushing force of gravity. With more massive stars, however, there was no known force in the universe that could withstand the gravitational collapse. Such extraordinary stars would continue their collapse to form a singularity—a star collapsed to a point of infinite density. According to general relativity, the gravitational field of such a collapsing star warps spacetime so intensely that not even light can escape, and a black hole forms.
Although the modern terminology presented here was not the language of early twentieth-century astronomers, German astronomer Karl Schwarzschild (1873–1916) made important early contributions to understanding the geometry of space around a singularity as warped according to Einstein's general relativity theory.
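Schwarzschild's solution yields the critical radius that now bears his name, r = 2GM/c², inside which collapse traps even light. A quick calculation with standard values of the constants:

```python
# Schwarzschild radius r_s = 2*G*M / c**2: the radius of the event
# horizon for a mass M, computed with standard physical constants.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
SOLAR_MASS = 1.989e30  # kg

def schwarzschild_radius_m(mass_kg):
    """Event-horizon radius in meters for a given mass."""
    return 2 * G * mass_kg / C ** 2

# For one solar mass this comes to about 3 kilometers: the Sun
# would become a black hole only if compressed to that size.
print(schwarzschild_radius_m(SOLAR_MASS) / 1000)  # ~2.95 km
```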
There are several important concepts stemming from the evolution of stars that have enormous impact on science and philosophy in general. Most importantly, the articulation of the stellar evolutionary cycle had profound effects on the cosmological theories developed during the first half of the twentieth century, which culminated in the Big Bang theory, first proposed by Russian physicist Alexander Friedmann (1888–1925) and Belgian astronomer Georges Lemaître (1894–1966) in the 1920s and subsequently modified by Russian-born American physicist George Gamow (1904–1968) in the 1940s.
The observations and theories regarding the evolution of stars meant that only hydrogen, helium, and perhaps a smattering of lithium were produced in the big bang. The heavier elements, including, of course, carbon and oxygen, up to iron, were determined to have their genesis in the cores of increasingly massive dying stars. The energy released in the supernova explosions surrounding stellar death created shock waves that gave birth via fusion to still heavier elements and allowed the creation of radioactive isotopes.
The philosophical implications of this were as startling as the quantum and relativity theories that underpinned the model. Essentially, all of the elements heavier than hydrogen that comprise man's everyday existence were literally cooked in a process termed nucleosynthesis that took place during the paroxysms of stellar death. Great supernova explosions scattered these elements across the cosmos.
See also Cosmology