Weaponry, Evolution of


Weapons are instruments designed to harm, kill, or otherwise disable other human beings, to destroy other military resources, or to deter an enemy from making war through the actual or threatened destruction of crucial components of its society. Broadly conceived, weapons include not only the instruments themselves and their munitions but also their delivery vehicles—so‐called weapons platforms: tanks, ships, aircraft, missile launchers. Today, the combinations are often labeled weapons systems.

Because weapons, like other physical objects, operate under natural laws, discoveries in chemistry, physics, quantum mechanics, and other areas of science and technology have helped propel both the industrial revolution and the dramatic expansion of weaponry in the nineteenth and twentieth centuries. Thus, as shown in the accompanying articles on the development of weaponry in the army, marine corps, navy, and air force, the U.S. armed forces have followed and sometimes originated major developments in science and technology.

Improvements in metallurgy, for example, created stronger gun barrels. These could withstand more powerful explosive charges, themselves the result of chemical discoveries. Stronger rifled barrels in turn provided more accuracy and longer range for standard infantry small arms, artillery, and naval guns. Mechanical improvements eventually produced automatic weapons, including self‐loading magazine rifles and machine guns. The internal combustion engine led to the development of submarines, tanks, and aircraft in the twentieth century. At sea, steam power, iron, and steel transformed naval warships in the nineteenth century. Later, submarines were transformed by new alloys, shapes, and nuclear propulsion. Aircraft made the transition from fabric and wood to aluminum in the 1930s, then more recently in some cases to titanium, carbon‐fiber composites, and high‐strength plastics. Experiments in rocketry combined with developments in guidance mechanisms and gas‐turbine engines led to jet aircraft and to ballistic and cruise missiles. Computer technology and electronic sensing and guidance systems have dramatically improved fire control and accuracy, leading eventually to precision‐guided munitions designed to make corrections in flight and on final approach to the target.

Weapons of mass destruction are in a class by themselves. Choking and burning forms of poison gas first developed in World War I were later augmented by nerve agents. Stocks of infectious microbes and other toxins were accumulated for chemical and biological weapons and warfare, but the controversy over the use of Agent Orange and other defoliants in the Vietnam War led President Richard Nixon to renounce biological and toxin weapons, to begin destroying the stocks of toxic agents, and to ratify an international agreement prohibiting them. In 1997, the United States ratified a treaty banning poison gas weapons.

The development of nuclear fission weapons and later thermonuclear fusion weapons represented an incomparable revolution in weaponry. Yet their enormous lethality contributed to a universal refusal to use the weapons after the bombing of Hiroshima and Nagasaki. Thus nuclear weapons have become predominantly instruments of threat, operating in a nuclear strategy described as deterrence. The proliferation of such weapons to additional countries and possibly eventually to terrorist groups has long threatened to weaken the taboo against their use. Attempts to curtail weapons of mass destruction have been offset in part by the growing lethality and destructiveness of conventional arms.

But the evolution of weaponry has not been simply a narrow history of scientific invention or technological development. Weapons are artifacts both of the armed forces and of the societies that create them. Essential to the conduct of war, they can in part be understood through the functions they are expected to perform in warfare on the land, at sea, or in the air. But a fuller understanding of their evolution comes from recognizing that their origins and development stem not only from particular inventions but also from larger cultural attitudes and ideologies, and from the political, military, economic, and other institutional structures in society that help define national security and allocate resources for defense.

Technology—the purposeful, systematic manipulation of the material world—encompasses, of course, inventions for both civilian and military use. Increasingly in the past two centuries, radically new science‐based technologies—inventions providing new power sources and means of transportation and communication, for example—have had a transformative effect on society, and on warfare. But despite widespread popular belief in technology as a determinative agent of change, indeed as part of the culture of modernity, a debate continues over the inevitability of the social consequences of particular major inventions. While some see technology as a virtually autonomous agent of change, others contextualize it in larger socio‐cultural processes. The latter emphasize that material innovation is initiated and developed, or not developed, by human beings with particular abilities and resources (the gun was largely banned from feudal Japan, for example, for more than two centuries; see Noel Perrin, Giving Up the Gun: Japan's Reversion to the Sword, 1543–1879, 1979). Despite the power of a technological development once it has begun, the beginning and end of every such sequence, as Robert L. O'Connell (Of Arms and Men, 1989) has said, is a point when human choice can and does exert itself.

Dedicated to the idea of progress and heirs of the Enlightenment, Americans have traditionally embraced science and technology as instruments for human and material betterment as well as national security. Ingenuity and invention have been valued attributes, protected legally and rewarded economically. All of this encouraged technological development and change.

The military, however, has traditionally neither sought nor often welcomed change. Virtually all of the most important military devices invented in nineteenth‐century Europe or America—the breech‐loading rifle; built‐up, rifled steel cannon; effective armored warships; the automatic machine gun; the modern submarine—originated with civilians who brought them uninvited to the military. None of the most important weapons transforming warfare in the twentieth century—the airplane, tank, radar, jet engine, helicopter, electronic computer, not even the atomic bomb—owed its initial development to a doctrinal requirement or request of the military.

Despite their desire for more weapons, most admirals and generals until World War II had been reluctant to adopt new and unproven weapons. The U.S. Army initially rejected development of the revolver, the repeating rifle, and the machine gun in the mid‐nineteenth century. It suppressed generations of available improvements in artillery in the nineteenth and twentieth centuries. And until the eve of World War II, it delayed development of the tank, which later became its most favored weapon. The U.S. Navy rejected or resisted pivotal inventions by David Bushnell, Robert Fulton, Samuel Colt, and John Ericsson, and it suppressed and sometimes even persecuted such uniformed technological reformers as John Dahlgren, William Sims, and Hyman Rickover. Even in its comparatively short history, the U.S. Air Force, with its dedication to piloted planes, initially resisted liquid‐fueled and solid‐fueled missiles, cruise missiles, and unmanned spacecraft.

The armed forces tend to be even less flexible than most other large bureaucratic organizations. In part, this results from their compartmentalization, need for standardization, innate conservatism, and the limitations imposed on them by Congress. Partly it is because military organizations, designed to operate at great risk in a medium of enormous uncertainty—the unpredictability and chaos of war—have emphasized discipline and subordination in a rigidly hierarchical command structure. But the reluctance of the military bureaucracy to innovate has other sources as well. Traditionally, it reflected a dedication to an existing weapon already proven in combat and integrated into doctrine and training (and deployed at great expense) over uncertainties about a projected weapon, which might or might not eventually prove itself in combat. The new weapon's failure, of course, might well mean the deaths of many of those relying on it. High‐ranking officers with the power to make such decisions often owe their lives and their careers to particular weapons and doctrines. The officer corps of each branch is a community, and as Elting Morison (Men, Machines, and Modern Times, 1967) suggests, communities, particularly to the degree that they are autonomous and isolated from external influence, are often resistant to change. Particularly with radical innovation, resistance may stem from concerns about the costs of purchasing an expensive but unproven technology or fears of its potential impact upon the structure, status, and traditions of the organization. Officers of the sailing‐ship navy in the mid‐nineteenth century were correct in their fears that the replacement of sails by steam propulsion would mean the end of an entire way of life.

Civilian leaders have often been more receptive to radical new weapons technologies than the military. Consequently, uniformed reformers, civilian inventors, or corporate manufacturers have often circumvented the military bureaucracy through political connections. Frustrated, Samuel Colt sent his proposal for underwater mines directly to Congress; Dahlgren took his ideas about a new naval gun to President Abraham Lincoln; and William Sims relayed his proposals for rapid‐firing gunnery directly to President Theodore Roosevelt. Less successfully, Billy Mitchell took his case for air power to the public in an abortive attempt to exert public pressure on Congress and President Calvin Coolidge.

Although traditionally not the initiator of new weapons (a pattern reversed since World War II, when the military became the initiator), the military has often been quite successful in developing those that it became convinced were warranted. In time of war or continuing danger to national security, the government has mobilized enormous financial resources for the military, particularly for weaponry. Before World War II, most of America's wars were too short to be fought with weapons other than those on hand or in development at the beginning of the conflict (the lead time for research and development of modern sophisticated weapons can run fifteen years or more). The atomic bomb, developed in a massive effort under the supervision of the Army Corps of Engineers' Manhattan Project in three years (1943–45), was an exception.

Once invented and adopted, military weapons have been produced in the United States either by government facilities or, more commonly in the twentieth century, by corporate manufacturers under government contract. The new republic used its own national armories at Springfield, Massachusetts, left from the Revolutionary War, and Harpers Ferry, Virginia, newly constructed by 1801. After decades of producing small arms by hand, by 1842 the armories introduced large‐scale assembly of muskets from uniform, interchangeable parts. Together with their private competitors, such as Colt's factory in Hartford, Connecticut, the federal armories became important centers of technological and manufacturing innovation, contributing to what arms makers and others around the world soon called the “American system of manufactures.” To make cannon, caissons, gunpowder, and other military supplies, the government possessed five federal arsenals, in or near Boston, upstate New York, Philadelphia, Pittsburgh, and Washington, D.C. (with later additions at Rock Island, Illinois, and Fayetteville, North Carolina).

Thus in the nineteenth century, government manufacturing for the military provided a means to continue technological development when private manufacturers, facing a market that offered large‐scale profits for such items mainly in wartime, feared uncertain economic returns. During the Civil War, the military‐run government facilities ran at full capacity while also providing the specifications and techniques for private subcontractors to mass‐produce arms for the Union Army. The first ships of the U.S. Navy were built in half a dozen private shipyards along the Atlantic Coast in the 1790s. Later, government navy yards were erected to repair the fleet and for some new construction, but the Navy Department always relied more on private contractors than on its own yards for the construction of new vessels, whether in the wooden, iron, or steel navy.

After the Civil War, spending cutbacks and other factors left the U.S. Army by 1900 a decade behind European militaries in the development of small arms and artillery. The increasing complexity of weaponry in the twentieth century and the possibilities of sustained high economic profits, first in research and development for the navy, then for the air service, and finally for the ground forces, led corporations to become continuing military contractors and the government to phase out most of its own armories, arsenals, and shipyards for conventional weapons. The U.S. government continued, however, to underwrite National Laboratories for research and development of nuclear weapons.

Production of weapons has always been profitable for private entrepreneurs in wartime, but the Cold War (1947–1991) produced a market of unprecedented duration and size for weapons. Scholars debate the origins of what President Dwight D. Eisenhower in 1961 called the “Military‐Industrial Complex,” some seeing its antecedents in the steel and steam naval construction program of the late nineteenth and early twentieth centuries, others in the nexus established between the army air service, aircraft manufacturers, and Congress in the 1920s and 1930s. Whatever the origins, the scale of industrial development and production of weaponry on a sustained basis has grown extraordinarily in the last sixty years, a period during which, as Michael Sherry (In the Shadow of War, 1995) has written, Americans have lived in the shadow of war since 1939.

The politically influential, triangular relationship between the military, defense contractors, and Congress meant that a comparatively few giant corporations that dominated the defense contracting industry were essentially guaranteed a sustained market by the U.S. government. During the Cold War, the arms race between the United States and its NATO allies and the Soviet Union and the other Warsaw Pact nations encompassed conventional and nuclear weapons. The threat of nuclear war and the concept of deterrence meant a sustained condition of constant readiness for war, which led the U.S. military to modify some of its traditional resistance to declaring proven weapons obsolete or at least obsolescent. Instead, in concert with Congress, the Department of Defense kept research institutes, national laboratories, and defense contractors busy with requests for new and improved generations of weapons.

U.S. defense spending for most of the Cold War averaged about 7 percent of the Gross National Product (GNP), surging briefly during the administration of President John F. Kennedy to 10 percent. As a result, for more than forty years the armed forces exerted an unprecedented continuing influence on the American economy. Domestically, such massive defense spending, beginning in 1950, may have helped prevent a post–World War II depression like the one that followed the cancellation of war orders after World War I, but such continued “military Keynesianism” skewed the operation of the market system in the allocation of human, financial, and material resources, a phenomenon William H. McNeill (The Pursuit of Power, 1982) linked to a “command economy” in which the state drives the economy through the development and production of the technology of war.

Such unprecedented defense spending, particularly on the development and acquisition of weaponry, was eventually challenged. Criticism and protest against certain weapons systems were hardly new. Theodore Roosevelt's battleship building program had been curtailed by public and congressional outcries against its cost. Immediately after World War I, big business joined the peace movement in stopping a second naval arms race. Development of chemical weapons was restrained in the 1920s by public outrage on moral grounds as well as by protests from old‐line army leaders on the basis of tradition and ineffectiveness. In the 1950s, nuclear protest movements lobbied for restriction or elimination of nuclear weapons on various grounds: moral, health, ecological, and humanitarian. Such protests helped produce in 1963 an end to the testing of nuclear weapons above ground (with its airborne radioactive fallout). The SALT Treaties (1972, 1979) and the START negotiations reversed the nuclear arms race even before the end of the Cold War in 1991 (indeed, the end of the Cold War has paradoxically made it difficult to complete START). The general downturn in arms expenditures, both in the United States and the world at large, began in 1987, before the final collapse of the Soviet Union and the official end of the Cold War.

The Vietnam War (1965–73) divided Americans and raised questions about the failure of the U.S. military. Americans' belief in technological progress was also challenged by a series of setbacks, including problems with nuclear energy plants and the space program, as well as increased concerns about environmental and health damage from new technologies and their products. These contributed to some skepticism about the inevitability of technological progress and to a belief that politics, markets, and organizational structures could also condition outcomes, implying that some aspects of technological development can be controlled by political and economic decisions.

Militarily, in the 1960s and 1970s, rapidly rising prices and the Warsaw Pact's clear numerical superiority in conventional forces in Europe ignited major debates in the United States over the armed forces, their force structure, strategy, and weaponry. These debates involved issues of military effectiveness as well as of civilian contractors' cost overruns, waste, fraud, and abuse, revealed in congressional and journalistic investigations.

A military reform movement, originating in a controversy over a new fighter plane for the air force, began a debate that spread through Congress and each of the services, prompting a searching examination of the Cold War focus on new, larger, more sophisticated, and more expensive weaponry. It raised the possibility of less expensive yet adequate alternatives: many small aircraft carriers instead of a few supercarriers, for example, or a single type of fighter aircraft that could be used, with modifications, by the air force, navy, and marines. The reformers liked to point out that cutting‐edge technology was not always the most appropriate, nor always decisive or even victorious in war, as evidenced arguably by the failure of the Germans in Russia in World War II, the French and Americans in Vietnam, and the Russians in Afghanistan.

Beginning in 1979, Soviet actions and resurgent anti‐communism in the United States led President Jimmy Carter reluctantly and President Ronald Reagan enthusiastically to increase U.S. defense spending dramatically. The Reagan administration achieved the largest peacetime military buildup in U.S. history (approximately $2.4 trillion spent overall in 1981–89). The focus was on weapons, and each military service obtained long‐delayed and often controversial weapons systems, including the B‐1 bomber, the MX intercontinental ballistic missile (ICBM), new vehicles and helicopters, the Trident II submarine‐launched ballistic missile (SLBM), and many new warships to build toward the goal of a 600‐ship navy.

The escalating arms race and the bellicosity of the Reagan administration triggered considerable opposition. The largest protest demonstrations since the Vietnam War failed to prevent the deployment of new, nuclear‐tipped, intermediate‐range ballistic missiles in Europe. But dissent within the scientific community and skepticism in the media limited research on President Reagan's proposed missile defense project, the Strategic Defense Initiative (SDI), known as “Star Wars” after a popular science fiction movie of the time. Debate continues over the reasons for the collapse of the Soviet Union in 1989–91. Some link it to economic pressures resulting from the arms race resumed by the United States a decade earlier; others attribute the failure to accumulating systemic problems in Russia and its empire.

In the U.S. armed forces, the reform plans of the 1970s and the buildup of the 1980s produced American forces in Europe that had shifted from a strategy emphasizing overwhelming firepower, including nuclear weapons, to the “AirLand Battle” doctrine, which focused on more effective use of conventional air and ground forces to outmaneuver and defeat the more numerous forces of the Warsaw Pact. Modified for different conditions, the concept and weapons were used successfully in the Persian Gulf War in 1991. Its aircraft and precision‐guided munitions were employed again in the Kosovo Crisis of 1999.

The end of the Cold War in 1991 did result in cutbacks in defense spending, even if not as much as many had expected. Although some defense contractors went out of business, merged, or shifted to other production, a military‐industrial complex, decidedly smaller, continued to exist. The American market had shrunk. U.S. defense spending in 1995 was down to 4.3 percent of Gross National Product. Defense contracting still remained lucrative for some, however. At the beginning of 2000, Lockheed‐Martin and Boeing were competing against each other for the largest military contract in history, nearly one‐third of a trillion dollars, to design a Joint Strike Fighter plane, capable with modifications of serving the needs of the air force, navy, and marines, and to build 5,000 of them, replacing most of the existing fighter planes (not the F‐15s or F‐18s, however) in the U.S. armed forces.

American defense contractors also turned again to foreign markets. There, limited only by certain legal constraints designed to keep the most sensitive military secrets secure from potential enemies (a continuing challenge), they competed with other arms makers. In the international arms marketplace, the new weaponry was often valued as much for the prestige that such weapons (the latest, most sophisticated fighter planes, for example) seemed to provide for a nation, its government, and its armed forces as for their contribution to that nation's security.

In the U.S. experience, as Alex Roland suggested (Journal of Military History, 1991), the development of military technology in relation to strategy and to ground warfare, for example, has been shaped in part by fundamental American views and practices as well as by the technology itself. The value put on the individual human life and labor of U.S. citizens, a concept rooted in early labor scarcity and reinforced by American democracy, has contributed to an emphasis on citizen‐soldiers and on trying to protect them against usually greater enemy numbers through superior technology, especially weapons of greater firepower and accuracy. Additionally, a fear of standing armies and an insistence on civilian control of the military, a reaction to British policies, contributed, directly through the Constitution's two‐year limit on military appropriations, to inhibiting long‐term development projects for the army. The navy and the air force are by definition technology‐dependent services and have by necessity required long‐term development of their ships, planes, and missiles.

For most of the nineteenth century and even the early twentieth century, the United States enjoyed freedom from threats of sudden attack by a foreign foe. This allowed the nation to be generally free from the need to prepare massive ground forces, or to some extent even major naval forces, in advance of war. In concert with foreign policies of neutrality and isolationism, the majority of Americans came to view this situation of comparatively free security as a natural condition for the United States. With the exception of certain expansionist‐minded industrialists and navalists at the turn of the century, no influential group saw the need for, or the desirability of, keeping large and expensive stocks of the latest weapons on hand. To convince Americans to build one of the largest navies in the world at the turn of the century, navalists like Theodore Roosevelt had to link the gleaming battleships and armored cruisers of the “Great White Fleet” with the prestige of the world's newest and most powerful industrialized nation.

The era of comparatively free security was suspended with the Japanese attack on Pearl Harbor and U.S. entry into World War II, and it certainly stopped for nearly half a century during the Cold War. The commitment to containing the threat from the Soviet Union and communism meant the development of a sustained, enormous market for weaponry, which was supplied by American defense contractors.

American decisions in the Cold War to push for the most advanced technologies and to build big, sophisticated, expensive weapons rather than larger numbers of smaller, less complex ones, even if that meant fewer weapons overall, involved many factors: military, economic, political, and also cultural. Such decisions, like those at the turn of the century to build more battleships and fewer smaller warships such as submarines and destroyers, can also reflect images of national identity. As the “Great White Fleet” was said to represent America's emergent status as a “world power,” so the giant bomber aircraft and supercarriers of the Cold War reinforced its image as the leader and protector of the “free world.” Even after the end of the Cold War, as economic competition surpassed military conflict as the primary continuing concern of industrialized nations, the image of America's most sophisticated weaponry (the stealth aircraft and precision‐guided munitions most prominent at the end of the twentieth century) remained linked in many minds to the prestige of the United States.

Yet for purposes of self‐image as well as self‐interest, Americans have sometimes sought to limit the development of certain weapons. The United States, for example, curtailed battleship development in the Washington Naval Arms Limitation Treaty of 1922. It restricted aspects of the development of nuclear weapons in the Limited Test Ban Treaty of 1963 and the Comprehensive Test Ban Treaty signed in 1996 (although still not ratified in the summer of 1999). There was also a major international movement to ban the use of land mines, but because of their use to defend South Korea and the U.S. naval base at Guantanamo, Cuba, the U.S. government had not yet joined the international agreement to prohibit land mines as the century ended. Some attempts were made to limit weaponry in outer space, but such technology has grown dramatically since the late 1950s, particularly the increasing use of military satellites in earth orbit. The development of weapons systems for attacking satellites and proposals for ballistic missile defense systems such as SDI have extended the dangers of warfare to outer space.

At the dawn of the twenty‐first century, the future directions of weaponry and warfare are unclear in the post–Cold War world, with its military missions of preparing for regional and littoral conflict, anti‐terrorism, and peacekeeping operations. But requests from the U.S. military for satellite global positioning systems, microcomputers, superconductors, fiber optics, and biotechnical materials suggest that the cyber revolution has also created new forms of vulnerability, for example, in the electronic networks upon which postmodern societies and their militaries depend. Such dual‐use technology also suggests the degree to which the American economic and technological infrastructure has come to be seen as a backbone of national security. Whatever the weaponry of the future, decisions about its development or nondevelopment will be shaped by technological innovation and by cultural attitudes and political, economic, and military institutions, as well as by dominant perceptions of the international situation.
[See also Arms Control and Disarmament; Arms Race; Civil‐Military Relations: Civilian Control of the Military; Disciplinary Views of War: History of Science and Technology; Economy and War; Industry and War; Military‐Industrial Complex; Nuclear Weapons; Procurement; Public Financing and Budgeting for War; Science, Technology, War and the Military; Space Program, Military Involvement in the; War: American Way of War.]

Bibliography

Walter Millis, Arms and Men, 1956;
Arthur A. Ekirch, The Civilian and the Military: A History of the American Antimilitarist Tradition, 1956;
Ralph Lapp, Arms Beyond Doubt: The Tyranny of Weapons Technology, 1970;
Merritt Roe Smith, Harpers Ferry Armory and the New Technology: The Challenge of Change, 1977;
Alex Roland, Underwater Warfare in the Age of Sail, 1978;
Trevor N. Dupuy, The Evolution of Weapons and Warfare, 1980;
William H. McNeill, The Pursuit of Power: Technology, Armed Force, and Society since A.D. 1000, 1982;
Martin van Creveld, Technology and War: From 2000 B.C. to the Present, 1989;
Robert L. O'Connell, Of Arms and Men: A History of War, Weapons, and Aggression, 1989;
Thomas L. McNaugher, New Weapons, Old Politics: America's Military Procurement Muddle, 1989;
Donald MacKenzie, Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance, 1990;
Alex Roland, Technology, Ground Warfare, and Strategy: The Paradox of American Experience, Journal of Military History (October 1991): 447–468;
Bhupendra Jasani, ed., Outer Space: A Source of Conflict or Co‐Operation?, 1992;
James G. Burton, The Pentagon Wars: Reformers Challenge the Old Guard, 1993;
Merritt Roe Smith and Leo Marx, eds., Does Technology Drive History? The Dilemma of Technological Determinism, 1994;
Michael S. Sherry, In the Shadow of War: The United States Since the 1930s, 1995;
Paul A. C. Koistinen, Beating Plowshares into Swords: The Political Economy of American Warfare, 1606–1865, 1996;
Paul A. C. Koistinen, Mobilizing for Modern War: The Political Economy of American Warfare, 1865–1919, 1997; and John Whiteclay Chambers II, The American Debate over Modern War, 1871–1914, in Manfred F. Boemeke, Roger Chickering, and Stig Foerster, eds., Anticipating Total War: The German and American Experience, 1871–1914, 1999.

John Whiteclay Chambers II