
MEDICAL RESEARCH

MEDICAL RESEARCH in the United States has been shaped both by research standards from overseas and by American social, economic, and political concerns. In the eighteenth century American medicine inherited two traditions: an ancient one of clinical observation, and a more recent one, associated with experimental science, in which conditions were modified and observations made in order to gain insight into possible causes. Most medical research was supported and undertaken by individuals. In seventeenth-century London and Paris the profession saw the introduction of scientific organizations for the presentation and discussion of research, and of journals for the publication of results. In America, Benjamin Franklin was a leader in founding the American Philosophical Society (1743), which, at least in part, served a similar purpose.

In 1818 the Army Medical Department was established along with the permanent position of Surgeon General of the Army. Joseph Lovell, the first to occupy the position, believed that physicians should increase medical knowledge. He required regular weather and disease incidence reports from all army physicians in an effort to correlate disease with climate, an ancient epidemiological observation. In the 1820s he encouraged the investigations of Dr. William Beaumont, who was studying the process of digestion by direct observation through a patient's fistula. Beaumont's book, Experiments and Observations on the Gastric Juice and the Physiology of Digestion (1833), is a classic of physiology and was the first American contribution to basic medical research to be seriously noted by the European medical establishment.

During the antebellum period there were several abortive attempts to organize systematic medical research societies on the East Coast. Some physicians accumulated large pathology collections, which became research and teaching tools, but most did not have a sustained interest in research; it was something done while awaiting the development of a practice. Only the Army had an institutional interest, which was given form during the Civil War with the creation of the Army Medical Museum (later the Armed Forces Institute of Pathology).

After the war, cities and states developed public health agencies, many of which funded laboratories where research was undertaken. Hospitals and reforming university medical schools also capitalized laboratories, some of which were used episodically for research in addition to their primary purposes of patient care, diagnostic service, and teaching. The single most important change was the conversion of the antebellum college into a research university. Although there were precursors, Johns Hopkins University in Baltimore is considered the first true research university in America because it was the first to adopt the German university ideal of creating new knowledge in addition to teaching what was known. By the end of the century there were research facilities in many American cities.

The changes in medical science at the end of the nineteenth century introduced new ambiguities into the medical research process, and scientists looked for ways to limit researcher bias. The idea of comparison and control was venerable, but it was applied erratically. In medical bacteriology Robert Koch provided what many considered the "gold standard" of animal modeling, but the problem of finding animal models that faithfully reproduced human disease was significant. Organizations for the discussion of medical research and refereed journals for publication became increasingly important to acceptance of the work.

Research scientists needed places to work, and philanthropists established them. The Rockefeller Institute for Medical Research (1901) was probably the most influential of a variety of similar private medical research institutes around the country. By 1930 there were over thirty foundations supporting medical research in one way or another.

A secondary source of research support was special-interest funding. The largest source of such funding was industry. Several corporations had internal laboratories by the 1920s, and drug companies also made grants to, and contracts with, university-based researchers. This gave some physicians pause because of the presumed ethical conflicts generated by corporate research. There was considerable division of opinion on the ethics of medical patents: some schools refused to permit patenting of research done in their laboratories. The American Medical Association was opposed to pharmaceutical support of therapeutic research, believing such work inherently tainted.

Therapeutics quickly emerged as the most confusing area for the establishment of research standards. A medieval French saying highlights the task of the medical profession and captures the essence of the therapeutic research problem: Guérir quelquefois, soulager souvent, consoler toujours ("To cure sometimes, to relieve often, to console always"). If the natural history of a disease was agreed upon and a therapy interrupted it, the patient might be judged cured, but the natural history of many diseases was difficult to agree upon. Relief was an even more difficult concept, confounded by the placebo effect: simply going to the doctor often made a patient feel better. The recognition that innovators sometimes interpreted their results too optimistically led to a quest for repeatable results, but bias of various sorts, as well as confounders such as spontaneous remission, might distort the reports of individual workers.

Confusion in therapeutic research led early-twentieth-century researchers into a variety of areas where their successors would wish they had not gone. The promise of pardons led some prisoners to agree to serve as research subjects, and orphans and the insane were also exploited; all were populations that provided large groups of easy-to-study patients who would later be found incapable of giving truly informed consent. After reports of Nazi atrocities committed in the name of "medical research" appeared in the late 1940s, Henry Beecher of Harvard University led a campaign in the 1950s and 1960s to increase sensitivity to the use of human subjects, and standards of conduct began to be made part of all research protocols.

The role of science and technology during World War II led President Roosevelt to charter a study of the appropriate role of the federal government in scientific research in the coming postwar era. The U.S. Public Health Service, like other large public health agencies, had done some research since the nineteenth century. Its National Institutes of Health organized the Division of Research Grants (later renamed the Center for Scientific Review) to send federal research dollars to university medical research programs.

Austin Bradford Hill, a British biostatistician, urged a new approach to evaluating therapy: the randomized, double-blind clinical trial. Patients were randomly assigned to new-therapy and control groups without knowing to which group they belonged; this was the first blind. Observer bias was prevented by also keeping the attending physician ignorant of the patient's group; this was the second blind. Outcomes were recorded against coded labels, and only at the end of the trial were the codes read and the members of each group identified. Other methodological studies found a variety of biases in the collection and interpretation of data. By the 1990s these techniques were brought together under the rubric of "evidence-based medicine."
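The logic of the two blinds can be sketched as a short simulation. This is purely illustrative: the patient identifiers, coded labels, and helper function are hypothetical, not drawn from any historical trial.

```python
import random

def run_double_blind_trial(patient_ids, seed=0):
    """Illustrative sketch of the two 'blinds' in a randomized trial.

    Each patient is randomly assigned an opaque code; neither the patient
    (first blind) nor the evaluating physician (second blind) sees the
    treatment behind the code until the trial ends and the key is read.
    """
    rng = random.Random(seed)  # fixed seed only so the sketch is repeatable
    # Random assignment to coded groups: patients learn only their code.
    code_book = {pid: rng.choice(["A", "B"]) for pid in patient_ids}
    # Physicians record outcomes against the code alone.
    blinded_records = [(pid, code_book[pid]) for pid in patient_ids]
    # Unblinding: only after all outcomes are recorded is the key revealed.
    key = {"A": "new therapy", "B": "control"}
    return [(pid, key[code]) for pid, code in blinded_records]

groups = run_double_blind_trial(["p1", "p2", "p3", "p4"])
```

The essential design point is that the mapping from code to treatment lives only in the sealed `key`, so neither party's expectations can influence assignment or evaluation before unblinding.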

The traditional research approaches continued to pay dividends. Especially important was the progress in immunology. One of the greatest successes was the development of the polio vaccine in the 1950s, which was supported in large part by the National Foundation for Infantile Paralysis, a single-disease foundation. Providing seed funds and other grants, the special-interest foundations played an increasing role in medical research. Even more important, the special-interest foundations served as interest groups encouraging public research funding and charitable donations to combat specific problems.

From a largely derivative, independent activity of interested amateurs, medical research in the United States has become a multibillion-dollar professional activity, attracting worldwide attention and raising significant social issues. With public funding has come political and social issues ranging from the ethics of human experimentation, animal care and use, and fetal tissue research to fears of negative eugenics raised by the possibility of genomic modifications. Despite the difficulties, its promise of human relief supports the continued growth and development of medical research.

BIBLIOGRAPHY

Harvey, A. McGehee. Science at the Bedside: Clinical Research in American Medicine 1905–1945. Baltimore: Johns Hopkins University Press, 1981.

Lederer, Susan E. Subjected to Science: Human Experimentation in America before the Second World War. Baltimore: Johns Hopkins University Press, 1995.

Marks, Harry M. The Progress of Experiment: Science and Therapeutic Reform in the United States, 1900–1990. Cambridge: Cambridge University Press, 1997.

Shryock, Richard Harrison. American Medical Research, Past and Present. New York: Arno, 1980.

Strickland, Stephen P. Politics, Science, and Dread Disease: A Short History of United States Medical Research Policy. Cambridge, Mass.: Harvard University Press, 1972.

Dale C. Smith

See also Human Genome Project; Laboratories; Microbiology; Molecular Biology.

"Medical Research." Dictionary of American History. Encyclopedia.com. 21 Oct. 2017 <http://www.encyclopedia.com>.


CLINICAL RESEARCH

CLINICAL RESEARCH, the controlled use of humans in medical experiments, dates from the Greek physician Galen (c. 129–199), the founder of experimental medicine. Clinical research in the United States, however, rose in importance in the late nineteenth century following European advances in medical research. In 1884, disease investigators in the United States formed the American Clinical and Climatological Association, and in 1909, medical experimenters established the American Society of Clinical Investigation, which promoted correlation of clinical research with medical practice. For much of the twentieth century clinical researchers investigated the safety and effectiveness of diagnosis, prevention, and treatment of human diseases and disorders. Usually, but not always, basic laboratory research and animal experimentation preceded human testing.

In the twentieth century, clinical research increased with the expansion of military medical research, the growth of academic medical science, the rise of pharmaceutical companies, and the establishment of private research clinics. As a result of the 1906 Pure Food and Drug Act and its subsequent amendments, new drugs underwent clinical testing prior to widespread use by physicians. The importance of clinical research grew significantly after the establishment of the National Institute of Health (NIH) in 1930 and NIH expansion to multiple institutes after World War II. By then, the Public Health Service, which included the Food and Drug Administration and the NIH, was probably the most generous supporter of clinical research in the country. In 1953, the NIH opened the largest clinical research center in the nation in Bethesda, Maryland.

Following World War II, the Nuremberg Code, drawn up after revelations of brutal experiments on humans by the Nazis, exerted significant influence on clinical researchers in the United States. The code limited the degree of risk in clinical research to a level that would not exceed the humanitarian importance of the problem the experiment was meant to solve. Subsequently, many institutions used the code as ethical guidance. In 1962, Congress enacted the first federal law regulating human medical experimentation. After learning that thalidomide had caused the birth of deformed babies in Europe, legislators amended the Federal Food, Drug, and Cosmetic Act to require that patients be informed when they were being given experimental drugs not fully licensed by the federal government.

Later in the century, clinical research came under fire because of revelations about the federal government's neglect of women, minorities, and the elderly in clinical trials; radiation experiments on humans, especially those lacking informed consent; and charges of fraud. Congress responded in 1993 by passing the NIH Revitalization Act to correct the underrepresentation of women and minorities in clinical research. Subsequently the NIH launched the largest clinical health trial in the history of the United States by selecting 63,000 women for a nine-year trial to determine the effects of certain regimens on preventing cancers, osteoporosis, and coronary heart disease. Congress and the executive branch also investigated charges of impropriety in conducting potentially harmful radiation exposure experiments on humans and allegations of fraud.

Despite the controversies, the nation has benefited from clinical research. Trials in the 1970s showed that lowering blood cholesterol diminished chances of heart disease in men. In the 1980s, clinical research saved patients with soft tissue sarcomas of the limbs from amputations by demonstrating the effectiveness of radiation therapy and chemotherapy combinations. And in the late 1980s, clinical tests indicated that azidothymidine (AZT), an antiviral drug, could slow the development of AIDS in some patients.

BIBLIOGRAPHY

Annas, George J., and Michael A. Grodin, eds. The Nazi Doctors and the Nuremberg Code: Human Rights in Human Experimentation. New York: Oxford University Press, 1995.

Grady, Christine. The Search for an AIDS Vaccine: Ethical Issues in the Development and Testing of a Preventive HIV Vaccine. Bloomington: Indiana University Press, 1995.

Marks, Harry M. The Progress of Experiment: Science and Therapeutic Reform in the United States, 1900–1990. New York: Cambridge University Press, 1997.

Mastroianni, Anna C., Ruth R. Faden, and Daniel Federman, eds. Women and Health Research: Ethical and Legal Issues of Including Women in Clinical Studies. 2 vols. Washington, D.C.: National Academy Press, 1994.

McNeill, Paul M. The Ethics and Politics of Human Experimentation. New York: Cambridge University Press, 1993.

Ruth Roy Harris / c. p.

See also Acquired Immune Deficiency Syndrome; Chemotherapy; Epidemics and Public Health; Medicine and Surgery; National Institutes of Health; Pharmaceutical Industry; Pure Food and Drug Movement.

"Clinical Research." Dictionary of American History. Encyclopedia.com. 21 Oct. 2017 <http://www.encyclopedia.com>.
