humans in research
Experiments in man, and the concerns that relate to them, are largely phenomena which began in the twentieth century. To be sure, it could be argued that over the centuries any deviation from the standard treatment — an alteration in drug dosage, say, or a change in operative procedure — has always been an experiment. Several classic human studies were also carried out before 1900. Well-known examples include Sanctorius's (1561–1636) work on insensible losses from his own body, which started a whole line of self-experimentation continued into the twentieth century by J. S. Haldane's work on decompression and Werner Forssmann's on cardiac catheterization; James Lind's (1716–94) on preventing scurvy in seamen on long voyages; and William Beaumont's (1785–1853) on digestion in the stomach, observed through a fistula in a patient with a gunshot wound.
However, experiment here is used in the sense of research with an element of the unknown, undertaken to investigate a hypothesis completely anew or to confirm preliminary findings, and several events illustrate the heavy emphasis on man as the subject of large-scale research since 1900. In the US (where the bulk of this research has been done) the Rockefeller Institute for Medical Research and the Carnegie Institution of Washington were both founded in 1902; the National Institute of Health was established through the transformation of the Hygienic Laboratory in 1930; and the Committee on Medical Research arose as a subdivision of the National Defense Research Committee set up by President Franklin D. Roosevelt in 1940. Though the initial emphasis of all of these may not have been on clinical research, within a few years it was clearly a priority (as shown, for example, by the hospital attached to the Rockefeller Institute, which opened within six years of the Institute itself).
Evolution outside the US
A similar evolution occurred in other Western countries. In Britain (probably the second country in terms of recent output), the Medical Research Committee, set up in 1913 largely to study tuberculosis, evolved in 1920 into the Medical Research Council, with a much broader remit. In 1935 the Postgraduate Medical School (with its heavy emphasis on research) was established at Hammersmith Hospital in London; and throughout the 1920s and 30s Sir Thomas Lewis, professor of medicine at University College Hospital, was popularizing his phrase ‘clinical research’. The recent extent of this research is staggering, as can be illustrated for just the US and the National Institutes of Health (as it became in 1948). For 1946–94 the statistics show 109 474 principal investigators, 275 195 competing awards worth $26.3 billion, and 786 444 awards of all types, worth $121.7 billion. Much of this was devoted to research in which human studies were involved.

From the beginning of the twentieth century, then, the explosive and sustained growth of such research has been underlined by the development of other new institutions and the expansion of existing departments to establish new programmes. Moreover, scientists recognized that the enterprise would be incomplete if they could not disseminate their results and discuss them with their peers. Hence not only did the existing societies expand their programmes, but new ones were formed as well — the path-breaking and broadly based American Society of Clinical Investigation in 1907, and the British Medical Research Society in 1930.
Importance of publication
Publication in journals played an equally important part. Throughout this period there were opportunities for this in the general science journals, such as Science and Nature. Over time the general medical journals, such as the Journal of the American Medical Association and the Lancet, also started to publish many more research-oriented articles, and new specialist journals were founded to accommodate the more recondite ones. The last were often associated with specialist societies — for example, the Journal of Clinical Investigation (founded in 1924) and Clinical Science (previously called Heart) were linked with the American Society for Clinical Investigation and the Medical Research Society, respectively. In addition, the specialty journals — such as those devoted to paediatrics, chest disease, or neurology — began to shift the emphasis of their contents from clinical case descriptions to reports of research based on human experiment.

Why did such an emphasis on experimentation in man occur at the turn of the century? The true medico-scientific revolution began largely in the nineteenth century, and was based mainly on studies either in vitro or in animals in the laboratory. Nevertheless, some studies, such as those on sensation, clearly could not be done in any other way than on human subjects, while in Britain especially there was a growing opposition to vivisection, culminating in the Cruelty to Animals Act of 1876. The development of safer investigative techniques, such as anaesthesia and antisepsis in surgery, must have been an additional stimulus to human research, aided finally by the tendency by the 1890s for most of the principal academic medical centres in France and Germany to have their own research institutes. Touring Europe to study medical education at the beginning of the twentieth century, Simon and Abraham Flexner were to emphasize the importance of these institutes in their report, which was seminal in reforming medical education, principally in the US but also in Britain (particularly after William Osler had become Regius Professor of Medicine at Oxford in 1905). Thus, certainly by the end of World War I, several medical schools in the US had professorial clinical departments in which research was a key element, and, albeit haltingly, in the 1920s Britain followed suit.
Finally, three further stimuli to the expansion of clinical research should be mentioned, in the early, middle, and latter parts of the century. In 1901, under the will of the Swedish armaments manufacturer Alfred Nobel, the Nobel Prizes for physiology or medicine were inaugurated, emphasizing the societal value of research. To take just the first three of these, given in 1901–3, is to show how ‘respectable’ research targeted to the needs of patients had become: respectively, the prizes were awarded to Emil von Behring (diphtheria antiserum), Ronald Ross (mosquitoes and malaria), and Niels Finsen (ultraviolet light in lupus vulgaris). The second impetus came from the specific needs raised in both World Wars — to prevent tetanus in the troops fighting in the manure-contaminated trenches of Flanders, for example, or to find an effective substitute for controlling malaria when the Japanese captured the sources of quinine in 1941. After both World Wars techniques in medical research were to benefit from spin-offs in technological advances elsewhere (particularly instrumentation — a feature that continues to the present day, as, for example, the miniaturization of devices suitable for inserting into body cavities based on developments initially introduced in the space programme). Thirdly, starting in the late 1920s and early 1930s, statistics were introduced into research. Just after World War II this was to culminate in the randomized controlled trial, turning the screw of scientific rigour through a whole revolution.
Ethical aspects
Much space elsewhere in this Companion is devoted to descriptions of some of the important research studies in man and the researchers who conducted them. Hence for the remainder of this article I will concentrate on an issue that had always been in the background, but grew gradually from the beginning of the twentieth century until World War II, when it assumed the prominence it has never since lost — namely, the ethical aspects of human experimentation. Concerns over risks to the subjects of research which offered no direct benefit to them came into prominence at the turn of the century, with newspaper comment over Walter Reed's work on yellow fever, in which volunteers were paid $100 in gold to be bitten by the postulated vector, the mosquito, and another $100 if they developed the disease (given to the widow in case of death). In particular, the debate centred on issues of how truly informed consent could be obtained.

Thereafter, until the 1950s, the concern followed a sine wave course, tending to be neutralized by other events, and sometimes by action — which in the event was to prove ineffectual. Thus for yellow fever the research came to a quick close with the establishment of the postulated mechanism and the tragic death of one of the principal investigators. After this we can see five major episodes that provoked public scrutiny. In the first decade of the twentieth century there was a debate on the rights of inpatients in public hospitals (particularly paupers and children), with reports of procedures, such as lumbar puncture and radiological examinations, undertaken purely for research without any consent. Hospital physicians argued that admission to public hospitals automatically gave them the right to carry out whatever procedures they wished without consultation.
Informed consent
Public concern provoked sufficient anxiety among research workers that the American Medical Association established a special committee on the protection of medical research, with Walter Cannon as its chairman. A few years later, in 1914, this issued a statement to the editors of medical journals, urging them to check that informed consent was specifically mentioned in any paper accepted for publication that involved human experimentation.

Despite this, the second crisis occurred in 1916, with a published report on the transmission of syphilis from patients with general paresis into animals. Under local anaesthesia, burr holes had been made in the skulls of six living patients, and material was aspirated from their brains and then injected into rabbits. The animals developed a syphilitic infection, thus showing that the human infection was still active even at a very late stage of the illness. The professional and public rumpus over the lack of any informed consent in this work died down probably only because it took place at the time when the US declared war on the Central Powers.
The third episode happened in Germany in the late 1920s with the Lübeck disaster. A preparation of bacille Calmette–Guérin (BCG) vaccine, used to immunize against tuberculosis, was contaminated with virulent bacilli, and dozens of the children vaccinated died. The government responded by issuing official guidelines in 1931 on human research, which emphasized, among other things, the importance of informed consent. Despite such guidelines, however, the fourth event provoking public concern also occurred in Germany: during World War II the Nazis conducted egregiously unethical research on Jewish and other prisoners in concentration camps — with (often lethal) experiments on hypothermia, explosive decompression, and iatrogenic infections.
Soon after the war the judgment in the Nuremberg trial of the Nazi doctors set out the Nuremberg Code, reiterating the minimum conditions needed to make human clinical research acceptable. Nevertheless, ten years later an American anaesthetist, Henry K. Beecher, and an English physician, Maurice Pappworth, produced books and journal articles showing how widely, and dangerously, its provisions were being ignored in practice. This time, Western society was not prepared to leave matters alone, and in 1964 the World Medical Association, guided by Hugh Clegg (editor of the British Medical Journal) and Tapani Kosonen (Secretary General of the Finnish Medical Association), produced the Declaration of Helsinki. Its provisions are now widely observed in research laboratories and clinics all over the world, with the establishment of protocols, approval by research ethics committees, and provision for compensation for any subject still unfortunate enough to be injured despite stringent safeguards.
Thus, although long ago enlightened physician–scientists had recognized the need for informed consent (William Beaumont, for example, putting such a clause in his contract with Alexis St Martin, his patient with the stomach fistula), it took over another 100 years before subjects achieved parity with laboratory workers in what should have been an equal partnership from the beginning.
Stephen Lock