Death and the Dying Process

Few people want to talk or even think about death and dying. For most people, the mere thought of their own death causes great fear. Swiss physician Elisabeth Kübler-Ross, author of On Death and Dying, elaborates: “Death is still a fearful, frightening happening and the fear of death is a universal fear.”6 Death reminds us of our vulnerability and our mortality.

Most people, when they do think about death, believe it is something far removed and nearly inconceivable. Death is something that happens to other people, not to them. Yet “dying is one of the few events in life certain to occur,” observes Time journalist John Cloud, “and yet one we are not likely to plan for. We will spend more time getting ready for two weeks away from work than we will for our last two weeks on earth.”7

For many people, dying may involve the loss of faith, the loss of hope, and often a feeling of being abandoned by God. For those who believe in an afterlife or a heaven, however, death is also viewed as a transition from one state of human existence to another. Just as each individual has his or her own way of living, it is now believed that each also has his or her own way of dying.

Whatever one believes, however, death has always fascinated people. It has also been clouded by myth and by the portrayal of death in the media. Death has been depicted throughout history in images that range from merely going to sleep to a brutal and painful end. These various presentations, rather than calming fears, have served to further confuse and alarm people. Thus, author Burnell concludes, “To make death no longer a source of dread is one of the great challenges of the age.”8

Early Beliefs About Death and Dying

From the beginning of history, people have viewed death as inevitable and universal. Yet different cultures have reacted to death and dying in different ways. These viewpoints have been affected by religious beliefs, cultural traditions, and the availability of medical resources.

The ancient Greeks and Romans, for instance, greatly feared death. This fear is evident in their myths and in their beliefs in gods and goddesses who exacted punishment on humans for various sins. In many instances these gods were vengeful and caused great suffering among the human race.

During the Middle Ages, many people believed that suffering was always part of the dying process and, therefore, had to be borne silently and stoically. Many Christians believed that this prolonged suffering actually gave the dying a chance to put their affairs in order and make amends to people and the church for their various sins.

Most of the early indigenous peoples of North America, South America, Asia, Australia, and Europe did not fear death; they considered it a part of life. They believed that death was not the end of life but part of nature’s never-ending cycle. They also believed in an afterlife where people lived in peace and harmony much as they had during their years on earth. As a result, many older people chose death rather than burden their communities with such needs as daily care and feeding.

In certain Inuit (formerly called Eskimo) cultures, for instance, old or sick individuals could petition for death simply by telling their families that they were ready to die. These elders preferred to stay, or be left behind, on the ice to die naturally rather than burden their family and tribe. Writers James M. Hoefler and Brian E. Kamoie elaborate, “Eskimos believe that anyone dying in this way spends eternity in the highest of heavens.”9

Dying Is a Stage of Growth

“Much of our society’s crisis around death could be said to stem from a lack of awareness of the dying process as a stage of growth. Just as different steps must be mastered from childhood, adolescence, and adulthood, dying presents its own challenges.”

—Journalist Pythia Peay.

Pythia Peay, “A Good Death,” Common Boundary, September/October 1997, p. 34.

Elderly and ill Blackfoot of Canada and the Northern Plains also acted in this manner. When older people felt life coming to an end, they often disposed of all their property and belongings and then willingly left the village to find a quiet spot to die. Occasionally, in times of hardship or starvation, the elderly were simply abandoned by the tribe. There was no guilt or condemnation attached to this practice, and the people involved willingly stayed behind to lessen the impact on the remainder of the community.

A Good Death

These indigenous cultures would have described such deaths as “good ones.” Not only did the elderly elect to die in a manner of their own choosing, but their deaths were viewed by the entire community as both natural and meaningful. In modern society, most people yearn for a similar “good death.” Journalist Richard Venus elaborates, “A good death is one in which we are at peace, without a great loss of dignity, without so much pain that it becomes our sole focus; a death that does not add terrible anguish to those who love and care for us.”10

A good death, according to most public opinion polls, is a natural death, which excludes suicide and homicide. It is relatively painless, with the person dying surrounded and supported by loved ones. It also involves a person who has accepted death and is ready to die, having said goodbye to loved ones and made amends for any wrongdoing. Finally, a good death is one that happens at home.

Home deaths were common until the last half of the twentieth century. During the nineteenth century, for instance, hospitals were few and provided only basic food, shelter, and minimal nursing care. Surgery was primitive, consisting mainly of the amputation of limbs and a few other simple procedures. Few effective medical treatments were available for the sick. For the most part, hospitals were run by religious and charity groups.

These early hospitals were often dangerous places. Doctors knew little about the importance of sanitary conditions, and as a result infections spread rapidly in the hospital setting. Even basic cleanliness techniques were slow to develop. Hospitals, as a rule, were dirty, smelly places to be avoided. Thus, most people stayed home when they were ill or dying.

A Move Toward Hospital Care

Hospitals began improving toward the beginning of the twentieth century as more and more were built. In 1872, for instance, the United States had fewer than two hundred hospitals nationwide; by 1910 there were over four thousand. The discovery of radiation and the use of X-ray technology, better infection control, improvements in surgery, and better pain medications helped fuel this boom. More and more people began relying on hospital care for major illnesses and surgeries.

The 1960s saw the birth of specialization among physicians. Family doctors, while still taking care of patients’ basic needs, gave way to specialists in every imaginable field. A hospitalized patient in critical condition, for instance, might end up with a dozen different physicians caring for him or her. Patients benefited from having specialists deal with specific problems, but they also suffered. They began to feel that their doctors took less personal interest in them. Patients also felt bombarded with competing medical opinions, and in most cases they felt left out of the decision-making process.

Although most people still wanted to die at home, more and more deaths began to occur in hospital settings. By the beginning of the twenty-first century, nearly half of all Americans were dying not only in hospitals but in pain, surrounded and treated by strangers and separated from their families. John Farrell, chair of surgery at the University of Miami in the 1950s, explains:

In our pursuit of the scientific aspects of medicine, the art of medicine has sometimes unwittingly and unjustifiably suffered.… The death bed scenes I witness are not particularly dignified. The family is shoved out into the corridor by the physical presence of intravenous stands, suction machines, oxygen tanks, and tubes emanating from every natural and several surgically induced orifices. The last words . . . are lost behind an oxygen mask.11

Bioethics

Beginning in the late 1960s, as public awareness of patients’ rights increased, physicians and hospitals often found their decisions questioned and challenged. This, along with technological advances in medicine and a new definition of death, brought about the creation of bioethics committees in all major hospitals. Ethics is the study of moral principles; bioethics is the study of those principles as they relate to medicine.

Hospitals use bioethics committees to resolve major ethical questions, including those relating to the welfare of the patient, the withdrawal of life-sustaining treatment, and the settlement of disputes between patients and their physicians. The committees are also charged with educating hospital staff on such ethical questions. Today, when a conflict arises between a doctor and a patient over the course of treatment, the problem is referred to the hospital ethics committee.

In addition, death, once viewed as a natural part of life, has come to be seen as a failure by doctors and hospitals alike. Death has become an unacceptable outcome and often an embarrassment for modern medical practitioners. Hospitals may be the best place to receive advanced treatment for acute diseases, but many are ill-prepared to help or allow patients to die. “The modern hospital is the greatest enemy of meaningful death,”12 wrote John Carmody, a former priest, about his own father’s death in a hospital.

Charles F. McKhann, a specialist in cancer surgery, agrees. His interest in the subject grew out of his father’s death from cancer in a hospital. His father was kept alive for nearly three months, even though his prognosis was hopeless. Treatment included intravenous fluids, blood transfusions, and other aggressive measures. McKhann wrote: “An intelligent, competent man, himself a physician, hospitalized in a major medical center, had absolutely no control over stopping useless treatment. … It seems unfair that people who manage their own affairs successfully in life should be required to turn over so much of their death and dying to others.”13

McKhann’s experience has been shared by countless others. In the late twentieth century, for instance, it was estimated that Americans spent an average of eighty days in the hospital during the year they died. Americans are also spending more time in nursing homes and similar facilities.

Defining Death

Whether dying took place in a hospital or at home, for many decades death simply meant the cessation of breathing and heart function. With the advent of resuscitation methods and mechanical ventilators, knowing when death occurred became more difficult. These medical technologies could keep an unconscious person alive for an indefinite amount of time. The line between life and death began to blur.

The first major development that changed how people viewed death was the use of cardiopulmonary resuscitation. Physicians realized that if emergency breathing and other techniques were performed in the first few minutes after a patient collapsed from a cardiac arrest (usually caused by a heart attack), that patient had a good chance of recovering. Rescue breathing and artificial pumping of the heart could keep an individual alive until he or she reached the hospital, where more advanced techniques could improve the person’s chance of surviving.

A Time of Opportunity

“The experience of dying can . . . be understood as a part of full and healthy living, as a time of caring and as a time of remarkable opportunity for persons, families, and communities.” —Hospice physician Ira R. Byock.

Quoted in Robert F. Weir, Physician-Assisted Suicide. Bloomington: Indiana University Press, 1997, p. 132.

Then came even more advanced ways to resuscitate a patient. In the early 1960s Bernard Lown became one of the first physicians to use electric shock to restart a patient’s heart. Investigating the use of electrical currents, he invented a primitive defibrillator, a machine that delivered electric shock to the body in hopes of restarting the heart and correcting fatal heart rhythms. He also helped develop one of the United States’ first coronary intensive care units. By the mid-1960s mobile “crash carts” for use in cardiac arrest were common in intensive care units and hospitals throughout the country. Many patients who previously would have died were given a new chance at life by these machines and techniques.

In addition to resuscitation techniques and the defibrillator, several other devices were invented and refined that enabled patients to survive normally fatal events. The iron lung, a massive machine in widespread use during the polio epidemics of the 1950s, enabled paralyzed patients to continue breathing despite its unwieldy size and configuration. It led to the development of bedside ventilators that could breathe for the patient, enabling comatose patients who could not breathe on their own to survive for longer periods. Before this invention, many comatose or severely injured patients died of respiratory arrest, the inability to breathe on their own. Writers Hoefler and Kamoie summarize, “Advances in both diagnostic and rescue-medicine technology have helped to create a whole population of individuals who would have died quickly only a few decades ago.”14

Beyond resuscitation, other medical advances led to new treatments that also prolonged life. Patients with life-threatening kidney failure, for instance, lived longer through dialysis, a technique that removes toxins from the body by circulating the blood through filters. New drugs for diabetes, asthma, and infectious diseases improved survival rates for those illnesses. Finally, improved surgical techniques and better care of pregnant women and their newborns lowered death rates in those areas of medicine. Yet while all of these treatments prolonged life and reduced mortality, they did not affect the definition of death. Because these new technologies could keep a patient’s heart and respiratory system functioning indefinitely, postponing death itself, the old definition of death needed revising.

A New Definition—Brain Death

The need for a new definition of death became critical on December 3, 1967, when Christiaan Barnard performed the first successful heart transplant operation in South Africa. For the first time, a severely diseased heart was removed and replaced with a healthy heart from a recently deceased person. As transplant surgery became more successful and more widespread, the question arose as to when a dying person’s organs could be harvested, or removed for transplant. Organs had to be harvested quickly after death; mere seconds could make the difference between a viable organ and a useless one. Doctors realized that if a patient who showed all other signs of death was being kept alive artificially, organs could, with the family’s permission, be removed at the exact moment of death, while they were still healthy. Medicine and society needed a new definition of death so that viable organs from a dying patient could be used to heal another. Journalist Pythia Peay contends that transplantation blurred the definition of death: “As medicine has advanced to extend life despite grave illness, the boundaries between a natural death and an artificially sustained life have become blurred.”15

The new life-sustaining technologies made the traditional boundary between life and death murky. To clarify this boundary and to make organ transplantation more successful, the medical community moved toward formulating a new definition of death. The task fell to the Harvard Brain Death Committee, a group of physicians and ethicists who met and later published their conclusions in the Journal of the American Medical Association (JAMA).

Granting Control to the Dying

“Findings suggest that dying patients should be allowed as much control over their lives and routines, and life should, as far as possible, be consistent with the life they led before their illness; this applies especially in their relationships with important people in their lives and being allowed to spend as much time as possible in familiar and comfortable surroundings.” —Writers Joseph Braga and Laurie Braga.

Joseph Braga and Laurie Braga, Death: The Final Stage of Growth. Englewood Cliffs, NJ: Prentice Hall, 1975, p. 75.

In 1968, after extensive consultation, the committee issued its report: “Our primary purpose is to define irreversible coma as a new criterion for death.” Speaking for the committee, Henry Beecher elaborated that their purpose was also “to choose a level where, although the brain is dead, usefulness of other organs is still present.”16 Journalists Edd Doerr and M.L. Tina Stevens further explain that the committee wanted to construct “guidelines that left as little room as possible for a legal charge that they were removing organs from people who have even a modest chance of recovery.”17

The Harvard committee listed a number of symptoms or conditions that had to be present in order to classify a death as “brain death.” The primary criterion was a flat electroencephalogram (EEG), an electrical recording of brain waves and brain activity. A flat EEG meant that the brain was essentially nonfunctioning or dead. In addition, the patient had to be nonresponsive to any stimuli, including pain, and also lack any voluntary movement. Finally, brain death was apparent when the patient had no involuntary reflexes and the pupils of the eyes were dilated and fixed or nonresponsive to light.

The new definition was particularly useful for patients who were diagnosed as being in an irreversible coma. Under the old guidelines such patients had not been considered dead, because their breathing and heartbeat continued. Under the new criteria, life support could, in theory, be stopped for patients who met them.

The Fear

“There is a deep-seated fear of high-tech medicine in America, of being locked into machines and losing control of their own lives.” —Hemlock Society founder Derek Humphry.

Quoted in Donald W. Cox, Hemlock’s Cup: The Struggle for Death with Dignity. Buffalo, NY: Prometheus, 1993, p. 27.

Between 1970 and 1981, twenty-seven states adopted legislation allowing physicians to use the brain death criteria. These laws also permitted doctors to end life support in such cases without fear of prosecution. Since the 1980s the remaining states have adopted the brain death definition as well.

The Death Awareness Movement

Around the same time that the Harvard committee issued its new definition, another notable event directly shaped how both the medical profession and the public perceived death: the publication of Swiss physician Elisabeth Kübler-Ross’s best-selling book On Death and Dying in 1969.

Born in Zurich, Switzerland, Kübler-Ross began her work with the terminally ill at the University of Colorado. She later became a clinical professor of behavioral medicine and psychiatry at the University of Virginia. Focusing her studies on the terminally ill, she discovered that, contrary to prevailing medical opinion, these patients were eager to talk about their experiences.

Her interviews with the dying led her to put her thoughts into book form. Kübler-Ross called for “the rejection of a lonely, mechanical, dehumanized environment for the dying.”18 She concluded that the dying were often isolated in sterile hospital environments, separated by technology from their loved ones, and left to suffer in pain before their deaths. In addition, she found that few physicians actually consulted their terminal patients about the course of treatment. She wrote, “Dying nowadays is more gruesome in many ways, namely, more lonely, mechanical, and dehumanized.”19 She suggested that because so many people feared and ignored death, the dying themselves were often feared and ignored.

The publication of this book is often credited with the birth of the “death awareness” movement. As a result of her book, increasing numbers of physicians, nurses, psychologists, and social workers began to study the concepts of death and dying, and in particular the care of the terminally ill. According to Stephen Connor, vice president of the National Hospice and Palliative Care Organization, Kübler-Ross “brought the taboo notion of death and dying into the public consciousness.”20

Stages of Grief

Elisabeth Kübler-Ross revolutionized the way the public looked at the dying process. From her interviews with the terminally ill, she identified five stages that all dying people progress through. The first stage is denial. Most patients react with shock when they receive a diagnosis of a terminal disease, often saying, “No, not me, it can’t be true.” Then comes the stage of anger. This anger is directed at the disease, at the doctors who diagnosed it, and often at God. Why me? is a question commonly asked at this stage of the grief process.

Anger is followed by bargaining. Many patients silently try to bargain with God, promising to do this or that if only the diagnosis could change. This usually brief phase is followed by depression as the truth of the matter settles in. Patients often feel a sense of hopelessness. They feel burdened with financial concerns and other unfinished business. In many cases depression deepens as the fatal disease causes more and more weakness and loss of function.

The final stage is acceptance. For many, acceptance comes late in the disease process, often in the days leading up to death. M. Christina Puchalski writes, “The dying need an opportunity to bring closure to their lives by forgiving those they had conflicts with, making peace with themselves and God, and saying goodbye to friends and family.” Even at this stage, many patients still hope for a miracle in the form of a new drug or a new treatment. They are trying to maintain a small thread of hope that will keep them going through the worst of the disease.

Quoted in James Haley, ed., Death and Dying: Opposing Viewpoints. Farmington Hills, MI: Greenhaven, 2003, p. 73.

Death as a Public Issue

Partially as a result of Kübler-Ross’s work, in the late 1960s and early 1970s, death began to be widely discussed not only by the medical profession, but by the general public. There was an abundance of television shows, books, and movies about death and the dying process. From 1968 to 1973, for instance, the number of articles about death in mainstream American magazines doubled and then redoubled. Over twelve hundred new books on death and dying were published during that same five-year period. In addition, people flocked to weekend seminars focusing on these issues. Americans became nearly obsessed with death and dying.

This obsession arose from several social factors in addition to Kübler-Ross’s work. The 1960s, for instance, was a period of great unrest in the United States as the civil rights movement impacted the lives of Americans throughout the country. As blacks fought to gain equal rights, the consciousness of many Americans was awakened to the injustices of daily life. This led to a reexamination of individual rights in America and spawned the women’s rights movement. With the emphasis on individual rights came a phenomenal growth in self-help groups and the use of psychotherapy. From there, interest naturally grew in each person’s right to a death with dignity. This led to the creation of patients’ rights groups intent on protecting each patient’s right to good medical care and informed decision making. Thus, Americans began to demand more control over their medical treatment.

In 1973, due to pressure from many special interest groups, the American Hospital Association adopted the Patient’s Bill of Rights. Medical historian Peter G. Filene elaborates, “First among the twelve points was the right to considerate and respectful care.”21 Part of this document, supported by the American Medical Association, allowed a patient or the patient’s family to withdraw any extraordinary treatment.

Prior to the publication of this document, most patients never questioned their physicians. Relying on the doctor’s expertise, they accepted whatever treatment the physician suggested. As a result of the Patient’s Bill of Rights, Hoefler and Kamoie suggest, “power has begun to slip away from physicians, leaving patients in a stronger position to control their own medical destinies.”22

In the last fifty years, death, along with medical treatment, has become a far more complex issue. With the increased visibility and awareness of issues centered on death and dying has come an increased interest in the way people die. The biggest question to arise is a fundamental one: Do people have the right to die at a time and in a manner of their own choosing?