In January 1944, a 17-year-old Navy seaman named Nathan Schnurman volunteered to test protective clothing for the Navy. Following orders, he donned a gas mask and special clothes and was escorted into a 10-foot by 10-foot chamber, which was then locked from the outside. Sulfur mustard and Lewisite, poisonous gases used in chemical weapons, were released into the chamber and, for one hour each day for five days, the seaman sat in this noxious vapor. On the final day, he became nauseated, his eyes and throat began to burn, and he asked twice to leave the chamber. Both times he was told he needed to remain until the experiment was complete. Ultimately Schnurman collapsed into unconsciousness and went into cardiac arrest. When he awoke, he had painful blisters on most of his body. He was not given any medical treatment and was ordered never to speak about what he had experienced, under the threat of being tried for treason. For 49 years these experiments were unknown to the public.
The Scandal Unfolds
In 1993, the National Academy of Sciences exposed a series of chemical weapons experiments stretching from 1944 to 1975 which involved 60,000 American GIs. At least 4,000 were used in gas-chamber experiments such as the one described above. In addition, more than 210,000 civilians and GIs were subjected to hundreds of radiation tests from 1945 through 1962.
Testimony delivered to Congress detailed the studies, explaining that “these tests and experiments often involved hazardous substances such as radiation, blister and nerve agents, biological agents, and lysergic acid diethylamide (LSD)….Although some participants suffered immediate acute injuries, and some died, in other cases adverse health problems were not discovered until many years later—often 20 to 30 years or longer.”1
These examples and others like them—such as the infamous Tuskegee syphilis experiments (1932-72) and the continued testing of unnecessary (and frequently risky) pharmaceuticals on human volunteers—demonstrate the danger in assuming that adequate measures are in place to ensure ethical behavior in research.
In 1932, the U.S. Public Health Service, in conjunction with the Tuskegee Institute, began the now notorious “Tuskegee Study of Untreated Syphilis in the Negro Male.” The study purported to learn more about the treatment of syphilis and to justify treatment programs for African Americans. Six hundred African American men, 399 of whom had syphilis, became participants. They were given free medical exams, free meals, and burial insurance as recompense for their participation and were told they would be treated for “bad blood,” a term in use at the time for a number of ailments, including syphilis. In fact, they did not receive proper treatment and were not informed that the study aimed to document the progression of syphilis without treatment. Penicillin was considered the standard treatment by 1947, but it was never offered to the men. Indeed, the researchers took steps to ensure that participants would not receive proper treatment in order to advance the objectives of the study. Although the study was originally projected to last only 6 months, it continued for 40 years.
Following a front-page New York Times article denouncing the studies in 1972, the Assistant Secretary for Health and Scientific Affairs appointed a committee to investigate the experiment. The committee found the study ethically unjustified and within a month it was ended. The following year, the National Association for the Advancement of Colored People won a $9 million class action suit on behalf of the Tuskegee participants. However, it was not until May 16, 1997, when President Clinton addressed the eight surviving Tuskegee participants and others active in keeping the memory of Tuskegee alive, that a formal apology was issued by the government.
While Tuskegee and the U.S. military experiments discussed above stand out in their disregard for the well-being of human subjects, more recent questionable research is usually devoid of obvious malevolent intentions. However, when curiosity is not curbed by compassion, the results can be tragic.
Unnecessary Drugs Mean Unnecessary Experiments
A widespread ethical problem, although one that has not yet received much attention, is raised by the development of new pharmaceuticals. All new drugs are tested on human volunteers. There is, of course, no way subjects can be fully apprised of the risks in advance, as that is what the tests purport to determine. This situation is generally considered acceptable, provided volunteers give “informed” consent. Many of the drugs under development today, however, offer little clinical benefit beyond those available from existing treatments. Many are developed simply to create a patentable variation on an existing drug. It is easy to justify asking informed, consenting individuals to risk limited harm in order to develop new drug therapies for a condition from which they are suffering or for which existing treatments are inadequate. The same may not apply when the drug being tested offers no new benefits to the subjects because they are healthy volunteers, or when the drug offers no significant benefits to anyone because it is essentially a copy of an existing drug.
Manufacturers, of course, hope that animal tests will give an indication of how a given drug will affect humans. However, a full 70 to 75 percent of drugs approved by the Food and Drug Administration for clinical trials based on promising results in animal tests ultimately prove unsafe or ineffective for humans.2 Even limited clinical trials cannot reveal the full range of drug risks. A U.S. General Accounting Office (GAO) study reports that of the 198 new drugs that entered the market between 1976 and 1985, 102 (52 percent) caused adverse reactions that premarket tests failed to predict.3 Even in the brief period between January and August 1997, at least 53 drugs then on the market were relabeled because of unexpected adverse effects.4
In the GAO study, no fewer than eight of the drugs in question were benzodiazepines, similar to Valium, Librium, and numerous other sedatives of this class. Two were heterocyclic antidepressants, adding little or nothing to the numerous existing drugs of this type. Several others were variations on cephalosporin antibiotics, antihypertensives, and fertility drugs. None of these drugs filled an unmet medical need. The risks taken by trial participants, and to a certain extent by consumers, to develop these drugs were assumed not in the name of science, but in the name of market share.