Who Goes First?: The Story of Self-Experimentation in Medicine
About this ebook

Lawrence Altman has authored the only complete history of the controversial and understudied practice of self-experimentation. In telling the stories of pioneering researchers, Altman offers a history of many of the most important medical advancements in recent years as well as centuries past—from anesthesia to yellow fever to heart disease. With a new preface, he brings readers up to date and continues his discussion of the ethics and controversy that continue to surround a practice that benefits millions but is understood by few.


Language: English
Release date: April 28, 2023
ISBN: 9780520340473
Author

Lawrence K. Altman

Lawrence K. Altman, M.D., is medical correspondent of the New York Times and Clinical Associate Professor of Medicine at New York University Medical School.

    Book preview

    WHO

    GOES FIRST?

    LAWRENCE K. ALTMAN, M.D.

    With a New Preface

    University of California Press

    Berkeley Los Angeles London

    University of California Press

    Berkeley and Los Angeles, California

    University of California Press, Ltd.

    London, England

    © 1986, 1987, 1998 by Lawrence K. Altman. Preface © 1998 by Lawrence K. Altman. Foreword © 1987 by Lewis Thomas.

    Altman, Lawrence K.

    Who goes first?: the story of self-experimentation in medicine / Lawrence K. Altman.

    p. cm.

    Originally published: New York: Random House, c1987.

    Includes bibliographical references and index.

    ISBN 0-520-21281-9 (paper: alk. paper)

    1. Self-experimentation in medicine—History. 2. Medical research

    personnel—Biography. 3. Medicine—Research—History. I. Title.

    [DNLM: 1. Physicians—biography. 2. History of Medicine, Modern.

    3. Human Experimentation—history. 4. Research Personnel—

    biography. WZ 129A468w 1987a]

    R853.S44A57 1998

    619’.092'2—dc21

    97-42122

    Printed in the United States of America

    987654321

    The paper used in this publication is acid-free and meets the minimum

    requirements of American National Standard for Information Sciences—

    Permanence of Paper for Printed Library Materials, ANSI Z39.48-1984.

    This book is dedicated

    TO MY MOTHER, AND TO MY FATHER,

    WILLIAM S. ALTMAN, M.D.,

    who taught me that a physician should

    first and always place himself

    in the role of a patient.

    Contents

    Preface to the Paperback Edition

    Foreword BY LEWIS THOMAS

    Acknowledgments

    Prologue

    Chapter One AN OVERVIEW

    Chapter Two DON'T TOUCH THE HEART

    Chapter Three THE PERILOUS ROUTE TO PAINLESS SURGERY

    Chapter Four THE CASE OF THE QUEASY CHEMISTS

    Chapter Five THE PASTEURIAN CLUB

    Chapter Six THE MYTH OF WALTER REED

    Chapter Seven TAMING THE GREATEST KILLERS

    Chapter Eight TOXIC SHOCKS

    Chapter Nine FUNGI—INFECTING AND HALLUCINATING

    Chapter Ten LIFETIMES OF SELF-EXPERIMENTING

    Chapter Eleven DIETARY DEPRIVATIONS

    Chapter Twelve THE RED CELL RIDDLE

    Chapter Thirteen BLACK AND BLUE AT THE FLICK OF A FEATHER

    Chapter Fourteen CANCER: CAN YOU GIVE IT TO YOURSELF?

    Chapter Fifteen CHOOSING THE RIGHT ANIMAL

    Notes

    Index

    Preface to the Paperback Edition

    Upon entering a new millennium, we leave behind the century of medicine's greatest triumphs. Significant advances were made more often in the twentieth century than in all of history, and underlying those advances is a cardinal fact: they were achieved only through experiments on humans. This uncomfortable truth makes many people squeamish.

    Federal law properly requires a vast array of experiments on animals and humans before any potential new drug, device, or vaccine can be marketed. Animal experiments can only start the process. They are limited because species react in markedly different ways to various therapies and preventions; for example, penicillin can save human lives but kill guinea pigs.

    Determining the appropriate time for the first human experiment is a complex problem, and choosing the first volunteer for an experiment is a fundamental and recurring issue in medicine. The decisions involve researchers, volunteers, and members of the federally mandated committees, known as Institutional Review Boards (IRBs), that must approve experiments before they can be conducted on humans. Nevertheless, government officials, medical leaders, and specialists in medical ethics have devoted far less attention to the choice of the first volunteer than they have to many other arguably less important issues.

    The magnitude of human experimentation in the United States is greater than one might think. Each year, doctors conduct more than 3,000 clinical trials on products subject to U.S. Food and Drug Administration (FDA) regulations. Thousands more trials are conducted in other countries. Each involves many volunteers, and the humans who are the first to take experimental therapies and preventions often face major risks. Yet, without human experiments we would not know with any degree of certainty whether potential new preventions, such as vaccines to thwart infections, and new therapies for ailments, including drugs and surgery, are safe or effective. Even then, sometimes these therapies prove toxic or dangerous in surprising ways. In 1997, two approved appetite suppressants showed dangerous cardiac side effects when taken in tandem.

    In a long and continuing tradition, many doctors have chosen to be the first volunteer, believing that it is the most ethical way to accept responsibility for the unknown risks associated with the first steps in human experimentation. Some do it to smooth out the bugs before experimenting on another person. Others do it to prove a scientific point. A recent example is Dr. Barry J. Marshall, who, along with Dr. J. Robin Warren, identified a bacterium now known as Helicobacter pylori in patients with inflamed stomachs (gastritis) and ulcers. In the next phase of his research, Marshall swallowed a tube, which was used for tests to document that he had neither gastritis nor an ulcer and was not silently harboring H. pylori. Then Marshall swallowed H. pylori, which led him to develop gastritis. Although he did not continue the self-experiment long enough to produce an ulcer, Marshall's self-experiment provided strong indirect evidence, though not proof, that many ulcers result from H. pylori infection and are not caused by stress, as was previously thought. Marshall's role was one part of the continuing research that has led to antibiotic therapies that can cure gastritis and ulcers as well as the strong suspicion that chronic H. pylori infection can cause cancer of the stomach—a finding that could ultimately help prevent one of the world's most common cancers.

    Sometimes discoveries made through self-experiments are matters of taste. A case in point is the discovery of a potentially dangerous interaction between grapefruit juice and certain drugs, a finding derived from research into the interaction of alcohol and drugs used to treat high blood pressure.

    In reviewing earlier studies on this subject, Dr. David G. Bailey, of London, Ontario, noted that other scientists had nearly always added orange juice to mask the taste of the 95 percent alcohol that was used in the studies. Saying that his nature is "not to ask someone to do something that I wouldn't do," Bailey drank a double-strength screwdriver. He found the cocktail tasted ghastly and upset his stomach. Seeking a more palatable substitute for use in his own studies, Bailey and his wife spent an evening tasting several types of alcohol-juice combinations. They found that the bitterness of grapefruit juice best hid the taste of alcohol. Taking a systematic approach to the research, Bailey's team next conducted so-called crossover studies. One day Bailey and others took felodipine (a drug to lower blood pressure that was experimental at the time but is now on the market) with water. A second day they took the drug with only grapefruit juice, and another time they took the drug with grapefruit juice and alcohol. The drug did lower blood pressure, but to Bailey's surprise, the amount of felodipine in the blood was about three times higher than that reported by the scientists who had used orange juice in their experiments. Similar tests corroborated that orange juice did not have the same effect—elevating the level of felodipine—as grapefruit juice.

    Further research has shown that grapefruit juice suppresses an enzyme in the body that normally breaks down a number of drugs. Thus, grapefruit juice can markedly increase the potency of some drugs, including members of the calcium-channel blocker family used in treating high blood pressure and angina (a heart condition); cyclosporine, which is used to prevent rejection of transplanted organs and to treat severe cases of psoriasis (a skin ailment); saquinavir, an anti-HIV drug; and steroids.

    If scientists identify the enzyme-suppressing ingredient in grapefruit juice, a hope is that it could be added to certain drugs, thereby reducing the needed dose, lowering the cost, and improving reliability of the therapy.

    Successes from all kinds of human experimentation have led the public to demand cures and preventions for cancer, heart disease, AIDS, and myriad other human afflictions. If those goals are to be realized, human experimentation will continue to be mandatory since medical progress hinges on learning how humans respond to cutting-edge therapies. Furthermore, some human subjects involved in experiments to evaluate promising new therapies have suffered serious injuries that were a direct result of the experiment. The discovery of these hazards halted further testing, but had they not been detected in the small number of human volunteers, the promising new therapies might have gone to market, possibly creating medical disasters harming even larger numbers of people.

    Obviously, doctors cannot carry out all human experiments on themselves. The very nature of the problem under study could make the doctor conducting the experiment unsuitable or ineligible. These types of experiments depend on other humans to volunteer to go first, and generally they are therapeutic, meaning that they involve patients who are ill or have serious conditions. Other types of experiments are designed to study how therapies affect the body in so-called normal volunteers, which is when self-experiments are more likely to occur.

    Even when doctors do go first, many other individuals not connected with the research must follow in order to complete a full study with a statistically meaningful analysis. Society promotes recruitment of volunteers from the general population because medical research potentially benefits the community at large, and ethicists believe that everyone who is eligible has an obligation to participate. Yet, with scientists increasingly focused on numbers and statistics, it is easy to lose perspective on self-experimentation's role in the research process. The method's importance is not in the number of doctors who experiment on themselves—self-experimenters are likely to total only one or a few in a particular study because scientific teams are small in number—but small as their numbers may be, self-experimenters play a crucial role in the research process because they take the initial risks, when the uncertainties are greatest, and can make it safer for others to follow.

    There is a lack of surveys that would show how often doctors experiment on themselves, how investigators view self-experimentation, and how well they understand the methodology. My impression is that self-experimentation is practiced nearly everywhere that research on humans is conducted, despite the fact that rules may forbid it.

    Misinformation abounds about self-experimentation, in part because there is less attention paid to it, in either the professional or the public realm, than in the past. Within the medical community, many doctors assume self-experimentation has gone out of vogue, that a scientist can no longer act as a lone ranger and make a significant finding, even though experimenters like Marshall prove that this view is incorrect. Another cause of the misinformation is that IRBs rarely ask the identities of the subjects in the experiments they approve, and thus they have no way of knowing whether the subjects are volunteers who have no role in the study or the doctors themselves. Sometimes doctors experiment on themselves before seeking approval from an IRB. In a lecture in December 1987, William J. Curran, the head of an IRB at Harvard, said that few, if any, doctors experimented on themselves without his committee's approval. But several doctors in the audience responded by declaring that they had experimented on themselves without an IRB committee's permission or knowledge.

    Another factor obscuring self-experimentation is the way in which doctors report their scientific findings. In the past, reports in medical journals identified subjects by initials, and it was often possible to verify that the author of the study was involved by matching initials. To protect confidentiality, participants are now identified by numbers, not initials. So although editors of many leading journals have experimented on themselves in their own research projects and believe in the methodology, they have not developed a way to acknowledge self-experimentation. Although some self-experimenters have found a way to acknowledge their role in the text of their scientific articles, others have not, partly to avoid potential criticism from colleagues. It seems ironic that doctors who follow the biblical Golden Rule ("do unto others as you would have them do unto yourself") by experimenting on themselves might be criticized rather than applauded, but that may be their fate. Even at the National Institutes of Health, the renowned federal research agency, self-experimentation is not considered especially praiseworthy.

    I have participated in scores of discussions about self-experimentation, and although some criticisms have merit, most are questionable. For example, it is true that some scientists experiment on themselves before seeking approval from an IRB, in part because they regard the review board process as needless red tape. As some critics correctly note, a researcher's enthusiasm may triumph over his judgment, leading an investigator to take undue health risks. Also valid is the concern that a self-experimenter's bias could affect the findings if the experiment involved subjective responses.

    Other criticisms have less merit. Some say that doctors who experiment on themselves are simply seeking attention, if not fame. Others charge, without supporting evidence, that doctors experiment on themselves—consciously or unconsciously—as a death wish. As to lack of objectivity, it's hard to tell how a doctor using himself as a subject for his own study would be more biased than one using volunteers for an experiment he designed.

    In analyzing the criticisms, it seems that an overwhelming concern is to curb a researcher's overenthusiasm and thus to protect researchers against themselves. This attitude overlooks the fact that people vary enormously in their intellectual curiosity, desire to pursue knowledge, and attitude toward risk-taking. It is puzzling why scientists become touchy about self-experimentation when we live in a society that allows people to climb mountains and become test pilots and permits companies to pay large salaries to those who do dangerous work on ocean rigs and bridge construction. There is a delicate balance between the potentially valuable and the foolish. But why should the health profession be morally prudish about allowing researchers to take risks as long as they have approval from an IRB?

    Furthermore, self-experimentation is generally done in the earliest stage of research, not the last, and thus its role comes into play long before a larger number of volunteers are recruited. The earliest stages of drug and device testing concentrate on safety, not effectiveness, and the doctors conducting the studies usually select healthy, unaffected individuals. Doctors rarely experiment on the hundredth volunteer until they have first studied ninety-nine others in sequence. The early results usually draw little attention, even in scientific publications; therefore, by the time an experiment advances to involve a larger group, the original self-experiment is often forgotten. Thus, when an experiment or study involves a group, why not include the doctors who are conducting it? And why not let readers of scientific papers know about it?

    Certainly, there is a need to prevent hazards resulting from an overenthusiastic scientist experimenting on an individual. That is one reason why IRBs exist: to encourage scientists to analyze the risks and benefits they foresee in an experiment. However, critics usually do not distinguish the situation of scientists experimenting on themselves without IRB approval from that of scientists who first obtain IRB approval and then decide to go first. By asking doctors whether they would experiment on themselves, IRBs could stimulate discussion about experimenters’ views toward test subjects—whether they regard volunteers differently than they would regard themselves as participants in their own experiments, and why.

    The number of IRBs in the U.S. has grown to an estimated 5,000. Most work out of the limelight and rarely come under close scrutiny, despite the importance of their role. Regulations governing protection of humans who participate in experiments underwent their last major revision in 1981. IRBs do little post-approval review largely because they lack the resources to do much more than give the initial go-ahead, and thus IRBs often do not know what researchers actually did after the experiments were approved. This lack of follow-up is causing increased concern because more of the experiments and studies brought before IRBs for approval are being paid for by the pharmaceutical and biotechnology companies that produce what is being tested. Critics say that IRBs are often pressured into approving requests for human experiments because of increased competition among scientists and medical centers. Furthermore, government reports have questioned the ability of IRBs to assure the quality of informed consent (there have been charges that IRBs have approved flawed consent forms), and a new awareness of problems with IRBs has led critics to call for greater accountability. In response, government officials have begun to collect systematic data to learn more about how IRBs make their decisions, although some experts believe that the effort will not help answer this or other questions.

    I have heard medical leaders criticize colleagues for proposing, or actually carrying out, an IRB-approved self-experiment. Yet they endorse the merits of carrying out the very same experiment on an individual unconnected with the research. However, the benefits of medical expertise in experiments have been recognized by no less than IRBs, some of which have actually required that experimenters recruit volunteers only among medical students, doctors, nurses, and laboratory technicians since such individuals are considered to be more sophisticated in their comprehension of an experiment's goals and complexities. These medical leaders leave the impression that they have a double standard for the risks doctors and lay volunteers should take in participating in IRB-approved human experiments. Perhaps doctors criticize the practice of self-experimentation to avoid the issue of why they don't participate in their own research. In explaining why they do not do to themselves what they do to others, some doctors have told me that their lives as doctors were too valuable to take such risks. They asked who would provide for their families if they were injured in a self-experiment, but the same doctors rarely asked how the family of a volunteer would be compensated in the event of injury to the volunteer.

    Issues like this raise fundamental questions of medical ethics, a topic that is attracting increasing attention. The field of medical ethics came to the fore in the late twentieth century, largely in response to a number of scandals that tarnished medical research's image. The Nazi experiments in World War II were the most horrendous. Nazi doctors infected, mutilated, tortured, and murdered millions of people in the name of scientific experimentation, but the experiments had no scientific purpose, design, or value. After the war, a number of Nazi doctors were tried, and some hanged, in the Nuremberg war crimes trials. Out of the trials, in 1947, came the Nuremberg Code, which calls for doctors to use themselves in risky experiments. The Nuremberg Code remains a gold standard for the conduct of all forms of human experimentation in the U.S. and elsewhere, and its spirit is embodied in other codes of medical ethics and international agreements.

    Only a few years after World War II, the world learned that the U.S. government had conducted medical experiments on humans that did not meet Nuremberg Code standards. Studies of infertility and other reproductive procedures, radiation exposure, and the mentally retarded were among the scandals of the time period, but the Tuskegee syphilis study attracted the most attention. Although self-experimentation was rarely, if ever, involved, the ire these experiments raised, which continues to be salient today, is illustrative of the strong reactions provoked by issues of medical ethics.

    In the Tuskegee study, the U.S. Public Health Service and the Alabama State Health Department recruited 399 poor black men with syphilis to chart the natural course of the disease. The study began in 1932 when syphilis was incurable and available therapies were largely ineffective. At that time doctors often did not tell patients their true diagnosis; instead they used euphemisms like "bad blood." In return for free medical care and free burial, the men agreed to let doctors follow them for the rest of their lives. But the doctors and government officials in the Tuskegee study did not provide information about the study and its potential hazards—what would now be regarded as adequate informed consent—even though medical historians have credited the U.S. Army-sponsored yellow fever experiments conducted in Cuba in 1901 for establishing the principle of informed consent. The nature of the Tuskegee study changed drastically immediately after World War II with the discovery that penicillin could cure syphilis. The antibiotic may have come too late to help many of the Tuskegee participants; nevertheless, in failing to order doctors to discuss penicillin therapy with the participants, the U.S. Public Health Service denied the participants a voice in their own treatment decisions.

    Although adverse publicity halted the Tuskegee study in 1972, subsequent reexaminations of it have raised important questions about human experimentation. Would the government officials and doctors involved in the Tuskegee study have participated if they or a family member had had syphilis? Would they have withheld knowledge of penicillin from them? Should anyone with responsibility for experimenting on humans judge a study’s merits without deciding whether they would be a participant in the study? The government officials and doctors involved in the Tuskegee study gave no indication that, if infected, they would have been willing to share the same risks they demanded of their volunteers. As many who have studied the record of human experimentation have said, it is easy to consent to an experiment for someone else.

    Tuskegee and its ethical implications continue to be much on the nation’s mind. In 1997, a quarter of a century after the study’s ethical flaws were first brought to public attention, President Bill Clinton publicly apologized for the government’s role. In many ways, the apology came much too late. Researchers must counter wide distrust of the government and science emanating from the Tuskegee and other studies, particularly in the African-American community, which has had a disproportionately low participation rate in government-sponsored research. Outrage over other scandalous studies led the U.S. government to require in 1966 that every medical center receiving taxpayer funding for human experiments establish an IRB.

    Scandals like Tuskegee point to a deeper problem: the scientific community largely fails to study its own history and the fundamentals of human experimentation. The history of medicine is given insufficient time and attention in medical education. Ethics are discussed more frequently now owing to a federal government rule requiring ethics courses for researchers, graduate students, and faculty at medical schools and centers where human experimentation is carried out. However, the courses are not standardized and are not always taken seriously by faculty. Frequently they focus on issues like fraud, authorship, and patent rights rather than on topics like those discussed in this book. This neglect is nothing new. "Doctors have had low regard for patient consent throughout history," says Dr. Jay Katz of Yale Law School, a physician and a leading expert on human experimentation. "From my experiences, giving researchers just a reasonably thorough grounding into the ethics of research, including the kind of questions that you raise, is still a very rare phenomenon."

    Health officials and scientists are now emphasizing the importance of new and emerging diseases in belated recognition of their enormous overconfidence in declaring that infectious diseases were no longer a major threat. The discoveries of Legionnaires' disease and AIDS were landmarks in changing public attitudes toward the threat of infection. AIDS was an unknown disease when it was first recognized in the U.S. in 1981. Now, the World Health Organization estimates that at least 30 million people will be infected with HIV, the AIDS virus, by 2000. Development of an AIDS vaccine is of the utmost importance in the ultimate control of the viral disease, and there is a long history of self-experimentation in the development of such preventions. But the future role of self-experimentation in AIDS research is uncertain.

    In 1984, when then Secretary of Health and Human Services Margaret M. Heckler erroneously gave sole credit for the discovery of HIV to American government scientists, she publicly declared that an AIDS vaccine would be available by 1986. Heckler's words stand as an embarrassing illustration of how the cabinet official overseeing the government's role in vaccine development misunderstood the difficulty of developing a new prevention. She also reflected the government's and scientific community's overconfidence in their knowledge, technology, and ability to conquer a disease quickly.

    Scientists have developed more than twenty experimental AIDS vaccines, and more than a dozen have been tested on small numbers of uninfected individuals. Leaders of AIDS-ravaged developing countries in Africa and Asia say they urgently need an AIDS vaccine and that waiting to fully test one is costing lives. But the findings to date have been sobering, though they were tantalizing enough in mid-1997 for President Clinton to declare a national goal of developing an AIDS vaccine within a decade. However, no AIDS vaccine has produced sufficient scientific evidence in any preliminary experiment to warrant large-scale testing. Many leaders in the field say that it will take more research on additional experimental vaccines before one is selected for a large-scale trial.

    If a self-experimenter were to test an AIDS vaccine on himself, he would face a number of social, psychological, and ethical problems that would be non-issues if self-experimentation were used in testing other vaccines. One problem is that AIDS vaccines leave fingerprints (known as antibodies) in the blood similar to those resulting from natural infection with HIV. Thus, those who take AIDS vaccines will test positive in the standard HIV screening test. Being considered HIV-positive could lead to serious problems in trying to buy health or life insurance policies, or in traveling to countries that refuse entry to infected individuals.

    With certain measures, however, it could be shown that being vaccinated against HIV is not the same as carrying the virus that causes AIDS. Antibodies from an AIDS vaccine differ in important ways from those produced in response to natural HIV infection, and tests with more sophisticated laboratory methods (known as Western blots) can distinguish between the two. Furthermore, those who take an experimental vaccine can receive a certificate explaining that their HIV-positive status is a direct result of participation in an AIDS vaccine trial. Nevertheless, these potential problems make many AIDS researchers unwilling—understandably—to take a vaccine themselves.

    But we are still left with the question of who should be the first to try a vaccine. In the past, manufacturers often first tested a vaccine on people in developing countries, a policy that has ethical implications of its own. Now health officials and ethicists debate where an AIDS vaccine should be initially tested, saying that the need for a vaccine in a country and a vaccine trial in that country may differ. Some health officials believe that rules should vary country by country. Others counter that any AIDS vaccine must first be tested in the country where it is developed.

    Vaccines are derived from either killed or live bacteria or from viruses. Nearly all HIV vaccines tested on humans so far have come from a small portion of the virus's outer coat. The most promising findings have come from a vaccine tested in primates that is derived from a mutated version of the HIV-like simian immunodeficiency virus (SIV) that infects primates. SIV and HIV each contain nine genes. Dr. Ronald C. Desrosiers's team of scientists at the New England Regional Primate Center in Southborough, Mass., has removed a gene from SIV to weaken but not kill the virus so that it can stimulate immune system defenses without causing simian AIDS. Desrosiers has removed three and four genes from HIV in creating experimental vaccines for humans. But because HIV incorporates itself into the machinery of cells, a theoretical risk exists that the weakened live virus might someday revert to a disease-causing agent or lead to cancer. Although laboratory preparations of the potential human vaccine appear to be safe based on early animal experiments, they do not meet FDA standards for use in humans. Even if approval were given to test the concept of a live, weakened vaccine in humans with an aim of eventually offering it to people at high risk for AIDS, more work is needed to develop a pharmaceutical-grade product. Government officials and other scientists have rejected Desrosiers's calls to slowly begin the long-term human experiments needed to prove the safety and effectiveness of the mutant vaccine. But if permission were granted, who would take the first injection of the vaccine? And why?

    Desrosiers has rejected the idea of taking it himself because he is not at risk for AIDS. He said that the unknown risks of administering a laboratory-grade vaccine to a human would leave him with nothing to gain from experimenting on himself. "If I were at any risk, I would not hesitate to take it in a minute, with or without FDA approval," Desrosiers told me. "If I was at great risk and decided I wanted to test the vaccine, I would go to my freezer, take out the material, and shoot myself in the arm like the vaccine developers in your book." A small number of other doctors have said they would be willing to take Desrosiers's vaccine if the FDA approved such an experiment.

    So far, only one researcher has gone first in trying to develop an AIDS vaccine. He is Daniel Zagury, a French researcher who says he took the first dose of an AIDS vaccine. But little of significance has come out of his experiment.

    Much misinformation exists about policies on self-experimentation. More than once, officials of drug companies and the National Institutes of Health (NIH) have told me that doctors could not experiment on themselves because the FDA prohibited it. There is no such prohibition, and, in fact, the NIH has explicitly permitted self-experimentation at its campus in Bethesda, provided that the researcher goes through the same procedures as a non-scientist volunteer. Although the NIH pays for a large share of the human experiments carried out at medical centers throughout the U.S., it allows the individual centers to make their own policies on self-experimentation.

    In Canada, the National Research Council, one of that country's several granting agencies, has created confusion with its guidelines, which state that approval will be denied to a research protocol in which any of the researchers participates as a research subject. Self-experimentation involves a basic conflict of interest that could influence the scientific validity and quality of the research, and may also induce the investigator to assume greater risks than would be reasonable. The only recognized exceptions are for calibrating equipment.

    Dr. Margaret-Anne Somerville, director of the McGill University Centre for Medicine, Ethics, and Law, headed the Canadian National Research Council Human Subjects Research Ethics Committee, which authored the guidelines. According to Somerville, the guidelines were written without broad knowledge of the history of self-experimentation. They were intended mainly to avoid bias and to prohibit a senior faculty member from coercing a junior scientist into self-experimentation. Somerville's successor as chair of that committee, Professor Bernard Dickens of the University of Toronto, says the language is not meant to be absolute and that in 1997 his committee had approved self-experiments.

    In the U.S., neither government commissions appointed to study human experimentation nor groups representing academic and organized medicine have seriously addressed the issues of self-experimentation and choice of the first human volunteer. Discussion is further thwarted by the fact that reports from these groups seldom reference publications that do address the topics. The time is long overdue for the government and professional organizations, such as the American Medical Association, the Association of American Medical Colleges, and the Association of Academic Health Centers, to study these issues as part of the ethics of human experimentation.

    This dearth of attention reflects medicine's apathy toward its own history, which is rich in stories about self-experimentation that are as fascinating as they are instructive in how discoveries were made and how health care has advanced. Self-experimentation continues in part because young doctors emulate senior members of the profession—part of the medical education system's emphasis on learning from role models. (Many doctors who have become self-experimenters followed the lead of mentors who used themselves for their experiments.) However, as research units have become larger in number and instruction less direct, many younger researchers have never learned from mentors about the process of choosing human volunteers for experiments. Much self-experimentation practiced today passes unnoticed. One reason is that the contemporary emphasis on statistics—as important as they are—obscures the identities of participants by reducing them to a number, losing a human value in the translation.

    Society has expressed strong concerns about the ethics of human experimentation. If that interest is to be sustained—and I believe it will be—then we must focus on the issue of who goes first.

    Lawrence K. Altman, M.D. October 1997

    Foreword

    BY LEWIS THOMAS

    As an experimental pathologist, I have been engaged, off and on for most of a professional lifetime, in research on the mechanisms of human disease. I have always been aware of the occasional contributions made by physician-investigators who used themselves as experimental subjects, and I have carried along in my memory of old medical school lectures the usual anecdotes about such experiments —John Hunter, for instance, and Walter Reed. I was an intern at the Boston City Hospital when John Crandon was his own guinea pig for research on vitamin C. I thought the stories were interesting but relatively unimportant anomalies in the annals of research, a sort of scientific exotica. I was skeptical about the reliability of most of the anecdotes, and I saw no way to verify any of them by perusing the standard literature of biomedical science.

    But now, seen as a whole, autoexperimentation emerges as a highly significant branch of medical history, with its own continuity and coherence. With the unique vision of a skilled physician, a meticulous scholar and a professional journalist, Lawrence K. Altman, M.D., has put the history together and uncovers a solid but previously unrecognized part of the underpinning of modern medical science and technology. Like any scholarly account of history, it presents the two sides of medical research—a flattering view of courageous, devoted, insatiably curious scientists willing to risk anything, including their lives, to get at the truth, but also the seamier side of human nuttiness and grabbiness at its most fallible.

    For all that it informs, Who Goes First? also entertains. These are dramatic stories, not only because they are about people willing to risk death but also because they recount the efforts of medical investigators to conquer disease. Ultimately, the results gained from self-experiments affect all of us. Perhaps most important, Dr. Altman raises timely questions about the ethics of human experimentation, questions we as a society must evaluate and eventually answer.

    Who Goes First? ought to be fascinating reading for almost anyone —the general public, physicians and other professionals, people interested in science and its history, and readers fond of solid documentation and original sources. It ought to be required reading for undergraduates hoping for medical school and medical students themselves. It will, I predict, come as a surprise.

    Acknowledgments

    Over the years, many people have told me anecdotes about self-experimentation and those who did it, and many others have provided support in additional ways. I am indebted to them all, and with hopes that I have not slighted anyone, I wish in particular to thank:

    Donna Anderson; Howard Angione; Robert S. and Mary Ascheim; Judy Baggot; William B. Bean; Gunnar Biorck; Michael Bliss; Mark and Kay Bloom; John Z. Bowers; Robert A. Bruce; Roger J. Bulger; Veronique Buttin; Eli Chemin; Matt Clark; Ronald Clark; William J. Darby; Lena Daun; Hilary Davies; Gail de Avillez; Friedrich Deinhardt; Joe Elia; Leif Erhardt; Myron and Sabine Farber; Saul A. Farber; Agustin M. Florian; Ben and Arna Goffe; Richard A. Frank; Silvio Garattini; Alex Greenfield; Laurence A. Harker; Theodore Hauschka; David Hendin; John and Marty Herbert; Robert and Sheilah Hillman; James G. and Beate Hirsch; Dorothy M. Horstmann; Edward J. Huth; Glenn and Ginger Irvine; Thomas H. Jukes; Lucy Kroll; Vladislav O. Kruta; David and Joanne Kudzma; Dominique and Olivier L'Ecrivain; Richard I. Levin; Anne Liebling; Richard J. Litell; F. C. Macintosh; George and Arlette Miller; Louis Nelson; Barbara Oliver; Robert G. Petersdorf; Chase N. Peterson; Carl and Betty Pforzheimer III; Ronald R. Roberto; David E. Rogers; Stanley Rothenberg; Guillermo C. Sanchez; Richard Schatzki; Jonas A. Shulman; Margaret Stanback; Andreas Sjogren; Carl and Susan Steeg; James H. Steele; John and Susan Talbott; Lewis Thomas; Jurgen Thorwald; Fred and Carol Valentine; Kenneth S. Warren; Stephen and Dorothy Weber; Paul E. Wiseman; and Richard J. Wolfe.

    My thanks also to the library staffs of the following institutions for their help at various times in this project: Centers for Disease Control in Atlanta; Francis A. Countway Library of Medicine in Boston; Karolinska Institute in Stockholm; Mount Zion Medical Center in San Francisco; Mario Negri Institute of Pharmacologic Research in Milan; National Library of Medicine in Bethesda, Md.; New York Academy of Medicine; New York Times; New York University School of Medicine; St. Luke's-Roosevelt Hospital Center in New York; Royal Society of Medicine in London; Tufts University School of Medicine in Boston; University of California Medical School at San Francisco; University of Virginia Medical School at Charlottesville; University of Washington Medical School in Seattle; The Wellcome Trust in London; and the World Health Organization in Geneva.

    Special thanks go to research librarian Judy Consales, for her diligence in checking obscure points and the references that were misplaced in all my travels over the years, and to my editors, Rob Cowley and Carol Tarlow.

    A grant from The Josiah Macy, Jr., Foundation helped support this project.

    Prologue

    In Lima, Peru, there stands what may be the only statue in the world of a medical student. It memorializes a young man named Daniel Carrión and the dramatic experiment he performed on himself in 1885, an experiment that solved a great mystery about a disease that was killing people in South America. Carrión's research conclusively linked, for the first time, a disease of the skin called verruga peruana and another of the blood called Oroya fever (because it had struck workers of the Oroya railway line in the Peruvian Andes).

    For years doctors had been trying to find out the cause of puzzling bumps that would erupt on the skin and in the mouths of people living in the steep valleys of the Andean cordillera in Peru, Ecuador, and Colombia. These small bumps, which to victims looked like red warts and to doctors like tumors of blood vessels, were inevitably accompanied by fever and severe joint pain. The rash of bumps was called verruga from the Spanish word for warts, and the sometimes fatal condition was known to have existed in the region for centuries. Epidemics suggested that the disease might be infectious, perhaps caused by the microscopic organisms scientists were just starting to discover. But no one had yet found the organism.

    A worldwide search for clues to the puzzling disease began. Researchers from as far away as Europe sent to South America for specimens of skin with the verruga bumps to study in their laboratories, and a Peruvian medical society set up a prize competition with the hope it would spur interest in the disease. Daniel Carrión, a twenty-six-year-old Peruvian medical student, decided to enter this competition. As a youth, Carrión had often accompanied his uncle on trips through the Andes Mountains, where he had seen verruga sufferers firsthand and had been deeply impressed by their affliction. Now in his sixth year of medical training, he had spent the previous three years studying the geographic distribution and the pattern of symptoms of verruga peruana in preparation for the thesis that was required for his medical degree.

    Carrión was vaguely aware that some verruga patients developed anemia, a deficiency of oxygen-carrying hemoglobin in the red blood cells, before they developed the bumps on the skin. He also knew that some doctors believed Oroya fever, a disease of the blood, was actually verruga peruana, while other doctors did not believe they were even related. Carrión's chief interest was not in solving this controversy. He wanted to study the evolution of the eruption of the skin bumps to see how the onset of verruga peruana differed from that of other diseases such as malaria. By clarifying the earliest phases of the disease, Carrión thought he could help doctors treat patients more effectively.¹

    The more he studied the disease, the more he became convinced that he needed to inject material from a verruga into a healthy person. In this way he believed he might learn whether the disease could deliberately be given to a human and, if so, document the length of an incubation period and the progression of symptoms. He decided to do the experiment on himself. His friends and professors tried to dissuade him, but Carrión insisted on going ahead. He believed verruga peruana was primarily a Peruvian disease, and he was obsessed by the belief that it should be solved by a Peruvian. The prize competition had been his final impetus.

    On the morning of August 27, 1885, in a hospital in Lima, Carrión examined a young boy whose skin was affected by the disease. Carrión's professor and three of his assistants were with him. All expressed disapproval of what he was about to do and refused to help. Determined, nonetheless, to go ahead, Carrión took a lancet and drew blood from a verruga over the boy's right eyebrow. Then, unsuccessfully, he tried to inoculate the material into his own arm. At this point, one of Carrión's colleagues overcame his misgivings and helped the medical student finish the inoculation.

    What Carrión thought about his experiment during the next three weeks remains unknown. It was not until September 21 that he recorded his first entry in his diary. He told of feeling a vague discomfort and pains in his left ankle. Two days later, that discomfort had worsened to a high fever and teeth-chattering chills, vomiting, abdominal cramps, and pain in all his bones and joints. He was unable to eat or to quench a strong thirst. By September 26, he could not even maintain his diary, and his classmates assumed the task. Carrión's doctors had no therapy to offer other than herb poultices, comfort, and prayers. Carrión was not dismayed; he believed he would recover. But tests showed that his body had suddenly become alarmingly anemic, with many millions fewer red blood cells than normal. We now know they were being destroyed by bacteria whose existence was unknown at the time. The anemia was so severe that it produced a heart murmur—an abnormal sound usually produced by a disorder of a valve in the heart. Carrión could recognize it himself by listening to the sounds transmitted from his heart to the arteries in his neck.

    Although Carrión's condition weakened each day, his mind remained alert. As he lay sweating and feverish in a rooming house, he began to understand the implications of what he had done. He remembered the varying theories about the links between Oroya fever and verruga peruana and recalled the recent death of a friend. On October 1, Carrión told his friends: "Up to today, I thought I was only in the invasive stage of the verruga as a consequence of my inoculation, that is, in that period of anemia that precedes the eruption. But now I am deeply convinced that I am suffering from the fever that killed our friend, Orihuela. Therefore, this is the evident proof that Oroya fever and the verruga have the same origin."²

    Carrión was right. He had shown that verruga peruana and Oroya fever were in fact one disease. Whatever caused the mild skin condition of verruga peruana could also cause the high temperatures, bone pain, and fatal anemia of Oroya fever. It was yet to be discovered that both are manifestations caused by a bacterium (Bartonella bacilliformis) which is spread by sandflies.³

    Carrión's condition rapidly worsened, and he was finally admitted to the same hospital where he had first inoculated himself. There, his physician prepared a blood transfusion to help correct the anemia, but for unknown reasons a committee of doctors decided to delay giving it to him. It may seem today that Carrión's medical care was bungled when he was denied a transfusion that might have saved his life, but the necessary blood typing tests were unknown then and transfusions were risky. As it was, he went into a coma and died on October 5—thirty-nine days after performing his self-experiment.

    Although Carrión was eulogized profusely after his death, at least one prominent doctor publicly criticized the experiment as a horrible act by a naive young man that disgraced the profession. And there were some who said Carrión had committed suicide. To complicate the situation, when the police learned the identity of the physician who had helped inject the verruga material into Carrión's arm, they charged him with murder. Carrión's professor, who had originally opposed the self-experiment, came to the defense of his student and the assistant. The professor cited the many physicians in other countries who had risked their lives in self-experiments. As a result of his arguments, the murder charge was dropped. Today, Carrión is an unqualified hero in Peru, where medical students sing a ballad to his memory.

    The verruga story did not end with Carrión's death. In 1937, Dr. Max Kuczynski-Godard, a physician and bacteriologist in Lima, repeated Carrión's self-experiment.⁴ Kuczynski-Godard used pure cultures of the Bartonella bacilliformis bacterium in what seems to have been a crude attempt to study immunity to the infection. (The identity of the bacterium had been discovered in 1909 by Alberto Barton, a Peruvian physician.⁵) Kuczynski-Godard took skin biopsies from the area where he had injected the organisms into himself and then examined them under the microscope. Seventeen days later he became seriously ill with what had come to be known as Carrión's disease. Max Kuczynski-Godard was apparently luckier than Daniel Carrión—there is no record that he died from his self-experiment.

    Now we know that the Bartonella bacteria invade the red cells in the first and most dangerous stage of the disease (bartonellosis); it can often be fatal, but in mild cases the individual may not be aware of the infection. Then, from two to eight weeks later, it goes on to cause the bumpy skin rash (verruga peruana), which may last for up to a year. This second stage is a milder form of the disease, which now occurs only rarely in South America and can be cured with antibiotics. The self-experiments of Carrión and Kuczynski-Godard led public health officials to knowledge that has enabled them to control the disease.

    Moreover, by showing that one organism could cause two vastly different diseases, Carrión gave scientists insights into the enormous diversity of human biology. His discovery set the stage for others to learn, for example, that chicken pox and shingles are different manifestations of the same herpes virus. And the ramifications of Carrión's finding can be appreciated by its application to a newly recognized disease, Acquired Immune Deficiency Syndrome, or AIDS. Scientists have learned that the feline leukemia virus can cause not only leukemia in cats but also a wasting disease and suppression of the immune system resembling AIDS. The visna virus can cause neurological damage and also a wasting disease in sheep. Both the feline leukemia virus and the visna virus are members of the retrovirus family and thus have become models to study another retrovirus—the one that causes human AIDS.

    Was Carrión foolish? What if his experiment had proved nothing at all? Like all acts of courage, self-experimentation straddles the fine line between heroism and foolishness. When the experiment goes well, scientists heap praise on the researcher who did it; when disaster occurs, some critics are quick to denounce the self-experimenter and his methodology. Carrión's experiment was no different. Once he had made the decision that experimentation on a human was necessary, he must have asked himself: On whom? Carrión answered that question in the only way his conscience would allow: Myself.

    My interest in self-experimentation and physician-investigators who deliberately choose to do their experiments on themselves stems from a discussion during my first days at Tufts Medical School in 1958. After an embryology class a few of us gathered for an informal explanation of how doctors had learned so precisely some of the steps in the anatomical development of fetuses. We were told that several years earlier a small team of Harvard doctors had asked a group of women to cooperate in a research study. The doctors asked the women not to practice birth control during the month before their scheduled hysterectomies. Following the operations, researchers examined the uteri for indications of pregnancy and then made microscope slides from the thirty human embryos that were found. Technically, the Harvard doctors had performed abortions on the women. The research project became a subject of public and religious controversy, and because statutes then prohibited abortion in Massachusetts, some critics charged that the researchers had broken the law.

    For me, the discussion raised nagging questions about the nature of research on humans, then in its halcyon period. Did doctors customarily choose to skirt the law or commit crimes in the name of clinical research (experimentation on humans)? What was the process by which researchers sought and chose human volunteers for experiments? In the case of the Harvard project, how had the doctors explained it to their patients?⁶

    These questions were in the back of my mind a few weeks later when a lecturer, apparently trying to point out that doctors make mistakes in research, sometimes fatal ones, told us about a physician who had used himself as a volunteer for his own experiment. He was John Hunter, a surgeon to King George III and one of the most celebrated anatomists and medical teachers of his day. He pioneered in transplant surgery by placing a human tooth in a cock’s comb, made some of the finest, most detailed
