Which type of experimental protocol requires that scientists who interact with the experimental subjects be unaware of the subjects' experimental condition?

The human experiments that called forth the Nuremberg trials seem to have been viewed largely as a function of Nazi evil that did not represent practices typical of professional medical behavior more generally.

From: Reference Module in Biomedical Sciences, 2014

Control strategies in general anesthesia administration

Adriana Savoca, Davide Manca, in Control Applications for Biomedical Engineering Systems, 2020

A necessary condition for human experimentation to be both legal and ethical is the patient's informed consent. According to the Declaration of Helsinki, developed by the World Medical Association (WMA) as a statement of ethical principles for medical research involving human subjects, “each potential subject must be adequately informed of the aims, methods, sources of funding, any possible conflicts of interest, institutional affiliations of the researcher, the anticipated benefits and potential risks of the study and the discomfort it may entail, post-study provisions and any other relevant aspects of the study” (World Medical Association, 2013). This means that the medical doctor must engage patients in a discussion aimed not only at informing them but also at educating them, understanding their concerns, and listening to potential doubts and questions. In the case of anesthesia, this is particularly important. Since the drugs involved in the procedure may have critical adverse effects and affect not only the outcome of the surgical operation but also the postrecovery phase, the patient must be able to place complete trust in the anesthetist's judgment and in the anesthetist's use of any tools involved in the procedure.

URL: https://www.sciencedirect.com/science/article/pii/B978012817461600010X

MORAL AND ETHICAL ISSUES IN CLINICAL ENGINEERING PRACTICE

In Management of Medical Technology, 1992

HUMAN EXPERIMENTATION

Although the overriding aim of diagnosis and therapy is to provide benefits to individual patients, experimentation aims to expand the boundaries of scientific knowledge. As therapist, the clinician focuses upon the well-being of the patient, but as researcher, the immediate concern is to generate scientifically sound information. Accordingly, intention or aim rather than outcome differentiates therapy from experimentation. Even when a course of treatment fails to improve the patient's condition, it constitutes therapy if such improvement was the intent. This remains true even when the course of treatment yields exciting new medical knowledge as an unintended consequence. Similarly, even when a course of treatment does benefit a subject, it constitutes experimentation if its aim was primarily acquisition of new knowledge and it ranks as experimentation even when it fails to realize this aim.

At this point in discussions of the morality of human experimentation, the traditional move is to distinguish therapeutic from nontherapeutic experiments. As Fried (1978) wrote:

“In therapeutic experimentation a course of action (or studied inaction) is undertaken in respect to the subject for the purpose of determining how best to procure a medical benefit to the subject. In nontherapeutic experimentation, by contrast, the sole end in view is the acquisition of new information.”

Again, the central difference is a matter of intention or aim rather than results. If the purpose of an experiment is to determine how best to benefit the patient, it is deemed therapeutic. If its purpose is to generate new knowledge, it is deemed nontherapeutic.

These distinctions between therapy and experimentation on the one hand and therapeutic and nontherapeutic experimentation on the other, however, can often be misleading. To define therapy in opposition to experimentation may lead to a failure to appreciate that therapy is only one kind of clinical practice aimed at generating patient benefits and so may lead to the mistaken conclusion that all nontherapeutic practices are experimental. Clearly, this is an error. In addition to attempting to heal patients and to mitigate the effects of illness, medical-care providers also seek, where possible, to prevent disease. In addition, therapy itself is generally preceded and to some extent dictated by diagnostic measures: efforts to discover the nature and source of the patient's ailment. Therapy, prevention, and diagnosis are all conducted with the aim of benefiting the patient. This applies to the distinction between therapeutic and nontherapeutic experimentation as well. Just as a course of action may be taken to determine what form of therapy will best benefit the patient, so can a course of action be taken to determine what will yield the best diagnosis or will best prevent illness in the first place. Although diagnostic and preventive measures are not themselves therapeutic, the primary motivation for using them is to benefit patients.

Practice and Research

Given these considerations, a distinction drawn in the mid-1970s by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research is more appropriate for discussion of the morality of human experimentation. The commission offered a distinction between practice and research to replace the conventional distinction between therapy and experimentation. Quoting the commission, Alexander Capron (1986) writes:

“[T]he term ‘practice’ refers to interventions that are designed solely to enhance the well-being of an individual patient or client and that have a reasonable expectation of success. In the medical sphere, practices usually involve diagnosis, preventive treatment, or therapy; in the social sphere, practices include governmental programs such as transfer payments, education, and the like.

By contrast, the term ‘research’ designates an activity designed to test a hypothesis, to permit conclusions to be drawn, and thereby to develop or contribute to generalizable knowledge (expressed, for example, in theories, principles, statements of relationships). In the polar cases, then, practice uses a proven technique in an attempt to benefit one or more individuals, while research studies a technique in an attempt to increase knowledge.”

Although the practice/research dichotomy has the advantage of not implying that therapeutic activities are the only clinical procedures intended to benefit patients, like the therapy/experiment distinction it is based on intention rather than outcome. Interventions constitute practices when they are proven techniques intended to benefit the patient. Likewise interventions aimed at increasing generalizable knowledge constitute research.

What about those interventions that do not happily fit into the dichotomy between research and practice? As Capron (1986) notes:

“[Not] every use of unproven techniques amounts to research; the way in which the technique is used must be designed so as to permit generalizable knowledge to be gained — something more than just seeing what results from using the technique in one particular instance. Conversely, not every intervention carried out with the intent of benefiting the patient (or other subject of the intervention) thereby qualifies as an instance of practice; prior study of the technique must have provided adequate information about it, to give rise to a reasonable basis for believing that it will achieve the intended result.”

Nonvalidated Practice

To cover interventions that do not fit the research/practice distinction, practice must be further distinguished from nonvalidated practice. Like the term “therapeutic experiment,” “nonvalidated practice” connotes that the selected course of action aims to benefit the patient. Unlike the term “therapeutic experiment,” it explicitly encompasses prevention along with diagnosis and therapy. Additionally, the term “therapeutic experiment” seems to be at war with itself by implying that an intervention can be primarily intended to benefit the patient while also being primarily directed at generating sound scientific information. The term “nonvalidated practice” clearly connotes that the intervention's primary purpose is to benefit the patient while making clear that it has not been shown to be safe and efficacious.

An important difference between the therapeutic/nontherapeutic experimentation distinction and the concept of nonvalidated practice is that whereas the former limits attention to new and innovative interventions, the latter encompasses some interventions that are neither new nor innovative. Robert Levine, who coined the term “nonvalidated practice,” writes (Capron, 1986):

“A practice might be nonvalidated because it is new; i.e., it has not been tested sufficiently often or sufficiently well to permit a satisfactory prediction of its safety or efficacy in a patient population. An equally common way for a practice to merit the designation “nonvalidated” is that in the course of its use in the practice of medicine there arises some legitimate cause to question previously held assumptions about its safety or efficacy.”

Whether an intervention's safety and efficacy are at issue because it is new or because clinical experience has posed questions about its value, the central moral issue is the same. Patients are subjected to a course of treatment for which there is no well-founded basis from which to assess its risks and benefits. Therefore, a well-founded basis upon which to determine whether the benefits sufficiently outweigh the risks is lacking.

The Ethics of Research on Humans

In order to appreciate the moral concerns raised by nonvalidated practice, it is first necessary to address the general topic of research on human beings.

“The basic ethical problem … is the use of one person by another to gather knowledge or other benefits that may be only partly good, if at all, for the first person; that is to say, the … subject is not simply a means but is in danger of being treated as a mere token to be manipulated for research purposes” (Capron, 1978).

These remarks indicate that the major moral issue posed consists in the fact that research on humans is not intended to benefit the research subjects themselves directly. Instead, it is intended to provide knowledge that will be added to the stock of information possessed by medical science and in the long run will benefit humankind in general. Whether this is an important moral issue depends upon how the proper relationship between the well-being of distinct individuals and the social whole to which they belong is conceived.

“Those who regard individuals as parts of a collective whole which allocates rights and protects them, can justify a wide scope for experimentation that serves to advance the interests of the collectivity. Conversely, for those who emphasize the inviolability of human persons and the moral obligation not to treat them simply as parts related to a social whole, any human experimentation whose purpose is to benefit persons other than the subjects of the experimentation requires very strong justifying reasons” (Capron, 1978).

From an essentially utilitarian perspective in which the common good is conceived as greater than the good of particular individuals, the fact that a program of research sacrifices the interests of particular individuals for the greater good of society need not be problematic. According to this outlook, a program of research is justified solely to the degree that it poses a genuine prospect of general benefit and that the probability and level of benefit are sufficiently high to outweigh any losses to individual research subjects. The important consideration here is that research promises a net gain even if that gain does not accrue to its individual subjects. Accordingly, the worry that research always poses the danger of using individuals as mere resources for the good of others has little significance from this utilitarian viewpoint.

From a perspective that takes individual dignity to be paramount, an essentially Kantian perspective, what can be done to promote the general good is clearly limited. Although this perspective recognizes the importance of aiming to generate the greatest good for the greatest number, it insists that this goal must be sought only in ways that are not demeaning or dehumanizing. This means that pursuit of the common good must take place only in those ways that do not reduce individuals to mere resources for the good of others. Adhering to this moral imperative necessarily limits what can be done in the name of even clearly beneficial medical research.

The history of atrocities perpetrated against individuals in the name of research, especially those committed by the Nazis, is well known and need not be reiterated here. An important consequence of those atrocities, though, is that the Kantian perspective has become dominant in Western thinking about research on human subjects, and the protection of those who serve as research subjects has become a paramount priority. Numerous guidelines and codes of ethics for human experimentation have been formulated since World War II to promote limits on research that will maintain and respect the dignity of human subjects. (Perhaps the most famous is the Nuremberg Code, one of the first formal codes of ethics for research on humans. This code was deliberately developed in response to the horrors of Nazi research on human subjects.)

Although the differences between various codes are numerous, they generally converge on several requirements for ethically sound human experimentation (Capron, 1986). First, research on humans must be based upon prior laboratory research and research on animals as well as upon established scientific fact “so that the point under inquiry is well focused and has been advanced as far as possible by nonhuman means” (Capron, 1986). Second, research on humans should use tests and means of observation that are reasonably believed to be able to provide the information being sought by the research. Methods that are not suited for providing the knowledge sought are pointless and rob the research of its scientific value. Third, research should be conducted only by persons with the relevant scientific expertise. Fourth, “all foreseeable risks and reasonably probable benefits, to the subject of the investigation and to science or more broadly to society, must be carefully assessed, and … the comparison of those projected risks and benefits must indicate that the latter clearly outweighs the former. Moreover, the probable benefits must not be obtainable through other less risky means” (Capron, 1986). Fifth, participation in research should be based on informed and voluntary consent. Sixth, “participation by a subject in an experiment should be halted immediately if the subject finds continued participation undesirable or a prudent investigator has cause to believe that the experiment is likely to result in injury, disability, or death to the subject” (Capron, 1986). Conforming to conditions of this sort probably does limit the pace and extent of medical progress, but society's insistence on them is its way of saying that the only medical progress truly worth having must be consistent with a high level of respect for human dignity. Of these requirements, the requirement to obtain informed and voluntary consent from research subjects is widely regarded as one of the most important protections. Before discussing this requirement, however, a second important ethical issue regarding research on humans will be discussed.

This issue concerns the oldest norms of traditional medical morality: beneficence and nonmaleficence, the duty to benefit the patient and the duty to avoid harming the patient. The purpose of research is to provide new knowledge, and although research has benefits, these are not aimed primarily at the research subject. This means that any harm the subject suffers is justified, if at all, not by benefits to the subject but by benefits to others. So when the researcher is a physician and the subject is a patient, then the physician/researcher is not discharging his duty to concern himself primarily with the patient's well-being. If the patient benefits, these benefits are incidental to the aim of the research, and if the patient is harmed, that harm cannot be justified on the grounds that it was an unavoidable concomitant to benefiting the patient. Indeed, the intervention in question, if it is genuinely an instance of research, cannot be based upon well-founded beliefs about what harms or benefits it will pose for the patient. The testing needed to have such beliefs is itself the substance of the research. How then can physicians or researchers reasonably intend their actions to benefit or not harm their patients as research subjects? Clinicians may hope for this but cannot reasonably intend it. Can the aims of research be reconciled with the traditional moral obligations of physicians? Is the researcher/physician in an untenable position?

Answering these questions leads back to the requirement of informed and voluntary consent. The leading idea behind this requirement is that if the subject of an experiment agrees to participate in the research, then what happens to him during and because of the experiment is a product of his own decision. It is not something that is imposed on him, but rather, in a very real sense, something that he elects to have done to him. Because his autonomy is thus respected, he is not made a mere resource for the benefit of others. Although he may suffer harm for the benefit of others, he does so of his own volition, as a result of the exercise of his own autonomy, rather than as a result of having his autonomy limited or diminished. An extremely important difficulty here is knowing when consent is genuine, when it is truly voluntary and not the product of coercion. Not all sources of coercion are as obvious and easy to recognize as physical violence. A subject may be coerced by fear — by the fear that there is no other recourse for treatment of the ailment (even when it is known that any benefit the intervention will bring is incidental to its purpose), by the fear that nonconsent will alienate the physician on whom the subject depends for treatment, or even by the fear of disapproval of others. This sort of coercion, if it truly ranks as such, is often difficult to detect and remedy.

In any case, the mere absence of coercion is insufficient. Voluntariness can also be undermined by ignorance and misunderstanding. One's consent to something is not autonomous unless one understands what one is consenting to. A research subject must be given information sufficient to arrive at an intelligent opinion concerning whether to participate in the research or to continue to participate after research has commenced. In the absence of this information, the subject may agree to something that is not what he believes it to be, and his consent will not reflect his own values and priorities.

Two difficulties arise here. The first and probably less worrisome is the issue of what constitutes sufficient relevant information for the subject to arrive at an intelligent opinion. Although a subject need not be given all the information a researcher has, how much should be provided, and what can be omitted without compromising the validity of the subject's consent? The second difficulty lies in knowing whether the subject is competent to understand the information given and to render an intelligent opinion based upon it. This renders research on children and the mentally handicapped especially suspect. It also makes research suspect when its subjects include patients whose ailments compromise their competence. In any case, efforts must be made to ensure that sufficient relevant information is given and that the subject is sufficiently competent to process it. These are matters of judgment that probably cannot be made with absolute precision and certainty, but rigorous efforts must be made in good faith to prevent research on humans from involving gross violations of human dignity.

How does the requirement of informed and voluntary consent speak to the tension between the goals of research and the traditional moral norms of medicine? Being concerned about the well-being of the patient is often crucial to prevent him from being dehumanized and treated as a mere resource. In recent years, Western societies have tended to regard respect for a person's autonomy as the most crucial aspect of respecting the individual's humanity. That is, the Kantian notion that what makes humans morally special is their capacity for autonomy has become widely accepted, as expressed in a strong and widespread rejection of paternalism. To treat a person paternalistically is to limit the person's liberty for his own good. Although many paternalistic measures would benefit those subjected to them, Western societies, especially the United States, have been exceedingly reluctant to take such measures. In medicine, this respect for individual autonomy has taken the form of limiting the traditional moral obligations of physicians. Increasingly, advocates of patients' rights have persuasively argued that there is no valid justification for not allowing patients to do as they please as long as they do not harm innocent others. Acceptance of this Kantian view has meant that physicians are expected to discharge their duties of beneficence and nonmaleficence within the limits set by the autonomy of the patient. Indeed, this is why it is widely accepted that before any intervention is undertaken, no matter how well validated, informed and voluntary consent must be secured. The patient who voluntarily and with adequate information chooses to become a research subject releases the physician from the responsibility of making the patient's well-being the paramount concern. The physician can then take on the role of researcher and attend to the goal that justifies research in the first place — benefit to humanity in general. Of course, this does not mean that anything goes. The physician is still obligated to pursue the ends of the research with as little risk to the subject as possible.

The Ethics of Nonvalidated Practices

“What should one's ethical response be to activities that fall within this region of nonvalidated practices? Since they are not carried out according to a research plan (or “protocol”), they cannot be justified by their benefit to science; since they lack a valid basis in science, they cannot be defended for the benefit they will provide to those with whom they are used. The solution, of course, is to employ the procedure in the context of an appropriate research plan or to substitute a proven treatment. Yet this will not always be possible, especially when attempting to cure problems (such as life-threatening conditions) for which no satisfactory therapy exists and when the need to intervene is too urgent to allow a formal research plan to be adopted” (Capron, 1986).

A nonvalidated practice thus lacks the justification available to ethically sound research, i.e., providing some benefit to science and thereby to society as a whole. It also lacks the justification available to practice, benefit to the patient, precisely because sufficient research to show that benefit can be reasonably expected has not occurred. Is nonvalidated practice therefore always unjustifiable? As the quotation from Capron above notes, this is not necessarily the case. In circumstances where life itself is at stake or where severe debilitation is highly probable and where validated practices have failed or cannot be deployed, nonvalidated practices may be morally acceptable. Of course, the requirement of informed consent must be met here as well. Indeed, given the desperate nature of the circumstances, informed consent may well be more important here than in the circumstances of research or validated practice. The issue is why a patient who is in desperate circumstances, who is well apprised of the uncertainties of the proposed intervention, and who is competent to make his own choices should be deprived of the opportunity to have his life or health saved, however uncertain that opportunity might be. Of course, the great difficulty here is knowing whether the patient in desperate circumstances really is well apprised of the relevant information and is competent to process it.

Can an intervention be part of a research plan and be a nonvalidated practice at the same time? Clearly it can in certain circumstances. An intervention may be part of a research plan and thereby intended to provide generalizable knowledge and yet be used outside the plan on a patient who is not one of the research subjects if that patient is in a medical crisis where validated practices are not available or effective. It is important to note that this sort of use of an intervention as a nonvalidated practice occurs outside the research plan and on someone who is not a research subject. Therefore, it has a very different sort of purpose than use of the same intervention within a research plan. Whereas the latter aims at providing generalizable medical knowledge, with benefits accruing to research subjects being incidental to this purpose, the former use aims at providing a possibility of benefit to a patient for whom all other options are closed or have been exhausted, and any information thereby gained is incidental to this purpose. In this kind of circumstance, these different primary goals can be kept separate even when the physician conducting the research is also the physician conducting the nonvalidated practice because the intervention is applied to different persons — a research subject in the former case and a patient in the latter case.

Can an intervention be both a nonvalidated practice and part of a research plan in a meaningfully different situation? Can one and the same intervention when applied to one and the same person be both nonvalidated practice and research at one and the same time? This notion is highly problematic. The limits that research must observe to be scientifically sound might not be compatible with providing maximal benefit to the patient. When this is the case, if the physician observes the conditions required of legitimate research, he fails to do what is required of him as a physician. He does his duty as a researcher but not as a physician. If he does his duty as a physician and violates the canons of legitimate research, he fails to do his duty as a researcher. His situation is thus ethically untenable. The ethical obligations of a physician with respect to nonvalidated practice are the same as those with respect to validated practice: to benefit and avoid harm to the patient. These are very different from, and may be in irresolvable conflict with, those of ethically sound research.

URL: https://www.sciencedirect.com/science/article/pii/B9780750692526500175

TRANSFER FACTOR THERAPY IN DISSEMINATED NEOPLASMS

J. SILVA M.D., ... G. MORLEY M.D., in Transfer Factor, 1976

METHODS

Patient Selection

Only patients with extensive disseminated neoplasms (DNs) refractory to previous therapies were asked to volunteer, according to the guidelines of our Committee of Human Experimentation. Nine patients with malignant melanoma, and one patient each with breast carcinoma, bronchogenic carcinoma, vulvar squamous carcinoma, ovarian carcinoma, and disseminated thymoma to the pleura with secondary mucocutaneous candidiasis (MC), were studied. Transfer factor (TF) was administered in doses of 1–3 units at monthly intervals on 4 to 6 occasions.

Studies of Cell-Mediated Immunity

CMI was studied by either skin tests by the Mantoux technique or lymphocyte transformations (LT) just prior to TF administrations. Lymphocytes were isolated from venous blood by sedimentation, washed, and cultured at 10⁵ cells/microtiter well in triplicate with 20% autologous or homologous normal serum and 80% RPMI 1640 media. Cultures were incubated for five days at 37°C in CO2 with the following antigens: Phytohemagglutinin-P (PHA), 7.5–75 µg/ml; Concanavalin A (Con A), 5 µg/ml; Pokeweed mitogen (PWM), 1:10 dilution; Streptokinase-Streptodornase (SK-SD), 20 units SK/ml; Candida, 1:10 dilution; and PPD, 50 units. Melanoma antigen was derived according to the method of Bull et al.¹: a melanoma antigen concentration of approximately 5 µg/ml of tumor protein was employed in the microtiter wells.

Cultures with no antigens were similarly incubated in triplicate. Cultures were pulsed with 1 µCi of ³H-thymidine on the fifth day and harvested with an Otto-Hiller Precipitator. The filter was counted on a Beckman 25–100C scintillation counter for 10 minutes. LT data are expressed as the mean ratio of scintillation counts per minute per 10⁵ lymphocytes of stimulated to unstimulated cultures.

In this system, counts of unstimulated cultures are usually 100–250. Our ranges of stimulation indices (S.I.) for normal lymphocytes are: PHA-P, 50 to 100; Con A, 30 to 60; Candida, 10 to 50; SK-SD, 5 to 40; PPD, 5 to 30; and melanoma <2.
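
As a minimal illustration of the stimulation-index (S.I.) arithmetic described above, the sketch below computes an S.I. as the mean counts per minute of antigen-stimulated triplicate cultures divided by the mean of unstimulated triplicates. This is not the authors' analysis code; all counts are hypothetical values chosen to fall within the ranges quoted above.

```python
# Hedged sketch of the S.I. calculation: mean cpm of stimulated cultures
# divided by mean cpm of unstimulated cultures, each assayed in triplicate.
from statistics import mean

def stimulation_index(stimulated_cpm, unstimulated_cpm):
    """Mean stimulated counts / mean unstimulated counts (cpm per 10^5 lymphocytes)."""
    return mean(stimulated_cpm) / mean(unstimulated_cpm)

# Hypothetical triplicate counts; background falls in the usual 100-250 cpm range.
unstimulated = [180, 210, 195]
pha_stimulated = [14200, 13800, 14650]  # PHA-P wells

si = stimulation_index(pha_stimulated, unstimulated)
print(f"PHA-P S.I. = {si:.1f}")  # about 73, within the normal 50-100 PHA-P range
```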

Preparation of Transfer Factor

Four types of potential donors for transfer factor were sought for these 14 DN patients: healthy family members; normal, unrelated donors with positive skin tests as TF markers; patients cured of similar DNs (>5 years); and normal donors whose characteristics, such as halo nevi or black skin, may have some immunological significance in melanoma.

Blood was drawn into an ACD pack, washed with 0.87% NH4Cl, and once with modified Ringer's solution to obtain a leukocyte pellet (10–12 cc volume containing 10⁹ leukocytes with 3–6 × 10⁸ lymphocytes). This suspension was cooled and sonicated for 5 minutes. TF was then extracted from the cellular suspension by vacuum dialysis for 18 hours with a ProDifilt Unit (Bio-Molecular Dynamics, Beaverton, Oregon). The resultant dialysate (8–10 cc) was filtered (0.22 μm), cultured for bacteria, and assayed for endotoxin. TF was administered subcutaneously over the abdomen or paraspinous areas, usually within 2 months of preparation.

URL: https://www.sciencedirect.com/science/article/pii/B9780120646500500611

Ethical Issues in Biomaterials and Medical Devices

Taufiek Rajab, ... Richard W. Bianco, in Biomaterials Science (Third Edition), 2013

Human Research Subjects

Origins of human research subjects' protection are found in the Nuremberg Code, which outlined standards developed for the Nuremberg Military Tribunal against which the human experimentation conducted by Nazi Germany was judged. The Nuremberg Code outlines many of the guiding principles inherent in the ethical conduct of research involving human subjects. These include freely given informed consent without coercion, as well as the option for the human subject to withdraw at any time from the study. In 1964, the 18th World Medical Assembly in Helsinki, Finland adopted the Declaration of Helsinki, which made recommendations similar to those found in the Nuremberg Code. In the United States, the NIH issued Policies for the Protection of Human Subjects in 1966 that were based on the Declaration of Helsinki. In 1974, the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research was established; it issued recommendations four years later in a report titled the Belmont Report. This report identifies both basic ethical principles of research involving human subjects and guidelines for conducting research in accordance with these principles. The main tenets defined by the Belmont Report are firstly respect for persons (recognition of the autonomy and personal dignity of individuals, as well as special protection of those persons with diminished autonomy), secondly beneficence (the obligation to maximize anticipated benefits), thirdly non-maleficence (minimizing possible risks of harm), and finally justice (fair distribution of the benefits and burdens of research). These tenets continue to form the basis of all acceptable conduct of research involving human subjects. Based on the Belmont Report, the Department of Health and Human Services (DHHS) codified regulations relating to the protection of human subjects, and in 1991 the Federal Policy for the Protection of Human Subjects (or “Common Rule”) was adopted. This policy was designed to unify the protection system for human subjects in all relevant federal agencies that conduct, support, or otherwise regulate human subjects' research. Regulatory compliance is monitored through routine site visits and audits conducted by federal officials, as well as through the establishment of Institutional Review Boards (IRBs).

Any biomaterial investigator involved in human subject research will be regulated by an IRB. The overarching purpose of an IRB is to ensure that all research is conducted with appropriate safeguards for human subjects, as mandated by the federal regulations. An IRB is a group made up of at least five individuals with diverse experience and expertise to professionally qualify the group to adequately review and monitor research activities involving human subjects that are commonly conducted by the institution. The IRB has the legal authority to approve, disapprove, or require modifications in the experimental design of research activities involving human subjects at the institution with which it is affiliated. Although each institution may have additional committees that review proposed research involving human subjects, no research may be initiated that has not been approved by the IRB. While the IRB overall must possess the scientific expertise to review a specific research design, the membership of the IRB must include at least one member with interests that are primarily scientific, and one member with interests that are primarily nonscientific. Members of an IRB should make every effort to avoid gender or race bias, and the member from the community at large should be a suitable representative. The number of members of an IRB can exceed the prescribed number of five, but it should not become so large that it can no longer function effectively. Investigators may be members of an IRB, but they may not participate in the review and approval of any project in which a potential conflict of interest could arise.

The basic IRB review of a submitted protocol focuses on the following components: a risk–benefit analysis; adequacy of informed consent; appropriate selection of subjects; ongoing monitoring of subjects; mechanisms to ensure confidentiality; examination of additional safeguards; evaluation of incentives for participation; and plans for continuing review. The risk–benefit analysis aims to determine whether the risks to the subject are reasonable in relation to the benefits to the subject. Risk analysis is a formal procedure that includes hazard identification, evaluation of failure modes, risk estimation, risk evaluation, risk control, and continuous risk review. The risk analysis is based on the evaluated biomaterial or medical device, as well as the manufacturer's claims attributed to it. It also takes into account whether the device is novel or just an incremental modification of an existing device. The risks associated with the research are also distinguished from the risks of therapies that a subject would face even if not participating in the research. It is important to determine that the probability and degree of harm associated with the research have been minimized as much as possible. When reviewing protocols involving medical devices, both the risks of the device and the risks associated with the procedure for using the device (e.g., the surgery to implant a heart valve) must be taken into consideration. Sponsors should make the initial risk assessment along with the study proposal. If the device study presents significant risk, the sponsor must submit an Investigational Device Exemption (IDE) to the FDA for approval. The sponsor must communicate the results of the IDE to the IRB. If the IRB finds that a device study presents non-significant risks only, the study may proceed without submission of an IDE.
Adequacy of informed consent is another important consideration for the IRB. Human research subjects also need to be provided with an accurate and fair description of the anticipated risks, benefits, and possible discomforts. Federal regulations also require that the following information be provided to each subject as part of informed consent: a statement that the study involves research; explanations of the purposes; expected duration; descriptions of any planned procedures (including identification of procedures that are experimental); reasonably anticipated risks or discomforts; benefits to the subjects or to others; a disclosure of alternative treatments or procedures if they are advantageous to the subject; a statement describing confidentiality of records; contact information; and a statement that participation is voluntary and that refusal to participate will not result in the loss of benefits to which a subject is entitled. For research involving more than minimal risk there should be explanations of compensation or medical treatments available if injury occurs as a result of the research. The IRB also needs to ensure that the informed consent document presents the information to prospective subjects in language that can be easily understood, even by those with no medical background. Sometimes it is also appropriate that subjects be re-educated and consented periodically. The criteria for selection of subjects should take into consideration the requirements of the scientific design of the study, the susceptibility to risk, the potential benefits, and whether the selection of subjects from a proposed subject population is equitable (i.e., depending on the benefit of the study to the population as a whole, rather than disproportionately favoring one segment of the population). IRBs should also evaluate whether adequate precautions are taken to safeguard the privacy of information linked to individuals that will be recorded as part of the study. Subjects should be informed that federal officials have the right to inspect research records as part of their regulatory oversight. The initial IRB review of a protocol also includes an assessment of how often a research project should be re-evaluated by the IRB. Repeated monitoring is crucial, as preliminary data may indicate that the research design or the information presented to subjects must be changed, or even that the study should be terminated before the scheduled end date. Only after research has begun can the preliminary data be used to estimate the actual risk–benefit ratio.

Although institutions are ultimately responsible for ensuring that all regulatory requirements relating to human subject research are met, IRBs and investigators also bear part of that responsibility and can be held accountable. At the level of the investigator, the most likely sources of noncompliance include failure to submit protocols, or changes to approved protocols, to the IRB in a timely fashion, and problems with obtaining informed consent. Often the IRB can resolve these deficiencies without jeopardizing the safety of the research subjects. However, research involving human subjects conducted by an investigator who has avoided or ignored the IRB cannot be allowed to proceed. Once such research is discovered, the IRB and the institution must halt it and take measures to correct any regulatory infractions. The fitness of the investigator to engage in research involving human subjects should also be evaluated. Noncompliance with regulations can also occur at the level of the IRB. This may arise from inadequate review of research protocols, failure to conduct continuing review of research with a frequency commensurate with the degree of risk to subjects, failure to maintain adequate records, and consistently holding meetings without the majority of members present. Failure of an IRB to perform its responsibilities in accordance with DHHS regulations can be grounds for suspension or withdrawal of the institutional assurance. Finally, noncompliance at the institutional level is usually the result of a more systemic failure by the institution to meet its responsibilities. Institutions must provide appropriate staffing and support of an IRB so that it can function in accordance with the regulations, as well as ensure that investigators meet their obligations to the IRB as an integral part of their research using human subjects.

URL: https://www.sciencedirect.com/science/article/pii/B9780080877808001352

Ethical Practice of Research Involving Humans

E. Smith, Z. Master, in Reference Module in Biomedical Sciences, 2014

Historical Cases of Research Involving Humans

In the twentieth century, unethical research projects involving humans, some bordering on atrocities, raised public awareness, stimulated debate, and impelled policy development. Perhaps the most inhumane and notorious types of human experimentation done in the name of science occurred during World War II (WWII), conducted by Nazi doctors and scientists on ‘Jews, Gypsies, and Slavs’ who were depicted or categorized as pathological subjects (Weindling, 2008: 18). These experiments included the study of the human body's resistance to low pressure, hypothermia, malaria, mustard gas, typhus, and poison (Carlson, 2006; Tyson, 2000). In 1945–46, the international military tribunal charged 23 Nazi doctors and scientists with war crimes and crimes against humanity; many were convicted and some sentenced to death during the Nuremberg trials. As a result, the Nuremberg Code was drafted to provide international guidance regarding the ethics of medical (and specifically clinical) research. This international scrutiny and condemnation of the Nazi experiments is considered by many to be a seminal moment for the emergence of research ethics as a field. Despite the obvious unethical and criminal aspects of these experiments, the results were published because of their putative important contributions to science, and because no real objection to publication was made for fear of opposing the regime (Loewy, 2002). While the Nuremberg Code is often cited as the historical basis of research ethics, it was neither systematically implemented nor internationally adopted in the decades following WWII. US physicians, for instance, often considered the Nazi atrocities to be more of a political aberration than a matter of research ethics. Interestingly, researchers from democratic countries widely believed that such brutality could not occur in their own countries (Faden et al., 2003).

What we call today the ‘Nuremberg Code’ is not part of any legislation or international agreement; it is the annex to the verdict drafted by the judges of the war trials. Even after the Nazi trials and the publication of the Nuremberg Code, ethical failures in research persisted. In 1966, Henry Beecher published a landmark study identifying 22 ethically questionable research projects in which known effective treatment was withheld from participants, harms (including death) were deemed acceptable consequences of research, and a lack of informed consent from the participants was the norm (Beecher, 1966). Interestingly, these studies were conducted in reputable research institutions, funded by major government bodies and scientific societies, and published in well-respected scientific journals (Emanuel et al., 2003). Beecher demonstrated that even in democratic countries, unethical research was still taking place.

One well-known case included in Beecher's study occurred in 1963, when 22 debilitated patients at Brooklyn's Jewish Chronic Disease Hospital had live cancer cells injected subcutaneously. The purpose of the study was to better understand the effects of cancer on immune deficiency and response (Arras, 2008). The lead investigator, Chester M. Southam, believed that informed consent was not necessary because such nontherapeutic immunological research was routinely performed and elderly patients would not care that they had been injected with live cancer cells. Southam was convinced that no harm would befall the patients since he had conducted similar experiments on more than 600 participants (Annas and Grodin, 2008; Luna and Macklin, 2009). Although the risk may have been minute, Southam was, tellingly, unwilling to expose himself or his colleagues to the risk of injection with live cancer cells (Langer, 1964). This case highlights not only the need for informed consent in research but, more importantly, the need for independent ethics oversight of research on human subjects.

The Tuskegee study, initiated in 1932, is one of the most notable and discriminatory research projects undertaken in the United States. In the rural American south, during the 1920s and 1930s, there were high rates of untreated syphilis among impoverished African-Americans (Rockwell et al., 1964). A longitudinal study was conducted to better understand the effects of untreated syphilis. However, Public Health Service (PHS) physicians never informed subjects of the presence of syphilis or of its transmissibility through sexual intercourse or from mother to fetus (Thomas and Quinn, 1991). Subjects were actually deceived by physicians, who disguised the true purpose of medical tests and presented them as free treatment (Macrina, 2005). What is particularly striking about this study is that it lasted until 1973, some 20 years after the development of penicillin (a very effective treatment for syphilis), which was withheld from the research subjects. This case is a particularly stark example of racism and deception in research, a failure to respect individual autonomy, and the unjustified withholding of effective treatment even when it had become available.

Research on sexually transmitted diseases (STDs) conducted by Dr Cutler in Guatemala (1946–48) is another violation of research ethics, one exposed more recently by the media and then analyzed by the US Presidential Commission for the Study of Bioethical Issues. Researchers wanted to understand whether chemical prophylaxis could prevent certain STDs and, if not, what dose of penicillin would be appropriate to cure them (Reverby, 2011). To do so, PHS physicians and researchers infected prisoners, soldiers, patients from a psychiatric hospital, and commercial sex workers with STDs, more specifically syphilis, gonorrhea, and chancroid (Presidential Commission for the Study of Bioethical Issues, 2011). Among its several violations of research norms, this project exemplifies the exploitation of vulnerable and poorly informed individuals.

These (and other) unethical practices prompted the development of research ethics norms and best practices. However, it is also likely that these failures adversely affected specific publics' trust in the research enterprise (Master and Resnik, 2011); this is particularly true of minority communities. For example, African-Americans have less trust in medical research partly due to knowledge of cases of unethical research (e.g., Tuskegee), which may explain their underrepresentation in clinical research (Corbie-Smith et al., 1999; Shavers et al., 2000). Similarly, the US Latino population is also underrepresented in research due to a lack of trust in research, but also because of language barriers and immigration status (Diaz, 2005). Yet scholarly work on minority recruitment and retention has been finding new ways to increase participation of underrepresented groups using methods that include identification of specific targeted participants, community involvement, incentives, logistical modifications, and cultural adaptation (Yancey et al., 2006).

URL: https://www.sciencedirect.com/science/article/pii/B9780128012383001781

Law and Bioethics in the United States

Anna Gotlib, in Reference Module in Biomedical Sciences, 2017

Human and Animal Research: Codes, Declarations, and Regulations

The emergence of bioethics can be understood in several stages. First, concerns about unethical experimentation on human beings and animals, some predating the development of the academic, interdisciplinary field of bioethics itself, have been expressed through a number of codes and regulations, most of which, while designed to guide actions, are nevertheless not enforceable. For example, the Nuremberg Code (1947) was a set of principles for human experimentation that responded to the Nazi medical atrocities that occurred during the Second World War. It introduced into biomedical research the notions of informed consent, absence of coercion, properly formulated scientific goals, and beneficence toward human participants in experiments. A number of years later, the Declaration of Helsinki (adopted in 1964 and revised in 1975), developed by the World Medical Association, was put forth as a set of ethical principles for human experimentation to standardize such practices as respect for the individual and the investigator's duty to the patient and/or volunteer. In 1979, the Belmont Report, “Ethical Principles and Guidelines for the Protection of Human Subjects of Research,” issued by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research (which was the first public organization within the United States to address and shape biomedical policy, motivated in part by the ethical violations of the Tuskegee Syphilis Study), put forth three fundamental principles to be considered in any human subject research: (1) respect for persons, including the protection of the individual's autonomy, and making individual respect (and thus informed consent) central to the issue; (2) beneficence, which calls for the maximization of the benefits to the human subjects in research, while minimizing the risks; and (3) justice, which forbids the exploitation of human test subjects and requires fairness in the experimentation process. Furthermore, in 1993, the Council for International Organizations of Medical Sciences (CIOMS), an international NGO, promulgated the ‘International Ethical Guidelines for Biomedical Research Involving Human Subjects’ (or the CIOMS Guidelines), addressing human experimentation and issues of informed consent, participant recruitment, standards for review, and so on (Gordon, 2012). CIOMS also put forth the ‘International Guiding Principles for Biomedical Research Involving Animals’ (Kuhse and Singer, 2009).

URL: https://www.sciencedirect.com/science/article/pii/B9780128012383001768

The History of Bioethics

M.L.T. Stevens, in Reference Module in Biomedical Sciences, 2014

Institutionalizing: Three Examples from History

Those who would become bioethicists considered the biomedical agenda and the biological revolution in a society transformed by these dynamics. Would society correct power imbalances and social injustices identified by critics, or would it absorb denunciations and continue undeterred? The role that bioethics came to play in mediating that outcome is a question for continuing historical analysis. The question should be understood not only as what role bioethicists intended for themselves, but also as how and why bioethics was selected for institutionalization by biomedical power structures and society more generally. Social scientific research and interpretation to date suggest that much of the critical impulse giving birth to bioethics was absorbed as the field institutionalized – though not without altering the political culture into which it was infused. Additionally, calls for a more ‘critical’ bioethics have grown to modify that absorption (see below). Windows on the question of how bioethics developed to negotiate the crevices between biomedical hegemony and ‘outside’ civic authority during bioethics' institutionalizing decade of the 1970s are provided through a consideration of three developments: the early experiences of the first bioethics institute, the Hastings Center; the introduction of nonphysician decision makers into the contested clinical space of neonatal intensive care; and finally, the political-cultural dynamics of early bioethics commission work.

The institute that would later become the Hastings Center was launched in New York in 1969 by philosopher Daniel Callahan and psychiatrist Willard Gaylin, and it was important to its founders that it be independent. They rejected university affiliation and, initially, would seek funding only from foundations, the government, or philanthropists in order to avoid corporate influence. Its first financing was a loan from Callahan's mother. By the end of the first year they had won grants from John D. Rockefeller III, Elizabeth K. Dollard, the Rockefeller Foundation, and the National Endowment for the Humanities.

Callahan's 1971 assessment suggests how the novel idea of creating an institute to investigate the “ethical impact of the biological revolution” was in demand: “We receive an average of 10 inquiries a day, requesting information on the Institute itself, or one of our programs. The greatest stimulus for this has come from a number of news stories about the Institute…” (quoted in Stevens, 2000: p. 48 and p. 54, respectively). By 1974, the Hastings Center (HC) had achieved stability after doubling its budget every year, operating on a $1 million budget by 1977. But the Center came to realize that even noncorporate funding was problematic. Funding agencies typically wanted specific questions addressed and problems solved. As a contact at the US Department of Health, Education, and Welfare instructed in 1971, “Unless Institute endeavors can be focused not only on analysis of problems, but also on their resolution, public administrators…are not going to be very interested in the Institute.” But as HC fellows understood, this would leave ‘troublesome underlying problems’ unexamined (Stevens, 2000: pp. 65–66). They came to understand this more directly in 1975 when the National Institutes of Health (NIH) rejected an HC application to renew a grant to study the ethical, legal, and public policy implications of genetic technologies and their human applications. The application was rejected, in part, for having an ‘antitechnological’ bias and exhibiting a prioritization of individual rights over the ‘greatest good for the greatest number.’ The NIH also criticized the HC for not giving out guidelines to assist screening agencies (Stevens, 2000: pp. 69–70).

Its early years saw the Center struggling over whether, and to what extent, it should be activist. The meaning of ‘activist’ ranged from holding press conferences to expose ethical abuses, to proposing legislation, to recommending guidelines, to simply deliberating about ethical abuses and possible solutions. Founders were particularly challenged by an increasing number of demands to expose the abuses they encountered and to go further in promoting their various recommendations. Critics urged, “Why don't you get out of the ivory tower and into the streets?” “You people should quit talking and get some laws passed” (quoted in Stevens, 2000: p. 57). The Center was being called upon to function as an advocate. But the HC also had to contend with charges of being ‘antiscience,’ or ‘antitechnological,’ or ‘antimedical.’ In 1971, for example, the clinical director of the National Institute of Allergy and Infectious Diseases told the Center that its members were ‘negative’ and should have “spent more time…trying to find out how society and the individual might be improved” (quoted in Stevens, 2000: p. 60).

Ultimately, an adversarial role was rejected. The Center's modus operandi became to accept grant money to create ‘guidelines’ for specific projects. It would proffer suggestions for how to use a technology presumed to be going forward. So, for example, while it would publish guidelines for how to proceed with mass genetic screening, the Center would not join a lawsuit brought by a number of ‘black groups’ challenging mandatory screening of school children for sickle-cell anemia. Requests to participate in legal actions were considered problematic and discouraged. Another strategy, understood at the time to be ‘establishmentarian,’ was the decision to counsel medical and scientific professionals rather than to address the general public. Such strategies were undertaken to avoid a ‘factionalizing’ of HC fellows and to cultivate a ‘nonideological’ reputation. Together with the effects of funding constraints, these strategies fostered discourse and methods that supported a process-oriented type of ethics management, rather than substantive challenges to the sociopolitical sources and function of specific technologies or the legitimacy of biomedical power and authority more generally (Stevens, 2000: pp. 56–59).

The second historic example concerns the public debate that ensued in the 1970s over when to start or end heroic measures for premature infants or newborns with disabilities. How to cope with the tragedy of severely damaged or suffering newborns was not a consideration unique to the 1970s, nor, in fact, has it ended (see, for example, Wesolowska, 2013). That decade did, however, see unprecedented public exposure of clinical practice and turmoil on the subject. The case of ‘the Johns Hopkins baby,’ in particular, ignited public concern. In 1969, the parents of a baby with Down syndrome refused to give permission for surgery to repair a correctible intestinal blockage. Placed in the corner of a nursery, the infant starved to death over 15 days. Although the case was, in the opinion of a number of Johns Hopkins physicians, not so unusual, it deeply disturbed a number of hospital staff who, according to historian David Rothman, “took the issue outside the hospital” (see ‘No One to Trust,’ Chapter 5 in Rothman, 1991). A film about the incident garnered such moral indignation and bad press for the hospital that Johns Hopkins' directors responded by announcing that they would create an interdisciplinary review board to advise on difficult cases (Rothman, 1991: p. 193).

Some physicians came to feel that new decision-making strategies were in order. The way one doctor viewed it, “Here we are really playing God and we need all the help we can get. Apart from giving parents a voice – or at least a hearing – we should enlist the support of clergymen, lawyers, sociologists, psychologists, and plain citizens who are not expert at anything, but can just contribute their common sense and wisdom” (Rothman, 1991: p. 196). But, at this time, authority to call for such assistance emanated from the physician. It was not a matter of parental prerogative to insist upon it.

By 1973, however, developments unrelated to neonatal crisis care per se, but with far-reaching political ramifications devolving on individual rights, came to affect analyses of who should count as medical decision makers. Roe v. Wade, which legalized abortion, worked, according to Rothman, to “…maximize parental autonomy in that a mother who wanted a fetus aborted had the right to do so…. Under Roe v. Wade the parent determined whether the fetus would survive – it was not much of an extension to add, whether a defective newborn would survive” (Rothman, 1991: p. 204). The emergent disability rights movement also contributed to an expanded view of who should be making decisions on behalf of whom, although sometimes in tension with parental prerogative. Section 504 of the Vocational Rehabilitation Act of 1973, which banned discrimination on the basis of ‘handicap,’ was particularly influential in how the debate shifted. The ‘prerogatives of doctors or parents’ were no longer the sole concerns. The rights of newborns with disabilities now also needed to find expression and, for some, this required the implementation of a review board.

Rothman concludes that the era of unilateral decision making on the part of physicians in the nursery had come to an end. But the ethical apparatus brought to bear on these anguishing cases constituted a type of oversight that would leave unexamined broader social questions, e.g., “…are expenditures on the neonatal nursery the best use of social resources, or, why are most babies in the neonatal nursery from underprivileged families?”

For Rothman, “…the Johns Hopkins case helped ensure that philosophy, not the social sciences, would become the preeminent discipline among academics coming into the field of medicine. This, in turn, meant that principles of individual ethics, not broader based assessments of the exercise of power in society, would dominate the intellectual discourse around medicine” (Rothman, 1991: p. 221). In the context of neonatal crises management, the challenges to authority were circumscribed narrowly.

The third example is a consideration of the early development of national bioethics commissions, whose work “both profited from and contributed to the development of bioethics.”13 It suggests how bioethics won cultural authority and institutionalization by mediating between challenges to biomedical authority and power, on the one hand, and the effort to shore up biomedical autonomy by scientists and physicians, on the other.

The national bioethics commissions have been characterized as being a part of transformational processes. As David Rothman relates, “As late as 1966, physicians had a monopoly over medical ethics; less than a decade later, lay people, dominating a national commission, were setting the ethical standards. Medical decision making had become everybody's business” (Rothman, 1991: p. 168 and p. 175, respectively). He underscores the impressive, ‘unbending opposition’ to the proposal of a commission, relating that many physicians and investigators found the idea of a panel meddlesome and dismaying: “leaders in medicine fought doggedly to maintain their authority over all medical matters;” and, “the geneticists and psychiatrists who testified (at the legislative hearings) were as antagonistic to the idea of a commission as the surgeons” (Rothman, 1991: p. 169).

Ultimately, the legislation establishing the first federal commission, the 1974 National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, created a weaker body than the one originally put forward. A 1968 proposal had called for a National Commission on Health Science and Society. Instead, the commission would be concerned not with health-science-and-society issues but only with human experimentation (although a subsequent commission expanded its topical ambit). It could make recommendations to the Department of Health, Education, and Welfare, but it had no enforcement power, and it would last only 4 years. There would be no permanent national human investigation board.

For Jay Katz, law professor and expert on the social and legal ramifications of human experimentation, the temporariness of a commission on human experimentation was the result of ‘subterfuge.’ He opined later that the reason a permanent commission was not established “may have been the Senate's reluctance to expose to public view the value conflicts inherent in the conduct of research. Had the Senate seriously debated the bill, it would have been forced to consider when, if ever, inadequately informed subjects can serve as a means to society's and science's ends. I believed then as I do now, that the rejection of an NHIB (National Human Investigation Board) was not just a mistake but a subterfuge to avoid giving greater visibility to the decisions made in the conduct of human experimentation” (cited in Fox and Swazey, 2008: p. 50). Law professor and bioethicist George Annas has also reflected critically. The commission, he felt, had endorsed the status quo. It failed to examine three premises: that research is good, that experimentation is almost never harmful, and that research-dominated independent review boards can adequately protect research subjects (Jonsen, 1998: p. 106).

Annas' critique sheds an interpretive light on one of the commission's unique tasks, one that was highly consequential to bioethics' development and its eventual managerial orientation. Commissioners were instructed to “…identify the ethical principles which should underlie the conduct of biomedical and behavioral research with human subjects and develop guidelines that should be followed in such research” (Jonsen, 1998: p. 102). This was an authorization to facilitate means rather than an invitation to assess the morality of ends. The project resulted in the Belmont Report, issued in 1978. The report concluded that there were three universal principles relevant to human experimentation: respect for persons, beneficence, and justice.14 These principles, in turn, required implementation of informed consent, risk–benefit assessment, and the just selection of research subjects (Jonsen, 1998: pp. 103–104). This new type of analysis, ‘principlism,’ did not dig deep into the concerns that first fired the bioethical imagination, e.g., whether and how a variety of biomedical technologies might be threatening values undergirding the uniqueness of human life or the nature of the human species. Principlism (begun by Georgetown bioethicists) became bioethics' managerial toolbox, one carried beyond its first use in the context of human experimentation. But its initial institutionalization was as the product of a legislative mandate for the creation of a mediated process. It was a method settled upon after years of contention between those challenging the morality of research agendas and medical practices and those seeking to bolster scientific and medical authority.

In his analysis of public debate over HGE from the 1950s to the mid-1990s, sociologist John Evans evaluated the role of commissions in the ‘thinning’ of that debate. When theologians and others challenged geneticists who were seeking to normalize HGE, scientists responded by advocating “the creation of government advisory committees that would ease calls for setting the ends of HGE research through congressional action” (Evans, 2002: p. 7). Evans demonstrates how “the locus of the HGE debate was purposefully shifted away from the public to the bureaucratic state” where “one choice was impossible – the decision not to engage in any HGE” (italics in original; Evans, 2002: p. 4 and p. 5, respectively). “Each particular community – be it Roman Catholic, feminists, or African-Americans – would still use the thick debate among themselves, but would translate their thick debate to the thin shared language for use in public.”15

These historical considerations, one institutional, one clinical, and one governmental, point to a variety of open historical questions: the role of funding constraints in shaping public oversight, the demand for outside moral assistance arising from within medical ambits, the strategic value of guideline creation in delegitimizing the moral analysis of certain ends, and so on. What the three considerations have in common, however, is a tale of how the expansion of biomedical decision-making arenas to include ‘outside’ (bioethical) input also involved a narrowing of one kind or another, a narrowing that always facilitated or was facilitated by adherence to principlist methods and solutions.16

With the ascendancy of principlism, the scope of the discussion that birthed bioethics, concerning what goals the new technologies should or should not serve, constricted. Theological critique in particular, which had been highly influential in the earliest days of concern (what might be called ‘protobioethics’), was marginalized.17 “One of my toughest problems during the Hastings Center's first twenty years,” Dan Callahan reflected, “was persuading the philosophers to sit down with the theologians and to take them seriously. The secular philosophers could not give a damn for what the theologians were saying and were even scornful” (quoted in Jonsen, 1998: pp. 83–84; see also Callahan, 2012: pp. 15ff).

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780128012383001756

MORAL AND ETHICAL ISSUES

Joseph Bronzino PhD, PE, in Introduction to Biomedical Engineering (Second Edition), 2005

2.8.3 Nontherapeutic Biomedical Research Involving Human Subjects (Nonclinical Biomedical Research)

In the purely scientific application of medical research carried out on a human being, it is the duty of the doctor to remain the protector of the life and health of that person on whom biomedical research is being carried out.

The subjects should be volunteers (i.e., either healthy persons or patients for whom the experimental design is not related to the patient's illness).

The investigator or the investigating team should discontinue the research if in his/her or their judgment it may, if continued, be harmful to the individual.

In research on humans, the interest of science and society should never take precedence over considerations related to the well-being of the subject.

These guidelines generally converge on six basic requirements for ethically sound human experimentation. First, research on humans must be based on prior laboratory research and research on animals, as well as on established scientific fact, so that the point under inquiry is well focused and has been advanced as far as possible by nonhuman means. Second, research on humans should use tests and means of observation that are reasonably believed to be able to provide the information being sought by the research. Methods that are not suited for providing the knowledge sought are pointless and rob the research of its scientific value. Third, research should be conducted only by persons with the relevant scientific expertise. Fourth,

All foreseeable risks and reasonably probable benefits, to the subject of the investigation and to science, or more broadly to society, must be carefully assessed, and…the comparison of those projected risks and benefits must indicate that the latter clearly outweighs the former. Moreover, the probable benefits must not be obtainable through other less risky means.

Fifth, participation in research should be based on informed and voluntary consent. Sixth, participation by a subject in an experiment should be halted immediately if the subject finds continued participation undesirable or a prudent investigator has cause to believe that the experiment is likely to result in injury, disability, or death to the subject. Conforming to conditions of this sort probably does limit the pace and extent of medical progress, but society's insistence on these conditions is its way of saying that the only medical progress truly worth having must be consistent with a high level of respect for human dignity. Of these conditions, the requirement to obtain informed and voluntary consent from research subjects is widely regarded as one of the most important protections.

A strict interpretation of these criteria for subjects automatically rules out whole classes of individuals from participating in medical research projects. Children, the mentally retarded, and any patient whose capacity to think is affected by illness are excluded on the grounds of their inability to comprehend exactly what is involved in the experiment. In addition, those individuals having a dependent relationship to the clinical investigator, such as the investigator's patients and students, would be eliminated based on this constraint. Since mental capacity also includes the ability of subjects to appreciate the seriousness of the consequences of the proposed procedure, this means that even though some minors have the right to give consent for certain types of treatments, they must be able to understand all the risks involved.

Any research study must clearly define the risks involved. The patient must receive a total disclosure of all known information. In the past, the evaluation of risk and benefit in many situations belonged to the medical professional alone. Once made, it was assumed that this decision would be accepted at face value by the patient. Today, this assumption is not valid. Although the medical staff must still weigh the risks and benefits involved in any procedure they suggest, it is the patient who has the right to make the final determination. The patient cannot, of course, decide whether the procedure is medically correct because that requires more medical expertise than the average individual possesses. However, once the procedure is recommended, the patient then must have enough information to decide whether the hoped-for benefits are sufficient to risk the hazards. Only when this is accomplished can a valid consent be given.

Once informed and voluntary consent has been obtained and recorded, the following protections are in place:

It represents legal authorization to proceed. The subject cannot later claim assault and battery.

It usually gives legal authorization to use the data obtained for professional or research purposes. Invasion of privacy cannot later be claimed.

It eliminates any claims in the event that the subject fails to benefit from the procedure.

It is a defense against any claim of an injury when the risk of the procedure is understood and consented to.

It protects the investigator against any claim of an injury resulting from the subject's failure to follow safety instructions if the orders were well explained and reasonable.

Case Study: Confidentiality, Privacy, and Consent

Integral to the change currently taking place in the United States health care industry is the application of computer technology to the development of a health care information system (Figure 2.4). Although a computerized health care information system is believed to offer opportunities to collect, store, and link data as a whole, implementation of such a system is not without significant challenges and risks.


Figure 2.4. Current technology put to use in monitoring the health care of children


In a particular middle-sized city, it had been noted that children from the neighborhood were coming to the emergency room of a local hospital for health care services. A major problem associated with this activity was the absence of any record of treatment when the child showed up at a later date and was treated by another clinician. In an effort to solve this problem, the establishment of a pilot Children's Health Care Network was proposed that would enable clinicians to be aware of the medical treatment record of children coming from a particular school located near the hospital. The system required the creation of a computerized medical record at the school for each child, which could be accessed and updated by the clinicians at the local hospital.

Discuss at length the degree to which this system should be attentive to the patient's individual rights of confidentiality and privacy.

Discuss in detail where and how the issue of consent should be handled.
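For readers who want a concrete starting point for these questions, the sketch below shows one possible shape such a record could take, gating any release of data on guardian consent and logging every access. This is only an illustration: the class, field names, and consent logic are hypothetical, not drawn from the chapter or the pilot network.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ChildHealthRecord:
    """Hypothetical sketch of one record in the pilot Children's Health Care Network."""
    patient_id: str                                  # pseudonymous ID, not the child's name
    guardian_consent: bool                           # parental consent on file before any sharing
    entries: list = field(default_factory=list)      # treatment notes added by clinicians
    access_log: list = field(default_factory=list)   # who read the record, and when

    def read(self, clinician_id: str) -> list:
        # Release the record only if consent exists, and make every access auditable.
        if not self.guardian_consent:
            raise PermissionError("no guardian consent on file")
        self.access_log.append((clinician_id, datetime.now()))
        return list(self.entries)
```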


Nevertheless, can the aims of research ever be reconciled with the traditional moral obligations of physicians? Is the researcher/physician in an untenable position? Once again, informed and voluntary consent is the key: if subjects of an experiment agree to participate in the research, then what happens to them during and because of the experiment is a product of their own decision. It is not something that is imposed on them but rather, in a very real sense, something that they elected to have done to themselves. Because their autonomy is thus respected, they are not made a mere resource for the benefit of others. Although they may suffer harm for the benefit of others, they do so of their own volition, as a result of the exercise of their own autonomy, rather than as a result of having their autonomy limited or diminished.

For consent to be genuine, it must be truly voluntary and not the product of coercion. Not all sources of coercion are as obvious and easy to recognize as physical violence. A subject may be coerced by fear that there is no other recourse for treatment, by the fear that nonconsent will alienate the physician on whom the subject depends for treatment, or even by the fear of disapproval of others. This sort of coercion, if it truly ranks as such, is often difficult to detect and, in turn, to remedy.

Finally, individuals must understand what they are consenting to do. Therefore, they must be given information sufficient to arrive at an intelligent decision concerning whether to participate in the research or not. Although a subject need not be given all the information a researcher has, it is important to determine how much should be provided and what can be omitted without compromising the validity of the subject's consent. Another difficulty lies in knowing whether the subject is competent to understand the information given and to render an intelligent opinion based on it. In any case, efforts must be made to ensure that sufficient relevant information is given and that the subject is sufficiently competent to process it. These are matters of judgment that probably cannot be made with absolute precision and certainty, but rigorous efforts must be made in good faith to prevent research on humans from involving gross violations of human dignity.

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780122386626500045

Quantitative Systems Pharmacology

Roberto A. Abbiati, ... Davide Manca, in Computer Aided Chemical Engineering, 2018

1 Introduction

Pharmacokinetics (PK) is a branch of pharmacology that studies the time course of drug concentration in the body. It investigates how, and to what extent, the organism acts on drug molecules, taking as its outcome the concentration–time profiles of the drug in the blood or in other body sites of interest. These analyses produce data to evaluate the properties, efficacy, and safety of a drug. In the case of novel active principles or drug formulations proposed for clinical use and commercialization, it is mandatory to submit detailed reports on drug PK to the competent regulatory agency (e.g., FDA (United States), EMA (Europe), CFDA (China), and TGA (Australia)). Reports are highly regulated to assure the safety and efficacy of drug formulations for final users. PK data are generated experimentally and are produced through a systematic, multistage process. Initially, experimental laboratory activity is necessary to produce preliminary data on specific chemical and physical properties of the active principle (in vitro studies). In addition, preclinical tests in animal models (in vivo) are necessary. Preclinical practice is meant to determine the PK properties of drugs, quantify toxicity, and extrapolate initial dosages for subsequent human experimentation. Unfortunately, it is not possible to avoid or substantially reduce clinical studies (i.e., experimentation in humans) via tests in laboratory animals. Clinical activity is thus crucial; it is highly regulated, with several constraints and limitations, requires long times and many patients, and is ultimately extremely expensive. Human tests, defined as clinical trials, are classified into four phases. Three are necessary to obtain product approval, while the fourth consists of postcommercialization monitoring and is termed pharmacovigilance (Fig. 1). The time and the number of human subjects required for each trial phase increase progressively, which is reflected in increasing costs.


Fig. 1. Schematization of the drug discovery and development process, with details of the lengths and associated costs of the clinical trial phases as reported by DiMasi et al. (2003). The percentages under the stop signs indicate the fraction of drugs that do not make it to the following phase.

The standard experimental PK study is therefore based on the administration of drugs to human subjects, collection of blood samples on a predetermined schedule, assays of drug concentration in the blood, and definition of concentration–time curves. These curves are analyzed in multiple ways to investigate the drug properties. The major classification is between compartmental analysis and noncompartmental analysis (Gabrielsson and Weiner, 2000; Sheiner, 1984).

Noncompartmental analysis is based on graphical techniques applied to the experimental data of the measured blood concentrations. This method can determine important pharmacokinetic parameters, including Cmax (maximum concentration), AUC (area under the curve), Vd (volume of distribution), and CL (clearance), under the assumption of first-order elimination kinetics. Despite not being useful for PK prediction, this analysis is valuable in the first phase of drug development as it provides a better understanding of the pharmacokinetic properties.
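As an illustration, the following minimal sketch computes the listed noncompartmental metrics from a concentration–time profile. The sampling times, concentrations, and dose are invented, and the code assumes an IV bolus with first-order (log-linear) terminal elimination, as required for CL and Vd to be meaningful.

```python
import numpy as np

def nca_metrics(times_h, conc_mg_L, dose_mg):
    """Basic noncompartmental PK metrics, assuming IV bolus and first-order elimination."""
    t = np.asarray(times_h, dtype=float)
    c = np.asarray(conc_mg_L, dtype=float)

    cmax = c.max()                                         # maximum observed concentration
    tmax = t[c.argmax()]                                   # time at which Cmax occurs
    auc_last = np.sum(np.diff(t) * (c[1:] + c[:-1]) / 2)   # linear trapezoidal AUC(0-tlast)

    # Elimination rate constant from the terminal log-linear slope (last 3 samples).
    k_el = -np.polyfit(t[-3:], np.log(c[-3:]), 1)[0]
    auc_inf = auc_last + c[-1] / k_el                      # extrapolate AUC to infinity

    cl = dose_mg / auc_inf                                 # clearance, L/h
    vd = cl / k_el                                         # volume of distribution, L
    return {"Cmax": cmax, "Tmax": tmax, "AUC_inf": auc_inf, "CL": cl, "Vd": vd}

# Invented sampling schedule and assay results for a hypothetical 100 mg IV bolus.
times = [0.25, 0.5, 1, 2, 4, 8, 12, 24]            # h
concs = [9.2, 8.8, 8.1, 6.9, 5.0, 2.6, 1.4, 0.2]   # mg/L
print(nca_metrics(times, concs, dose_mg=100))
```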

This chapter focuses on compartmental analysis, which is based on the definition of mathematical models of various complexities. Historically, there have been two types of compartmental models: the classical ones (Riegelman et al., 1968; Wagner, 1993) and the physiologically based ones (Gerlowski and Jain, 1983). The constitutive concept of the compartments (for both kinds of models) is that the body, comprising the specific tissues and organs of interest (e.g., blood, liver, gut, and skin), can be considered as a set of interconnected control volumes to which drug is introduced and from which it is eventually removed. A compartment, in terms of chemical engineering nomenclature, can be compared with a continuously stirred tank reactor (CSTR). Indeed, it is possible to define a compartment (i.e., a CSTR) as a perfectly mixed, ideal system in which the entering drug is instantaneously mixed to the outlet concentration, which also characterizes the internal holdup of the compartment. This approach is particularly convenient when working with limited data and when an approximate quantification of drug PK is sufficient. Most importantly, it is valid for both descriptive and predictive purposes.
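To make the CSTR analogy concrete, here is a minimal sketch of a classical one-compartment model with first-order absorption, written as mass balances over two perfectly mixed control volumes. All parameter values are illustrative assumptions, not taken from any real drug.

```python
import numpy as np
from scipy.integrate import solve_ivp

ka = 1.0      # absorption rate constant, 1/h (assumed)
ke = 0.2      # elimination rate constant, 1/h (assumed)
V = 40.0      # volume of distribution, L (assumed)
dose = 100.0  # oral dose, mg (assumed)

def mass_balance(t, y):
    a_gut, a_central = y                # drug amounts (mg) in each control volume
    return [
        -ka * a_gut,                    # gut depletes by first-order absorption
        ka * a_gut - ke * a_central,    # central gains from gut, loses by elimination
    ]

sol = solve_ivp(mass_balance, (0, 24), [dose, 0.0], t_eval=np.linspace(0, 24, 97))
conc = sol.y[1] / V                     # concentration-time profile, mg/L
print(f"Cmax = {conc.max():.2f} mg/L at t = {sol.t[conc.argmax()]:.2f} h")
```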

Quantitative models can contribute to the cost/time reduction of clinical trials (Leil and Bertz, 2014) and support the activity of new drug discovery. Computation can be applied (i) to test in silico a large group of patients or different populations/ethnic groups, (ii) to translate data from laboratory tests (phase I, see Fig. 1) to humans, (iii) to simulate the results of the combination of different treatments (drug–drug interaction), or (iv) to improve the experimental design.
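As a sketch of point (i), the short Monte Carlo example below simulates a virtual population by placing an assumed log-normal between-subject variability on the elimination rate constant of an analytic one-compartment IV-bolus model; the population size, variability, and dose are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                                                # virtual subjects (assumed)
ke = 0.2 * rng.lognormal(mean=0.0, sigma=0.3, size=n)   # 1/h, log-normal variability
dose, V = 100.0, 40.0                                   # mg and L (assumed)

t = np.linspace(0.0, 24.0, 49)                          # h
conc = (dose / V) * np.exp(-np.outer(ke, t))            # C(t) = (dose/V) e^(-ke*t), per subject

i12 = np.argmin(np.abs(t - 12.0))                       # index of the 12 h sample
c12 = conc[:, i12]
print(f"median C(12 h) = {np.median(c12):.2f} mg/L, "
      f"90% interval = [{np.percentile(c12, 5):.2f}, {np.percentile(c12, 95):.2f}] mg/L")
```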

Read full chapter

URL: https://www.sciencedirect.com/science/article/pii/B9780444639646000027

What is a double-blind study?

Double-blind refers to a study in which both the participants and the researchers are unaware of which treatment is being given and of which subjects are receiving it. Both the participants and the experimenters are kept in the dark.

Which type of experimental design informs the researchers, but not the study subjects, of who is in the treatment group?

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment. A design in which the researchers know the assignments but the subjects do not is a single-blind design.

What are single-blind and double-blind studies?

In a single-blind study, patients do not know which study group they are in (for example, whether they are taking the experimental drug or a placebo). In a double-blind study, neither the patients nor the researchers/doctors know which study group the patients are in.

What is the purpose of blind and double-blind designs?

A double-blind design designates a rigorous way of carrying out an experiment in an attempt to minimize subjective biases on the part of both the experimenter and the participant [2–7]. Double-blind designs are most commonly utilized in medical studies that investigate the effectiveness of drugs.
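To make the mechanics concrete, the sketch below shows one common way double-blinding is operationalized in drug trials: an independent, unblinded party generates the randomization list and relabels the treatment kits with neutral codes, so neither the subjects nor the researchers administering treatment can tell active drug from placebo. The kit-code scheme and function name here are hypothetical.

```python
import random

def make_blinded_allocation(n_subjects, seed=42):
    """Generate a blinded kit list plus the key held only by the unblinded party."""
    rng = random.Random(seed)
    arms = ["active", "placebo"] * (n_subjects // 2)   # 1:1 allocation, assumed
    rng.shuffle(arms)
    # The key maps neutral kit codes to arms; it stays sealed until unblinding.
    key = {f"KIT-{i:04d}": arm for i, arm in enumerate(arms, start=1)}
    # Clinicians and subjects see only the kit codes, which carry no arm information.
    return key, list(key.keys())

key, kits = make_blinded_allocation(8)
print(kits)   # e.g., ['KIT-0001', 'KIT-0002', ...]
```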