Research Ethics From the Inside

Institutional review boards are responsible for protecting the rights of human research subjects. How do they really work?

The Constant Gardener, a suspenseful 2001 novel by John le Carré (later made into a popular film), centers on a pharmaceutical company that, to beat the competition to market, rushed a tuberculosis drug through clinical trials despite knowing it would harm the poor, desperate African research subjects, and even turned to murder to protect its secrets and profits. As le Carré made clear in an afterword to the book, his tale should not test the reader’s credulity: “By comparison with the reality, my story [is] as tame as a holiday postcard.”

But I have a different and more hopeful story to tell, based on my experience leading a private-sector research ethics board. The roots of this story lie in the protracted international effort to develop principles and guidelines that protect human research subjects from unethical treatment. This effort formally began in the aftermath of World War II with the Nuremberg Code of 1947, which emerged from the Nuremberg doctors’ trial and codified the principles of voluntary and informed consent by research subjects. Research ethics principles were further developed by the World Medical Association in the Declaration of Helsinki of 1964, and by the United States in the congressionally mandated Belmont Report of 1979, which outlined the ethical principles underlying biomedical and behavioral research and in turn informed the human subjects regulations adopted in 1981 by the Department of Health and Human Services, parent of the National Institutes of Health (NIH), and by the Food and Drug Administration. In 1991, the Federal Policy for the Protection of Human Subjects was codified as what came to be called the Common Rule, with other federal agencies joining as signatories. And in 1993, the Council for International Organizations of Medical Sciences, in collaboration with the World Health Organization, released the International Ethical Guidelines for Biomedical Research Involving Human Subjects.

In another key action, the United States adopted the National Research Act of 1974, which called for organizations conducting research to establish committees called Institutional Review Boards (IRBs). Their purpose would be to ensure that human subjects are made fully aware of the risks and benefits of their participation in research, in terms they can understand, so they can give informed consent. Participation must offer the prospect of a genuine benefit, even if in the distant future or to society at large. Payment for participation does not constitute a benefit, and being paid handsomely cannot be construed as compensation for accepting excessive risk.

Under the federal Common Rule, human subjects must be informed that after agreeing to participate in research, they can back out at any time, with no questions asked or pressure to stay. Descriptions of procedures must be provided at the educational level of research subjects. If subjects report or display “adverse events”—such as illness plausibly resulting from research exposures or self-destructive behavior after completing an intrusive questionnaire—the researchers conducting the study must report the events to the IRB. In turn, the IRB may demand that the research protocol be revised, that future adverse events be monitored and reported to the IRB on a specified schedule, or that the research be suspended or terminated entirely. For multiyear projects, researchers are typically expected to report their experiences to the IRB annually, to obtain approval of protocol changes before they are implemented, and to explain how protocol revisions will monitor for recurrences of adverse events while minimizing, if not eliminating, them.

Inventing research ethics

In the early 1990s it remained to be seen how the Common Rule would shake out. Would implementation focus largely on NIH? Would IRB reviews be restricted largely to biomedical and behavioral research? I was the newest officer in a management consulting firm whose modest $20 million in revenues came mostly from conducting health-related survey and evaluation research under contract to federal agencies, with training, technical assistance, and educational product development representing smaller lines of work. We did almost no work for NIH and no hardcore biomedical or behavioral research. I didn’t anticipate that the Common Rule would be interpreted by my firm’s federal clients as requiring a research ethics approval process. And when our federal clients didn’t raise the issue of IRB review with us for the next three years, we believed those assumptions had been validated.

Then in 1994 the grants office at the National Institute of Allergy and Infectious Diseases required that my company conduct a pre-award IRB review for a small project we were about to undertake. My boss, one of the company’s principals, directed me to find out how an IRB worked and to create one with due dispatch. Together, he and I formed an IRB of six staff members of varied backgrounds (four in research, one in technical assistance, and one in educational product development) plus an external professional ethicist who filled the requirement that IRBs have a “community member.” Our prerequisites for IRB members were a capacity to provide a fair and objective review, and no conflicts of interest. With few external sources of guidance regarding the implications of the Common Rule, our committee was left on its own to interpret and carry out its responsibilities. We painstakingly reviewed and approved the project’s protocol, and the institute awarded the grant.

We expected that our huddle would promptly be dissolved. Instead, company principals announced the IRB would remain at the ready, with the same members and chair, to satisfy current and prospective clients who might require IRB review prior to award or the involvement of human subjects. We already were following older guidelines issued by the Department of Health and Human Services on informed consent and treatment of research subjects; the question was whether our clients would interpret the Common Rule as requiring something new, considering that it provided for significant exemptions, especially for survey research.

Over time, as our federal clients began forming their own IRBs to conduct internal reviews, or turned to us to request IRB reviews, our assumptions began to change. Our members began advocating that more of our firm’s grants and contracts should undergo IRB review. Our company’s leadership said it was the IRB’s collective responsibility to educate the firm’s staff and officers on the merits of human subjects protection, and to sell them and their clients on why more projects should be run through our IRB.

Meanwhile, our IRB’s stopgap original membership was becoming a topic of contention. Company staff and officers were coming to view the IRB not only as playing an important role, but also as having power. After all, the IRB could require extensive written changes in research protocols, and additional presentations to the IRB, as a condition for project approval. And until project approval was received, no activities involving human subjects could occur, and revenues associated with human subjects research could not accrue. As a result, parts of the company not represented on the IRB—including ones that weren’t yet presenting their work to us—began to argue for seats at the table. I welcomed this as an opportunity for greater inclusion. Company leadership supported it as a means of fostering wider acceptance of the IRB’s role. We invited officers in underrepresented parts of the firm to propose thoughtful, technically proficient individuals with high integrity and a big-picture perspective. Nominees were vetted by corporate leadership and IRB members. Through this expansion process, the IRB’s role and the quality of our deliberations gained wider respect. At no time were members who were materially involved in a project under review permitted to participate in deliberations or decision-making.

Early on, our company entered into an agreement with the Department of Health and Human Services called a Federalwide Assurance, which the government had designed as a tool to help protect the rights and welfare of human subjects of research. Under this agreement, we agreed that all of our federally funded research involving human subjects would come under the IRB’s scrutiny. But this left room for interpreting what projects qualified for exemption from IRB review, who would determine they were exempt (the researchers or the IRB), and at what point exemption would be determined. Overall, our IRB’s position was that only the IRB could determine that a project qualified for exemption, and only after presentation to the IRB. We also took seriously the matter of protecting populations identified in the Common Rule as “vulnerable,” including pregnant women, children, and prisoners, and adopted the practice that exemptions could not be granted for studies involving a vulnerable population. We concluded that certain types of projects did automatically qualify as exempt from IRB review, including true market research and projects lacking true human subjects (such as management-focused evaluations of federal initiatives). Still, there was no firm-wide mandate that every project that could plausibly be considered research involving human subjects be run past the IRB, even if merely to determine whether it was exempt. With growing esprit de corps, IRB members began advocating for such a mandate. As confirmation of the IRB’s company-wide credibility, the firm submitted a revised Federalwide Assurance to NIH committing the IRB to review not only federally funded research but all research involving human subjects.

Meanwhile, board leadership documented the IRB’s protocols and reviewed the IRB training courses promulgated by NIH, several universities with which we collaborated, and a pay-as-you-go third-party training provider. Because the case examples used in these training courses drew from biomedical and behavioral research, they were inadequate training tools for our firm. We developed and tested more germane examples, building on the NIH course structure. With company endorsement, we required all research project directors in our firm to complete our course before they could present projects to the IRB for approval. Over time, we updated course content to reflect changes in the nature of our firm’s work and in the underlying structure of the NIH course. Eventually, our human resources department required that all company project directors take our IRB training course as part of an annual certification process. This evolution from an ad hoc to an institutionalized training process managed by human resources took about five years.

As not seen in the movie

Our IRB achieved a major coup in gaining visibility outside the company when a key federal client decided that renewal of our largest project, an international demographic and health survey accounting for a third of company revenue, required IRB approval. The request reflected an expanded demand across federal agencies for IRB reviews of any project that could plausibly be seen as human subjects research. Upon review, the only issue that troubled us involved the absence of a plan to administer a uniform written consent form for the survey. But the researchers working on the project explained to the IRB that a detailed, written consent form was inappropriate because most subjects taking the survey were illiterate; even if the form were read out loud to them, many would likely flee after hearing what sounded like legalese. We negotiated a solution: the researchers agreed to distill the essence of the consent form into language understandable by illiterate or minimally educated people living within a specific culture; field staff would read the form out loud to research subjects; and those who could sign their name would, but most would mark an X, which a field staff member would sign as witness.

In April 2000, I was invited as IRB chair to present testimony related to this work to the Clinton administration’s National Bioethics Advisory Commission. I joined IRB chairs of three respected management consulting firms with revenues many times ours, some of which performed clinical trials. The commission asked, did our IRBs accord the same rights and protections to research subjects in developing countries, especially in Africa, as those accorded subjects in developed nations?

I explained that our work in developing nations was largely demographic and almost entirely limited to asking questions. We were not testing drugs. Recently, however, we had begun collecting biological specimens, such as blood samples, to measure maternal and child anemia. I said that we expected the biomarker work would increase over time and that every protocol change would be reviewed by the IRB to ensure consent and safety. More to the point, I said that in many cases we were not the final arbiter of either the research design or the rights and protections accorded to research subjects. In our work throughout Africa and other parts of the developing world, the ministries of health of the countries involved were nearly always our sponsors or hosts and sometimes provided supplemental funds to expand the US-funded work. Each ministry had its own ethics board and set its own standards of research, which strongly influenced how our studies were implemented, often resulting in requirements exceeding our protocols. Moreover, we expected that the involvement of the ethics boards of host ministries would increase as we enhanced the collection of biological specimens.

Our IRB reached decisions on protocols to be implemented in the developing world in the same manner as it reached decisions for domestic work—we didn’t view it as a matter of standards being higher or lower. If there was an in-country partner or host organization, it helped shape the protocols prescribing the rights and protections accorded to human subjects. We supported ethical standards that were at least as high—and often higher—internationally as domestically.

By the mid-2000s, as the film version of The Constant Gardener was in production, our firm’s expanding work in collecting biological samples included gathering data in countries across Africa to produce the first estimates of HIV seropositivity using a technique called probability sampling, in which individual test subjects are chosen randomly from the larger population. Until then, country-level estimates had nearly always been based on the people who voluntarily walked into public health clinics. More accurate estimates were needed to ensure appropriate levels of funding to address the AIDS epidemic in terms of both treatment and prevention. But this work forced the IRB out of habitual ways of thinking. Some of the challenges were operational, such as figuring out how to preserve biological samples. Challenges more relevant to the IRB came from the ethics boards within the in-country ministries. Our initial intention was to conduct anonymous blood collection in each country. Regarding the blood samples as anonymous meant we had no intention of enabling the host ministries to notify research participants who tested HIV seropositive. Some ministries, however, disagreed with this approach. To address their objections, we developed protocols that allowed participants to obtain their results from designated public health clinics. Beyond that, it was up to each country to develop strategies for treatment of those who tested HIV positive.
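To illustrate why the shift to probability sampling mattered, here is a minimal sketch, in Python, of how a design-weighted prevalence estimate from a probability sample differs from the raw proportion one would get from clinic walk-ins. The numbers, strata, and the weighted_prevalence function are hypothetical illustrations, not drawn from the project’s actual data or methods.

```python
# Hypothetical illustration of why probability sampling matters for
# prevalence estimates. All numbers below are invented for demonstration.

def weighted_prevalence(results, weights):
    """Design-weighted prevalence: each respondent's test result is scaled
    by their survey weight (the inverse of their selection probability)."""
    total_weight = sum(weights)
    positive_weight = sum(w for r, w in zip(results, weights) if r)
    return positive_weight / total_weight

# Suppose a household survey draws from two strata with known selection
# probabilities: urban respondents (oversampled) and rural respondents.
urban_results = [True, False, False, False]                # 1 of 4 positive
rural_results = [False, False, True, False, False, False]  # 1 of 6 positive

# Inverse-probability weights: each rural respondent represents more
# people than each urban respondent in this hypothetical design.
urban_weights = [100] * len(urban_results)
rural_weights = [400] * len(rural_results)

results = urban_results + rural_results
weights = urban_weights + rural_weights

print(f"Design-weighted estimate: {weighted_prevalence(results, weights):.1%}")

# A convenience (clinic-based) figure ignores who actually shows up.
# If attendees skew urban or symptomatic, the raw proportion can badly
# misstate national prevalence.
print(f"Unweighted sample proportion: {sum(results) / len(results):.1%}")
```

The point of the sketch is simply that inverse-probability weights let each respondent stand in for a known share of the population, so the estimate does not depend on who happens to walk through the clinic door.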

The work on HIV seroprevalence studies was highly gratifying for the IRB and research team because it produced the first valid, probability-based (and, in our minds, the only accurate) data of this sort across Africa. Our studies showed that instead of being primarily a disease associated with poverty, HIV knew no socioeconomic boundaries, with prevalence fairly consistent across socioeconomic groups. In a few countries prevalence was highest in more affluent groups. Such results helped refute the stigmatization of HIV as a “poor man’s disease” and supported more effective allocations of resources for prevention and treatment of HIV.

Subsequently, the standard approaches of the project teams and the IRB were challenged when our federal client added testing for syphilis to the protocol in certain countries. Unlike tests for HIV, certain tests for syphilis could produce almost immediate results. Given that highly effective single-dose treatments—consisting of one pill—were available, some ministries wanted us to become on-the-spot treatment providers. The ethics argument was obvious: research subjects who tested positive would be far more likely to accept an immediate offer of treatment than if they had to take initiative to seek treatment later on.

Initially, the researchers and the IRB struggled with this requirement, not because we failed to see its wisdom, but because it created a sort of cognitive dissonance. Our firm had no precedents for providing treatments contingent on individualized data gathered in an ongoing data collection. We couldn’t imagine crossing this line, and worried about the implications of doing so, as if we were being asked to cross a metaphorical blood-brain barrier. When it became clear that some ministries would not relent, we arrived at a compromise: in some countries the ministries sent their own people with our teams to deliver treatment; in others, at the suggestion of the ministries, people we hired to assist with data collection were designated to dispense the medications. Under both scenarios, once we came to accept treatment delivery as an appropriate adjunct to the research, we not only satisfied the country-specific ethics boards but also became far more comfortable that the IRB was helping the research teams arrive at ethically optimal protocols, even improving on what the IRB had originally approved.

Yet another challenge emerged in 2005 when a federal agency other than our funder attempted to influence approved country-specific protocols, taking the position that certain services it provided represented a “best practice” that must be followed across countries. When one country’s ministry didn’t comply, the agency threatened to withdraw in-kind support for that country’s HIV seroprevalence data collections. Officials at the ministry became so upset that they mobilized their Parliament and their ambassador to the United States. Our project team and all actors within the country were in agreement, and our client agency offered no objection. As IRB chair, I concluded that the resisting federal agency, which was not our client for the work, was the unwelcome source of conflict. With no time to convene the IRB, I made an executive decision to side with the ministry, Parliament, and the ambassador, and suggested to the other federal agency that it back down. It did.

Abroad at home

What I am hoping to illustrate here is that independent, private IRBs can be strong protectors of research ethics standards when operating overseas. Ironically, in some ways the domestic setting was more challenging for our IRB, largely because we had hundreds of projects spread over dozens of clients, some of whom operated their own IRBs and were still trying to figure out how our IRB and theirs were supposed to relate to each other.

Our firm took on a court-mandated study to quantify the health effects of chronic exposure to industrial runoff, an acknowledged carcinogen, that had been dumped for years into a community’s water supply. The results would guide the court in calculating compensation owed to community members. The problem was that our client—a law firm hired by the manufacturer (and defendant)—had written research specifications designed to dramatically understate the exposure’s health effects. The IRB viewed the study as presenting elevated risk because, although participation was unlikely to harm subjects, unsound results could deprive participants and the class they represented of appropriately calculated damages. Moreover, conducting the study as designed could set dangerous precedents for future court-mandated studies related to similar industrial exposures elsewhere by lending technical and ethical credibility to poorly designed protocols.

After the IRB labored to strengthen the research design, the client killed our company’s contract. I suspect they then shopped around and found a contractor willing to implement the unsound design. Based on this experience, the IRB voiced the opinion to company leadership that contracts of this type should not be pursued without IRB approval of the design before proposal submission.

In another instance, our company obtained federal funding to conduct several large population-based telephone surveys related to intimate partner violence. Some questions were highly sensitive, and our IRB voiced concern that if an intimate partner overheard what a respondent was telling our interviewer, it might place the respondent at heightened risk. To manage that risk, researchers added a filter question after obtaining consent: respondents were asked whether it was a safe time to answer questions or whether we should call back later. Respondents were also reminded that they could end the interview at any time without explanation if their privacy in speaking with us became compromised or concerns about safety arose. The IRB accepted that in any “conflict zone” some risk was unavoidable.

In a third study, our firm worked with the client to develop a questionnaire exploring relationships between violence against oneself and violence against others in middle schools and high schools in one of the country’s most violence-ridden midsize cities. In the course of the study, the IRB had to investigate two possible adverse events. First, some field staff reported being exposed to a high level of physical threat when, for example, students heaved desks across the room at them. Teachers insisted this was within the range of normal student behavior. Subsequent review by our IRB, the client, and the client’s IRB led to the reluctant acceptance of the teachers’ observations. Second, a student was murdered while we were present in the school district. We learned that the murdered student had turned his life around and, on the day in question, had adopted the role of peacemaker between two warring gangs. After one gang left the scene, the other stuck around and kicked him to death. We felt an implicit responsibility. Because none of the students involved had taken our questionnaire, and since such killings were not uncommon, our IRB and the client’s agreed that the murder was not an adverse event resulting from the study. Still, we breathed a sigh of relief when data collection ended.

The future of IRBs

Over the past 30 years, IRBs have proliferated and come to play a critical role in protecting human research subjects. Traditional funders require that organizations performing research have the capacity to conduct IRB reviews as a prerequisite for granting support. Currently, it is difficult to obtain cooperation from partner organizations, to receive renewed cycles of funding, and to publish articles based on human subjects research if receipt of IRB approval cannot be demonstrated.

Leadership of research organizations typically recognizes that IRBs must operate independently. However, especially in downward economic cycles, IRBs can find themselves pressured to free up the revenue-generating engine. Such attempts to undercut the IRB’s independence can be viewed as normal give-and-take of doing business, whether in university or private settings. Such pressures can also come from researchers who have staff to pay and papers to write, or from clients who have schedules to enforce and reports to deliver. However, they also represent failures to appreciate that an IRB’s independence is essential for its effectiveness and legitimacy. As IRBs get their wings, they inevitably need to remind all parties of their essential independence.

Yet there is an opposite danger: that IRBs might come to be viewed as generic ethics boards for the organizations in which they operate. In my 16 years as IRB chair or cochair (during which our firm experienced tenfold revenue growth), the IRB began to gain an unwanted reputation as the firm’s ethics watchdog. When staff approached the IRB with ethical issues beyond our jurisdiction, we typically suggested they explore their concerns with the firm’s leadership or with clients, leaving open the possibility of later returning to the IRB. In one case, a client was extensively revising and intentionally misrepresenting the scientific product of an intervention study. In another, my firm was awarded a contract to evaluate all grants funded under a particular initiative, but we subsequently learned that the client had made no grants under this initiative, nor were any likely to be funded. Were we merely being hired to provide spin?

Some of our IRB members and staff did indeed advocate that the IRB play a larger role in brokering questions of ethics. For the most part, the domains into which they wished the IRB to extend itself went beyond the appropriate research-ethics activities for any IRB. But the demand for a broader role in company ethics suggests that organizations may need an additional type of ethical review body.

A comprehensive update to the Common Rule, issued by the federal agencies that are its signatories, began taking effect in July 2018, with a general compliance date of January 21, 2019. Revisions included adding new requirements for the information that must be provided to research subjects in the consent process, expanding the types of research qualifying for exemption, requiring that institutions involved in cooperative research use a single IRB, and ending continuing review after studies reach the analytical stage. Researchers should become acquainted with the updated Common Rule, monitor implementation updates, and look to their institutions to revise their own protocols. Meanwhile, as the range of potential clinical and technological interventions becomes increasingly sophisticated and difficult for IRB members to evaluate, it will become more necessary for IRBs to seek consultants who can help evaluate potential risks and benefits of research studies, interpret the actual interventions, and evaluate reported or otherwise suspected adverse events. It will also fall upon IRBs to see that researchers are in sync with evolving norms on questions of justice, equity, and inclusion.

With accelerating movements globally promoting nationalism, denial of human rights, deregulation, and abrogation of government commitments to provide for a variety of social benefits, the need for effective organizations to help maintain broadly shared ethical norms is growing stronger. IRBs may end up stepping into a void by assuming an expanded role in promoting ethics in research organizations and in societies. They are an organizational model that should be valued and strengthened.

