Regulatory Challenges in University Research
Federal regulations must be streamlined and coordinated so that society’s values can be upheld without impeding science.
The body of federal regulations designed to ensure that university research adheres to generally accepted societal values and ethics has grown rapidly in recent years, creating an administrative burden and a potential impediment to research. As publicly supported and publicly accountable institutions, universities are expected by force of regulation to develop effective procedures to deal with a number of research-related issues. Some of the most challenging sets of issues, which we focus on here, involve protecting human and animal research subjects, and detecting and managing scholarly misconduct. Other important issues include dealing with conflicts of interest among researchers and protecting the research environment for the benefit of both scientific workers and research subjects. The university research community has long accepted responsibility for these various tasks and recognized that reasonable regulations to achieve these goals are worthwhile and necessary, but the requirements imposed by the regulatory system are reaching the point where they may no longer be called reasonable.
The university community now finds itself trying to resolve the tension that has developed between two missions: fostering a robust research program and monitoring and regulating the activities that constitute this program. To be accountable for the public trust that funds their research, universities must show that their adherence to regulations is unequivocal and visible. However, badly designed or poorly coordinated regulations can create unnecessary problems for universities. We outline some key issues in the regulation of research and point to possible national strategies for achieving compliance with these regulations without unduly hampering scholarly inquiry.
Protecting human subjects
The Federal Policy for the Protection of Human Subjects, generally referred to as the Common Rule, sets out requirements for the conduct of federally sponsored human research. The Food and Drug Administration (FDA) has developed additional, but quite similar, regulations for investigators and sponsors of clinical trials designed to bring drugs and devices to the marketplace.
These regulations embody the principles of the Belmont Report, which was published in 1979 by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Three principles–respect for persons, beneficence, and justice–are accepted as the basic requirements for the ethical conduct of research involving human participants. Respect for persons demands that individuals be able to make their own decisions about participating in research and that they be fully informed about the project, its risks and benefits, and the voluntary nature of participation. Respect for persons also involves the protection and special consideration of individuals, such as children and the cognitively impaired, who are unable to provide their own independent informed consent. Beneficence requires that any harm associated with the research be minimized and that potential benefits be maximized. Justice denotes the expectation that participant selection will be equitable so as to ensure that no particular group or class of individuals will be unduly targeted for participation in or exclusion from the research.
These principles are widely accepted; in fact, colleges, universities, and academic medical centers commonly surpass regulatory requirements and extend these protections to all human research participants irrespective of the source of project funding or the goal of the research. Research institutions believe they are doing a good job of putting these principles into practice. But some segments of the public and government are less confident that this is the case. Indeed, we appear to have moved into an era of heightened public concern regarding the protection of human participants in research and of public questioning of the ethics and motives of researchers. This concern has been fueled, in part, by recent government sanctions, including temporary complete suspension of clinical research activities, at several leading research universities. For its part, the academic community views such sanctions, along with the institutions’ efforts to correct the problems or, in some cases, explain why the problems were more bureaucratic than scientific, as signs that the system is working. Some outside observers, however, have retained a darker view of events.
One current crisis facing institutions is the difficulty of complying with agency interpretations of regulations, many of which essentially constitute new rules imposed without the normal rulemaking process. Universities and academic medical centers have established committees called Institutional Review Boards (IRBs) to review experimental protocols involving humans. The institutions traditionally have regarded federal regulations as largely performance-based guidelines under which IRBs have significant discretion to act on a protocol-by-protocol basis. During the past several years, however, we have seen the temporary shutdown of several academic IRBs and, with them, of clinical research programs, leaving administrators and researchers scrambling to protect subjects enrolled in trials and to move forward with the research.
A review of documents issued by the Department of Health and Human Services (HHS) Office of Human Research Protections to institutions whose research programs have been investigated suggests that federal regulatory agencies are now viewing the regulations in more strict and literal terms than previously applied at many institutions. The agencies are expecting to see much more written documentation in IRB files related to research projects. This shift has forced institutions to greatly expand staffs and committees. At the University of Iowa, the cost of the human subject protection process nearly quadrupled between 1995 and 2000. The focus of many of these efforts is, unfortunately, on the documentation of the process to prevent regulatory sanction or closure, necessitating the use of resources that could otherwise be deployed to achieve more significant improvements in human participant protections. This disconnection becomes even more complex as politically charged issues, such as gene therapy and growth of human embryonic stem cells, enter the research arena.
At a time when IRBs are struggling under increased pressure to fully document all of their actions to satisfy new agency interpretations of the Common Rule and FDA regulations, the committees now must comply with yet another set of regulations, promulgated under the Health Insurance Portability and Accountability Act (HIPAA) and implemented by HHS. The HIPAA regulations detail special requirements for accessing a patient’s medical records for use in research, but they largely reflect protections already afforded under the Common Rule. Requiring IRBs to develop additional documentation to satisfy new regulations that do little more than duplicate those already in existence, especially at the same agency, is a movement in the wrong direction.
Problems such as these are being addressed by a new consortium of organizations that is initiating an accreditation process for institutions. By offering a recognized seal of approval, accreditation could establish a measure of excellence sought by research organizations, raising the bar for all and laying a path for continuous improvement. Called the Association for Accreditation of Human Research Protection Programs (AAHRPP), the organization plans to begin operating in 2002 and to draw its expertise from seven nonprofit organizations representing the leadership of universities, medical schools, and teaching hospitals; biomedical, behavioral, and social scientists; IRB experts and bioethicists; and patient and disease advocacy organizations. The Department of Veterans Affairs has contracted for similar accrediting services from the National Committee for Quality Assurance. These standards will be most appropriate and widely accepted if they distinguish between requirements for accreditation based on regulations (that is, specific laws and statutes) and recommendations based on best practices (that is, methods that are generally accepted as being the best way to deal with a particular situation).
In what may prove to be an important first step in initiating dialogue between accrediting agencies, the Institute of Medicine (IOM) issued, in April 2001, a report titled Preserving Public Trust: Accreditation and Human Research Participant Protection Programs. Among its recommendations, the report urges that accrediting organizations be nongovernmental entities whose standards build on federal regulations, that participants in studies be more thoroughly integrated into the research oversight process, and that consideration be given to having pilot accreditation programs evaluated by the U.S. General Accounting Office and the HHS Office of the Inspector General. The IOM is now conducting a more comprehensive assessment of the overall system for protecting human research participants, with a report expected in 2002. This study will delve into issues such as improving the informed consent process, easing the burdens on IRBs, ensuring that investigators are educated about the ethics and practices involved in conducting research with humans, enhancing research monitoring, and bolstering institutional support and infrastructure.
Protecting animal subjects
Under the Animal Welfare Act, the U.S. Department of Agriculture (USDA) regulates the care and use of several species of vertebrate animals in all research. As with human research, additional requirements are imposed on institutions that receive federal funding. In particular, the Public Health Service (PHS) Policy for the Humane Care and Use of Animals applies to all live vertebrate animals used in PHS-sponsored research. To provide further guidance on the operation of animal care and use programs under PHS regulations, the Institute of Laboratory Animal Resources of the National Research Council has published the Guide for the Care and Use of Laboratory Animals. Most other funding agencies and private foundations also require that research comply with PHS policy. As a result, institutions almost universally apply PHS standards to all animal research. Under these regulations, institutions must establish Institutional Animal Care and Use Committees that function in much the same way that IRBs function to oversee human research.
Animal care and use programs have had the option for several years of voluntarily seeking accreditation by the Association for the Assessment and Accreditation of Laboratory Animal Care (AAALAC) International. By participating in this accreditation process, institutions are able to assess their level of compliance with federal regulations and to get help in interpreting various regulations that are not spelled out in detail. Not surprisingly, the group’s staff members have been active in planning the AAHRPP.
Although the use of domestic pet species (dogs and cats) and nonhuman primates in research has attracted the most public attention, 95 percent of vertebrate animals used in research are rats and mice. These species are covered under the PHS policy, but they are not covered under the Animal Welfare Act. In September 2000, USDA settled a lawsuit filed by the Alternatives Research and Development Foundation by agreeing to initiate the rulemaking process for inclusion of rats and mice, as well as birds, under the Animal Welfare Act. However, Congress temporarily halted this process the following month by including a provision in the 2001 Agricultural Appropriations Act that prohibited USDA from using appropriated funds to begin rulemaking on this issue during fiscal year (FY) 2001, and this prohibition has now been extended through FY 2002 as well. Numerous universities and professional societies have spoken against this expanded coverage of the Animal Welfare Act. Their objections, which we believe are valid, center not on the appropriateness of providing protections to these species, but on the redundancy and cost of compliance with such regulations. The same species already are covered under the PHS Policy, as well as under the accreditation guidelines used by the AAALAC. Supporters of the inclusion of rats, mice, and birds under the Animal Welfare Act often cite the need for regulation of commercial breeders and vendors. However, a review of AAALAC’s list of accredited organizations shows that most of the large laboratory animal supply companies already have sought and received voluntary accreditation for their programs, including programs involving these species.
Preventing scholarly misconduct
Regarding scholarly misconduct, there is good news but also potential for bad news. On the one hand, research universities have not been unduly burdened with regulatory requirements in this area. Moreover, the federal government has proposed a new definition of research misconduct, to be implemented ultimately across all enforcement agencies, that is nearly uniformly seen as a vast improvement over the previous definitions used by the various agencies. In particular, the old definitions of misconduct included an ambiguous category described as “other practices” that seriously or significantly differed from accepted scientific norms. The proposed policy restricts the definition of misconduct to “fabrication, falsification, or plagiarism in proposing, performing, or reviewing research or in reporting research results.” These terms are further defined as follows:
Fabrication is “making up data or results and recording or reporting them.”
Falsification is “manipulating research materials, equipment, or processes, or changing or omitting data or results such that the research is not accurately represented in the research record.”
Plagiarism is “the appropriation of another person’s ideas, processes, results, or words without giving appropriate credit.”
The proposed definition concludes, as did previous definitions, with the observation that honest error or differences of opinion do not constitute research misconduct.
This is a common sense approach with which few people can legitimately argue, in large part because the government, the public, and research universities share many of the same interests in regulating research misconduct. All three groups share an interest in the integrity and quality of research funded by public money and focused on progress in improving the collective public health, quality of life, and national security. Beyond that, it is vital to research universities, and to the national research enterprise of which they are a fundamental part, that these interests be fostered in a way that does not unnecessarily restrict or handicap research.
However, universities do see an important need to ensure that the costs they must bear in complying with these requirements are commensurate with the risk of scholarly misconduct, which is deemed to be quite low. For example, the National Institutes of Health (NIH) in 1999 awarded nearly 13,000 grants and contracts, most involving multiple researchers. For that year, the PHS Office of Research Integrity (ORI) was called on to investigate 33 cases of alleged misconduct. Only 13 of these cases were sustained with a final finding of actual misconduct. Research universities also are concerned that they be able to maintain institutional responsibility and flexibility in investigating and adjudicating their own internal affairs. Here, too, there is cause for optimism: the proposed policy on misconduct identifies maintaining local autonomy as a goal.
Thus, if misconduct were all that the government intended to regulate, research universities would not have much cause to complain. However, “research misconduct” seems to have become synonymous in regulatory parlance with “research ethics.” Reflecting this development, the PHS issued its final “Policy on Instruction in the Responsible Conduct of Research” in December 2000. Although implementation of the policy has been and remains suspended, the policy as proposed would confront universities with the prospect of having to provide mandatory instruction in the “responsible conduct of research” for all staff involved in PHS-funded research, including research collaborators outside the university. The program of instruction includes nine core areas determined by the PHS to be “significant,” five of which, including publication and authorship practices, have not previously been the focus of regulatory attention by the agency. The scope of the draft policy was somewhat revised in response to extensive criticism from the research community. For example, under the revised policy it is no longer required, but is still “recommended,” that the program of instruction be extended to administrative and other support staff. But it took an inquiry by a House of Representatives committee to stop, even temporarily, this far-reaching initiative.
At present, the PHS has suspended implementation of the policy while the agency responds to the committee’s concern that the ORI exceeded its legal authority in issuing “a final policy . . . that would impose new requirements on our nation’s research institutions.” If the PHS ultimately issues a policy that resembles the current suspended version, then the costs of compliance are anticipated to be staggering for research universities and far out of line with the low risk detected by the ORI’s own study.
Heeding widespread complaints from the research community concerning the crippling effects of the federal regulatory structure, the House Committee on Appropriations requested in its budget for FY 1998 that agencies mount an effort to streamline duplicative and unnecessary regulations governing the conduct of extramural scientific research. In response, NIH established the Peer Review Oversight Group/Regulatory Burden Advisory Group to address major issues of increasing regulatory burden. We applaud this initiative and the changes that have been implemented, such as just-in-time IRB review (the review of human subject research only after the likelihood of funding is known). But few, if any, researchers and university administrators would say today that their regulatory burden has been greatly simplified or reduced since the initiative began. Therefore, we call on this group to look not at the fringes of the regulations for ways to streamline processes, but at the heart of the regulations and their redundancy across departments and agencies.
Although there can be no disagreement about the need to carefully and consistently ensure adherence by universities to regulations based on societal values, the details of bureaucratic implementation of the regulations are critical to the health of the nation’s university research enterprise. An obvious goal should be to streamline the regulatory apparatus and make more uniform the plethora of regulations that now exist.
In the area of protecting human research subjects, the federal government should rewrite all of its regulations so that there is a single set of rules, as well as explicit interpretations of those rules. The Common Rule, which covers nearly every type of situation, might serve as a solid foundation, with additions and refinements made as necessary. There is simply no need to have multiple agencies enforcing redundant regulations. The government also should identify a single agency responsible not only for the implementation and interpretation of the regulations, but also for conducting IRB audits and reviews. This agency could be either an existing one or a newly created administrative entity. Managed properly, this restructuring would not only relieve universities of unnecessary burdens but also lower regulatory costs for both the regulators and the institutions.
To better protect animals used in research, the time has come as well to provide regulatory oversight for nonprofit research and educational institutions under a single agency’s umbrella. Again, either an existing agency or a new one could function in this role. Such consolidation would eliminate concerns about redundant regulations with potentially different reporting requirements and inspection parameters.
In the case of scholarly misconduct policies and training in the responsible conduct of research, any additional administrative burden on universities must be more narrowly tailored to the true nature and extent of the problem. Requiring thousands of federally funded researchers to receive formal (often additional) ethics training will not result in a wise use of resources, nor will it do much to reduce the already low risk that misconduct will occur. The goal must be to develop policies that truly can make a difference, rather than merely make investigators (and support staff, in some instances) jump through another hoop.
Across all of these areas, regulatory agencies should be urged to develop model programs that instruct university personnel on how to comply with their requirements.
Finally, in the case of all research regulatory procedures, the true cost of implementing new regulations, as well as new and more stringent interpretations of existing regulations, must be addressed. Proposals for addressing this issue include removing the arbitrary cap on the administrative components of facilities and administrative costs (commonly referred to as “indirect costs”) or developing some other mechanism whereby universities can fairly recover from the government the growing cost of regulatory compliance. Whatever the solution, we must find a way to ensure that well-documented costs for compliance activities are appropriately reimbursed. However, irrespective of the cost or reimbursement for those costs, we must ensure that the time and energies of the research community are not inordinately distracted from their most important tasks: the development of new knowledge for the benefit of all.
Committee on Assessing the System for Protecting Human Research Subjects, Board on Health Sciences Policy, Institute of Medicine, Preserving Public Trust: Accreditation and Human Research Participant Protection Programs (Washington, D.C.: National Academy Press, 2001).
Final Federal Policy on Research Misconduct, 65 Federal Register 76260 (December 6, 2000).
Steven Goldberg, “The Statutory Framework for Basic Research,” Culture Clash (New York: New York University Press, 1994), 44–68.
Institute of Laboratory Animal Resources, National Research Council, Guide for the Care and Use of Laboratory Animals (Washington, D.C.: National Academy Press, 1996). (http://www.nap.edu/readingroom/books/labrats/)
Sheila Jasanoff, Science at the Bar (Cambridge, Mass.: Harvard University Press, 1995), 93–113.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research (Washington, D.C.: Department of Health, Education, and Welfare, April 18, 1979).
National Bioethics Advisory Commission, Ethical and Policy Issues in Research Involving Human Participants (2001).
Office of Research Integrity, PHS Policy on Instruction in the Responsible Conduct of Research (RCR) (Washington, D.C.: U.S. Department of Health & Human Services, December 1, 2000).
Letter of W. J. Tauzin, chair of the House Committee on Energy and Commerce, to Chris Pascal, director of the PHS Office of Research Integrity, February 5, 2001.
David L. Wynes is assistant vice president for research, Grainne Martin is senior associate counsel for research, and David J. Skorton is vice president for research at the University of Iowa.