How Should Science Respond to CRISPR’d Babies?

The scientific community's leaders should enforce deterrence, create disclosure, and express humility.

On November 25, 2018, much of the world (including me) was shocked to hear that the Chinese scientist He Jiankui had produced the world's first babies from embryos whose DNA he had edited, using the newly emerging technology called CRISPR (a handier name than the official "clustered regularly interspaced short palindromic repeats"). We have still been told little about what happened, and we have no independent or reliable verification. But the ripples of this event have not waited for confirmation: distrust, concern, and even outrage have spread. Science itself is at risk.

Whatever the full story, the He Jiankui affair has clearly been a fiasco. The experiment went forward despite a gross imbalance of risks and benefits, highly questionable consent, apparent fraud, inappropriate secrecy, and violation of a strong global consensus against human germline genome editing at this time. People were, rightly, shocked by this bolt from the blue, one that reinforced an unfortunately already widespread image of ambitious rogue scientists, casting He as a Chinese version of Drs. Frankenstein and Moreau.

Science does not have a president, prime minister, or pope. But science does have leaders, individual and institutional, and those leaders have some influence over public perceptions. Leaders reacted—but their reactions were insufficient. Now they badly need to do three things: enforce deterrence, create disclosure, and express humility.

Enforcing deterrence

He Jiankui expected to be hailed as a hero, or at least to be seen as a pioneering figure. He gambled his high-flying present on hopes of an even higher-flying future. Thus far, it seems he bet wrong. Far from being a hero, he has been (almost) universally condemned and appears to be facing criminal prosecution in China. But whatever happens in China, science needs to ensure that he is ostracized. No future ambitious scientist should see this kind of experiment as anything but a suicidal career move.

In 1980, when Martin Cline, a biomedical scientist at the University of California, Los Angeles, violated ethical rules by pursuing the first (unapproved) gene therapy trials, he lost positions and grants; his career never recovered. Hwang Woo-Suk, a veterinarian and biotechnology researcher at Seoul National University, acted much worse by fraudulently claiming to have cloned human embryos. Until his fraud was discovered in late 2005, Hwang was a hero in South Korea, where his face graced a postage stamp. Following the revelations, he was fired from his faculty position, lost all his grants, and in 2009 was convicted of fraud and embezzlement and given a two-year prison sentence (suspended and later reduced to 18 months). Hwang has subsequently begun to rebuild his reputation with animal cloning work, but he has never regained his previous position. Similarly, He Jiankui's career needs to be ruined—not necessarily forever, but for a long, long time.

How should science accomplish this? Colleagues should shun him, journals should refuse to accept papers where he is an author, funders should forsake him. He needs to be on publicly announced blacklists, at the very least by journals and funders. And leaders of science need to take the lead in announcing this and in encouraging others to do the same.

Of course, collective ostracism, particularly coming from official or semiofficial leadership, could descend into the abyss of McCarthyism or even Stalin-era Lysenkoism. Individual scientists should decide, based on their own conscience, whether to have anything to do with He. I would encourage them to reject any contact with or overtures from He, based on what is already known. The presidents of the National Academies in the United States and their foreign equivalents, and the directors of the National Institutes of Health and the National Science Foundation and their foreign equivalents, while not pressuring scientists to avoid He, should make it clear that they approve of shunning He, pending further light on the situation. Journals should take the same position. Funding agencies, particularly governmental ones, may have a harder time ignoring applications before an official determination of He's guilt, but they should at least explore their legal powers to do so.

I do not recommend at this point that the US Academies, federal research funders, or foundations that support research perform their own investigation of He's actions. (Investigations could well be necessary, though, for He's collaborator from Rice University, Michael Deem.) But these groups should be alert to final determinations coming from Chinese authorities investigating the situation. Some steps have already been taken against He: his funding has been canceled, and he has been fired from his faculty position. Chinese sources have implied that he will likely face criminal charges. Additional Chinese criminal or civil findings against him should be the basis for formal disqualification or other "blacklisting," at least as long as those determinations are credible. Perhaps, someday, a long life of repentance and good works by He might justify science in readmitting him into its ranks—but not soon, both for his own demerits and, more important, to discourage would-be emulators.

Creating disclosure

A harder question is raised by all the academics who had hints, or direct knowledge, of what He was doing, but said nothing. This list includes at least the scientists Matthew Porteus and Stephen Quake at Stanford; Mark Dewitt at Berkeley; Nobel prize winner Craig Mello at the University of Massachusetts; and He’s collaborator, Michael Deem, at Rice. It also includes father-and-son ethicists William Hurlbut at Stanford and Ben Hurlbut at Arizona State University. Each has said that he had conversations with He about human embryo gene editing. Each has said that he discouraged He from doing it. Several have said that they suspected he might be doing it anyway. A few have said they actually knew about the pregnancies some months before the babies were revealed. Not one of them disclosed his knowledge in advance, to anyone.

I think they should have. But the word "snitching" conveys some of the difficulties of insisting on disclosure. Informing on others is sometimes socially required, while at the same time often being socially repugnant. Whether among siblings, high school students, or employees, informing the authorities about a colleague's misbehavior will often get you labeled as a snitch, even a "dirty snitch."

In addition to this basic social conditioning, conventions of confidentiality in science are important for allowing colleagues to communicate without fear of being scooped. Both peer review in publications and review in grant applications typically include strong confidentiality requirements, such as the destruction of any paper or electronic copies of the submitted article or the grant application. That internal code of confidentiality presumably leads to more discussion and cross-fertilization of ideas—and better research. Destroying it could slow scientific progress.

More concretely, scientists who snitch will almost certainly ruin their relationship with the snitched-on colleague. Similarly, pediatricians who in good faith and in response to strong state laws report parents as potential child abusers will often lose those parents and their children as patients. Informing scientists might find themselves sued, successfully or not, for libel, slander, and various other offenses. If the snitch is a competitor, as will often be the case, tortious claims might even be plausible. And the scientists who report may incur broader social costs from other colleagues and potential collaborators who shun them as snitches.

Against the backdrop of this social conditioning and the valuable conventions of confidentiality, should the scientists aware of He’s activities have disclosed their conversations and suspicions? And if so, to whom?

This is not a new question, to science or generally. It is not even new in discussions of the He affair: an editorial in the journal Science by the presidents of three national academies called for “an international mechanism that would enable scientists to raise concerns about cases of research that are not conforming to the accepted principles or standards.”

This question has arisen before in the biosciences with respect to so-called dual-use technologies, those that could be used for good purposes or for evil ones, such as biological warfare. It also comes up in more routine situations where someone is aware of wrongdoing and we as a society want to encourage or protect their whistleblowing. Qui tam statutes, giving whistleblowers some of the proceeds of suits against wrongdoers, date at least as far back as the Civil War. Their use has continued and expanded in recent years, especially in cases where fraud against the US government is alleged. Sometimes failure to snitch on illegal activity is itself a crime, especially under specific statutes relating to child abuse and elder abuse.

The He affair is simply the most recent example of why science should think hard about encouraging, or even requiring, scientists to inform someone of their concerns about suspect research. I am largely convinced that such an obligation should be created. But the details are important, and those are tricky to get right.

What would be the obligation? To disclose behavior you believe to be illegal, or is “unethical” enough? Is this a binding legal or ethical obligation or a guideline or aspiration? Do you have to be certain of the other’s misbehavior, to have “clear and convincing evidence,” or to have a “preponderance of the evidence,” or just to have “reasonable suspicion”? What kinds of things should be reported? Plagiarism? Inappropriate authorship credit or order? Minor unapproved changes in a human subjects protocol? Dangerous work? Unethical work? Illegal work?

Then we hit the question of whom to tell. At least one of He Jiankui’s confidants, Matthew Porteus, has said that he thought about telling someone about He’s likely plans, but he did not know where to go. This is a real problem, especially when the two scientists are not in the same institution. When they are at the same university, a word to the relevant ombudsperson, department chair, dean, research vice president, or president might do the trick. But how would, say, Stanford professor Porteus go about contacting someone at China’s Southern University of Science and Technology, where He worked? It is not helpful to tell people to gather up their courage and take action unless you tell them where and how to report the misbehavior of colleagues.

We could create "scientific snitching" bodies. They could be located in academic institutions, in funding bodies, in national governments, or even in some kind of international organization. Scientists should be told they have a duty to report to this entity some kinds of illegal, unethical, or dangerous research. We could even grant immunity from lawsuits to those who report in good faith.

But we also should spare a moment’s thought, and pity, for the people who receive these reports. Some of the reports will be from disgruntled coworkers or jealous rivals or from the apparently mentally ill. How much chaff will need to be sifted to reveal how little grain? And who in the world would want that job?

At this point I am not sure exactly what should be done, but I am convinced that science needs to think hard about encouraging internal reporting of dangerous, unethical, or illegal research. The alternative may well be ham-fisted external requirements, or yet more loss of trust in the beneficent motives and results of science—or both. We need further study and thought on the details. We can examine precedents, such as requirements for medical professionals to report their patients for abuse and colleagues for practicing while impaired. Academic honor codes provide other useful precedents. The National Academies, or some similar group, should convene a committee to study the feasibility of such a reporting requirement and, within a short time, report with recommendations on whether and how to make it happen.

Expressing humility

The He affair fed public concerns about mad, bad, and rogue scientists. Whether or not one ultimately concludes that He Jiankui violated Chinese laws, criminal or otherwise, he was a rogue scientist. He proceeded secretly to do something that he knew, or should have known, would be widely condemned. He allegedly committed fraud to do so, at least according to official reports from Chinese authorities. He’s actions led many in the public to worry that scientists were pursuing their schemes with no regard for the law or for the opinions of their fellow citizens, citizens who were largely footing their bills. Science needs to make clear that it cannot, will not, and does not want to pursue research that is not acceptable to its society.

Before the He affair, scientists’ statements about human genome editing openly acknowledged the importance of public opinion. A March 2015 article in Science, many of whose authors became members of the organizing committees of the International Human Genome Editing Summits, said we should: “Strongly discourage, even in those countries with lax jurisdictions where it might be permitted, any attempts at germline genome modification for clinical application in humans, while societal, environmental, and ethical implications of such activity are discussed among scientific and governmental organizations.”

The organizing committee for the first summit, in December 2015, said in a concluding statement: “It would be irresponsible to proceed with any clinical use of germline editing unless and until (i) the relevant safety and efficacy issues have been resolved, based on appropriate understanding and balancing of risks, potential benefits, and alternatives, and (ii) there is broad societal consensus about the appropriateness of the proposed application.”

A report issued in February 2017 by the US National Academies of Sciences, Engineering, and Medicine (NASEM) said: “With respect to heritable germline editing, broad participation and input by the public and ongoing reassessment of both health and societal benefits and risks are particularly critical conditions for approval of clinical trials.”

In the United Kingdom, the Nuffield Council on Bioethics, an independent body that assesses novel bioethical questions, issued a report in July 2018 that said: “We recommend that before any move is made to amend UK legislation to permit heritable genome editing interventions, there should be sufficient opportunity for broad and inclusive societal debate.”

What all these findings have in common is the need for public buy-in—at least acceptance if not full approval or consensus—before proceeding with human germline genome editing. At the second international gene-editing summit, held in 2018 in Hong Kong, where He revealed his work, David Baltimore, chair of the summit organizing committee, initially struck the right note. Immediately after He’s appearance, Baltimore said, forthrightly, “There has been a failure of self-regulation by the scientific community because of a lack of transparency.” And, indeed, the organizing committee’s concluding statement reiterated Baltimore’s condemnation of He.

But there were disturbing off-notes, both in the official concluding statement and in individual statements from prominent scientists. The organizers’ concluding statement said:

The organizing committee concludes that the scientific understanding and technical requirements for clinical practice remain too uncertain and the risks too great to permit clinical trials of germline editing at this time. Progress over the last three years and the discussions at the current summit, however, suggest that it is time to define a rigorous, responsible translational pathway toward such trials.

A translational pathway to germline editing will require adhering to widely accepted standards for clinical research, including criteria articulated in genome editing guidance documents published in the last three years.

Such a pathway will require establishing standards for preclinical evidence and accuracy of gene modification, assessment of competency for practitioners of clinical trials, enforceable standards of professional behavior, and strong partnerships with patients and patient advocacy groups.

The concluding statement also called for “continued international discussion of potential benefits, risks, and oversight of this rapidly advancing technology.” It did not, however, say that a “broad societal consensus” would be necessary before starting clinical trials. And it did not say that before such trials start, “there should be sufficient opportunity for broad and inclusive societal debate.” This statement could easily be read as: “There are a lot of technical things scientists need to figure out before this can be done. The public should have a chance to comment, but they will not make the decisions. We will.”

This impression was abetted by some unfortunate statements alluding to the inevitability of human germline editing. George Daley, a member of the organizing committee, one of the major speakers, and the dean of Harvard Medical School, told the summit: “I want to suggest that I do think it’s time to move forward from the prospects of ethical permissibility to start outlining what an actual pathway for clinical translation looks like. What would be the regulatory standards that a group would be held to in order to bring this technology forward?”

Daley’s regulatory standards did not expressly include a societal consensus, or even social acceptance. He made a few bows toward society, but one could quite easily hear in his comments that scientists should, and would, be the ones to figure out when, and how, this new technology will be used. Science subsequently quoted Daley as saying, “We have to aspire to some kind of a universal agreement amongst scientists and clinicians about what’s permissible.… Those who violate those international norms are held out in stark relief.” At no point did he invite the public to contribute to this “universal” agreement.

My complaint is not that what the organizing committee or Daley said was wrong, but that it was incomplete: a failure of omission. They did not say, let alone trumpet, the crucial need for public acceptance before anyone should use genome editing technology to make babies. At a time when rogue scientists, or science itself, is being blamed for ignoring the public, its high and mighty representatives should expressly say the following: "Science is part of society. The decision to use this technology belongs in part to scientists, but ultimately to societies."

That, of course, is a truism. If a country makes the reproductive use of germline genome editing illegal—as many have, including (effectively) the United States—then the work cannot proceed there. But the He affair marked an especially important time for science to say this, openly and clearly. The primacy of public acceptance should have been the first sentence of any reaction by scientific leaders to He's work. Instead, it was largely absent. And this, I fear, was a self-inflicted wound.

Personally, I think that the case for germline editing, if proven safe, is strong in a few applications and weak (but not trivial) in some others. But demanding social acceptance before using it to make babies is both legally and politically right. And science would benefit if its leaders made it crystal clear that they accept, and in fact agree with, that demand. Science cannot exist, let alone thrive, without the continuing financial, legal, and political support of the societies in which it works. Its leaders need to say so: early, often, and loudly.

One other aspect of science’s reaction to the He affair deserves mention. After this event—and after the first summit, the NASEM report, and the Nuffield Council report—some people called for a formal moratorium on human germline gene editing. This generally is an unhelpful kind of symbolic politics. A moratorium is defined as a “temporary prohibition of an activity.” The earlier statements, now called insufficient, said germline editing of babies should not then be done—in effect, a moratorium. Indeed, most countries where this work could easily be done prohibit it, with bans that are not expressly temporary. When the work is already illegal in the United States, the United Kingdom, most of Europe, and (now apparently) China, what does a call for a moratorium add?

If those calling for a moratorium are looking for a binding international agreement, with real and enforced teeth, good luck to them. That path would be highly uncertain even after years of work. On the other hand, if those calling for a moratorium seek something along the lines of the landmark 1975 Asilomar Conference on Recombinant DNA—a consensus statement from the scientific community—that is effectively what the various articles and reports have done. They have repeatedly said "human germline genome editing should not be done until X, Y, and Z and we do not have X, Y, and Z." This is effectively a prohibition until those X, Y, and Z conditions are met: a "temporary prohibition of an activity" measured not in years but in conditions to be met.

To me, the calls for a moratorium are in part political theater: We oppose this more than you do; you resist using the word moratorium; we insist you use it so that we will win. I don’t often like political theater. I prefer my politics and policies to be substantive. But these demands are also, in part, efforts by those who think science was not clear enough about heeding public acceptance to regain public trust. This is how I read a recent commentary in the journal Nature signed by some of the acknowledged leaders of science. I understand and agree with the impulse; I just don’t see the importance of the “M” word.

My logic leads me to conclude that science could call for a moratorium without changing its position, by expressly saying there should be a “moratorium” until certain conditions, which in my view must include social acceptance, are met. As the children’s rhyme goes, “Sticks and stones may break my bones, but words will never hurt me.” Accepting the word moratorium, carefully defined as a set of sensible necessary preconditions, may well be a good tactical move, even if logically unnecessary.

