Unmasking Scientific Expertise
COVID-19 teaches us that “follow the science” is a prescription for divisive politics.
In early February 1976, two cases of swine flu were discovered at Fort Dix in New Jersey. The Center for Disease Control identified the virus as Hsw1N1, similar to the one that caused the 1918 pandemic. Serologic testing indicated that the virus had spread to more than 200 recruits. The CDC’s Advisory Committee on Immunization Practices soon recommended that “an immunization program be launched to prevent the effects of a possible pandemic.” After consulting with a group of scientific experts and public representatives, President Gerald Ford launched a nationwide vaccination program to immunize “every man, woman, and child.”
The National Swine Flu Immunization Program, which cost $137 million and received bipartisan support from Congress, soon met with controversy. The president’s critics accused him of politicizing science during an election year, while skeptics questioned the safety of the vaccine. Reports of severe adverse effects—specifically, cases of Guillain-Barré syndrome—began ricocheting across the media. As public health experts and the administration grappled with a growing public backlash and the complex logistics of a mass immunization program, including supply shortages, heated negotiations with industry, and implementation of a novel safety surveillance system, they learned something shocking: no new cases of Hsw1N1 had been detected outside the Fort Dix cluster. The epidemic never materialized. The immunization program was discontinued in December 1976, after some 40 million Americans had been vaccinated.
Prevailing public opinion—exemplified by a December 1976 New York Times opinion piece on the “swine-flu fiasco”—was that the government had botched it. “On the flimsiest of evidence,” the op-ed declared, “President Ford and the Congress were panicked into believing that the country stood on the threshold of a killer flu epidemic.” For proponents of this view, the fact that there turned out to be no epidemic was proof that the government’s response was disproportionate and harmful, possibly even motivated by “political expediency” and “the self interest of government health bureaucracy.”
The opinion of many public health experts, by contrast, was that given the stakes the government was right to err on the side of caution. A 2006 article published in the journal Emerging Infectious Diseases expressed this point of view: “When lives are at stake, it is better to err on the side of overreaction than underreaction.… In 1976, the federal government wisely opted to put protection of the public first.” For advocates of precaution, the dangers of underreaction—even once it became clear there was no epidemic—greatly outweighed those of overreaction.
Follow which science?
The predicament faced by the Ford administration in early 1976 is not altogether different from that faced by governments around the world upon learning of an outbreak of a pneumonia-like illness in Wuhan, China, in early 2020. Needless to say, this is not because the coronavirus pandemic turned out to be a false alarm. The similarity, rather, stems from the fact that in both cases public health policy had to be made in conditions of extreme uncertainty, when the possibility of error was significant and the consequences of such error were potentially enormous, not only for public health but for society as a whole.
Scientific knowledge is always uncertain when applied to real-world events, albeit to varying degrees. And the translation of such knowledge into action involves yet more uncertainty, not least about the consequences of our actions. Making and evaluating policy decisions amid great uncertainty and under urgent pressure to act requires scientific evidence—whether in February 1976 or February 2020. But it also requires making judgments about how to interpret that evidence, weigh risks, reconcile differing, sometimes incompatible values and goals, and evaluate the inevitable trade-offs that action or inaction entails. This is not, to put it mildly, what our public discourse over the past year and a half would lead one to believe.
On the contrary, the rhetoric of “follow the science” has served to mask the ineliminable role of judgment in public health policy, and thus the difficult choices the coronavirus crisis has forced us to confront. Suggesting that the correct policies follow inevitably from “the science” gives political decisions the veneer of objectivity, hiding both the uncertainties and disagreements that underlie them. This charade secures a privileged place for scientific experts in the process of political decisionmaking, while allowing politicians to outsource the justifications for their decisions. As a result, rather than political debate over what needs to be done, we hear competing claims about who is “following the science” and who is not. Caught up in the game of determining who is or is not being “scientific,” citizens and their representatives get distracted from the complex reality they face.
Scientific, political, and media elites have spent a lot of time over the past eighteen months bemoaning the public’s lack of trust in “the experts.” Various explanations have been proffered, from the rise of populism and heightened polarization to digital disinformation and inadequate science education. But conspicuously absent from this list is anything that might implicate the experts themselves—or the political and media elites who perpetuate the follow-the-science charade.
If we want to rebuild a shared public trust in expertise, we will need a more realistic and humane language to talk about scientific expertise and its place in our political life—an account of expertise that is worthy of the public’s trust. Such an account would affirm scientific expertise as a praiseworthy human achievement, indispensable to understanding the world around us and valuable for making political decisions. But it would also recognize the role of uncertainty and judgment in science, and thus the possibility of error and disagreement, including value disagreements, when using science for public policy. Reestablishing an appropriate role for science in our politics, in other words, requires restoring the central role of politics itself in making policy decisions.
Judgment and uncertainty
By early 2020 the threat of a “killer epidemic” was both real and urgent, which clearly distinguished this outbreak from the one in 1976. But this fact does not resolve the fundamental disagreements at issue in our ongoing public debates over the appropriateness of the government’s response, including business and school closures, mandatory stay-at-home orders, physical distancing rules, immunization programs, and mask mandates.
It may be comforting to think that hindsight will answer these questions for us. But, as the unresolved disagreement over the government’s response in 1976 shows, that is a chimera. The kind of reasoning needed to assess pandemic policies after the fact is essentially the same as that involved in deciding whether to implement them in the first place. In both cases, we have to use the evidence and knowledge available to us to make predictions about human behavior, disease transmission, and their interrelated effects: what might happen in the future if we do (or do not) implement certain policies, such as immunization programs or mask mandates, versus what might have happened in the past had we not implemented them?
Of course, we can and should use empirical data and scientific methods, including statistical techniques, epidemiological models, and computer simulations, to help us answer these questions. But they can’t eliminate all uncertainty—and thus the need for judgment and the possibility of disagreement. Making and assessing policy decisions during a crisis like a pandemic involve a high degree of uncertainty not just in the results of research and their policy implications, but also in choices about which results and which types of research to take most seriously. And these disagreements are difficult, if not impossible, to disentangle from the underlying value disputes that animate our political life.
Uncertainty and disagreement are of course common in science; indeed, they are necessary for its progress. In ordinary research science, such uncertainty and disagreement are kept to a relative minimum, at least within well-established fields, by shared standards of evidence, disciplinary consensus, and a robust empirical base. History shows that there can be considerable uncertainty even in such fields—as when a prevailing consensus is challenged by new data or rival standards of evidence, or is overthrown by a new consensus altogether. Even so, we are usually more than happy to leave scientific experts to deliberate among themselves, at least when it comes to such disagreements as how to reconcile relativistic and classical physics or the correct interpretation of quantum mechanics or the comparative merits of rival cosmological models.
But when scientific knowledge is applied, say, in the context of environmental regulation or clinical medicine or, yes, the outbreak of a new disease on a global scale, the amount of uncertainty multiplies along with the stakes. More scientific experts, calling on more disciplines, methods, and theories, may be looking at the problem, identifying more variables and more sources of uncertainty. And a greater diversity of experts, of published studies, of results, in turn, increases the likelihood of expert disagreement. The consequences of such disagreement, moreover, are no longer confined to the laboratory, but directly implicate nonscientists. Decisionmaking under such conditions is considerably more challenging, both practically and epistemically, than in ordinary research science. But it is also more ethically fraught since expert judgment—and the risk of expert error—may be a matter of life and death.
In the case of the coronavirus pandemic, it is not simply that the empirical evidence was (and in some cases remains) limited. A dizzying array of fields and subfields also had to be mobilized to interpret the evidence and translate it into action—from public health, virology, genomics, and clinical medicine to biostatistics, data analytics, and economics. And some of these fields and subfields have different, even conflicting, methodologies and standards of evidence. An example of such a scientific culture clash is that between public health epidemiologists, who tend to focus on population-level trends and rely on a diversity of evidence, and clinical epidemiologists, who tend to focus on clinical practice and take randomized controlled trials as the standard of evidence. In such situations, experts might wind up disagreeing not just about whether a given policy intervention is justified, but about what kind of evidence is even relevant to making this assessment in the first place.
Under such conditions of profound and destabilizing uncertainty, with millions of lives and livelihoods hanging in the balance, rhetorical appeals to “follow the science” obscure the nature and extent of the challenges and disagreements involved and the high-stakes decisions that must be made. By creating the illusion that science can deliver us from such uncertainty, this rhetoric may seem politically advantageous or even psychologically comforting—to citizens, scientists, and politicians alike. Ultimately, however, it is not only misleading but also counterproductive, as it blurs rather than clarifies the nature of science and its role in politics.
Facts and norms
We could see these dynamics playing out in the science and politics of masks.
It is well known that the US government did not recommend wearing face coverings in public in the early days of the pandemic. In fact, leading experts such as Anthony Fauci, the director of the National Institute of Allergy and Infectious Diseases, strongly cautioned against using them, citing a lack of evidence for their effectiveness. “Right now, in the United States, people should not be walking around with masks,” he told 60 Minutes in a now-infamous statement from March 2020. “Wearing a mask might make people feel a little bit better and it might even block a droplet, but it’s not providing the perfect protection that people think that it is.” (He also reiterated this point privately, according to email communications that were recently made public.) Mainstream media outlets dutifully repeated these claims. For instance, around the same time, Vox ran a story “explaining” to its readers that “there’s little evidence to support the use of face masks for preventing disease in the general population.”
Just a few weeks later, the US Centers for Disease Control and Prevention reversed course. The government recommended that all Americans wear face coverings, including cloth masks, outside the home. The World Health Organization later followed suit and, in a matter of months, mask-wearing, as well as mask mandates, had become ubiquitous. Whereas Fauci had earlier said there was “no reason” for people to wear masks in public, he now confidently stated he had “no doubt” that those who did not wear them were contributing to the risk of transmission. Vox was soon “explaining” that “performative masculinity” was the reason why some Americans refused to wear masks.
What happened?
A common answer is that the experts simply learned more: the science progressed. Fauci himself has invoked this explanation. For instance, when asked why he changed his tune about masks, he responded: “Well the data now are very, very clear.” As new evidence emerged, especially from meta-analysis studies, “It became clear,” he told 60 Minutes, “that cloth coverings … and not necessarily a surgical mask or an N95 … work … contrary to what we thought.” Popular media coverage toed this line, citing “new information” and “new research” as the reason why “scientists change their mind.”
But this account is misleading.
Masks were a common form of personal protective equipment (PPE) in other countries long before the science was said to have progressed. (Mask-wearing was also standard practice in the United States during past pandemics.) Many Asian countries, including Japan and China, did not await new meta-analysis studies before adopting this public health intervention. If the “follow-the-science” logic were correct, then these countries must either have been behaving unscientifically back in winter 2020 or have had special access to scientific evidence that Western countries lacked. But a third option is more likely: countries such as China and Japan simply weighed the trade-offs of this particular policy differently than we did here in the United States, informed, no doubt, by scientific evidence as well as a different set of cultural norms regarding mask-wearing.
Evidence in context
Certainly we know a lot more today than we did at the beginning of the pandemic about the role of droplets and aerosols in disease transmission as well as the prevalence of asymptomatic spread. And we now have more evidence about the effectiveness of masks of all types. Nevertheless, much of this evidence remains indirect (e.g., from studies that extrapolate from research on other respiratory illnesses or on animals) or comes from studies that cannot easily control for the confounding effects of other nonpharmaceutical interventions, such as physical distancing, handwashing, or stay-at-home orders.
Randomized controlled trials (RCTs) are designed to isolate the causal efficacy of the intervention under study, which is one reason many experts consider them the gold standard of evidence in medicine. Yet, to date, there has been only one completed RCT, in Denmark, on the use of masks against SARS-CoV-2. And that study failed to establish any statistically meaningful reduction of infection among mask wearers, a finding consistent with past studies. For instance, a well-known RCT from 2015 explicitly cautioned against using cloth masks for hospital health care workers in “high-risk situations.”
Does “the science” therefore show that masks are ineffective, after all? No. First, these RCTs have important limitations, not least that they studied only the effect of masks in conferring individual protection against infection, not their effect in limiting spread. But limiting spread (or “source control”) has been a major justification for mask use during the pandemic, especially given the role of asymptomatic carriers in transmitting the virus. Moreover, RCTs that study the effectiveness of policy interventions in real-world settings, like the Danish study, have inherent limitations, including lack of blinding, difficulties policing post-randomization effects, and complications due to user error or reliance on self-reported data.
Finally, there are good reasons to think that RCTs should not be taken as the gold standard of evidence in any case, especially during a pandemic. Some observers have argued that RCTs are incapable, by design and within the bounds of ethics, of proving or disproving the effectiveness of masks in a crisis like COVID-19. Besides the logistical challenges involved in doing effectiveness studies in real-world settings, conducting such an RCT would require prohibiting use of masks in the control group, thus potentially exposing those participants to greater risk of illness or death. (The Danish study got around this ethical challenge by running its trial before masks were mandated in Denmark.)
Meanwhile, there is plenty of other evidence—from observational data, systematic reviews, computer modeling, animal studies, and basic research on disease transmission—to suggest that masks are effective, especially when combined with other interventions. To be sure, this finding leaves many open questions: not only practical questions about policy, but also technical questions about the relative effectiveness of different nonpharmaceutical interventions and types of masks in various settings, as well as the precise dynamics of disease transmission. But the fact that our knowledge is imperfect, in flux, or subject to expert disagreement does not mean that mask mandates are bad policy or that they lack evidence. As one public health policy expert put it, “There’s a lot more we would like to know. But given that [mask use] is such a simple, low-cost intervention with potentially such a large impact, who would not want to use it?”
Not physics
This, however, is a prudential judgment. And while the pragmatic and pluralistic approach to evidence that underlies it may well be justified, some experts continue to reject that approach, preferring a hierarchy of evidence with RCTs at the top. If there is a consensus here, it applies to the policy recommendation that mask mandates are worthwhile. (Despite their findings, the authors of the Danish study did not reject mask policies, citing the methodological limitations discussed above.) Yet this is hardly a scientific consensus of the kind that characterizes, say, relativistic or quantum physics or the modern evolutionary synthesis. To use the term “scientific consensus” in both contexts saps that term of any real meaning.
Rather than “following” from “the science,” mask mandates follow from a set of interrelated judgments about how to interpret the scientific evidence and apply it to the circumstances at hand—including how to weigh the relative risks and trade-offs of implementing or failing to implement such policies. Fauci acknowledged as much when explaining the reasons for changing mask guidance: “Very early on in the pandemic, there was a shortage of PPE and masks for health care providers who needed them desperately.… So the feeling was that people who were wanting to have masks in the community … might be hoarding masks and making the shortage of masks even greater.” This justification is not inconsistent with the idea that the policy changed because of new evidence. (If cloth masks work, for instance, then the general public can wear them without depleting the supply of medical-grade masks that health care workers need.) But the rationale here is not so much new evidence as new judgments about what to do in light of that evidence.
These judgments are partly technical in nature, but they are also practical, combining expert knowledge and empirical data with sheer conjecture: assessing the availability of different kinds of PPE or predicting how a given policy will interact with other public health measures or influence human behavior. Finally, these judgments are unavoidably ethical and political: Who should have priority access to PPE? Do this policy’s public health benefits outweigh its social costs?
There is nothing wrong, in principle, with experts making these kinds of judgments, so long as they are open and honest about doing so. And our elected officials can and should take such judgments into consideration when formulating public policy, along with the various other concerns (social, political, practical, economic, and ethical) relevant to the decision at hand and its potential consequences. The problem with the follow-the-science charade is that it papers over this messy reality, concealing both the rationale and context for such decisions from public view. And that makes policymaking appear to be a rote exercise in the application of “scientific” rules, rather than a deliberative process, informed by expert judgments and interpreted and enacted by politicians at various levels of government, who are responsive to a range of pressures and considerations. What the public sees is different policymakers and different experts all claiming to be “following the science,” often in different directions at different times. As a result, public health policies can start to look like arbitrary political whims, particularly to those already disinclined to trust scientific and political elites. And this, in turn, produces both bewilderment and backlash—especially when “the science” changes, as inevitably it does under conditions of radical uncertainty.
In this way, the follow-the-science charade winds up undermining the legitimacy of public health policies and the experts who advocate them. What better illustration of this dynamic is there than the totem-like status that masks have acquired in our national politics, signaling, in the extreme, little more than allegiance to political tribe?
The value of disagreement
The follow-the-science charade prevents us from confronting pandemic policies as political decisions. But political decisions are of course precisely what they are, whatever their basis in scientific evidence. Treating such decisions as purely scientific in nature—and thereby suppressing the value judgments and disagreements that underlie them—forces science to serve as a proxy for our political disputes. This outcome appears to benefit scientific experts, by guaranteeing them an indispensable role in the political process as gatekeepers of knowledge needed to make policy decisions. It also appears to benefit political leaders, by allowing them to avoid taking responsibility for such decisions and to blame the scientific experts when policies turn out to be unpopular. And it appears to benefit both citizens and their representatives by painting the opposition as not just wrong but irrational and thus unworthy of recognition.
But the charade ultimately erodes the credibility of both experts and lawmakers, undermining the legitimacy of the policies they advocate. If we pretend our disagreements about public policy are fundamentally scientific in nature, then our political discourse will inevitably devolve into counterproductive debates about “the science.” Rival factions will appeal to their own evidence (or their own interpretations of it) and champion their own experts, while trying to discredit their opponents. “The science” becomes a shibboleth, rather than an aid to public decisionmaking. And the rules that do (or do not) “follow” begin to resemble cultural prohibitions more than public policies: taboos to be ritualistically followed or transgressed, depending on one’s tribal affiliation. Meanwhile, those members of the public who do not identify strongly with either faction are left bewildered, wondering whether either side really has any idea what it’s talking about or whom to trust.
It would be far healthier for both science and politics to surface the disagreements that are really driving these debates—especially those value disagreements about whether and when precautionary approaches to public health policies are appropriate.
We should not expect these disagreements to track our ideological divisions too neatly, or the political alignments surrounding them to remain stable over time. Thus in 1976 it was a Republican administration that adopted precautionary policies, based on expert advice, to prevent a potential epidemic. And it was the mainstream media—the Times no less—that criticized these policies as disproportionate, alarmist, and motivated by “the self interest of government health bureaucracy.” Even in the early days of the COVID-19 outbreak, some mainstream media outlets and public health experts downplayed the threat posed by the novel coronavirus, portraying it as less significant than the flu, before condemning this line as a dangerous conspiracy theory peddled by right-wing media.
Similarly, in the months before the 2020 election, mainstream media coverage tended to focus less on the dangers of misinformation and conspiracy theories about COVID-19 vaccines, and more on President Trump’s “rush” to develop them. A Kaiser Family Foundation poll from September 2020 reported that over 60% of Americans shared this worry, suggesting widespread distrust in the safety and effectiveness of any COVID-19 vaccines developed under Operation Warp Speed. The ideological valences may have since switched, once safe and effective vaccines were developed and especially after Donald Trump was out of the White House. But vaccine hesitancy has persisted—not only among conservatives and rural Americans, but also among essential workers and people of color and even some health care workers.
We will never eradicate inconsistency or opportunism from our political discourse. But we can and must bring the disagreements, including expert disagreements, masked by the follow-the-science charade out into the open. At stake is not simply the legitimacy of certain public health policies, but the legitimacy of science-based policy as such. In this sense, today’s crisis of scientific expertise must be seen as inseparable from—and indeed contributing to—the broader legitimation crisis faced by our political institutions. The charade is surely more symptom than cause of this deeper crisis. But neither science nor politics is well served by perpetuating it.
We need scientific experts to help us grapple with the kinds of questions and challenges posed by crises like the current pandemic. And we need scientific experts to make judgments, including ethical and political judgments, when offering us their advice, and to be honest and transparent in making those judgments. Making policy decisions during a pandemic is unavoidably both scientific and political, requiring the mutual cooperation of experts and nonexperts alike.
Surfacing the political and philosophical disagreements that underlie our public debate about the pandemic will not necessarily resolve them. But it might allow us to see the nature and significance of these disagreements a little more clearly—including that fundamental disagreement about the proper place of scientific expertise in our political life.