Scientific Controversies as Proxy Politics

Even for scientists, policy decisions entail much more than science.

In science policy circles, it’s a commonplace that scientists need to be better communicators. They need to be able to explain their science to 11-year-olds or policy makers, informed by framing studies, and using techniques from improv theater. Science communication is important for “trust” and “transparency,” and above all “public understanding of science”—meaning public deference to scientific expertise.

In January 2015, the Pew Research Center announced a set of findings comparing the views of “the public” and “scientists”—a random sample of US adults and members of the American Association for the Advancement of Science (AAAS), respectively—on 13 high-profile science controversies. Some of the discrepancies were striking:

  • 88% of scientists said that genetically modified foods are safe to eat, but only 37% of the public agreed.
  • 86% of scientists said that the MMR (measles, mumps, and rubella) vaccine should be required, compared with 68% of the public.
  • 87% of scientists thought that climate change is mostly due to human activity, but only 50% of the public was in agreement.

These gaps between the scientists and the public were seen as a problem, and one that science communication could solve. Stefano Bertuzzi, executive director of the American Society for Cell Biology, called the gaps “scary,” then promoted an initiative that aims to change “the public mindset to accept science in the real world without undue fear or panic.” Alan Leshner, then-CEO of AAAS, called for “respectful bidirectional communication” between scientists and the public—although, even here, the ultimate goal was to increase “acceptance of scientific facts.” Very few commentators questioned the conceptual division of the public and scientists or considered the possibility that high-level percentages might conceal disagreements among different scientific fields.

As a philosopher of science and STS (science and technology studies) researcher, I’ve spent much of the past several years examining public scientific controversies. In my work, I’ve found that simple distinctions between the public and scientists often miss the deeper fault lines that do much more to drive a controversy. Neither group is a uniform, homogeneous mass, and often the most important divisions cut across the public-scientist divide. Similarly, the simple explanation that the public must be ignorant of scientific facts—what STS researchers call the “deficit model”—misses the ways in which members of the public offer deep, substantive criticisms of those “facts.”

Indeed, in many cases, scientific controversies aren’t actually about the science. Instead, science provides an arena in which we attempt to come to terms with much deeper issues—the relationship between capitalism and the environment, the meaning of risk, the role of expertise in a democracy. Science, in other words, serves as a proxy for these political and philosophical debates. But science often makes a very poor proxy. Scientific debates are often restricted to credentialed experts, with the result that nonexperts are ignored. And scientific debates are often construed in very narrow, technical terms that exclude concerns about economics or the cultural significance of the environment. These debates thereby become intractable, as nonscientific concerns struggle for recognition in a narrowly scientific forum.

One can see this drama play out in three high-profile scientific controversies: genetically modified organisms (GMOs), vaccinations, and climate change. In each case, I argue, talking exclusively about the science leads us to ignore—and hence fail to address—the deeper disagreement.

GMOs: Lessons from a co-op

In 2012, shortly after finishing my PhD, I was elected to the board of directors of a local food co-op in northern Indiana. Many members of the co-op held the kind of environmentalist views that you would expect. But, at the same time, we also had many members who were faculty or graduate students in the local university’s biology department. These ecologists and molecular biologists generally shared the views of the nonacademic co-op members. But there was one striking point of divergence: GMOs. Many of the biologists believed that genetic modification was a powerful tool to increase food production while using fewer synthetic chemical inputs. They didn’t think that the co-op needed to go out of its way to promote GMOs, but they did bristle at the idea that the technology was somehow anti-environmentalist. Other members were deeply opposed to GMOs and wanted the co-op to explicitly ban those foods from our shelves.

As I looked into the GMO controversy more carefully, both within my co-op and more broadly, I quickly found that it wasn’t a matter of “progressive” scientists vs. “anti-science” environmentalists. For one thing, often the scientists were themselves environmentalists. In our co-op, they shared many of the views of the other members. They wanted food that was locally grown, using sustainable practices that minimized or avoided the use of chemical pesticides and fertilizers. They generally didn’t like “Big Ag” or industrialized agriculture. At the same time, in the broader controversy, not all scientists were completely uncritical GMO boosters. Some ecologists—whose work emphasizes the complexity of biological systems and the unintended consequences of human actions—are more skeptical of genetic modification.

Because there are scientists on both sides of the GMO controversy—and scattered across the range of positions between “pro” and “con”—it isn’t a matter of undeniable scientific fact vs. ignorance and fear. The standard way to test whether a food or chemical is safe is to feed it to lab rats under highly controlled conditions for 90 days. But lab rats aren’t humans, a controlled laboratory diet doesn’t necessarily correspond to the range of typical human diets, and 90 days may not be long enough to detect the effects of decades of eating genetically modified foods. Consequently, some scientists—people with PhDs in biology or toxicology from highly respected universities—think that the standard methods are too unreliable.

Similarly, when it comes to the benefits of GMOs—in terms of yields, reduced pesticide use, or profits for small farmers—my own research has found that dueling experts cite different kinds of studies. Pro-GMO experts cite surveys of farmers, which describe real-world conditions but often have methodological limitations (especially when the surveys are done in developing countries). Anti-GMO experts cite controlled experiments, which are methodologically rigorous but may not tell us much about what ordinary farmers will experience.

I’ve concluded that it’s deeply counterproductive to think of the GMO controversy in terms of scientists vs. the public or facts vs. emotion. Those two frames focus on who’s involved and how they reason, and make empirical mistakes on both counts. A better framing would focus on what’s at stake. What do people involved in the controversy actually care about? What benefits do they think GMOs will bring, or what harms do they think GMOs will cause? Once we identify these driving disagreements in the controversy, we may be in a better position to design policy compromises that address the concerns of both sides.

For the GMO controversy, I believe that the driving disagreement is over what rural sociologists call “food regimes”: political and economic ways of organizing food production. Today, especially in North America, the dominant food regime treats food as a commodity and food production as a business. Food is produced to be sold into a global market, and, as such, is valued primarily in economic terms—how much do we produce, and how much does it cost to produce it? Organic agriculture might have a place in this system, but as a boutique product, something that farmers can sell at a premium to wealthy customers. GMOs, by contrast, have the potential to increase production while reducing costs: feeding the world while expanding (someone’s) bottom line. As such, GMOs fit easily within the dominant food regime.

In my experience, many proponents of GMOs—from international agribusiness giants to farmers to molecular biologists—simply assume that this market-based system is the only way to produce food. But many critics reject this assumption. They imagine a very different food regime, one in which food is valued primarily in cultural and ecological terms, not economic ones. This perspective is fundamentally opposed to the use of synthetic chemical pesticides, and the two major uses of GMOs (herbicide tolerance and insect resistance) are bound up with exactly those chemicals. In other words, a lot of disagreement about GMOs isn’t about whether they’ll give you cancer. It’s not even really about GMOs. Rather, it’s about deep philosophical disagreements over the way the food system relates to the cultural, economic, and ecological systems that surround it. GMOs act as a proxy—one proxy among many—for this deep philosophical disagreement.

Recognizing that the GMO controversy is about rival food regimes takes us well beyond the science vs. anti-science framing. For one thing, it lets us recognize that developing better ways of testing the safety of these foods won’t address the deeper political, economic, and ecological issues. Broader expertise—especially from social science fields, such as economics, sociology, and anthropology—is needed. Most important, we need to take up, explicitly, the question of what kind of food system, based on what values, we should have. This question cannot be answered by scientific evidence alone, although scientific evidence is, of course, relevant. Answering this question requires political deliberation at many levels of government and civil society.

Vaccines: Varying concepts of risk

In his book on the recent history of the vaccine controversy in the United States, Michigan State University historian Mark Largent describes two different conceptions of risk. On the one hand is risk as understood by experts in fields such as public health, decision theory, and policy. In these fields, risk is the probability or frequency of a hazard across an entire population, given a certain policy: how many people will get measles or suffer a serious side effect from a vaccination, for example. Further, the optimal or rational policy is the one that minimizes the total sum of these hazard-frequencies (perhaps under the constraint of the available budget). For instance, a mandatory vaccination policy that promised to prevent 5,000 measles cases, even while leading to 50 cases of serious side effects, would seem to be well worthwhile.
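
To make the logic of this statistical conception concrete, here is a minimal formalization of my own (an illustrative sketch, not notation drawn from Largent or from any public health agency). For each candidate policy, weight every hazard by its expected frequency across the population, add the terms up, and choose the policy with the smallest total:

\[
\mathrm{ExpectedHarm}(p) \;=\; \sum_{i \in \text{hazards}} w_i \, \mathbb{E}[N_i \mid p],
\qquad
p^{*} \;=\; \arg\min_{p} \mathrm{ExpectedHarm}(p)
\]

Here $N_i$ is the number of people who suffer hazard $i$ under policy $p$, and $w_i$ is an (often implicit) weight reflecting how serious that hazard is taken to be. With roughly equal weights, the mandatory-vaccination example above trades 50 serious side effects for 5,000 prevented measles cases, so it minimizes the total and counts as the rational choice.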

This conception of risk has important epistemological implications—that is, significance for what kind of knowledge is valuable. First, on this conception of risk, quantitative measures of hazards and their probabilities are essential. Qualitative phenomena that cannot be statistically aggregated across the entire population—such as the trustworthiness of experts—cannot be included in the most rigorous kind of hazard-minimizing calculation. Second, it’s not strictly necessary to understand why and how a certain policy will have its effects. All a policy maker absolutely needs is a reliable prediction of what these effects will be. Since research methods such as large-scale epidemiological studies, randomized controlled trials, and computer simulations are thought to deliver reliable, quantitative predictions, many policy makers and researchers in these fields have come to favor these approaches to research.

This conception of risk also tends to favor an expertise-driven, or technocratic, approach to governance. Once we have reliable predictions of the effects of various policy options, ideal policy making would seem to be simply a matter of choosing the option with the lowest total body count (again, perhaps subject to budgetary and legal constraints). Of course, policy makers are accountable to the public, and the public may not favor the optimal, hazard-minimizing policy. But these are, on this view, deviations from rationality, and so the product of emotion and ignorance. Tools from psychology and public relations might then be used to educate the public, that is, to make it more compliant or deferential to expert judgment.

Hopefully, the last paragraph gave you pause. The technocratic approach to policy making had its heyday in the 1960s and 1970s; but over the past several decades we have gradually moved to a policy culture that places more emphasis on public engagement and deliberative democracy. We recognize that expert judgment can be fallible, that nonexperts might understand the situation on the ground in ways that are lost in the view from 30,000 feet, and that good policy must take into account factors that can’t be easily quantified. This means that insofar as we have come to recognize limitations to expertise-driven policy making, we also have reason to think that the classical, statistical conception of risk has its limitations.

Returning to Largent, he contrasts what I’m calling the “statistical” conception of risk with the way vaccine-hesitant parents often think about risk. When parents make a medical decision for their child, their focus is on their child, not an overall social balance of costs and benefits. This approach to making decisions can potentially lead to selfish choices; but as philosopher Maya Goldenberg points out, we typically hold parents—specifically, mothers—responsible for acting in the best interest of their children. In addition, Largent notes that our society has developed a culture of responsible parenting, in which parents are expected to research issues from breast-feeding to educational philosophies to vaccines, then work collaboratively with experts to make the best decision for their children. In this cultural context, it’s not at all surprising that some parents would take steps to educate themselves about the potential vaccine side effects, then feel obligated to make a decision that protects their individual child from those potential side effects.

Parents’ individual conception of risk has its own epistemological implications. When it comes to the decision to vaccinate or not, the question is not how many people will develop measles or side effects, but instead whether their individual child will suffer a side effect or require hospitalization (or worse) if he or she contracts measles. This question requires a good understanding of the causal factors that contribute to vaccine side effects or measles complications. Are some children genetically predisposed to develop autism or a similar condition if their immune systems are overloaded by multiple concurrent vaccines? Do some children have immune systems that are especially vulnerable to a measles infection? Since these effects—if they exist—are rare, addressing them using an epidemiological study or randomized clinical trial would require a huge sample. A carefully designed case-control study would be much more informative. So it’s not surprising that some groups of vaccine-skeptic parents feel that the large, established body of vaccine research—based on epidemiological studies and clinical trials—doesn’t address their concerns and that parents should play a role in shaping the direction of vaccine research.
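
A rough back-of-the-envelope calculation, offered here as my own illustration rather than a figure from the vaccine literature, shows why rare effects strain these methods. Suppose a hypothesized side effect occurs in 1 in 10,000 children. By the standard “rule of three,” a trial that observes zero cases among $n$ participants can only rule out rates above roughly $3/n$, so being reasonably confident of seeing even a single case requires on the order of

\[
n \;\gtrsim\; \frac{3}{p} \;=\; \frac{3}{1/10{,}000} \;=\; 30{,}000
\]

participants, and far more to estimate the rate or compare subgroups. A case-control design instead starts from children who already have the outcome and looks backward at their exposures, so rare outcomes do not require enormous samples.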

Vaccines are a proxy for a much deeper controversy about how we understand risk and expertise. It’s easy to caricature experts as distant and dispassionate, and just as easy to caricature parents as emotional and selfish. Both caricatures are misleading, and not just because they ignore the compassion and dedication of public health experts and the way parents evaluate the reliability of different sources of information, from pediatricians to YouTube. Perhaps more important, the caricatures miss the relationship between experts and parents. Are experts and parents equal partners, or should parents simply defer to the authority of expertise? As with GMOs, this question cannot be answered simply by pointing to the scientific evidence that vaccines are safe and effective.

Climate change: Debating a hockey stick

A controversy about the way scientists estimate historical climate trends—called “paleoclimate” research—can help us understand the way even very technical disputes can serve as proxies for deeper social, political, or economic issues. The example I have in mind is the “hockey stick” controversy concerning paleoclimate estimates produced by climatologist Michael Mann and his collaborators in the late 1990s. These estimates show a dramatic increase in average temperatures in the Northern Hemisphere over the past few decades, relative to the past 1,000 years, and when presented in graph form they resemble a hockey stick: a long, roughly flat trend followed by a recent, rapid increase. In 2003 and 2005, Stephen McIntyre and Ross McKitrick published a pair of papers with statistical criticisms of the methods used by Mann and his colleagues to construct the hockey stick graph. Then, in 2007, Eugene Wahl and Caspar Ammann published an important response to McIntyre and McKitrick’s criticisms.

It’s worth taking a moment to understand the backgrounds and areas of expertise of those involved. McIntyre spent his career in the mining industry; he holds a bachelor’s degree in mathematics and a master’s degree in philosophy, politics, and economics, but he has neither a PhD nor an academic position. McKitrick is an economist at the University of Guelph in Canada. Today he is involved with the Fraser Institute, a policy think-tank that is generally considered conservative or libertarian, and The Global Warming Policy Foundation, which, according to its website, “while open-minded on the contested science of global warming, is deeply concerned about the costs and other implications of many of the policies currently being advocated.” Wahl and Ammann are both professional climate scientists. Wahl is a researcher in the National Oceanic and Atmospheric Administration’s paleoclimatology program; Ammann is at the National Center for Atmospheric Research, which is funded by the National Science Foundation and administered by a consortium of research universities.

Perhaps because they are not climate scientists, McIntyre and McKitrick focus their criticism on the mathematical details of estimating past climate rather than on the physical science used to build climate models. In their response, Wahl and Ammann discuss the need to consider both false positives and false negatives in climate reconstruction and climate science more generally. A false positive is the use of a model—such as a reconstruction of long-term historical temperatures—that is not actually accurate. A false negative, by contrast, occurs when we reject a model that is actually accurate. Wahl and Ammann argue that McIntyre and McKitrick have taken an approach that strenuously avoids false positives, but does not take into account the risk of false negatives. In a 2005 blog post—around the time of the hockey stick controversy—McIntyre more or less acknowledges this point, writing that his “concern is directed towards false positives.” By contrast, in their paper, Wahl and Ammann prefer an approach that balances false positives and false negatives. Consequently, they conclude that the estimates produced by Mann and his colleagues—the estimates used to produce the hockey stick graph—are acceptable.

The philosophy of science concept of “inductive risk” lets us connect the standards of evidence used within science to the downstream social consequences of science. In philosophical jargon, “induction” refers to an inferential move that stretches beyond what we can directly observe: from data to patterns that generalize or explain those data. In the case of the hockey stick, the data are tree ring measurements; the inferential move is to historical climate patterns. Inductive risk refers to the possibility that this kind of inferential move gets things wrong, leading to false positives and false negatives. Philosophers, such as Heather Douglas, argue that the way we balance false positives and false negatives should depend, in part, on what the potential downstream consequences are. If the social consequences of a false positive error are about as bad as the consequences of a false negative error, then it makes sense to balance the two kinds of error within the research process. But if one kind of error is much more serious, then researchers should take steps to reduce the risk of that kind of error, even if that means increasing the risk of committing the other kind of error.
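
A stylized decision rule can make this balancing explicit (a schematic illustration of my own, not Douglas’s notation). Write $L_{FP}$ for the badness of a false positive, $L_{FN}$ for the badness of a false negative, and $q$ for the strength of the evidence for the hypothesis, expressed as a probability. Comparing expected losses, it makes sense to accept the hypothesis when

\[
(1 - q)\,L_{FP} \;<\; q\,L_{FN},
\qquad\text{that is, when}\qquad
q \;>\; \frac{L_{FP}}{L_{FP} + L_{FN}}.
\]

If the two errors are judged about equally bad, the threshold sits near one-half and the evidence is weighed evenhandedly; if a false positive is judged far worse, the threshold creeps toward one and only near-conclusive evidence will do. The algebra is trivial, but it makes the key point vivid: where the threshold sits depends on value judgments about $L_{FP}$ and $L_{FN}$, not on the data alone.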

To apply the inductive risk framework to the hockey stick graph, we need to think about the meaning and social consequences of false positive and false negative errors. When it comes to estimating historical climate patterns, a false positive error means that we have accepted a historical climate estimate that is, in fact, quite inaccurate. This kind of error can lead to the conclusion that today we are seeing historically unusual climate change, which, in turn, feeds into the conclusion that we need to adopt climate change mitigation and adaptation policies, such as shifting away from fossil fuels and reducing carbon emissions. But, on the assumption that the paleoclimate estimates are actually inaccurate, these policies are unnecessary. So, taken together, the likely social consequences of a false positive error include the economic costs of unnecessary climate policies, both for the economy in general, and for the fossil fuel industry, specifically. Furthermore, some conservatives and libertarians worry that environmental regulations, in general—and climate change mitigation policies such as cap and trade, specifically—are “socialist.” That is, above and beyond the economic impact of cap and trade, these individuals are concerned that environmental regulations are unjust restrictions on economic freedom.

Alternatively, a false negative error means that we have rejected historical climate estimates that are, in fact, quite accurate. This kind of error can lead us to believe that we are not seeing any unusual climate change today, which, in turn, can lead us to take an energy-production-as-usual course. But, if the paleoclimate estimates are actually accurate, staying the course will lead to significant climate change, with a wide variety of effects on coastal cities, small farmers, the environment, and so on. So, taken together, the likely social consequences of a false negative error include the humanitarian and environmental costs of not mitigating or adapting to climate change.

These links between false positive and false negative errors, on the one hand, and the downstream social and economic effects of climate policy, on the other, can help us understand the disagreement between mainstream climate scientists and skeptics such as McIntyre and McKitrick. It’s plausible that many mainstream climate scientists think that the humanitarian and environmental costs of unmitigated climate change would be about as bad as, or much worse than, the economic impact of policies such as cap and trade. In terms of inductive risk, this would mean that false negatives are about as bad as, or much worse than, false positives. This might explain why Wahl and Ammann want to take an approach to estimating past climate that balances the possibility of both kinds of error.

Turning to the other side, there’s good reason to think that McIntyre and McKitrick believe that the consequences of cap and trade and other climate policies would be much worse than the consequences of unmitigated climate change. McIntyre’s career in the mining industry makes it reasonable to expect that he would be sympathetic to the interests of mining and fossil fuels; obviously those industries are likely to be seriously affected by a transition away from fossil fuels. And, as noted previously, McKitrick has connections to a group that describes itself in terms of concerns about the economic effects of climate policies. In terms of inductive risk and the hockey stick, this means that, for them, false positives are much worse than false negatives.

Putting this all together, disagreements over the hockey stick graph can be traced back to disagreements over the relative importance of false positives and false negatives, which, in turn, can be traced back to disagreements over the relative seriousness of the social impact of cap and trade and unmitigated climate change. A technical dispute over how to estimate past climate conditions is a proxy for different value judgments of the consequences of climate change and climate policy.

This conclusion can be generalized beyond our four experts. Climate skepticism is especially high in central Appalachia, Louisiana, Oklahoma, and Wyoming—all areas of the country that are economically dependent on the fossil fuel industry. Quite reasonably, if your individual interests align with the interests of the fossil fuel industry, then a false positive error on climate change poses a direct risk to your economic well-being. That increased concern about false positive errors translates directly into climate skepticism.

It shouldn’t be surprising that people with an interest in fossil fuels might be motivated to be climate skeptics. But my point is that this isn’t acknowledged in the way we talk about climate science. Wahl and Ammann and McIntyre and McKitrick exchange technical criticisms about paleoclimate reconstruction and make points about the relative importance of false negative and false positive errors. These exchanges are unlikely to go anywhere, because the two sets of authors never address the deeper disagreement about the relative threat posed by climate change and climate policy. In other words, the technical debate isn’t just a proxy for a deeper political and economic disagreement; by keeping our attention myopically fixed on methods, the technical debate prevents us from recognizing, taking on, and trying to reconcile the deeper disagreement.

Making proxies more productive

Proxy debates do not necessarily prevent us from recognizing deeper disagreements. Consider the GMO controversy. The entire system of food production is too complex for us to debate as a whole, especially when we need to make fine-grained policy decisions. While keeping the rival general pictures in mind—the status quo food regime and the sustainability-based alternative—we can focus a policy discussion on a specific issue: How do GMOs fit into the general pictures offered by the two food regimes? Are there uses of GMOs that fit into both pictures? If so, how do we use policy to direct GMO research and development toward those uses? If not, how do we ensure that advocates on both sides have reasonable opportunities to pursue their different systems of food production?

Similarly, we might ask how public health policy and research funding can be used to address the concerns of both medical experts and vaccine-hesitant parents. Or how climate mitigation policy can be linked to economic development policies that will move places such as West Virginia away from economic dependence on fossil fuel extraction.

We need scientific evidence to answer these questions, and debates over the answers will likely still involve disagreements over how to interpret that evidence. But it is more obvious that science, alone, can’t answer these questions. For that reason, debates framed in terms of these kinds of questions are less likely to let us lose track of the deeper disagreement as we debate the technical details.

Recommended Reading

  • Kelly Bronson, “Reflecting on the Science in Science Communication,” Canadian Journal of Communication 39 (2014): 523–37.
  • Heather Douglas, Science, Policy, and the Value-Free Ideal (Pittsburgh, PA: University of Pittsburgh Press, 2009).
  • Justin Farrell, “Corporate Funding and Ideological Polarization About Climate Change,” Proceedings of the National Academy of Sciences 113, no. 1 (2015): 92–97.
  • Maya Goldenberg, “Public Misunderstanding of Science? Reframing the Problem of Vaccine Hesitancy,” Perspectives on Science (January 2016).
  • Daniel Hicks, “Epistemological Depth in a GM Crops Controversy,” Studies in History and Philosophy of Biological and Biomedical Sciences (2015).
  • Peter Howe, Matto Mildenberger, Jennifer R. Marlon, and Anthony Leiserowitz, “Geographic Variation in Opinions on Climate Change at State and Local Scales in the USA,” Nature Climate Change 5, no. 6 (2015): 596–603.
  • Mark Largent, Vaccine: The Debate in Modern America (Baltimore, MD: Johns Hopkins University Press, 2012).
  • Henry Richardson, Democratic Autonomy (New York, NY: Oxford University Press, 2002).

Cite this Article

Hicks, Daniel J. “Scientific Controversies as Proxy Politics.” Issues in Science and Technology 33, no. 2 (Winter 2017).
