The Myth of Objective Scientists

Review of

Science, Policy and the Value-Free Ideal, by Heather Douglas

Pittsburgh, PA: University of Pittsburgh Press, 2009, 210 pp.

In Science, Policy and the Value-Free Ideal, Heather Douglas of the University of Tennessee–Knoxville seeks to challenge the belief that science should be “value-free,” meaning guided only by norms internal to science. This is a worthwhile task, and one that could make a welcome contribution to debates about the role of science in policy and politics. Unfortunately, the book fails to deliver fully on its promise. Instead, it ultimately takes the conventional position of endorsing the idea that value-free science is actually possible in certain circumstances.

Douglas discusses three types of values in science: ethical/social, epistemic, and cognitive. Ethical values have to do with what is good or right, and social values cover public interest concerns such as justice, privacy, and freedom. She argues that ethical and social values are often, but not always, compatible with one another. Cognitive values include simplicity, predictive power, scope, consistency, and fruitfulness. Douglas is somewhat less clear about what she means by epistemic values, which she associates with the notion of truth, describing them as “criteria that all theories must succeed in meeting” in order to develop reliable knowledge of the world. Presumably, epistemic values are those internal to the conduct of science.

Douglas argues that values external to science, such as those associated with societal problems, should play a direct role only in deciding what research to pursue. If the public is particularly concerned about the harm caused by sexually transmitted diseases, then that priority should influence how much the government decides to spend on research in this field. But once the research begins, it should be guided only by the rules that govern the conduct of research and evaluated objectively by disinterested experts. She warns that if values external to the practice of science were to play a direct role in the process of conducting research, “science would be merely reflecting our wishes, our blinders, and our desires.” Our scientific knowledge would then reflect the world as we would like it to be, rather than as it actually is.

Douglas chooses not to pursue this line of thinking very far, even though she cites various cases indicating that scientific understanding is often precisely about our wishes, blinders, and desires. She cites the example of diethylstilbestrol, a synthetic version of the female hormone estrogen that was recommended to women in the late 1940s for “female problems” such as menopause, miscarriages, and mental health problems. After evidence surfaced that the drug was ineffective and dangerous, it remained approved because of regulators’ preconceptions about what it meant to be a “woman.” Douglas explains that the “social values of the day” about gender roles served to reinforce the view that “female hormones are good for what ails women” and made it easier to discount evidence to the contrary, especially when the societal values appeared to reinforce cognitive values such as theoretical simplicity and explanatory power. The lesson Douglas draws is of the “dangers of using any values, cognitive or social, in a direct role for the acceptance or rejection of a hypothesis.” A different lesson that she might have drawn is that societal values are all mixed up in how we practice research, and consequently in the conclusions that we reach through science, whether we like it or not.

In her analysis, Douglas routinely conflates what are conventionally called scientific judgments with political judgments and even asserts that the latter should influence the former. For instance, she writes that it is not morally acceptable for scientists “to deliberately deceive decision makers or the public in an attempt to steer decisions in a particular direction for self-interested reasons.” Yet she asserts that scientists have an obligation to consider societal outcomes in making scientific judgments that entail some uncertainty. She describes the example of a hypothetical air pollutant that is correlated with respiratory deaths but for which no causal link has been identified. If the costs of pollution control are low, Douglas argues, it would be “fully morally responsible” for the scientist to recognize the uncertainties but to “suggest that the evidence available sufficiently supports the claim that the pollutant contributes to respiratory failure.”

This logic turns the notion of precaution fully on its head. It suggests that if an action is worth taking according to the scientist’s judgment, then the evidentiary base can be interpreted in such a way as to lend support to that action. Political considerations would thus dictate how we interpret knowledge.

Douglas considers the option that, in a situation of high uncertainty, the scientist should simply answer the questions posed by policymakers rather than try to promote a particular course of action, but she ultimately rejects the distinction between a scientist who arbitrates questions that can be resolved empirically and one who renders policy advice. She argues that because science carries such authority in policymaking, scientists have a responsibility to consider the societal consequences when the scientific evidence is uncertain and ultimately to “shape and guide decisions of major importance.” Such guidance presupposes that scientists would speak with one voice about decisions, or that, if they did, their policy proposals would be consistent with public values.

It is difficult to see such advice as anything but an invitation to an even greater politicization of scientific advice. In Douglas’s hypothetical air pollution case, the policy remedy is cheap to implement, so policymakers would probably take this step even if the evidence is uncertain. But what if the politics are hard? Specifically, what if pollution control is not cheap, or if it creates winning and losing stakeholders? What should the advisor do in such a case? And what happens when the situation is characterized by experts offering legitimate but competing views on certainties and uncertainties, areas of fundamental ignorance, and a diversity of interests who pick and choose among the experts?

Douglas hints at a remedy to this quandary when she argues for greater public participation in scientific advisory processes as a way to build consensus around knowledge claims among competing interests. Although this strategy may have merit in some contexts, the suggestion that knowledge claims should be negotiated by nonexperts would strike many experts as a still further politicization of scientific advice. Douglas herself admits that in many cases such public engagement and deliberation would not be practical.

Surely, in many cases we want to preserve the distinction between scientific advice on questions that can be addressed empirically and advocacy for particular outcomes. Policy debates involving empirically unresolvable questions inevitably implicate a wide range of expertise and interests and are, as Douglas notes, rightfully the territory of democratic decisionmaking. But Douglas does not follow this principle consistently, as can be seen in her hypothetical air pollutant case. To be consistent, she would have to advise the scientist simply to explain honestly what the science does and does not prove, including the uncertainties, and then to allow the political process to determine how to proceed.

Douglas winds up on the wrong track by asking the wrong questions. In the hypothetical air pollution case, she characterizes the question as whether the pollutant is a public health threat. But science cannot determine what level of risk is acceptable to a society. Instead, a policymaker could ask whether the pollutant reaches certain atmospheric concentrations and what is known about the health effects at those concentrations, or could propose alternative regulatory actions and ask a scientist what the result would be in each case, with the degree of uncertainty clearly stated. Framing the questions posed to scientists in ways that separate scientific judgments from policy advice is one way to avoid falling prey to the value-free ideal. Recognizing that expert advisors can play different roles in decisionmaking can help improve the quality of advice and sustain the integrity of advisory processes.

Douglas has written a useful book with a clear thesis. In the end, however, it offers far more support for the myth of the value-free ideal than its overall thesis would lead one to expect. Douglas is right that moving beyond the value-free ideal makes good sense for the practice of both science and policy. But her analysis would be more consistent, and more helpful, if she pointed out that the goal should be not unattainable value-free science but more realistic value-transparent science. Scientists certainly should feel empowered to advocate for a course of action where the scientific evidence is weak or inconclusive, but they ought to be explicit in explaining that this is what they are doing. The alternative is not value-free science, but a science that hides values from view.

Cite this Article

Pielke, Roger, Jr. “The Myth of Objective Scientists.” Issues in Science and Technology 27, no. 2 (Winter 2011).