Science, Politics, and U.S. Democracy

Unless scientists and policymakers learn to work together effectively, both domains will suffer.

Political manipulation of scientific evidence in the interest of ideological convictions has been a commonplace of U.S. democracy since the end of World War II. In 1953, the incoming secretary of commerce, Sinclair Weeks, fired Allen Astin, director of the National Bureau of Standards, after the Bureau’s electrochemists testified for the Federal Trade Commission and the Post Office in a suit to stop a small Republican manufacturer from Oakland, California, from fraudulent advertising. The Bureau found that the product, a battery additive called AD-X2, was worthless and over time would actually harm a battery. The Bureau’s work came into conflict with the ideology of the Eisenhower administration, which believed that caveat emptor should take precedence over a laboratory analysis of the product. Senate Republicans accused the government scientists of not taking the “play of the marketplace” into account in their research. The raging controversy that followed was eventually resolved by Astin’s vindication and reinstatement, as well as by the dismissal of the undersecretary who had urged the firing in the first place. In January 1973, President Nixon abolished the White House Office of Science and Technology (OST) and the President’s Science Advisory Committee (PSAC) after some scientists spoke out publicly against the president’s plans for funding the supersonic transport and the antiballistic missile system. Congress and President Ford subsequently reinstated the office by statute.

Both parties have occasionally yielded to the temptation to punish scientists who objected to government policy by cutting their research funding. President Johnson is said to have personally scratched out certain academic research projects because of the researchers’ opposition to the war in Vietnam. When President Carter took office in 1977, his Department of Energy (DOE), during the furor over the energy crisis, inherited a study called the Market Oriented Program Planning Study (MOPPS), which projected both lower demand and a greater abundance of future natural gas supply than the “Malthusians” found acceptable. DOE reportedly sent the study back to the MOPPS team several times, seeking an assessment of future energy resources more acceptable to the administration. Officials finally ordered the study removed from the shelves of depository libraries, forced the resignation of the director of the U.S. Geological Survey, and removed Christian Knudsen from his position as director of the MOPPS study.

But the past two years have been unique in the number, scope, and intensity of press reports and scientists’ allegations of political interference with the processes for bringing objective scientific information and advice to government policy decisions. The most extensive compilation and interpretation of the allegations of misuse of science advice by the Bush administration is that produced by the Union of Concerned Scientists (UCS); a similar compilation is available on the Web site of Rep. Henry Waxman, ranking member of the House Committee on Government Reform. These accusations include suppression or manipulation by high-ranking officials of information bearing on public health and the environment, replacement of experts on advisory committees when their views came into conflict with industry or ideological interests, screening of candidates for such committees for their political views, and the deletion of important scientific information from government Web sites. Although a response to the UCS report by the director of the Office of Science and Technology Policy (OSTP) disputed some of the details, the controversy between the scientific community and the administration is not so much over whether these events occurred as over the interpretation that should be placed on them and what they might mean for the future of the nation’s democracy.

Were these cases examples of unacceptable interference by government officials with entrenched political or ideological positions, resulting in their corruption of otherwise objective science advice? Or were they examples of the unavoidable and natural balancing of political interests and many other factors that influence policy decisions, only one of which is the relevance of available scientific knowledge to the ultimate political decision? Why do the 4,000 U.S. scientists, including many of the country’s most distinguished, who have signed on to a February 18, 2004, statement decrying these events, feel so outraged? Why does the White House feel equally strongly that nothing improper has been done? Most importantly, what are the consequences for the functioning of U.S. democracy if this situation cannot be resolved?

Truth and legitimacy

In the U.S. democracy, science and politics are uniquely dependent on one another, but the relationship has never been an easy one. Science is about the search for objective evidence that would support successful predictions about the world around us. Politics is about governing based on the public’s acceptance of the legitimacy and accountability of elected officials. The search for truth in science and for legitimacy in politics both require systems for generating public trust, but these systems are not the same, and indeed they are often incompatible.

The failure to be open-minded and objective in basic science—one might call this scientific bias—is a serious obstacle to scientific progress. In their laboratory research scientists must subject themselves to a disciplined process—transparency in reporting their work, independent verification of results, faithful attention to prior research, and an unrelenting search for alternative explanations and outright mistakes. When a scientist fails to submit to this discipline, the professional penalties can be severe.

Of course, both politicians and policy scholars are quick to point out that giving scientific advice to inform important policy decisions is not at all the same as doing scientific research. In advice to policy, the scientist is often searching for consensus judgments in the absence of full knowledge. Even when scientific understanding is incomplete, judgments about the future consequences of policy are required, and the policy process makes rigorous demands on advisors. As policy scholars William C. Clark and Giandomenico Majone pointed out in 1985, if advice is to make a difference in policy, it must satisfy three criteria: the technical work on which the advice is based must be technically credible, policy relevant, and politically legitimate. More specifically, the scientific analysis must be up to standards of due diligence, using good methods and critical analysis of data. Policy relevance requires that the analysis address what the policymakers actually want to know, and in a timely manner. Legitimacy of the advice derives from an independent scientific effort to get at the truth, free from being shaped as a rhetorical instrument of one interested party or another. All three of these attributes of good advice must be perceived as such by the many relevant stakeholders with different preferred outcomes.

Scientists must understand that the officials being advised are not obligated to adopt policies based solely on the scientists’ technical analysis.

Minimizing perceptions of bias among scientific advisors is both necessary and difficult. Scholars and nongovernmental advisory institutions, such as the National Research Council (NRC), have given a great deal of attention to balancing the influence of different sources of bias, since bias can never be entirely eliminated. The results of such efforts, plus years of experience with many different processes to improve the ability of scientists to usefully and honestly inform public decisions, have led to the enormously complex system of science advisory bodies the U.S. government uses today. The difficulties are present even when the government officials seeking the advice are scrupulous in their avoidance of political interference in the work of advisors. The advisory system is fragile at best, and it cannot withstand purposeful efforts to corrupt it.

The symbiosis of science and politics

Scientists are fiercely defensive of their intellectual independence, but the financing of most of their research depends on maintaining the confidence and support of Congress and the president. The institutions of science need broad public and political support for their claims to provide both practical and cultural value to society. But the national system of research, especially longer-term or basic research, depends critically on government financial support. In 2003, the federal government provided an estimated $19 billion for R&D at U.S. universities and colleges, and education leaders know that politicians can succumb to the temptation to use the purse as a tool for disciplining scientists who publicly oppose their policies. Perhaps the most extreme case was President Nixon’s instruction to his staff to cut off federal research funds to the Massachusetts Institute of Technology because of his annoyance with MIT President Jerome Wiesner’s opposition to the antiballistic missile program. His staff wisely declined to execute this intemperate order.

The health of U.S. science also depends on public policy in a variety of other ways—foreign policies that promote or limit scientific collaborations with colleagues abroad, educational investments to attract and train the next generation of scientists, new scientific institutions and facilities that define the leading-edge capabilities of science. Finally, scientists, like other citizens, do care about how society uses the knowledge their research creates. And for this reason, many thousands are happy to serve on advisory committees without financial compensation.

Despite the discomfort they might have with scientists who seem insufficiently grateful for the government’s largess or who are too willing to oppose federal policies while accepting federal funding, politicians are also dependent on competent, objective, and useful science advice. In most cases, technical advice is simply a part of the efficient functioning of government agencies that deal with an extraordinarily broad range of technical issues. No agency can expect its own scientific staff to have all the skills required for every task. If nothing else, government scientists need to check their work against the critical judgment of peers outside government. Where political and ideological issues are not in question, this part of the advisory system works very well.

For example, the NRC—the operating arm of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine—appoints an array of academic and industrial experts to assess and assure the quality of research at government laboratories such as the National Institute of Standards and Technology. Other committees advise DOE on the research strategy for realizing fusion energy and help the National Science Foundation and the National Aeronautics and Space Administration set priorities for new telescopes on Earth and in space. In 2001, the NRC performed 242 studies for the executive branch and Congress. A staff of more than 1,000, working with some 5,000 volunteer experts from universities and industry, prepared these studies and served as quality control reviewers of the finished products. Most of these studies were quite technical and, from a policy perspective, largely uncontroversial.

Government policymakers, however, need the advice of professional experts from outside government for a more profound reason, touching on the basic structure of U.S. democracy. Americans will not give a self-selected elite authority to decide complex matters on their behalf. U.S. leaders are selected by a highly decentralized voting system that, in principle at least, chooses leaders from a broad spectrum of citizens. How can the public judge the performance of these leaders? What makes their actions in office seem legitimate to us, the voters, and what kinds of information do we seek in holding them accountable? In the U.S. political tradition, public officials are judged by what they do, not by who they are.

Contrast the U.S. system with that in France or Japan. In those countries elected officials make the decisive political decisions, but the senior posts in the ministries are held by highly educated career employees. The legitimacy of these officials comes not from the visibility to the public of what actions they take but rather from the prestige associated with their education in a small number of very special schools— the grandes écoles of France and the former imperial universities of Japan. In these countries and in Britain as well, political appointees fill only the very top layer of the ministries; the agencies are run by career professionals. This reflects a level of public trust of senior government officials not found in the United States, where politically appointed officials, serving at the administration’s pleasure, are found four or five layers deep in many parts of government.

In France, for example, when the Chernobyl nuclear power plant accident spread radioactive material over the farms of Western Europe, the French government did not announce its tests for radioactive contamination of French crops until nearly a week had passed. Government officials explained that if there had been something the public needed to know, it would have been told. When something goes seriously wrong in the United States, the public immediately demands that its leaders explain “What did they know and when did they know it?” Although the U.S. public possesses limited technical literacy and suspects that segments of the media are guilty of bias, it still turns to the press and independent voices to judge the government’s performance. It places its trust in transparency, not in political elites.

This pragmatism in U.S. politics originates with the authors of the Constitution, who sought to build a government “of and by the people,” immune to the restoration of monarchy. They were creating the world’s first constitutional democracy. Reflecting the philosophy of the Enlightenment, the founders saw science as a model for a rational approach to public choices and thus a model for democratic politics.

Politics deals with outcomes people can experience. U.S. voters have traditionally paid relatively little attention to political ideology; the two parties have been fundamentally very similar in philosophy. The philosophical goals of the society (equity, culture, freedom, spiritual well-being) have, in the past, been most effectively expressed politically through empirical evidence—“Are you better off than you were four years ago?”—not by abstract arguments. Letting the facts speak for themselves gives government policies credibility. When the public sees government officials basing their policies on objective, professional knowledge, the public confers authority on those who govern. Appeals to other sources of authority, such as religion or inherited power, disconnect accountability from authority.

This helps political scientists explain why government leaders are more likely to work behind the scenes to shape the advice they receive than they are to shut down the advisory committees. They need the validation that support from committees of experts can bring to their policies. The public expects those policies to be grounded in evidence and reasoned argument. Most presidents, from the right and from the left, have therefore sought to maintain a credible and active system for validating their policies through expert advice from outside government. However, as noted above, when sufficiently unhappy with the advice received, they have sometimes disowned the advice or even abolished the panel giving it.

The importance of trust

Experts giving advice must also earn their legitimacy. To be sure, credentials inevitably play a much bigger role in establishing a scientist’s claim to expertise than they do in legitimating elected officials. Accordingly, the public insists on transparency in the advisory process, just as it does in government policymaking. That transparency was enacted into law following the push for more open government in the 1970s. The Freedom of Information Act opened up most agency records to public and press inspection. The Federal Advisory Committee Act, passed in 1972, requires access by the press and the public to meetings of most advisory committees. Together with many conflict-of-interest laws, these acts are expressions of a political will to keep advice given to government open to the sunshine of public witness.

It follows that if either scientists or politicians so politicize their mutual engagement that they sacrifice the credibility of the scientists and the legitimacy of the government officials, the consequences to the nation’s time-honored system of governance could be serious indeed. This compels both the scientific community and the government to find a way to bridge the gaps in interests, culture, and process that divide them. They must work together to develop processes that give each the value they seek from the relationship and that also protect the integrity of the relationship against temptations by either side to corrupt it. What, then, are the mechanisms through which these bridges might be built?

Perhaps the most essential structural element in the bridge between science and politics is the understanding that scientists insist on being allowed to inform policy through balanced and expert views of relevant technical facts and best judgments. Indeed, those politicians who believe sound advice will bring added political support to their policies should insist that the technical advice be appropriately organized and managed. However, the scientists must also understand that the officials being advised are not obligated to adopt policies based solely on the scientists’ technical analysis. Many factors go into the soup of politics, and artful government requires careful weighing of all these factors.

Litmus tests of political allegiance should not be among these criteria for selecting scientists for advisory committees.

The scientist giving the advice does have recourse if she or he does not like the final decision. The scientist can exercise each citizen’s political right to oppose the policy publicly. However, to preserve the nonpolitical nature of the advisory process, the scientist might resign from the panel before going public with a political position. The events leading up to President Nixon’s abolition of OST illustrate the difficulty of trying to serve as a nonpolitical advisor while publicly expressing private positions on public issues. A member of PSAC accepted an invitation to testify before Congress about the wisdom of President Nixon’s desire to build an antiballistic missile system. In testimony the scientist not only made clear that he spoke for himself and not for PSAC; he also said that his testimony did not rest on information available only to PSAC. He spoke as a knowledgeable private citizen. The press nevertheless interpreted the testimony as evidence that PSAC had given Nixon advice he did not wish to accept, and the president reacted by shutting down PSAC and OST. This happened despite the president’s statement to PSAC, relayed by the science advisor, that PSAC members should feel free to testify on either side of the issues and need not resign before doing so.

The sensitivity of the relationship between government officials and independent advisory committees demands clear guidelines for operation. I propose four rules that would help ensure sound and uncorrupted science-based public decisions:

  • The criteria for selection of scientists to serve on advisory committees, including description of their scientific qualifications and disclosure of all other activities that might bias their judgment, should be publicly documented. Litmus tests of political allegiance should not be among these criteria. Ideological or religious criteria can be considered in making policy, but they should not be represented as science.
  • Key policy and regulatory decisions must not be deliberately deprived of relevant, independent, and expert scientific information. The science advice derived from this information should be published, along with the charter for the study, before the final regulatory decision is made.
  • An effective system of protections for whistle-blowers must be established to ensure that scientists inside government agencies can report allegations of deliberate political interference with their work and advice without fear of reprisal.
  • The president should formally document the policies that are to govern the relationship between science advice and policy, both through advisory structures using scientists outside government and those using government scientists. This should include a set of procedures like those above (and more extensively documented by the NRC) and should identify the locus of responsibility for detecting transgressions of the policy and procedures for investigation and correction if appropriate.

But who is to have the responsibility to oversee adherence to a presidential policy that insists on competent, objective, balanced, and open advice, and how is the policy to be enforced? The conventional institutional answer is the president’s science advisor.

The president’s science advisor

Before we can talk about who can bridge the gap between science and politics, we need to describe the turbulence that flows under the bridge. Scientists who have extensive government experience understand the conflict between the scientists’ demand for an advisory system giving balanced, objective, and technically expert advice and the government official’s insistence on panel members who share the president’s political philosophy. If the president’s science advisor is to mediate this conflict, he or she must understand and bridge both interests.

But listen to William D. Carey, former assistant director of the Bureau of the Budget and executive officer of the American Association for the Advancement of Science: “If a science advisor is going to count, he must be a foot soldier marching to the program of the president, not the company chaplain.” I would characterize this as an extreme, perhaps even demeaning, view of the role of science advisor to the president. I am sure John Marburger does not so characterize his job as director of OSTP in the executive office of President Bush. Indeed, to be effective, a science advisor to the president must be somewhat aloof from the demands of tactical politics. However, this independence is not to be exercised by failing to support the president’s established policies. It is won by the very high level of respect accorded the advisor’s scientific qualifications and prestige and by his or her acceptance of the broad framework of presidential policy.

If the president is serious about protecting the time-honored system of scientific inputs to inform policy, as President Bush has publicly affirmed he is, his own prestige, the effectiveness of his governance, and the strength of the nation’s commitment to science will all benefit. But while the scientific community and the press refer to Marburger as the “president’s science advisor,” this is not his official title. Bush did not appoint Marburger to the traditional White House position of assistant to the president for science and technology. His formal position is director of OSTP, reporting to chief of staff Andrew Card. Had Marburger been included in the president’s White House inner circle and given the position of assistant to the president accorded to D. Allan Bromley by President George H. W. Bush, many of the events catalogued by UCS might have been avoided.

It might be, however, that behind the stressful relations between many scientists and President Bush over the events documented by UCS, something much more fundamental and foreboding is happening in U.S. politics, a development that might lie beyond the science advisor’s ability to correct. In the earlier discussion it was pointed out that U.S. voters traditionally measure their leaders by objective assessments of what they achieve for the lives of the people. Political scientists have written that U.S. pragmatism traditionally outweighs the influence of ideology, religion, and elite connections. Scholars tell us that it was for that reason that government officials, over the years, encouraged the best and the brightest of science to advise the government or even to make careers in science-based public policy. But all that seems to be changing.

In the current presidential campaign much of the debate, especially on the Republican side, is about social, religious, and patriotic values. Some of the most intense conflicts between science advice and public policy have turned on objections by scientists to the primacy of ideology over science. Fourteen years ago, scholars were already observing a growing preference for images that present only the appearance of pragmatic achievement. We now live in an era of images and “spin.” In his 1990 book The Descent of Icarus: Science and the Transformation of Contemporary Democracy, Yaron Ezrahi writes, “In the closing decades of the 20th century the intellectual and technical advance of science coincides with its visible decline as a force in the rhetoric of liberal-democratic politics.”

The integrity of the science advisory process cannot withstand overt actions to censor or suppress unwanted advice, to mischaracterize it, or to construct it by use of political litmus tests in the selection of individuals to serve on committees. Nor can it survive threats to the job security of scientists in government when they attempt to call such political interventions to the attention of Congress or the press. Science advice must not be allowed to become politically or ideologically constructed. If we fail in the attempt to preserve the integrity of science in democratic governance, a strong source of unity in the electorate, based on common interest in the actual performance of government, will be eroded. Policymaking by ideology requires that reality be set aside; it can be maintained only by moving towards ever more authoritarian forms of governance.

Recommended Reading

  • Lewis M. Branscomb, “Science and Technology Advice to the U.S.A. Government: Deficiencies and Alternatives,” Science and Public Policy, Vol. 20, No. 2 (April 1993), pp. 67–78.
  • Yaron Ezrahi, The Descent of Icarus: Science and the Transformation of Contemporary Democracy (Cambridge, Mass.: Harvard University Press, 1990).
  • William C. Clark and Giandomenico Majone, “The Critical Appraisal of Scientific Inquiries with Policy Implications,” Science, Technology, & Human Values, Vol. 10, No. 3 (Summer 1985), pp. 6–19.
  • Sheila Jasanoff, The Fifth Branch: Science Advisers as Policymakers (Cambridge, Mass.: Harvard University Press, 1990).
  • John H. Marburger III, “On Scientific Integrity in the Bush Administration,” April 2, 2004, http://www.ostp.gov/html/ucs.html.
  • Union of Concerned Scientists, Scientific Integrity in Policy Making: Further Investigations of the Bush Administration’s Misuse of Science (Cambridge, Mass.: Union of Concerned Scientists, July 8, 2004).
  • Henry Waxman, “Politics and Science: Investigating the State of Science under the Bush Administration.”

Cite this Article

Branscomb, Lewis M. “Science, Politics, and U.S. Democracy.” Issues in Science and Technology 21, no. 1 (Fall 2004).
