Retiring the Social Contract for Science

Updating the way we talk and think about the place of science in society will lead to more effective policy.

A tenet widely shared among policy scholars holds that the way people talk about a policy influences how they and others conceive of policy problems and options. In contemporary political lingo, the way you talk the talk influences the way you walk the walk.

Pedestrian as this principle may seem, policy communities are rarely capable of reflexive examinations of their rhetoric to see if the words used, and the ideas represented, help or hinder the resolution of policy conflict. In the science policy community, the rhetoric of the “social contract for science” deserves such examination. Upon scrutiny, the social contract for science reveals important truths about science policy. It evokes the voluntary but mutual responsibilities between government and science, the production of the public good of basic research, and the investment in future prosperity that is research.

But continued reliance on it, and especially calls for its renewal or rearticulation, are fundamentally unsound. Based on a misapprehension of the recent history of science policy and on a failed model of the interaction between politics and science, such evocations insist on a pious rededication of the polity to science, a numbing rearticulation of the rationale for the public support of research, or an obscurantist resystemization of research nomenclature. Their effect is to distract from a new science policy, what I call “collaborative assurance,” that has been implemented for 20 years, albeit in a haphazard way.

One cannot travel the science policy corridors of Washington, D.C., or for that matter, read the pages of this journal, without stumbling across the social contract for science. The late Rep. George E. Brown, Jr., was fond of the phrase, as Gerald Holton and Gerhard Sonnert remind readers of Issues (Fall 1999) in their argument for resurrecting “Jeffersonian science” as a “third mode” to guide research policy. The social contract for science is part of the science policy scripture, including work by Harvey Brooks, Bruce Smith, the late Donald Stokes, and others. Its domain is catholic: Last year’s World Conference on Science, co-organized by the United Nations Educational, Scientific, and Cultural Organization and the International Council for Science, called for a “new social contract” that would update terms for society’s support for science and science’s reciprocal responsibilities to society.

In a recent book, I unearth a more complete genealogy of the social contract for science, pinpoint its demise two decades ago, and discuss the policies created in its wake. I find its origin in two affiliated concepts: the actual contracts and grants that science policy scholar Don K. Price placed at the center of his understanding of the “new kind of federalism” in the relationship between government and science; and a social contract for scientists, a relationship among professionals that the sociologist Harriet Zuckerman described as critical to the maintenance of norms of conduct among scientists. Either or both of these concepts could have evolved into the social contract for science.

Most observers associate the social contract for science with Vannevar Bush’s report Science, The Endless Frontier, published at the end of World War II. But Bush makes no mention of such an idea in his report, and neither does John Steelman in his Science and Public Policy two years later. Yet commonalities between the two, despite their partisan differences, point toward a tacit understanding of four essential elements of postwar science policy: the unique partnership between the federal government and universities for the support of basic research; the integrity of scientists as the recipients of federal largesse; the easy translation of research results into economic and other benefits; and the institutional and conceptual separation between politics and science.

These elements are essential because they outline the postwar solution to the core analytical issue of science policy: the problem of delegation. Difficulties arise from the simple fact that researchers know more about what they are doing than do their patrons. How then do the patrons assure themselves that the task has been effectively and efficiently completed, and how do the researchers provide this assurance? The implications of patronage have a long history: from Galileo’s naming the Medicean stars after his patron; to John Wesley Powell’s assertions in the 1880s that scientists, as “radical democrats,” are entitled to unfettered federal patronage; to research agencies’ attempts to meet the requirements of the Government Performance and Results Act of 1993.

How politics and science go about solving the problem of delegation has changed over time. The change from a solution based on trust to one based on collaborative assurance marks the end of the social contract for science.

The old solution

The problem of delegation is described more formally by principal-agent theory, where the principal is the party making the delegation, and the agent is the party performing the delegated task. In federally funded research, the government is the principal and the scientific community the agent. One premise of principal-agent theory is an inequality or asymmetry of information: The agent knows more about performing the task than does the principal. This premise is not a controversial one, particularly for basic research. It is exacerbated by the near-monopoly relationship between government as patron and academia as performer, which permits no clear market pricing for basic research or its relatively ill-defined outputs.

This asymmetry can lead to two specific problems (described with jargon borrowed from insurance theory): adverse selection, in which the principal lacks sufficient information to choose the best agent; and moral hazard, in which the principal lacks sufficient information about the agent’s performance to prevent shirking or other misbehavior.

The textbook example of adverse selection is the challenge health insurers face: the people most eager to obtain health insurance are those most likely to need it, and thus the most costly to insure, yet their health problems are better known to the applicants than to the insurer. The textbook example of moral hazard is the way the provision of fire insurance also provides an incentive for arson. Insurers attempt to reduce these asymmetries through expensive monitoring strategies, such as employing physicians to conduct medical examinations or investigators to examine suspicious fires. They also provide explicit incentives for behaviors that reduce the asymmetries, such as lower premiums for avoiding health risks like smoking, or credits for installing sprinkler systems.
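The tradeoff can be put in simple expected-value terms. The toy calculation below is a minimal sketch of my own, not part of the essay’s argument; the payoffs, the probability of shirking, and the monitoring cost are all illustrative assumptions, chosen only to show when a principal prefers paying for monitoring over relying on trust.

```python
# A minimal, illustrative sketch of the delegation problem in principal-agent
# terms. Every number below is an assumption chosen for illustration only.

def expected_value_to_principal(p_shirk, value_diligent, value_shirk,
                                monitoring_cost=0.0, monitoring_deters=False):
    """Expected value of the delegation from the principal's point of view."""
    if monitoring_deters:
        # Monitoring (or a well-designed incentive) removes the moral hazard,
        # but the principal must pay for it.
        return value_diligent - monitoring_cost
    # Without monitoring, the principal relies on trust and bears the risk
    # that the agent shirks.
    return (1 - p_shirk) * value_diligent + p_shirk * value_shirk


if __name__ == "__main__":
    # Assumed values: diligent work is worth 100 to the principal, shirked
    # work only 40; the chance of shirking is 10 percent; monitoring costs 5.
    trust_only = expected_value_to_principal(0.10, 100, 40)
    with_monitoring = expected_value_to_principal(
        0.10, 100, 40, monitoring_cost=5, monitoring_deters=True)
    print(f"Trust alone:     {trust_only:.1f}")       # 94.0
    print(f"With monitoring: {with_monitoring:.1f}")  # 95.0
    # Monitoring pays whenever its cost is less than the expected loss from
    # shirking, i.e., monitoring_cost < p_shirk * (value_diligent - value_shirk).
```

On these assumed numbers, monitoring is worth buying because its cost (5) is smaller than the expected loss from shirking (0.10 × 60 = 6); the same comparison, with far messier quantities, underlies the institutional arrangements described below.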

Both adverse selection and moral hazard operate in the public funding of basic research. The peer review system, in which the choice of agents is delegated to a portion of the pool of potential agents themselves, addresses the problem of adverse selection. Although earmarking diverts funds from it and critics question it as self-serving, peer review has been expanding its jurisdiction in the choice of agents beyond the National Science Foundation (NSF) and the National Institutes of Health (NIH). But immediately after World War II, there was no prominent consensus supporting the use of peer review to distribute federal research funds, and thus it was not part of any social contract for science that could have originated then.

Moreover, regardless of the mechanism for choice, the funding of research always confronts moral hazards that implicate the integrity and productivity of research. The asymmetry of information makes it difficult for the principal to ensure and for the agent to demonstrate that research is conducted with integrity and productivity. In Steelman’s words: “The inevitable conclusion is that a great reliance must be placed upon the intelligence, initiative, and integrity of the scientific worker.”

The social contract for science relied on the belief that self-regulation ensured the integrity of the delegation and that the linear model, which envisions inevitable progress from basic research to applied research to product and service development to social benefit, ensured its productivity. Unlike health or fire insurance providers, the federal government did not monitor or deploy expensive incentives to assure itself of the success of the delegation. Rather, it conceived a marketlike model of science in which important outcomes were assumed to be automatic. In short, it trusted science to have integrity and be productive.

There were, of course, challenges to the laissez-faire relation between politics and science, including conflicts over the loyalty of NIH- and NSF-funded scientists during the early 1950s, the accountability of NIH research in the late 1950s and early 1960s, the relevance of basic research to military and social needs in the late 1960s and early 1970s, and the threat of novel risks from genetic research in the 1970s. Some of these challenges led to modest deviations from the upward trajectory of research funding. But even issues that led to procedural changes in the administration of science, including the Recombinant DNA Advisory Committee, failed to alter the institutionalized assumption of automatic integrity and productivity.

Toward the new solution

Reliance on the automatic provision of integrity and productivity by the social contract for science began to break down, however, in the late 1970s and early 1980s. Well before the high-profile hearings conducted by Rep. John Dingell (D-Mich.) into allegations involving Nobel laureate David Baltimore, committees in the House and Senate scrutinized cases of scientific misconduct. The scientific community downplayed the issue. Philip Handler, then president of the National Academy of Sciences, testified that misconduct would never be a problem because the scientific community managed it in “an effective, democratic, and self-correcting mode.”

To assist the community, Congress passed legislation directing applicant institutions to deal with misconduct through an assurance process for policies and procedures to handle allegations. But believing that public scrutiny and the assurances had not prodded the scientific community to live up to Handler’s characterization, Dingell instigated the creation of the Office of Scientific Integrity [later, the Office of Research Integrity (ORI)] by NIH director James Wyngaarden. Wyngaarden proposed the office because informal self-regulation was demonstrably inadequate for protecting the public interest in the expenditure of research funds as well as for protecting the integrity of the scientific record and the reputation of research careers.

In both its incarnations, the office had the authority to oversee the conduct of misconduct investigations at grantee institutions and, when necessary, to conduct investigations itself. ORI has recently been relieved of its authority to conduct original investigations, but it can still assist grantee institutions. ORI is an effort to monitor the delegation of research and to provide for the institutional conduct of investigations of misconduct allegations that, under the social contract for science, had been handled informally, if at all.

The effort to ensure the productivity of research has striking parallels. In the late 1970s, Congress understood that declining U.S. economic performance might be linked to an inability of the scientific community to contribute to commercial innovation. The congressional inquiry demonstrated that different kinds of organizations, mechanisms, and incentives were necessary for the research conducted in universities and federal laboratories to have its expected impact on innovation. A bipartisan effort led to a series of laws (the Stevenson-Wydler Technology Innovation Act of 1980, the Bayh-Dole Patent and Trademark Law Amendments Act of 1980, and the Federal Technology Transfer Act of 1986) that created new opportunities for the transfer of knowledge and technology from research laboratories to commercial interests.

Critical to these laws was the reallocation of intellectual property rights from the government to sponsored institutions and researchers whose work could have commercial impact. At the national laboratories, what the legislation called Offices of Research and Technology Applications (at NIH, the Office of Technology Transfer, or OTT) assisted researchers in securing intellectual property rights in their research-based inventions and in marketing them. Similar offices appeared on university campuses, contributing in some cases tens of millions of dollars in royalties to university budgets and many thousands of dollars to researchers. These changes not only allowed researchers greater access to technical resources in a private sector highly structured by intellectual property, but they also offered exactly the incentives that principal-agent theory suggests but that the social contract for science eschewed.

Collaborative assurance

Such institutions as ORI and OTT spell the end of the social contract for science, because they replace the low-cost ideologies of self-regulation and the linear model with the monitoring and incentives that principal-agent theory prescribes. Additionally, they are examples of what I call “boundary organizations”: institutions that sit astride the boundary between politics and science and involve the participation of nonscientists as well as scientists in the creation of mutually beneficial outputs. This process is collaborative assurance.

ORI has monitored the status of allegations and conducted investigations when necessary. This policing function reassures the political principal that researchers are behaving ethically and protects researchers from direct political meddling in their work. ORI also assists grantee institutions and studies the fate of whistleblowers and those who have been falsely accused of misconduct, tapping the skills of lawyers and educators as well as scientists in this effort.

OTT has likewise employed lawyers and marketing and licensing experts, in addition to scientists, in its creation of intellectual property rights for researchers. Consequently, intellectual property has emerged as indicative of the productivity of research. Evaluators of research use patents, licenses, and royalty income to judge the contribution of public investments in research to economic goals, even as researchers use them to supplement their laboratory resources, their research connections, and their personal income.

The collaborative assurance at ORI and OTT demarcates a new science policy that accepts not only the macroeconomic role of government in research funding but also its microeconomic role in monitoring and providing specific incentives for the conduct of research, to the mutual purposes of ensuring integrity and productivity. Collaborative assurance recognizes that the inherited truths of the social contract for science were incomplete: A social contract for scientists is an insufficient guarantor of integrity, and governmental institutions need to supplement scientific institutions to maintain confidence in science. The public good of research is not a free good, and a government/science partnership can create the economic incentives and technical preconditions for innovation.

The new science policy

The task for the new science policy is therefore not to reconstruct a social contract for science that was based on the demonstrably flawed ideas of a self-regulatory science and the linear model. Monitoring and incentives have replaced the trust that grounded the social contract for science. Rededication, rearticulation, and renaming do not speak to how the integrity and productivity of research are publicly demonstrated, rather than taken for granted. The new science policy should instead focus on ways to encourage collaborative assurance through other boundary organizations that expand the still-narrow concepts of integrity and productivity.

Ensuring the integrity of science is more than managing allegations of misconduct. It also involves the confidence of public decisionmakers that the science used to inform policy is free from ideological taint and yet still relevant to decisions. Concerns about integrity undergird political challenges to scientific early warnings of climate change; the role of science in environmental, health, and consumer regulation; the use of scientific expertise in court decisions; and the openness of publicly funded research data.

The productivity of science is more than the generation of intellectual property. It also involves orchestrating research funding that targets public missions and addresses specific international, national, and local concerns, while still conducting virtuoso science. It further involves developing processes for translating research into a variety of innovations that are not evaluated simply by the market but by their contribution to other social goals that may not bear a price.

The collaborative effort of policymakers and scientists can, for example, build better analyses of environmental risks that are relevant for on-the-ground decisionmakers. The experience of the Health Effects Institute, which produces politically viable and technically acceptable clean air research under a collaboration between the federal government and the automobile industry, demonstrates this concept. Not only could such boundary organizations help set priorities and conduct jointly sponsored research, but they could evaluate and retain other relevant data to help ensure the integrity of regulatory science.

Collaboration between researchers and users can mold research priorities in ways that are likely to assist both. Two increasingly widespread, bottom-up mechanisms for such collaboration are community-based research projects, or “science shops,” which allow local users to influence the choice of research problems, participate in data collection, and accept and integrate research findings; and consensus conferences and citizens’ panels, which allow local users to influence technological choice.

Top-down mechanisms can foster collaborative assurance as well. Expanding public participation in peer review, recently implemented by NIH, deserves broader application, particularly in other mission agencies but perhaps also in NSF. Extension services, a holdover in agricultural research from the era before the social contract for science, can serve as a model of connectivity for health and environmental sciences. The International Research Institute for Climate Prediction, funded by the National Oceanic and Atmospheric Administration, connects the producers of climate information with farmers, fishermen, and other end users to help climate models become more relevant and to assist in their application. Researcher-user collaborations in the extension mode can also tailor mechanisms and pathways for successful innovation even in areas of research for which market institutions such as intellectual property are lacking.

The social contract for science, with its presumption of the automatic provision of integrity and productivity, speaks to neither these problems nor these kinds of solutions. Boundary organizations and collaborative assurance take the first steps toward a new science policy that does.

Cite this Article

Guston, David H. “Retiring the Social Contract for Science.” Issues in Science and Technology 16, no. 4 (Summer 2000).
