A Science Funding System Beyond the Linear Model
To improve the relationship between science and the public, funders should embrace a new ontology of scientific work that aligns incentives with society’s goals.
There has long been intense competition for science funding, but a recent piece in Nature suggests the field may have reached the so-called Szilard point, the threshold at which more money is spent competing for grants than the grants themselves are worth. Today, researchers and research funders are questioning whether pursuing grants is worth the effort and whether their investments remain financially viable, while the scientific community struggles to better understand and articulate the societal benefits of research funding.
Although threats to funding last year brought new attention to the problem, US science has been headed in this direction for some time. More than 20 years after John Marburger, then director of the Office of Science and Technology Policy, famously called for “better benchmarks,” very little has been done to evaluate research funding systems in terms of actual public benefits such as improved health measures, increased social mobility, and better well-being. Studies of science funding have also revealed shortcomings in the funding systems themselves, from excessive conservatism to biases in peer review to too much hype in science. And although the metrics used to evaluate these systems—like publications, citations, and future grants—may have meaning inside the scientific community, they don’t reflect actual societal value.
Central to this issue, which exists on multiple scales, is the relationship among the American scientific enterprise, its funding, and the so-called linear model of innovation. The linear model posits that society funds an autonomous scientific research community to do basic research, with the expectation that the fruits of this research will be applied to produce societal returns on the original investments. The science policy community has long criticized the linear model as simplistic and misleading, but it’s worth noting that the model has also failed to satisfy science’s stakeholders—both policymakers and the public.
A key problem with the linear model is the presumed difference between basic and applied research. Federal statistics continue to capture and report research activity using these categories, even though their persistence is more due to lack of an alternative ontology for scientific research efforts than to a belief in the fundamental soundness of the basic versus applied distinction.
An alternative ontology for science funding would replace that basic/applied dichotomy with a more accurate awareness of the inherent uncertainty of scientific inquiry, as well as science’s societal responsibilities. By delineating the different types and purposes of research, this new ontology could shape how decisions are made about what to fund, how to assess the success of that funding, and ultimately how to justify funding for science. And, importantly, by providing a structure that makes it possible to have accountability for the billions of taxpayer dollars spent each year, it also can help counter, and perhaps even prevent, the politicization of science funding.
In examining the scientific landscape, I have catalogued six different kinds of research, each with its own intended outcomes and beneficiaries:
- Mission-directed research: research directed toward solving a practical problem.
- Engaged public research: a form of mission-directed research that, rather than being directed by a government entity, is conducted in collaboration with some segment of the public.
- Private-interest research: research of private commercial interest, largely funded by commercial entities.
- Regulatory research: research done to gain regulatory approval from a government agency for any new product or process requiring scientific studies for regulatory oversight.
- Big Science infrastructure: large infrastructure projects (both physical and empirical/data infrastructure) supported with major public funding.
- Curiosity-based research: research pursued on a particular subject with the goal of pushing science forward.
With this new ontology of science in hand, funders could transparently steer investments to different types of research, while creating evaluation structures that determine whether societal goals have been met for each. This more purposeful system would improve not only the scientific enterprise as a whole but also the lives of scientists, who could spend less time writing grant proposals and performing intensive grant peer review. By removing the baggage of the old linear model and its misleading description of scientific research, this new ontology more accurately reflects the relationship between science and society. And, importantly, employing the ontology could make it easier for funders to support useful science, while enabling decisionmakers to more efficiently allocate investments to different types of science and their intended purposes.
A closer look at a new ontology for science
In contrast to the current practice of funding basic and applied research without clearly assessing whether the research has achieved its goals, the new ontology would demystify the process by establishing distinct funding, management, and evaluation mechanisms for each of the six categories. By creating funding mechanisms uniquely suited to the particular purpose of the research, this system builds in transparent and appropriate ways to evaluate whether research has met its goals. And by clearly laying out what scientists’ responsibilities are, and to whom they are responsible, this ontology more clearly defines the various roles of scientists in society, reducing the pressure on individual scientists to meet all of society’s expectations of science all at once.
Mission-directed research. Targeted at specific extra-scientific aims, generally focusing on a public good, mission-directed research often comes with substantial public funding and is usually led by a government agency. Success or failure is assessed by whether the aims are achieved (or found to be not achievable for empirical reasons).
A paradigmatic example is military research, which has probably been held most accountable to the goals for which it is funded. Military funders contract with scientists and then, as at the Defense Advanced Research Projects Agency (DARPA), keep them on track so that the research provides useful and implementable results. While much military research is classified, it has also provided a crucial source of funding for key scientific and technological breakthroughs of profound public importance, such as the jet engine, GPS systems, and the internet. Keeping scientists focused on the practical aims of this research has enabled the development of systems that are broadly implementable in the public realm.
Mission-directed research includes more than military research. For example, Operation Warp Speed, a large effort run by the Department of Defense (DOD) and Department of Health and Human Services and costing over $10 billion, enabled the pursuit, production, and testing of multiple COVID-19 vaccines all at once, arguably producing one of the most important scientific successes of the past decade. NASA projects also often run on this kind of model. Recent public funding in North Carolina shows the possibilities for this kind of research at the state level.
An important characteristic of mission-directed research is that even if something scientifically interesting is discovered, the mission takes precedence. For example, when a research project for better ecological management on military bases stopped being useful for making land management decisions, DOD stopped funding that line of research. Although it was still scientifically interesting, it was no longer aiding the mission. Scientists doing this type of research must be willing to accept this kind of oversight and possible redirection.
Engaged public research. Engaged public research involves working with some segment of the public, such as a patient group, community group, or some other public constituency. In this type of research, the group collaborates with the scientists, shaping the aims and methods of research. Scientists, in turn, treat this group not just as a source of data or of input on the problem, but as a key partner in framing the problem and determining what counts as an adequate solution.
Unlike mission-directed research, where the goals are specified by a government agency, the goals of engaged public research are specified through this scientist-public collaboration. Engaged public research includes practices like community-based participatory research or participatory action research. Scientists who do this type of work must be ready and willing to give up some of their autonomy to the engaged public.
Currently, engaged public research is rarely treated as a distinct and important kind of research, it receives comparatively less funding, and few scientists have access to the kind of training needed to do this work well. This neglect is unfortunate because this type of research can directly demonstrate the value of scientific research to the public. Whether helping patient groups grapple with the complex conditions of their disease or aiding fenceline communities in understanding local air quality challenges, this work engages the public in the complexities of scientific work and builds trust while generating new solutions to public problems.
Specific and strong support is needed for engaged public research because its substantial structural challenges—including the need for interdisciplinary teams, time to build trust, and continual communication among the full team—mean that the work often proceeds, and reaches publication, more slowly than other types of research. Success thus needs to be measured in terms of the engaged public’s assessment of the work rather than through traditional publication-based metrics.
Because of the immense value such research can provide to the public, engaged public research should have its own funding and evaluation mechanisms, potentially at the regional or state level. For example, funding could be provided in stages, starting with funds to develop the necessary team and ties to a community to scope possible projects; second, funds for conducting the research of interest to the community; and third, funds to continue collaborations with the community on additional or extended projects. What is most central, particularly for the final stage, is that evaluation includes input from the community engaged in the research. Research that does not have positive reviews from the community is not successful engaged public research.
Private-interest research. Private-interest research serves commercial interests and should be funded largely by commercial entities. Private entities, usually corporations, already provide the bulk of this funding; their investment surpassed that of the federal government in 1980 and has grown significantly since.
In rare cases, public funds might be partnered with private funds to pursue specific public goods. However, such public-private partnerships require continuous effort to prevent capture by private interests. Because this funding is mostly privately sourced and directed, concerns about public accountability for these expenditures are less pressing. One caveat: studies needed for regulatory approval should be conducted through the regulatory research category, not through private-interest research.
Regulatory research. Central to drug and pesticide approval in many nations are the studies needed to show a substance’s safety and efficacy. Currently, such studies are usually funded and conducted by the commercial entities seeking regulatory approval. Unfortunately, this mechanism gives those entities some control over precisely how such studies are conducted. For more than 20 years, detailed examinations of this work have shown a persistent bias in favor of funders’ desired outcomes. Cases of data suppression and methodological gaming are widely known, undermining the trustworthiness of this body of research.
A better way to fund and conduct these studies was put forth by philosopher of science Julian Reiss, who proposed an “Institute for Clinical Trials” that would rely on private and public resources and distribute funds to researchers who would then conduct the trials needed for regulatory approval. Although Reiss focused his idea on biomedical research, this kind of system could be put in place for any regulatory decisionmaking that rests upon scientific studies. Entities seeking regulatory approval would pay a fee large enough to cover the costs of required studies to an Institute for Regulatory Scientific Trials, which would then call for researchers with no industry ties to run the studies. Private entities could still do the discovery and development research that allows them to refine drugs, chemicals, and products needing regulatory approval. The regulatory research system would only produce the evidence needed to assess whether final approval is warranted, such as the randomized controlled trials required for drug approval.
Ensuring the relative independence of such an institute would be an important consideration. To trust in the system, researchers would need to avoid conflicts of interest, and a neutral party would need to regularly assess whether the research was biased. Significantly, for such an institute to work, laws would need to be changed, so that only studies produced by such a system counted as evidence for regulatory approval.
If this structure became a global norm, nations could potentially share studies for regulatory purposes. Variations by national and regional context for final regulatory decisions would still exist, but the knowledge-sharing such institutes would provide would substantially benefit science, industry, and the public.
Big Science infrastructure projects. Big Science infrastructure includes high energy physics particle colliders, large telescope installations, national census databases, large health databases usable for multiple research agendas, and global environmental monitoring systems or databases.
Big Science infrastructure projects tend to be more embedded than other science funding in political approval processes that are directly democratically accountable. Because the funding amounts are so large—often billions of dollars—they require legislative line items for budget allocation. Such projects thus draw the direct attention of elected politicians. Moreover, localities looking to host large infrastructure projects often work with their representatives to lobby for support.
The vested interests in funding and maintaining Big Science infrastructure—from the scientists whose work requires it, to the institutions and locations hosting and benefitting economically from it, to the politicians supporting it—pose inherent conflicts that make it challenging to assess whether these projects are worth the investment. Opposition to Big Science infrastructure projects tends to be diffuse, arising from opportunity costs from other areas of science or from excessive homogenization of scientific efforts, although sometimes there is local opposition, such as concerns over the violation of Indigenous rights and sovereignty when building telescopes at Maunakea in Hawaii. As a result of the combination of local political support and subfield-specific scientific support, these projects can develop a momentum of their own, making them difficult to fairly evaluate, much less shut down, once they are up and running.
Reasons to cease or redirect funding can come from discoveries made in pursuit of the science, from external forces, or both. Lessons learned in the research pursuit can indicate that some Big Science infrastructure projects should be shut down, such as Project Mohole (an effort to drill through the earth’s crust in the 1960s), which continued to function even after it was clear it would not be scientifically productive. Recognition of social injustices aggravated by such projects, such as the conflict over Indigenous rights on Maunakea, can also generate reasons to shift how they are pursued. And when scientific discovery proceeds more slowly than is hoped, delays can reduce the project’s value to the point that such projects may seem unwise uses of public funds, as has happened with the ITER fusion project in Europe.
I advocate for decisions about Big Science infrastructure to be made in a science court. Science courts have been perennially proposed since the 1960s for a range of science policy decisions. In such a court, expert advocates and critics would make their cases for and against funding. The jury could be made up of scientists (from other fields) or citizens. It could be recorded and televised, as a proper public spectacle, bringing to the public the debate over these important decisions.
Curiosity-based research. The funding of curiosity-based research should acknowledge that the pursuit of empirical understanding is itself a primary driver for many scientists, without the implied promises of public payoff suggested by the linear model. Removing such promises shifts the justification for such research: It should be funded because pursuing science is an important cultural endeavor and because scientific discoveries, over the long run and in unpredictable ways, will profoundly shape our understanding of the world. Such research should be funded because humans care about knowledge, even if relevant applications may be diffuse, distant, or never arrive.
In the United States, the predominant public funders of this type of research are the National Institutes of Health and the National Science Foundation. In these agencies, proposals for funding are evaluated using a labor-intensive process of individual and panel review by experts that aims to identify and fund the strongest proposals. High numbers of proposals and limited funds mean that only a small percentage of proposed projects are funded.
This process produces several distortions in funding outcomes. First, it tends toward conservatism: Ideas that challenge scientific orthodoxy are less likely to be selected than incremental projects that build upon existing work. Evaluative studies reveal that peer review often does not select the “best” science (as measured by citations). In addition, this system amplifies the Matthew effect in science, so that those already successful in gaining grants, publishing results, and building large labs tend to be rewarded with more resources.
I argue for embracing the uncertainties inherent in scientific inquiry and acknowledging that funders and their reviewers cannot know which curiosity-based science is best to pursue, by any definition of “best,” before it is pursued. Instead of trying to fund only the best science as assessed by peers, the United States should fund curiosity-based research through a lottery system. Of course, a viable lottery system would need to screen out unacceptable scientific research proposals. Such a screen would still need external reviewers, but instead of commenting on all aspects of a research project, reviewers would be asked the more straightforward question of whether the research and the researcher met some minimum standard of competency (e.g., having a PhD and peer-reviewed publications in the field). Similarly, researchers working with biological pathogens should meet minimum biosafety level requirements. Agencies might also ban researchers who have committed scientific fraud from eligibility. All proposals remaining after the screen would be placed in a lottery, from which projects would be drawn until the funds run out. Ongoing assessments and experiments should be used to evaluate the lottery system.
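To make the mechanism concrete, the screen-then-draw process described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not a proposal for any agency’s actual implementation: the proposal fields (`pi_has_phd`, `peer_reviewed_pubs`, `fraud_finding`, `cost`) and the screening criteria are hypothetical stand-ins for whatever minimum standards a funder adopts.

```python
import random

def run_funding_lottery(proposals, budget, seed=None):
    """Screen proposals for a minimum competency standard, then draw
    winners at random until remaining funds cannot cover the next draw."""

    def passes_screen(p):
        # Hypothetical minimum-standard screen: credentialed researcher,
        # a publication record in the field, and no finding of fraud.
        return (p["pi_has_phd"]
                and p["peer_reviewed_pubs"] > 0
                and not p["fraud_finding"])

    # Screening happens first; only acceptable proposals enter the pool.
    pool = [p for p in proposals if passes_screen(p)]

    # Shuffle with a fixed seed so a given drawing can be audited.
    rng = random.Random(seed)
    rng.shuffle(pool)

    funded, remaining = [], budget
    for p in pool:
        if p["cost"] > remaining:
            break  # the funds have run out
        funded.append(p)
        remaining -= p["cost"]
    return funded
```

Proposals that pass the screen but go undrawn could simply remain in the pool for subsequent cycles, with no reapplication needed.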
A lottery funding system would provide numerous advantages over the current model. First, proposals would no longer need to make tenuous promises about amazing future societal or commercial benefits; researchers could propose to study phenomena exclusively for their own sake. Second, the review process would become much less labor-intensive. Third, the burden on investigators would be reduced as proposals that passed screening could remain in the lottery pool for several funding cycles, obviating the need for continual revision and reapplication. Finally, lotteries would avoid the conservatism of the current system, reducing existing racial and gender biases and the Matthew effect.
One concern with a lottery system is that it might make scientific careers more fraught: Truly outstanding researchers might not be able to maintain a steady stream of funding. This risk could be reduced by creating supplemental funding mechanisms. For example, “fund people, not projects” arguments advocate for non-project-based funding. Funding could also be set at amounts graduate students need to complete their degrees.
Another concern is political. With a lottery, funding agencies would no longer be able to claim that they fund only “the best science.” However, under the current system, proponents and critics cherry-pick cases from publicly funded science to make larger points about the effectiveness of science funding (e.g., the Golden Goose Awards versus The Waste Reports). These anecdotes hardly count as evidence of anything. It would be more honest to say that scientific inquiry is a high-risk, high-reward venture; that much research does not pan out; and that picking winners ahead of time is quite impossible. The claim that the system assiduously screens out bad science—at least as assessed by the scientific community—would be both more defensible and more meaningful.
A lottery system could also reduce scientific fraud and support innovative researchers if funding were tied to registered reports, requiring researchers to submit detailed methodology to a journal for peer review and acceptance prior to data collection. Such in-principle acceptance allows for publication of research that did not work as planned—a valuable source of information for other researchers. Tying a funding lottery to registered reports (where all studies with in-principle acceptance would be eligible) would also reduce researcher workload.
Assessment of lottery-funded curiosity-based research could include the usual metrics for academic science—including publications, citations, and peer assessment of impact—for individual projects as appropriate. However, assessing the value of curiosity-based research funding at the program level is a different kettle of fish. Justifications must rest on the importance of science for society and the cultural value of science over the long term. One could assess whether lotteries are doing better than current systems at supporting innovative and potentially transformative approaches.
Finally, lotteries would also provide better protection from the politicization of science funding for curiosity-based research. Because screenings would need to be clear and public (and each rejected applicant would need to be told why a proposal was screened out), they would be less subject to the political whims of the day and to political pressure, particularly when compared with a ranking system fraught with judgment. A lottery would instead involve clear standards and then random chance. This process would better protect scientific freedom.
Better for scientists, better for science, and better for society
Currently, science funding and thinking about science funding aren’t done in the way I propose here. But using this new ontology as a guide would have numerous advantages for people across the scientific enterprise.
For many scientists, this funding system should free up time to do more science than the present system allows, given how much attention the current system demands for securing grants. Furthermore, these six streams of research funding would enable scientists to pursue their research with the appropriate kind of accountability measures. They would not have to contend with multiple accountability measures that are often in tension with one another; they would not be expected to “do it all.” Depending on the kind of result they want to pursue (e.g., do they want to improve conditions for a segment of the public, solve a particular practical problem, or publish their results and add to the body of knowledge?), they would select the appropriate funding system and work within it for their project. They could, of course, change which system they work in as their careers develop, maximizing their freedom to direct their research efforts in different ways.
This structure for scientific funding would help society by reducing all sorts of biases in funding. It would also provide more support for engaged public research, which can solve the problems the public actually cares about. The benefit for science is that directly demonstrating this value should increase public support for research.
Political pressures on science funding systems would also be reduced, or at least properly focused. Funding curiosity-based research through lotteries would eliminate the problems of picking winners and reduce ideological litmus tests (and pressures on scientists to pass these tests). Mission-directed research directed by government agencies would continue to be shaped by political agendas, but such research efforts could be deployed more widely across different kinds of problems. Engaged public research would require bottom-up partnership and direction and thus be somewhat insulated from national-level political interference. Regulatory research, as proposed here, would provide protection from commercial interference and should improve the regulatory decisionmaking process. Big Science infrastructure projects, by utilizing a science court and the independent judicial structures that accompany court systems, would allow for more accountable and understandable public funding. While the distribution of public money is a political matter, and no public funding system can or should be wholly apolitical, the scheme proposed here would reduce politicization at the level of individual research projects.
Changing the ontology of funded research is one way to fund science better in search of creating better science. In contrast to the linear model, the ontology proposed here would create a public funding system that appropriately assesses the kinds of proposals or projects to which funding flows and then manages and assesses those projects appropriately. A complex mix of funding structures, cutting across disciplines, would enable pluralism in scientific approaches while producing the knowledge the public needs. In the coming decades, this system could be an important tool for grounding the social contract between science and the public.
