Data for Policy


Data-Driven Science Policy
Is it time for the science policy/funder communities to be more scientific about how to invest in science? This is the question posed and persuasively addressed by Katy Börner in “Data-Driven Science Policy” (Issues, Spring 2016). The opportunity to gather, analyze, and visualize data from myriad resources (publications, grants, patents, social media, and so on) in unprecedented volumes provides a powerful argument, even a compelling rationale, for putting data-generated knowledge to work informing governments and private funders how to “most productively” (in Börner’s words) invest in research.

I agree that funders should be willing to become more scientific when it comes to investing in science. Cultivating a willingness to pose thoughtful questions about the purpose and the nature of investing in knowledge generation and use, developing robust ways to acquire and study the data available, and being honest and transparent about our determinations of success (or not) against meaningful outcomes strikes me as the right way to consider how best to deploy limited resources. On the flip side, data-driven science policy benefits from the same skepticism, caution, and debate ongoing for other “big data” initiatives—how “right” are the questions asked, how good are the data gathered, and what values are represented in outcome measures? Science and scholarship are, fundamentally, human enterprises in service to the common good. Attempts to make inquiry too efficient or optimal risk pushing us away from investing in research that is heterodox to prevailing wisdom, orthogonal to reigning dogma, or skewed to the interests of particular stakeholders. It may just be that some inefficiency is necessary to allow space for novelty to emerge.
Data-driven investment poses positive opportunities and some tricky challenges for nonprofit private funders. Foundations, charities, and individual donors typically rely on eminence-based rather than evidence-based decision making. Like government funders, they assemble panels of experts to shape initiatives, provide merit reviews of proposals, and make funding recommendations. Unlike government funders, private funders typically have limited resources and invest in science through small numbers of grants on more targeted topics over shorter time scales, limiting the amount of data available for analysis. The James S. McDonnell Foundation, for example, makes about 30 new grants a year via investment strategies that include identifying where modest research investments could help fill gaps in scientific knowledge, looking for emerging areas of research at the intersections of traditional disciplines, and identifying questions early in their inception.

It is easy to see how the data-driven approaches Börner describes can help us more systematically “map” knowledge gaps or target emerging lines of research that could get a boost from targeted, albeit modest, funding. Visualization approaches that dynamically monitor how ideas, theories, and tools are crossing disciplinary boundaries or merging into novel hybrid fields allow us to see how influential our grantees and their publications are in the broader scientific community. Importantly, data-driven approaches can temper our expectations (and claims!) as to what can be achieved with limited investment, guide how funding strategies might need to be altered or adjusted to better match our goals, and identify new research directions for the future. In my view, the challenges for private philanthropic supporters of science are philosophical: in our enthusiasm for data, how do we maintain our core characteristic of independent and diverse decision making?


James S. McDonnell Foundation

St. Louis, MO

Katy Börner’s excellent article highlights the impact of enormous computer power for models and simulations, using big data to parameterize models, and increasing capabilities for interactive visualizations. Policy makers can now be immersed in the complexity of their domains and empowered to interactively explore a wide range of “what if” questions.

These capabilities enable decision makers from many disciplines beyond science and technology to engage in such explorations to inform their positions on a diverse array of policy issues, including education, energy, health care, and security. This process of “informing” decision making is rather different from IBM’s Watson simply telling them the answer. In fact, the key insights usually come from small, multidisciplinary groups using the technology to investigate and debate alternative futures.

Our experience is that senior decision makers readily adapt to such interactive environments, often wanting to “take the controls” and pursue their own ideas and questions. The biggest adoption hurdle involves policy analysts who are reluctant to abandon PowerPoint to create interactive environments that enable such explorations. Taking this leap successfully involves understanding decision makers’ “use cases” and paying attention to their “user experiences.” This is often best approached by analyst-designer teams.

We have also found that it is best to stick with commercial off-the-shelf tools—for example, AnyLogic, D3, Excel, R, Simio, Tableau, and Vensim—that allow embedding custom code (Java, for instance) rather than writing software from scratch. We often use combinations of these tools. This practice can enable creating a prototype interactive environment within a week or two, which, in turn, allows rapid user feedback and easy mid-course corrections.

The goal of all this is evidence-based policy, rather than policies based largely on political positions or ideologies. Indeed, we have experienced many instances of stakeholders’ ardent positions dissolving in the face of evidence, with them at the controls. Decision making is a lot more efficient and effective when you can get rid of bad ideas quickly.

Alexander Crombie Humphreys Chair in Economics of Engineering

Systems Engineering Research Center

Stevens Institute of Technology

Cite this Article

“Data for Policy.” Issues in Science and Technology 32, no. 4 (Summer 2016).