What We Talk About When We Talk About Impact
When most people talk about “impact,” they often imagine one thing physically hitting another, for example, the impact of the meteorite that scientists think was responsible for killing off the dinosaurs—and leaving, as meteorites do, an impact crater on the edge of the Yucatán Peninsula.
This ballistic sensibility also informs a common understanding of the impact of things less corporeal than meteorites, including ideas and scholarship. Rarely, if ever, do ideas—academic or otherwise—blaze a trail in the sky and leave a clear mark where their impact has occurred. Yet people have an intuition of some series of collisions in which a new idea or new information changes people’s understandings, which changes people’s opinions, which changes people’s behaviors, which brings about different outcomes in the world.
Four decades ago, the Bayh-Dole Act enshrined the creation of intellectual property (IP) as part of the mission of research universities. Academic institutions responded by creating offices of technology transfer and including patents and other tokens of IP in their incentive systems. More recently, universities and their benefactors have sought to expand academia’s mission again, this time to include impact. For instance, my own institution, Arizona State University, wants to “enhance our local impact and social embeddedness” as one of its five high-level goals. On the funder side, in 2023, the Pew Charitable Trusts led a group of funders and research institutions in a “Scan of Promising Efforts to Broaden Faculty Reward Systems to Support Societally Impactful Research.” But the academy has a lot of work to do if impact is to take its place alongside IP in universities’ missions.
The search for impact has its own history in decades of jousting: pure versus applied research, curiosity-driven versus mission-driven research, the ivory tower versus the extension service, intellectual merit versus the “broader impacts” criterion at the National Science Foundation (NSF), and so on. But now that impact is a goal, we in the academic community need to elucidate a nuanced understanding of what we really mean by impact, how we imagine it happens, and what we as scholars might do individually and collectively to work toward it.
I approach these questions of impact as a social scientist, and particularly a political scientist concerned with public policy. And I am interested in the impact of scholarly ideas and analysis on legislation and policy, on politics and public discourse, and on people. As fellow political scientist Langdon Winner pointed out decades ago, legislation and technology have a shared identity: both are collective endeavors that authorize and provide infrastructures for how we as individuals and as a society pursue what we will. Winner reasons that if we have certain expectations of democratic practices and institutions for making legislation, then we should have similar expectations of democratic practices and institutions for making technology. I want to extend this reasoning to argue that if universities commit to practices and institutions for creating technological impact in the form of IP, then they should have similar structures for creating other kinds of impact.
A taxonomy of impact
What do we mean by impact? I posit four categories. First is what might be called “actual” impact: scholarship that affects the drafting or goals of legislation, budgets, or policy. An example is the language that directs the US Department of Energy (DOE) to facilitate and fund research, development, and deployment of direct air capture (DAC) of climate-warming carbon dioxide. I put “actual” in scare quotes because impact is often reduced to such formal, substantive changes in legislation or policy, when change also happens in many less formal ways; indeed, policy change often follows political or social change.
Thus, the second category is impact on general thinking, which is roughly what some faculty aspire to as thought leaders or influencers. In the energy example, an impact on general thinking might be the concept of “overshoot,” which provided urgency to climate policymaking by clarifying how global temperatures are likely to surpass a predefined target (usually 1.5° Celsius above average preindustrial temperatures), thus making DAC a more interesting technology choice. Impact on general thinking can be substantive, changing the content of what people think, or procedural, changing the agenda, vocabulary, or framing with which issues are considered.
Finally, one might have an impact on people, either through the training of knowledgeable personnel (the third category) or through interaction with lay knowledge (the fourth category). Such impacts might lead to substantive changes in the content of what people believe and procedural changes in how they behave, but also to reflexive changes in how they approach problems in relationship to their changing knowledge of an evolving world. For these categories of impact, the Climate Overshoot Commission (for elites) and Earth Overshoot Day (for the lay public) might be helpful examples. Both convey substantive information to change the knowledge upon which elites or lay publics might act, and both attempt to influence the agenda of how society approaches climate change. And yet, especially as the idea of overshoot is deployed somewhat flexibly between expert and lay groups, each asks different things of its audience about its role in ongoing opportunities for change.
Looking at these categories makes it possible to imagine how universities might create structures to encourage faculty to consider and pursue specific types of impact.
Measuring without a crater
A major challenge for universities is how to attribute and measure impact, wherever it occurs. Here we enter a nebulous area, because ideas are different from meteorites or even technologies that can be patented, licensed, and sold. If an idea results in an actual impact on a law or a budget—for example, adding millions for a new research program—then there is perhaps some common monetary denominator for measurement. But the attribution of actual impact, even if an academic paper is cited in testimony, committee reports, and legislative histories, will be diffuse, unlike the disclosures required by patent applications. One promising technical avenue for measuring this type of influence is the Overton index, which aims to make the relationship between academic work and policy documents discoverable.
For the other categories of impact, the prospects of attribution and measurement are cloudier still, but glimmers of possibility exist. To assess the impact on people, we might borrow from education. Formal education uses structured measurements such as evaluation rubrics. But the impact of scholarly work often happens informally, outside of classrooms. Such informal learning is harder to measure, but some museums, for example, adopt proxy measures such as “dwell time,” or how long someone spends in an exhibition. It is possible, then, to get a ballpark sense of how intensive an impact is—that is, how much people might have learned. Almost all educational institutions also measure the size of their audience, which indicates how extensive an impact is. A third dimension of this space of impact on people might be identity or specificity—the demographic, personal, and professional roles of audience members.
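To make these three dimensions concrete, here is a minimal sketch, not drawn from any actual museum or university system, of how visit logs might be summarized along intensivity, extensivity, and specificity; the record fields and numbers are hypothetical.

```python
from dataclasses import dataclass
from statistics import median

# Hypothetical visit record for an exhibition or public event.
@dataclass
class Visit:
    role: str             # audience member's role, e.g., "policymaker"
    dwell_minutes: float  # proxy for intensivity: time spent engaging

def impact_profile(visits: list[Visit]) -> dict:
    """Summarize impact on people along three rough, proxy dimensions."""
    return {
        "extensivity": len(visits),                              # audience size
        "intensivity": median(v.dwell_minutes for v in visits),  # typical dwell time
        "specificity": sorted({v.role for v in visits}),         # who showed up
    }

visits = [
    Visit("student", 12.0),
    Visit("policymaker", 45.0),
    Visit("general public", 8.5),
]
print(impact_profile(visits))
# {'extensivity': 3, 'intensivity': 12.0, 'specificity': ['general public', 'policymaker', 'student']}
```

Dwell time remains a proxy at best: it suggests how much people might have learned, not what they actually took away.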
Similar proxy systems can be used to assess impact on general thinking. For example, a tool like Google Trends can help identify when and how often terms are used in web searches. But unless someone is coining an entirely new word or concept, it may not be possible to distinguish between a person who generates a brilliant idea and a person who succeeds at communicating it. Attribution and measurement do not go hand in hand. More complications arise from changes in behavior, protocol, or language internal to an organization; though often unobserved and undocumented, these are impacts nevertheless.
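As a hedged sketch of that kind of search-interest proxy: Google offers no official Trends API, so the following assumes the third-party pytrends client (pip install pytrends), and the search term and timeframe are arbitrary illustrations of my own, not examples from the essay.

```python
# Minimal sketch using the unofficial pytrends client to see when a term
# surged in web searches. The library and network access are assumptions.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US")
pytrends.build_payload(["climate overshoot"], timeframe="today 5-y")
interest = pytrends.interest_over_time()  # pandas DataFrame, one column per term

if not interest.empty:
    series = interest["climate overshoot"]
    # Values are normalized 0-100 by Google: this shows when interest peaked,
    # not how many people searched, and it cannot attribute the idea to anyone.
    print("peak week:", series.idxmax(), "relative interest:", series.max())
```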
Finally, there are confounding questions of time and space. Some ideas have an immediate impact: they are retweeted, celebrated in op-ed pages, and become part of a public agenda. Others, however, burn slowly over time but nevertheless instigate profound changes. And some ideas gain local credibility; for example, a community in Nepal figures out how to innovate around a shared resource—but its insights may take decades, or Nobel laureate Elinor Ostrom, to spread to other areas. Even if bibliometric and other analytic measures evolve, the full measure of impact is likely to remain lumpy and elusive.
Tracing the knowledge value collective path
A further complication is that impact is distinct from the ultimate goal, which is outcomes. New DAC legislation and technologies are all well and good, but they require further interactions to effect the outcomes: reducing carbon dioxide in the atmosphere and mitigating global warming. Impacts are gateways to outcomes, but a cascade of interactions is required for those outcomes to manifest. Thinking about what connects academic or scholarly work to outcomes led to the idea of the “knowledge value collective” (KVC), articulated by (yet another!) political scientist, Barry Bozeman, and his colleague Juan Rogers.
The KVC refers to the set of actors who intermediate between an output, which could be an idea or product, and an outcome in the world. When a new DAC technology comes along, for example, the KVC includes not only potential investors and regulators, but also prospective neighbors of the sites where such technologies would be piloted and deployed, as well as potential buyers in a market for carbon dioxide that does not yet exist. If the people making the DAC technology understand the KVC well enough, they will better appreciate the constraints and opportunities and reflexively take those supposedly downstream concerns into consideration when they imagine and design the technology. Research grounded in a better understanding of the KVC is better positioned for impact. When NSF’s Technology, Innovation, and Partnerships directorate emphasizes stakeholder partnerships, when DOE requires community benefits plans, and when the Pew report elaborates the socially engaged work necessary for societal impact, they implicitly endorse a vision of engaging portions of the KVC.
Successfully understanding and navigating the KVC, however, requires a set of skills or talents that may be very different from those that led to the initial technical discovery, invention, or analysis. Indeed, this task becomes the equivalent of an additional research project, complete with needs for new capacities and collaborations. For IP-based impact, omnipresent university-based tech transfer offices and proliferating entrepreneurship and innovation programs provide training for engaging the for-profit aspects of the KVC. However, there are few formally organized, university-wide groups that teach their trainees to navigate community-based, not-for-profit, and public sector pathways through the KVC.
The KVC, in other words, provides another way to structure the very squishy concept of impact, helping the practitioner follow the many twists and turns that occur along the way to an outcome. It also moves the creation of impact away from the ballistic model of launching single missiles and hoping for an impact, toward a more practical model of trying multiple approaches and learning skills to navigate a complex sociotechnical landscape. Thus, rather than measuring the craters of impact (or the patents, publications, or earnings of IP), the KVC approach suggests that we might map out possible pathways to outcomes that enable others to discuss the plan and also, after the fact, determine whether these goals were met.
Enter the “impact catechism”
Adding impact to universities’ mission requires a framework that is different from tech transfer, but that is just as well institutionalized and supported. Universities also need to be able to tell a credible story of how they create change in the world using proxy measures, attributions, winding KVCs, and metrics not yet invented. Fortunately, there is help.
In the 1970s, George Heilmeier, legendary director of the Defense Advanced Research Projects Agency (DARPA), conceived of what is often called the Heilmeier Catechism, a series of eight questions designed to force assumptions about a proposed research project out into the open so they can be subjected to rigorous scrutiny. Heilmeier created the questions not just to guide prospective investigators in clarifying their research ideas, but also to protect the integrity and mission of DARPA so that it was not funding half-baked ideas—or worse, easy ones.
Creating and adopting an “impact catechism” could help academics envision how they can affect the world and guide them through the process. It could also help universities improve their ability to produce impact by beginning to understand the myriad ways their faculty, staff, and students can influence policy, politics, and people—by teaching those skills to their personnel; by choosing important areas of impact self-consciously; and by investing in and valorizing their work. Starting down this path means building the capacity to identify and categorize the types of impact various entities within the university aspire to. Then the university can support those entities’ performance toward those categories by facilitating their presence in the right kinds of networks, advancing their professional development with the right kinds of skills, and providing them with the right kinds of infrastructural support.
In an attempt to develop an impact catechism, I have begun to share the following eight questions informally with colleagues and students (a sketch of how answers might be tracked follows the list):
- What kind(s) of impacts (category/type) are you aiming at?
- What scope (extensivity) and depth (intensivity) of impact are you planning for?
- What specific audience(s) are you addressing or constructing?
- What (causal) model do you have in mind for creating impact?
- How are you creating opportunities for impact?
- Who or what (KVC) connects your outputs to impacts and outcomes?
- How are you participating in, researching, or keeping track of (intermediate) impacts along the way?
- How will you tell the story of the impact that you have with humility and accuracy?
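As flagged above, here is one way a research development office might track a project’s answers to these questions and flag the ones still unaddressed. This is my illustrative sketch, not part of the catechism itself, and every field name is a hypothetical paraphrase.

```python
from dataclasses import dataclass, fields

# Hypothetical record of a project's answers to the eight questions.
# Field names paraphrase the catechism and are illustrative only.
@dataclass
class ImpactPlan:
    impact_categories: str = ""  # kinds of impact aimed at
    scope_and_depth: str = ""    # extensivity and intensivity planned for
    audiences: str = ""          # audiences addressed or constructed
    causal_model: str = ""       # model for creating impact
    opportunities: str = ""      # how opportunities for impact are created
    kvc_links: str = ""          # who or what connects outputs to outcomes
    tracking: str = ""           # how intermediate impacts are tracked
    story: str = ""              # how the story will be told, with humility

def unanswered(plan: ImpactPlan) -> list[str]:
    """Return the questions a project team has not yet addressed."""
    return [f.name for f in fields(plan) if not getattr(plan, f.name).strip()]

plan = ImpactPlan(
    impact_categories="general thinking (procedural: reframing DAC debates)",
    audiences="state energy regulators and neighboring communities",
)
print(unanswered(plan))
# ['scope_and_depth', 'causal_model', 'opportunities', 'kvc_links', 'tracking', 'story']
```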
Different versions of the Heilmeier Catechism exist, as it was refined over time—including some that drop “catechism” in favor of “questions.” My impact catechism may be a similar kind of first draft. As a starting point for faculty members, research development staff, and central research offices, as well as for research sponsors, I hope this version can inspire new practice.
At the core of this new practice is a different imaginary—one in which faculty and students learn the skills to change the world not only through publishing or patenting or profit-seeking outputs, but also through the skills of social organizing and political communication; through rigorous policy design and implementation; and through public-interest technology development and knowledge mobilization for public purpose. Such knowledge today remains relegated to more explicitly political organizations, even if many—such as think tanks and civic organizations—are also not-for-profits like universities. If universities want to deliver on the goal of having a beneficial impact on their community, their state, their nation, or their world, they must find a way to inculcate these skills.
What better way to start than by asking questions?