Google “Toaster Laser”
In “How Dual Use Puts Research Under the Microscope” (Issues, Spring 2025), Håvard Rustad Markussen asks a pragmatic and incredibly topical question: Is “dual use” still an appropriate term for export control discussions, especially in a higher education environment? He describes dual use as “technology with both civilian and military applications,” echoing a definition nearly identical to that used in export control regimes across the wider Western world. Yet the very notion of dual use is fraught with interpretive difficulty—even a toaster can have military applications (if you doubt that, do a Google search for “toaster laser”). If all research and all technology are dual use (as Markussen contends), then how does one regulate and control access to and availability of potentially hazardous technologies?
In a way, the semiotics of export control bear resemblance to broader discussions on research security, where regulatory controls are applied to the conduct of research in higher education institutions in the interests of national or economic security. Such controls are intended to protect sensitive research, with fears of access, dissemination, or diversion focused on economic competitors or geopolitical adversaries (rather than “allies” or “friends,” who are usually exempted from such rules).
Research security is not, in Markussen’s example, a Norwegian problem, nor a European Union problem, but a transnational problem—with a national focus. Each nation, and indeed each institution, must decide its appetite for risks inherent in foreign collaboration in science and technology in a contested and fractured geopolitical environment. Too open, and one risks becoming a case study in foreign espionage and technology theft (just ask the president of Stanford University, which recently experienced intrusion by agents of the Chinese Communist Party). Too closed, and the institution suffers in attracting funding and talent, incurs reputational damage, and can suffer a fall in the global rankings that adjudge such institutions.
Markussen’s solution is a “more nuanced and careful risk assessment framework”—and he’s right. One might point to the experience of the Dutch virologist Ron Fouchier in his nation’s courts in 2013, when he was forced to obtain an export control permit to publish a paper about a genetically modified influenza virus. So the challenge is really about a commonality of language: articulating who the risk is to, what the risk is of, and how the risk might manifest. Governments are increasingly keen to lock down research that offers them national advantage, whereas scientists and scholars are more likely to want to share that research for the good of humanity. There needs to be a far more open approach by governments to academia, and a far more willing approach by academia to the political, economic, and diplomatic impacts of their work—the uncomfortable middle ground. To quote E. William Colglazier in a Forum letter in the same Issues edition, “Basic research is still in the … national interest if done with our ‘eyes wide open’ about potential security risks.” What is required is a common language, one that eschews “dual use” in favor of a dialect of risk that government, academia, industry, and the general public can all understand.
Brendan Walker-Munro
Senior Lecturer (Law)
Southern Cross University, Gold Coast, Australia
Håvard Rustad Markussen’s article highlights the inflection point facing the global research community. After a 40-year period of expanding internationalization, the system is shifting toward nationalism and a focus on security. Global ties are rapidly fraying. The result will be a slowing of scientific output and innovation.
Since 1980, global scientific cooperation has grown, at times at exponential rates. It was an era when openness, international collaboration, and the free flow of knowledge drove global innovation. The open scientific model delivered enormous benefits, particularly to advanced countries such as the United States and its allies. Through unrestricted collaboration, nations accelerated technological breakthroughs, fostered scientific mobility, and cultivated ecosystems that thrive on shared expertise. The ability to tap into global talent and research networks propelled innovation in a range of important fields, including biotechnology and artificial intelligence, reinforcing economic and strategic advantages for the world’s leading scientific powers.
The openness of the system allowed China to rapidly develop its own science system. This came about, however, without the hoped-for political liberalization that most assumed would result from openness. An autocratic China’s rise to world-leading levels in many fields has raised security concerns in the United States and elsewhere. Voices in favor of decoupling from China are now advancing strategies to do so. As Markussen notes: “By some arguments, all research is dual-use.”
While restricting knowledge flows and compartmentalizing scientific advancements might serve the interests of rising powers such as China—allowing them to develop independent capabilities insulated from foreign influence—it carries significant costs for scientifically advanced countries. The United States, the European Union, and other research leaders have built their strength on an interconnected system where openness fuels progress. Cutting ties and erecting barriers will inevitably weaken the innovation pipeline, slow scientific discovery, and limit the collaborative ventures that drive breakthroughs.
The compromise between these two extremes of openness and decoupling lies in what has been called a “derisking strategy” of blacklisting specific companies or institutions or scrutinizing collaborations, investments, and trade in technology or products closely related to military or defense applications. The US government has a long history of such control, but a National Security Presidential Memorandum issued in 2021, NSPM-33, extended controls even to basic research. Guidelines on how to implement the memorandum, however, are vague. Without a coherent framework, derisking threatens to sequester most cooperative research as people avoid even the appearance of aiding a rival nation.
If a derisking strategy is to serve as a viable middle ground, it must define risk in a way that protects national interests without dismantling the foundation of scientific progress. A well-calibrated framework should differentiate between legitimate security concerns and the natural exchange of knowledge that underpins technological advancement. Otherwise, advanced countries risk losing the very advantages that made them leaders in the global scientific order.
Caroline S. Wagner
Professor of Public Policy
John Glenn College of Public Affairs
The Ohio State University
Håvard Rustad Markussen examines how export control regulations, originally intended to prevent the misuse of scientific research, can inadvertently hinder international collaboration without delivering corresponding security benefits. His observations add a valuable perspective to the ongoing debate surrounding the risks of securitizing research.
As the global landscape becomes increasingly multipolar, scientifically advanced nations are responding to both real and perceived threats of foreign interference and national security risks. Among the most commonly used tools are export controls. These measures aim to prevent sensitive technologies from falling into the wrong hands, but they also present significant challenges for research institutions, particularly those with extensive international partnerships.
The central challenge lies in balancing national and organizational security concerns with the pursuit of scientific progress. Export controls require research organizations to navigate complex, inconsistent regulations across nations, assess the dual-use potential of their work, and comply with shifting legal standards. Regulations are also inconsistently interpreted across organizations. While some cases are clear, most fall into ambiguous territory where it is uncertain whether restrictions should apply. The ambiguity highlights the need to carefully weigh the benefits of open research against the risks of overzealous application of regulations, which could threaten academic freedom, deter international talent, and stifle innovation. Researchers may self-censor or avoid sensitive but important areas of inquiry, while increased surveillance and government oversight raise serious ethical and legal concerns.
The potential for political weaponization of regulations further complicates the issue. For instance, recent actions by US lawmakers show how dual-use concerns can be politicized. Of particular note, three prominent Republican members of Congress recently sent a letter to Harvard University demanding “transparency and accountability regarding the university’s partnerships with foreign adversaries and entities implicated in human rights abuses.”
While some research security concerns may be valid, there is a pressing need to evaluate whether current practices genuinely enhance safety or are being used for other purposes. Security policies must be implemented with greater transparency and grounded in demonstrable risks. Once enacted, such measures are difficult to reverse, making it essential that they be justified by evidence-based threats with direct implications for national security. Markussen illustrates the risks with the case of a Norwegian professor who was arrested for allowing Iranian PhD students access to a device classified as dual use because of its potential military applications. Although the professor was later acquitted, the incident led to stricter controls on international collaboration in Norway, particularly with researchers from countries such as Iran and China. The case exemplifies a broader global trend: export controls, originally designed to prevent the misuse of dangerous technologies, are increasingly curbing scientific openness.
To achieve a workable balance, policymakers and compliance officers must prioritize proportionality and evidence-based policymaking. Targeted, evidence-based measures have a higher chance of addressing legitimate threats while maintaining levels of openness conducive to scientific advancement. We, however, live in a changing world, where conditions for open science are weakening.
Tommy Shih
Associate Professor
Lund University
KTH Royal Institute of Technology