The Limits of Dual Use
Distinguishing between military and civilian applications of scientific research and technology development has become increasingly difficult. A more nuanced framework is needed to guide research.
Research and technologies designed to generate benefits for civilians that can also be used for military purposes are termed “dual use.” The concept of dual use frames and informs debates about how such research and technologies should be understood and regulated. But the emergence of neuroscience-based technologies, combined with the dissolution of any simple distinction between civilian and military domains, requires us to reconsider this binary concept.
Not only has neuroscience research contributed to the development and use of technology and weapons for national security, but a variety of factors have blurred the very question of whether a technological application is military or civilian. These factors include the rise of asymmetric warfare, the erosion of clear differentiation between states of war abroad and defense against threats “at home,” and the use of military forces for homeland security. It is increasingly difficult to disentangle the relative contributions made by researchers undertaking basic studies in traditional universities from those made by researchers working in projects specifically organized or funded by military or defense sources. Amid such complexity, the binary world implied by “dual use” can often obscure rather than clarify which particular uses of science and technology are potentially problematic or objectionable.
To help in clarifying matters, we argue that policy makers and regulators need to identify and focus on specific harmful or undesirable uses in the following four domains: political, security, intelligence, and military (PSIM). We consider the ways that research justified in terms of socially constructive applications—in the European Human Brain Project, the US BRAIN Initiative, and other brain projects and related areas of neuroscience—can also provide knowledge, information, products, or technologies that could be applied in these four domains. If those who fund, develop, or regulate research and development (R&D) in neuroscience, neurotechnology, and neurorobotics fail to move away from the dual-use framework, they may be unable to govern its diffusion.
The multilateral treaties most significant for developments in the neurosciences are the Biological Weapons Convention and the Chemical Weapons Convention. These conventions’ review conferences, held every five years, fail to capture the rapidly evolving advances in the neurosciences, and the treaties lack adequate implementation and oversight mechanisms to keep pace with emerging capabilities. Beyond disarmament treaties, countries use various export-control instruments to regulate the movement of dual-use goods. This regulatory architecture, however, is constructed around moral and ethical interpretations of what constitutes good (dual use) and bad (misuse) applications. Dual-use regulations typically allow cross-border trade in technologies intended for good uses, albeit often under substantial export controls, while prohibiting trade where there is significant potential for terrorist or criminal use.
But the idea of good and bad uses raises intractable issues of interpretation and definition concerning applications of research. Indeed, governments and regulatory institutions even differ in their approaches to defining and delineating dual use in research. Funding agencies sometimes prohibit research that is intended to be used for military applications. This is true for the Human Brain Project, for example. However, regulating intentions can hardly prevent research that is benign in intent from later being used for hostile purposes.
Regulators have sought to specify more precisely what kinds of R&D should be prohibited in which contexts. For example, in the context of trade, the European Commission, the politically independent executive arm of the European Union (EU), states that “‘dual-use items’ shall mean items, including software and technology, which can be used for both civil and military purposes.” Horizon 2020, the EU program that funds the Human Brain Project, is more specific, requiring applicants for funding to ensure that “research and innovation activities carried out under Horizon 2020 shall have an exclusive focus on civil applications,” and they are required to complete an ethics checklist to demonstrate that they comply with this requirement.
The Office of Science Policy of the US National Institutes of Health defines “dual use research of concern” to mean “life sciences research that, based on current understanding, can be reasonably anticipated to provide knowledge, information, products, or technologies that could be directly misapplied to pose a significant threat with broad potential consequences to public health and safety, agricultural crops and other plants, animals, the environment, material, or national security.” This definition is meant to be more specific and to help policy makers pay attention to particular problematic research and technologies, yet science or technology that might be “misapplied” might also be legitimately justified for national security purposes. The criteria used to distinguish between legitimate use and misuse or misapplication are inevitably matters of context that depend on how the technologies will be applied across the four domains we have identified.
Political applications
By political applications we refer to the use of neuroscience or neurotechnologies by government authorities to govern or manage the conduct of individuals, groups, or populations—for example, by changing or manipulating attitudes, beliefs, opinions, emotions, or behavior. We will focus on one example from the developing field of “neuropolitics”: the uses of neuroscience research into explicit and implicit attitudes and decision-making. This is especially timely given the Cambridge Analytica scandal, in which personal data collected by the political consulting company were used to build psychological profiles and target political advertising in the run-up to the 2016 US presidential election.
The political scientist Ingrid Haas has said that the focus of political science has traditionally been on understanding the behavior of groups, but new methods in the neurosciences can help to explain how individual political behavior relates to the behavior of a collective. Here, as elsewhere, the hope is that if one understands the neurobiology of decision-making, and especially the unconscious factors shaping decision-making, it will be possible to develop more effective interventions. Brain imaging technologies such as functional magnetic resonance imaging (fMRI) have been used to investigate the ways in which people form and revise attitudes and make evaluations. For example, in 2006 Kristine M. Knutson and colleagues used fMRI to measure in human subjects the activation of the part of the brain known as the amygdala—understood to be active in the processing of emotion—to study attitudes toward Democratic and Republican politicians. In a further study in 2014, Woo-Young Ahn and colleagues claimed that brain imaging can accurately predict political orientation, distinguishing, for example, between liberals and conservatives in the United States.
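To make concrete what such predictive claims involve, here is a minimal sketch of the pattern-classification approach on which studies of this kind rely. The data are synthetic stand-ins, and the signal strength and resulting accuracy are purely illustrative; nothing here reproduces the cited studies’ methods or results.

```python
# Sketch of multivoxel pattern classification, the general approach behind
# claims that brain imaging can "predict" political orientation.
# All data here are synthetic placeholders, not real fMRI recordings.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_subjects, n_voxels = 80, 500
# Simulated voxel activations (e.g., responses to emotionally salient images).
X = rng.normal(size=(n_subjects, n_voxels))
# Simulated binary labels (e.g., self-reported political orientation).
y = rng.integers(0, 2, size=n_subjects)
# Inject a weak group difference into a few voxels so there is something to find.
X[y == 1, :20] += 0.5

# Cross-validated accuracy of a linear classifier on the voxel patterns.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```

Real studies add elaborate preprocessing and careful validation, but the classifier at the core is often no more complicated than this, which is part of why the approach scales so readily once labeled data exist.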
In contrast to imaging technologies, recent developments in “noninvasive” brain stimulation—such as transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS)—can be used to directly alter or disrupt brain activity. For example, in 2015 Roberta Sellaro and colleagues examined the neural correlates of implicit attitudes by applying tDCS to subjects’ medial prefrontal cortex while asking them to “categorize in-group and out-group names and positive and negative attributes.” The researchers found that “stimulation decreased implicit biased attitudes toward out-group members,” concluding that tDCS could be used to reduce prejudice “toward members of social out-groups.”
Despite the limitations of these imaging and stimulation technologies—for example, it is not known whether the effects occur in a particular region of the brain or whether other regions are also being stimulated—research investments have already been made in the further development of noninvasive brain stimulation technology, such as in the EU-funded Hyper Interaction Viability Experiments (HIVE) project and in the US BRAIN Initiative. Although brain imaging and stimulation technologies were originally envisioned to aid in the diagnosis and treatment of conditions such as depression and schizophrenia, these technologies clearly have potential political uses in devising more effective ways for authorities to shape or manage the opinions and attitudes of citizens. Many such uses will, of course, merely give a neurological basis to the ways in which authorities have long tried to shape the decisions of their citizens (for example, in antismoking or healthy eating campaigns), and may raise few ethical concerns. But one can imagine less benign uses, such as shaping attitudes toward particular minorities, manipulating choices of political parties in elections, or managing protests and challenges to political regimes.
Security applications
There are many ways in which neuroscience and neurotechnologies can be deployed in the name of security. We focus here on the use of calmatives for national security—biochemical agents that act on the central nervous system to incapacitate those exposed to them. These neurotoxic weapons have become possible only because of advances in neuroscience, pharmacology, and physiology.
International discussions on unconventional weapons have to date focused on chemical, biological, and nuclear weapons. But a number of government agencies, private organizations, and nongovernmental organizations have expressed concerns that calmatives and other new agents that act on the central nervous system seem to fall outside the scope of the Biological Weapons Convention or the Chemical Weapons Convention, and might be permitted under Article II.9(d) of the latter convention, which excludes from its prohibition of toxic chemicals their use for “law enforcement including domestic riot control purposes.” Yet calmatives can be lethal, as tragically demonstrated by the Moscow theater hostage crisis in 2002, in which many of the rescued hostages died as a result of exposure to an aerosolized fentanyl derivative.
Israel has a history of using incapacitating chemical agents, such as capsicum pellets, in policing. And although Israel receives funding through the EU’s Horizon 2020 for its participation in the Human Brain Project and other neuroscience research, the country is not a party to either the Chemical Weapons Convention (signed, but not ratified) or the Biological Weapons Convention (not signed or ratified). Very limited public information exists about whether Israel continues to develop and test chemical and biological weapons, but Alastair Hay and colleagues have shown that injuries caused by capsicum pellets used against protestors in 2005 were more severe than those reported for similar chemical agents, making it plausible to assume that collaborations between the Israel Institute for Biological Research and the Ministry of Defense include continued R&D on neurotoxin-based weapons.
The specific issue of the development of neurotoxic chemicals for domestic security might be addressed by the extension or modification of existing international treaties and conventions. As one manifestation of such concerns, a joint paper issued on behalf of 20 countries (including the United States and the United Kingdom) in 2015, titled “Aerosolisation of Central Nervous System-Acting Chemicals for Law Enforcement Purposes,” argued that the Chemical Weapons Convention should not offer an exemption for law enforcement for use of neurotoxins in security applications.
However, this approach would not address the issues raised if current research on “neural circuits”—one focus of the US BRAIN Initiative—opened the possibility of their modification for security purposes. It is clear that we need new ways of thinking about how these and similar security uses of neuroscience and neurotechnology can be evaluated and regulated.
Intelligence applications
Neuroscience research offers an array of potential applications in the intelligence domain—for example, in identifying criminal suspects, for lie detection, and in surveillance. US neuroscience research funded through the Defense Advanced Research Projects Agency (DARPA), for example, has focused on technologies that might accomplish “brain reading” or “mind reading” to identify malicious intent. A relatively recent technique, near-infrared spectroscopy (NIRS), uses changes in blood oxygenation in particular brain regions as a proxy for brain activity. fMRI operates on the same principle, but instead of having the subject lie immobilized in a large machine, NIRS involves the subject donning a kind of helmet with multiple sensors that use infrared light that can penetrate the intact skull to measure changes in blood oxygenation. The initial uses of NIRS were medical, but as early as 2006, Scott Bunce and colleagues, supported by funds from DARPA’s Augmented Cognition Program, the Office of Naval Research, and the Department of Homeland Security, pointed out its many practical advantages over other methods of brain imaging, because it enables subjects to move about and carry out tasks in a relatively normal environment. Bunce suggested that because of these advantages, NIRS had significant potential for the detection of deception and for other investigations that need to be done in clinical offices or environments other than laboratories.
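To illustrate the principle, here is a toy version of the computation by which NIRS recovers a hemodynamic signal from light measurements, using the modified Beer-Lambert law. The extinction coefficients, separation distance, path-length factor, and readings below are rounded illustrative values, not calibrated constants from any real device.

```python
# Toy reconstruction of hemoglobin concentration changes from NIRS optical
# density changes at two wavelengths, via the modified Beer-Lambert law.
# All numerical values are illustrative, not calibrated physiological constants.
import numpy as np

# Extinction coefficients for [HbO2, HbR] at each wavelength (rows: 760, 850 nm).
# Deoxygenated hemoglobin absorbs more at 760 nm, oxygenated more at 850 nm.
epsilon = np.array([[1.4, 3.8],
                    [2.5, 1.8]])
d = 3.0    # source-detector separation, cm
dpf = 6.0  # differential path-length factor (assumed equal at both wavelengths)

# Measured changes in optical density at the two wavelengths (made-up readings).
delta_od = np.array([0.012, 0.021])

# Modified Beer-Lambert law: delta_od = (epsilon @ delta_c) * d * dpf.
# With two wavelengths this is a 2x2 linear system in the two unknowns.
delta_c = np.linalg.solve(epsilon * d * dpf, delta_od)
print(f"delta[HbO2] = {delta_c[0]:+.2e}, delta[HbR] = {delta_c[1]:+.2e}")
```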
Neuroscientists, such as John-Dylan Haynes and colleagues, also claim that machine learning coupled with fMRI is beginning to be able to identify specific thoughts or memories in human subjects. Although this work is currently in its infancy, developments in such brain reading technologies will have obvious appeal to intelligence and surveillance organizations—for example, in determining when an individual intends to commit a terrorist act or whether a suspect might have important security-related information that they are unwilling to divulge voluntarily or under interrogation. But such potential applications raise not only technical questions of reliability but also ethical questions of violation of “neural privacy” in cases where information derived from involuntary brain scans is used as the basis of government action against individuals.
More effective lie detection also has obvious application to the intelligence domain. Whereas the conventional polygraph monitors physiological indicators such as blood pressure and pulse, companies such as Cephos Corporation and No Lie MRI claim that fMRI-based technologies can identify, from specific patterns of brain activity, when a suspect is lying or has a memory of particular words or scenes linked to an actual or intended crime. However, the reliability of such technologies is not yet well established. It is difficult to extrapolate from laboratory-based studies in which individuals are instructed to lie or tell the truth to real-life situations where the very fact of being accused of malicious intent, or knowledge of guilt, generates unknown patterns of brain activity, and where those who are genuinely guilty are likely to employ multiple techniques to disguise their deception. Although the results of such technologies may ultimately be inadmissible in the courtroom—as with the polygraph—they may well find use in intelligence applications such as interrogation and investigatory procedures.
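A simple base-rate calculation shows why even impressive laboratory accuracy can mislead in screening settings. The sensitivity, specificity, and prevalence below are assumptions chosen purely for illustration:

```python
# Toy base-rate calculation: a lie detector with high laboratory accuracy
# still produces mostly false accusations when deception is rare in the
# screened population. All numbers are illustrative assumptions.
sensitivity = 0.90  # P(flagged | lying)
specificity = 0.90  # P(cleared | truthful)
prevalence = 0.01   # assumed fraction of screened subjects who are lying

p_flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
ppv = sensitivity * prevalence / p_flagged  # P(lying | flagged)
print(f"probability a flagged subject is actually lying: {ppv:.1%}")
# Prints about 8.3%: roughly 11 of every 12 flagged subjects would be truthful.
```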
Another intelligence-related application of neurotechnology (which may also find security-related uses) is facial recognition, now being adopted by many nations for intelligence gathering—although ethical challenges of privacy and consent remain unaddressed. Leading companies in the field of deep learning, such as Google DeepMind, have developed algorithms that are effective on static images but not on moving ones. Large-scale brain mapping projects are expected to lead to advances in neuromorphic computer design and “smarter” algorithms that can improve intelligence-gathering capabilities, such as facial recognition in videos, thus greatly enhancing surveillance capacities and potentially enabling further applications, such as enhancing the targeting capabilities of autonomous or semiautonomous weapon systems.
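Part of this still-image/video gap is an aggregation problem, and one common workaround is to pool per-frame results over time. The sketch below simulates this with a stand-in for a face-embedding network; the embeddings are random vectors, not outputs of any real model:

```python
# Sketch of temporal pooling for video face recognition: averaging noisy
# per-frame embeddings suppresses frame-level noise (blur, pose, lighting).
# The "network" here is simulated; no real face model is used.
import numpy as np

rng = np.random.default_rng(1)
dim = 128

# Enrolled reference embedding for one identity (unit-norm vector).
identity = rng.normal(size=dim)
identity /= np.linalg.norm(identity)

def frame_embedding(noise: float = 0.8) -> np.ndarray:
    """Simulate a per-frame embedding of the same identity, corrupted by noise."""
    v = identity + noise * rng.normal(size=dim)
    return v / np.linalg.norm(v)

# A single still image gives one noisy embedding; a video gives many.
still = frame_embedding()
frames = np.stack([frame_embedding() for _ in range(50)])
pooled = frames.mean(axis=0)
pooled /= np.linalg.norm(pooled)

print(f"still-image similarity to gallery: {still @ identity:.2f}")
print(f"video-pooled similarity to gallery: {pooled @ identity:.2f}")
```

The pooled score comes out markedly higher, which is why surveillance pipelines able to exploit temporal structure can recognize faces that single frames miss.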
Military applications
There are many military applications of contemporary developments in neuroscience and neurotechnology, but we focus here on warfighter enhancement through cortical implants and on noninvasive brain-computer interfaces (BCIs). Drugs originally used to improve cognition in patients with schizophrenia and dementia have been used to increase the cognitive performance of soldiers and to combat sleep deprivation. However, much current neuroscience and technology research on warfighter enhancement relates not to drugs, but to prosthetics. DARPA has for more than a decade invested heavily in this area, in particular aiming to develop cortical implants that will control prosthetic limbs fitted to soldiers injured in combat. As described in a 2006 paper by Andrew B. Schwartz and colleagues, one goal is to develop “devices that capture brain transmissions involved in a subject’s intention to act, with the potential to restore communication and movement to those who are immobilized.”
New approaches are now being tested for transmitting signals wirelessly to enable a subject to move a prosthetic limb with thought alone. The technique, which uses a cerebral implant that records the electrical activity of populations of neurons in motor cortical areas, was used initially with monkeys and now is being tried in humans by Jennifer L. Collinger and colleagues. Although such advances address important clinical issues of restoring function to those who have lost it by military or other injury, they also have potential warfighting applications. In theory at least, a single individual, equipped with a cortical implant, could wirelessly control neuroprosthetic devices in one or many robots situated in distant locales. And although it currently requires invasive surgery to install the cortical implants, R&D funded by DARPA is under way to miniaturize the chips and simplify their insertion.
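A minimal sketch of the decoding step such implants depend on: fitting a linear map from population firing rates to intended movement velocity. The cosine-like tuning and firing rates below are simulated, and the least-squares decoder stands in for the more elaborate filters used in real systems:

```python
# Sketch of linear decoding from motor-cortex population activity to a 2D
# velocity command, the core computation behind thought-controlled prosthetics.
# Firing rates are simulated with cosine-like tuning; all data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_samples = 96, 2000

# Calibration session: known 2D movement velocities...
velocity = rng.normal(size=(n_samples, 2))
# ...and each neuron's response, proportional to the dot product of velocity
# with its preferred direction, plus noise.
preferred = rng.normal(size=(n_neurons, 2))
rates = velocity @ preferred.T + 0.5 * rng.normal(size=(n_samples, n_neurons))

# Fit the decoder by least squares: velocity ~ rates @ W.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# At run time, translate fresh neural activity into a velocity command.
intended = np.array([[1.0, -0.5]])  # the movement the subject intends
new_rates = intended @ preferred.T  # the population response it evokes
decoded = new_rates @ W
print("intended:", intended.ravel(), "decoded:", decoded.ravel().round(2))
```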
In addition, Michael Tennison and Jonathan Moreno in 2012 described how a project within DARPA’s Augmented Cognition Program that used noninvasive brain-computer interfaces “sought to find ways to use neurological information gathered from warfighters to modify their equipment.” For example, BCIs would be utilized in a so-called “cognitive cockpit” that would aim to avoid information overload by adjusting the ways information is displayed and conveyed to the pilot. DARPA has also funded research using noninvasive BCIs in its Accelerated Learning program, which aims to develop new training methods to accelerate improvements in human performance, and in its Narrative Networks program to quantify the effect of narratives on human cognition and behavior. And the agency’s Neurotechnology for Intelligence Analysts and Cognitive Technology Threat Warning System programs have utilized noninvasively recorded “target detection” brain signals to improve the efficiency of imagery analysis and real-time threat detection. The threat-warning system is intended to integrate a range of sensory data from military personnel on the ground, much of which might be below their level of awareness, and then to alert them about potential threats.
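The target-detection programs mentioned above rest on a simple signal-processing idea: target images evoke a characteristic positive EEG deflection roughly 300 milliseconds after onset. Here is a toy version with simulated signals; the detection window and amplitudes are illustrative, not parameters from any DARPA system:

```python
# Sketch of EEG-based "target detection" triage of the kind used in rapid
# serial visual presentation (RSVP) imagery analysis: flag image presentations
# whose evoked response shows a P300-like positive bump near 300 ms.
# All signals are simulated; window and amplitudes are illustrative.
import numpy as np

rng = np.random.default_rng(3)
fs = 250                       # sampling rate, Hz
t = np.arange(0, 0.8, 1 / fs)  # 800 ms epoch after each image onset

def epoch(is_target: bool) -> np.ndarray:
    """Simulate one single-channel EEG epoch; targets get a P300-like bump."""
    signal = 2.0 * rng.normal(size=t.size)  # background EEG noise
    if is_target:
        signal += 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05**2))
    return signal

# Score each presentation by mean amplitude in a 250-450 ms window, then
# pass the highest-scoring images to a human analyst for review.
window = (t >= 0.25) & (t <= 0.45)
images = [("tank", True), ("field", False), ("road", False), ("bunker", True)]
scores = {name: epoch(is_target)[window].mean() for name, is_target in images}
flagged = sorted(scores, key=scores.get, reverse=True)[:2]
print("flagged for analyst review:", flagged)
```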
Noninvasive BCIs may find many civilian uses as well. Just as the devices might aid injured soldiers, they are showing increased potential in the broader population for using brain signals from motor cortex areas to control prosthetic limbs. BCIs have been developed that enable individuals who have lost the use of their limbs to communicate in writing by controlling a computer cursor, and, more recently, that enable some individuals apparently unable to communicate at all (those considered to be in a persistent vegetative state or diagnosed with locked-in syndrome) to answer simple questions by activating different brain areas. Such brain-computer interfaces are also being developed in the gaming industry to enable hands-free play.
Such devices depend on complex algorithms to link information flows on one hand with brain activity on the other. They operate below the level of human consciousness and depend on the calculative functions of algorithms that are not known to the operators, or even to anyone who purchases and uses the devices, thus raising difficult issues of volition and moral accountability that cannot be captured in standard discussions about intent.
Beyond dual use
Our message is that the dual-use lens is not adequate for scientists and policy makers to anticipate the ways that neuroscience and neurotechnology may remake the regulatory and ethical landscape as the various technologies bleed into and emerge from the PSIM domains we have described. Current treaties and regulatory frameworks that focus on export controls are not designed to deal with the complex and far-reaching consequences that future brain research will have at the intersections of technological advances, or with the broad arenas where government and nongovernment actors seek to advance their interests. The Ethics and Society division of the Human Brain Project is preparing an opinion on “Responsible Dual Use” that includes more detailed recommendations. The opinion will be available at https://www.humanbrainproject.eu/en/social-ethical-reflective/.
Emerging alternatives to existing dual-use governance regimes focus on self-regulation by industry, universities, and philanthropic organizations, especially in relation to artificial intelligence, machine learning, and autonomous intelligent systems. Although these efforts to couple research and technology development with ethical and social considerations are undoubtedly important, their capacity to actually shape, let alone terminate, lines of research that raise fundamental ethical and social issues has not been demonstrated.
The University of Bradford’s Malcolm Dando, a leading advocate for stronger efforts at chemical and biological weapons disarmament, has consistently stressed the key role that can be played by educating neuroscientists about the dual-use potential and risks of their work. But he remains skeptical that such efforts can be sufficient to “match the scale of the security problems that will be thrown up by the continuing scope and pace of advances in the life and associated sciences in coming decades.” We urge regulators concerned with dual use in the United States, Europe, and elsewhere to think beyond the dangers of militarization and weaponization when considering how to regulate and monitor advances in neuroscience research today, and to pay attention to the political, security, intelligence, and military domains of application.