AI Aids the Pretense of Military “Precision”

Artificial intelligence is the latest promise of a technological solution to the intractable “fog of war.” In Ukraine and Gaza, enthusiasts have proclaimed the advent of AI-driven warfighting. In October 2023, Ukrainian technologists confirmed that AI-enabled drones could identify and target 64 types of Russian “military objects” without a human operator; meanwhile, the Israel Defense Forces website states that an AI system generates recommended targets, reportedly at an unprecedented rate. These claims raise enormous questions about the validity of the assumptions built into such systems regarding who constitutes an imminent threat, and about the legitimacy of their targeting functions under the Geneva Conventions and the laws of war.

Considering military investments in AI as part of a sociotechnical imaginary is helpful here. Developed within the field of science and technology studies, the concept of sociotechnical imaginaries describes collectively imagined forms of social order as materialized by scientific and technological projects. These include aspirational futures that sustain investments in the military-industrial-academic complex. Iconic examples of AI-enabled warfighting in the present moment include battle management interfaces like Palantir’s AI platform.

To function in the real world, these platforms require very large, up-to-date datasets (of labeled “military objects” or biometric profiles of “persons of interest,” for example) from which models can be developed. In the case of threat prediction and targeting, neither the US Department of Defense nor allied militaries make public the details necessary to assess validity. But in the analogous case of predictive policing, an investigation by The Markup found that fewer than 1% of data-based predictions actually lined up with reported crimes. And generative AI introduces new uncertainties: both the provenance of the data and the reliability of the resulting information are hard to verify. That is particularly dangerous for “actionable military intelligence,” which is used for targeting and for designating imminent threats.

We should be deeply skeptical of the promotion of AI as a solution to the fog of war, which imagines that the right technology will find the important signals amid the noise. This faith in technology constitutes a kind of willful ignorance, as if AI is a talisman that sustains the wider magical thinking of militarism as a path to security. In the words of performance artist Laurie Anderson (quoting her meditation teacher), “If you think technology will solve your problems, then you don’t understand technology—and you don’t understand your problems.”

Critical inquiry into the realities of war can help challenge the logics through which militarism perpetuates its imaginary of rational and controllable state violence while obscuring war’s ungovernable chaos and unjustifiable injuries. Although there are valid reasons that military forces exist in today’s world, we should question the narratives that underwrite the billions of dollars funneled into algorithmically based warfighting. We need to redirect resources to creative projects in de-escalation, negotiated settlements that offer true security for all, and eventual demilitarization. While the techno-solutionist imaginaries of militarism are longstanding, so are their limits as a basis for sustainable peace.


Suchman, Lucy. “AI Aids the Pretense of Military ‘Precision’.” Issues in Science and Technology (): 81–82.