Future Tense Fiction

The Algorithmic Fog of War

Rey Velasquez Sagcal's illustration for "The Algorithmic Fog of War" by Candace Rondeaux

Picture an artillery commander in eastern Ukraine staring at two screens showing conflicting target recommendations. The army’s Delta situational awareness system flags one position as the highest priority. Kropyva, a separate targeting and command-and-control app, suggests another. Both systems draw from the same data streams—over 600,000 enemy location reports processed monthly—but because they were built by different providers at different times in Ukraine’s decade-long conflict with Russia, they’ve reached different conclusions about which target matters most. The commander has minutes to decide. Neither system explains its reasoning.

When he questions the recommendations, the institutional answer is clear: Trust the algorithm and strike. The deficiency isn’t in the black box. It’s in the human hesitation.

This is the fog of war in 2026. Not the traditional fog—lacking intelligence about enemy positions—but a new confusion about the logic of your own decisionmaking systems. The algorithms present confident recommendations while concealing how they reached them. And military command structures are reorganizing around this opacity, treating human judgment not as a safeguard but as a glitch to be minimized.

Andrew Liptak’s Future Tense Fiction story “Deficiency Agent” captures this dynamic with precision. In Liptak’s story, Parker, a systems support officer, cannot extract from TION—the artificial intelligence tactical navigation system guiding his convoy—why it insists on a specific route that seemingly adds avoidable time and distance to a mission in a hostile environment. Lieutenant Jacobs, fresh from officer training where he learned not to question algorithmic outputs, overrides Parker’s objections. The convoy drives into an ambush. A soldier is critically wounded. Hours later, they discover why TION rerouted them: to clean dust off a sensor tower lens.

The characters in Liptak’s story aptly describe Parker’s role as systems support officer as a “deficiency.” The terminology is diagnostic. Parker exists because TION isn’t reliable enough to operate without human supervision. Yet the deficiency his role is understood to compensate for isn’t the system’s unreliability; it’s human judgment interfering with algorithmic authority. He’s inserted into the command loop as a failsafe, but marginalized whenever he attempts to exercise that function.

The contradiction is structural: The system requires human oversight while systematically devaluing it. The human becomes the glitch. When Parker questions TION’s route selection, he’s seen not as performing his oversight function but as an obstacle to efficient operations. Lieutenant Jacobs, trained to trust algorithmic outputs, asks if Parker disagrees with “him”—the gendered pronoun revealing how the system has been personified while the human has been depersonalized.

There is no question that Ukraine’s Delta situational awareness system has transformed the speed and coordination of artillery targeting, giving Ukrainian forces a genuine advantage. But commanders still face the black-box problem when systems conflict or when they need to judge whether a recommendation makes tactical sense. The opacity becomes especially dangerous when adversaries interfere. Ukrainian forces relying on Starlink-dependent command systems have faced persistent Russian electronic warfare aimed at their connectivity. Before Russia’s May 2024 offensive in the Kharkiv region, Ukraine’s 92nd Assault Brigade reported that Starlink service had become intermittent; without reliable connectivity, the brigade couldn’t quickly share intelligence and fell back on text messages.

When Ukrainian commanders face disruptions in Delta or Kropyva, they must decide quickly: Is this a technical glitch? Jamming? Data corruption? Commanders receive recommendations—or don’t—without visibility into whether the underlying data are reliable. The algorithmic fog means you can’t distinguish between system malfunction and adversarial action.

And even when these systems are working as designed, they can optimize for objectives humans cannot see or evaluate. Liptak’s TION illustrates this perfectly.

TION successfully optimized the convoy’s path to complete its highest-priority objective. It performed exactly as designed. The problem is that TION treats combat objectives and maintenance tasks as equivalent variables in an optimization function humans cannot interrogate. Perhaps the sensor to be cleaned was critical to theater operations. Perhaps it monitored adversary movements that would have endangered dozens of other convoys. Parker and Jacobs have no way to evaluate these possibilities because TION doesn’t—or can’t—explain them.
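
To make the failure mode concrete: below is a minimal, hypothetical sketch, not Liptak’s fictional TION and not any fielded system, of a route scorer that folds transit time, threat exposure, and maintenance chores into a single weighted score. With weights the crew never sees, a sensor-cleaning errand can quite rationally outrank a shorter, safer road.

```python
# Toy illustration only: a route scorer that collapses mission risk and
# maintenance tasks into one opaque scalar. All weights and routes are
# invented for this sketch.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    transit_minutes: float          # time spent exposed on the move
    threat_exposure: float          # 0..1, modeled ambush/IED risk
    maintenance_tasks_served: int   # e.g., sensor towers passed that need service

# Hypothetical weights, fixed at configuration time and never surfaced
# to the convoy commander.
W_TIME, W_THREAT, W_MAINT = -0.5, -40.0, 25.0

def score(route: Route) -> float:
    return (W_TIME * route.transit_minutes
            + W_THREAT * route.threat_exposure
            + W_MAINT * route.maintenance_tasks_served)

direct = Route("direct", 35, 0.20, 0)
detour = Route("detour past sensor tower", 55, 0.35, 1)

# The crew sees only the ranking, never the weights that produced it.
print(max([direct, detour], key=score).name)   # -> "detour past sensor tower"
```

The point isn’t these particular numbers; it’s that the only output the humans ever see is the ranking, so nothing signals that a maintenance term is doing the deciding.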

The wagon-of-phones scenario in “Deficiency Agent” should worry every military planner. A child pulling a wagon filled with old mobile phones spoofs cellular traffic to create the appearance of a crowded street. TION interprets this phantom data as genuine and redirects the convoy onto an alternate route where militants are waiting. The deception requires no sophisticated hacking, no penetration of military networks. It exploits the gap between what TION measures and what TION can explain about those measurements.
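
The mechanics are just as easy to sketch. The congestion proxy below is invented for illustration and stands in for whatever signal a real planner might ingest; the point is that a counter of handsets cannot tell two hundred spoofed phones in a wagon from a genuine crowd.

```python
# Hypothetical sketch of the spoofing gap, not any real system: a planner
# that treats the number of distinct cellular handsets on a street segment
# as a proxy for crowd density.

def estimated_crowd(handset_ids: set[str]) -> int:
    # One handset is assumed to equal one person. Nothing checks whether
    # those handsets move, call, or cluster the way real pedestrians do.
    return len(handset_ids)

def choose_route(segments: dict[str, set[str]], crowd_threshold: int = 50) -> str:
    # Take the first street segment whose estimated crowd is under the threshold.
    for name, handsets in segments.items():
        if estimated_crowd(handsets) < crowd_threshold:
            return name
    return "hold position"

planned   = {f"sim-{i}" for i in range(12)}       # normal traffic on the planned street
wagon     = {f"spoofed-{i}" for i in range(200)}  # burner phones in a child's wagon
alternate = {f"sim-{i}" for i in range(8)}        # quiet street where the ambush waits

segments = {"planned street": planned | wagon, "alternate street": alternate}
print(choose_route(segments))   # -> "alternate street"
```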

This asymmetry favors adaptable adversaries. They need to understand what patterns trigger algorithmic responses—which they can learn through observation and testing. Operators need to understand why the system prioritizes certain patterns and whether the data feeding those systems are reliable—which requires access to reasoning processes that black-box AI systems don’t provide. Kremlin military planners haven’t reached that level of sophistication just yet. But Russia’s increasingly sophisticated jamming and attempted interference with Ukrainian command systems demonstrate how quickly adversaries can learn to exploit these vulnerabilities.

Then there is the question of after-action review. After the ambush, who bears responsibility? Parker questioned the route but lacked authority. Jacobs followed protocol. The AI performed its optimization function as designed. The structure disperses accountability while concentrating power in systems that resist interrogation.

Current military AI procurement emphasizes capability: what systems can detect, predict, or recommend. Less attention goes to how these capabilities restructure accountability. Junior officers learn that questioning AI outputs marks them as obstacles. The humans supervising AI—the “deficiencies”—occupy positions without authority to challenge algorithmic determinations.

Competitive pressures ensure this dynamic will intensify. Militaries that hesitate to trust algorithmic systems risk being outpaced by adversaries who don’t. But speed without the ability to interrogate reasoning creates its own vulnerabilities. The pressure is toward faster adoption, greater integration, more deference to algorithmic authority—all while the fundamental opacity problem remains unsolved and possibly unsolvable.

This isn’t a problem better engineering can solve. The issue is how command relationships are organized around AI outputs that humans cannot effectively evaluate or challenge. Which raises the question: What happens as these systems become more sophisticated and more deeply embedded in military operations?

The choice isn’t whether to use these systems—it’s whether to reorganize command structures around them or reorganize the systems to remain compatible with human authority. Liptak’s dirty lens scenario multiplies with every AI system integrated into command operations.

Three structural interventions could prevent the next convoy from driving into an ambush, if institutions act before the pattern becomes irreversible. First, the role of human supervisors needs institutional weight that matches their responsibility. If Parker-equivalent positions are staffed with experienced personnel who lack command authority, the oversight function becomes performative. Second, military leadership must recognize that adversarial exploitation of AI systems is accelerating. The wagon-of-phones scenario will multiply as adversaries learn which patterns to spoof and which sensors to manipulate. Opacity compounds this vulnerability. Third, the training pipeline for officers needs to emphasize when to override algorithmic recommendations, not merely when to trust them.

None of these problems has a straightforward solution. Making AI systems more explainable encounters genuine technical limits—neural networks often can’t articulate why they weight certain features over others. Granting human supervisors more authority slows decisionmaking in environments where speed provides advantage. Training officers to question AI recommendations conflicts with operational cultures that reward efficiency and compliance.

But the alternative is institutionalizing a command structure where authority flows from systems that cannot explain themselves, where human judgment is treated as deficiency rather than necessity, and where the fog of war hasn’t been eliminated but relocated into the algorithms that promise clarity.

The dirty lens problem at the heart of “Deficiency Agent” isn’t coming. It is already here. And the humans designated to catch it—the commanders who question algorithmic outputs, the experienced personnel who understand when optimization serves the wrong objective—risk being reclassified as glitches in the system. That reclassification may prove more dangerous than any technical vulnerability. Because when you teach an institution to see human judgment as the deficiency, you’ve guaranteed that nobody with authority will question the algorithm until after the convoy drives into the ambush.

About the Author

Candace Rondeaux is senior director of Future Frontlines and Planetary Politics programs at New America and professor of practice at Arizona State University’s School of Politics and Global Studies. Her recently published book, Putin’s Sledgehammer: The Wagner Group and Russia’s Collapse into Mercenary Chaos, examines the evolution of proxy warfare, mercenary networks, and Russia’s irregular warfare strategy in Ukraine and beyond.

Future Tense Fiction is a partnership between Issues in Science and Technology and the Center for Science and the Imagination at Arizona State University.

Cite this article

Rondeaux, Candace. “The Algorithmic Fog of War.” Future Tense Fiction. Issues in Science and Technology (January 30, 2026).