AI Warfare's Deadly Cost: The Hidden Role of Defense Contractors in Modern Conflict

The Rise of AI Warfare: A New Era of Chosen Blindness

In modern conflicts, from Gaza to Iran, a disturbing pattern has emerged: precision weapons, deliberate ignorance, and the tragic loss of innocent lives, particularly children. The failure to regulate artificial intelligence in warfare is exacting a devastating toll, with AI systems now playing a central role in military targeting decisions. This shift represents not just a technological advance but a systemic evasion of accountability, in which the companies behind these systems function as defense contractors in all but name while hiding behind their models.

The Fog Procedure: From Human Soldiers to Algorithmic Systems

Historically, the Israeli military employed a strategy known as the "fog procedure," where soldiers in low-visibility conditions would fire into the darkness, assuming threats lurked unseen. This logic of chosen blindness has been refined and automated in the age of AI warfare. In Gaza, AI systems processed billions of data points to rank individuals based on their likelihood of being combatants, generating target lists with minimal human oversight. The darkness is no longer a terrain condition but a design feature, creating deniability and shifting responsibility from people to procedures.

Case Studies: Gaza and Iran – Laboratories for AI Targeting

Israel's recent war in Gaza has been dubbed the first major "AI war," with systems producing over 37,000 targets in the initial weeks. Human operators spent an average of 20 seconds verifying each target, often only confirming that the target was male, before approving a strike. This rapid pace contributed to high civilian casualties; reports indicate that named militants accounted for only about 17% of more than 53,000 deaths in Gaza, suggesting the remaining 83% were civilians. The IDF has disputed these figures but has not specified which are inaccurate.


In Iran, a strike on the Shajareh Tayyebeh elementary school in Minab killed at least 168 people, mostly children aged seven to twelve. The munitions were precise, but the intelligence failure was stark: the building had been repurposed for civilian use nearly a decade earlier, yet the targeting databases were never updated. AI systems, including those from Palantir, were used to generate and prioritize targets at unprecedented speed, enabling the U.S. military to strike 1,000 targets in the first 24 hours of the campaign. Whether or not AI directly selected the school, the system's design facilitated exactly this kind of error.

The Companies Behind the Algorithms: Defense Contractors in Disguise

Major AI firms are deeply integrated into military operations, yet they often avoid the label of defense contractors. Palantir, which received early CIA funding, supplies AI infrastructure to the U.S. military, drawing on technologies such as Anthropic's Claude. When Anthropic resisted removing ethical constraints for targeting, the Pentagon turned to OpenAI, which quietly lifted its ban on military use in 2024. Google and Amazon hold Project Nimbus, a contract with the Israeli government worth more than $1 billion, while Microsoft maintained deep ties before partially withdrawing in 2024.

Anduril builds autonomous weapons designed for lethal targeting, and venture capital firms such as Andreessen Horowitz and Founders Fund back these companies while cultivating political influence. These firms spend millions on lobbying (Palantir outspent Northrop Grumman in one quarter of 2023) and blur the lines between commercial and defense products to evade regulation.

International Law and Accountability: A System in Crisis

International humanitarian law requires careful verification of targets and protection of civilians, but AI targeting undermines both principles. Systems generate targets inferentially, without individual human assessment, and verification times measured in seconds reduce human judgment to rubber-stamping. Accountability collapses as reasoning disappears into probability scores, and companies like Palantir operate outside legal frameworks designed for states.


The EU AI Act exempts military applications, deferring instead to international humanitarian law, the very body of law these systems are eroding. In the U.S., policies such as the 2025 National Defense Authorization Act promote AI adoption without regulation, framing it as a strategic race. This regulatory inaction is deliberate, maintained by lobbying and by a culture that treats AI as consumer technology rather than as a core component of warfare.

Pathways to Regulation: Salvaging Accountability in AI Warfare

Despite the challenges, opportunities for regulation exist. The EU can apply export controls and attach procurement conditions to dual-use systems. International courts such as the ICJ can clarify states' obligations, while companies can be held liable in national jurisdictions that enforce international law. AI firms depend on government contracts and infrastructure, giving states leverage to demand explainable systems, cumulative assessments of civilian harm, and liability that extends up the supply chain.

Without action, the fog procedure will define future conflicts, with companies operating safely from places like Palo Alto, their executives facing no personal risk or legal exposure. The cost of inaction is already too high, as seen in rows of small coffins from Minab to Gaza. Regulating AI warfare is not just a moral imperative but a necessity to prevent further tragedies and uphold the laws of war in an algorithmic age.