Anthropic vs Pentagon: Big Tech's Shift on AI and Warfare

Dario Amodei, the chief executive of Anthropic, and Donald Trump are at the center of a growing conflict that underscores a dramatic shift in Silicon Valley's stance on artificial intelligence and military applications. Less than a decade ago, Google employees successfully protested against military use of AI, but today, Anthropic is embroiled in a legal battle with Trump administration officials, not over whether AI should be used for war, but over how it should be deployed.

Standoff Forces Tech Industry to Grapple with Ethical Boundaries

The feud between Anthropic and the Pentagon escalated three days ago when the AI firm sued the Department of Defense, alleging that the government's decision to blacklist it from federal work violated its First Amendment rights. The months-long standoff has centered on Anthropic's attempts to prohibit its AI model from being used for domestic mass surveillance or fully autonomous lethal weapons, staking out an ethical boundary that other tech companies must now decide whether to cross.

Anthropic has argued that complying with the DoD's demand to allow "any lawful use" of its technology would breach its founding safety principles and open the door to potential abuse. This refusal to remove safety guardrails, and the Pentagon's subsequent retaliation, have highlighted longstanding concerns over AI's role in conflict while revealing how much the industry's stance on military ties has shifted.

"If people are looking for good guys and bad guys, where a good guy is someone who doesn't support war, then they're not going to find that here," said Margaret Mitchell, an AI researcher and chief ethics scientist at Hugging Face.

From Anti-Military Protests to Lucrative Defense Contracts

Several factors have contributed to big tech's newfound embrace of militarism. Alignment with the Trump administration, including shows of fealty from major CEOs, has tied tech firms to the government's desire to expand military capabilities. The administration's vow to overhaul federal agencies using artificial intelligence has specifically signaled opportunities for AI firms to integrate their products into government and military operations, securing revenue for years to come. Additionally, concerns over China's technological advancement and a surge in international defense spending have shifted industry attitudes.

In contrast, just a few years ago, working with the military on potentially harmful technology was seen as a red line for many tech workers. In 2018, thousands of Google employees protested Project Maven, a program to analyze drone footage for the DoD. Over 3,000 workers stated in an open letter, "We believe that Google should not be in the business of war." Google subsequently decided not to renew Project Maven and published policies barring technology that could "cause or directly facilitate injury to people."

However, since then, Google has clamped down on employee activism, removed the 2018 language from its policies, and signed numerous contracts allowing militaries to use its products. In 2024, the tech giant fired over 50 employees for protesting military ties to the Israeli government. Chief executive Sundar Pichai emphasized in a memo that Google is a business, not a place to "fight over disruptive issues or debate politics." This week, Google announced it would provide its Gemini AI to the military for creating AI agents on unclassified projects.

OpenAI has also reversed its stance: it lifted its blanket ban on military access to its models in 2024, and its chief product officer now serves as a lieutenant colonel in the US military's "executive innovation corps." Along with Google, Anthropic, and xAI, OpenAI signed an up-to-$200 million contract with the DoD last year to integrate its technology into military systems. On the day Defense Secretary Pete Hegseth declared Anthropic a supply chain risk, OpenAI secured a deal allowing its tech in classified military systems.

More hawkish companies like Anduril and Palantir have made partnering with the DoD a cornerstone of their businesses, attempting to sway Silicon Valley politics. Palantir, which contracted with military intelligence in Afghanistan in the early 2010s, took over the Project Maven contract after Google dropped it in 2019. Maven is now the classified system used by military personnel to access Anthropic's Claude AI, according to reports.

Anthropic's Complex Stance on AI and Warfare

Despite public praise for its standoff with the Pentagon, Anthropic's co-founder and CEO Dario Amodei has emphasized that the AI company and the government largely share common goals. "Anthropic has much more in common with the Department of War than we have differences," Amodei wrote in a blog post last Thursday.

While the White House has accused Anthropic of being "a radical left, woke company," Amodei's views are far from pacifist. In a January essay, he warned of AI's potential harms, such as creating deadly bioweapons and threats from China, while arguing that companies should arm democratic governments with advanced AI to combat autocratic adversaries. He expressed less concern about AI facilitating warfare than about the technology's reliability and the risk of power consolidating in a small group controlling autonomous drones.

Amodei's essay foreshadowed key issues in the Pentagon fight, including AI as a tool for mass surveillance. He stated that using AI for national defense is acceptable "in all ways except those which would make us more like our autocratic adversaries."

Although Amodei has stuck to Anthropic's red lines, he has repeatedly stated a desire to continue working with the Defense Department. The company's lawsuit reveals extensive military collaboration: Anthropic offers a modified version of Claude, called Claude Gov, with fewer restrictions on handling classified documents, military operations, and threat analysis.

The government has reportedly used Claude for target selection and analysis in bombing campaigns against Iran, a use case Anthropic has not opposed. In his blog post, Amodei claimed Anthropic supports American frontline warfighters and remains committed to providing technology, stating, "We have said to the department of war that we are OK with all use cases, basically 98 or 99% of the use cases they want to do, except for two."