Despite the ban, the US military is extensively using Palantir's Maven Smart System, which incorporates Anthropic's Claude, in its attacks on Iran.



The Pentagon’s Continued Reliance on Banned Anthropic AI Fuels Iran Offensive Amidst Ethical Storm

Despite a public and explicit ban from the White House and the Department of Defense, the United States military continues to rely heavily on Anthropic’s advanced AI model, Claude, in its ongoing and deadly offensive against Iran. This revelation has ignited a fierce debate about the ethics of artificial intelligence in warfare, the autonomy of military operations, and the intricate, often conflicting, relationship between Silicon Valley’s tech giants and national security interests. The controversy deepened last week when Dario Amodei, CEO of Anthropic, publicly articulated a firm ethical boundary for his company’s models, insisting that they should not be deployed for mass surveillance of American citizens or in the development and operation of lethal autonomous weapons systems. This declaration, rooted in Anthropic’s core philosophy of “Constitutional AI” – an approach designed to align AI behavior with human values and safety principles – was intended to set a global standard for responsible AI deployment.

However, Amodei’s principled stand was met with immediate and profound indignation from high-ranking officials within the Pentagon. Defense Secretary Pete Hegseth responded with a scathing rebuke, accusing Anthropic of overstepping its bounds and attempting to “seize veto power over the operational decisions of the United States military.” Hegseth’s statement underscored the military’s firm belief in its sole authority over national defense strategies, regardless of the ethical frameworks proposed by its technology providers. Following this heated exchange, President Donald Trump swiftly issued a presidential directive ordering all government agencies to “immediately cease” using Anthropic’s technology. At the same time, acknowledging how deeply such tools are embedded, the President announced that Anthropic’s AI would be phased out of all government work over the subsequent six months, a tacit recognition of the logistical complexity of an abrupt withdrawal.

Yet the urgency of the ongoing conflict in Iran appears to have overridden these directives. Despite the official ban, the US military finds itself in a precarious position, struggling to operate effectively without the very technology it has been ordered to abandon. As *The Washington Post* reported, the military’s reliance on Palantir’s Maven Smart System in the Iranian theater of operations remains extensive. Crucially, the Maven system has incorporated Anthropic’s Claude since 2024, forming a core component of its analytical and targeting capabilities. This deep integration means that any immediate cessation of Anthropic’s AI would significantly degrade the military’s operational effectiveness, potentially at a moment of high tension and strategic importance.

The depth of this reliance was further highlighted when the *Wall Street Journal* first revealed that the Pentagon was still using Claude to select attack targets in Iran mere hours after the White House announced the ban. This immediate defiance underscored the stark disconnect between political mandates and battlefield realities. According to sources cited by *WaPo*, the Maven system, powered by Claude, functions as a highly sophisticated targeting engine. It processes vast amounts of intelligence data, analyzes patterns, identifies potential threats, and then “spits out precise location coordinates for missile strikes,” prioritizing them by perceived importance. This capacity for rapid, data-driven targeting is described by military officials as indispensable.

The Maven system’s controversial history extends beyond the current Iranian conflict. It was also reportedly instrumental during the US military’s invasion of Venezuela and the subsequent kidnapping of its president, Nicolás Maduro, showcasing its long-standing and critical role in complex military operations. Navy Admiral Liam Hulin confirmed to *WaPo* that Central Command is “heavily using” the Maven system, emphasizing its pervasive presence across critical military operations. This institutional embeddedness makes a quick pivot away from the technology exceedingly difficult, if not impossible, without significant operational disruption.

Military commanders, speaking anonymously to the newspaper, articulated their resolve to continue utilizing Anthropic’s technology, presidential orders notwithstanding, until a viable and equally effective replacement can be implemented. Their rationale is rooted in pragmatism and a perceived duty to protect American lives. “Whether his morals are right or wrong or whatever,” a source told *WaPo*, referring to Anthropic CEO Dario Amodei, “we’re not going to let [his] decision-making cost a single American life.” This statement encapsulates the core tension: the military views its use of AI as a necessary tool for mission effectiveness and force protection, even if it conflicts with the ethical stances of the technology’s creators or the directives of the Commander-in-Chief. The implication is that the advanced capabilities provided by Claude are so critical that foregoing them would put American personnel at undue risk.

Amidst this controversy, the broader AI industry is also grappling with its role in military applications. Following Amodei’s dramatic falling-out with the Pentagon, OpenAI CEO Sam Altman perceived an opportunity, moving quickly to sign a contract with the Department of Defense. This move, however, proved to be a significant miscalculation, triggering an “enormous and ongoing PR crisis” for OpenAI and driving a soaring number of uninstalls of its flagship product, ChatGPT. The public backlash highlighted a growing unease among users and a segment of the tech community about the direct involvement of AI companies in military operations, particularly those with lethal applications. It suggested that while governments may prioritize national security, the ethical implications for technology companies cannot be ignored by their user base or employees.

The rapid and increasingly sophisticated use of AI in warfare has caught many researchers and ethicists by surprise, raising profound questions about accountability and the nature of modern conflict. A primary concern is the inherent limitations of even the most advanced chatbots, which continue to struggle with fundamental reasoning and are notoriously “haunted by rampant hallucinations.” In scenarios involving life and death, the potential for an AI system to generate incorrect or misleading information – a “hallucination” – could have catastrophic consequences. A misidentified target, an erroneous assessment of intent, or a flawed prediction of battlefield dynamics could lead to unintended escalation, civilian casualties, or friendly fire incidents.

The human cost of the offensive in Iran has been tragically high. Reports indicate that many hundreds of Iranian civilians have been killed, a grim toll that raises urgent questions about the precision and discrimination of AI-assisted targeting. Six American soldiers have also lost their lives in the conflict. While it is impossible to attribute every casualty directly to AI decision-making, the integration of these systems undeniably plays a role in the pace and scope of the conflict.

Paul Scharre, executive vice president at the Center for a New American Security, articulated the paradigm shift underway: “The key paradigm shift is that AI enables the US military to develop targeting packages at machine speed rather than human speed.” This acceleration of the kill chain, while offering tactical advantages, introduces unprecedented risks. Scharre, however, sounded a note of caution, adding, “But AI gets it wrong. We need humans to check the output of generative AI when the stakes are life and death.” This call for robust human oversight highlights the critical need for a “human-in-the-loop” or “human-on-the-loop” approach, ensuring that ultimate moral and operational responsibility remains with human commanders, even as AI provides unprecedented analytical power.

The ongoing conflict in Iran and the Pentagon’s continued reliance on a banned AI serve as a stark case study in the complex and often contradictory realities of integrating cutting-edge technology into the brutal calculus of modern warfare. The outcome will undoubtedly shape future policies on AI ethics, military procurement, and the delicate balance of power between technological innovation and sovereign command.

**More on Anthropic:** Sam Altman Is Realizing He Made a Gigantic Mistake