
The standoff between the U.S. Department of Defense and Anthropic has reached a fever pitch. On Tuesday, February 24, 2026, Secretary of Defense Pete Hegseth issued a final warning to Anthropic CEO Dario Amodei: remove the ethical “guardrails” on the Claude AI model for military use by Friday evening, or face total removal from the federal supply chain.
The 5:01 PM Deadline: ‘Unfettered Access’ or Blacklisting
In a high-stakes meeting at the Pentagon, Hegseth made it clear that the Trump administration will no longer tolerate corporate restrictions on national security technology. Anthropic has until Friday, February 27, at 5:01 PM to comply with demands for “unfettered” military access.
“We are shrugging off any AI models that won’t allow you to fight wars,” Hegseth previously stated in January. “Our AI will not be ‘woke’—it will operate without ideological constraints that limit lawful military applications.”
The Three ‘Nuclear Options’ Facing Anthropic:
If Amodei declines to sign the updated usage policy, the Pentagon has threatened three major punitive measures:
- Supply Chain Risk Designation: Labeling Anthropic a “national security threat,” a move typically reserved for Chinese firms like Huawei, which would effectively blacklist the company from any U.S. government-linked business.
- The Defense Production Act (DPA): Invoking Cold War-era powers to legally compel Anthropic to retrain its models and strip out safety restrictions for military targeting.
- Contract Termination: Immediate cancellation of Anthropic’s $200 million defense contract, which currently allows Claude to operate on the military’s secure internal network, GenAI.mil.
The Red Lines: Autonomous Weapons and Surveillance
The dispute centers on two “red lines” that Anthropic has refused to cross:
- Fully Autonomous Lethal Force: Using AI to select and strike targets without meaningful human oversight.
- Mass Domestic Surveillance: Deploying large language models to analyze American citizens’ communications to “detect pockets of disloyalty.”
Tensions reportedly “exploded” earlier this month following the January 3 U.S. raid in Caracas, Venezuela, which led to the capture of Nicolás Maduro. Reports suggest the military utilized Claude’s data analysis capabilities during the operation without Anthropic’s knowledge, prompting the company to investigate—a move that infuriated Pentagon leadership.
Military AI Landscape: Competitors Closing In
While Anthropic holds out, rival AI companies are moving to fill the void.
| Company | Status with Pentagon (Feb 2026) | Terms Accepted |
| --- | --- | --- |
| xAI (Grok) | Approved for Classified Use | “All Lawful Purposes” |
| OpenAI (ChatGPT) | Approved for Unclassified Use | Signed “Standard Defense Language” |
| Google (Gemini) | In Pilot Phase | “All Lawful Purposes” |
| Anthropic (Claude) | Under Review / Deadline Issued | Resisting “Lethal/Surveillance” use |