Anthropic Court Filing Contradicts Pentagon Security Claims


Anthropic has filed sworn court declarations revealing the Pentagon told the AI company their positions were “nearly aligned” just one week after the Trump administration terminated their contract on national security grounds. The disclosure directly contradicts the government’s stated justification for ending the relationship.

The court filing, submitted as part of Anthropic’s legal challenge to the contract termination, includes evidence that Pentagon officials reported substantive progress towards resolving outstanding issues as recently as late February, according to TechCrunch AI. This timeline creates a significant credibility problem for the administration’s position that Anthropic posed unacceptable security risks requiring immediate contract cancellation.

The dispute centres on a defence contract that had positioned Anthropic as a key AI provider to the US military. In early March, President Trump publicly declared the relationship “kaput,” with administration officials citing concerns about the company’s security protocols and data handling practices. The Pentagon subsequently issued formal statements characterising Anthropic’s approach as incompatible with national security requirements.

Anthropic’s legal team has now presented email correspondence, meeting records, and witness statements indicating Pentagon technical staff were actively working through implementation details and expressing satisfaction with the company’s proposed security framework. One declaration reportedly describes a 28 February meeting where defence officials outlined a path forward that would address remaining concerns within weeks, not months.

The contradiction suggests the contract termination may have been driven by political considerations rather than technical security assessments. Legal experts note that if Anthropic can demonstrate the Pentagon’s security concerns were pretextual, the company strengthens its case for wrongful termination and potential damages.

The business implications extend well beyond this single contract. Anthropic’s ability to secure government work directly affects its competitive position against OpenAI and Google, both of which maintain substantial public sector relationships. A prolonged legal battle or reputational damage from “security risk” characterisations could hamper Anthropic’s efforts to win contracts with defence allies and security-conscious enterprise clients.

For the Pentagon, the filing creates procurement credibility risks. If courts determine the termination lacked legitimate security justification, future contract decisions may face heightened judicial scrutiny. Defence contractors across sectors will be monitoring whether political considerations can override technical assessments in contract administration.

The broader AI industry faces regulatory uncertainty. Companies developing frontier models must navigate an increasingly complex landscape where technical capabilities, security protocols, and political positioning all influence government relationships. The case may establish precedents for how AI companies can challenge adverse government actions and what evidence standards apply to national security determinations.

Anthropic has not disclosed the contract’s value, but defence AI procurement typically ranges from tens to hundreds of millions of dollars over multi-year periods. The company raised $7.3 billion in funding rounds through 2024, with Amazon and Google among major investors. Loss of defence revenue would be material but not existential for the well-capitalised startup.

The timing proves particularly sensitive as the US government finalises its AI strategy for defence applications. Multiple agencies are establishing frameworks for AI procurement, and this dispute will likely influence how those frameworks balance security requirements against industry engagement.

The case now moves to discovery, where both sides will be compelled to produce internal communications and decision-making records. Legal observers expect Anthropic to seek depositions from Pentagon officials who participated in the February discussions, potentially forcing testimony about the disconnect between technical assessments and termination decisions.

The court will need to determine whether national security deference to executive branch decisions outweighs evidence of contradictory government statements. That balance will shape not only Anthropic’s immediate prospects but the broader framework for AI companies seeking to challenge adverse government actions on evidentiary grounds rather than accepting security determinations as unreviewable.