The US Department of Defense is actively seeking alternative artificial intelligence suppliers to replace systems provided by Anthropic, according to a Pentagon official speaking to Bloomberg, marking a significant escalation in the ongoing dispute between the defence establishment and the AI safety-focused company.
The move represents concrete action following weeks of tension over Anthropic’s refusal to provide certain AI capabilities for military applications. Pentagon officials confirmed they are now in discussions with multiple AI providers to fill the gap left by Anthropic’s Claude models, which had been deployed across several defence programmes.
The breakdown centres on fundamental disagreements about acceptable use policies for large language models in military contexts. Anthropic has maintained strict limitations on how its AI systems can be deployed, citing its founding mission to develop safe and beneficial AI. These restrictions have proven incompatible with Pentagon requirements for operational flexibility in intelligence analysis and decision support systems.
According to sources familiar with the matter, the Department of Defense had integrated Anthropic's technology into at least three classified programmes focused on intelligence synthesis and threat assessment. The need to replace these systems has created urgency within defence procurement offices, which are now evaluating alternatives from OpenAI, Google DeepMind, and several smaller specialised defence AI contractors.
Market implications
The Pentagon's decision creates immediate opportunities for Anthropic's competitors in a defence AI market that analysts project will be worth approximately $4.6 billion annually by 2025. OpenAI, which has shown greater willingness to support defence applications, appears well-positioned to capture displaced contracts. The company's existing relationship with defence contractor Palantir provides an established integration pathway.
For Anthropic, the loss represents a strategic trade-off between commercial opportunity and adherence to its stated AI safety principles. The company, which raised $7.3 billion in its most recent funding round, has consistently positioned itself as prioritising responsible AI development over market share. However, the Pentagon relationship represented one of the most lucrative potential revenue streams in the enterprise AI sector.
Smaller defence-focused AI companies including Scale AI, Primer, and Shield AI may also benefit from the Pentagon’s search for alternatives. These firms have built their business models specifically around military and intelligence applications, without the ethical constraints that have limited Anthropic’s defence sector engagement.
Broader policy context
The dispute reflects deeper tensions within the AI industry about appropriate boundaries for military applications of large language models. Whilst companies including Microsoft, Google, and Amazon have established defence partnerships through their cloud computing divisions, the emergence of powerful general-purpose AI systems has created new ethical and strategic questions.
Pentagon officials have expressed frustration with what they characterise as inconsistent policies across AI providers. One defence official, speaking on background, said the department requires reliable partners that can commit to long-term support for critical national security applications without sudden policy changes.
The situation also highlights the strategic vulnerability created when defence systems depend on commercial AI providers with potentially conflicting priorities. Some defence policy experts have renewed calls for greater investment in government-owned AI capabilities, though such programmes would require years to match the sophistication of current commercial systems.
What comes next
The Pentagon’s timeline for replacing Anthropic’s systems remains unclear, though sources indicate procurement officials are working to identify alternatives within the next 90 days. The urgency reflects concerns about maintaining operational capabilities whilst transitioning to new AI providers.
Industry observers will be watching whether other government agencies follow the Pentagon’s lead in reconsidering Anthropic partnerships, and whether the company’s stance influences similar policy decisions at competing AI firms. The outcome may establish important precedents for how commercial AI companies balance profit motives against ethical constraints in the defence sector.
The Pentagon’s willingness to abandon an established AI supplier over policy disagreements signals that defence planners now view reliable access to advanced AI capabilities as a strategic necessity, not a discretionary enhancement.