The fierce clash between the Pentagon and Anthropic reveals the future of technological power

Piercamillo Falasca
19/02/2026

In the United States, a clash is taking place between the Pentagon and Anthropic over artificial intelligence that says a lot about how AI will be governed in the coming years. The basic question is simple: who really decides what can be done with the most powerful models on the market? The companies that produce them, with their ‘ethical’ and security limits, or the state, when it buys them and integrates them into its own activities?

The case concerns Anthropic, the company behind Claude, one of the main alternatives to ChatGPT. According to various media reports, the Pentagon threatened to treat Anthropic as a ‘supply chain risk’.

The reason is that Anthropic is reportedly unwilling to accept a principle that is as simple as it is demanding: to allow the use of its models for ‘all lawful purposes’ on unclassified systems.

Pentagon and Anthropic: what Washington is asking for on artificial intelligence

In recent months, the Department of Defense has been trying to bring the large AI suppliers onto a common footing. The idea is that, if a use is formally legal, the model should also be usable for military and intelligence activities, without the company imposing additional rules or blocks.

Put like that, the request even sounds trivial: “if it is legal, why not?”.
But this is where things get complicated.

‘Legal’ is not always the same as ‘acceptable’. And above all, artificial intelligence is a generalist technology: the same capability that helps summarise documents or write code can also help to analyse large amounts of data, automate procedures and accelerate operational decisions.

Anthropic’s two red lines

Anthropic reportedly did not say ‘no’ to everything. From what has emerged, however, the company held firm on two conditions.

The first: no use for building fully autonomous weapons, i.e. systems that can select and strike a target without a human being ‘in the loop’ in any meaningful way.

The second: no use for mass surveillance of US citizens.

These are two very specific limits. They are not a declaration of ‘pacifism’ or a rejection of the defence sector in general.
And that is precisely why the reaction attributed to the Pentagon is the most political part of the story.

Why ‘supply chain risk’ is a heavy threat

The term ‘supply chain risk’ is heavier than it might seem. In practice, if a supplier is deemed ‘risky’, every company working with the government or the defence ecosystem may come under pressure to cut off all integration with it.

Otherwise, they risk compromising contracts, certifications and compliance requirements.

Put very simply: the government stops buying from a company deemed risky and, more importantly, makes it expensive for anyone to continue using its products if they want to stay within the federal supply chain.

It is a mechanism that can quickly turn into a kind of industrial ‘stigma’. This is even more true when we are talking about software that is integrated into business tools and processes.

The context: AI is becoming public infrastructure

This story does not unfold in a vacuum. In the United States, AI is steadily entering public administration and the defence sector, including through dedicated versions and controlled environments on unclassified networks.

The idea, for Washington, is that models are a piece of infrastructure: like the cloud, networks, management systems.

When a technology becomes infrastructure, the state tends to behave accordingly. It demands uniform standards, availability, continuity of service. And above all, it reduces tolerance for ‘non-negotiated’ constraints imposed by private parties.

What is really at stake, even in Europe

Looking beyond the details, the Anthropic-Pentagon clash is a very concrete example of a bigger issue: can artificial intelligence companies really control the use of their models once they enter the public and military machine?

If Anthropic gives in, the message for the whole industry is clear: ‘ethical’ limits are negotiable and, faced with a buyer like the state (and its supply chain), companies’ margin of autonomy is narrow.

If Anthropic does not give in, the outcome could be just as instructive, because it would mean accepting the risk of becoming an ‘inconvenient’ supplier in its home market.

In both cases, a precedent is set on who has the final say over the limits of AI when the technology enters the nerve centres of power.

And it is a question Europeans will soon have to ask themselves too: not just ‘what rules do we write?’, but ‘what real levers do we have when those rules meet national security, public procurement and supply chains?’.