Anthropic rejects a Pentagon ultimatum on the military use of its AI Claude
Fri, 27 Feb 2026


In the United States, Anthropic has just opened an unprecedented standoff with the Department of Defense, demonstrating that artificial intelligence is now at the heart of geopolitical decisions.

Indeed, as reported by Le Figaro, the Trump administration is demanding unrestricted use of Claude, Anthropic's AI model, by the Pentagon. The company has publicly rejected the request, citing principles it considers non-negotiable…

A Pentagon ultimatum under legal pressure

Secretary of Defense Pete Hegseth had given Anthropic CEO Dario Amodei a clear ultimatum: remove all limitations on the U.S. military's use of Claude. Failing that, Washington was considering invoking a 1950 law, the Defense Production Act, a Cold War relic that allows the government to compel a private company to produce goods for national defense.

The Department of Defense has asked all of its AI suppliers to remove default usage restrictions in order to broaden the scope of applications, so long as those applications remain within the legal framework. All are said to have agreed, including Anthropic… with two exceptions.

These two “red lines” concern mass surveillance of American citizens and the deployment of fully autonomous weapons capable of operating without final human supervision, uses the startup categorically refuses to authorize.

Faced with threats of sanctions, including possible placement on a blacklist of companies deemed a risk to strategic supplies (a list that already includes Huawei and Kaspersky), Anthropic is holding its position…

Claude already integrated into defense systems

While Anthropic does not intend to cooperate on this point, its refusal is not a blanket rejection of collaboration with the federal government. The company notes that Claude is already deployed on classified U.S. government networks and in national laboratories. According to Dario Amodei, Anthropic's AI is already used for missions such as intelligence analysis, modeling, simulation, operational planning, and cyber operations; the company even claims to have pioneered the integration of AI models into sensitive U.S. infrastructure.

Since its inception, Anthropic has championed a security-centric, ethically aligned approach. At the beginning of the year, it even published a "constitution" detailing the principles guiding Claude's behavior, with the aim of preventing dangerous uses.

In his statement, Dario Amodei emphasizes that while AI can help defend democracies, it can also, in certain specific cases, weaken their foundations. It is a rare stance at this strategic level, one that illustrates the tension between technological sovereignty, security imperatives, and the responsibility of private AI actors…
