Anthropic accuses Chinese companies of copying its artificial intelligence
Wed, 25 Feb 2026 at 02:30 PM

AI is no longer just a field of innovation; it has become a strategic battleground. This week, the American startup Anthropic claims to have been the target of a large-scale data mining operation conducted by several Chinese laboratories, with its flagship model in their sights.

Behind these accusations lie technological, economic, and geopolitical stakes: beyond a simple dispute between competing companies, what is at play is control over advanced AI models, against a backdrop of heightened rivalry between Washington and Beijing…

A Large-Scale Distillation Campaign

In a published post, Anthropic accuses three Chinese players, DeepSeek, Moonshot AI, and MiniMax, of having carried out "distillation" attacks against its Claude model.

In concrete terms, distillation is a well-known practice in the AI ecosystem: it consists of training a smaller model, called the "student," using the responses generated by a more powerful model, the "teacher." Used internally by many companies, this method reduces costs and optimizes performance.
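To make the principle concrete, here is a minimal sketch of what response-based distillation looks like in code: a small "student" network is trained to match the softened output distribution of a frozen "teacher." The toy models, the temperature value, and the random prompts below are illustrative assumptions for demonstration, not details drawn from Anthropic's report.

import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 1000        # toy vocabulary size (illustrative)
TEMPERATURE = 2.0   # softens both distributions so the student sees more of the teacher's behavior

# Toy stand-ins for a large teacher model and a much smaller student model.
teacher = nn.Sequential(nn.Embedding(VOCAB, 256), nn.Flatten(), nn.Linear(256 * 8, VOCAB))
student = nn.Sequential(nn.Embedding(VOCAB, 64), nn.Flatten(), nn.Linear(64 * 8, VOCAB))
teacher.eval()  # the teacher is frozen; only the student is updated

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_loss(student_logits, teacher_logits, t=TEMPERATURE):
    # KL divergence between the softened teacher and student distributions.
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_soft_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_soft_student, soft_teacher, reduction="batchmean") * (t * t)

# Random token sequences stand in for the prompts sent to the teacher.
for step in range(100):
    prompts = torch.randint(0, VOCAB, (32, 8))        # batch of 32 sequences of length 8
    with torch.no_grad():
        teacher_logits = teacher(prompts)             # the teacher's "responses"
    student_logits = student(prompts)
    loss = distillation_loss(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

In the legitimate, internal use described above, both models belong to the same organization; the accusation here is that external actors played the role of the student by harvesting the teacher's answers through the public API.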

But what Anthropic is denouncing here is not the principle, but its application. According to the company, these labs allegedly created nearly 24,000 fraudulent accounts and generated more than 16 million interactions with Claude via its public API.

The goal, according to Anthropic, would be to reproduce Claude's advanced reasoning, agentic coding, and sensitive-query handling without incurring the associated research costs.

In detail, MiniMax reportedly accounted for more than 13 million interactions, Moonshot AI for 3.4 million, and DeepSeek for approximately 150,000. According to Anthropic, these volumes reflect an industrial logic of systematic extraction.

Strategic and Security Stakes

This case comes just days after similar accusations made by OpenAI, which also believes that DeepSeek used this technique to draw inspiration from its models.

For Anthropic, the issue goes beyond mere competition. Indeed, American models incorporate safeguards designed to limit certain sensitive uses, particularly in cybersecurity and biology. In the event of unregulated distillation, these protection mechanisms could disappear, while the technical capabilities would remain intact.

With technological tensions escalating between the United States and China, the company cites a risk to national security and announces the deployment of attack pattern detection systems, strengthened account verification, and the sharing of technical indicators with other industry players.

The irony of the situation is not lost on anyone, however: the American AI giants, Anthropic included as of last year, are themselves the target of lawsuits for training their models on copyrighted content. On the one hand, distillation is framed as an attack on innovation; on the other, the massive collection of public data is defended as fair use…
