Anthropic alleges a massive distillation scheme by Chinese AI firms using Claude
- In Reports
- 04:30 PM, Feb 24, 2026
- Myind Staff
AI company Anthropic has accused three Chinese artificial intelligence labs (DeepSeek, Moonshot AI, and MiniMax) of creating more than 24,000 fake accounts to secretly access its Claude AI model and use it to improve their own systems.
According to Anthropic, these companies generated more than 16 million exchanges with its AI model, Claude, through those accounts. The technique allegedly used is called “distillation,” a common AI training method. Anthropic said the companies “targeted Claude’s most differentiated capabilities: agentic reasoning, tool use, and coding.”
Distillation is usually used by AI labs on their own systems to build smaller, cheaper versions of large models. However, a competitor can misuse the process by querying another company's model at scale, collecting its outputs, and training its own system on that data. The practice has raised serious concerns in the AI industry.
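For readers unfamiliar with the technique, the sketch below shows the classic form of knowledge distillation in a PyTorch-style setup: a small "student" model is trained to match the softened output distribution of a larger "teacher". This is purely illustrative and is not a description of Anthropic's systems or of any of the named companies' pipelines; the model sizes, temperature, and training loop are all assumptions. When distillation happens through an API, raw logits are generally unavailable, so the student is instead fine-tuned on the text the queried model generates.

```python
# Minimal sketch of knowledge distillation (illustrative only; not any company's actual pipeline).
# A small "student" model learns to imitate the softened output distribution of a larger "teacher".
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between the teacher's and student's temperature-softened distributions."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * temperature ** 2

# Hypothetical tiny models standing in for a large teacher and a smaller student.
teacher = nn.Linear(128, 1000)   # stands in for the large, capable model
student = nn.Linear(128, 1000)   # smaller model being trained to imitate it
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

inputs = torch.randn(32, 128)    # a batch of made-up input features
with torch.no_grad():
    teacher_logits = teacher(inputs)   # teacher outputs are only collected, never trained

loss = distillation_loss(student(inputs), teacher_logits)
loss.backward()
optimizer.step()
```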
The accusations come at a time when there is strong debate in the United States over export controls on advanced AI chips. These restrictions were designed to slow down China’s AI development. Earlier this month, OpenAI sent a memo to U.S. House lawmakers accusing DeepSeek of also using distillation techniques to mimic its products.
DeepSeek first gained attention about a year ago when it released its open-source R1 reasoning model. That model nearly matched the performance of models from leading American AI labs but was developed at a much lower cost. The company is now expected to release DeepSeek V4, its latest model, which reportedly can outperform Anthropic's Claude and OpenAI's ChatGPT in coding tasks.
Anthropic provided details about the scale of activity it observed from each company. It said it tracked more than 150,000 exchanges from DeepSeek that appeared to focus on improving foundational logic and alignment, reportedly with the aim of developing censorship-safe alternative responses to policy-sensitive queries.
Moonshot AI allegedly carried out more than 3.4 million exchanges with Claude. These were focused on improving agentic reasoning, tool use, coding, data analysis, computer-use agent development, and computer vision. Last month, Moonshot released a new open-source model called Kimi K2.5 along with a coding agent.
MiniMax was linked to the largest volume of activity, roughly 13 million exchanges. Anthropic said these targeted agentic coding, tool use, and orchestration, and claimed it observed MiniMax redirect nearly half of its traffic toward extracting capabilities from the latest Claude model when that model launched.
Anthropic said it will continue to strengthen its defences to make distillation attacks harder to carry out and easier to detect. However, the company also called for broader cooperation. It urged “a coordinated response across the AI industry, cloud providers, and policymakers.”
The issue has also added fuel to ongoing discussions about U.S. export controls on advanced AI chips. Last month, the Trump administration formally allowed American companies such as Nvidia to export advanced AI chips, including the H200, to China. Critics argue that easing these export controls could significantly increase China’s AI computing power at a crucial time in the global AI race.
Anthropic stated that the scale of extraction carried out by DeepSeek, MiniMax, and Moonshot “requires access to advanced chips.” The company added, “Distillation attacks therefore reinforce the rationale for export controls: restricted chip access limits both direct model training and the scale of illicit distillation,” according to its blog.
Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, told TechCrunch he was not surprised by the allegations. “It’s been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of U.S. frontier models. Now we know this for a fact,” Alperovitch said. He added, “This should give us even more compelling reasons to refuse to sell any AI chips to any of these [companies], which would only advantage them further.”
Anthropic also warned that distillation may create national security risks. The company said, “Anthropic and other U.S. companies build systems that prevent state and non-state actors from using AI to, for example, develop bioweapons or carry out malicious cyber activities.” It further stated, “Models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely.”
The company also pointed to the risk of authoritarian governments using advanced AI systems for “offensive cyber operations, disinformation campaigns, and mass surveillance.” It warned that these risks increase further if such models are released as open source.
TechCrunch reported that it has reached out to DeepSeek, MiniMax, and Moonshot for comment.