(Reuters) - Three Chinese companies used Claude to improperly obtain capabilities to improve their own models, the chatbot's creator Anthropic said in a blog post on Monday, following a similar disclosure by OpenAI earlier this month.
OpenAI had warned US lawmakers that Chinese artificial intelligence startup DeepSeek is targeting the ChatGPT maker and the nation's leading AI companies to replicate models and use them for its own training, a memo seen by Reuters showed.
DeepSeek, Moonshot and MiniMax created more than 16 million interactions with Claude using roughly 24,000 fake accounts, in violation of Anthropic's terms of service and regional access restrictions, the company said.
They used a technique called "distillation," in which a newer model is trained on the outputs of an older, more established and powerful AI model, effectively transferring the older model's learnings, Anthropic said.
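To illustrate the mechanics of distillation described above, here is a minimal toy sketch (purely illustrative; it does not reflect Anthropic's systems or any company's actual pipeline). A "teacher" stand-in labels a batch of inputs, and a small "student" is then fitted to imitate those outputs, transferring the teacher's behavior without any access to its internals:

```python
def teacher(x: float) -> float:
    """Stand-in for a large, established model's output (hypothetical)."""
    return 3.0 * x + 1.0

def train_student(samples, lr=0.05, epochs=500):
    """Fit a tiny linear student y = w*x + b to the teacher's outputs
    via stochastic gradient descent on squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

# Step 1: query the teacher at scale to build a synthetic training set.
prompts = [i / 10 for i in range(-10, 11)]
dataset = [(x, teacher(x)) for x in prompts]

# Step 2: train the student on the teacher's answers; it converges
# toward the teacher's behavior (w near 3, b near 1).
w, b = train_student(dataset)
print(round(w, 2), round(b, 2))
```

The scale difference is the point: in the reported campaigns the "queries" numbered in the millions, because a student model needs a large corpus of teacher outputs to absorb broad capabilities.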
"These campaigns are growing in intensity and sophistication," the company wrote. "The window to act is narrow, and the threat extends beyond any single company or region."
DeepSeek's operation targeted reasoning capabilities across diverse tasks and the creation of censorship-safe alternatives to policy-sensitive queries, while Moonshot aimed at agentic reasoning and tool use, as well as coding and data analysis.
MiniMax targeted agentic coding, tool use and orchestration; Anthropic said it detected that campaign while it was still active, before MiniMax released the model it was training.
"When we released a new model during MiniMax's active campaign, they pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system," the blog post said.
DeepSeek, Moonshot and MiniMax did not immediately respond to requests for comment.