
Anthropic Accuses 3 Chinese Companies of AI Tool Theft

Anthropic has raised serious allegations against three prominent Chinese artificial intelligence companies—DeepSeek, Moonshot AI, and MiniMax—claiming they extensively used its Claude chatbot to covertly train competing models. The incident adds another layer to the ongoing global debate over where accepted industry practice ends and unethical conduct begins.

In a blog post on Monday, the San Francisco-based company accused the three Chinese firms of violating its usage policies through their interactions with Claude, its flagship AI assistant. “We have detected large-scale efforts by DeepSeek, Moonshot, and MiniMax to illicitly leverage Claude’s features for enhancing their own models,” the company asserted. “These entities engaged in over 16 million interactions with Claude via about 24,000 fraudulent accounts, breaching our terms of service and regional access limitations.”

Anthropic claims that the Chinese firms employed a technique called “distillation,” in which one model is trained on the outputs of another, typically more advanced, system. The alleged campaigns reportedly targeted the capabilities that distinguish Claude, such as complex reasoning, coding assistance, and tool use.
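To make the technique concrete: in its simplest form, distillation trains a "student" model to reproduce the probability distributions a "teacher" model assigns to inputs. The sketch below is a minimal, generic illustration of that idea, assuming a toy linear teacher and student; it is not based on any system or method described in Anthropic's post.

```python
import numpy as np

def softmax(z):
    """Convert logits to probabilities, shifted for numerical stability."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical "teacher": a fixed linear classifier standing in for a
# large proprietary model whose outputs the student can query.
rng = np.random.default_rng(0)
W_teacher = rng.normal(size=(4, 3))

def teacher_logits(x):
    return x @ W_teacher

# Student: starts from scratch and sees only the teacher's outputs,
# never the teacher's weights or original training data.
W_student = np.zeros((4, 3))

def distill_step(x, lr=0.5, temperature=2.0):
    """One gradient step pulling the student toward the teacher's
    temperature-softened output distribution."""
    global W_student
    p = softmax(teacher_logits(x) / temperature)   # teacher "soft labels"
    q = softmax((x @ W_student) / temperature)     # student predictions
    # Gradient of cross-entropy(p, q) w.r.t. student logits is (q - p)/T.
    grad = x.T @ ((q - p) / temperature) / len(x)
    W_student -= lr * grad

# Querying the teacher on many inputs and fitting the student to the
# replies is the whole pipeline, which is why API access alone suffices.
x = rng.normal(size=(256, 4))
for _ in range(2000):
    distill_step(x)
```

After training, the student's predicted distributions closely track the teacher's on the queried inputs, even though the student never saw the teacher's parameters. That is the property that makes large-scale API querying attractive as a training shortcut, and why providers treat it as a terms-of-service issue.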

While Anthropic acknowledges that distillation is a “common and legitimate training technique,” it contends that the Chinese companies’ approach might have been intended for “illicit purposes.” The company emphasized that using extensive networks of fake accounts to mimic a competitor’s proprietary model not only contravenes its terms of service but also challenges U.S. export controls intended to limit China’s access to leading-edge AI technology. Anthropic has called for “rapid, coordinated action among industry stakeholders, policymakers, and the global AI community.”

On a related note, Anthropic itself recently faced a copyright lawsuit from thousands of authors over its bulk downloading of books from shadow libraries to train its AI models rather than purchasing and scanning them. In a significant resolution, Anthropic settled that lawsuit for $1.5 billion in September 2025, compensating authors roughly $3,000 per book across approximately 500,000 titles.

How the Chinese Firms Are Accused of Doing It

The company asserts that the three labs circumvented geofencing and commercial restrictions barring Claude’s use in China by employing proxy services that resell access to major Western AI systems. According to Anthropic, one such “hydra cluster” operated tens of thousands of accounts simultaneously, distributing requests across different API keys and cloud providers.

After establishing these accounts, the labs reportedly scripted long, token-heavy conversations aimed at eliciting detailed, step-by-step responses to be integrated back into their own systems as training data. Anthropic described the outcome as an off-the-books pipeline that turned Claude into an unintentional instructor for models being developed within China’s rapidly evolving AI landscape.

While Anthropic has yet to announce specific legal actions against the three companies, it has indicated that it has severed known access points and is urging the U.S. government to tighten export regulations on advanced chips and AI services to prevent similar incidents in the future.

‘How the Turn Tables’

Rather than the sympathy it may have anticipated, Anthropic’s disclosure drew skepticism, and the reaction among industry observers has been notably critical. Commentators pointed out that Anthropic has itself faced significant accusations over its data collection practices beyond the authors’ copyright case, including a separate lawsuit over the unauthorized scraping of Reddit content. One commenter quipped, “How the turn tables,” a reference to a popular meme from the television show The Office.

The underlying critique highlights an ongoing struggle over who establishes the standards for an industry heavily reliant on repurposing human output. U.S. companies like Anthropic and OpenAI are increasingly advocating for stringent actions against foreign competitors they accuse of copying proprietary technologies, while simultaneously defending their extensive data collection efforts under the premise of fair use.

Many Chinese labs, which often release more open-source models, are competing to close the performance gap with Western counterparts by leveraging any legal advantages they can find. As Washington continues to consider stricter regulations on exporting AI chips and cloud solutions to China, Anthropic’s accusations may intensify demands for new safeguards, providing critics an opportunity to highlight the troubling paradox inherent in contemporary AI practices.

For this story, Fortune reporters utilized generative AI as a research tool, with an editor verifying the information’s accuracy before publication.
