Anthropic Alleges Industrial-Scale IP Theft by Chinese Labs Amid US App Store Surge


San Francisco-based AI developer Anthropic has publicly accused several prominent Chinese laboratories of orchestrating an industrial-scale campaign to illicitly extract the proprietary capabilities of its Claude large language model. Anthropic specifically named DeepSeek, Moonshot AI, and MiniMax as engaging in intellectual property infringement through a technique known as "distillation." The company detailed that this extraction involved approximately 24,000 fraudulent accounts generating over 16 million interactions with Claude, a clear violation of the platform's terms of service and regional access restrictions.

The alleged theft, disclosed in March 2026 amid escalating geopolitical and ethical tensions in the global artificial intelligence sector, is framed by Anthropic as a national security concern: the company argues that models built via illicit distillation may lack essential safety guardrails against misuse, such as bioweapon development or cyberattacks. Distillation itself is a common and legitimate method for creating smaller, more efficient models from larger ones, but it becomes adversarial when a competitor uses it to rapidly acquire advanced capabilities at a fraction of the independent development cost.
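To make the technique concrete: in its standard form, distillation trains a smaller "student" model to match the temperature-softened output distribution of a larger "teacher". The sketch below is a minimal, generic illustration of that objective in NumPy; it is our own simplification for exposition, not the pipeline of any lab named in this article, and the function names are ours.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Convert logits to a probability distribution, softened by temperature."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this pushes the student to reproduce the teacher's behavior,
    including the relative probabilities it assigns to non-top answers.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from the large model
    q = softmax(student_logits, temperature)  # student's current prediction
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# A student whose logits already match the teacher's incurs (near-)zero loss;
# any mismatch yields a positive penalty that training would then reduce.
teacher = np.array([2.0, 0.5, -1.0])
print(distillation_loss(teacher, teacher))              # ~0.0
print(distillation_loss(np.zeros(3), teacher) > 0.0)    # True
```

The adversarial variant described in the article replaces direct access to teacher logits with mass API querying: the "soft targets" are reconstructed from millions of the larger model's responses, which is why Anthropic points to account volume and interaction counts as evidence.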

Anthropic noted that MiniMax was responsible for the largest operation, accounting for over 13 million of the illicit exchanges, with concentrated efforts across high-value areas including agentic reasoning, coding, and tool use—domains where Claude is recognized as a leader. To bypass Anthropic's commercial access ban within China, the accused labs allegedly employed proxy services to manage the vast networks of fraudulent accounts, a practice that circumvents existing export controls intended to preserve American technological dominance in AI. This pattern mirrors similar concerns previously raised by OpenAI regarding DeepSeek's model training practices in January 2025.

Simultaneously, market dynamics in the United States shifted significantly following a political standoff involving Anthropic and the Pentagon. This controversy ignited after CEO Dario Amodei refused to remove safety guardrails prohibiting Claude's deployment for fully autonomous weapons or domestic mass surveillance, leading President Donald Trump to issue an order for federal agencies to terminate contracts with Anthropic. The resulting public backlash fueled a substantial user migration, as Claude surged past both ChatGPT and Google Gemini to capture the number one position as the most-downloaded free application in the US App Store as of March 1, 2026, setting new daily active user registration records.

In response to the user exodus, OpenAI CEO Sam Altman revised his company's Pentagon agreement to explicitly include safeguards against intentional domestic surveillance and to uphold human responsibility for the use of force. Anthropic, which had meanwhile been designated a supply-chain risk, vowed to legally challenge that designation, a label typically reserved for adversary nations, while maintaining its stance against unconditional military use of its technology. The market reaction underscores public sensitivity to AI ethics: an Anthropic spokesperson reported that free users had grown more than 60% since January, daily signups had tripled since November 2025, and paid subscribers had more than doubled in the year leading up to the App Store peak.


