DeepSeek, Moonshot AI, and MiniMax Called Out for "Illegal Extraction": Did They Do Anything Wrong? | 电厂
Sina Finance · 2026-02-25 10:47
Core Viewpoint
- Anthropic has accused three Chinese AI companies (DeepSeek, Moonshot AI, and MiniMax) of illicitly extracting data from its Claude model, marking the second controversy involving domestic models within three months [1][9]

Group 1: Allegations and Responses
- Anthropic claims the three companies used approximately 24,000 fraudulent accounts to interact with Claude more than 16 million times, using these interactions to enhance their own models [1][4]
- The accused companies have remained silent: DeepSeek, MiniMax, and Moonshot have made no public response [1]
- Anthropic's statement highlighted that the interaction patterns with Claude were abnormal, indicating intentional extraction of Claude's unique capabilities [7]

Group 2: Technical Aspects of Distillation
- The technique in question is "distillation," in which a model learns from a "teacher model" such as Claude by interacting with it [4][6]
- Distillation is a common method for rapidly evolving models, enabling smaller models to approximate the performance of larger ones with less data [6]
- Major AI companies, including OpenAI and Google, prohibit distillation in their usage agreements, reflecting growing concern over intellectual property [9]

Group 3: Legal and Ethical Considerations
- The debate over model distillation raises open legal questions spanning contract law, copyright law, and unfair competition [10]
- Both Chinese and American companies train on vast amounts of internet data, prompting discussion of authorization and the ethical use of such data [10]
- The narrative of "Chinese companies distilling American models" has become a one-sided discourse, with the potential for a prolonged public relations battle [10]

Group 4: Open Source vs. Closed Source Models
- Many leading Chinese models operate under open-source licenses that permit distillation, in contrast with closed-source models that prohibit such practices [10][13]
- For instance, DeepSeek's models are released under the MIT license, allowing academic and commercial use, while models such as MiniMax and Qwen3 use the Apache 2.0 license [10]
- The distillation controversy also highlights the ongoing debate between open-source and closed-source development paths in the AI industry [13]
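The distillation workflow described above (querying a stronger "teacher" model and training a student on the recorded answers) can be sketched in a few lines. This is purely illustrative: `toy_teacher` is a made-up stand-in function, not any real model API, and the prompts are invented.

```python
# Minimal sketch of API-style distillation data collection:
# record a teacher model's answers as (prompt, response) training pairs.
# `toy_teacher` is a hypothetical stand-in for a real model endpoint.

def toy_teacher(prompt: str) -> str:
    # Pretend the teacher answers every prompt with a canned response.
    return f"answer to: {prompt}"

def collect_distillation_pairs(prompts):
    """Query the teacher once per prompt and record the transcript."""
    return [{"prompt": p, "response": toy_teacher(p)} for p in prompts]

pairs = collect_distillation_pairs(["what is 2+2?", "define recursion"])
# Each pair would later serve as a supervised fine-tuning example
# for the student model.
```

In practice the controversial step is exactly this collection phase: at the scale alleged (millions of interactions), the transcripts amount to a curated training corpus of the teacher's behavior.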
Hit by AI Threats and a Pessimistic Revenue Outlook, Workday Falls About 10% in Premarket Trading
Sina Finance · 2026-02-25 10:29
Core Viewpoint
- Workday's stock fell approximately 10% on increased macroeconomic uncertainty and a pessimistic revenue forecast, reflecting broader software-sector concern about enterprise spending cuts [1]

Group 1: Financial Performance and Forecast
- Workday projects fiscal 2027 subscription revenue of $9.93-9.95 billion, below analyst expectations of around $10 billion [1]
- The stock is down about 40% year-to-date, exacerbated by fears that automation could erode traditional software revenue streams [1]

Group 2: Market Dynamics and Competition
- The software sector has sold off broadly since AI startup Anthropic launched new enterprise-level tools, raising investor concerns [1]
- Piper Sandler analysts said the guidance is unlikely to allay general investor worries about application-layer companies in the current environment of heightened scrutiny [1]

Group 3: Leadership and Strategic Focus
- Workday co-founder Aneel Bhusri has resumed the CEO role after stepping down in 2024, while continuing as chairman [2]
- On the earnings call, Bhusri downplayed the notion that AI will replace traditional software [2]

Group 4: Sales Cycle and Market Challenges
- Workday reported elongated sales cycles, particularly in government, education, healthcare, and certain commercial markets, delaying large enterprise transactions [1]
- Despite the delays, most projects are still progressing, and some closed ahead of schedule in the first quarter [1]
Computer Industry Weekly: LLaDA2.1 Achieves a Technical Breakthrough, Gemini 3.1 Pro Sets a New Multimodal Standard - 20260225
Huaxin Securities · 2026-02-25 10:25
February 25, 2026 | Rating: Recommend (maintained)
Analyst: Ren Chunyang (S1050521110006, rency@cfsc.com.cn)

Relative sector performance (%):
                  1M     3M    12M
Computers (SWS)  -5.4    5.5    3.4
CSI 300           0.7    5.5   20.6

[Market performance chart: Computers vs. CSI 300; source: Wind, Huaxin Securities Research]

Related research:
1. "Computer Industry Weekly: ByteDance Seedance 2.0 Launches; Claude Opus 4.6 Released," 2026-02-10
2. "Computer Industry Note: Amazon (AMZN.O): AI Infrastructure and Retail Network in Resonance; Capex Cycle Drives Long-Term Growth," 2026-02-08
3. "Computer Industry Note: Apple (AAPL.O): Revenue and Profit Both Grow; iPhone and Services Shine with Record Highs," 2026-02-05

▌ Compute: Compute rental prices hold steady; diffusion language model LLaDA2.1 achieves a technical breakthrough
In February 2026, the LLaDA2.1 diffusion language model was officially released, including 16-billion- ...
Physicists in Danger? Anthropic Co-founder: AI Is Awakening and Will Write Fields-Medal-Level Papers Within 2-3 Years
36Kr · 2026-02-25 10:23
Particle physics has gone a decade without a new discovery, and the LHC has become a "graveyard of the Standard Model." Yet Anthropic co-founder and Harvard physics PhD Jared Kaplan asserts that within 2-3 years, AI will be able to write papers rivaling those of top physicists, and 50% of physicists may be replaced outright!

The physics world and the tech industry are shaken!

Anthropic co-founder and former physics heavyweight Jared Kaplan declares: within two to three years, there is a 50% chance that theoretical physicists will be replaced by AI!

Bear in mind that he earned his physics PhD from Harvard, is a professor of theoretical physics at JHU, and serves as Anthropic's chief science officer. He is an expert in both AI and theoretical physics, so his judgment is not idle talk.

Citing internal research and model progress, Kaplan argues that within the next 2-3 years, AI's capabilities in core research activities such as theoretical derivation, numerical simulation, formula discovery, and experimental design will approach or even surpass those of many human researchers. He estimates that at least 50% of physicists' work carries a clear risk of being replaced or marginalized by AI.

Replacing 50% of theoretical physicists, Fields medalists not excepted

Since the discovery of the Higgs boson (the "God particle") in 2012, experimental data from the Large Hadron Collider (LHC) has strictly matched the predictions of the existing Standard Model, with no unexpected new particles or new physics found.

The drama did not come from the Higgs itself; by the time it appeared at the LHC, its existence had already ...
Stop Pasting Code with One Click: Anthropic Names 3 Real Ways to Use AI Without Skill Decay
36Kr · 2026-02-25 10:23
In early 2026, Anthropic research exposed a potential risk that AI-assisted programming poses to skill learning: developers who completed programming tasks with an AI assistant lagged significantly behind peers who solved problems independently in conceptual understanding, code reading, and debugging ability.

Skill decay: AI-assisted programming makes it hard to build debugging skills

As AI coding assistants grow ever more widespread, productivity in software engineering has risen markedly. But at what cost?

Suppose you are a programmer about to build with a new library. Previously, when you hit a problem, you could only turn to web search engines and documentation; now you can access an AI coding assistant based on GPT-4o. Which do you think better helps you master the library?

In this study, participants had to learn Trio, a niche Python library for asynchronous programming; every participant was using the library for the first time. Participants were randomly split into two groups: one learned using only search, the other only through Q&A with a large language model.

Figure 1: Experimental design

Contrary to popular belief, AI assistance did not significantly shorten task completion time (Figure 2, left). Although the AI assistant could directly generate complete, correct code solutions, the experimental group's average completion time was not significantly better than the control group's.

Figure 2: Programming speed and skill-assessment scores with and without AI

Why? A breakdown revealed huge differences in how participants used the AI: some delegated code generation entirely to the AI and did indeed become much faster; others spent large amounts of time interacting with the AI ...
Tech Giants Battle for the Indian Market as Silicon Valley Billionaires Ramp Up Political Influence in California
Sohu Finance · 2026-02-25 10:18
Group 1
- India is positioning itself to become the world's third-largest AI power, following the US and China, as emphasized by Prime Minister Narendra Modi at the AI Impact Summit [3][4]
- Modi advocates preventing AI monopolies and promoting shared, open-source technology to benefit the world, focusing on applications that can enhance the prospects of India's 1.45 billion people [3][4]
- Major tech companies are investing heavily in India: Google announced $15 billion for data centers and undersea cables, Microsoft committed $17.5 billion, and Amazon plans to invest $35 billion [3][4]

Group 2
- India's large online population, with approximately 1.4 billion people holding digital identities and over 700 million having digital health accounts, presents substantial opportunities for AI companies [4][5]
- The US government is strengthening tech ties with India through agreements like the Silicon Valley Accord, drawing India away from China amid geopolitical tensions [5][6]
- Silicon Valley billionaires are increasingly influencing California politics, contributing millions to various political campaigns and seeking new allies as Governor Gavin Newsom approaches term limits [7][8]
The Pentagon Gives Anthropic an Ultimatum: Agree to the Terms by Friday or the Contract Is Terminated
Sina Finance · 2026-02-25 10:16
Responsible editor: Guo Mingyu

Secretary of War Pete Hegseth threatened that the Pentagon may invoke the Defense Production Act to force Anthropic to drop its safety restrictions, or else designate it a supply-chain-risk entity.

According to a person familiar with the talks between Secretary of War Pete Hegseth and Anthropic CEO Dario Amodei, Hegseth threatened on Tuesday that if Anthropic does not voluntarily accept the relevant terms by Friday, the Pentagon will invoke the Defense Production Act to compel it to allow the military to use its AI models for any lawful purpose.

The person said Hegseth also threatened to designate Anthropic a supply-chain-risk entity, meaning that neither the Pentagon nor any contractor serving the military could use its models.
Anthropic Accuses Chinese AI of Distillation, and Musk and the Entire Internet Are Laughing
Sohu Finance · 2026-02-25 10:13
On February 23, 2026, U.S. AI company Anthropic published a sternly worded statement on its official blog, accusing three Chinese AI labs, DeepSeek, Moonshot AI, and MiniMax, of mounting an "industrial-scale distillation attack" on its Claude models.

According to Anthropic, the three companies used roughly 24,000 fake accounts to conduct more than 16 million conversational interactions with Claude, intending to extract Claude's core capabilities to train their own models. The statement's wording was severe, elevating the matter to the level of "national security" and claiming that the distilled models could be used by "authoritarian governments" for cyberattacks, disinformation, military purposes, and mass surveillance.

First, what exactly is "distillation"? In machine learning, knowledge distillation was first proposed by Hinton et al. in 2015. The original idea is for a smaller "student model" to acquire capability by learning from the outputs of a larger, stronger "teacher model." Put simply: you ask a smart AI a pile of questions, record its answers, and then use those answers to train another AI.

Figure: related paper (source: arXiv)

This method is extremely common in the industry, and Anthropic ...
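Hinton-style knowledge distillation, as described above, trains the student to match the teacher's temperature-softened output distribution, typically via a KL-divergence loss. A minimal pure-Python sketch of that loss follows; the tiny logit vectors are made up for illustration, and no real model is involved.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from student predictions to teacher soft targets,
    the core objective in Hinton et al. (2015)."""
    p = softmax(teacher_logits, temperature)  # teacher soft targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Made-up logits: a student close to the teacher incurs a smaller loss.
teacher = [3.0, 1.0, 0.2]
close_student = [2.9, 1.1, 0.3]
far_student = [0.1, 0.1, 3.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

A higher temperature flattens both distributions, exposing the teacher's relative preferences among wrong answers ("dark knowledge"), which is exactly the signal the student learns from.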
Anthropic Accuses AI Companies of Distillation Plagiarism; Musk Pushes Back: "The Thief Crying Thief"
Sohu Finance · 2026-02-25 10:13
Core Viewpoint
- Anthropic accuses three leading Chinese AI companies, DeepSeek, Moonshot AI, and MiniMax, of infringing on its Claude model's capabilities through fraudulent accounts and proxy services, using a technique known as "model distillation" to enhance their own models [3][4]

Group 1: Allegations of Model Theft
- Anthropic claims the companies used fraudulent accounts to access Claude, generating over 16 million interactions in violation of its service terms and access restrictions [3][4]
- The three companies allegedly employed similar methods to access Claude's capabilities, focusing on agentic reasoning, tool usage, and coding [4]

Group 2: Specific Interactions and Patterns
- DeepSeek: over 150,000 interactions focused on extracting Claude's reasoning capabilities across diverse tasks, with coordinated efforts to avoid detection [5]
- Moonshot AI: over 3.4 million interactions targeting agentic reasoning, tool usage, and data analysis, aiming to reconstruct Claude's reasoning pathways [5]
- MiniMax: the largest scale, with over 13 million interactions specifically targeting agentic coding and tool usage, redirecting traffic to capture newly released features [5]

Group 3: Legal and Ethical Implications
- The allegations raise questions about the legality of model distillation and the ethics of AI training, since many large language models are themselves trained on publicly available internet data without explicit consent from original authors [7][8]
- Debate continues over the ownership of synthetic data and training-compliance issues, particularly for open-source models [8]

Group 4: National Security and Export Controls
- Anthropic frames illegal distillation as a national security concern that could undermine U.S. control over advanced AI technology exports [9]
- Current U.S. export controls focus primarily on hardware rather than access to large language model APIs, leaving a regulatory gap [9]

Group 5: Developer Responsibilities and Compliance
- Developers using large language models should keep their training pipelines secure and compliant, maintaining clear records of training-data provenance and adhering to service terms [10][11]
- Anthropic is investing in defensive technologies to detect "distillation attack" patterns and implementing protections that reduce the effectiveness of illegal distillation while preserving the legitimate user experience [11]
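Detecting "distillation attack" patterns in API traffic is, at its simplest, an anomaly-detection problem over per-account usage. The sketch below is purely illustrative and is not Anthropic's actual method: the thresholds, record format, and account names are all invented. It flags accounts that combine very high call volume with unusually broad task coverage, one plausible signature of systematic capability extraction.

```python
# Illustrative sketch: flag accounts whose API usage looks like systematic
# capability extraction. Thresholds and the record format are made up;
# real abuse detection is far more sophisticated than this heuristic.

def flag_suspicious_accounts(usage, max_calls=10_000, min_topic_spread=0.8):
    """Flag accounts with both very high call volume and broad topic
    coverage (many distinct task labels relative to total calls)."""
    flagged = []
    for account, records in usage.items():
        calls = len(records)
        spread = len(set(records)) / calls if calls else 0.0
        if calls > max_calls and spread > min_topic_spread:
            flagged.append(account)
    return flagged

usage = {
    "acct_a": ["coding"] * 50,                       # low volume: ignored
    "acct_b": [f"task_{i}" for i in range(20_000)],  # huge and broad: flagged
}
assert flag_suspicious_accounts(usage) == ["acct_b"]
```

A real system would also correlate accounts into clusters (the allegations describe ~24,000 coordinated accounts), which single-account heuristics like this one cannot see.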
Billions Burned: Who Really Won the Spring Festival AI War?
Yicai · 2026-02-25 09:42
Core Insights
- The "Chinese Large Model Spring Festival War" has concluded, with ByteDance's Doubao and Alibaba's Qianwen maintaining top positions in the App Store download rankings, while Ant Group's Aifu and Tencent's Yuanbao have dropped significantly [1][4][11]
- The competition reflects diverging strategies: Chinese companies focus on consumer traffic and app entry points, while U.S. firms prioritize model performance and practical applications [1][15][17]

Group 1: Competition Overview
- The contest began gradually, with no clear starting signal, as major players like Baidu, Ant Group, and Tencent announced significant AI investments ahead of the Spring Festival [3][4]
- By early February, core players intensified their efforts; Qianwen's promotions drove a tenfold increase in orders and took the top spot on the App Store free list [4][6]
- Download data show Doubao leading with a 23% share, while Aifu and Yuanbao trail at around 15% each [4][5]

Group 2: User Engagement and Retention
- The cash-burning strategy proved effective for user acquisition, with Yuanbao's daily active users (DAU) rising significantly during its promotions [6][10]
- Long-term retention remains uncertain, as many users reported decreased usage of AI applications after the promotions ended [9][12]
- Analysts emphasize that retention rates after the promotional period are the real measure of product value [11][12]

Group 3: Market Dynamics and Future Outlook
- The competition is seen as a pivotal moment in the AI landscape, with Doubao establishing a strong brand presence and Qianwen focusing on practical applications [10][11][17]
- The differing paths highlight a consumer-engagement focus in China versus a more mature SaaS ecosystem in the U.S. [15][17]
- The future of AI assistants is expected to involve differentiated competition and integration into varied user scenarios, moving beyond standalone applications [13][15]