Sparse Mixture of Experts (SMoE)
"DeepSeek-V3 was built on our architecture": the CEO of "Europe's OpenAI" gets flamed for an outrageous claim
36Kr · 2026-01-26 07:44
Core Viewpoint
- The discussion centers on the competitive landscape in AI, particularly the contrasting approaches of Mistral and DeepSeek to sparse mixture-of-experts (MoE) models, with Mistral's CEO acknowledging China's strong position in AI and the significance of open-source models [1][4].

Group 1: Company Perspectives
- Mistral's CEO, Arthur Mensch, frames open-source models as a strategy for progress rather than competition, highlighting the company's early release of open-source models [1].
- Mensch claims that the recently released DeepSeek-V3 is built on the architecture Mistral proposed, portraying AI development as collaborative as well as competitive [1][4].
- The audience is skeptical of these claims, with some suggesting that Mistral's own recent models may have borrowed heavily from DeepSeek's architecture [4][13].

Group 2: Technical Comparisons
- Both DeepSeek's MoE models and Mistral's Mixtral are sparse MoE systems that aim to reduce computational cost while enhancing model capability, but they differ fundamentally in approach [9].
- Mixtral emphasizes engineering, showcasing the effectiveness of a robust base model combined with mature MoE technology, while DeepSeek focuses on algorithmic innovation to address shortcomings of traditional MoE systems [9][12].
- DeepSeek introduces fine-grained expert segmentation, allowing more flexible combinations of experts, in contrast to Mixtral's flat distribution of knowledge across a small number of experts [11][12]; see the sketch after this summary.

Group 3: Community Reactions
- The community has reacted critically to Mistral's statements, with some users expressing disbelief and pointing out the similarities between Mistral's and DeepSeek's architectures [2][17].
- There is a sentiment that Mistral, once a pioneer in open-source AI, has lost its innovative edge, while DeepSeek has gained more influence through its sparse MoE and MLA work [14][17].
- The race for foundational models is expected to continue, with DeepSeek reportedly targeting significant releases in the near future [19].
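To make the Group 2 contrast concrete, here is a minimal PyTorch sketch of top-k sparse-MoE routing. It is an illustration under assumptions, not either company's actual implementation: the class name, layer widths, and the 2-of-8 versus 8-of-32 configurations are invented for the example, and the real DeepSeekMoE design adds further elements (such as shared experts) that are omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    """Toy sparse MoE layer: each token is processed by its top_k highest-scoring experts."""
    def __init__(self, d_model, d_ff, n_experts, top_k):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)   # token -> expert scores
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                                   # x: (num_tokens, d_model)
        scores = F.softmax(self.router(x), dim=-1)          # routing probabilities
        weights, idx = scores.topk(self.top_k, dim=-1)      # keep only top_k experts per token
        weights = weights / weights.sum(-1, keepdim=True)   # renormalize the kept weights
        out = torch.zeros_like(x)
        for k in range(self.top_k):                         # accumulate each selected expert's output
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                       # tokens whose k-th choice is expert e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

# "Flat" layout in the Mixtral style: 8 full-width experts, 2 active per token.
flat_moe = SparseMoE(d_model=512, d_ff=2048, n_experts=8, top_k=2)

# Fine-grained layout in the DeepSeekMoE spirit: roughly the same parameter budget
# split into 4x as many, 4x narrower experts, with 4x as many active per token.
fine_grained_moe = SparseMoE(d_model=512, d_ff=512, n_experts=32, top_k=8)

tokens = torch.randn(16, 512)
print(flat_moe(tokens).shape, fine_grained_moe(tokens).shape)   # both (16, 512)
```

The point of the fine-grained split is combinatorial: choosing 2 of 8 experts gives C(8,2) = 28 possible expert combinations per token, while choosing 8 of 32 gives about 10.5 million, which is the "more flexible combinations of experts" the summary refers to.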
"DeepSeek-V3 was built on our architecture": the CEO of "Europe's OpenAI" gets flamed for an outrageous claim
量子位 · 2026-01-26 04:45
鱼羊, reporting from Aofei Temple | QbitAI official account QbitAI

"DeepSeek-V3 is built on the architecture Mistral proposed."

The moment the CEO of "Europe's OpenAI" said this, the internet blew up.

Netizens' reactions be like: and those were the mild ones. The blunter takes went straight to: what on earth is Mistral talking about...

For anyone who hasn't caught up on the drama yet, let's go through it from the start.

In a recent interview, when asked how he viewed the strong rise of China's open-source AI, Mistral co-founder and CEO Arthur Mensch responded:

"China is very strong in AI. We were among the first companies to release open-source models, and they found that to be a good strategy. Open source isn't really about competition; everyone keeps building on top of each other's work. For example, we released the first sparse mixture-of-experts (MoE) model in early 2024, and DeepSeek-V3 and the versions after it were built on that foundation. They use the same architecture, and we made public everything needed to rebuild it."

Arthur Mensch sounded confident, but netizens who heard this said: hold on a second, something is off here.

Never mind that the DeepSeek MoE paper was published only 3 days apart from the Mixtral paper Arthur Mensch referred to:

△ Image source: @Sebastian Raschka

Digging in more carefully, ...