New Work from the Mamba Core Author: Replacing the Attention Mechanism DeepSeek Uses, Built for Inference
量子位· 2025-06-01 03:40
Core Insights
- The article discusses a new research paper by Tri Dao and his team at Princeton University, introducing two attention mechanisms designed specifically for inference that significantly improve decoding speed and throughput while maintaining model performance [1][2][5].

Summary by Sections

Introduction of New Attention Mechanisms
- The research presents two novel attention mechanisms: Grouped-Tied Attention (GTA) and Grouped Latent Attention (GLA), which optimize memory usage and computational logic during model inference [2][8].
- GTA reduces KV-cache usage by approximately 50% compared to the existing GQA mechanism, while GLA decodes faster than MLA, at times up to 2x faster than FlashMLA [2][11][36].

Mechanism Details
- GTA ties and reuses the key and value states across query heads, reducing memory-transfer frequency and improving efficiency [15][16].
- GLA employs a two-layer structure that improves hardware efficiency and preserves parallel scalability, speeding up decoding without sacrificing model performance [17][18].

Experimental Results
- Experiments on models of several sizes (small, medium, large, and XL) trained on the FineWeb-Edu-100B dataset show that GTA outperforms GQA on larger models, while GLA matches MLA [21][22].
- Both GTA and GLA maintain or improve performance as model size increases, validating them as alternatives to GQA and MLA [24][36].

Performance Metrics
- The study evaluated perplexity and downstream-task accuracy across several benchmarks, showing that GTA and GLA remain competitive while reducing KV-cache requirements [26][27].
- GLA demonstrated superior throughput in real-time server tests, especially under concurrent requests, indicating its efficiency with long contexts [30][33].
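The ~50% KV-cache saving from tying key and value states can be illustrated with a back-of-envelope byte count. This is a minimal sketch, not code from the paper; the model shape (32 layers, 8 KV heads, head dimension 128, 4k context, fp16) is a hypothetical example.

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len,
                   bytes_per_elem=2, tied=False):
    """Approximate per-sequence KV-cache size in bytes.

    tied=False: separate K and V states per KV head (GQA-style).
    tied=True:  one shared state per KV head (GTA-style tying),
                roughly halving the cache.
    """
    states = 1 if tied else 2
    return layers * kv_heads * head_dim * seq_len * bytes_per_elem * states

# Hypothetical model shape: 32 layers, 8 KV heads, head_dim 128, 4k context, fp16
gqa = kv_cache_bytes(32, 8, 128, 4096)
gta = kv_cache_bytes(32, 8, 128, 4096, tied=True)
print(gqa // 2**20, "MiB vs", gta // 2**20, "MiB")  # GTA needs ~50% of GQA's cache
```

Since decoding is memory-bandwidth bound, halving the bytes read per token translates fairly directly into the decoding-speed gains the paper reports.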
Electronics Industry Weekly: DeepSeek Open Source Week Releases Five Major Technologies
Securities Research Report · Industry Research / Industry Commentary · 2025-03-03
Industry: Electronics · Rating: Outperform
Analyst: Xu Liang (S0820525010002, 0755-83562506, xuliang@ajzq.com) · Source: JuYuan Data, Aijian Securities Research Institute
Chart: one-year industry index vs. CSI 300
Related research: "Aijian Securities Electronics Industry Weekly (0215-0221): Latest Global AI Model Grok-3 Released" (2025/02/24); "AI Special Report (1): A DeepSeek Analysis Report for Investors" (2025/02/20)
Tables: DeepEP normal-kernel bandwidth test results over NVLink and RDMA; DeepEP latency-sensitive scenario test results; normal GEMM test results for dense models; GEMM contiguous-layout performance data; performance comparison of 1F1B, ZB1P, and DualPipe
1.1 FlashMLA Accelerates AI Scenario Generation — DeepSeek Open Source Week Releases Five Major Technologies (Aijian Securities Electronics Industry Weekly)
Aijian Securities Electronics Industry Weekly: DeepSeek Open Source Week Releases Five Major Technologies
Investment Rating
- The report rates the electronics industry "Outperform" relative to the market [1].

Core Insights
- The electronics industry declined 4.9% over the past week, ranking 28th of 31 sectors; SW electronics sub-sectors were mixed, with semiconductor materials up 0.4% and the rest down [2][44].
- DeepSeek launched five open-source projects aimed at improving AI model efficiency, a competitive counter to OpenAI's high-cost models [2][28].
- The report highlights significant advances in AI hardware and software, pointing to a potential surge in demand for domestic semiconductor chips [2][40].

Summary by Sections

1. DeepSeek Open Source Week Releases
- DeepSeek announced five open-source projects to enhance AI capabilities, including FlashMLA for efficient model inference and DeepEP for improved GPU communication [5][9].
- FlashMLA reached a data throughput of 3000 GB/s and 580 TFLOPS on the H800 platform, nearly doubling performance over previous implementations [6][8].
- DeepEP optimized GPU communication, reaching bottleneck bandwidths of 153 GB/s intra-node and 46 GB/s inter-node [11][12].

2. Global Industry Dynamics
- NVIDIA reported record revenue of $39.3 billion for fiscal Q4 2025, with significant growth in data-center revenue [30][31].
- OpenAI launched its largest model, GPT-4.5, expected to improve performance significantly but carrying a high API cost [33][34].
- Alibaba announced an investment of 380 billion yuan in cloud and AI hardware infrastructure over the next three years, a significant commitment to the sector [36].

3. Market Review
- The electronics industry fell 4.9% over the past week; semiconductor materials posted slight gains while other sub-sectors lost ground [2][44].
- Top-performing stocks in the electronics sector included Aojie Technology (+30.0%) and Chipone Technology (+27.4%) [48].
- The Philadelphia Semiconductor Index fell 11.7%, reflecting broader market pressure [51].
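The DeepEP bandwidth figures above set a floor on communication time for expert-parallel dispatch. As a rough illustration (my own arithmetic, not from the report), an idealized transfer-time estimate for a hypothetical 1 GB of all-to-all traffic per step:

```python
def transfer_ms(payload_gb, bandwidth_gb_s):
    """Idealized transfer time in milliseconds, ignoring latency and overlap."""
    return payload_gb / bandwidth_gb_s * 1000

# Hypothetical 1 GB of expert-parallel dispatch traffic per step:
intra = transfer_ms(1.0, 153)  # intra-node bottleneck bandwidth (DeepEP figure)
inter = transfer_ms(1.0, 46)   # inter-node bottleneck bandwidth (DeepEP figure)
print(f"intra-node: {intra:.1f} ms, inter-node: {inter:.1f} ms")
```

The roughly 3.3x gap between the two paths is why MoE dispatch schedulers try to keep as much traffic as possible within a node.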
DeepSeek's Open Sourcing Leads a Wave of AI Democratization
Wind万得· 2025-03-02 22:40
Core Insights
- The article discusses rapid advances in AI, focusing on DeepSeek's open-source strategy and the release of OpenAI's GPT-4.5, and highlights the competitive landscape in the AI large-model sector [1][9].

Group 1: DeepSeek's Open-Source Strategy
- DeepSeek, founded in 2023, has released several products, including DeepSeek R1, which performs comparably to leading closed-source models at a markedly lower training cost of approximately $5.576 million [2][5].
- DeepSeek's open-source initiative, including the release of code libraries such as FlashMLA and DeepEP, aims to lower the development barrier for AI models and improve computational efficiency [5][6].
- DeepSeek R1's performance drove user growth to 100 million within just seven days of launch, making it the fastest-growing AI application globally [7].

Group 2: Global AI Large-Model Progress
- The AI large-model sector is growing rapidly; DeepSeek's low-cost models are challenging incumbents such as Kimi, whose active users grew only 28% versus DeepSeek's 750% [7].
- OpenAI's GPT-4.5, released February 28, 2025, is billed as its largest and most knowledgeable chat model to date, though its high cost raises questions about performance relative to price [9][10].
- The competitive landscape is shifting: DeepSeek's open-source approach is prompting other companies, including OpenAI, to consider similar strategies to stay relevant [13].

Group 3: AI Large-Model Investment Dynamics
- Low-cost, high-performance models like DeepSeek's are reshaping investment dynamics, letting smaller firms enter the market and focus on innovation rather than heavy capital outlays [14][15].
- Investment focus is shifting from infrastructure to application scenarios, with significant funding opportunities in verticals such as finance and healthcare [15].
- Recent funding events in the AI large-model sector indicate growing interest, with several companies securing substantial investments [16][17].
Media Industry Weekly: GPT-4.5 Released, DeepSeek "Open Source Week" Concludes
GOLDEN SUN SECURITIES· 2025-03-02 02:55
Investment Rating
- The report maintains an "Increase" (overweight) rating for the media sector [6].

Core Viewpoints
- The media sector declined 8.06% during the week of February 24-28, 2025, pressured by market conditions; the 2025 outlook is optimistic, focusing on AI applications and mergers and acquisitions, particularly among state-owned enterprises [1][10].
- The release of "Nezha 2" further boosted domestic IPs, highlighting significant opportunities across the IP-monetization value chain, including trendy toys and film content [1].
- The publishing and gaming sectors are expected to benefit from tax-relief policies, with publishing projected for high growth in 2025 [1].

Summary by Sections

Market Overview
- The media sector's performance was notably poor, ranking among the bottom three sectors with a decline of 8.06% [10].
- Top-performing sectors included steel, building materials, and real estate; the computer and communications sectors also fell significantly [10].

Subsector Insights
- Key focus areas:
  1. Resource-integration expectations: China Vision Media, Guoxin Culture, and others [2].
  2. AI applications: Aofei Entertainment and Tom Cat noted for their potential [2].
  3. Gaming: strong recommendations for Shenzhou Taiyue and Kaixin Network [2].
  4. State-owned enterprises: Ciwen Media and Anhui New Media [2].
  5. Education: Xueda Education and Action Education [2].
  6. Hong Kong stocks: Tencent Holdings and Pop Mart [2].

Key Events Review
- OpenAI's release of GPT-4.5, claiming over ten times the computational efficiency of GPT-4, is a significant development in AI [21].
- DeepSeek's open-source initiatives, including the release of several codebases, aim to improve data access and model-training efficiency [21].
- Alibaba's launch of the video-generation model Wan 2.1 showcases advances in video technology, particularly synchronized motion and in-video text generation [21].

Subsector Data Tracking
- The gaming sector is seeing a variety of new releases, with popular titles open for pre-order [23].
- The domestic film market's weekly box office totaled roughly 431 million yuan, led by "Nezha: The Devil's Child" [24][26].
- Top-rated series and variety shows reflect strong viewer engagement, led by "Difficult to Please" and "Mars Intelligence Agency Season 7" [27][28].
DeepSeek Discloses a Theoretical Daily Cost-Profit Margin of 545%
华尔街见闻· 2025-03-01 11:17
Core Viewpoint
- DeepSeek has disclosed key information on its model-inference costs and margins, claiming a theoretical daily cost-profit margin of 545% under specific assumptions about GPU rental costs and token pricing [1][3].

Financial Performance
- Total daily cost is reported at $87,072, assuming a GPU rental cost of $2 per hour; theoretical daily revenue, with all tokens priced at DeepSeek-R1 rates, is $562,027, implying the 545% margin [1][3].
- DeepSeek-R1 pricing: $0.14 per million input tokens (cache hit), $0.55 per million input tokens (cache miss), and $2.19 per million output tokens [3].

Market Reactions
- The disclosure prompted significant online discussion about the profitability of DeepSeek's API services; Luchen Technology founder You Yang had previously cited a monthly loss of 400 million yuan [2][5].
- You Yang argued that DeepSeek's API (MaaS) is not currently profitable, citing discrepancies between benchmark speeds and real-world scenarios, as well as machine-utilization issues [5].

Operational Insights
- DeepSeek optimizes its V3/R1 inference systems for higher throughput and lower latency, using techniques such as larger batch sizes and load balancing [4].
- The company deploys full nodes for inference during peak hours and releases them for training off-peak, a strategy seen as a response to concerns about resource utilization [5].

Open Source Initiatives
- DeepSeek recently concluded its "Open Source Week," announcing the release of several codebases, including the Fire-Flyer file system and other frameworks for data processing [7][8][9][10][11].
- Cumulative downloads of the DeepSeek App have surpassed 110 million, with peak weekly active users near 97 million, indicating strong engagement [12].
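The headline 545% figure follows directly from the disclosed cost and revenue numbers. A one-line check (using the figures above; a sketch, not DeepSeek's own accounting):

```python
# Figures from the disclosure; the $2/hour GPU rental rate is DeepSeek's stated assumption.
daily_cost_usd = 87_072      # total daily cost
daily_revenue_usd = 562_027  # theoretical revenue with all tokens at R1 pricing

margin = (daily_revenue_usd - daily_cost_usd) / daily_cost_usd
print(f"theoretical daily cost-profit margin: {margin:.0%}")  # -> 545%
```

Note this is profit over cost, not over revenue; as a fraction of revenue the same numbers give a margin of about 85%.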
DeepSeek Announces: The Event Officially Concludes
21世纪经济报道· 2025-02-28 08:46
Core Insights
- DeepSeek's "Open Source Week" has concluded successfully, showcasing its commitment to transparency and collaboration in AI [1][7].

Group 1: Open Source Projects
- "Open Source Week" launched five projects from February 24 to February 28, spanning computing, communication, and storage [3].
- February 24: the first open-source library, FlashMLA, optimized for Hopper GPUs and variable-length sequences, already in production [4].
- February 25: DeepEP, designed for MoE model training and inference, enabling efficient all-to-all communication and supporting low-precision operations [4].
- February 26: DeepGEMM, a library for FP8 general matrix multiplication with fine-grained scaling, supporting both standard and MoE grouped GEMMs [5].
- February 27: two tools (DualPipe and EPLB) plus a performance-analysis dataset, with detailed explanations of parallel-computing optimization techniques [5].
- February 28: 3FS, which serves as an accelerator for all DeepSeek data access [6].

Group 2: API and Pricing Adjustments
- DeepSeek reopened its API recharge function on February 25 after a 19-day suspension, accompanied by a structural pricing adjustment [9].
- DeepSeek-chat (V3): 2 yuan per million input tokens and 8 yuan per million output tokens; DeepSeek-reasoner (R1): 4 yuan per million input tokens and 16 yuan per million output tokens [9].
- February 26: an off-peak discount strategy was introduced, billing V3 at 50% of the standard price and R1 at as little as 25% (up to 75% off) during specific hours [10].

Group 3: Market Impact
- According to CITIC Securities, DeepSeek's open-source initiatives are expected to catalyze the AI+ theme, deepening AI penetration across industries and lifting demand for computing power [7].
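The published rate card reduces to a small calculator. A minimal sketch under the announced structure, reading the off-peak tiers as V3 billed at 50% and R1 at 25% of the list price (the model labels and function are illustrative, not an official SDK):

```python
# Standard daytime prices, yuan per million tokens (per the announcement)
PRICES = {
    "deepseek-chat (V3)":     {"input": 2.0, "output": 8.0},
    "deepseek-reasoner (R1)": {"input": 4.0, "output": 16.0},
}
# Off-peak multipliers: V3 billed at 50% of list, R1 at 25% (up to 75% off)
OFF_PEAK = {"deepseek-chat (V3)": 0.5, "deepseek-reasoner (R1)": 0.25}

def cost_yuan(model, input_m_tokens, output_m_tokens, off_peak=False):
    """Token cost in yuan for a given volume of input/output tokens (in millions)."""
    p = PRICES[model]
    total = input_m_tokens * p["input"] + output_m_tokens * p["output"]
    return total * OFF_PEAK[model] if off_peak else total

# 1M input + 1M output tokens on R1, peak vs off-peak:
print(cost_yuan("deepseek-reasoner (R1)", 1, 1))                 # 20.0 yuan
print(cost_yuan("deepseek-reasoner (R1)", 1, 1, off_peak=True))  # 5.0 yuan
```

The sketch ignores the cache-hit/cache-miss input split, which further lowers the effective input price for repeated prefixes.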
Talking DeepSeek Open Source Week with a Post-2000 Open-Source Contributor: Always Open-Sourcing the Strongest Model May Mean Not Wanting to Make Money, or Wanting to Drive Bigger Change | Open Source Dialogue #2
晚点LatePost· 2025-02-27 14:03
"When AI becomes powerful enough, is open source still a good choice?"

Compiled by Liu Qian and Cheng Manqi. Guest: Wang Zihan, PhD student at the MLL Lab, Northwestern University. This conversation is episode #102 of 《晚点聊 LateTalk》, the podcast produced by 《晚点 LatePost》 ("the most first-hand business and technology interviews, the most authentic practitioner thinking"), available on Xiaoyuzhou, Ximalaya, Apple Podcasts, and other channels.

This is the second piece in LatePost's "Open Source Dialogue" series, which collects interviews and discussions related to open source; see the #开源对话 collection at the end of the original article.

Last Friday, DeepSeek announced on its official Twitter account that it would open-source five code repositories over five consecutive days the following week, entering its "open-source week." The four repositories released so far mainly cover training and inference code related to DeepSeek-V3/R1. This is a deeper form of open source than publishing technical reports and model weights: with the training and inference tools, developers can better reproduce the efficiency of the DeepSeek model series in their own systems. (Note: all four repositories and subsequent releases can be found on DeepSeek's GitHub under Open-Inf ...
Whose Rice Bowl Does DeepSeek's Open Sourcing Break?
虎嗅APP· 2025-02-27 10:17
Core Viewpoint
- The open sourcing of DeepSeek is creating significant opportunities for mid-sized AI companies and domestic chip manufacturers, while challenging the established large-model companies known as the "six little tigers" [1][4][8].

Group 1: Impact of DeepSeek's Open Sourcing
- Many mid-sized private enterprises are rapidly switching to DeepSeek's base model, with over half of existing clients having made the switch [1].
- The initiative has sparked a wave of AI application entrepreneurship, doubling collaboration requests to domestic chip companies [1].
- DeepSeek's "open-source week" plan, announced on February 21, shares several code repositories, enhancing transparency and innovation in AI [3].

Group 2: Reactions from Industry Players
- The "six little tigers" are debating internally what open sourcing implies for them, with concerns it could disrupt their business models [2].
- The open-source trend has prompted even traditionally closed-source companies such as Baidu to consider open-sourcing their models [3].
- Industry experts suggest that while DeepSeek's innovations benefit application and chip companies, base-model vendors face significant challenges [3][7].

Group 3: Market Dynamics and Future Prospects
- The open sourcing is expected to benefit hardware and chip manufacturers, letting them participate more in training and inference businesses [7].
- The algorithms and code optimizations shared during open-source week are designed to extract maximum GPU performance, enabling smaller developers to build high-performance models at lower cost [7].
- Even so, many companies may struggle to adopt DeepSeek's releases without additional support from service-layer companies [7][8].

Group 4: Broader Implications
- The open-source movement initiated by DeepSeek is seen as a catalyst for a broader shift in the AI ecosystem toward a more collaborative environment [10].
- DeepSeek's participation in major developer conferences signals a strategic move to solidify its market position and expand its influence [10].
- As more companies integrate DeepSeek, questions remain about the commercialization and sustainability of its services [10].
Just In! A Hardcore Release from DeepSeek!
券商中国· 2025-02-27 03:35
Core Viewpoint
- DeepSeek has made significant advances in optimizing parallel-computing strategies and has introduced new releases that enhance performance and reduce costs in AI applications [2][3][5][7].

Group 1: Optimized Parallelism Strategies
- DeepSeek announced the release of Optimized Parallelism Strategies aimed at improving computational efficiency, reducing resource waste, and maximizing system performance through effective task allocation and resource coordination [3][5].
- The strategies target high-performance parallel execution on multi-core, distributed, or heterogeneous systems, balancing computation, communication, and storage overhead [5].

Group 2: New Model Releases
- NVIDIA open-sourced the first optimized DeepSeek-R1 model on the Blackwell architecture, achieving a 25-fold increase in inference speed and a 20-fold reduction in cost per token [3][7].
- Local deployment of DeepSeek-R1 has drawn significant attention: the optimized build reaches an inference throughput of 21,088 tokens per second, versus 844 tokens per second on the H100, a substantial improvement [7].

Group 3: Cost Reduction Initiatives
- DeepSeek announced significant nighttime API price cuts, billing DeepSeek-V3 at 50% of the standard price and DeepSeek-R1 at as little as 25%, for reductions of up to 75% [6].

Group 4: Additional Open Source Contributions
- DeepSeek continued its open-source initiatives with FlashMLA, DeepEP, and DeepGEMM, which are optimized for NVIDIA GPUs and support a range of AI model training and inference tasks [9].
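The "25-fold" claim is consistent with the two throughput figures quoted above. A quick check (my own arithmetic on the reported numbers):

```python
# Throughput figures quoted in the report (tokens per second).
blackwell_tps = 21_088  # DeepSeek-R1 on the Blackwell-optimized build
h100_tps = 844          # the same model on H100

print(f"inference speedup: {blackwell_tps / h100_tps:.1f}x")  # -> 25.0x
```

The separate 20-fold cost-per-token figure presumably also folds in hardware pricing, which these two numbers alone cannot verify.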