DeepSeek Strikes Again! New Paper Bearing Liang Wenfeng's Name Draws Attention
21 Shi Ji Jing Ji Bao Dao· 2026-01-02 11:12
Core Insights
- DeepSeek has introduced a new framework called "Manifold-Constrained Hyperconnection" (mHC) aimed at enhancing scalability while reducing the computational power and energy required to train advanced AI systems [1][14][19]
- The next flagship system, R2, is expected to launch around the Chinese New Year in February [1][14]

Summary of Key Points

Introduction of mHC Framework
- DeepSeek published a paper detailing the mHC framework, which addresses the instability of traditional hyperconnections during large-scale model training while maintaining significant performance gains [1][15][16]
- The paper lists three primary authors, including DeepSeek's founder Liang Wenfeng [1][17]

Performance and Scalability
- The mHC framework projects the residual connection space of hyperconnections onto a specific manifold, restoring the identity mapping property, and integrates rigorous infrastructure optimizations for operational efficiency [3][19]
- Empirical experiments indicate that mHC effectively supports large-scale training, providing notable performance improvements with better scalability; when the expansion rate is set to 4, it incurs only a 6.7% additional time overhead (see the toy sketch below) [3][19][21]

Future Research Directions
- The paper suggests that mHC serves as a flexible and practical extension of the hyperconnection paradigm, potentially deepening the understanding of topological architecture design and guiding the evolution of foundational models [3][21]
- It opens several important research directions, including compatibility with manifold constraints tailored to specific learning objectives and the exploration of differentiated geometric constraints to better balance plasticity and stability [3][21]
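To make the "expansion rate" and "manifold constraint" bullets concrete, here is a minimal toy sketch in Python/NumPy of what a widened residual update with n parallel streams and a doubly stochastic mixing matrix could look like. All names (mhc_style_update, H_res), shapes, and the write-back rule are illustrative assumptions, not DeepSeek's actual formulation:

```python
import numpy as np

def mhc_style_update(x_streams, f, H_res):
    """Toy widened residual update: n parallel streams (the "expansion
    rate" n) are mixed by a doubly stochastic matrix H_res, then the
    layer output is added back. Illustrative only -- not the paper's
    exact formulation.

    x_streams: (n, C) parallel residual streams of width C
    f:         layer function mapping a (C,) vector to a (C,) vector
    H_res:     (n, n) doubly stochastic cross-stream mixing matrix
    """
    mixed = H_res @ x_streams      # cross-stream information exchange
    y = f(mixed[0])                # toy choice: feed stream 0 to the layer
    return mixed + y               # residual add, broadcast over all streams

n, C = 4, 8                        # expansion rate 4, as in the paper
x = np.random.default_rng(0).normal(size=(n, C))
identity = np.eye(n)               # the identity is doubly stochastic, so the
out = mhc_style_update(x, np.tanh, identity)  # plain residual is a special case
print(out.shape)                   # (4, 8)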
DeepSeek Publishes New Paper Proposing a More Efficient AI Development Method
Xin Lang Cai Jing· 2026-01-02 10:13
Core Viewpoint
- DeepSeek has introduced a more efficient artificial intelligence development method through a paper co-authored by founder Liang Wenfeng, proposing a framework called "Manifold-Constrained Hyperconnection" (mHC) aimed at enhancing scalability while reducing the computational power and energy required to train advanced AI systems [1]

Group 1
- The mHC framework is designed to improve scalability in AI development [1]
- DeepSeek's new flagship system, R2, is expected to launch around the Chinese New Year in February [1]
Liang Wenfeng's New DeepSeek Paper! Taking the Baton from Kaiming He and ByteDance, Steadying AI's "Foundation" Once More
Xin Lang Cai Jing· 2026-01-02 05:27
Core Insights
- DeepSeek has introduced a new architecture called mHC (Manifold-Constrained Hyper-Connections), which significantly improves the residual connection component of the Transformer architecture, a foundational element that has seen little change since its introduction in 2015 [1][3]

Group 1: Historical Context
- The evolution of neural network architectures began with ResNet, introduced by Kaiming He in 2015, which addressed the vanishing gradient problem and enabled the training of very deep networks [3]
- The Transformer model, released in 2017, adopted residual connections as a standard feature, forming the basis for most leading models today [3]

Group 2: Technical Comparisons
- Hyper-Connections, proposed by ByteDance in 2024, expanded the single residual stream into multiple parallel streams, improving model performance but introducing stability issues during training [5][10]
- mHC aims to resolve the stability problems of Hyper-Connections by constraining the connection weight matrix to a specific mathematical space, ensuring that signals are not amplified layer over layer [10][12]

Group 3: Mathematical Innovation
- The core innovation of mHC is to use a doubly stochastic matrix for the connection weights, which guarantees that the output never exceeds the maximum input value, conserving signal energy [10][12]
- The implementation uses the Sinkhorn-Knopp algorithm to impose the desired matrix properties efficiently, allowing end-to-end training without introducing new hyperparameters (see the sketch below) [11][12]

Group 4: Engineering Excellence
- DeepSeek's implementation of mHC demonstrates significant engineering capability, including custom CUDA kernels and operator-fusion techniques that minimize computational delays [16]
- The ability to carry an innovative mathematical idea into a practical training environment highlights DeepSeek's competitive advantage in the AI research landscape [16]
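Group 3's Sinkhorn-Knopp step can be sketched in a few lines of NumPy: alternately normalizing the columns and rows of a positive matrix drives it toward the doubly stochastic manifold, and each resulting row is a convex combination, which is why mixing cannot push activations above the input maximum. The exponential parametrization and iteration count below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def sinkhorn_knopp(logits: np.ndarray, n_iters: int = 20) -> np.ndarray:
    """Project an arbitrary real matrix toward the doubly stochastic
    manifold (the Birkhoff polytope) by alternately normalizing the
    columns and rows of its elementwise exponential."""
    M = np.exp(logits)                        # strictly positive entries
    for _ in range(n_iters):
        M = M / M.sum(axis=0, keepdims=True)  # columns sum to 1
        M = M / M.sum(axis=1, keepdims=True)  # rows sum to 1
    return M

rng = np.random.default_rng(0)
W = sinkhorn_knopp(rng.normal(size=(4, 4)))
print(W.sum(axis=1))   # exactly ~[1 1 1 1] (last normalization was over rows)
print(W.sum(axis=0))   # approximately [1 1 1 1]

# Because each row is a convex combination, mixing n residual streams
# with W can never push the maximum activation above the input maximum:
x = rng.normal(size=(4, 16))       # 4 parallel streams, width 16
assert (W @ x).max() <= x.max() + 1e-9
```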
Four Hot Topics at Once: Moutai's Direct-Sales Strategy Lands, Buffett Retires, the Four Domestic GPU "Little Dragons" Assemble for IPOs, and DeepSeek Sends Another Signal
Jin Rong Jie· 2026-01-02 00:17
Group 1: Moutai's Direct Sales Strategy
- Moutai officially launched its direct sales strategy by selling Feitian Moutai on the "i Moutai" platform at 1,499 yuan per bottle, with a purchase limit of 12 bottles per user per day [2]
- The move aims to reduce intermediaries, potentially converting some dealer profits into direct company revenue, which is expected to support mid-to-long-term performance [2]
- Market response was extremely enthusiastic, with all six rounds of product releases selling out quickly, indicating strong demand for reasonably priced Feitian Moutai [2]

Group 2: Warren Buffett's Retirement
- Warren Buffett, the legendary investor, announced his retirement at the age of 95, marking the end of a nearly century-long investment career [3]
- His career exemplified that investing can be a lifelong endeavor and has prompted a renewed examination of long-term investment philosophies [3]
- Buffett emphasized focusing on quality assets and long-term holding, a principle that remains relevant despite the rise of high-frequency trading and quantitative strategies [3]

Group 3: Domestic GPU Companies Accelerating Capitalization
- The four leading domestic GPU companies, including Suiruan Technology, have initiated their IPO processes, with Suiruan recently completing its IPO counseling [4]
- This acceleration of capitalization in the domestic GPU sector is unprecedented in speed, with multiple companies moving toward public offerings [4]
- The coming wave of tech-sector IPOs is expected to inject capital into the economy and support the goal of supply-chain self-sufficiency [4]

Group 4: DeepSeek's Research Publication
- DeepSeek recently published an important research paper on a preprint platform, with founder Liang Wenfeng listed as one of the authors, underscoring the company's strategic focus on technological advancement [5]
- The release follows strong market interest in the DeepSeek-R1 model, signaling the company's technical capabilities [5]
- Despite mixed opinions on the pace of AI technology iteration, DeepSeek's continuous output of significant research results points to robust technical strength [5]
DeepSeek Opens the New Year with a Bang! Paper Bearing Liang Wenfeng's Name Released
Di Yi Cai Jing· 2026-01-01 13:44
Core Viewpoint
- DeepSeek has introduced a new network architecture called mHC (Manifold-Constrained Hyper-Connections) aimed at addressing instability issues in large-scale model training, potentially guiding the evolution of next-generation infrastructure [1][3][4]

Group 1: Technical Innovations
- The mHC architecture improves upon traditional hyper-connection frameworks by balancing performance and efficiency, akin to adding "traffic rules" to information channels, ensuring stable information flow during model training [4]
- The research highlights that mHC can enhance the stability and scalability of large models, making it easier to deploy in complex scenarios such as multi-modal models and industrial decision-making systems [5]

Group 2: Industry Implications
- mHC may reduce hardware investment and training time for companies developing larger foundational models, lowering the barrier for small and medium AI enterprises to build more complex models [5]
- The innovation is seen as a fundamental advance on core issues within the Transformer architecture, with expectations of significant updates in DeepSeek's upcoming V4 version [5]

Group 3: Recent Developments
- Despite not launching major versions like R2 or V4 in 2025, DeepSeek has continued to innovate, releasing DeepSeek-V3.2 and DeepSeek-Math-V2, the latter being the first math model to reach International Mathematical Olympiad gold-medal standard [6]
AI Evolution Express | DeepSeek Proposes the New mHC Architecture
Di Yi Cai Jing· 2026-01-01 13:05
Core Insights
- DeepSeek has released a new paper proposing the mHC (Manifold-Constrained Hyperconnection) architecture [1]

Group 1
- Zhiyuan has launched GenieReasoner, an integrated embodied "large brain" system [1]
- Moonshot AI has introduced a new multimodal model early in the new year [1]
- DeepSeek's new paper focuses on the mHC architecture, which aims to enhance hyperconnection capabilities [1]
DeepSeek Opens the Year with a New Paper: All-New mHC Architecture Proposed, Liang Wenfeng Appears on the Author List
Xin Lang Cai Jing· 2026-01-01 12:24
Core Insights
- DeepSeek has introduced a new architecture called mHC (Manifold-Constrained Hyperconnection) aimed at addressing the instability of traditional hyperconnections during large-scale model training while maintaining significant performance gains [1][6]

Group 1: Research and Development
- The paper presents mHC as a universal framework that projects the residual connection space of hyperconnections onto a specific manifold to restore the identity mapping property (illustrated below) [6]
- The authors of the paper include Zhenda Xie, Yixuan Wei, Huanqi Cao, and Liang Wenfeng, the founder and CEO of DeepSeek [1]

Group 2: Performance and Scalability
- Empirical experiments indicate that mHC is effective for large-scale training, providing tangible performance improvements and excellent scalability [6]
- The proposed architecture is expected to contribute to a deeper understanding of topological architecture design and to offer promising directions for the evolution of foundational models [6]
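The "restoring the identity mapping property" claim in Group 1 has a simple numerical face: doubly stochastic matrices contain the identity and are closed under matrix products, so a stack of constrained layers can always reduce to a plain residual path. A toy check with hand-built matrices (not taken from the paper):

```python
import numpy as np

# Two hand-built doubly stochastic matrices (rows and columns each sum to 1).
A = np.array([[0.7, 0.3],
              [0.3, 0.7]])
B = np.array([[0.6, 0.4],
              [0.4, 0.6]])

P = A @ B                                # composite inter-layer mapping
print(P.sum(axis=0), P.sum(axis=1))      # both [1. 1.]: closed under products

# The identity matrix is itself doubly stochastic, so a constrained layer
# can reduce exactly to a standard residual connection.
I = np.eye(2)
print((I.sum(axis=0) == 1).all() and (I.sum(axis=1) == 1).all())  # True
```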
DeepSeek Reworks Kaiming He's Residual Connection! Liang Wenfeng Signs On Personally, the First Major Upgrade in a Decade
Xin Lang Cai Jing· 2026-01-01 11:45
Core Insights
- DeepSeek has introduced an upgraded version of the residual connection, a fundamental component of deep learning proposed by Kaiming He in 2016, marking a significant evolution in the field [1][27]

Group 1: Residual Connections and Hyper-Connections
- Residual connections have remained essentially unchanged for a decade, serving as the cornerstone of deep learning architectures by allowing signals to pass directly from shallow to deep layers without modification [5][31]
- Hyper-Connections (HC) widen the residual stream from C dimensions to n×C dimensions, introducing three learnable mapping matrices to manage information flow [7][32]
- Experiments by the DeepSeek team indicate that the Hres matrix, responsible for information exchange within the residual stream, contributes most of the performance improvement [7][32]

Group 2: Challenges with Hyper-Connections
- When HC is stacked across multiple layers, the composite mapping no longer retains the identity property, leading to sudden loss spikes and gradient fluctuations during training [9][34]
- The research team calculated that the amplification factor of HC's composite mapping peaked at 3000, meaning signals could be drastically amplified or attenuated during inter-layer propagation (see the toy comparison below) [10][35]

Group 3: Doubly Stochastic Matrix Constraints
- The core idea of the DeepSeek paper is to constrain the residual mapping matrix to a specific manifold formed by doubly stochastic matrices, known as the Birkhoff polytope [11][36]
- This constraint provides three key theoretical properties: norm preservation, closure under composition, and a geometric interpretation that stabilizes feature fusion [14][39][40]
- The Sinkhorn-Knopp algorithm is employed to project any matrix onto this manifold, reducing the signal gain from 3000 in HC to approximately 1.6 in mHC [16][41]

Group 4: Engineering Optimizations
- Widening the residual stream incurs additional memory-access costs: a standard residual connection reads 2C elements and writes C elements per update, while HC requires significantly more [19][44]
- The DeepSeek team developed infrastructure optimizations, including kernel fusion and specialized kernels for the Sinkhorn-Knopp algorithm, to reduce memory access and improve computational efficiency [19][43]
- The paper presents an optimization formula for recomputation strategies, aligning recomputation boundaries with pipeline-stage boundaries for better performance [20][45]

Group 5: Experimental Validation
- The proposed methods were validated on MoE models of 3B, 9B, and 27B parameters with the expansion rate n set to 4, showing stable training curves and a loss reduction of 0.021 relative to the baseline [22][47]
- In downstream evaluations, mHC outperformed HC by 2.1% on the BBH reasoning task and 2.3% on the DROP reading-comprehension task, leading on most tasks [22][48]
- Internal large-scale training experiments confirmed these findings, with mHC introducing only a 6.7% additional time overhead at n=4 [25][50]
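The 3000-versus-1.6 gain contrast in Groups 2 and 3 can be reproduced in spirit with a toy experiment: compare the spectral norm of a deep product of unconstrained mixing matrices against the same matrices projected onto the doubly stochastic manifold. The depth, perturbation scale, and printed numbers below are illustrative, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)
n, depth = 4, 32

def sinkhorn(M: np.ndarray, iters: int = 50) -> np.ndarray:
    """Crude projection onto the doubly stochastic manifold."""
    M = np.abs(M) + 1e-6                      # Sinkhorn needs positive entries
    for _ in range(iters):
        M = M / M.sum(axis=0, keepdims=True)  # columns sum to 1
        M = M / M.sum(axis=1, keepdims=True)  # rows sum to 1
    return M

def composite_gain(mats) -> float:
    """Spectral norm of the layer-to-layer product: an upper bound on
    how much a signal can grow while propagating through the stack."""
    P = np.eye(n)
    for M in mats:
        P = M @ P
    return float(np.linalg.norm(P, ord=2))

# Unconstrained HC-style mixing matrices drift away from the identity;
# their product can amplify (or crush) signals exponentially with depth.
free = [np.eye(n) + 0.2 * rng.normal(size=(n, n)) for _ in range(depth)]

# Doubly stochastic matrices have spectral norm 1 and are closed under
# products, so the composite gain of the constrained stack stays at ~1.
constrained = [sinkhorn(M) for M in free]

print("unconstrained gain:   ", composite_gain(free))          # typically >> 1
print("doubly stochastic gain:", composite_gain(constrained))  # ~1
```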
DeepSeek's Latest Release!
Zheng Quan Shi Bao· 2026-01-01 10:56
Group 1
- DeepSeek has introduced a new architecture called mHC (Manifold-Constrained Hyperconnection) to address instability in traditional hyperconnections during large-scale model training while maintaining significant performance gains [1][3]
- The research notes that while hyperconnections have improved performance by diversifying connection patterns, they have also weakened the inherent identity mapping property of residual connections, leading to training instability and limited scalability [3]
- Empirical results indicate that mHC effectively supports large-scale training with only a 6.7% additional time overhead when the expansion rate is set to 4, demonstrating its efficiency [3][5]

Group 2
- DeepSeek recently launched two official model versions, DeepSeek-V3.2 and DeepSeek-V3.2-Speciale, with V3.2 achieving performance comparable to GPT-5 on inference benchmarks and suited to everyday tasks [6][7]
- The V3.2-Speciale model strengthens long-form reasoning and adds theorem-proving abilities, performing on par with Gemini-3.0-Pro on mainstream inference benchmarks [7]
- DeepSeek has also cut API costs by more than 50%, making its models more accessible to developers [7]

Group 3
- DeepSeek's research paper on the R1 inference model was featured on the cover of the prestigious journal Nature, a significant achievement for Chinese AI technology in the international scientific community [8]
- The publication is notable as the first mainstream large-language-model research to undergo full peer review and appear in a leading journal, filling a gap in the field [8]
Just In: Signed by Liang Wenfeng, DeepSeek's New Year's Day Paper Set to Open a New Chapter in Architecture
Xin Lang Cai Jing· 2026-01-01 10:34
Core Insights
- DeepSeek has introduced a new architecture called Manifold-Constrained Hyper-Connections (mHC) aimed at addressing the instability of traditional hyper-connections during large-scale model training while maintaining significant performance gains [1][27][28]

Group 1: Architecture and Methodology
- The mHC architecture expands the Transformer's traditional single residual stream into a multi-stream parallel structure, using the Sinkhorn-Knopp algorithm to constrain the connection matrix to the manifold of doubly stochastic matrices [1][28]
- The core objective of mHC is to retain the performance gains from widening the residual stream while resolving training instability and excessive memory consumption [4][34]
- The research team implemented infrastructure optimizations such as kernel fusion, selective recomputation, and an extended DualPipe communication strategy to offset the overhead of wider channels (see the back-of-the-envelope model below) [31][34]

Group 2: Performance and Stability
- Empirical evidence shows that mHC not only resolves stability issues but also scales well in large-scale training: on a 27-billion-parameter model it increased training time by only 6.7% while delivering significant performance improvements [34][49]
- Evaluated against a baseline model, mHC reduced the final loss by 0.021 and maintained a stable gradient-norm profile, indicating superior stability compared to traditional hyper-connections [49][50]

Group 3: Benchmarking and Results
- Across downstream benchmarks, mHC consistently outperformed the baseline model and surpassed traditional hyper-connections on most tasks, with gains of 2.1% and 2.3% on specific tasks [51][52]
- Scalability experiments indicate that mHC retains its performance advantages under higher computational budgets, demonstrating robust effectiveness at scale [52][53]
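As a rough sense of why the kernel fusion in Group 1 matters, here is a back-of-the-envelope memory-traffic model, reusing the 2C-read/C-write accounting quoted in the previous article's summary for a standard residual connection; the HC access pattern assumed below is illustrative, not DeepSeek's measured profile:

```python
def residual_traffic(C: int) -> int:
    """Standard residual: read x and f(x) (2C), write x + f(x) (C)."""
    return 2 * C + C

def hyper_traffic(C: int, n: int) -> int:
    """Assumed HC-style pattern: read n streams plus the block output
    ((n + 1) * C), write n mixed streams (n * C)."""
    return (n + 1) * C + n * C

C, n = 4096, 4
base, widened = residual_traffic(C), hyper_traffic(C, n)
print(f"standard: {base} elements, n={n} streams: {widened} elements "
      f"({widened / base:.1f}x)")  # 3.0x more memory traffic per update

# Kernel fusion and selective recomputation target exactly this extra
# traffic, which is consistent with the paper's reported ~6.7%
# wall-clock overhead at n = 4 despite the much wider residual stream.
```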