mHC (Manifold-Constrained Hyper-Connections)
DeepSeek's latest release!
Zheng Quan Shi Bao · 2026-01-01 10:56
DeepSeek has published a new paper, with Liang Wenfeng among the credited authors.

The accompanying figure is a schematic of residual-connection paradigms, comparing the structural designs of (a) standard residual connections, (b) hyper-connections (HC), and (c) manifold-constrained hyper-connections (mHC). Unlike unconstrained HC, mHC projects the connection matrix onto a constrained manifold, restricting optimization to the residual-connection space and thereby ensuring training stability.

In its conclusion and outlook, the paper states that empirical results show mHC effectively restores the identity-mapping property and, compared with traditional HC, achieves stable large-scale training with better scalability. Crucially, through efficient infrastructure-level optimization, mHC delivers these improvements at negligible computational overhead.

On January 1 it was reported that DeepSeek had released a new paper proposing an architecture named mHC (Manifold-Constrained Hyper-Connections). The work aims to solve the instability of traditional hyper-connections in large-scale model training while preserving their significant performance gains. The paper lists three first authors: Zhenda Xie (解振达), Yixuan Wei (韦毅轩), and Huanqi Cao. Notably, DeepSeek founder Liang Wenfeng is also on the author list.

Internal large-scale training results show that mHC effectively supports scaled-up training: with an expansion rate of 4, it adds only 6.7% extra time overhead.

The paper's abstract notes that recent work, represented by hyper-connections (HC), has, by widening the residual stream and diversifying connection patterns, extended ...
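The structural contrast the figure describes — a single residual stream versus several parallel streams mixed by a connection matrix — can be sketched in a few lines. This is a simplified illustration, not DeepSeek's implementation; the shapes, the mixing matrix `A`, and the weight vectors `b_in`/`b_out` are assumptions made for exposition.

```python
import numpy as np

def standard_residual(x, f):
    """(a) Standard residual connection: one stream, identity shortcut."""
    return x + f(x)

def hyper_connection(X, f, A, b_in, b_out):
    """(b) Hyper-connection sketch: n parallel residual streams.

    X     : (n, d) stacked residual streams
    A     : (n, n) stream-to-stream connection matrix
    b_in  : (n,)   weights mixing the streams into the layer input
    b_out : (n,)   weights routing the layer output back to the streams
    """
    u = b_in @ X                          # layer input: weighted sum of streams
    return A @ X + np.outer(b_out, f(u))  # mix streams, add layer output back
```

With n = 1, A = [[1]], and unit weights, the hyper-connection collapses to the standard residual form — the identity-mapping property that, per the paper's conclusion, mHC is designed to restore.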
Just in: with Liang Wenfeng's name on it, DeepSeek's New Year's Day paper sets out to open a new chapter in architecture
Xin Lang Cai Jing · 2026-01-01 10:34
Core Insights
- DeepSeek has introduced a new architecture called Manifold-Constrained Hyper-Connections (mHC), aimed at addressing the instability of traditional hyper-connections during large-scale model training while maintaining their significant performance gains [1][27][28].

Group 1: Architecture and Methodology
- The mHC architecture expands the traditional single residual stream of Transformers into a multi-stream parallel structure, using the Sinkhorn-Knopp algorithm to constrain the connection matrix to the doubly stochastic matrix manifold [1][28].
- The core objective of mHC is to retain the performance improvements from widening the residual stream while resolving training instability and excessive memory consumption [4][34].
- The research team implemented infrastructure optimizations such as kernel fusion, selective recomputation, and an extended DualPipe communication strategy to offset the overhead of the wider channels [31][34].

Group 2: Performance and Stability
- Empirical evidence shows that mHC not only resolves the stability issues but also scales well in large-scale training: on a 27-billion-parameter model, it increased training time by only 6.7% while achieving significant performance improvements [34][49].
- Evaluated against a baseline model, mHC reduced the final loss by 0.021 and maintained a stable gradient-norm profile, indicating superior stability compared with traditional hyper-connections [49][50].

Group 3: Benchmarking and Results
- Across downstream benchmarks, mHC consistently outperformed the baseline model and surpassed traditional hyper-connections on most tasks, achieving performance gains of 2.1% and 2.3% on specific tasks [51][52].
- The scalability experiments indicated that mHC maintains its performance advantages even under higher computational budgets, demonstrating robust effectiveness in large-scale scenarios [52][53].
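The Sinkhorn-Knopp step mentioned under Group 1 — constraining the connection matrix to the doubly stochastic manifold — can be sketched as alternating row and column normalization. This is the generic textbook form of the algorithm, not the paper's optimized kernel; the function name, iteration count, and exponential parameterization are assumptions.

```python
import numpy as np

def sinkhorn_knopp(H, n_iters=200):
    """Map an unconstrained matrix H toward the doubly stochastic manifold
    (non-negative entries, every row and column summing to 1) by
    alternately normalizing rows and columns."""
    M = np.exp(H)                              # strictly positive entries
    for _ in range(n_iters):
        M /= M.sum(axis=1, keepdims=True)      # rows sum to 1
        M /= M.sum(axis=0, keepdims=True)      # columns sum to 1
    return M
```

A doubly stochastic connection matrix keeps the total weight flowing between residual streams fixed, which is one natural reading of how constraining the manifold stabilizes training relative to unconstrained hyper-connections.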