Sparse Attention Mechanisms
Generating Long Videos at the Cost of Short Ones: ByteDance Seed's New Attention Mechanism Cuts Compute by 85%
Sou Hu Cai Jing· 2025-09-02 05:45
Core Insights
- ByteDance Seed, in collaboration with Stanford researchers, has introduced a new model that reduces the computational cost of generating long videos by 85% while maintaining quality and coherence in characters and scenes [1][3].

Group 1: Technology Overview
- The model employs a sparse attention mechanism called Mixture of Contexts (MoC), which reframes long video generation as a context retrieval task [1][3].
- MoC generates a one-minute 480P video with only 2.32×10¹² FLOPs, versus 1.66×10¹³ FLOPs for the baseline model, an 85% reduction in computational load [3].
- MoC also saves cost on shorter videos: a multi-shot 64-second 480P video requires only 2.3×10¹² FLOPs, roughly 86% less than the baseline [3].

Group 2: Mechanism Details
- MoC's core mechanism segments cross-modal sequences into semantically homogeneous content blocks, improving retrieval accuracy and reducing unnecessary computation [4][6].
- A dynamic top-k routing process retains only the most relevant blocks for attention, improving computational efficiency without adding parameters (see the sketch after this summary) [6][7].
- To prevent closed information loops and ensure smooth long-range dynamics, strict temporal masks prohibit queries from attending to their own or subsequent blocks [6][7].

Group 3: Performance Metrics
- MoC outperforms baseline models on performance metrics including subject consistency, background coherence, action continuity, and image quality [3][4].
- In a single-shot 8-second 320×192 video test, MoC required 4.1×10⁹ FLOPs, roughly 78% less than the baseline's 1.9×10¹⁰ FLOPs [3].

Group 4: Engineering Implementation
- MoC feeds the selected key-value pairs into FlashAttention variable-length kernels, enabling linear scaling to millions of tokens and efficient parallel processing on GPUs [6][7].
- All visual tokens can attend to the complete text prompt, maintaining thematic consistency and enhancing editability [7].
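The dynamic top-k routing bullet is the heart of MoC, so here is a minimal, illustrative sketch of chunk-level top-k routing with a strict causal routing mask. The mean-pooled key descriptors, the dense mask (standing in for FlashAttention variable-length kernels), and the mandatory local causal window are assumptions made for clarity, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def moc_sparse_attention(q, k, v, chunk_size=64, top_k=2):
    """q, k, v: (seq_len, dim), seq_len divisible by chunk_size."""
    n, d = q.shape
    n_chunks = n // chunk_size
    pos = torch.arange(n)
    chunk_of = pos // chunk_size                                  # chunk id per token

    # 1) Chunk descriptors: mean-pooled keys of each content block.
    desc = k.view(n_chunks, chunk_size, d).mean(dim=1)            # (n_chunks, d)

    # 2) Routing with a strict causal mask: a query may only select
    #    blocks strictly before its own, as the article describes.
    route = q @ desc.T                                            # (n, n_chunks)
    route = route.masked_fill(
        torch.arange(n_chunks)[None, :] >= chunk_of[:, None], float("-inf"))
    selected = torch.zeros(n, n_chunks, dtype=torch.bool)
    if n_chunks > 1:
        top = route.topk(min(top_k, n_chunks - 1), dim=-1).indices
        selected.scatter_(1, top, True)
        selected &= torch.isfinite(route)                         # drop masked picks

    # 3) Token-level visibility: tokens of selected blocks, plus a local
    #    causal window in the query's own block (an assumed mandatory link
    #    so early queries still have something to attend to).
    visible = selected[:, chunk_of]                               # (n, n)
    visible |= (chunk_of[:, None] == chunk_of[None, :]) & (pos[None, :] <= pos[:, None])

    # 4) Dense masked attention over the visible positions only.
    s = (q @ k.T) / d ** 0.5
    return F.softmax(s.masked_fill(~visible, float("-inf")), dim=-1) @ v

q, k, v = (torch.randn(256, 64) for _ in range(3))
out = moc_sparse_attention(q, k, v)                               # (256, 64)
```

Since only top-k of the chunks contribute keys and values per query, the attention cost scales with k rather than with sequence length, which is where the quoted FLOP savings come from.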
Generating Long Videos at the Cost of Short Ones: ByteDance Seed's New Attention Mechanism Cuts Compute by 85%
量子位· 2025-09-02 04:17
Core Viewpoint
- The article discusses a new model developed by ByteDance Seed in collaboration with Stanford researchers that significantly reduces the computational cost of generating long videos while maintaining quality and coherence [1][2].

Group 1: Cost Reduction in Video Generation
- The model generates long videos at a cost comparable to short videos, an 85% reduction in computational requirements [1][10].
- For example, generating a one-minute 480P video with the Mixture of Contexts (MoC) mechanism requires only 2.32×10¹² FLOPs, versus 1.66×10¹³ FLOPs for the baseline model (a quick check of these figures follows this summary) [10].
- MoC shows similar savings on short videos: a 64-second multi-shot video requires 2.3×10¹² FLOPs versus 1.7×10¹³ FLOPs for the baseline, roughly 86% savings [11].

Group 2: Quality and Consistency
- The generated long videos maintain subject and background consistency, motion smoothness, and overall image quality, outperforming the baseline model across performance metrics [12].
- In a single-shot 8-second 320×192 video test, MoC cut computational load by roughly 78%, requiring only 4.1×10⁹ FLOPs versus 1.9×10¹⁰ FLOPs for the baseline [14].

Group 3: Mechanism of MoC
- MoC reframes long video generation as an information retrieval task, focusing on efficient cross-temporal memory retrieval [3][15].
- Its sparse attention mechanism segments video sequences into semantically homogeneous content blocks, letting each query token connect only to the most relevant blocks [15][16].
- A "content-aligned chunking" step improves retrieval accuracy and cuts unnecessary computation [19].

Group 4: Engineering Implementation
- Strict temporal masks are enforced during the routing phase so that queries cannot access future blocks, avoiding information leakage across time [20].
- The implementation uses FlashAttention for efficient memory access and parallel processing on GPUs, scaling to millions of tokens [20].
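As a quick sanity check, the percentages quoted in both articles follow directly from the stated FLOP counts; a few lines of Python reproduce them (figures taken verbatim from the summaries, small differences being rounding):

```python
cases = {
    "1-min 480P video":         (2.32e12, 1.66e13),
    "64-s multi-shot video":    (2.3e12, 1.7e13),
    "8-s 320x192 single shot":  (4.1e9, 1.9e10),
}
for name, (moc, baseline) in cases.items():
    print(f"{name}: {1 - moc / baseline:.0%} fewer FLOPs")
# -> 86%, 86%, 78%: consistent with the quoted ~85%, ~86%, and ~78%.
```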
DeepSeek V4 to "Take Off" on an Intern's Award-Winning Paper? Liang Wenfeng Targets Long Context: 10× Faster Processing, "Perfect" Accuracy
AI前线· 2025-07-31 05:02
Core Viewpoint
- The article highlights the strong showing of Chinese authors in computational linguistics this year, focusing on DeepSeek's award-winning paper, which introduces a novel sparse attention mechanism for long-context modeling and demonstrates efficiency and performance gains over traditional methods [1][17].

Group 1: Award and Recognition
- ACL reported that over 51% of first authors of 2025 papers were from China, with the USA second at 14% [1].
- A DeepSeek paper with founder Liang Wenfeng among its authors won a Best Paper award, generating considerable discussion [1].

Group 2: Technical Innovations
- The paper introduces Natively Trainable Sparse Attention (NSA), which combines algorithmic innovation with hardware-aligned optimization for efficient long-context modeling [4][6].
- NSA employs a dynamic hierarchical sparse strategy that balances global context awareness with local precision through token compression and token selection (a hedged sketch of this design appears after this summary) [11].

Group 3: Performance Evaluation
- NSA outperformed traditional full-attention models on 7 of 9 benchmark metrics, with particularly strong results on long-context tasks [8][10].
- In a 64k-context "needle in a haystack" test, NSA achieved perfect retrieval accuracy alongside significant speed improvements in decoding and training [9][15].

Group 4: Future Implications
- The upcoming DeepSeek model is expected to incorporate NSA technology, raising anticipation for its release [17].
- The delay of DeepSeek R2's release is speculated to stem from the founder's dissatisfaction with its current performance [17].
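The "token compression and selection" bullet maps onto NSA's publicly described design: parallel branches of attention over compressed block summaries, full-granularity attention over a few selected blocks, and a sliding window, mixed by learned gates. The sketch below is a single-query, single-head illustration under assumed block sizes and fixed (rather than learned) gate weights; it is not DeepSeek's hardware-aligned kernel implementation.

```python
import torch
import torch.nn.functional as F

def attend(q, k, v):
    """Plain scaled dot-product attention; q: (1, d), k/v: (t, d)."""
    s = (q @ k.T) / q.shape[-1] ** 0.5
    return F.softmax(s, dim=-1) @ v

def nsa_sketch(q, k, v, block=32, top_blocks=2, window=64, gates=(0.4, 0.4, 0.2)):
    """q: (1, d) decoding query; k, v: (t, d) past context, t >= block."""
    t, d = k.shape
    nb = t // block
    kb = k[: nb * block].view(nb, block, d)
    vb = v[: nb * block].view(nb, block, d)
    summaries = kb.mean(dim=1)                        # compressed block tokens
    # Branch 1: coarse attention over compressed summaries (global awareness).
    out_cmp = attend(q, summaries, vb.mean(dim=1))
    # Branch 2: pick the highest-scoring blocks, attend over their raw tokens
    # at full granularity (local precision on the most relevant context).
    idx = (q @ summaries.T).squeeze(0).topk(min(top_blocks, nb)).indices
    out_sel = attend(q, kb[idx].reshape(-1, d), vb[idx].reshape(-1, d))
    # Branch 3: sliding window over the most recent tokens.
    out_win = attend(q, k[-window:], v[-window:])
    g1, g2, g3 = gates                                # fixed here; learned in NSA
    return g1 * out_cmp + g2 * out_sel + g3 * out_win

q, k, v = torch.randn(1, 64), torch.randn(256, 64), torch.randn(256, 64)
out = nsa_sketch(q, k, v)                             # (1, 64)
```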
Just In: DeepSeek Liang Wenfeng's NSA Paper and Peking University's Yang Yaodong Team Take ACL 2025 Best Paper Awards
36Ke· 2025-07-31 03:40
Chinese teams came away with a rich haul at this year's ACL.

ACL is the top international conference in computational linguistics and natural language processing, organized by the Association for Computational Linguistics and held annually. It has long ranked first in academic influence within NLP and is a CCF-A recommended conference. This year's conference, the 63rd, was held in Vienna, Austria, from July 27 to August 1, 2025.

Total submissions set a record this year at more than 8,000 (versus 4,407 last year), split between main-conference papers and Findings, with acceptance rates of 20.3% and 16.7% respectively.

According to official data, more than half of the first authors of all papers were from China (51.3%), up from under a third last year (30.6%). The USA followed in second place, at just 14.0%.

Awards this year comprised 4 Best Papers, 2 Best Social Impact Papers, 3 Best Resource Papers, 3 Best Theme Papers, 26 Outstanding Papers, 2 TACL Best Papers, 1 Best Demo Paper, and 47 SAC Highlights.

The specific award details follow.

Best Paper Award

Paper abstract: Algorithmic fairness has traditionally adopted the mathematically convenient perspective of race blindness (i.e., identical treatment). However, the team argues that in a range of important ...
Training-Free and Plug-and-Play, with 2× End-to-End GPU Inference Speedup: DraftAttention, an Acceleration Method for Video Diffusion Models
机器之心· 2025-06-28 04:35
Core Insights
- The article discusses the challenges and advances in video generation with diffusion models, focusing on the computational bottleneck of attention mechanisms in the Diffusion Transformer (DiT) architecture [1][6][14].
- A new method, DraftAttention, significantly reduces the computational overhead of attention while maintaining high generation quality, achieving up to 2× end-to-end inference acceleration on GPUs [3][22][46].

Group 1: Background and Challenges
- Diffusion models have become mainstream for high-quality video generation, but the computational load of attention grows dramatically with video length and resolution, leading to inefficiency [1][6].
- In models like HunyuanVideo, attention computation can account for over 80% of total processing time; generating an 8-second 720p video takes nearly an hour [1][5].
- Attention complexity grows quadratically with token count, which is directly proportional to frame count and resolution, causing significant slowdowns in inference speed [6][7].

Group 2: Existing Solutions and Limitations
- Current acceleration methods such as Sparse VideoGen and AdaSpa use sparse attention for some end-to-end GPU acceleration, but insufficient sparsity and rigid design limit their effectiveness [2][3].
- These methods rely on fixed sparse operators and lack dynamic adaptation to input content, making fine-grained, content-aware sparse pattern control difficult [2][7].

Group 3: DraftAttention Methodology
- DraftAttention is a training-free, plug-and-play dynamic sparse attention mechanism designed to reduce the computational burden of attention while preserving generation quality [3][11][46].
- It builds a low-resolution attention map to estimate token importance, which guides the selection of sparse patterns for the high-resolution attention calculation (see the sketch after this summary) [11][12].
- A token-rearrangement strategy improves the execution efficiency of the sparse computation on GPUs, keeping the approach hardware-friendly [13][22].

Group 4: Theoretical Foundations and Experimental Results
- Theoretical analysis shows the approximation error between the low-resolution and high-resolution attention maps is bounded [15][17].
- Experiments show DraftAttention outperforms existing sparse attention methods such as Sparse VideoGen on multiple metrics, including PSNR and SSIM, particularly at high sparsity rates [20][21].
- On NVIDIA H100 and A100 GPUs, DraftAttention achieves up to 1.75× end-to-end inference acceleration, with gains scaling with video length, resolution, and sparsity [22][46].

Group 5: Future Directions
- The authors plan to further address efficiency bottlenecks in long video generation by integrating quantization and distillation, extending high-quality video generation to resource-constrained environments such as mobile and edge devices [46].
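A hedged sketch of the two-stage idea described above: pool queries and keys to build a cheap low-resolution "draft" attention map, keep the highest-weight blocks per row, then expand that block pattern into the mask for the full-resolution attention. The pool size, keep ratio, and dense masking (used here in place of the paper's hardware-friendly token rearrangement) are assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def draft_attention(q, k, v, pool=8, keep_ratio=0.25):
    """q, k, v: (seq_len, dim), seq_len divisible by pool."""
    n, d = q.shape
    m = n // pool
    # 1) Draft map: average-pool Q and K, attend at low resolution (m x m).
    q_lo = q.view(m, pool, d).mean(dim=1)
    k_lo = k.view(m, pool, d).mean(dim=1)
    draft = F.softmax((q_lo @ k_lo.T) / d ** 0.5, dim=-1)
    # 2) Per pooled query row, keep only the highest-weight blocks.
    keep = max(1, int(keep_ratio * m))
    block_mask = torch.zeros(m, m, dtype=torch.bool)
    block_mask.scatter_(1, draft.topk(keep, dim=-1).indices, True)
    # 3) Expand the block pattern to token resolution and mask the
    #    high-resolution attention with it.
    mask = block_mask.repeat_interleave(pool, 0).repeat_interleave(pool, 1)
    s = (q @ k.T) / d ** 0.5
    return F.softmax(s.masked_fill(~mask, float("-inf")), dim=-1) @ v

q, k, v = (torch.randn(512, 64) for _ in range(3))
out = draft_attention(q, k, v)          # ~75% of attention blocks skipped
```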
A 0.5B Model Punches Above Its Weight for a New On-Device SOTA: Runs on a 4090, 5× Speedup on Long Text | Tsinghua & ModelBest Open Source
量子位· 2025-06-10 07:35
Contributed by Tsinghua University & ModelBest
量子位 | 公众号 QbitAI

The value king of on-device models: Tsinghua University and the ModelBest team have open-sourced a new model, MiniCPM 4, in 8B and 0.5B parameter sizes, reaching the best performance in its class with only 22% of the training cost of comparable open-source models.

MiniCPM4-8B is the first open-source natively sparse model; backed by an extremely high 5% sparsity, it lets long-text and deep-reasoning workloads truly run on device.

On benchmarks including MMLU, CEval, MATH500, and HumanEval, it matches Qwen-3-8B and surpasses Gemma-3-12B with just 22% of the training cost.

MiniCPM4-0.5B also punches above its weight: on MMLU, CEval, BBH, HumanEval, and other benchmarks it outperforms the same-class Qwen-3-0.6B, Llama 3.2, and Gemma 3, and native QAT gives it near-lossless int4 quantization and a blistering inference speed of 600 tokens/s.

On common edge chips such as the Jetson AGX Orin and RTX 4090, MiniCPM 4 achieves a 5× regular speedup on long-text processing and up to 100× in extreme scenarios.

See the demo video:

The team has already publicly released a technical report; the ...
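The excerpt credits native QAT with near-lossless int4 quantization. As background, here is a generic sketch of symmetric per-channel int4 weight quantization, the family of schemes QAT targets; this is illustrative post-hoc quantization, not MiniCPM4's actual training-in-the-loop recipe.

```python
import torch

def quantize_int4(w):
    """w: (out, in) weights -> int4 codes in [-8, 7] plus per-row scales."""
    scale = w.abs().amax(dim=1, keepdim=True) / 7.0   # per-output-channel scale
    codes = torch.clamp(torch.round(w / scale), -8, 7).to(torch.int8)
    return codes, scale                               # int8 tensor holds 4-bit codes

def dequantize_int4(codes, scale):
    return codes.to(torch.float32) * scale

w = torch.randn(256, 512)
codes, scale = quantize_int4(w)
recon_err = (w - dequantize_int4(codes, scale)).abs().mean()  # small but nonzero
```

QAT simulates this rounding during training so the weights become robust to it, which is what makes "almost no degradation" at int4 plausible.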
A 0.5B Model Punches Above Its Weight for a New On-Device SOTA: Runs on a 4090, 5× Speedup on Long Text | Tsinghua & ModelBest Open Source
量子位· 2025-06-10 07:35
Core Insights
- MiniCPM4, developed by Tsinghua University and the ModelBest (面壁智能) team, is an open-source model that achieves best-in-class performance with only 22% of the training cost of comparable models, offered in 8B and 0.5B parameter sizes [1][3][4].
- The model uses a novel sparse attention mechanism, InfLLM v2, which enables efficient long-context processing at a 5% sparsity rate [2][8][16].
- MiniCPM4 outperforms models such as Qwen-3 and Gemma-3 on benchmarks while using significantly less training data [3][11][116].

Model Performance
- MiniCPM4-8B matches the performance of Qwen-3-8B and surpasses Gemma-3-12B using only 22% of the training data used by Qwen-3 [3][116].
- MiniCPM4-0.5B outperforms Qwen-3-0.6B and Llama 3.2 across benchmark tests, showcasing its efficiency at smaller parameter sizes [3][11].
- The model reaches a decoding speed of 600 tokens per second with minimal performance loss under quantization [3][10].

Technical Innovations
- The InfLLM v2 architecture enables efficient long-context processing by dynamically selecting relevant context tokens, reducing computational cost by 60% compared to previous methods [8][11][16].
- The model ships with a lightweight CUDA inference framework (CPM.cu) and a cross-platform deployment framework (ArkInfer) to optimize performance on edge devices [19][20][40].
- The FR-Spec algorithm improves speculative sampling efficiency, reducing computational overhead by 75% while maintaining output accuracy (a hedged sketch of the idea follows below) [28][30].

Data Efficiency
- MiniCPM4 achieves high capability density using only 8 trillion training tokens, compared to 36 trillion for Qwen-3, demonstrating effective data filtering strategies [56][116].
- The UltraClean data selection method enhances the quality of pre-training data, significantly improving model performance [57][61].

Application and Use Cases
- MiniCPM4 is designed for long-document understanding and generation, proving effective in tasks such as automated literature review generation and complex tool interactions [120][130].
- Its ability to handle long sequences and maintain high accuracy in context extrapolation makes it suitable for various AI-driven applications [118][119].
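On FR-Spec: the reported overhead reduction comes from shrinking the draft model's most expensive step, the LM-head projection over the vocabulary, by restricting drafting to high-frequency tokens while the target model still verifies over the full vocabulary. A hedged sketch of that core idea, with an assumed vocabulary size, a top-25% frequency subset, and greedy draft-then-verify standing in for full speculative sampling:

```python
import torch

torch.manual_seed(0)
vocab, hidden = 32000, 512
lm_head = torch.randn(vocab, hidden)              # output projection (shared)
token_freq = torch.rand(vocab)                    # stand-in corpus frequencies
hi_freq = token_freq.topk(vocab // 4).indices     # top-25% most frequent ids

def draft_token(h):
    """Draft step: project onto the high-frequency subset only (~25% of the
    full LM-head cost); still returns a full-vocabulary token id."""
    return hi_freq[(lm_head[hi_freq] @ h).argmax()].item()

def verify_token(h):
    """Target-model step: full-vocabulary projection used for verification,
    so final outputs match ordinary decoding."""
    return (lm_head @ h).argmax().item()

h = torch.randn(hidden)
accepted = draft_token(h) == verify_token(h)      # greedy accept/reject
```

Because verification runs over the full vocabulary, pruning the draft vocabulary trades only acceptance rate, not output quality, for the cheaper drafting step.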
A Core MoBA Author at Moonshot AI Looks Back: A "Newly Minted LLM Trainer" and His Three Trips to the Cliff of Reflection
晚点LatePost· 2025-02-20 14:21
"We've gone from open-sourcing papers and open-sourcing code to open-sourcing chains of thought!"

By Andrew Lu; annotations by He Qianming and Cheng Manqi

On February 18, Kimi and DeepSeek announced new work on the same day: MoBA and NSA respectively, both improvements to the attention mechanism.

Today, Andrew Lu, one of MoBA's main developers, posted on Zhihu recounting three pitfalls he hit during development, which he calls "three trips to the Cliff of Reflection" (思过崖, a wuxia reference to a cliff where one is sent to reflect on one's mistakes). His Zhihu signature reads "newly minted LLM trainer."

One comment under his answer reads: "From open-source papers and open-source code, we've now evolved to open-source chains of thought."

The attention mechanism matters because it is the core mechanism of today's large language models (LLMs). Back in June 2017, the Transformer paper by its eight authors that kicked off the LLM revolution was titled "Attention Is All You Need"; it has since been cited more than 153,000 times.

The attention mechanism lets an AI model, like a human, know what to "focus on" and what to "ignore" when processing information, capturing its most critical parts.

Attention operates in both the training and inference stages of large models. Roughly, it works by ...
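The excerpt cuts off just as it begins to explain the mechanism. For reference, the standard scaled dot-product attention from that Transformer paper, which is exactly what MoBA and NSA sparsify, is:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V
```

Every query is scored against every key, giving the quadratic cost in sequence length that sparse attention methods such as MoBA and NSA are designed to avoid.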