MiniMax Releases Open-Source Hybrid-Architecture Reasoning Model M1; M1 Requires Only About 30% of the Compute of DeepSeek R1
News Flash · 2025-06-17 08:32
Core Insights
- MiniMax, a Shanghai-based AI unicorn, has officially launched the open-source reasoning model MiniMax-M1 ("M1") [1]
- M1 is claimed to be the world's first open-weight, large-scale hybrid-attention reasoning model [1]
- The model combines a Mixture-of-Experts (MoE) architecture with Lightning Attention, achieving significant gains in both performance and inference efficiency (a schematic sketch of this hybrid layout follows this list) [1]
- Benchmark results indicate that the M1 series surpasses most closed-source models in long-context understanding and code-generation productivity scenarios, trailing the top closed-source systems by only a slight margin [1]
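The release describes the architecture only at a high level, so the PyTorch sketch below is an illustration of the general hybrid-attention idea rather than MiniMax's implementation: cheap O(n) linear-attention blocks (standing in for Lightning Attention, which in practice uses a more sophisticated tiled, block-wise algorithm) interleaved with a periodic full softmax-attention block. The class names, the depth and period parameters, and the one-softmax-per-eight-blocks ratio are all illustrative assumptions; the MoE feed-forward layers are omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearAttention(nn.Module):
    """Simplified linear attention: O(n) in sequence length, computed as
    phi(Q) (phi(K)^T V) instead of softmax(Q K^T) V. A stand-in for
    Lightning Attention, not its actual tiled implementation."""
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k = F.elu(q) + 1, F.elu(k) + 1          # positive feature map
        kv = torch.einsum("bnd,bne->bde", k, v)    # (dim x dim) summary, O(n)
        z = 1.0 / (torch.einsum("bnd,bd->bn", q, k.sum(dim=1)) + 1e-6)
        out = torch.einsum("bnd,bde,bn->bne", q, kv, z)
        return self.out(out)

class SoftmaxAttention(nn.Module):
    """Standard softmax attention block: O(n^2) but exact global mixing."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(x, x, x)
        return out

class HybridAttentionStack(nn.Module):
    """Hybrid layout: one softmax block after every `period - 1` linear
    blocks (assumed ratio), each wrapped in a pre-norm residual."""
    def __init__(self, dim, depth=8, period=8):
        super().__init__()
        self.blocks = nn.ModuleList([
            SoftmaxAttention(dim) if (i + 1) % period == 0 else LinearAttention(dim)
            for i in range(depth)
        ])
        self.norms = nn.ModuleList([nn.LayerNorm(dim) for _ in range(depth)])

    def forward(self, x):
        for norm, block in zip(self.norms, self.blocks):
            x = x + block(norm(x))  # pre-norm residual connection
        return x

x = torch.randn(2, 128, 64)                   # (batch, seq_len, dim)
print(HybridAttentionStack(dim=64)(x).shape)  # torch.Size([2, 128, 64])
```

The efficiency intuition behind the reported ~30% compute figure is that the linear blocks never materialize the n x n attention matrix, so most of the stack scales linearly with context length, while the occasional softmax block preserves exact global attention.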