MiniMax Open-Sources Its M1 Model: Million-Token Context Surpasses DeepSeek R1, a Double Win in Performance and Efficiency
AI科技大本营 · 2025-06-17 02:32
Core Insights
- MiniMax has officially open-sourced its latest large language model, MiniMax-M1, marking a significant development in the AI landscape [2][4]
- MiniMax-M1 is billed as the world's first open-weight large-scale hybrid-attention reasoning model, with substantial breakthroughs in both performance and inference efficiency [4][6]

Model Specifications
- MiniMax-M1 has 456 billion total parameters, of which approximately 45.9 billion are activated per token, and supports a maximum context length of 1 million tokens, eight times that of DeepSeek R1 (see the arithmetic sketch after this summary) [7][12]
- When generating 100,000 tokens, the model's computational load (FLOPs) is only 25% of what DeepSeek R1 requires, a significant advantage in long-text processing tasks [7][12]

Training and Efficiency
- MiniMax-M1 was trained with a large-scale reinforcement learning (RL) strategy that optimizes performance across a range of tasks, including mathematical reasoning and software engineering [9][11]
- The complete RL training run finished in three weeks on 512 H800 GPUs at a cost of approximately $534,700, demonstrating high efficiency and cost-effectiveness (a back-of-the-envelope cost check follows below) [11]

Performance Comparison
- MiniMax-M1 ships in two versions, with maximum generation lengths of 40K and 80K tokens (loading either version is sketched below), and outperforms leading open-weight models such as DeepSeek-R1 and Qwen3-235B on complex software engineering, tool-use, and long-context tasks [12][19]
- In benchmark tests, MiniMax-M1 led comparable models across several categories, including long-context understanding and tool use, establishing itself as a strong contender among open-weight models [19]
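
The headline specification numbers are easy to sanity-check. Below is a minimal arithmetic sketch; the 128K-token context window assumed for DeepSeek R1 is not stated in the article and is an assumption here.

```python
# Sanity check of the spec numbers. DeepSeek R1's 128K context window
# is an assumption; the article gives only the "8 times" multiple.
total_params = 456e9        # MiniMax-M1 total parameters
active_params = 45.9e9      # parameters activated per token
m1_context = 1_000_000      # MiniMax-M1 maximum context length (tokens)
r1_context = 128_000        # assumed DeepSeek R1 context length (tokens)

print(f"activation ratio: {active_params / total_params:.1%}")  # ~10.1%
print(f"context multiple: {m1_context / r1_context:.1f}x")      # ~7.8x, i.e. "8 times"
```

The roughly 10% activation ratio is why the per-token compute is far smaller than the 456B total parameter count would suggest.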
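The reported training cost also implies a concrete per-GPU-hour rate. A back-of-the-envelope check, assuming the $534,700 figure covers GPU time alone (the article does not break the figure down):

```python
# Implied GPU rental rate for the RL run; assumes the quoted cost
# covers compute only.
gpus = 512                  # H800 GPUs
hours = 3 * 7 * 24          # "three weeks" = 504 hours
gpu_hours = gpus * hours    # 258,048 GPU-hours
total_cost = 534_700        # USD, as reported

print(f"GPU-hours: {gpu_hours:,}")
print(f"implied rate: ${total_cost / gpu_hours:.2f}/GPU-hour")  # ~$2.07
```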
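Since the weights are open, either version should be loadable with standard open-source tooling. The sketch below uses Hugging Face transformers; the repo id and the need for trust_remote_code are assumptions rather than details from the article, and a 456B-parameter model naturally requires a multi-GPU node to run.

```python
# Minimal loading sketch with Hugging Face transformers.
# The repo id below is an assumption, not taken from the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "MiniMaxAI/MiniMax-M1-80k"  # hypothetical id for the 80K version

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,  # in case the release ships custom model code
    torch_dtype="auto",      # keep the dtype stored in the checkpoint
    device_map="auto",       # shard the 456B parameters across available GPUs
)

prompt = "Explain the advantage of a 1M-token context window in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```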