AI Unicorn MiniMax Releases Hybrid-Architecture Open-Source Model M1: Training Cost Only 3.8 Million Yuan, Long-Text Processing Surpasses GPT-4o
Xin Lang Ke Ji·2025-06-19 00:47

Core Insights
- MiniMax has launched its self-developed MiniMax-M1 series, achieving a significant breakthrough in processing million-token-scale long texts and making it the longest-context reasoning model currently available [1][2]
- The reinforcement learning (RL) training cost came to only $530,000 (approximately 3.8 million yuan), with inference efficiency several times higher than that of competitors [1]
- MiniMax-M1 has been open-sourced and leads the TAU-bench tool-use scenario, outperforming all open-weight models and even the closed-source Gemini-2.5 Pro [1][2]

Pricing Strategy
- MiniMax offers competitive tiered API pricing, with input costs ranging from 0.8 to 2.4 yuan per million tokens and output costs from 8 to 24 yuan per million tokens, depending on the tier [1][2]; a rough cost-estimation sketch appears at the end of this summary
- Pricing for the first two tiers is lower than DeepSeek-R1's, while the third tier covers a context-length range that DeepSeek does not currently serve [2]

Performance Metrics
- The M1 model was tested extensively across 17 mainstream evaluation sets, showing strong performance in software engineering, long-text understanding, and complex productivity scenarios [2]
- In code capability (SWE-bench), the M1-40k and M1-80k versions scored 55.6% and 56.0% respectively, significantly surpassing all other open-source models [2]
- In long-text tasks (MRCR), the M1 series outperformed all open-source competitors and trailed only narrowly behind Google's Gemini 2.5 Pro, ranking second globally [2]

Tool Utilization
- In the TAU-bench tool-calling scenario, the M1-40k model leads all open-source models and even surpasses the closed-source Gemini 2.5 Pro, showcasing its potential as a foundation model for AI agents [3]; a minimal sketch of such a tool-calling loop is shown below
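To make the tool-calling scenario concrete, here is a minimal, illustrative Python sketch of the kind of agent loop that benchmarks such as TAU-bench exercise: the model either answers directly or requests a tool call, the harness executes the tool, and the result is fed back until a final answer is produced. The `mock_model` stand-in, the `lookup_order` tool, and the message shapes are all hypothetical; this is not MiniMax's API or the TAU-bench harness.

```python
import json

# Hypothetical tool registry; the tool name and return value are illustrative only.
TOOLS = {
    "lookup_order": lambda order_id: {"order_id": order_id, "status": "shipped"},
}

def mock_model(messages):
    """Stand-in for a chat model with function calling (not the real M1 API).
    Returns either a tool-call request or a final answer."""
    last = messages[-1]["content"]
    if last.startswith("TOOL_RESULT"):
        return {"type": "answer", "content": f"Order update: {last}"}
    return {"type": "tool_call", "name": "lookup_order", "arguments": {"order_id": "A123"}}

def run_agent(user_query, max_steps=5):
    """Simple harness: alternate model steps and tool executions."""
    messages = [{"role": "user", "content": user_query}]
    for _ in range(max_steps):
        step = mock_model(messages)
        if step["type"] == "answer":
            return step["content"]
        result = TOOLS[step["name"]](**step["arguments"])
        messages.append({"role": "tool", "content": "TOOL_RESULT " + json.dumps(result)})
    return "max steps reached"

print(run_agent("Where is my order A123?"))
```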

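The tiered pricing above reduces to simple per-million-token arithmetic. The sketch below estimates a request's cost in yuan; the article only quotes the price ranges, so the tier boundaries and the mid-tier prices used here are illustrative assumptions within those ranges.

```python
# (max_input_tokens, input_yuan_per_M, output_yuan_per_M); boundaries are assumed.
TIERS = [
    (32_000, 0.8, 8.0),
    (128_000, 1.2, 16.0),    # mid-tier prices assumed within the quoted 0.8-2.4 / 8-24 ranges
    (1_000_000, 2.4, 24.0),
]

def estimate_cost_yuan(input_tokens: int, output_tokens: int) -> float:
    """Pick the tier by input length, then price input and output tokens."""
    for max_in, in_price, out_price in TIERS:
        if input_tokens <= max_in:
            return (input_tokens * in_price + output_tokens * out_price) / 1_000_000
    raise ValueError("input longer than the largest pricing tier")

# Example: summarizing a 200,000-token document into 2,000 output tokens.
print(f"{estimate_cost_yuan(200_000, 2_000):.3f} yuan")
```

Under these assumed tiers, the example request (200,000 input tokens, 2,000 output tokens) would cost roughly 0.53 yuan.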