Hangzhou's Reign Over the Global Open-Source LLM Leaderboard Ends: Shanghai's MiniMax M2 Flooded with Orders at Launch, at Just 8 RMB per Million Tokens
量子位 (QbitAI) · 2025-10-28 01:18

Core Insights

- The open-source model crown has changed hands: Shanghai-based MiniMax's M2 has overtaken the previous leaders, Hangzhou-based DeepSeek and Qwen [1]

Performance and Features

- MiniMax M2 scored 61 on the Artificial Analysis benchmark, ranking it first among open-source models, just behind Claude Sonnet 4.5 [2]
- The model is purpose-built for agents and programming, and shows strong coding and agent performance [4]
- M2 is economical to run: its inference speed is roughly twice that of Claude 3.5 Sonnet, while its API pricing is only about 8% of Claude's [5][9]
- The model has 230 billion total parameters but only 10 billion active parameters per token, which keeps inference fast (see the mixture-of-experts routing sketch at the end of this digest) [9][10]
- It uses an interleaved thinking format, in which reasoning segments are retained across multi-turn dialogues so the model can plan and then verify its operations; the article credits this for much of its agent reasoning (a message-trace sketch appears below) [11]

Comparative Analysis

- In the overall Artificial Analysis ranking, M2 placed fifth, securing the top position among open-source models [14]
- The benchmark aggregates ten popular datasets, including MMLU-Pro and LiveCodeBench [15]
- M2's API is priced at $0.30 per million input tokens and $1.20 per million output tokens, roughly 8% of Claude 3.5 Sonnet's cost (a cost sanity-check appears below) [16]

Agent Capabilities

- MiniMax has deployed M2 on its agent platform for limited free use, showcasing a range of projects already built with the model [32][35]
- The platform lets users build diverse web applications, including browser recreations of classic games [36][38]
- Users have built projects such as an online Go (weiqi) platform, demonstrating M2's programming capabilities [40][43]

Technical Insights

- M2 ultimately ships with full attention: an earlier hybrid design combining full attention with sliding window attention was abandoned over performance concerns (an attention-mask sketch below illustrates the trade-off) [45][46]
- The choice reflects MiniMax's decision to prioritize performance on long-range dependency tasks over the efficiency gains of sparse attention [49][54]
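The 230B-total / 10B-active split described above is characteristic of a mixture-of-experts (MoE) design, in which a router activates only a small subset of expert networks per token. The sketch below illustrates top-k expert routing in general; the expert count, dimensions, and top-k value are invented for illustration and do not reflect MiniMax's published architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Minimal mixture-of-experts layer: only top_k experts run per token,
    so the active parameter count is a small fraction of the total.
    (Sizes below are illustrative, not MiniMax M2's actual configuration.)"""
    def __init__(self, d_model=512, d_ff=2048, n_experts=16, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x):                              # x: (tokens, d_model)
        logits = self.router(x)                        # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1) # pick top_k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for k in range(self.top_k):                    # run only the chosen experts
            for e in range(len(self.experts)):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * self.experts[e](x[mask])
        return out

x = torch.randn(4, 512)
y = TopKMoE()(x)  # 2 of 16 experts run per token, so ~1/8 of expert params are "active"
```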
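On interleaved thinking: the idea is that reasoning segments are kept in the conversation history so later turns can verify earlier plans, rather than being stripped between turns as many chat APIs do. The trace below is a schematic of that idea only; the role names, <think> delimiter, and tool messages are hypothetical and are not MiniMax's documented API schema.

```python
# Schematic multi-turn agent trace with interleaved thinking.
# Key point: earlier <think> segments remain in context on later turns,
# so the model can check its own plan before acting again.
# (Field names and format are illustrative, not MiniMax's wire format.)
history = [
    {"role": "user", "content": "Find the cheapest flight and book it."},
    {"role": "assistant", "content":
        "<think>Plan: 1) search flights, 2) compare prices, 3) book.</think>"},
    {"role": "tool", "name": "search_flights", "content": '{"best": "AA101", "price": 120}'},
    {"role": "assistant", "content":
        "<think>Verify: $120 is under the $200 budget, so step 3 is safe.</think>"},
    {"role": "tool", "name": "book_flight", "content": '{"status": "confirmed"}'},
]

def next_request(history, new_user_msg):
    # Interleaved thinking: forward the FULL history, past <think> blocks
    # included, instead of discarding reasoning between turns.
    return history + [{"role": "user", "content": new_user_msg}]
```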
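The 8% claim can be sanity-checked with a few lines of arithmetic. The Claude 3.5 Sonnet list prices ($3 input / $15 output per million tokens) and the ~7 CNY/USD exchange rate below are assumptions, not figures from the article; under them, the output-token ratio comes out to exactly 8%, and M2's output price lands near the 8 RMB in the headline.

```python
# Sanity-check the pricing claims (Claude prices and FX rate are assumptions).
m2_in, m2_out = 0.30, 1.20           # USD per 1M tokens (from the article)
claude_in, claude_out = 3.00, 15.00  # assumed Claude 3.5 Sonnet list prices
usd_to_cny = 7.0                     # assumed exchange rate

print(f"input ratio:  {m2_in / claude_in:.0%}")    # -> 10%
print(f"output ratio: {m2_out / claude_out:.0%}")  # -> 8%
print(f"M2 output per 1M tokens: ~{m2_out * usd_to_cny:.1f} CNY")  # -> ~8.4 CNY

# Example job: 2M input tokens + 0.5M output tokens
cost = 2 * m2_in + 0.5 * m2_out
print(f"example job cost: ${cost:.2f}")            # -> $1.20
```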
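On the abandoned hybrid design: sliding window attention limits each token to a fixed window of recent tokens, which cuts compute but can sever exactly the long-range dependencies the article says MiniMax was worried about, while full causal attention preserves the entire prefix. A minimal NumPy comparison of the two masks:

```python
import numpy as np

def causal_mask(n):
    # Full attention: token i attends to every token j <= i.
    return np.tril(np.ones((n, n), dtype=bool))

def sliding_window_mask(n, window):
    # Sliding window attention: token i attends only to the last `window`
    # tokens (itself included); anything further back is invisible.
    dist = np.arange(n)[:, None] - np.arange(n)[None, :]
    return causal_mask(n) & (dist < window)

n = 8
print(causal_mask(n).astype(int))
print(sliding_window_mask(n, window=3).astype(int))
# With window=3, token 7 cannot see token 0: a long-range dependency is cut,
# which is the performance concern the article attributes to MiniMax's choice
# of full attention for M2.
```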