Alibaba Releases Qwen3-Max, Surpassing GPT-5 in Performance and Ranking in the Global Top Three
Guan Cha Zhe Wang · 2025-09-24 05:37
Core Insights
- Alibaba's Qwen3-Max model was unveiled at the 2025 Yunqi Conference, showcasing performance that surpasses GPT-5 and Claude Opus 4 and positioning it among the top three globally [1]
- The model comes in two versions, Instruct and Thinking, with the preview version ranking third on the Chatbot Arena leaderboard, indicating strong competitive performance [1]
- Qwen3-Max is the largest and most powerful model in the Tongyi Qianwen family, with a pre-training data volume of 36 trillion tokens and over one trillion parameters, demonstrating exceptional coding and agent tool capabilities [1]

Performance Metrics
- In the SWE-Bench Verified test, the Instruct version scored 69.6, placing it in the global first tier [1]
- In the Tau2-Bench test, which focuses on agent tool capabilities, it achieved a breakthrough score of 74.8, surpassing Claude Opus 4 and DeepSeek-V3.1 [1]
- The Qwen3-Max-Thinking-Heavy version exhibited remarkable performance in mathematical reasoning, achieving perfect scores of 100 in both the AIME 25 and HMMT tests, a first for domestic models [2]

Model Development Insights
- The Scaling Law principle holds that increasing data and parameter scale is a possible pathway to AGI; Qwen3-Max's performance breakthroughs indicate that further increases in data and model parameters can still yield stronger models [4]
- The Tongyi Qianwen series now covers the full size range from 0.5 billion to over one trillion parameters, encompassing more than 300 large models to meet diverse application needs [4]

User Access
- Users can now experience Qwen3-Max for free on the Tongyi Qianwen QwenChat platform and access API services through Alibaba Cloud's Bailian platform [5]
Alibaba Tongyi Releases Qwen3-Max
Qwen3-Max's reasoning-enhanced version, Qwen3-Max-Thinking-Heavy, also delivers remarkable performance. By combining tool calling with parallel inference techniques, its reasoning capability has reached a new high; in the math-focused AIME 25 and HMMT tests it achieved breakthrough perfect scores of 100, a first for domestic models. The Qwen3-Max reasoning model achieves these results because the model knows how to invoke tools when solving math problems, writing code to work through them, and because increasing test-time compute further improves its performance.

The Scaling Law, the principle underlying large-model pre-training, holds that continually growing data and parameter scale is one possible path to AGI. Because the supply of natural data is finite, some researchers currently argue that the pre-training Scaling Law is approaching its ceiling; Qwen3-Max's performance breakthrough shows, however, that continuing to increase data and model parameters can still forge stronger models. The Tongyi Qianwen series now spans the full size range from 0.5B to over one trillion parameters, comprising more than 300 large models to meet the needs of different scenarios.

On September 24, the 2025 Yunqi Conference opened, and Alibaba Tongyi's flagship model Qwen3-Max made a headline debut, surpassing GPT-5, Claude Opus 4, and other models in performance and ranking among the global top three. Qwen3-Max comprises two major versions, Instruct and Thinking; its preview version ...
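The Scaling Law the article refers to can be made concrete with a small sketch. The formula and constants below are the parametric fit from the Chinchilla paper (Hoffmann et al., 2022), not anything disclosed about Qwen3-Max's actual training; they are used here only to illustrate how predicted loss falls as parameters and data grow.

```python
def scaling_law_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss under the Chinchilla-style fit
    L(N, D) = E + A / N**alpha + B / D**beta.

    Constants are the published Chinchilla estimates and are purely
    illustrative; they do not describe Qwen3-Max.
    """
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling both parameters and data by 10x lowers the predicted loss,
# which is the intuition behind "bigger data + bigger model = stronger model".
smaller = scaling_law_loss(1e11, 3.6e12)   # 100B params, 3.6T tokens
larger = scaling_law_loss(1e12, 3.6e13)    # 1T params, 36T tokens
assert larger < smaller
```

The two additive terms also capture the ceiling argument mentioned above: as the data term B / D**beta shrinks toward zero, further gains must come from the remaining terms, which is why finite natural data is seen as a potential limit.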