Latest ScienceQA Leaderboard Released! Scores Improve for New Models from Multiple Companies | xbench Monthly Report

Core Insights

- The latest xbench leaderboard has been released, with six models entering the top 10, including GPT-5-high and Qwen3-235B-A22B-Thinking-2507, and scores improving by 3-5 points [1][9][10]
- The dual-track evaluation system continues to track progress toward AGI; a new question bank for the xbench-DeepSearch set is expected to be released soon [1][2]

Model Performance Summary

- GPT-5-high (OpenAI) raised its average score from 60.8 to 64.4 while keeping its BoN (N=5) score stable [9][12]
- Qwen3-235B-A22B-Thinking-2507 improved its average score from 45.4 to 55, with BoN rising from 66 to 77, a substantial gain [9][35]
- Claude Opus 4.1-Extended Thinking rose from an average of 46.6 to 53.2, with BoN up slightly from 69 to 72 [9]
- Kimi K2 0905 averaged 51.6, balancing model capability against response speed [9][28]
- GLM-4.5 (ZHIPU) scored 48.8 with a BoN of 74; Hunyuan-T1-20250711 scored 44.4 with a BoN of 63 [9]
- Grok-4 improved markedly, scoring 65, which marks it as a state-of-the-art model on this leaderboard [9][10]

Evaluation Insights

- The score distribution shows a narrowing gap among the top performers, with the top five models scoring between 76 and 78 [10]
- Gains across most models are small and incremental, suggesting that advances in model capability may be reaching a plateau [10][12]
- The xbench evaluation mechanism continues to provide real-time updates on model performance, with future rankings expected [2][8]
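The report contrasts each model's average score with its BoN (N=5) score. A common reading of these two metrics, sketched below under the assumption that average means mean accuracy over all sampled attempts and Best-of-N counts a question as solved if any of its N attempts is correct (xbench's exact scoring protocol is not specified in this summary; the helper name and toy data are illustrative):

```python
def average_and_bon(per_question_attempts):
    """Compute mean accuracy and Best-of-N accuracy (both as percentages).

    per_question_attempts: one list of booleans per question, each entry
    marking whether one of the N sampled answers was judged correct.
    (Hypothetical helper; xbench's actual scoring code is not public here.)
    """
    n_questions = len(per_question_attempts)
    # Average score: mean correctness over every individual attempt.
    avg = sum(sum(a) / len(a) for a in per_question_attempts) / n_questions
    # BoN score: a question counts as solved if ANY of its N attempts succeeds.
    bon = sum(any(a) for a in per_question_attempts) / n_questions
    return avg * 100, bon * 100

# Toy example with N=5 attempts per question.
attempts = [
    [True, False, True, False, False],    # 2/5 correct; solved under BoN
    [False, False, False, False, False],  # never correct; unsolved
    [True, True, True, True, True],       # always correct
]
avg, bon = average_and_bon(attempts)  # avg ≈ 46.7, bon ≈ 66.7
```

This illustrates why BoN always sits at or above the average score, as it does for every model in the table: repeated sampling only adds chances to solve a question.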