On the Eve of Its IPO, Zhipu Launches Flagship Model GLM-4.7
Hua Er Jie Jian Wen · 2025-12-23 03:43

Core Insights
- The launch of GLM-4.7 marks a significant iteration in the company's technology product line, enhancing coding capabilities, long-range task planning, and tool collaboration [1][2].

Performance Metrics
- GLM-4.7 achieved competitive results across multiple mainstream benchmarks, ranking first among open-source and domestic models in the Code Arena coding evaluation and surpassing GPT-5.2 [1].
- On the HLE benchmark, GLM-4.7 scored 42.8%, a 41% improvement over the previous version, GLM-4.6, and outperformed GPT-5.1 [3].
- The model set a new open-source record with a score of 87.4 in the τ²-Bench interactive tool-calling evaluation [5].
- GLM-4.7 achieved open-source SOTA scores of 73.8% on SWE-bench-Verified and 84.9% on LiveCodeBench V6, exceeding Claude Sonnet 4.5 [9].

Technological Advancements
- The model introduces "retained thinking" and "turn-level thinking" mechanisms to improve stability and controllability on complex tasks [6][10].
- The "interleaved thinking" approach lets the model reason before each response or tool call, improving adherence to complex instructions and the quality of generated code [6][7].
- GLM-4.7 has a better grasp of UI design standards, producing more aesthetically pleasing web pages and presentations; its 16:9 PPT adaptation rate rose from 52% to 91% [11].

Market Reception
- The launch has drawn significant attention from the global developer community, with feedback highlighting the model's problem-solving capability and high cost-performance ratio [13].
- Users have reported successful builds such as interactive games in the style of "Plants vs. Zombies" and "Fruit Ninja", demonstrating strong task decomposition and technology-stack integration [14].
- GLM-4.7's competitive pricing, significantly below that of Codex or Claude Code, is seen as a challenge to Western AI companies [15].
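The "interleaved thinking" idea mentioned above can be illustrated with a toy agent loop: instead of reasoning once at the start of a turn, the agent inserts a private reasoning step before every response or tool call. This is a minimal, hypothetical sketch of the general pattern only; the function and type names (`run_interleaved`, `ToolCall`) are invented for illustration and do not reflect Zhipu's actual API or GLM-4.7's internals.

```python
# Hedged sketch of an "interleaved thinking" agent loop.
# All names here are illustrative assumptions, not GLM's real interface.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str
    args: dict

def run_interleaved(task: str, tools: dict[str, Callable], plan: list) -> list:
    """Replay a scripted plan, inserting a reasoning step before every action."""
    transcript = []
    for step in plan:
        # Interleaved thinking: reason about the *next* action in context,
        # rather than producing one up-front chain of thought for the turn.
        transcript.append(("think", f"planning next step for: {task}"))
        if isinstance(step, ToolCall):
            result = tools[step.name](**step.args)
            transcript.append(("tool", step.name, result))
        else:
            transcript.append(("respond", step))
    return transcript

# Usage: a toy calculator tool followed by a final answer.
tools = {"add": lambda a, b: a + b}
plan = [ToolCall("add", {"a": 2, "b": 3}), "done"]
log = run_interleaved("sum two numbers", tools, plan)
```

The contrast with a single up-front reasoning pass is that each tool result re-enters the context before the next reasoning step, which is what the article credits for better instruction adherence on multi-step tasks.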
