Zhipu Releases and Open-Sources GLM-4.7-Flash: A Same-Class SOTA Model Offers a New Option for Lightweight Deployment
IPO早知道·2026-01-20 03:19

Core Viewpoint - The article discusses the launch of GLM-4.7-Flash by Zhipu, a new hybrid reasoning model that balances performance and efficiency for lightweight deployment, with 30 billion total parameters and 3 billion active parameters [2].

Group 1
- GLM-4.7-Flash was officially released on January 20 and is now available for free use on Zhipu's open platform BigModel.cn, replacing the previous GLM-4.5-Flash model [2].
- In mainstream benchmark tests such as SWE-bench Verified and τ²-Bench, GLM-4.7-Flash outperformed models like gpt-oss-20b and Qwen3-30B-A3B-Thinking-2507, achieving open-source SOTA scores in its model category [3].