After Topping the Open-Source Rankings with a Coding Model, Zhipu's GLM Team Was Grilled for 3 Hours
量子位 (QbitAI) · 2025-12-24 12:46

Core Viewpoint
- The article discusses Z.ai's release of its new model GLM-4.7, which has surpassed GPT-5.2 on the WebDev leaderboard, a significant milestone for open-source large models [1][2].

Model Performance and Optimization
- The gains in GLM-4.7 come primarily from the post-training phase, in particular supervised fine-tuning (SFT) and reinforcement learning (RL) [8].
- GLM-4.7 was designed with hardware constraints in mind, targeting strong performance on consumer-grade GPUs while retaining reasoning capability close to that of 30-billion-parameter models [9].
- The team built an elaborate pre-training data pipeline, combining multi-source data collection with rigorous cleaning to raise data quality [11].

Model Application Scenarios and Functions
- GLM-4.7 shows marked improvements on programming tasks, with targeted optimizations for mainstream languages such as Python and JavaScript as well as for less common ones [16].
- The model's creative-writing ability has also improved, producing more nuanced and engaging text, and a new "Interleaved Thinking" feature improves decision-making in complex tasks [21].

Technical Methods and Tools
- The Slime framework was introduced to address inefficiency and instability in large-model reinforcement learning, giving developers tools to reproduce strong alignment results [27].
- The team emphasizes transparency in its data collection and processing pipeline, which has earned it respect in the open-source community [28].

Future Commitments and Market Position
- Z.ai has committed to maintaining its open-source ethos even after a potential IPO, recognizing how important the open-source ecosystem is to its growth [46].
- GLM-4.7's competitive pricing has drawn attention, with users noting its affordability relative to models such as Codex and Claude Code [47].
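The multi-source collection and cleaning pipeline is mentioned but not detailed in the article. As a rough illustration only, the sketch below shows the general shape of such a pass: exact deduplication plus simple quality heuristics. The thresholds `min_len` and `max_symbol_ratio` are invented for this example and are not Z.ai's actual pipeline or values (production pipelines typically add fuzzy dedup, language ID, and model-based quality scoring).

```python
import hashlib

def clean_corpus(docs, min_len=50, max_symbol_ratio=0.3):
    """Deduplicate and quality-filter raw documents from multiple sources.

    Illustrative sketch only; thresholds and heuristics are hypothetical,
    not Z.ai's actual pre-training pipeline.
    """
    seen = set()
    kept = []
    for text in docs:
        text = text.strip()
        # Exact dedup via content hash (real pipelines also use fuzzy
        # methods such as MinHash to catch near-duplicates).
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
        if digest in seen:
            continue
        seen.add(digest)
        # Heuristic quality filters: minimum length and symbol density.
        if len(text) < min_len:
            continue
        symbols = sum(1 for c in text if not (c.isalnum() or c.isspace()))
        if symbols / len(text) > max_symbol_ratio:
            continue
        kept.append(text)
    return kept
```

Each filter here is cheap and order-independent, which is why dedup-then-filter stages like this are usually run before any expensive model-based scoring.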
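"Interleaved Thinking" is named but not specified in the article. The sketch below shows one plausible reading of the idea: a control loop that alternates short reasoning segments with tool actions, rather than producing a single monolithic think-then-answer pass. All names here (`interleaved_agent`, `think`, `act`) are hypothetical and are not part of any real GLM API.

```python
def interleaved_agent(task, think, act, max_steps=5):
    """Alternate short reasoning steps with tool actions.

    Hypothetical control loop illustrating interleaved thinking;
    `think` and `act` are caller-supplied stand-ins, not Z.ai's
    actual implementation.
    """
    trace = []
    observation = task
    for _ in range(max_steps):
        thought = think(observation)          # short reasoning segment
        trace.append(("think", thought))
        if thought["done"]:                   # model decides it can answer
            break
        observation = act(thought["action"])  # run a tool, feed result back
        trace.append(("act", observation))
    return trace

# Toy stand-ins: one lookup step, then finish.
def toy_think(obs):
    finished = obs != "What is 2+2?"
    return {"action": "calc(2+2)", "done": finished}

def toy_act(action):
    return "4"

trace = interleaved_agent("What is 2+2?", toy_think, toy_act)
```

The benefit this structure is meant to capture is that each reasoning segment can condition on fresh tool output, which is where the article's claim about better decision-making in complex tasks would come from.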