Demand Too Hot! Zhipu AI "Rations" Purchases Amid Compute Crunch: GLM Coding Plan Daily Sales Capped at 20%, Existing Users Prioritized

Core Viewpoint

- Surging user demand for the newly released GLM-4.7 language model has created significant computational bottlenecks for Zhipu AI, prompting the company to adopt emergency throttling measures that prioritize the experience of existing users [1][2].

Company Summary

- Zhipu AI announced that, starting January 23, it will cut the daily cap on new subscriptions to its programming assistant service, the "GLM Coding Plan," to 20% of previous levels, ensuring that existing users' access takes priority [1][2].
- Because of the surge in user numbers, the service has suffered frequent throttling errors and long response delays during peak hours; the company attributes this to resource strain brought on by rapid growth [1][2].
- The GLM Coding Plan is positioned as a competitor to Claude, placing the company in direct competition with leading firms such as OpenAI and Anthropic [1][2].

Industry Summary

- Throttling in response to user surges has become common in the fast-growing AI industry; DeepSeek previously limited API access for the same reason, citing server resource constraints [3].
- Such measures highlight the temporary mismatch between explosive growth in demand for AI applications and the slower buildout of underlying computational infrastructure [3].
- The compute bottleneck reflects strong end-user demand while exposing the operational challenges AI companies face in moving from technological breakthroughs to stable service delivery [3].
