Shining from day one: "new-guard Hang Seng" stocks tipped for inclusion in the index, with some public funds positioned early
Feng Huang Wang· 2026-02-22 13:25
Alibaba, Tencent, Meituan and other names were summed up as the "old-guard Hang Seng" constituents suggested for removal. Meanwhile, the Hang Seng Biotech Index closed up 0.96% against the broader market that day, and oil stocks also strengthened.

Notably, while the ETFs that drew the largest pre-holiday inflows, such as Hong Kong internet-theme ETFs and Hang Seng Tech ETFs, did not rally, the strongest individual stocks do show public-fund positioning: funds entered the newly listed names through IPO subscriptions, and the strongest sectors had already seen active equity funds adding positions beforehand.

On February 22, the first trading day after the holiday, the Hang Seng Index and Hang Seng Tech did not close higher, but the rally in AI concept stocks arrived as expected.

On February 20, Hong Kong's three major indexes all closed lower: Hang Seng Tech fell 2.91%, the Hang Seng Index fell 1.10%, and the Hang Seng China Enterprises Index fell 1.22%. Even as Hang Seng Tech kept sliding, newly listed AI concept stocks in Hong Kong rose, and netizens floated suggestions to swap out some Hang Seng Tech constituents.

Constituents such as Xiaomi, SMIC, BYD, Tencent, and Kuaishou were trimmed by active equity funds in the fourth quarter of last year, though Meituan saw active equity funds add more than 17 million shares over the same period.

Oil and innovative-drug sectors strengthen; Hong Kong AI concept stocks surge

As of the February 20 close, Zhipu extended its strong run, finishing up 42.72% at 725 HKD; MiniMax likewise closed up 14.52% at 970 HKD. Both AI leaders' intraday market capitalizations topped 300 billion HKD, compared with their initial listing ...
Products raising prices, share prices soaring: China's AI large-model leaders have "exploded"!
Mei Ri Jing Ji Xin Wen· 2026-02-22 13:05
Core Insights
- The two AI companies, Zhipu and MiniMax, have seen significant stock price gains, with Zhipu rising over 42% on the first trading day of the Year of the Horse and MiniMax gaining over 14% the same day, lifting both market capitalizations above 300 billion HKD [1][2][7]
- Both companies have risen substantially since their IPOs, with Zhipu's stock up 523% and MiniMax's up 487.88% within a month of listing [1][7]
- Despite their high valuations, both companies are currently operating at a loss, with price-to-sales (PS) ratios exceeding 700, far above OpenAI's 65 [2][17]

Company Performance
- Zhipu's stock closed at 725 HKD per share, with a market capitalization surpassing 323.2 billion HKD, while MiniMax closed at 970 HKD per share, reaching a market cap of 304.2 billion HKD [2][11]
- Zhipu's cumulative losses from 2022 to mid-2025 amount to approximately 6.238 billion CNY, while MiniMax reported a net loss of 512 million USD (around 3.605 billion CNY) for the first nine months of 2025 [17][20]

Market Position
- Zhipu's and MiniMax's market capitalizations have surpassed those of major companies like Ctrip and Kuaishou, and are approaching the valuations of Pop Mart and Baidu [2][11]
- The market's enthusiasm for these companies is attributed to their technological advances and product breakthroughs, particularly in AI model development [12][19]

Technological Advancements
- Zhipu launched its flagship model GLM-5, which shows over a 20% performance improvement against its predecessor, while MiniMax introduced M2.5, designed for full-stack programming development [12][13]
- Both models have posted significant results on industry benchmarks, with MiniMax M2.5 the most-called model in a recent week, reaching 3.07 trillion tokens [14][16]

Pricing Strategy
- Following the launch of GLM-5, Zhipu raised the prices of its coding plans by 30% in China and over 100% internationally, indicating strong demand for its services [12][19]
- MiniMax's per-token pricing is significantly lower than that of competitors like Claude Opus 4.6, making it an attractive option for users [14][15]

Future Outlook
- The market is betting on the future potential of these companies as demand for AI capabilities continues to grow, particularly in areas requiring high token consumption for complex tasks [19][20]
- Analysts suggest that AI's transition from simple tasks to more complex operations will drive up token consumption, positioning these companies favorably in the evolving AI landscape [19]
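The price-to-sales comparison above is simple arithmetic: market capitalization divided by trailing revenue. A minimal sketch of how a 700x multiple arises; the revenue figure below is a hypothetical illustration, not a disclosed number:

```python
def price_to_sales(market_cap: float, trailing_revenue: float) -> float:
    """Price-to-sales (PS) ratio: market capitalization / trailing revenue."""
    if trailing_revenue <= 0:
        raise ValueError("trailing revenue must be positive")
    return market_cap / trailing_revenue

# Illustrative only: a company valued at 323.2 billion HKD with roughly
# 0.45 billion HKD of trailing revenue would trade above 700x sales.
ps = price_to_sales(323.2e9, 0.45e9)
print(round(ps, 1))  # → 718.2
```

The same arithmetic explains why a profitable incumbent at 65x sales looks cheap by comparison: the multiple compresses as revenue catches up to the valuation.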
AI Evolution Express | Zhipu releases GLM-5 technical report
Di Yi Cai Jing· 2026-02-22 13:05
Group 1
- The company has secured orders for its connectivity business through the fourth quarter of 2026, indicating strong demand and a robust order backlog [1]
- The AI high-speed optical module production line is running at full capacity 24 hours a day, reflecting the company's commitment to meeting market needs [1]

Group 2
- Alibaba Cloud's Coding Plan supports various models including Qwen 3.5, GLM-4.7, and Kimi-K2.5, showcasing advances in AI technology [1]
- Zhipu released a technical report on GLM-5 with full disclosure of technical details, which may enhance transparency and trust in its AI solutions [1]
Just in! AI highflier makes a blockbuster release: four major innovations arrive, with multiple domestic chips "joining the lineup"
Xin Lang Cai Jing· 2026-02-22 12:19
Core Insights
- The company, Zhipu, has officially released the GLM-5 technical report, claiming significant performance improvements from four major technological innovations [1][11]
- GLM-5 has demonstrated unprecedented capability on real-world programming tasks, surpassing all previous open-source baselines in handling end-to-end software engineering challenges [1][4][14]
- Following the release of GLM-5, Zhipu's stock price surged 42.72% to 725 HKD per share, for a total market capitalization of 323.2 billion HKD and a cumulative gain of over 500% since listing [1][11]

Technological Innovations
- First, the DeepSeek Sparse Attention (DSA) mechanism significantly reduces training and inference costs while maintaining long-context understanding and reasoning depth, allowing the model to expand to 744 billion parameters and 28.5 trillion training tokens [5][15]
- Second, a new asynchronous reinforcement learning (RL) infrastructure decouples the generation and training processes, greatly improving post-training iteration efficiency [6][14]
- Third, a new asynchronous Agent RL algorithm improves the model's autonomous decision-making quality, enabling it to learn continuously from diverse long-horizon interactions [6][16]
- Fourth, GLM-5 fully embraces the domestic computing ecosystem and is natively compatible with seven major domestic chip platforms [6][16]

Market Response and Pricing Strategy
- Citing overwhelming demand, Zhipu announced a price increase for the GLM Coding Plan subscription service, 30% in China and over 100% for the overseas version, making it the first domestic AI company to raise prices for large-model commercialization services [1][9][19]
- The GLM Coding Plan subscription service, designed for AI programming scenarios, sold out immediately upon launch, a rare demand surge in the industry [8][18]
- Zhipu has acknowledged user-experience issues caused by high demand and has implemented a phased rollout of GLM-5 access across subscription tiers [8][18]
"The world's first large-model stock" Zhipu apologizes!
Guang Zhou Ri Bao· 2026-02-22 12:09
Core Viewpoint
- The AI company Zhipu saw its stock surge over 40% on February 20, but subsequently issued an apology over operational issues following the release of its new model, GLM-5 [2]

Group 1: Apology and Issues
- Zhipu acknowledged three key mistakes in its apology letter regarding the GLM-5 release: insufficient transparency in usage rules, a slow expansion pace, and a rough upgrade mechanism for existing users [6][7]
- The company implemented a tiered usage strategy to manage increased computational demand, raising peak consumption limits threefold and off-peak limits twofold, but failed to communicate this clearly to users, leading to widespread complaints [6]
- Demand for GLM-5 exceeded expectations, delaying service expansion and limiting access for Pro and Lite users; Max users were fully opened but face potential throttling at peak times [6]

Group 2: User Compensation and Experience
- To address user dissatisfaction, Zhipu offered compensation, including a 15-day extension for users already on GLM-5 and the option for Lite and Pro users to apply for refunds [6]
- The GLM Coding Plan, a subscription service for AI programming, saw a surge in demand due to GLM-5's strong performance, which approached state-of-the-art coding capability [8]

Group 3: Industry Context and Future Plans
- The incident reflects a broader industry pattern: companies such as Anthropic have faced similar service-capacity challenges following model releases [8]
- In response to the demand surge, Zhipu launched a "Computing Partner" recruitment plan to expand computational capacity, seeking partnerships with chip manufacturers and inference service providers [9][11]
- The company has already collaborated with several domestic chip platforms to optimize GLM-5 for high throughput and low latency on local computing clusters [11]
A 300-billion-HKD AI giant apologizes!
Sou Hu Cai Jing· 2026-02-22 11:48
Core Insights
- The company, known as the "first stock of global large models," issued an apology regarding the GLM Coding Plan and announced a compensation scheme [1]
- The company identified three main issues with the recent update: insufficient rule transparency, a slow rollout of GLM-5, and poorly designed upgrade mechanisms for existing users [2]
- Following the launch of GLM-5, user demand exceeded expectations, prompting the company to add computational resources and optimize models to maintain service quality [8]

Group 1
- GLM-5 was launched on February 12, matching Claude Opus 4.5 and achieving state-of-the-art (SOTA) scores on mainstream benchmark tests [7]
- GLM-5 has been recognized across assessments, achieving top performance in BrowseComp, MCP-Atlas, and τ²-Bench [7]
- The company has completed deep adaptation of GLM-5 for inference with several domestic computing platforms, ensuring high throughput and low latency on domestic chip clusters [7]

Group 2
- The company has become the first AI-native enterprise in China to raise prices for large-model commercialization services [7]
- Founded in 2019 out of the Knowledge Engineering Laboratory at Tsinghua University, the company has comprehensive self-research capabilities from algorithms to hardware adaptation [8]
- The company's vision is to enable machines to think like humans, aiming for artificial general intelligence (AGI) and focusing on exploring the limits of artificial intelligence [8]
Zhipu releases GLM-5 technical details: engineering-grade intelligence, adapted to domestic compute
Hua Er Jie Jian Wen· 2026-02-22 11:20
Core Insights
- The release of GLM-5 marks a significant advance in AI model capability, shifting the focus from sheer parameter count to system engineering capability [2][15]
- GLM-5 demonstrates the ability to perform complex tasks, improve training efficiency, and fully adapt to domestic chip architectures, indicating a move toward an independent technological ecosystem in China [2][14]

Group 1: Model Capabilities
- GLM-5 can handle complex tasks beyond simple code generation, showcasing "engineering-level intelligence" [4][5]
- The model supports a context length of 200K tokens, enabling it to manage long-horizon planning and multi-round interactions effectively [4][6]
- The introduction of DSA (DeepSeek Sparse Attention) reduces computational complexity by 1.5-2x without loss of performance, allowing more efficient processing [6][7][9]

Group 2: Training and Efficiency Innovations
- GLM-5 features a restructured reinforcement learning (RL) architecture that decouples model generation from training, significantly raising throughput [13]
- Training efficiency is optimized through asynchronous RL algorithms, enabling stable learning in complex environments [13]
- The overall design emphasizes efficiency innovation over raw compute, which is crucial for the Chinese AI landscape [10]

Group 3: Hardware Adaptation
- GLM-5 is natively compatible with various domestic GPU ecosystems, including Huawei Ascend and others, marking a shift toward system-level adaptation rather than reliance on foreign hardware [14]
- The model's performance on a single domestic computing node is comparable to a cluster of two international GPUs, with deployment costs cut by 50% in long-sequence scenarios [14]

Group 4: Comprehensive AI Engineering
- The development of GLM-5 represents a complete closed loop integrating model architecture innovation, training efficiency optimization, and deep adaptation to domestic chips [15]
- This signifies a transition for Chinese AI from application-level advantages to full-stack optimization across architecture, algorithms, training systems, and inference frameworks [15][18]
- The report reflects a mature approach to AI development, focusing on practical engineering metrics rather than competitive benchmarking [18]
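The "sparse attention" idea referenced in these reports can be illustrated with a generic top-k variant: each query still scores every key, but only the highest-scoring subset participates in the softmax, so the expensive weighted sum touches far fewer values. This is a toy sketch of the general technique only; the actual DSA mechanism in GLM-5 differs in its details, which are documented in the technical report:

```python
import numpy as np

def topk_sparse_attention(q, k, v, keep: int):
    """Toy top-k sparse attention: each query attends only to its `keep`
    highest-scoring keys and ignores the rest (a generic illustration,
    not Zhipu's actual DSA implementation)."""
    scores = q @ k.T / np.sqrt(q.shape[-1])            # (n_q, n_k) scaled dot products
    # Per row, find the keep-th largest score and mask everything below it.
    kth = np.sort(scores, axis=-1)[:, -keep][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)  # -inf → zero weight after softmax
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))    # 4 queries, head dim 8
k = rng.normal(size=(16, 8))   # 16 keys
v = rng.normal(size=(16, 8))
out = topk_sparse_attention(q, k, v, keep=4)
print(out.shape)  # → (4, 8)
```

With `keep` fixed, the softmax and value aggregation cost grows with the kept subset rather than the full sequence length, which is the intuition behind the cost reductions the reports describe.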
New model too popular? Zhipu issues apology letter and announces compensation plan
Nan Fang Du Shi Bao· 2026-02-22 10:49
Core Viewpoint
- The company, Zhipu AI, acknowledged issues following the release of its GLM-5 model, including insufficient transparency, a slow rollout, and poor upgrade mechanisms for existing users, and proposed compensation measures [2][4]

Group 1: Company Performance and Market Reaction
- Zhipu AI's stock price surged 42.72% to 725 HKD, with a market capitalization exceeding 320 billion HKD as of February 20 [2]
- Annual recurring revenue (ARR) for the GLM Coding Plan has surpassed 100 million RMB, approximately 14 million USD [3]

Group 2: Product Launch and User Experience
- The GLM-5 model was launched on February 12, achieving state-of-the-art performance in coding and agent capabilities, leading to overwhelming demand and service delays [2][3]
- Users reported higher consumption rates with GLM-5, which has more than twice the parameters of GLM-4.7 and is designed for complex tasks [4]

Group 3: Response to User Feedback
- The company issued an apology to developers, acknowledging that demand for GLM-5 exceeded expectations and caused service disruptions and delays [4][9]
- A refund policy was established for affected Lite and Pro users, allowing them to apply for refunds under a "Zhipu covers all costs" principle for the period from January 1, 2026, to February 21, 2026 [5][10]

Group 4: Future Plans and Partnerships
- To address the demand-supply gap, Zhipu AI initiated a "Computing Power Partner" recruitment plan, inviting chip manufacturers and service providers to collaborate on optimizing GLM-5's performance [3][9]
- The company plans to improve user experience by optimizing infrastructure and combating malicious resource usage by gray-market actors [12][16]
Zhipu issues GLM Coding Plan apology letter and announces handling and compensation plan
Zhong Guo Ji Jin Bao· 2026-02-22 10:49
Core Viewpoint
- The company issued an apology regarding the GLM Coding Plan, acknowledging three main mistakes: insufficient rule transparency, a slow rollout of GLM-5, and poorly designed upgrade mechanisms for existing users [1]

Group 1: GLM-5 Release and User Impact
- Following the release of GLM-5, user traffic exceeded expectations and outpaced the company's capacity expansion, resulting in a staggered rollout of GLM-5 across the Max, Pro, and Lite tiers [3]
- Max users have been fully opened, Pro users may experience throttling at peak times due to high cluster load, and Lite users will be opened gradually after the holiday during off-peak times [3]

Group 2: Compensation and Future Commitments
- The company supports affected Lite and Pro users in applying for refunds; users who do not opt for refunds, including Max users, will receive a 15-day extension [3]
- The company emphasized that future adjustments affecting core user rights will be communicated in advance, with a "choice period" so users are not forced to accept changes passively [3]

Group 3: GLM-5 Performance
- GLM-5, launched on February 12, matches Claude Opus 4.5 in programming capability and has achieved state-of-the-art (SOTA) scores on mainstream benchmark tests [3]
- The model has demonstrated SOTA agent capability, ranking first on several evaluation benchmarks, including BrowseComp, MCP-Atlas, and τ²-Bench [3]
Zhipu releases GLM-5 technical report, with full disclosure of technical details
Mei Ri Jing Ji Xin Wen· 2026-02-22 10:30
Core Insights
- The article discusses the launch of GLM-5, a next-generation foundational model aimed at shifting programming paradigms from "Vibe Coding" to "Agentic Engineering" [1]

Group 1: Model Innovations
- GLM-5 builds on the intelligence, reasoning, and programming capabilities of its predecessor, GLM-4.5, while employing sparse attention to significantly reduce inference costs [1]
- The model maintains long-context capability without loss, enhancing overall performance [1]

Group 2: Learning Infrastructure
- A new asynchronous reinforcement learning infrastructure better aligns the model with diverse tasks, decoupling the generation process from the training process to greatly improve post-training iteration efficiency [1]
- A novel asynchronous Agent reinforcement learning algorithm further enhances RL effectiveness, allowing the model to learn more effectively from complex, long-range interactions [1]

Group 3: Performance Metrics
- GLM-5 achieves state-of-the-art (SOTA) performance on mainstream open benchmarks [1]
- The model demonstrates unprecedented capability on real-world programming tasks, surpassing all previous open-source baselines on end-to-end software engineering challenges [1]
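The decoupling these reports describe, with rollout generation and training running independently rather than in lockstep, can be sketched as a producer/consumer pair connected by a queue: the generator keeps producing rollouts without waiting for gradient steps, and the learner consumes them as they arrive. All names and the toy "update" below are illustrative assumptions, not Zhipu's actual infrastructure:

```python
import queue
import random
import threading

# Bounded buffer between the (asynchronous) rollout generator and the learner.
rollouts = queue.Queue(maxsize=8)

def generator(n_rollouts: int) -> None:
    """Produce rollouts from a (possibly stale) policy snapshot,
    independent of the learner's training cadence."""
    for i in range(n_rollouts):
        rollouts.put({"id": i, "reward": random.random()})
    rollouts.put(None)  # sentinel: generation finished

def learner() -> int:
    """Consume rollouts as they arrive and run one update per rollout."""
    steps = 0
    while True:
        batch = rollouts.get()
        if batch is None:
            break
        steps += 1  # placeholder for an actual policy-gradient update
    return steps

t = threading.Thread(target=generator, args=(32,))
t.start()
steps = learner()
t.join()
print(steps)  # → 32
```

Because the queue absorbs timing differences between the two sides, neither process blocks on the other except when the buffer is full or empty, which is the throughput benefit the asynchronous design targets.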