Scaling Law 2.0
Google's Gemini AI model sales surge; the low-fee Huaxia ChiNext AI ETF (159381) attracts over 400 million yuan in net inflows for five consecutive days
Mei Ri Jing Ji Xin Wen · 2026-01-20 03:09
Group 1
- The core viewpoint of the articles highlights an ongoing adjustment in the AI sector: AI-related ETFs and stocks have declined notably, while funding flowing into AI investments has risen significantly over the past five days [1][2]
- The AI industry is at a pivotal point of capability leaps and accelerating commercialization, with leading firms in the US and China dominating the global large-model landscape and a clear trend of differentiation emerging in model architecture and optimization [2]
- Demand for computing power is undergoing profound change as inference paradigms are comprehensively upgraded, with high-value scenarios such as continuous inference and multi-modal generation becoming core sources of future growth [2]

Group 2
- The Huaxia ChiNext AI ETF (159381) is designed to support investment in AI-focused companies on the ChiNext board, with half of its weight in AI hardware computing power and the other half in AI software applications, offering high elasticity and representativeness [3]
- The Huaxia Cloud Computing ETF (516630) tracks an index focused on domestic AI software and hardware computing power, with a combined weight of 83.7% in computer software, cloud services, and computing devices, indicating strong alignment with AI applications [3]
- The Huaxia Communication ETF (515050) tracks the 5G communication theme index, emphasizing electronic and communication computing hardware, with major holdings in companies such as Zhongji Xuchuang and Liyuan Precision [3]
2026 Investment Summit Express: A New Paradigm for the AI Industry
HTSC · 2025-11-10 12:07
Investment Rating
- The report maintains an "Overweight" rating for the technology and computer sectors [7].

Core Insights
- The AI industry is entering a new paradigm characterized by Scaling Law 2.0, in which synthetic data raises the training-data ceiling and the Mid Training paradigm reshapes model evolution paths [2][3].
- Commercial application of AI is transitioning into a scaling phase, with the integration of agent capabilities and transaction loops accelerating industry implementation [2][6].

Summary by Sections

Models
- Computing-power expansion remains the core growth engine: training compute for representative models grew at an annual rate of roughly 4-5x from 2010 to 2024, with leading models reaching up to 9x per year [3][13].
- The cost of fully training a frontier model is projected to reach the billion-dollar level by 2027 [3][13].

Training
- The Mid Training paradigm expands training boundaries by integrating reinforcement learning (RL) into the middle stage, improving data generation and optimal allocation [4][16].
- This approach significantly increases data-utilization efficiency and is expected to break traditional performance limits [4][16].

Agents
- GPT-5 establishes a "unified system" direction, promoting standardization of agent architecture through adaptive collaboration between fast and deep thinking [5][19].
- A real-time router dynamically allocates computing resources based on task complexity, improving response efficiency and stability in complex scenarios [5][19].

Applications
- The integration of agent capabilities into commercial transactions marks a new phase of AI applications, with OpenAI's Agentic Commerce Protocol enabling AI agents to execute purchases directly [6][22].
- The global AI application landscape is evolving through three stages: productization in 2023, commercialization trials in 2024, and scaling implementation in 2025 [25][26].
- Domestic AI applications are accelerating, with significant advancements in commercial capabilities following the release of models like DeepSeek-R1 [26].
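The Models section above quotes annual training-compute multipliers (4-5x for representative models, up to 9x for leading ones). A minimal sketch of the compound-growth arithmetic behind those figures, assuming simple year-over-year compounding over the 2010-2024 window; the multipliers are taken from the report summary and are illustrative, not measured data:

```python
# Compound-growth sketch for the report's training-compute figures.
# Assumption: a constant annual multiplier compounded over the period.

def total_growth(annual_multiplier: float, years: int) -> float:
    """Total growth factor after compounding an annual multiplier."""
    return annual_multiplier ** years

# 2010 -> 2024 spans 14 years of compounding.
years = 2024 - 2010

for mult in (4, 5, 9):
    factor = total_growth(mult, years)
    print(f"{mult}x/year over {years} years -> ~{factor:.2e} total growth")
```

Even the conservative 4x/year figure compounds to roughly eight orders of magnitude over the period, which is why the summary treats computing-power expansion as the core growth engine.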