GPNPU (General Purpose Neural Processing Unit) Architecture
Intellifusion Chairman Chen Ning: Driving a Major Leap in Compute Efficiency with the GPNPU Architecture
Zhong Zheng Wang (China Securities Journal) · 2025-12-04 10:37
Core Insights

- The future of AI is shifting from a competition over "being smarter" to a systemic competition over "being more efficient, safer, and more inclusive" [1]
- Intellusion's parent company in the headline, Intellifusion (云天励飞), plans to launch a General Purpose Neural Processing Unit (GPNPU) architecture aimed at sharply reducing the cost and improving the efficiency of AI generation [2]

Group 1: AI Development and Efficiency

- Intellifusion's GPNPU architecture will optimize matrix/vector units, the storage hierarchy, and bandwidth utilization, targeting a reduction in the cost of generating 1 million tokens from roughly $1 to $0.01, a hundredfold efficiency improvement [2]
- Geoffrey Hinton emphasizes that new computational forms are needed to address mounting pressure on energy consumption and efficiency, suggesting that brain-like chips and organoid-based computing could offer advantages in power consumption and communication capability [1][2]

Group 2: Market Predictions and Industry Standards

- By 2030, the global AI chip industry is expected to reach approximately $5 trillion, with training chips accounting for about $1 trillion and inference/processing chips projected to reach $4 trillion, roughly 80% of the market [3]
- Intellifusion has proposed establishing unified standards for AI processing chips and inference networks so that capabilities can be shared across countries and regions, particularly in critical sectors such as healthcare and education, with the goal of "AI for All" [3]
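The two headline numbers above, the "hundredfold" efficiency claim and the 80% inference share, can be sanity-checked with straightforward arithmetic. The dollar figures come from the article; the variable names below are purely illustrative:

```python
# Figures cited in the article; the ratios are derived here as a sanity check.
current_cost_per_1m_tokens = 1.00   # ~$1 to generate 1 million tokens today
target_cost_per_1m_tokens = 0.01    # GPNPU target price per 1 million tokens

# Cost ratio behind the "hundredfold efficiency improvement" claim.
improvement = current_cost_per_1m_tokens / target_cost_per_1m_tokens
print(f"improvement: {improvement:.0f}x")          # improvement: 100x

# Market-share check for the 2030 projection (figures in trillions of USD).
training_chips = 1.0
inference_chips = 4.0
total_market = training_chips + inference_chips     # $5 trillion overall
inference_share = inference_chips / total_market
print(f"inference share: {inference_share:.0%}")   # inference share: 80%
```

Both derived values match the article's stated figures, so the summary's numbers are internally consistent.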