Core Insights
- Alibaba has launched the "Zhenwu 810E" high-end AI chip, marking the debut of its self-developed PPU, a component of the "Tongyun Ge" AI supercomputer initiative [1][3]
- The "Tongyun Ge" initiative combines Alibaba's self-developed chips, leading cloud services, and advanced open-source models to achieve high efficiency in AI model training and deployment [1]
- The "Zhenwu" PPU has been deployed across multiple Alibaba Cloud clusters, serving over 400 clients including major organizations such as State Grid and Xpeng Motors [1][3]

Group 1
- The "Zhenwu" PPU features a self-developed parallel computing architecture with 96 GB of HBM2e memory and an inter-chip bandwidth of 700 GB/s, making it suitable for AI training, inference, and autonomous driving [3]
- The "Zhenwu" PPU's performance surpasses that of NVIDIA's A800 and is comparable to the H20, with an upgraded version reportedly outperforming the A100 [3]
- The successful launch of the "Zhenwu" PPU reflects years of strategic investment and vertical integration by Alibaba in the chip sector, culminating in a comprehensive AI stack [3]

Group 2
- The Tongyi Laboratory has released the Qwen3-Max-Thinking flagship inference model, achieving multiple global records and performance levels comparable to GPT-5.2 and Gemini 3 Pro [4]
- The number of derivative models built on the Qwen open-source model has exceeded 200,000, with downloads surpassing 1 billion, maintaining Qwen's position as the largest open-source model family in the world [5]
Alibaba's self-developed AI chip "Zhenwu" debuts as the "Tongyun Ge" golden triangle comes into view