At this year's World Artificial Intelligence Conference, Wuwen Xinqiong (Infinigence AI) unveiled its "three boxes"
Securities Times Network (Zheng Quan Shi Bao Wang) · 2025-07-28 06:55

Core Insights

- The article covers the launch of three core products by Wuwen Xinqiong (Infinigence AI) at the 2025 World Artificial Intelligence Conference, aimed at improving AI computing efficiency and resource utilization [1][2][3]

Product Overview

- Wuwen Xinqiong introduced three main products, Wuqiong AI Cloud, the Wujie Intelligent Computing Platform, and Wuyin Terminal Intelligence, positioned together as a comprehensive solution for future intelligent infrastructure [1][2]
- Wuqiong AI Cloud is a systematic solution for operating large-scale computing clusters, integrating heterogeneous computing resources across regions [3]
- The Wujie Intelligent Computing Platform has already been deployed in more than 100 large-scale research scenarios, supporting major model training and inference workloads [3]
- Wuyin Terminal Intelligence focuses on an integrated solution for smart terminals, optimizing on-device computational resources and performance [4][5]

Technological Advancements

- Wuqiong AI Cloud has built a nationwide wide-area computing network covering key nodes of the "East Data West Computing" national strategy, with total computing power exceeding 25,000 Peta [3]
- The Infini-Megrez2.0 model, developed in collaboration with the Shanghai Chuangzhi Institute, achieves cloud-level performance with 21 billion parameters while keeping its memory usage at the level of a 7-billion-parameter model [5]
- The Mizar2.0 inference engine, launched alongside Infini-Megrez2.0, raises inference speed while reducing memory and power consumption, reportedly delivering an 18% increase in intelligence level and more than a 100% improvement in inference performance [6]

Market Impact

- Wuwen Xinqiong aims to resolve the contradiction between limited resources and effectively unlimited demand by improving intelligence efficiency and expanding the pool of usable computational resources [2]
- The company emphasizes that resource optimization drives the evolution of intelligent efficiency, enabling AI applications to be deployed across diverse computing scenarios [2][3]