SuperX Launches Flagship AI Server with 50% Gains in Compute and Memory Capacity
Zheng Quan Shi Bao Wang·2025-10-08 09:54

Core Insights
- SuperX has launched its latest flagship product, the XN9160-B300 AI server, designed to meet the growing demand for scalable, high-performance computing in AI training, machine learning, and high-performance computing environments [1][2]

Group 1: Product Features
- The XN9160-B300 AI server is equipped with 8 NVIDIA Blackwell B300 GPUs based on the new Blackwell Ultra architecture, delivering a 50% increase in NVFP4 compute and a 50% increase in HBM capacity over the previous Blackwell generation [1][2]
- The server supports building and operating trillion-parameter foundation models and can perform exascale scientific computations [1]

Group 2: Performance Optimization
- The server is optimized for GPU-intensive workloads, excelling at foundation model training and inference, including reinforcement learning, distillation, and multimodal AI models [2]
- It offers a significant step up in memory capacity, providing 2304GB of unified HBM3E memory (288GB per GPU across 8 GPUs), which is crucial for holding large models and sustaining high concurrency in generative AI and large language model workloads [2] (a back-of-envelope capacity sketch follows below)

Group 3: Scalability and Connectivity
- The XN9160-B300 AI server can scale out to AI-factory-level workloads through 8 InfiniBand 800Gb/s OSFP ports or dual 400Gb/s Ethernet connections [2]
- Fifth-generation NVLink interconnect technology provides seamless communication among the 8 GPUs, supporting large-scale model training and distributed inference [2] (a minimal multi-GPU launch sketch also follows below)
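To put the stated memory figure in context, the quick back-of-envelope calculation below (not from the article; the precisions and the weights-only assumption are illustrative) shows roughly how many model parameters 2304GB of aggregate HBM3E could hold, which is consistent with the trillion-parameter claim above:

```python
# Back-of-envelope sketch: aggregate HBM and rough parameter capacity.
# The 288 GB/GPU and 8-GPU figures come from the article; the precision
# choices and the "weights only" assumption are illustrative, not vendor data.

GPUS = 8
HBM_PER_GPU_GB = 288
total_hbm_gb = GPUS * HBM_PER_GPU_GB  # 2304 GB, matching the article

bytes_per_param = {
    "FP16/BF16": 2.0,   # 2 bytes per parameter
    "FP8": 1.0,         # 1 byte per parameter
    "NVFP4": 0.5,       # 4-bit weights, ignoring scaling-factor overhead
}

for fmt, nbytes in bytes_per_param.items():
    # Weights only; activations, KV cache, and framework overhead are ignored.
    params_trillions = total_hbm_gb * 1e9 / nbytes / 1e12
    print(f"{fmt:>10}: ~{params_trillions:.1f}T parameters in {total_hbm_gb} GB")
```

At FP16 this works out to roughly 1.1T parameters, and more at the lower-precision formats, before accounting for activations and serving overhead.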
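The article does not name a software stack; the following is a minimal sketch, assuming PyTorch with the NCCL backend, of how the 8 NVLink-connected GPUs in a single node of this class would typically be driven for data-parallel training:

```python
# Minimal single-node, 8-GPU data-parallel sketch (assumed stack: PyTorch + NCCL).
# NCCL uses NVLink for intra-node GPU-to-GPU communication when it is available.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each of the 8 processes.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; a real workload would load a foundation model here.
    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    ddp_model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.AdamW(ddp_model.parameters(), lr=1e-4)
    x = torch.randn(32, 4096, device=f"cuda:{local_rank}")

    for _ in range(10):
        optimizer.zero_grad()
        loss = ddp_model(x).pow(2).mean()
        loss.backward()   # gradients are all-reduced across the 8 GPUs
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched as `torchrun --nproc_per_node=8 train.py`, this starts one process per GPU; scaling beyond a single node would go over the InfiniBand or Ethernet links described above.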