AI Infrastructure & Market Positioning

- Supermicro identifies itself as the fastest-growing AI OEM, projecting revenue growth from $7 billion to $14 billion and targeting approximately $22 billion [4]
- The company emphasizes its focus on addressing the missing elements in AI deployments, particularly deployment scale and efficiency [5]
- Supermicro aims to improve the experience of deploying, managing, and servicing AI infrastructure by collaborating with ecosystem partners [3]

Liquid Cooling & Data Center Efficiency

- Supermicro has a current capacity of 5,000 racks per month, of which 2,000 racks per month are dedicated to liquid cooling [7]
- The company is investing in liquid cooling technologies and has 18 MW of power capacity in its manufacturing facility for system-level testing [6]
- Supermicro aims to push liquid cooling efficiency to as high as 98% to reduce water consumption and noise levels [24]
- Liquid cooling can improve efficiency by up to 40% by targeting CPUs, GPUs, memory, and power supplies [23] (see the illustrative calculation after these lists)

Holistic AI Deployment Approach

- Supermicro advocates a holistic approach to AI deployments, spanning the system, rack, and data center levels and accounting for power, cooling, rack density, and weight [10][11][20] (see the rack-budget sketch after these lists)
- The company emphasizes the importance of rapid deployment and monetization of AI infrastructure, working closely with technology partners [8]
- Supermicro is validating networking solutions, including InfiniBand and Ethernet, to ensure efficient data transfer from the GPUs [19][20]

AMD Partnership & Solutions

- Supermicro collaborates with AMD, offering systems from the MI25 to the MI300X, including air-cooled and liquid-cooled options [26][31]
- Supermicro is shipping MI350 and MI355 series servers in both air-cooled and liquid-cooled configurations [36][50]
- Supermicro was the first to qualify an A+A+A solution, combining AMD CPUs, GPUs, and Pensando AI NICs [49][50]
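As a rough illustration of what a cooling-related efficiency gain of this order can mean at the facility level, the short Python sketch below compares an air-cooled and a liquid-cooled rack. Every number in it (IT load, overhead fractions) is an assumption chosen for the arithmetic, not a figure from Supermicro.

```python
# Illustrative numbers only: what reducing cooling overhead means for
# total facility power. None of these values come from Supermicro.

IT_LOAD_KW = 100.0               # assumed IT load of one AI rack
AIR_COOLING_OVERHEAD = 0.40      # assumed: fan + CRAC/chiller power as a fraction of IT load
LIQUID_COOLING_OVERHEAD = 0.10   # assumed: pump + CDU power with direct liquid cooling

def facility_power(it_kw: float, cooling_overhead: float) -> float:
    """Total power = IT load plus the power spent removing its heat."""
    return it_kw * (1.0 + cooling_overhead)

air = facility_power(IT_LOAD_KW, AIR_COOLING_OVERHEAD)
liquid = facility_power(IT_LOAD_KW, LIQUID_COOLING_OVERHEAD)

print(f"Air-cooled facility power:    {air:.1f} kW  (PUE ~ {air / IT_LOAD_KW:.2f})")
print(f"Liquid-cooled facility power: {liquid:.1f} kW  (PUE ~ {liquid / IT_LOAD_KW:.2f})")
print(f"Cooling power reduced by {(1 - LIQUID_COOLING_OVERHEAD / AIR_COOLING_OVERHEAD) * 100:.0f}%")
```

Under these assumed overheads, the same IT load runs at roughly 110 kW instead of 140 kW of facility power, which is the kind of saving the "up to 40%" cooling claim points at.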
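The rack-budget sketch below is a minimal illustration of the system/rack/data-center trade-off behind the holistic approach: a rack fills up at whichever of its power, cooling, weight, or space budgets runs out first. The server and rack figures are hypothetical placeholders, not Supermicro specifications.

```python
from dataclasses import dataclass

# All figures below are assumptions for illustration, not Supermicro specs.

@dataclass
class ServerSpec:
    power_kw: float    # per-server power draw
    heat_kw: float     # heat to be removed (roughly equal to power draw)
    weight_kg: float   # per-server weight
    height_u: float    # rack units occupied

@dataclass
class RackLimits:
    power_kw: float    # what the busway/PDUs can feed
    cooling_kw: float  # what the CDU or room cooling can remove
    weight_kg: float   # floor-loading limit for the rack footprint
    height_u: int      # usable rack units

def max_servers(server: ServerSpec, rack: RackLimits) -> int:
    """A rack is limited by whichever budget is exhausted first."""
    return int(min(
        rack.power_kw // server.power_kw,
        rack.cooling_kw // server.heat_kw,
        rack.weight_kg // server.weight_kg,
        rack.height_u // server.height_u,
    ))

# Hypothetical 8-GPU AI server in a hypothetical liquid-cooled rack.
gpu_server = ServerSpec(power_kw=10.0, heat_kw=10.0, weight_kg=110.0, height_u=4)
rack = RackLimits(power_kw=100.0, cooling_kw=120.0, weight_kg=1400.0, height_u=48)

print(f"Servers per rack under these assumptions: {max_servers(gpu_server, rack)}")
```

With these placeholder numbers the rack is power-limited at 10 servers even though cooling, weight, and space would each allow 12, which is why the summary stresses planning power, cooling, density, and weight together rather than in isolation.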
Build for What’s Next in AI