AMD (AMD)
Supermicro Delivers Performance and Efficiency Optimized Liquid-Cooled and Air-Cooled AI Solutions with AMD Instinct™ MI350 Series GPUs and Platforms
Prnewswire· 2025-06-12 18:35
Core Viewpoint
- Supermicro is launching new GPU solutions featuring AMD Instinct MI350 series GPUs, designed for high performance, scalability, and efficiency in AI-driven data centers [1][2][3]

Company Overview
- Supermicro is a global leader in Application-Optimized Total IT Solutions, focusing on Enterprise, Cloud, AI, and 5G Telco/Edge IT Infrastructure [8]
- The company emphasizes innovation and efficiency in its product offerings, which include servers, AI, storage, IoT, and networking solutions [8]

Product Features
- The new Supermicro H14 generation GPU solutions support both liquid-cooled and air-cooled configurations, providing flexibility for various deployment environments [4][7]
- These systems utilize dual AMD EPYC 9005 CPUs and AMD Instinct MI350 series GPUs, enhancing performance for AI, HPC, Cloud, and Enterprise workloads [3][5]
- The AMD Instinct MI350 series GPUs deliver up to 40% more tokens-per-dollar compared to competitors, optimizing cost efficiency for customers [4]

Performance Metrics
- The new GPU servers offer 288GB of HBM3e memory per GPU, a 1.5x increase in memory capacity compared to previous generations, and 8TB/s of bandwidth (see the sizing sketch below) [5]
- The systems are designed to maximize computational throughput and energy efficiency, enabling faster processing for AI workloads [5][6]

Market Position
- Supermicro's integration of AMD MI350 series GPUs into its offerings demonstrates a commitment to providing advanced solutions for AI training and inference [6][7]
- The company aims to support the growing demand for scalable and efficient infrastructure in AI applications across cloud service providers and enterprises [7]
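To put the quoted memory figures in context, the following back-of-the-envelope sketch checks whether the weights of a large language model would fit in the aggregate HBM of an 8-GPU node with 288GB per GPU. Only the per-GPU capacity and GPU count come from the article; the model sizes and bytes-per-parameter values are illustrative assumptions, and KV cache and activations are ignored.

```python
# Back-of-the-envelope memory sizing for an 8-GPU node with 288 GB of HBM3e per GPU.
# Per-GPU capacity and GPU count come from the article; the model sizes and
# bytes-per-parameter figures below are illustrative assumptions, not vendor data.

HBM_PER_GPU_GB = 288
GPUS_PER_NODE = 8
NODE_HBM_GB = HBM_PER_GPU_GB * GPUS_PER_NODE  # 2,304 GB of aggregate HBM

def weights_footprint_gb(params_billion: float, bytes_per_param: float) -> float:
    """Memory needed just for the model weights, ignoring KV cache and activations."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for params_b, dtype, bpp in [(70, "FP16", 2.0), (405, "FP16", 2.0), (405, "FP8", 1.0)]:
    need = weights_footprint_gb(params_b, bpp)
    fits = "fits" if need <= NODE_HBM_GB else "does not fit"
    print(f"{params_b}B params @ {dtype}: ~{need:,.0f} GB of weights -> {fits} "
          f"in {NODE_HBM_GB:,} GB of node HBM")
```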
AMD Unveils Vision for an Open AI Ecosystem, Detailing New Silicon, Software and Systems at Advancing AI 2025
Globenewswire· 2025-06-12 18:30
Core Insights
- AMD is positioning itself as a leader in AI innovation with the introduction of its comprehensive AI platform and open, scalable rack-scale AI infrastructure [1][2][3]

Group 1: AI Solutions and Infrastructure
- AMD launched the Instinct MI350 Series accelerators, achieving a 4x increase in AI compute and a 35x improvement in inferencing performance compared to previous generations [4]
- The MI355X model offers significant price-performance advantages, generating up to 40% more tokens-per-dollar than competing solutions [4]
- AMD's next-generation AI rack, "Helios," is expected to deliver up to 10x more performance for inference tasks by utilizing the upcoming MI400 Series GPUs [4]

Group 2: Energy Efficiency Goals
- The Instinct MI350 Series surpassed AMD's five-year goal for energy efficiency, achieving a 38x improvement in energy efficiency for AI training and high-performance computing [4]
- AMD has set a new target to achieve a 20x increase in rack-scale energy efficiency by 2030, allowing for significant reductions in electricity usage for AI model training [4][18]

Group 3: Developer Support and Ecosystem
- AMD introduced the AMD Developer Cloud, providing a managed cloud environment for AI development, aimed at lowering barriers for developers [4]
- The ROCm 7 software stack has been enhanced to support generative AI and high-performance computing, improving developer experience and expanding hardware compatibility (see the sketch below) [4]

Group 4: Partnerships and Collaborations
- Major companies like Meta, OpenAI, Microsoft, and Oracle are leveraging AMD's technology for their AI solutions, indicating strong industry collaboration [5][6]
- Oracle Cloud Infrastructure is adopting AMD's open rack-scale AI infrastructure, planning to offer zettascale AI clusters powered by AMD's latest processors [6]
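As a minimal illustration of what running on the ROCm stack looks like from a developer's perspective, the sketch below checks that a ROCm build of PyTorch can see an AMD GPU and runs a matrix multiply on it. ROCm builds of PyTorch expose AMD GPUs through the familiar torch.cuda API; nothing here is specific to ROCm 7 or to any particular GPU model, and the tensor sizes are arbitrary.

```python
# Minimal sketch: verify that a ROCm build of PyTorch sees an AMD GPU and run a kernel.
# On ROCm, AMD GPUs are addressed through the torch.cuda API; torch.version.hip is set
# instead of torch.version.cuda. This is generic PyTorch code, not ROCm 7-specific.
import torch

print("HIP/ROCm build:", torch.version.hip is not None)
print("GPU available:", torch.cuda.is_available())

if torch.cuda.is_available():
    device = torch.device("cuda")          # maps to the AMD GPU under ROCm
    print("Device name:", torch.cuda.get_device_name(0))
    a = torch.randn(4096, 4096, device=device, dtype=torch.float16)
    b = torch.randn(4096, 4096, device=device, dtype=torch.float16)
    c = a @ b                              # matrix multiply on the accelerator
    torch.cuda.synchronize()
    print("Matmul output shape:", tuple(c.shape))
```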
COMPAL Optimizes AI Workloads with AMD Instinct MI355X at AMD Advancing AI 2025 and International Supercomputing Conference 2025
Prnewswire· 2025-06-12 18:30
Core Insights
- Compal Electronics has launched its new high-performance server platform SG720-2A/OG720-2A, designed for generative AI and large language model training, featuring AMD Instinct™ MI355X GPU architecture and advanced liquid cooling options [1][3][6]

Technical Highlights
- The SG720-2A/OG720-2A supports up to eight AMD Instinct MI350 Series GPUs, enabling scalable training for LLMs and generative AI applications (a multi-GPU launch sketch follows this summary) [7]
- It incorporates a dual cooling architecture, including air and two-phase liquid cooling, optimized for high thermal density workloads, enhancing thermal efficiency [7]
- The server is built on the CDNA 4 architecture with 288GB of HBM3E memory and 8TB/s of bandwidth, supporting FP6 and FP4 data formats, tailored for AI and HPC applications [7]
- High-speed interconnect performance is achieved through PCIe Gen5 and AMD Infinity Fabric™, facilitating multi-GPU orchestration and reducing latency [7]
- The platform is compatible with mainstream open-source AI stacks like ROCm™, PyTorch, and TensorFlow, streamlining AI model integration [7]
- It supports EIA 19" and ORv3 21" rack standards with a modular design for easy upgrades and maintenance [7]

Strategic Collaboration
- Compal has a long-standing collaboration with AMD, co-developing solutions that enhance efficiency and sustainability in data center operations [5]
- The launch of the SG720-2A/OG720-2A at both Advancing AI 2025 and ISC 2025 highlights Compal's commitment to expanding its global visibility and partnerships in the AI and HPC sectors [7]
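To show how an 8-GPU node like this is typically driven from the open-source stacks mentioned above, here is a standard PyTorch DistributedDataParallel sketch launched with `torchrun --nproc_per_node=8 train.py`. It is generic data-parallel code under the assumption of one process per GPU; the tiny linear model and random data are placeholders, not anything specific to the SG720-2A or the MI355X.

```python
# Sketch of the standard PyTorch DistributedDataParallel pattern for a single 8-GPU node.
# Launch with: torchrun --nproc_per_node=8 train.py
# The model and data below are placeholders; the pattern itself is generic DDP.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")            # RCCL backs this collective layer on ROCm
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).pow(2).mean()
        opt.zero_grad()
        loss.backward()                        # gradients are all-reduced across the 8 GPUs
        opt.step()
        if dist.get_rank() == 0 and step % 5 == 0:
            print(f"step {step}: loss {loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```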
Oracle and AMD Collaborate to Help Customers Deliver Breakthrough Performance for Large-Scale AI and Agentic Workloads
Prnewswire· 2025-06-12 18:30
Core Insights
- Oracle and AMD are collaborating to offer an AI supercomputer featuring AMD Instinct MI355X GPUs, enhancing AI training and inference capabilities for customers [1][2]
- The new zettascale AI cluster will support up to 131,072 MI355X GPUs, providing over 2X better price-performance compared to previous generations [1][2]
- The partnership aims to meet the growing demand for AI infrastructure, enabling customers to handle larger and more complex datasets effectively [2][3]

Performance Enhancements
- The AMD Instinct MI355X GPUs deliver nearly triple the compute power and a 50% increase in high-bandwidth memory compared to the previous generation [2]
- Customers can expect up to 2.8X higher throughput for AI deployments, resulting in faster results and lower latency [4]
- The new architecture supports 288 gigabytes of high-bandwidth memory 3 (HBM3) and up to eight terabytes per second of memory bandwidth, facilitating the execution of large models entirely in memory [4]

Infrastructure and Design
- The OCI Supercluster features a high-throughput, ultra-low-latency RDMA cluster network architecture, optimized for demanding AI workloads [2]
- A dense, liquid-cooled design allows for 125 kilowatts per rack, accommodating 64 GPUs per rack at 1,400 watts each, enhancing performance density (see the power-budget sketch below) [4]
- The powerful head node, equipped with an AMD Turin high-frequency CPU, optimizes GPU performance through efficient job orchestration and data processing [4]

Open-Source and Networking Innovations
- The open-source stack, AMD ROCm, enables flexible architectures and easy migration of existing code, preventing vendor lock-in [4]
- Oracle will be the first to deploy AMD Pollara AI NICs, providing advanced RoCE functionality for high-performance and low-latency networking [5]
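The rack figures above can be sanity-checked with simple arithmetic: 64 GPUs at 1,400 watts each draw 89.6 kW, leaving roughly 35 kW of the quoted 125 kW rack budget for everything else. The GPU count, per-GPU wattage, and rack budget come from the article; how the remaining headroom is spent (CPUs, NICs, cooling) is an illustrative assumption.

```python
# Quick check of the rack power figures quoted above: 64 GPUs at 1,400 W each against a
# 125 kW liquid-cooled rack budget. GPU count, per-GPU wattage, and rack budget come from
# the article; the use of the remaining headroom is an illustrative assumption.

GPUS_PER_RACK = 64
WATTS_PER_GPU = 1_400
RACK_BUDGET_KW = 125

gpu_load_kw = GPUS_PER_RACK * WATTS_PER_GPU / 1_000    # 89.6 kW for the GPUs alone
headroom_kw = RACK_BUDGET_KW - gpu_load_kw             # ~35.4 kW for CPUs, NICs, cooling, etc.

print(f"GPU load per rack: {gpu_load_kw:.1f} kW")
print(f"Headroom within the {RACK_BUDGET_KW} kW budget: {headroom_kw:.1f} kW")
```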
AMD (AMD.O) CEO: Announced the launch of ROCm 7 software for AI chips.
news flash· 2025-06-12 17:43
AMD (AMD.O) CEO: Announced the launch of ROCm 7 software for AI chips. ...
AMD's CEO announced the launch of ROCm 7 software for AI chips.
news flash· 2025-06-12 17:38
AMD's CEO announced the launch of ROCm 7 software for AI chips. ...
June 13: AMD's CEO announced the launch of ROCm 7 software for AI chips.
news flash· 2025-06-12 17:38
Zhitong Finance, June 13: AMD's CEO announced the launch of ROCm 7 software for AI chips. ...
Advanced Micro Devices (AMD) Update / Briefing Transcript
2025-06-12 17:30
Advanced Micro Devices (AMD) Update / Briefing June 12, 2025 12:30 PM ET
Speaker0: Morning. How's everyone doing? It is great to be back here in Silicon Valley with so many friends, press, analysts, partners, and especially all of the developers who are here today. And a big welcome to everyone who's joining online from around the world for our Advancing AI twenty twenty five. Now, it's been an incredibly busy nine months since our last Advancing AI event. We launched lots of new AI data center, PC, and gaming ...
AMD (AMD.O) CEO: The MI350 AI chip is faster than comparable NVIDIA products.
news flash· 2025-06-12 17:05
AMD (AMD.O) CEO: The MI350 AI chip is faster than comparable NVIDIA products. ...
AMD (AMD.O) CEO: Launching the MI350 and MI355 chips; the MI400 chip will launch next year.
news flash· 2025-06-12 17:05
AMD (AMD.O) CEO: Launching the MI350 and MI355 chips; the MI400 chip will launch next year. ...