AMD Instinct™ MI350 Series GPUs
AMD Unveils Strategy to Lead the $1 Trillion Compute Market and Accelerate Next Phase of Growth
Globenewswire · 2025-11-11 21:30
Core Insights
- AMD is entering a new growth era driven by its leadership in technology and AI, with a long-term revenue CAGR target exceeding 35% and a non-GAAP EPS target above $20 [1][12]
- The company emphasizes its broad product portfolio and strategic partnerships, positioning itself to lead in high-performance and AI computing [2]

Product Leadership and Momentum
- AMD's ROCm™ open software has seen a 10x increase in downloads year-over-year, indicating strong developer engagement [4]
- The AI PC portfolio has expanded 2.5x since 2024, with AMD Ryzen™ powering over 250 platforms, and is expected to achieve up to 10x performance gains with next-generation processors [5]
- AMD has secured over $50 billion in design wins in its embedded segment since 2022, positioning itself for AI-driven growth from cloud to edge [6]

Technology Leadership
- AMD is extending its innovations in chiplet design, packaging, and interconnect technology to enhance AI performance and efficiency [7]
- The AMD Instinct™ MI350 Series GPUs are the fastest-ramping products in the company's history, with significant deployments by leading cloud providers [8]

Long-Term Growth Targets
- AMD aims for a greater than 60% revenue CAGR in its data center business and over 80% CAGR in data center AI [12]
- The company expects to achieve more than 50% market share in server CPUs and over 40% market share in client revenue [12]
- AMD plans to exceed 70% revenue market share in adaptive computing and expand its embedded segment opportunities [12]
Introducing the AMD Instinct™ MI350 Series GPUs: Ultimate AI & HPC Acceleration
AMD · 2025-10-20 18:11
Product & Technology
- AMD Instinct™ MI350 Series GPUs are designed for Generative AI and high-performance computing (HPC) acceleration in data centers [1]
- The GPUs are built on the 4th Gen AMD CDNA™ architecture [1]
- The GPUs deliver efficiency and performance for training AI models, high-speed inference, and complex HPC workloads (see the sketch after this summary) [1]

Target Applications
- The GPUs are suitable for scientific simulations, data processing, and computational modeling [1]

Legal & Trademark
- © 2025 Advanced Micro Devices, Inc. [1]
- AMD and the AMD Arrow Logo are trademarks of Advanced Micro Devices, Inc. in the United States and other jurisdictions [1]
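For context, a minimal sketch of how training- and inference-style work is typically dispatched to an Instinct-class accelerator: ROCm builds of PyTorch expose AMD GPUs through the standard "cuda" device interface, so existing GPU code runs largely unchanged. The matrix sizes and dtype below are illustrative placeholders, not MI350-specific guidance.

```python
import torch

# On a ROCm build of PyTorch, AMD Instinct GPUs show up behind the familiar
# "cuda" device interface (backed by HIP). Shapes and dtype are illustrative.
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("Accelerator:", torch.cuda.get_device_name(0))

    # A bfloat16 matrix multiply stands in for the dense math that dominates
    # both model training and high-speed inference.
    a = torch.randn(4096, 4096, dtype=torch.bfloat16, device=device)
    b = torch.randn(4096, 4096, dtype=torch.bfloat16, device=device)
    c = a @ b
    torch.cuda.synchronize()
    print("Result shape:", tuple(c.shape))
else:
    print("No ROCm-visible accelerator found for this PyTorch build.")
```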
Broadcom Makes VMware Cloud Foundation an AI Native Platform and Accelerates Developer Productivity
Globenewswire · 2025-08-26 13:04
Core Insights
- Broadcom Inc. is enhancing VMware Cloud Foundation (VCF) to accelerate enterprise adoption of Private AI as a Service, with nine of the top 10 Fortune 500 companies already committed to VCF and over 100 million cores licensed globally [1][2]
- VCF 9.0 will integrate VMware Private AI Services as a standard component, positioning it as an AI-native platform for secure and scalable private cloud infrastructure [1][3]

Group 1: AI Innovations and Features
- VCF is designed to support AI workloads with GPU precision, enabling organizations to run, move, and govern AI models securely [3]
- Native AI services included in VCF 9.0 will enhance privacy and security, simplify infrastructure, and streamline model deployment, with availability expected in Broadcom's Q1 FY26 [3]
- Upcoming AI innovations will include support for NVIDIA Blackwell Infrastructure, enhancing the capabilities of VCF for demanding workloads [4]

Group 2: Developer and Infrastructure Enhancements
- VCF provides a vSphere Kubernetes Service (VKS) for agile app development, allowing developers to focus on applications rather than infrastructure [7]
- The integration of GitOps and Argo CD will streamline secure application delivery, ensuring consistent and auditable deployments (a minimal sketch of this pattern follows this summary) [11]
- Multi-tenant Models-as-a-Service will enable secure sharing of AI models while maintaining data privacy and isolation [5]

Group 3: Collaborations and Ecosystem Expansion
- Broadcom is collaborating with AMD to enhance enterprise AI infrastructure, allowing customers to leverage AMD ROCm™ and Instinct™ MI350 Series GPUs for AI workloads [6]
- The partnership with Canonical aims to accelerate the deployment of modern container-based and AI applications [11]

Group 4: Customer Testimonials
- Grinnell Mutual reports enhanced agility, efficiency, and security through VCF, fostering collaboration across traditionally separate teams [9]
- New Belgium Brewing Company highlights cost efficiencies and improved IT operations with the integrated VCF 9.0 platform [9]
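The GitOps point above can be made concrete with a small sketch. Assuming an Argo CD installation in the conventional argocd namespace and a cluster reachable via the local kubeconfig, an Application custom resource can be registered from Python with the official kubernetes client; the repository URL, application name, and target namespace are hypothetical placeholders, and this is a generic Argo CD pattern rather than anything VCF-specific.

```python
from kubernetes import client, config

# Register an Argo CD Application so the cluster is continuously reconciled
# against a Git repository (the GitOps model referenced above).
# Repo URL, path, and names below are hypothetical placeholders.
config.load_kube_config()
api = client.CustomObjectsApi()

application = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "demo-app", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://example.com/org/demo-app.git",
            "path": "deploy/manifests",
            "targetRevision": "main",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "demo",
        },
        # Automated sync with pruning keeps the cluster state matching Git,
        # which is what makes deployments consistent and auditable.
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

api.create_namespaced_custom_object(
    group="argoproj.io",
    version="v1alpha1",
    namespace="argocd",
    plural="applications",
    body=application,
)
```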
Introducing the AMD Instinct™ MI350 Series GPUs: Ultimate AI & HPC Acceleration
AMD · 2025-07-23 17:01
Product Highlights
- AMD Instinct™ MI350 Series GPUs are designed for Generative AI and high-performance computing (HPC) acceleration in data centers [1]
- The GPUs are built on the 4th Gen AMD CDNA™ architecture [1]
- The series aims to deliver efficiency and performance for training AI models, high-speed inference, and complex HPC workloads [1]

Target Applications
- The GPUs are suitable for training massive AI models [1]
- They are also applicable for high-speed inference [1]
- Complex HPC workloads such as scientific simulations, data processing, and computational modeling can also benefit from these GPUs [1]

Legal and Trademark Information
- © 2025 Advanced Micro Devices, Inc. [1]
- AMD and the AMD Arrow Logo are trademarks of Advanced Micro Devices, Inc. in the United States and other jurisdictions [1]
Micron HBM Designed into Leading AMD AI Platform
Globenewswire · 2025-06-12 18:46
Core Insights
- Micron Technology has announced the integration of its HBM3E 36GB 12-high memory into AMD's Instinct™ MI350 Series GPUs, emphasizing the importance of power efficiency and performance in AI data center applications [4][5]
- This collaboration marks a significant milestone for Micron in the high-bandwidth memory (HBM) industry, showcasing its strong customer relationships and execution capabilities [4][5]

Product Features
- The HBM3E 36GB 12-high solution provides outstanding bandwidth and lower power consumption, supporting AI models with up to 520 billion parameters on a single GPU [5]
- The AMD Instinct MI350 Series platforms can achieve up to 8 TB/s bandwidth and a peak theoretical performance of 161 PFLOPS at FP4 precision, with a total of 2.3 TB of HBM3E memory in a full platform configuration (the arithmetic behind these platform totals is sketched after this summary) [5]
- This integrated architecture enhances throughput for large language model training, inference, and scientific simulations, allowing data centers to scale efficiently while maximizing compute performance per watt [5]

Strategic Collaboration
- Micron and AMD's close working relationship optimizes the compatibility of Micron's HBM3E product with the MI350 Series GPUs, providing improved total cost of ownership (TCO) benefits for demanding AI systems [6]
- The collaboration aims to advance low-power, high-bandwidth memory solutions that facilitate the training of larger AI models and the handling of complex high-performance computing (HPC) workloads [7]
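The platform-level figures quoted above compose from per-GPU arithmetic. The sketch below assumes an 8-GPU platform with 8 HBM3E stacks of 36 GB each per GPU and roughly 20.1 PFLOPS of peak FP4 compute per GPU; those per-GPU assumptions are mine and are used only to show how the 2.3 TB and ~161 PFLOPS totals add up.

```python
# Back-of-the-envelope composition of the platform figures quoted above.
# Per-GPU assumptions (stack count, FP4 peak) are illustrative, not taken
# from the article itself.
GPUS_PER_PLATFORM = 8
HBM3E_STACKS_PER_GPU = 8
GB_PER_STACK = 36             # Micron HBM3E 36GB 12-high
FP4_PFLOPS_PER_GPU = 20.1     # assumed per-GPU peak at FP4 precision

memory_per_gpu_gb = HBM3E_STACKS_PER_GPU * GB_PER_STACK             # 288 GB
platform_memory_tb = GPUS_PER_PLATFORM * memory_per_gpu_gb / 1000   # ~2.3 TB
platform_fp4_pflops = GPUS_PER_PLATFORM * FP4_PFLOPS_PER_GPU        # ~161 PFLOPS

print(f"Memory per GPU:    {memory_per_gpu_gb} GB")
print(f"Platform HBM3E:    {platform_memory_tb:.1f} TB")
print(f"Platform FP4 peak: {platform_fp4_pflops:.0f} PFLOPS")
```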