AMD Instinct™ MI350 Series GPUs
Broadcom Makes VMware Cloud Foundation an AI Native Platform and Accelerates Developer Productivity
Globenewswire· 2025-08-26 13:04
Core Insights
- Broadcom Inc. is enhancing VMware Cloud Foundation (VCF) to accelerate enterprise adoption of Private AI as a Service; nine of the top 10 Fortune 500 companies are already committed to VCF, and more than 100 million cores are licensed globally [1][2]
- VCF 9.0 will integrate VMware Private AI Services as a standard component, positioning it as an AI-native platform for secure, scalable private cloud infrastructure [1][3]

Group 1: AI Innovations and Features
- VCF is designed to support AI workloads with GPU precision, enabling organizations to run, move, and govern AI models securely [3]
- Native AI services included in VCF 9.0 will enhance privacy and security, simplify infrastructure, and streamline model deployment, with availability expected in Broadcom's Q1 FY26 [3]
- Upcoming AI innovations will include support for NVIDIA Blackwell infrastructure, extending VCF's capabilities for demanding workloads [4]

Group 2: Developer and Infrastructure Enhancements
- VCF provides vSphere Kubernetes Service (VKS) for agile app development, allowing developers to focus on applications rather than infrastructure [7]
- The integration of GitOps and Argo CD will streamline secure application delivery, ensuring consistent and auditable deployments [11]
- Multi-tenant Models-as-a-Service will enable secure sharing of AI models while maintaining data privacy and isolation [5]

Group 3: Collaborations and Ecosystem Expansion
- Broadcom is collaborating with AMD to enhance enterprise AI infrastructure, allowing customers to leverage AMD ROCm™ and Instinct™ MI350 Series GPUs for AI workloads [6]
- A partnership with Canonical aims to accelerate the deployment of modern container-based and AI applications [11]

Group 4: Customer Testimonials
- Grinnell Mutual reports enhanced agility, efficiency, and security through VCF, fostering collaboration across traditionally separate teams [9]
- New Belgium Brewing Company highlights cost efficiencies and improved IT operations with the integrated VCF 9.0 platform [9]
Introducing the AMD Instinct™ MI350 Series GPUs: Ultimate AI & HPC Acceleration
AMD· 2025-07-23 17:01
Product Highlights
- AMD Instinct™ MI350 Series GPUs are designed for generative AI and high-performance computing (HPC) acceleration in data centers [1]
- The GPUs are built on the 4th Gen AMD CDNA™ architecture [1]
- The series aims to deliver efficiency and performance for training AI models, high-speed inference, and complex HPC workloads [1]

Target Applications
- Training massive AI models [1]
- High-speed inference [1]
- Complex HPC workloads such as scientific simulations, data processing, and computational modeling [1]

Legal and Trademark Information
- ©2025 Advanced Micro Devices, Inc. [1]
- AMD and the AMD Arrow Logo are trademarks of Advanced Micro Devices, Inc. in the United States and other jurisdictions [1]
Micron HBM Designed into Leading AMD AI Platform
Globenewswire· 2025-06-12 18:46
Core Insights
- Micron Technology has announced the integration of its HBM3E 36GB 12-high memory into AMD's Instinct™ MI350 Series GPUs, emphasizing the importance of power efficiency and performance in AI data center applications [4][5]
- This collaboration marks a significant milestone for Micron in the high-bandwidth memory (HBM) industry, showcasing its strong customer relationships and execution capabilities [4][5]

Product Features
- The HBM3E 36GB 12-high solution provides outstanding bandwidth at lower power consumption, supporting AI models with up to 520 billion parameters on a single GPU [5]
- AMD Instinct MI350 Series platforms can achieve up to 8 TB/s of memory bandwidth and a peak theoretical 161 PFLOPS at FP4 precision, with a total of 2.3 TB of HBM3E memory in a full platform configuration [5]
- This integrated architecture enhances throughput for large language model training, inference, and scientific simulations, allowing data centers to scale efficiently while maximizing compute performance per watt [5]

Strategic Collaboration
- Micron and AMD's close working relationship optimizes the compatibility of Micron's HBM3E product with the MI350 Series GPUs, improving total cost of ownership (TCO) for demanding AI systems [6]
- The collaboration aims to advance low-power, high-bandwidth memory solutions that facilitate the training of larger AI models and the handling of complex high-performance computing (HPC) workloads [7]
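The memory figures quoted above are internally consistent, which a quick back-of-the-envelope check makes clear. The sketch below assumes details the release does not spell out: eight 36 GB HBM3E stacks per GPU, eight GPUs in a full platform, and FP4 weights stored at 0.5 bytes per parameter.

```python
# Sanity-check the Micron/AMD memory figures quoted above.
# Assumptions (not stated in the release): 8 HBM3E stacks per GPU,
# 8 GPUs per full platform, FP4 weights = 0.5 bytes per parameter.

STACK_GB = 36          # one HBM3E 12-high stack
STACKS_PER_GPU = 8     # assumed stack count per MI350 Series GPU
GPUS_PER_PLATFORM = 8  # assumed GPU count in a full platform

gpu_hbm_gb = STACK_GB * STACKS_PER_GPU                    # GB of HBM3E per GPU
platform_hbm_tb = gpu_hbm_gb * GPUS_PER_PLATFORM / 1000   # TB per platform

# 520 billion parameters at FP4 (4 bits = 0.5 bytes each)
params_520b_gb = 520e9 * 0.5 / 1e9                        # GB of weights

print(f"HBM3E per GPU:       {gpu_hbm_gb} GB")
print(f"HBM3E per platform:  {platform_hbm_tb:.1f} TB")
print(f"520B params @ FP4:   {params_520b_gb:.0f} GB")
```

Under these assumptions each GPU carries 288 GB of HBM3E, eight GPUs total roughly the quoted 2.3 TB, and a 520-billion-parameter model at FP4 occupies about 260 GB of weights, which fits within a single GPU's memory as the release claims.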