AMD and its Partners Share their Vision for “AI Everywhere, for Everyone” at CES 2026
GlobeNewswire · 2026-01-06 04:30
Core Insights
- AMD is leveraging its extensive AI product portfolio and industry collaborations to turn AI technology into real-world impact [2][4]
- The company is entering an era of yotta-scale computing, with global compute capacity projected to grow from 100 zettaflops to over 10 yottaflops in the next five years [5]

AI Infrastructure
- The AMD "Helios" rack-scale platform is designed to deliver up to 3 AI exaflops of performance in a single rack, prioritizing maximum bandwidth and energy efficiency for large-scale AI training [6]
- The platform is powered by AMD Instinct MI455X accelerators, EPYC "Venice" CPUs, and Pensando "Vulcano" NICs, all integrated through the open AMD ROCm software ecosystem [6]

Product Launches
- AMD unveiled the full AMD Instinct MI400 Series accelerator portfolio and previewed the next-generation MI500 Series GPUs, which are expected to deliver up to a 1,000x increase in AI performance over the MI300X GPUs [7][9]
- The new AMD Instinct MI440X GPU targets enterprise AI deployments, supporting scalable training and inference workloads [7][8]

AI in PCs
- AMD introduced new Ryzen AI platforms for AI PCs, including the Ryzen AI 400 Series, which delivers a 60 TOPS NPU and supports seamless cloud-to-client AI scaling [10][11]
- The Ryzen AI Max+ series supports models with up to 128 billion parameters, enhancing local inference and content creation [12]

Embedded AI Solutions
- AMD launched Ryzen AI Embedded processors aimed at powering AI-driven applications across sectors including automotive and healthcare [14]

Strategic Initiatives
- AMD is participating in the U.S. government's Genesis Mission, which aims to secure U.S. leadership in AI technologies and includes the deployment of AMD-powered AI supercomputers [15][16]
- The company committed $150 million to expanding AI education and access in classrooms and communities [16]
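The "yotta-scale" projection above implies a very steep growth curve. A minimal sketch of the arithmetic, assuming the article's figures (100 zettaflops today, 10 yottaflops in five years); the implied annual growth rate is our derivation, not a number from AMD:

```python
# Growth implied by AMD's yotta-scale projection (figures from the article).
current_flops = 100e21    # 100 zettaFLOPS = 1e23 FLOPS
projected_flops = 10e24   # 10 yottaFLOPS  = 1e25 FLOPS
years = 5

total_growth = projected_flops / current_flops   # 100x over five years
annual_growth = total_growth ** (1 / years)      # compound rate per year

print(f"total growth: {total_growth:.0f}x")
print(f"implied annual growth: {annual_growth:.2f}x per year")
```

A 100x jump over five years works out to roughly 2.5x compounded per year, i.e. compute capacity more than doubling annually.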
Intel Xeon 7 Preview: 192 Cores, 16 Memory Channels
半导体行业观察 · 2025-07-08 01:35
Core Viewpoint
- Intel's upcoming Xeon "Diamond Rapids" processors are expected to bring significant advances, including up to 192 cores, support for 8 or 16 DDR5 memory channels, and a thermal design power (TDP) of 500W, a substantial upgrade over previous generations [1][3][4]

Group 1: Processor Specifications
- The Diamond Rapids processors will use a new LGA 9324 socket platform with an enhanced memory subsystem and expanded PCIe lane configuration [1]
- The processors are anticipated to support data transfer rates exceeding 12,800 MT/s, potentially reaching over 1.6 TB/s of peak bandwidth, a notable increase over Granite Rapids' approximately 844 GB/s [3]
- The package will include up to six chiplets, with a maximum of four compute tiles and two I/O tiles, enabling advanced memory and PCIe connectivity [3][4]

Group 2: Architectural Features
- The processors will be based on the Panther Cove microarchitecture, which aims to improve the efficiency of the AMX extensions and add support for the FP8 and TF32 data formats [4][6]
- Diamond Rapids will be part of the Oak Stream platform, supporting single-, dual-, or quad-socket configurations and PCIe Gen 6 interconnects [4]

Group 3: Market Positioning and Competition
- Intel plans to release the Diamond Rapids processors in 2026, potentially alongside the launch of the "Jaguar Shores" AI accelerator, forming a complete AI system [6]
- The Diamond Rapids processors will compete directly with AMD's EPYC "Venice" CPUs, which are based on the Zen 6 architecture and scale up to 256 cores [7]
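The bandwidth figures quoted above follow from standard DDR5 bus math. A back-of-the-envelope sketch, assuming a 64-bit (8-byte) data path per channel and, for the Granite Rapids comparison point, 8,800 MT/s MRDIMMs across 12 channels (a configuration consistent with the article's ~844 GB/s figure, not stated in it):

```python
BYTES_PER_TRANSFER = 8  # 64-bit DDR5 channel data path

def peak_bandwidth_gbs(mt_per_s: float, channels: int) -> float:
    """Peak memory bandwidth in GB/s: (transfers/s) * bytes * channels."""
    return mt_per_s * 1e6 * BYTES_PER_TRANSFER * channels / 1e9

# Diamond Rapids (rumored): 12,800 MT/s across 16 channels
diamond = peak_bandwidth_gbs(12_800, 16)  # ~1638 GB/s, i.e. ~1.6 TB/s

# Granite Rapids (assumed config): 8,800 MT/s across 12 channels
granite = peak_bandwidth_gbs(8_800, 12)   # ~845 GB/s

print(f"Diamond Rapids: {diamond:.0f} GB/s")
print(f"Granite Rapids: {granite:.0f} GB/s")
```

Note these are theoretical peaks; sustained bandwidth in real workloads lands well below them due to refresh, turnaround, and controller overheads.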