Performance Comparison
- AMD EPYC 9575F CPU achieves approximately 1.24x higher total token throughput on Mixtral 8x7B compared to Intel Xeon 8592+ [1]
- AMD EPYC 9575F CPU delivers approximately 1.10x higher total token throughput on Llama4-Maverick-17B-128E-FP8 compared to Intel Xeon 8592+ [1]
- AMD EPYC 9575F CPU provides approximately 1.05x higher AI inference throughput on Deepseek-R1-SGLang compared to Intel Xeon 8592+ [1]
- Both AMD EPYC 9575F and Intel Xeon 8592+ were tested as host CPUs for 8x AMD Instinct MI300X GPUs [1] (a sketch of how total token throughput can be measured appears at the end of this page)

Product & Technology
- AMD EPYC 9575F is positioned as the highest-performance CPU for hosting AI accelerators [1]

Legal & Trademark
- ©2025 Advanced Micro Devices, Inc. [1]
- AMD and the AMD Arrow Logo are trademarks of Advanced Micro Devices, Inc. in the United States and other jurisdictions [1]
Improve AI accelerator performance with AMD EPYC™ AI host processors
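Total token throughput is commonly reported as tokens processed per second (prompt plus generated tokens) across concurrent requests. The sketch below shows one way such a number can be estimated against an OpenAI-compatible inference endpoint, such as an SGLang server like the one referenced in the Deepseek-R1 comparison above. The endpoint URL, port, model id, prompt, concurrency, and token accounting here are illustrative assumptions, not the harness or configuration behind the cited figures.

```python
"""Minimal sketch: estimate total token throughput (tokens/sec) against an
OpenAI-compatible inference endpoint. All values below are assumptions for
illustration, not the benchmark setup used for the published results."""
import time
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "http://localhost:30000/v1/chat/completions"  # assumed local server URL/port
MODEL = "deepseek-r1"      # placeholder model id
CONCURRENCY = 32           # simultaneous in-flight requests
REQUESTS_TOTAL = 128       # total requests in the measurement window


def one_request(_: int) -> int:
    """Send one non-streaming chat completion and return its total token count."""
    resp = requests.post(
        ENDPOINT,
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": "Summarize the host CPU's role in GPU inference."}],
            "max_tokens": 256,
        },
        timeout=300,
    )
    resp.raise_for_status()
    usage = resp.json()["usage"]
    # "Total" throughput here counts prompt plus generated tokens.
    return usage["prompt_tokens"] + usage["completion_tokens"]


def main() -> None:
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
        token_counts = list(pool.map(one_request, range(REQUESTS_TOTAL)))
    elapsed = time.perf_counter() - start
    print(f"total tokens: {sum(token_counts)}")
    print(f"total token throughput: {sum(token_counts) / elapsed:.1f} tokens/sec")


if __name__ == "__main__":
    main()
```

Because request generation, tokenization bookkeeping, and GPU orchestration all run on the host, measurements like this are sensitive to the host CPU as well as the accelerators, which is the context for the comparisons above.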