Akamai to Deploy Thousands of NVIDIA Blackwell GPUs to Create One of the World’s Most Widely Distributed AI Platforms

Core Insights
- Akamai has acquired thousands of NVIDIA Blackwell GPUs to expand its globally distributed cloud infrastructure, creating a unified platform for AI research and development, fine-tuning, and post-training optimization [1][4]
- The industry has reached a tipping point at which AI inference is as critical as model training, with 56% of organizations citing latency as a primary barrier to deploying AI at scale [2]
- Akamai's strategy centers on decentralized AI infrastructure built for the demands of the inference era, allowing AI to interact with physical systems without the constraints of traditional cloud architecture [3][6]

Group 1: Infrastructure and Technology
- The integration of NVIDIA Blackwell AI infrastructure enables Akamai to redefine how AI is used by bringing inference closer to users and devices [4]
- Akamai's platform combines NVIDIA RTX PRO™ Servers and BlueField-3 DPUs with its distributed cloud computing infrastructure, which spans more than 4,400 global locations [6]
- The company has seen strong demand for its initial deployment of NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and plans to continue expanding GPU capacity [6]

Group 2: Performance and Cost Efficiency
- Akamai's infrastructure delivers up to 2.5x lower latency and can cut AI inference costs by as much as 86% compared with traditional hyperscaler infrastructure [5]
- The platform supports predictable, high-performance inference by processing AI workloads on dedicated GPU clusters [7]
- Large language models can be fine-tuned locally, on-site, to meet data privacy and regional compliance requirements [7]