Group 1
- Nvidia has entered a strategic technology integration agreement with AI chip startup Groq, agreeing to pay approximately $20 billion in cash for technology licensing and to hire Groq's core engineering team [1]
- The move is seen as well-timed positioning by Nvidia at a critical turning point in AI development: revenue from inference workloads is projected to surpass revenue from training workloads for the first time in 2025, with inference accounting for 52.3% of global AI workloads [3]
- This "inference inflection" is driving a surge in demand for dedicated AI processors offering low latency, high energy efficiency, and deterministic response times; Groq's language processing unit (LPU) is among the lowest-latency, highest-token-throughput commercial chips currently available [3]
Nvidia locks in Groq's core team for $20 billion, accelerating its build-out of core infrastructure for the era of real-time AI inference