Colossus 1
Musk: 230,000 GPUs, including 30,000 GB200 units, are being used to train Grok in a supercomputing cluster named Colossus 1. In Colossus 2, the first batch of 550,000 GB200 and GB300 units will also begin coming online for training in a few weeks. As Jensen Huang put it, xAI's speed is unmatched.
news flash· 2025-07-22 17:03
Core Insights
- The company is using a total of 230,000 GPUs, including 30,000 GB200 units, to train its AI model Grok in a supercomputing cluster named Colossus 1 [1]
- The first batch of 550,000 GB200 and GB300 GPUs will be deployed in the coming weeks for training in Colossus 2 [1]
- The speed of xAI's operations is described by Jensen Huang as unparalleled [1]