GB200 NVL72 chips
AI "power shortage" still unresolved: Musk orders more gas turbines as some data centers face 7-year waits for grid connection
Xin Lang Cai Jing · 2026-01-10 10:53
Musk replied, "True." He later said in an interview that electricity generation is the limiting factor in scaling up AI systems, but that "people underestimate how hard it is to increase power supply," and that China's decisive advantage in the AI race lies in its ability to supply electricity at scale.

Before that, Doosan Enerbility said last October that it had signed a contract with a large US technology company to supply 2 gas turbines of 380 MW each; in December it received an order for 3 more 380 MW gas turbines from the same customer.

On January 8, US power-infrastructure company Babcock & Wilcox announced it had selected Siemens Energy to supply steam turbine generator sets for a project that will deliver 1 GW of power to an AI data center being prepared by Applied Digital.

Not long ago, OpenAI also ordered 29 gas turbines, each rated at 34 MW, for its Stargate data center in Abilene, Texas, enough to support 500,000 GB200 NVL72 chips.

▌ Multiple data centers weigh building their own power plants as the supply gap for heat-recovery steam generators widens

STAR Market Daily, January 10: In 2026, America's AI "power shortage" remains unresolved. Data disclosed in SemiAnalysis's latest report shows that in Texas alone, tens of gigawatts of data-center load applications are filed every month, yet over the past 12 months only slightly more than 1 GW was approved; grid capacity is saturated. The power shortage has not stopped A ...
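The OpenAI turbine order implies a rough power budget per accelerator. A back-of-envelope sketch, using only the article's figures (which are not independently verified, and the article does not say whether "chips" means individual GPUs or NVL72 racks):

```python
# Back-of-envelope check of the OpenAI Stargate turbine order cited above.
# All inputs (29 turbines, 34 MW each, 500,000 GB200 NVL72 chips) are the
# article's figures; cooling and other overhead are implicitly included.
turbines = 29
mw_per_turbine = 34
total_mw = turbines * mw_per_turbine        # 986 MW, just under 1 GW
chips = 500_000
kw_per_chip = total_mw * 1_000 / chips      # implied all-in budget per chip
print(total_mw, round(kw_per_chip, 2))      # 986 1.97
```

At roughly 2 kW per chip, the implied budget is in the plausible range for a GB200-class accelerator plus facility overhead, which is consistent with the article's framing of the order as dedicated data-center power.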
Musk confirms xAI bought 5 more gas turbines to power its supercomputing cluster
Sou Hu Cai Jing · 2026-01-07 13:52
IT Home, January 7: Elon Musk's AI startup xAI has purchased an additional 5 gas turbines of 380 MW each from South Korea's Doosan Enerbility to power its continually expanding supercomputing cluster.

News of the purchase was first disclosed by X user @SemiAnalysis, who noted that the turbines are produced by Doosan Enerbility. According to the Asia Economic Daily, Doosan announced last October that it had signed a contract with a large US technology company to supply 2 gas turbines of 380 MW each; in December, Doosan further disclosed an order for 3 turbines of the same model.

The X user said these turbines will power a new compute cluster equivalent to more than 600,000 GB200 NVL72 chips, which could put xAI's data center among the largest in the world. Elon Musk confirmed the purchase in a reply on X: "True."

xAI recently closed an oversubscribed Series E round of $20 billion (IT Home note: roughly RMB 139.906 billion at current exchange rates), exceeding its initial $15 billion target; the funds will be used to accelerate infrastructure buildout and AI product development. This AI ...
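The xAI figures imply a somewhat higher power budget per chip than the OpenAI order above. A quick check, again using only the article's numbers (5 turbines, 380 MW each, a cluster of "more than 600,000" chips), so the per-chip result is an upper bound on the article's own terms:

```python
# Rough check of the xAI turbine figures reported above. Inputs are the
# article's numbers, not independently verified; cooling and facility
# overhead are implicitly folded into the per-chip figure.
turbines = 5
mw_each = 380
total_mw = turbines * mw_each               # 1900 MW of new capacity
chips = 600_000                             # article says "more than 600,000"
kw_per_chip = total_mw * 1_000 / chips      # implied all-in budget per chip
print(total_mw, round(kw_per_chip, 2))      # 1900 3.17
```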
4x the speed, crushing Cursor's new model: SWE-1.5, built on thousands of NVIDIA GB200s, fulfills Devin's dream, yet real-world tests reveal a performance "Waterloo"?
36Kr · 2025-10-31 12:16
Core Insights
- Cognition has launched its new high-speed AI coding model SWE-1.5, designed for high performance and speed in software engineering tasks, now available in the Windsurf code editor following its acquisition of Windsurf in July [1][2]
- SWE-1.5 operates at speeds up to 950 tokens per second, making it 13 times faster than Anthropic's Sonnet 4.5 model, and significantly improving task completion times from 20 seconds to 5 seconds [2][4]

Model Performance
- SWE-1.5 is a cutting-edge model with hundreds of billions of parameters, designed to provide top-tier performance without compromising speed [2]
- The model achieved a score of 40.08% in the SWE-Bench Pro benchmark, ranking just below Claude's Sonnet 4.5, which scored 43.60% [4]

Technical Infrastructure
- The model is trained on an advanced cluster of thousands of NVIDIA GB200 NVL72 chips, which can enhance performance by up to 30 times compared to NVIDIA H100 GPUs while reducing costs and energy consumption by up to 25% [8]
- SWE-1.5 utilizes a custom Cascade intelligent framework for end-to-end reinforcement learning, emphasizing the importance of high-quality coding environments for downstream model performance [9]

Development Strategy
- The development of SWE-1.5 is part of a broader strategy to integrate it into the Windsurf IDE, aiming to create a unified system that combines speed and intelligence [10]
- Cognition plans to continuously iterate on model training, framework optimization, and tool development to enhance speed and accuracy [11]

Market Positioning
- The launch of SWE-1.5 coincides with the release of Cursor's Composer model, indicating a strategic convergence in the AI developer tools market, with both companies focusing on proprietary models and low-latency developer experiences [13]
- SWE-1.5's processing speed of 950 tokens per second is nearly four times faster than Composer's 250 tokens per second, highlighting its competitive edge [14]
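The throughput claims in the summary can be checked for internal consistency. A small sketch using only the article's figures (950 and 250 tokens per second; none of these rates are measured here):

```python
# Consistency check of the throughput claims in the summary above.
# 950 tok/s (SWE-1.5) and 250 tok/s (Composer) are the article's figures.
swe15_tps = 950                        # SWE-1.5 throughput, tokens per second
composer_tps = 250                     # Cursor's Composer throughput
speedup = swe15_tps / composer_tps     # 3.8, i.e. "nearly four times"
implied_sonnet_tps = swe15_tps / 13    # ~73 tok/s if the "13x" claim holds
print(speedup, round(implied_sonnet_tps, 1))   # 3.8 73.1
```

The 3.8x ratio matches the article's "nearly four times" wording, and the implied ~73 tok/s baseline for Sonnet 4.5 is what the "13 times faster" claim requires.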
4x the speed, crushing Cursor's new model! SWE-1.5, built on thousands of NVIDIA GB200s, fulfills Devin's dream! Real-world tests reveal a performance "Waterloo"?
AI前线 · 2025-10-31 05:42
Core Insights
- Cognition has launched its new high-speed AI coding model SWE-1.5, designed for high performance and speed in software engineering tasks, now available in the Windsurf code editor [2][3]
- SWE-1.5 operates at a speed of up to 950 tokens per second, making it 13 times faster than Anthropic's Sonnet 4.5 model, and significantly improving task completion times [3][4][6]

Performance and Features
- SWE-1.5 is built on a model with hundreds of billions of parameters, aiming to provide top-tier performance without compromising speed [3][4]
- The model's speed advantage is attributed to a collaboration with Cerebras, which optimized the model for better latency and performance [3][6]
- In the SWE-Bench Pro benchmark, SWE-1.5 achieved a score of 40.08%, just behind Sonnet 4.5's 43.60%, indicating near-state-of-the-art coding performance [6]

Development and Infrastructure
- SWE-1.5 is trained on an advanced cluster of thousands of NVIDIA GB200 NVL72 chips, which offer up to 30 times better performance and 25% lower costs compared to previous models [10]
- The training process utilizes a custom Cascade AI framework and incorporates extensive reinforcement learning techniques to enhance model capabilities [10][11]

Strategic Vision
- The development of SWE-1.5 is part of a broader strategy to integrate AI coding capabilities directly into the Windsurf IDE, enhancing user experience and performance [13][15]
- Cognition emphasizes the importance of a collaborative system that includes the model, inference process, and agent framework to achieve high speed and intelligence [13][14]

Market Position and Competition
- The launch of SWE-1.5 coincides with Cursor's release of its own high-speed model, Composer, indicating a strategic convergence in the AI developer tools market [17]
- Both companies are leveraging reinforcement learning in their models, highlighting a shared approach to creating efficient coding agents [17]

User Feedback and Performance
- Early user feedback on SWE-1.5 indicates a perception of high speed, although some users reported issues with task completion compared to other models like GPT-5 [18][19]