Nvidia Plans to Launch a New Chip, with OpenAI as a Major Customer; China Releases Its First National Standard System for Humanoid Robots and Embodied Intelligence | Smart Manufacturing Daily
创业邦 (Cyzone) · 2026-03-01 04:09
1. [Nvidia plans to launch a new chip, with OpenAI as a major customer] Nvidia plans to release a new processor custom-built for OpenAI and other customers, aimed at powering faster and more efficient tools. This marks a major shift in its business and could redefine the shape of the AI race. According to people familiar with the matter, the company is designing a new system for AI inference computing — the computation that lets AI models respond to user requests. The new platform will be officially unveiled next month at the Nvidia GTC developer conference in San Jose and will integrate chips designed by the startup Groq. Some of these people said OpenAI has agreed to become one of the largest customers for the new processor, a major win for Nvidia. The ChatGPT maker, already a core Nvidia customer, has spent recent months seeking more efficient alternatives to Nvidia's chips and last month signed with a chip startup to add another supply option. (Cailian Press)

2. [Research firm: global smart-glasses shipments surged 139% year-on-year in H2 2025] Counterpoint Research data shows that global smart-glasses shipments grew 139% year-on-year in the second half of 2025, with AI smart glasses the dominant category at 88% of total shipments. The average selling price of AI smart glasses rose from $347 in the first half of 2025 to $360. (Cailian Press ...
Nvidia Plans to Launch a New Chip, with OpenAI as a Major Customer
Xin Lang Cai Jing (Sina Finance) · 2026-02-28 03:13
Core Insights
- Nvidia plans to release a new processor specifically designed for OpenAI and other clients, aiming to create faster and more efficient tools, marking a significant shift in its business strategy that could redefine the AI competition landscape [1][5]
- The new platform, set to be unveiled at the Nvidia GTC developer conference next month, will integrate chips designed by the startup Groq and focus on AI inference computing, which is becoming a competitive focal point in the industry [1][5]

Group 1: Market Dynamics
- Nvidia currently dominates the GPU market with over 90% share, but its flagship products face performance bottlenecks as workloads shift toward inference computing [2][6]
- Competitors such as Google and Amazon have launched their own chips to rival Nvidia's flagship products, increasing pressure on Nvidia to develop more efficient chips for AI applications [1][2]
- The explosive growth of autonomous coding technologies has driven a surge in demand for new types of chips that can handle complex AI tasks more efficiently [1][2]

Group 2: Client Relationships
- OpenAI has agreed to become one of the largest customers for Nvidia's new processor, a significant win for Nvidia, as OpenAI has been seeking more efficient alternatives to Nvidia's chips [1][5]
- OpenAI recently announced a large-scale procurement of dedicated inference computing power from Nvidia, indirectly referencing the new processor, while also signing a major agreement with Amazon to use its Trainium chips [1][5]

Group 3: Technological Developments
- Nvidia's high-performance GPUs, including the Hopper, Blackwell, and Rubin series, are recognized as top products for training large-scale AI models, but rising demand for inference has prompted calls for more cost-effective and energy-efficient solutions [2][6]
- AI inference is divided into two main stages: prefill, in which the model processes the user's prompt, and decoding, in which the model generates its response; the decoding stage is often the slower of the two [8]
- Nvidia's recent $20 billion acquisition of Groq's key technology, together with the integration of Groq's core management team, ranks among the largest talent acquisitions in Silicon Valley history and signals a strategic shift toward strengthening inference capabilities [7]
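The two-stage inference flow described above can be illustrated with a minimal toy sketch in Python. This is purely illustrative and not any real model API: the `prefill` and `decode_step` functions, the token naming, and the dictionary "state" (standing in for a real model's KV cache) are all hypothetical, chosen only to show why prefill can run over the whole prompt at once while decoding is inherently sequential.

```python
# Toy illustration of the prefill/decode split in AI inference.
# Prefill processes the entire prompt in a single pass; decode then
# emits output tokens one at a time, each step depending on the state
# left by the previous step -- which is why decoding tends to be slower.

def prefill(prompt_tokens):
    """Stage 1: ingest all prompt tokens at once, producing an initial
    state (a stand-in for the KV cache a real model would build)."""
    return {"seen": list(prompt_tokens)}

def decode_step(state):
    """Stage 2 (one step): generate a single output token from the
    current state. Toy rule: the token encodes how many tokens have
    been seen so far."""
    token = f"t{len(state['seen'])}"
    state["seen"].append(token)
    return token

def generate(prompt_tokens, max_new_tokens):
    state = prefill(prompt_tokens)       # parallel over the whole prompt
    out = []
    for _ in range(max_new_tokens):      # strictly sequential loop
        out.append(decode_step(state))
    return out

print(generate(["hello", "world"], 3))   # -> ['t2', 't3', 't4']
```

The key structural point is that the decoding loop cannot be parallelized across its own steps, since each `decode_step` reads the state written by the previous one; hardware aimed at inference, such as Groq's chips, targets exactly this sequential bottleneck.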