Rubin Architecture
NVIDIA's Rubin Reshapes Cooling Demand; Ningbo Jingda Locks In the Equipment Windfall
Zheng Quan Shi Bao Wang· 2026-01-09 07:24
During CES in early January, NVIDIA unveiled Rubin, the computing architecture for its next generation of AI chips. Beyond the demand for 100% full liquid cooling driven by the explosive growth in rack power density, the Rubin architecture also continues and reinforces the "45°C warm-water liquid cooling" technical route, pushing the data center liquid cooling market toward a new wave of growth.

As a global leader in heat exchanger equipment manufacturing, Ningbo Jingda (603088) draws on customized forming technology and equipment R&D capabilities accumulated over many years in air conditioning, refrigeration, cold-chain, and automotive thermal management. Its role as a "seller of shovels" has only grown stronger in the AI liquid-cooling transformation unfolding in early 2026, and it is building a new growth curve on that basis.

Data center liquid cooling has become a new source of growth for the company

The liquid cooling system of an AI data center is typically split into a primary side (outdoor) and a secondary side (server room). The secondary side transfers heat from server chips into the coolant through cold plates, coolant distribution units (CDUs), and related components; the primary side cools the heat-laden coolant via outdoor heat-rejection equipment (such as dry coolers, cooling towers, and chillers) and discharges the heat to the atmosphere (see the flow-rate sketch below). Ningbo Jingda's downstream customers make products for both the primary and secondary sides, covering heat exchangers, condensers, pipe fittings, microchannel cold plates (MCCP), and more.

With NVIDIA's official release of the Rubin architecture, data center liquid cooling is undergoing a systemic reshaping, from technical architecture to industry structure. Model iteration keeps driving compute demand higher, in turn boosting chips, servers ...
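To put the "45°C warm water" route in concrete terms, the sketch below estimates the secondary-side coolant flow a single rack would need to carry its heat to the CDU. The rack power (130 kW), coolant temperature rise (10 K), and water properties are illustrative assumptions, not figures from the article.

```python
# Back-of-the-envelope sketch (assumed figures, not from the article): coolant
# flow needed on the secondary (server-room) side of a liquid-cooled rack.

WATER_SPECIFIC_HEAT_J_PER_KG_K = 4186.0   # c_p of water, J/(kg*K)
WATER_DENSITY_KG_PER_L = 0.99             # approximate density of warm water

def required_flow_lpm(rack_power_kw: float, delta_t_k: float) -> float:
    """Coolant flow (litres per minute) to remove rack_power_kw at a given
    coolant temperature rise, using Q = m_dot * c_p * dT."""
    heat_w = rack_power_kw * 1000.0
    mass_flow_kg_s = heat_w / (WATER_SPECIFIC_HEAT_J_PER_KG_K * delta_t_k)
    return mass_flow_kg_s / WATER_DENSITY_KG_PER_L * 60.0

if __name__ == "__main__":
    # Assumed example: a 130 kW rack, 45 C warm-water inlet, 10 K coolant rise.
    print(f"{required_flow_lpm(130.0, 10.0):.0f} L/min")  # roughly 190 L/min
```

The same arithmetic also shows why warm-water inlets are attractive: a higher inlet temperature does not change the required flow for a given ΔT, but it lets the primary side reject heat with dry coolers instead of chillers for more of the year.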
Heads of Four Chip Giants Make a Rare Appearance on the Same Stage
21 Shi Ji Jing Ji Bao Dao· 2026-01-08 05:57
CES 2026, the global technology showcase, opened on January 6 local time, and the heads of the chip giants took the stage one after another: NVIDIA CEO Jensen Huang, Intel CEO Lip-Bu Tan, AMD CEO Lisa Su, and Qualcomm CEO Cristiano Amon. Notably, although these four chip giants both cooperate and compete with one another, they rarely appear on the same stage.

They are all pointing toward a grander goal for the AI era: demand for hundreds-fold growth in compute, and behind it the room for AI applications to expand across everything from cloud to edge.

Chinese vendors, meanwhile, are playing an increasingly important role at CES. From AI glasses to robots in every form factor, their active presence has drawn intense attention overseas, at times leaving their booths packed to bursting.

Unitree Robotics, for example, brought its signature boxing match to the show floor; a 21st Century Business Herald reporter found spectators packed several rows deep, phones held high to film the agile moves of the G1 robot.

Behind the attention on this wave of Chinese native AI hardware makers is an upgrade in the global role of Chinese companies: on the strength of long-accumulated supply chain and R&D capabilities, China has become a key force in global technology innovation.

The accelerating development of large AI models rests on the rapid evolution of the underlying compute infrastructure. At CES, the chip chiefs all stressed the breakneck growth of compute and the new application opportunities it creates.

NVIDIA founder and CEO Jensen Huang said that the "ChatGPT moment" of physical AI has arrived, and that machines are beginning to gain the ability to understand the real world, reason, and act ...
Jan. 7 Daily Brief
Ge Long Hui· 2026-01-08 05:16
1. NVIDIA's new-generation Rubin architecture uses about 15% more memory and storage than the previous-generation Blackwell, so storage stocks kept surging. The most extreme case was SanDisk, up 28%, while Micron also rose about 10%. I had bought some derivatives going long on SK Hynix earlier, so I got a piece of that. Domestic memory names actually have little to do with AI, but in the A-share market the logic is "if I say you benefit, you benefit," so they all shot up as well.

2. Xiaomi terminated its cooperation with the influencer "Wanneng de Daxiong" and dismissed the staff directly responsible for the partnership; group vice president Xu Fei and PR head Xu Jieyun were formally reprimanded and had all of their 2025 performance bonuses withheld. Frankly, this may be the most damaging hit "Wanneng de Daxiong" has ever landed on Xiaomi. In the end, though, the product does the talking: Xiaomi launched the new SU7, which looks very solid, and its assisted driving is said to have improved greatly, having already entered ...

4. ByteDance is rumored to be partnering with an automaker to build cars. ByteDance probably won't do the hardware itself; the current rumor is that it will team up with Seres. Most likely Huawei handles the vehicle, the cockpit, and the intelligent driving, while ByteDance takes care of the other AI-related pieces. Huawei plus ByteDance is quite a lineup...

5. Morgan Stanley put out a glowing report on Pop Mart, forecasting roughly 21% growth again this year and, in the especially optimistic case, around 10% quarter-on-quarter. Pop Mart has always been a hugely controversial company, with both sides having their arguments; in the end only the results can settle it.

That's all.
CES 2026 Sees an AI Ecosystem Shake-Up as Chinese Vendors Join the Global Core
21 Shi Ji Jing Ji Bao Dao· 2026-01-07 23:14
Editor's note: A new game board for the AI ecosystem

Although the technical routes of large AI models have not yet fully converged and the AI applications built on them are still exploratory, the field is blooming in every direction: physical AI has drawn particular attention this year, and native AI hardware is probing the market in ever richer forms.

Chinese vendors, meanwhile, are playing an increasingly important role at CES. From AI glasses to robots in every form factor, their active presence has drawn intense attention overseas, at times leaving their booths packed to bursting.

Unitree Robotics, for example, brought its signature boxing match to the show floor; a 21st Century Business Herald reporter found spectators packed several rows deep, phones held high to film the agile moves of the G1 robot.

Behind the attention on this wave of Chinese native AI hardware makers is an upgrade in the global role of Chinese companies: on the strength of long-accumulated supply chain and R&D capabilities, China has become a key force in global technology innovation.

Racing on underlying compute

The accelerating development of large AI models rests on the rapid evolution of the underlying compute infrastructure. At CES, the chip chiefs all stressed the breakneck growth of compute and the new application opportunities it creates.

NVIDIA founder and CEO Jensen Huang said that the "ChatGPT moment" of physical AI has arrived. Machines are beginning to gain the ability to understand the real world, reason, and act. Robotaxis will be among the earliest applications to benefit. On stage he released the NVIDIA Alpamayo family of open-source AI models, simulation ...
NVIDIA Snaps Up 800,000 TSMC Wafers! The 2026 AI Chip War Is About to Erupt
Sou Hu Cai Jing· 2025-12-11 08:38
Core Insights
- TSMC's advanced packaging capacity is fully booked, with NVIDIA accounting for over half of the orders, indicating strong demand for semiconductor manufacturing [1]
- NVIDIA has reserved 800,000 to 850,000 wafers for 2026, significantly outpacing competitors like Broadcom and AMD (a rough dies-per-wafer sketch follows this summary) [1][3]

Group 1: NVIDIA's Capacity Reservation
- NVIDIA's large-scale capacity reservation is primarily to meet the growing production demands for the Blackwell Ultra chip and to prepare for the next-generation Rubin architecture [3]
- Current orders do not include potential demand from the Chinese market for the H200 AI chip, suggesting that NVIDIA's capacity needs may increase further [3]

Group 2: TSMC's Response to Demand
- TSMC is actively expanding its advanced packaging facilities, planning to build eight wafer fabs at the AP7 plant and introducing two new packaging factories in Arizona, expected to start mass production in 2028 [3]
- Due to limited capacity, TSMC has decided to outsource some processes of its CoWoS advanced packaging to companies like ASE and SPIL in Taiwan [3]

Group 3: Industry Alternatives and Technology
- The outsourcing decision has prompted some companies to consider alternative solutions, with Intel's EMIB technology gaining attention as a viable option [3]
- EMIB offers advantages in area and cost, allowing for highly customized packaging layouts, but for GPU suppliers like NVIDIA and AMD, TSMC's CoWoS remains the preferred solution due to its bandwidth, transmission speed, and low latency requirements [3]
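For a rough sense of what an 800,000-wafer reservation means in chips, the sketch below applies a standard dies-per-wafer approximation. The die size (26 mm × 33 mm), the 70% yield, and the formula itself are illustrative assumptions, not figures reported in the article.

```python
# Illustrative sketch: approximate good compute dies implied by a wafer
# reservation, under assumed die dimensions and yield (not article figures).
import math

WAFER_DIAMETER_MM = 300.0

def gross_dies_per_wafer(die_w_mm: float, die_h_mm: float) -> int:
    """Common approximation: pi*(d/2)^2/S - pi*d/sqrt(2*S) for die area S."""
    area = die_w_mm * die_h_mm
    d = WAFER_DIAMETER_MM
    return int(math.pi * (d / 2.0) ** 2 / area - math.pi * d / math.sqrt(2.0 * area))

if __name__ == "__main__":
    gross = gross_dies_per_wafer(26.0, 33.0)   # assumed reticle-sized die
    good = int(gross * 0.70)                   # assumed 70% yield
    print(f"gross dies per wafer: {gross}, good dies per wafer: {good}")
    print(f"good dies from 800,000 wafers: {800_000 * good:,}")
```

Under these assumptions the reservation works out to tens of millions of good dies, which is only a ceiling: actual output also depends on packaging (CoWoS) capacity, the point the article centers on.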
英伟达GTC Keynote直击
2025-03-19 15:31
Summary of Key Points from the Conference Call

Company and Industry Overview
- The conference call primarily discusses **NVIDIA** and its developments in the **data center** and **AI** sectors, particularly in relation to the **GTC conference** held in March 2025.

Core Insights and Arguments
- **Data Center Product Launch Delays**: NVIDIA's data center products in Japan are delayed, with the first generation expected in 2026 instead of 2025, and the HBM configuration is lower than anticipated, with 12 layers instead of the expected 16 layers and a capacity of 288GB (see the capacity sketch after this summary) [2][3]
- **Rubin Architecture**: The Rubin architecture is set to launch in 2026, featuring a significant performance upgrade, with the second generation expected in 2027 to double the performance [3][4]
- **CPO Technology**: The Co-Packaged Optics (CPO) technology aims to enhance data transmission speeds and will be introduced with new products like Spectrum X and Quantum X [6]
- **Small Computing Projects**: NVIDIA is focusing on small computing projects like DGX BasePOD and DGX Station, targeting developers with high AI computing capabilities [7]
- **Pre-trained Models and Compute Demand**: The rapid growth of pre-trained models has led to a tenfold increase in model size annually, significantly driving up compute demand, which has resulted in a doubling of CSP capital expenditures over the past two years [9][10]
- **Inference Stage Importance**: The conference emphasized the significance of the inference stage, with NVIDIA aiming to reduce AI inference costs through hardware and software innovations [11][12]
- **Capital Expenditure Growth**: North America's top five tech companies are expected to increase capital expenditures by 30% in 2025 compared to 2024, nearly doubling from 2023 [16]
- **Impact of TSMC's Capacity**: TSMC's increased capacity is projected to affect NVIDIA's GB200 and GB300 shipment volumes, which are expected to decline from 40,000 units to between 25,000 and 30,000 units [17][20]

Additional Important Insights
- **Hardware Changes**: The GB200 and GB300 models show significant changes in HBM usage, with GB300 increasing from 8 layers to 12 layers, and a rise in power consumption [15]
- **Market Performance**: Chinese tech stocks have outperformed U.S. tech stocks, indicating a potential shift in market dynamics [13]
- **Future Product Releases**: NVIDIA's product roadmap includes significant advancements in GPU architecture, with the potential to influence the entire industry chain [14]

This summary encapsulates the critical developments and insights shared during the conference call, highlighting NVIDIA's strategic direction and the broader implications for the tech industry.
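As a quick check on the HBM figure above, the sketch below shows the capacity arithmetic that makes a 12-layer stack line up with 288GB per GPU. The stack count (8 per GPU) and per-die density (3GB) are assumptions for illustration; the call summary gives only the layer count and the total capacity.

```python
# Illustrative HBM capacity arithmetic. Stack count and per-die density are
# assumed values, not details given in the conference call summary.

def hbm_capacity_gb(stacks: int, layers_per_stack: int, gb_per_die: int) -> int:
    """Total HBM capacity = stacks x DRAM layers per stack x capacity per layer."""
    return stacks * layers_per_stack * gb_per_die

if __name__ == "__main__":
    print(hbm_capacity_gb(8, 12, 3))   # 288 GB, matching the 12-layer figure
    print(hbm_capacity_gb(8, 16, 3))   # 384 GB, what a 16-layer build would imply
```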
More Than Chips! NVIDIA's Blockbuster Announcements! Packed Crowds on Site as Jensen Huang Speaks
21世纪经济报道· 2025-03-19 03:45
Core Viewpoint
- The article highlights NVIDIA's GTC 2025 event, emphasizing the shift in AI focus from training to inference, showcasing new hardware and software innovations aimed at enhancing AI capabilities and applications [1][3][30].

Group 1: Key Innovations and Products
- NVIDIA introduced the Blackwell Ultra GPU series and the next-generation architecture Rubin, with plans for the Vera Rubin NVL144 platform to launch in the second half of 2026 and Rubin Ultra NVL576 in the second half of 2027 [5][10].
- The Blackwell Ultra architecture significantly enhances AI performance, achieving a 1.5x improvement in AI performance compared to the previous generation, and offers a 50x increase in revenue opportunities for AI factories [8][10].
- The new CPO switch technology aims to reduce data center power consumption by 40MW and improve network transmission efficiency, laying the groundwork for future large-scale AI data centers [13][14].

Group 2: AI Inference and Software Upgrades
- NVIDIA's new AI inference service software, Dynamo, is designed to maximize token revenue in AI models, achieving a 40x performance improvement over the previous Hopper generation (see the token-revenue sketch after this summary) [19][21].
- The introduction of AI agents and the Llama Nemotron series models aims to facilitate complex inference tasks, enhancing capabilities in various applications such as automated customer service and scientific research [20][30].

Group 3: Robotics and Physical AI
- NVIDIA launched the GROOT N1, the world's first open-source humanoid robot model, designed for various tasks such as material handling and packaging, indicating a significant step towards the commercialization of humanoid robots [25][30].
- The company also introduced new desktop AI supercomputers, DGX Spark and DGX Station, aimed at providing high-performance AI computing capabilities for researchers and developers [23][24].

Group 4: Market Sentiment and Future Outlook
- Despite the significant technological advancements presented at GTC 2025, NVIDIA's stock price fell by 3.43% post-event, reflecting ongoing market concerns regarding AI spending and competition [28][29].
- Analysts suggest that while there are concerns about AI capital expenditure growth in 2026, the overall sentiment may improve due to the innovations showcased at the event [29][30].
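To illustrate why the article frames Dynamo's gains in terms of "token revenue," the sketch below converts serving throughput into revenue per GPU-hour at a fixed token price. The baseline throughput and the price are assumed values; only the 40x multiplier comes from the article.

```python
# Illustrative sketch (assumed figures, not from the article): revenue per
# GPU-hour scales linearly with tokens served per second at a fixed price,
# which is why an inference throughput gain reads as a revenue gain.

def revenue_per_gpu_hour(tokens_per_sec: float, usd_per_million_tokens: float) -> float:
    """Hourly revenue for one GPU serving tokens at a fixed per-token price."""
    return tokens_per_sec * 3600.0 * usd_per_million_tokens / 1_000_000.0

if __name__ == "__main__":
    price = 2.0                  # assumed $ per 1M output tokens
    baseline_tps = 100.0         # assumed per-GPU throughput on the older generation
    base = revenue_per_gpu_hour(baseline_tps, price)
    boosted = revenue_per_gpu_hour(baseline_tps * 40.0, price)   # claimed 40x gain
    print(f"${base:.2f}/h -> ${boosted:.2f}/h")                  # $0.72/h -> $28.80/h
```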