AI ASIC
Morgan Stanley: AI ASIC - Reconciling Trainium2 Chip Shipments
Morgan Stanley· 2025-07-11 01:13
July 8, 2025 09:00 PM GMT | Asia Technology | Asia Pacific
AI ASIC: Reconciling Trainium2 chip shipments
We attribute the AWS Trainium2/2.5 chip shipment mismatch (between semis and systems) to unstable PCB yield rates. We expect chip shipments to reach around 1.1mn in 2025. Reason for this report: We conducted follow-up research on AI Supply Chain: AI ASIC dynamics: Trainium and TPU, as some investors expressed confusion about the wide range of AWS Trainium2/2.5 chip shipment assumptions, from 1mn to 2mn un ...
Electronics Industry Weekly Tracker: Marvell Raises Data Center TAM; Watch the ASIC Trend Driving the Copper Interconnect Market - 20250622
Soochow Securities· 2025-06-22 10:50
◼ The AI ASIC trend is now clear; watch the data center copper interconnect market. Judging from the CSP ASIC server designs known so far, short-reach interconnects over copper cable have become the norm. According to a Fibermall report, AWS is expected to procure 1.5mn in-house chips this year, most of them interconnected with AECs; although Trainium2's compute is below Nvidia's H100, it can use 400G AECs, and with Trainium3 launching at year-end, demand for 800G AECs should rise. Microsoft has moved more slowly than AWS on AEC copper cables but has now begun building AI networks with them. Google's TPU interconnects today rely mainly on passive copper DACs and may shift to AECs as speeds increase. There is no clear information yet on Meta using AEC cables, but its Minerva rack system uses 2 detachable cable backplane boxes to connect the compute trays and network trays. Each backplane box uses 112G PAM4 connectors and cables and holds 4 cable groups of 384 cable pairs each, forming a 1,536-pair cable network across 8 MTIA trays, 6 network trays, and 1 rack-management tray (a quick check of this count is sketched below). Beyond the leading CSPs, X.AI has also shown substantial demand for AECs, ...
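A minimal arithmetic sketch of the cable-pair counts quoted above; the assumption that the 1,536-pair figure describes a single backplane box (rather than both boxes combined) is ours, since the excerpt does not say:

```python
# Quick arithmetic check of the Minerva cable-backplane figures quoted above.
# Assumption (not explicit in the excerpt): the 1,536-pair figure describes a
# single backplane box; with two boxes the rack-level total would double.

groups_per_box = 4        # cable groups per detachable backplane box
pairs_per_group = 384     # cable pairs per group
boxes_per_rack = 2        # detachable backplane boxes in the Minerva rack

pairs_per_box = groups_per_box * pairs_per_group
print(pairs_per_box)                   # 1536, matching the quoted network size
print(boxes_per_rack * pairs_per_box)  # 3072 pairs if both boxes are counted
```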
Morgan Stanley: Global Technology - AI Supply Chain ASIC Dynamics - Trainium and TPU
Morgan Stanley· 2025-06-19 09:46
1) AWS Trainium: whether Marvell will win part of the Trainium3 business remains a global debate (link). At the least, our checks suggest that Alchip taped out the Trainium3 design back in February and that the wafers were already out in May. Charlie Chan thinks Alchip has a higher chance of winning the 2nm Trainium4, which should be decided this summer. Astera Labs and Alchip just formed a partnership in connectivity chip design (I/O chiplets). We think that may help Alchip compete for next-generation XPU ASIC proje ...
Nomura: Meta Is Ambitious on ASIC Servers; Its MTIA AI Servers Could Mark a Milestone in 2026
Nomura· 2025-06-19 09:46
Asia AI Server | Global Markets Research | EQUITY: TECHNOLOGY
Meta's ambition on ASIC AI servers
Meta's MTIA AI servers could mark a milestone in 2026F
AI ASIC is booming; a potential crossover in units from 2H26-2027
In the world of AI servers, nVidia (NVDA US, Not rated) has so far dominated 80%+ of the market by "value", vs. ASIC AI servers at roughly 8-11% value share (based on Bloomberg consensus estimates). However, if we just compare the number of units for AI ASICs vs. nVidia's AI GPUs (which can ...
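To make the value-versus-units point concrete, here is a rough illustrative sketch; the blended ASPs are hypothetical placeholders, not figures from the Nomura report:

```python
# Illustrative only: hypothetical ASPs to show how an 8-11% value share can
# still imply a unit count approaching Nvidia's GPU volumes.

gpu_asp = 30_000    # assumed blended ASP per Nvidia AI GPU (hypothetical)
asic_asp = 5_000    # assumed blended ASP per AI ASIC (hypothetical)

asic_value_share = 0.10               # midpoint of the 8-11% range quoted above
gpu_value_share = 1 - asic_value_share

# Normalize total accelerator market value to 1 and convert each share to units.
gpu_units = gpu_value_share / gpu_asp
asic_units = asic_value_share / asic_asp
print(round(asic_units / gpu_units, 2))  # ~0.67: far closer in units than in value
```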
Marvell (MRVL.US) Ignites Expectations of an AI ASIC Demand Boom; Is Broadcom (AVGO.US) the Biggest Beneficiary?
Zhitong Finance· 2025-06-18 14:40
Core Viewpoint - Marvell Technology (MRVL.US) has seen a significant stock price increase due to positive evaluations from top Wall Street analysts of its custom AI ASIC chip activities and potential market announcements [1][3]
Group 1: Market Opportunities
- Analysts at Evercore ISI predict that the new AI ASIC chip designs could ramp quickly between 2026 and 2027, pointing to strong future demand [1]
- Marvell expects each custom AI chip design win to generate billions of dollars in lifecycle revenue within 1.5 to 2 years, while each XPU Attach win could contribute hundreds of millions within 2 to 4 years [2]
- The total addressable market (TAM) for custom data center chips has been raised to $94 billion, a 26% increase from last year's estimate [3]
Group 2: Financial Projections
- Marvell has raised its financial targets, with analysts noting that earnings per share could reach $8 by 2028, exceeding Wall Street estimates by 60% [4]
- The company aims to capture at least 20% of the TAM, with over 50% of its data center revenue expected to come from AI ASIC-related demand (a quick arithmetic sketch follows below) [3][5]
Group 3: Competitive Landscape
- Broadcom (AVGO.US) is identified as the biggest long-term beneficiary of the AI ASIC boom, holding a dominant share of roughly 60% of the AI ASIC market, while Marvell holds 13% to 15% [6][7]
- The AI ASIC market is expected to grow significantly, with major tech companies such as Google, Microsoft, and Amazon investing heavily in AI ASIC chips, pointing to a shift in market dynamics away from GPU dominance [7]
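A quick sketch of what the quoted figures imply arithmetically, using only the TAM, share target, and revision percentage given above:

```python
# Implied figures from the numbers quoted in the summary above.

tam = 94e9            # custom data center silicon TAM ($94bn)
target_share = 0.20   # Marvell's stated goal of capturing at least 20%

implied_revenue = tam * target_share
print(f"${implied_revenue / 1e9:.1f}bn")  # ~$18.8bn implied custom-silicon revenue

# The 26% TAM raise also implies last year's figure:
prior_tam = tam / 1.26
print(f"${prior_tam / 1e9:.1f}bn")        # ~$74.6bn implied prior-year TAM
```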
Survey of Huawei Ascend 910 Series 2025 Shipments
傅里叶的猫 (Fourier's Cat)· 2025-05-20 13:00
This is a research report from Mizuho Securities that covers Broadcom, Nvidia, AMD, Supermicro, and Huawei. The report has been posted in the Planet (星球) group; interested readers can view the original there.
Mizuho expects Broadcom's custom ASIC chips (TPUv7p/MTIA2) to ramp sharply in 2026 and possibly be used for OpenAI's Strawberry and Apple's Baltra projects in 2H26. In 2024, Broadcom's custom ASICs accounted for 70-80% of those in use, making it the undisputed AI ASIC leader. That figure presumably excludes self-designed, self-consumed AI ASICs such as Google's TPU.
In Saudi Arabia's UMAIN project, 4,000 GB200 NVL72 servers (equivalent to roughly 280k Nvidia GPUs) and 350k AMD GPUs are to be deployed over the next five years. In the UAE's G42 project, there is a commitment to import 500k Nvidia GB200 GPUs per year (worth $15 billion). The author is skeptical of that number and doubts it can be sustained (a quick check of these figures is sketched below).
The analyses of Supermicro and AMD are fairly brief, so we will not cover them here.
Taken together, Ascend 910 series shipments of more than 700k units this year look achievable. One part of the Huawei analysis is quite interesting: the report expects Ascend 910 orders to exceed 700k units in 2025, with the next-generation Ascend 920 to launch in 2026.
But the report also mentions ...
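A rough check of the GPU counts and implied pricing in the excerpt above; the only figure not quoted there is the 72-GPU-per-rack count, which follows from the GB200 NVL72 naming:

```python
# Back-of-the-envelope checks on the UMAIN and G42 figures quoted above.

racks = 4_000
gpus_per_nvl72_rack = 72             # GB200 NVL72: 72 GPUs per rack
print(racks * gpus_per_nvl72_rack)   # 288,000, in line with the ~280k GPUs quoted

g42_gpus_per_year = 500_000
g42_value_per_year = 15e9            # $15bn per year
print(g42_value_per_year / g42_gpus_per_year)  # 30000.0: ~$30k implied per GB200 GPU
```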
Nvidia (NVDA.US) Is Unwilling to Give Up the China Market! Plans Another "China-Specific" AI Chip
Zhitong Finance· 2025-05-02 14:15
Zhitong Finance APP learned that, according to the latest media reports, Nvidia (NVDA.US), the global "AI chip king", has notified its most important customers in the China market, including ByteDance, Alibaba, and Tencent, that it is reworking its AI chip design architecture to comply with the US government's latest export restrictions and intends to keep supplying AI chips to Chinese companies.
In fiscal 2025, which ended January 26, Nvidia booked $17.11 billion of sales in the China market, roughly 13% of the semiconductor giant's $130.5 billion in total revenue (arithmetic sketched below).
Will the upcoming "China-specific" AI chip take the ASIC route rather than the general-purpose GPU route?
After the latest report by The Information, some semiconductor industry analysts said that, to deliver a China-specific AI chip that satisfies the US export ban, Nvidia may shift its technology route from general-purpose GPUs to AI ASICs dedicated to AI training and inference.
These analysts reportedly argued that the nature of the GPU architecture means Nvidia cannot quickly produce an AI chip that meets the US export restrictions unless it cuts performance drastically, and such a deep performance cut could leave Nvidia's AI chips uncompetitive on price/performance against domestic Chinese AI chips. Other analysts, however, said Nvidia's China AI chip strategy may instead focus on "a quick, moderate downgrade of the AI GPU architecture to stay under the regulatory red line" ...
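A quick check of the revenue-share arithmetic quoted above, using only the two figures given:

```python
# Check of the China revenue share quoted above.

china_sales = 17.11e9    # Nvidia China sales, fiscal 2025 ($17.11bn)
total_revenue = 130.5e9  # Nvidia total revenue, fiscal 2025 ($130.5bn)
print(f"{china_sales / total_revenue:.1%}")  # 13.1%, matching the ~13% quoted
```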