Memory Market Tracking
傅里叶的猫· 2025-05-28 14:42
Core Viewpoint
- The article reviews current trends and forecasts in the memory market, focusing on NAND and DRAM, and highlights the demand dynamics and inventory levels of major players such as Samsung and SK Hynix [1][2][3].

Summary by Sections

Market Overview
- Citing UBS research, Samsung and SK Hynix expect a modest increase in DRAM shipments for Q2 2025, with Samsung projecting less than 10% growth and SK Hynix over 20% for NAND [2].
- UBS has lowered its Q2 NAND price forecast from +5% to +3%, citing customer resistance to rising SSD prices [2].

Demand and Inventory
- As of Q1 2025, DRAM inventories are falling faster than expected, with smartphone customers holding about 10 weeks of inventory and PC manufacturers around 12 weeks [3].
- For NAND, smartphone manufacturers hold approximately 9 weeks of inventory, while SSD inventory stands at 11 weeks [3].

HBM Demand Forecast
- UBS has revised its 2025 HBM demand forecast down from 203 billion Gb to 189 billion Gb, still 105% year-on-year growth, and its 2026 forecast from 303 billion Gb to 291 billion Gb, a growth rate of 54% (a quick arithmetic check follows this summary) [3][24].
- ASIC demand is expected to outpace GPUs, with ASICs accounting for 54% of total HBM demand by 2026, up from 41% in 2025 [3].

Supplier Insights
- Samsung's HBM bit shipments are projected to rise from 5.1 billion Gb in 2024 to 11.2 billion Gb by 2026, representing 8.1% of its total DRAM bits [9].
- SK Hynix is expected to grow its HBM shipments from 6.8 billion Gb in 2024 to 17.4 billion Gb by 2026, accounting for 17% of its total DRAM bits [9].

Production Capacity
- ChangXin Memory's (CXMT) wafer production capacity is expected to reach 170,000 wafers per month by the end of 2024, with plans to increase to 230,000 wafers per month by the end of 2025 [24].
- Yangtze Memory Technologies (YMTC) is also expanding capacity and targeting 160-layer technology despite challenges from U.S. restrictions [25].

Competitive Landscape
- Samsung and SK Hynix lead in HBM production, while Micron is also increasing its HBM output significantly [9][24].
- DRAM supplier market shares for 2025 are detailed as Samsung 39.5%, SK Hynix 28.9%, and Micron 22.9% [27].

Future Outlook
- The article closes with a cautious outlook for the memory market, emphasizing uncertainty from tariff issues and potential specification downgrades in NAND demand [2][24].
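As a quick sanity check on the revised UBS figures quoted above, here is a minimal Python sketch (my own back-calculation, not part of the UBS report) that recovers the implied 2024 base and the 2026 growth rate:

```python
# Minimal sketch: back-check the implied 2024 base and the 2026 growth rate
# from the revised UBS HBM demand figures quoted above. Units: billion Gb.
demand_2025 = 189      # revised down from 203, cited at +105% year-on-year
demand_2026 = 291      # revised down from 303
yoy_2025 = 1.05        # +105%

implied_2024 = demand_2025 / (1 + yoy_2025)
implied_2026_growth = demand_2026 / demand_2025 - 1

print(f"Implied 2024 HBM demand: {implied_2024:.1f} billion Gb")   # ~92.2
print(f"Implied 2026 YoY growth: {implied_2026_growth:.0%}")       # ~54%, matching the cited rate
```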
Morgan Stanley--Export Controls Are Narrowing China's HBM Gap
傅里叶的猫· 2025-05-27 14:52
Core Insights
- Morgan Stanley's report argues that, as a result of U.S. export controls, China's HBM technology gap is narrowing, with ChangXin Memory Technologies (CXMT) aiming to produce HBM3/3E by 2027 [1][2].

Group 1: HBM Technology Development
- China currently lags the global leaders in HBM3 technology by 3-4 years, but this gap is expected to close as domestic AI chip production capabilities advance [2][3].
- The DRAM technology gap between CXMT and the market leaders has shrunk from 5 years to 3 years, thanks to significant progress in DRAM technology [2][3].
- The shift toward lower-cost AI inference solutions may enhance China's competitiveness in the HBM and high-end DRAM markets [3][4].

Group 2: Market Dynamics and Competitors
- China's semiconductor ecosystem is becoming more competitive, with local solutions emerging across chips, substrates, and assembly [4][5].
- Geopolitical tensions are pushing the Chinese tech industry to prioritize local components, lifting the market share of Chinese suppliers [5][6].
- By 2027, roughly 37% of wafer manufacturing capacity is expected to be concentrated in China, with notable advances in advanced memory nodes [5][6].

Group 3: ChangXin Memory (CXMT) Updates
- CXMT is progressing toward HBM production, with plans for small-scale HBM2 sampling by mid-2025 and mass production of HBM3 by 2026 [14][16].
- The company aims to raise its HBM capacity to roughly 100,000 wafers per month by the end of 2026 and to 400,000 wafers per month by the end of 2028 (see the growth sketch after this summary) [16][19].
- CXMT's DDR5 production currently lags the leading competitors by about 3 years, but the company is actively working to close this gap [18][19].

Group 4: Hybrid Bonding Technology
- China leads in hybrid bonding patents, which are crucial to the future of HBM, with significant advances by companies such as Yangtze Memory Technologies (YMTC) [20][27].
- Hybrid bonding is expected to improve the performance and yield of HBM products, and major manufacturers are considering it for future generations [27][28].

Group 5: GPU Market and AI Inference
- The introduction of alternative GPU products, such as NVIDIA's downgraded H20 GPU, is expected to significantly affect the HBM market, with potential revenue implications of approximately $806 million [9][12].
- The Chinese GPU market for AI inference is projected to grow at a CAGR of about 10% from 2023 to 2027, driven by increased adoption of workstation solutions [12][13].
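To put the cited CXMT ramp in perspective, the short sketch below computes the implied compound annual growth between the two quoted capacity points; the assumption of a smooth two-year ramp is mine, not Morgan Stanley's:

```python
# Minimal sketch: implied compound annual growth of CXMT capacity, assuming a
# smooth ramp from ~100K wafers/month (end-2026) to ~400K wafers/month (end-2028).
start_wpm = 100_000
end_wpm = 400_000
years = 2

cagr = (end_wpm / start_wpm) ** (1 / years) - 1
print(f"Implied capacity CAGR over {years} years: {cagr:.0%}")  # ~100% per year
```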
Capacity of Each TSMC Fab
傅里叶的猫· 2025-05-26 14:25
Core Viewpoint
- The article provides an overview of TSMC's wafer fabrication capacity across its fabs, highlighting production capabilities and expected changes in output for the upcoming quarters.

Group 1: TSMC's Fab Capacity Overview
- TSMC operates a total of 17 fabs located in Hsinchu, Tainan, Taichung, Shanghai, Nanjing, Kaohsiung, Washington, and Arizona [2][4].
- The data is measured in KWPM, i.e., monthly capacity in thousands of wafers (an aggregation sketch follows this summary) [3].

Group 2: Hsinchu Fabs
- Hsinchu hosts the most fabs, seven in total, with capacities of 30, 60, 41, 93, 135, and 3 KWPM listed for the respective fabs [4].
- Capacity is expected to remain stable in the second half of the year, with Fab 20 showing a slightly larger increase [4].

Group 3: Tainan Fabs
- Tainan houses three fabs: Fab 6 (8-inch), Fab 14 (12-inch), and Fab 18 (12-inch), with capacities of 102, 338, and 242 KWPM respectively [5][6].
- A slight reduction in output is anticipated for Fab 6 and Fab 14, while Fab 18's capacity is expected to increase [5].

Group 4: Shanghai Fab
- Shanghai has one fab, Fab 10, with a monthly capacity of 105 KWPM, expected to remain roughly stable in the coming quarters [7][8].

Group 5: Washington Fab
- Washington has one 8-inch fab, Fab 11, with capacity fluctuating between 23 and 30 KWPM [9][10].

Group 6: Taichung Fab
- Taichung has one 12-inch fab, Fab 15, with a first-quarter capacity of 287 KWPM and slight fluctuations expected [11][12].

Group 7: Nanjing Fab
- Nanjing has one 12-inch fab, Fab 16, focused on mature processes (28nm and above) [13][14].

Group 8: Arizona Fab
- Fab 21 in Arizona is under construction, with 4nm production expected to begin by the end of 2024; capacity is currently low [15][16].

Group 9: Kaohsiung Fab
- Fab 22 in Kaohsiung is planned for 2nm processes, with multiple construction phases; capacity is currently zero [17].
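To make the KWPM unit concrete, the hedged sketch below aggregates the Q1 figures quoted in this summary by site; the site totals are simple sums of the numbers above, not TSMC-published aggregates, and fabs without quoted figures are omitted:

```python
# Minimal sketch: aggregate the per-fab Q1 capacities cited above, in KWPM
# (thousands of wafers per month). Only fabs with figures in the summary appear.
q1_capacity_kwpm = {
    "Hsinchu":    [30, 60, 41, 93, 135, 3],  # the six Hsinchu figures as listed
    "Tainan":     [102, 338, 242],           # Fab 6, Fab 14, Fab 18
    "Shanghai":   [105],                     # Fab 10
    "Washington": [23],                      # Fab 11 (lower bound of the 23-30 range)
    "Taichung":   [287],                     # Fab 15
}

for site, fabs in q1_capacity_kwpm.items():
    total_kwpm = sum(fabs)
    print(f"{site}: {total_kwpm} KWPM = {total_kwpm * 1000:,} wafers/month")
```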
Capacity of Each SMIC Fab
傅里叶的猫· 2025-05-25 10:02
Summary of Key Points

Core Viewpoint
- The article analyzes the foundry capacities of SMIC (Semiconductor Manufacturing International Corporation) across its facilities in Beijing, Tianjin, Shanghai, and Shenzhen, focusing on 8-inch and 12-inch wafer capacity in the first quarter and projections for the upcoming quarters (a wafer-size tally follows this summary).

Group 1: Beijing
- Beijing has three 12-inch wafer fabs with Q1 capacities of 56, 76, and 25 KWPM. Capacity for the last two quarters of the year is expected to stay close to Q1 levels, with a slight decline anticipated [2].

Group 2: Tianjin
- Tianjin has one 8-inch wafer fab with a Q1 capacity of 148 KWPM; a slight decrease is expected over the next two quarters [3][4].

Group 3: Shanghai
- Shanghai, SMIC's main hub, has one 8-inch fab and two 12-inch fabs, with Q1 capacities of 120 KWPM for the 8-inch fab and 23 and 13 KWPM for the two 12-inch fabs. The 8-inch fab's capacity is expected to decline, while the 12-inch fabs are projected to expand in the second half of the year [5][6].

Group 4: Shenzhen
- Shenzhen has one 8-inch fab and one 12-inch fab, with Q1 capacities of 67 KWPM and 33 KWPM respectively. Projections indicate a slight increase for the 12-inch fab and a decrease for the 8-inch fab [7][8].
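The sketch below tallies the SMIC Q1 capacities quoted above by wafer size; the 8-inch-equivalent conversion uses the standard 2.25x area ratio between 300 mm and 200 mm wafers, which is my own addition rather than a figure from the article:

```python
# Minimal sketch: sum SMIC's Q1 capacities cited above, split by wafer size (KWPM).
eight_inch_kwpm = [148, 120, 67]             # Tianjin, Shanghai, Shenzhen
twelve_inch_kwpm = [56, 76, 25, 23, 13, 33]  # Beijing x3, Shanghai x2, Shenzhen

total_8in = sum(eight_inch_kwpm)
total_12in = sum(twelve_inch_kwpm)

# Standard 8-inch-equivalent conversion: (300 / 200) ** 2 = 2.25 area ratio.
total_8in_equiv = total_8in + total_12in * 2.25
print(f"8-inch total: {total_8in} KWPM, 12-inch total: {total_12in} KWPM")
print(f"8-inch-equivalent total: {total_8in_equiv:.0f} KWPM")
```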
Survey of HBM Wafer Fab Locations and Output
傅里叶的猫· 2025-05-24 13:21
Core Viewpoint
- The article discusses the expected 30% price increase for HBM4 and provides insight into DRAM production capacity and wafer fabrication plants, referencing data from Trendforce [1].

Group 1: DRAM Production Capacity
- Samsung's wafer production capacity declines from 513K in Q1 23 to an estimated 455K in Q3 23, with Q4 24 projected at 645K (a quarter-over-quarter sketch follows this summary) [3].
- SK Hynix's capacity totals 333K in Q1 23 and is projected to rise gradually to 378K by Q4 25 [4].
- Micron's capacity falls from 303K in Q1 23 to 250K in Q3 23, recovering to 320K by Q1 25 [5].
- Capacities for other companies, including Nanya, PSMC, Winbond, and JHICC, are also listed, showing varying levels of output and future projections [6][7][8][9].

Group 2: Market Trends and Insights
- The anticipated HBM4 price increase is a significant trend that could affect overall DRAM market dynamics [1].
- The article stresses the importance of understanding production capacities across companies to gauge market supply and potential pricing strategies [1][3][4][5].
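As a small worked example of reading these capacity series, the sketch below computes the change between the Samsung data points quoted above; no interpolation of the unquoted quarters is attempted:

```python
# Minimal sketch: percentage change between the quoted points of Samsung's
# DRAM wafer capacity series cited above (thousands of wafers per month).
samsung_capacity = {"Q1'23": 513, "Q3'23": 455, "Q4'24": 645}

points = list(samsung_capacity.items())
for (prev_q, prev_v), (cur_q, cur_v) in zip(points, points[1:]):
    change = cur_v / prev_v - 1
    print(f"{prev_q} -> {cur_q}: {change:+.1%}")
# Q1'23 -> Q3'23: -11.3%   Q3'23 -> Q4'24: +41.8%
```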
Latest Information on the China-Specific, Cut-Down Blackwell B40
傅里叶的猫· 2025-05-24 13:21
Core Viewpoint
- Nvidia is launching a new AI chip for the Chinese market at a significantly lower price than the previously restricted H20, in response to U.S. export controls and competition from Huawei [2][3].

Group 1: New Product Launch
- Nvidia plans to introduce a new AI chip priced between $6,500 and $8,000, a substantial decrease from the H20's $10,000-$12,000 range [2].
- The new chip is based on the Blackwell architecture and uses the RTX Pro 6000D server-grade processor along with conventional GDDR7 memory [2].
- Production is expected to start in June, following the ban on the H20 [2].

Group 2: Market Impact and Competition
- After the H20 ban, Nvidia's market share in China fell from 95% to 50%, while Huawei's Ascend 910B chip rapidly gained ground [2].
- Nvidia CEO Jensen Huang warned that continued U.S. restrictions could push more Chinese customers toward Huawei [2].

Group 3: Financial Implications
- Discontinuing the H20 led Nvidia to record a $5.5 billion inventory loss and forgo $15 billion in potential orders [3].
- Nvidia also plans to mass-produce another Blackwell-architecture chip, potentially named B40, aimed at the Chinese market in September [3].

Group 4: Regulatory Challenges
- Nvidia is awaiting final approval of the new product designs from the U.S. government as it navigates compliance with U.S. export regulations [3].
- Industry analysis suggests the new U.S. rules aim to constrain China's AI computing power by limiting memory bandwidth, while Nvidia seeks market opportunities by adjusting chip configurations [3].
Sharing Research Reports from Top Foreign Investment Banks
傅里叶的猫· 2025-05-23 15:46
For readers who want access to foreign research reports, we recommend a Knowledge Planet group to which several hundred original research reports from top foreign investment banks are uploaded every day: Morgan Stanley, J.P. Morgan, UBS, Goldman Sachs, Jefferies, HSBC, Citigroup, Barclays, and others.

It also carries the full set of analysis reports from SemiAnalysis, which focuses on the semiconductor industry, plus daily updates of selected paid articles from Seeking Alpha, Substack, and stratechery. After claiming a coupon, membership currently costs only 390 yuan, giving daily access to hundreds of top foreign investment banks' technology-sector reports and curated daily picks, which is well worth it whether for our own investing or for deeper industry research. ...
SemiAnalysis--Why Does Almost No One Outside CSPs Use AMD GPUs?
傅里叶的猫· 2025-05-23 15:46
Core Viewpoint
- The article presents a comprehensive comparison of NVIDIA and AMD GPUs on inference performance, total cost of ownership (TCO), and market dynamics, explaining why AMD products see little use outside of large-scale cloud service providers [1][2].

Testing Background and Objectives
- The research team spent six months validating claims that AMD's AI servers beat NVIDIA on TCO and inference performance, finding that results vary considerably across workloads [2][5].

Performance Comparison
- For customers using vLLM/SGLang, single-node H200 deployments sometimes offer better performance per dollar (perf/$), while MI325X can come out ahead depending on workload and latency requirements [5].
- In most scenarios the MI300X is not competitive against the H200, but it does outperform the H100 on specific models such as Llama3 405B and DeepSeekv3 670B [5].
- For short-term GPU rentals, NVIDIA consistently offers better cost performance because far more providers rent it out, whereas limited AMD availability keeps prices high [5][26].

Total Cost of Ownership (TCO) Analysis
- AMD's MI300X and MI325X generally carry lower hourly costs than NVIDIA's H100 and H200, at $1.34 per hour for the MI300X and $1.53 per hour for the MI325X (a perf-per-dollar sketch follows this summary) [21].
- Capital cost makes up a large share of the total, accounting for 70.5% of the MI300X's cost [21].

Market Dynamics
- AMD's share of the AI GPU market has been growing steadily but is expected to dip in early 2025 as NVIDIA's Blackwell series launches, with AMD's answering products not available until later [7].
- The AMD GPU rental market is constrained, with few providers, keeping prices artificially high and reducing competitiveness relative to NVIDIA [26][30].

Benchmark Testing Methodology
- The benchmarks focus on real-world inference workloads, measuring throughput and latency under various user loads, unlike traditional offline benchmarks [10][11].
- Tests cover a range of input/output token lengths to assess performance across different inference scenarios [11][12].

Benchmark Results
- On Llama3 70B FP16, MI325X and MI300X outperform all other GPUs in low-latency scenarios, while the H200 shows superior performance under high concurrency [15][16].
- On Llama3 405B FP8, MI325X consistently beats the H100 and H200 across latency conditions, particularly in high-latency scenarios [17][24].

Conclusion on AMD's Market Position
- The article concludes that AMD needs to lower rental prices to compete effectively with NVIDIA in the GPU rental market, as current pricing hinders its competitiveness [26][30].
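The hedged sketch below shows how a perf-per-dollar comparison can be built from the hourly TCO figures quoted above; the throughput numbers are placeholders for illustration only and are not SemiAnalysis benchmark results:

```python
# Minimal sketch: compare GPUs on a performance-per-dollar basis using the hourly
# TCO figures quoted above. Throughput values below are illustrative placeholders,
# NOT numbers from the SemiAnalysis benchmarks.
hourly_tco = {"MI300X": 1.34, "MI325X": 1.53}   # USD per GPU-hour, as cited
capital_share_mi300x = 0.705                     # capital cost share cited for MI300X

def perf_per_dollar(tokens_per_second: float, cost_per_hour: float) -> float:
    """Tokens generated per dollar of GPU time."""
    return tokens_per_second * 3600 / cost_per_hour

# Hypothetical throughputs (tokens/s) just to show the shape of the calculation.
example_throughput = {"MI300X": 1000.0, "MI325X": 1300.0}
for gpu, cost in hourly_tco.items():
    print(f"{gpu}: {perf_per_dollar(example_throughput[gpu], cost):,.0f} tokens per dollar")

# Implied split of the MI300X hourly cost between capital and operating expense.
capital = hourly_tco["MI300X"] * capital_share_mi300x
print(f"MI300X capital portion: ${capital:.2f}/hr, operating portion: ${hourly_tco['MI300X'] - capital:.2f}/hr")
```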
Sharing Research Reports from Top Foreign Investment Banks
傅里叶的猫· 2025-05-22 13:44
Group 1
- The article recommends a platform where users can access hundreds of top-tier foreign investment bank research reports daily, including those from Morgan Stanley, UBS, Goldman Sachs, Jefferies, HSBC, Citigroup, and Barclays [1].
- The platform also provides SemiAnalysis's full set of semiconductor industry analysis reports, along with selected paid articles from Seeking Alpha, Substack, and stratechery [3].
- A subscription currently costs 390 yuan and gives daily access to a wealth of technology industry analysis reports and curated articles, which the author deems valuable both for personal investment and for deeper industry research [3].
JP Morgan--AI Server Market Analysis
傅里叶的猫· 2025-05-22 13:44
In this article we look at a recent JP Morgan analysis report on AI servers. The report is packed with substance: within a 20-page PDF, JP Morgan forecasts 2025 shipment volumes for each of Nvidia's GPUs, provides NVL72-equivalent rack demand for Microsoft, Meta, Amazon, and Google, and lays out a triangulated check between CSP capital expenditure and AI server shipments. It also covers how GB200/300 allocations are split among ODMs, ODM inventory levels, a production forecast for Huawei 910B-equivalent chips, and more. The report offers a great deal of data and is well worth reading. As always, though, these forecasts are JP Morgan's own view, and readers should judge the data for themselves. Those who want the original report can find it in the Knowledge Planet group.

Main Text

Post-DeepSeek demand signals are encouraging, but a gap remains between upstream and the ODMs

We keep our forecast for Nvidia's high-end GPUs this year unchanged at 5.5 million, but adjust the product-mix forecast to reflect the upward trend in GB servers. We believe that, despite minor recent supply-chain issues, Nvidia remains focused on ARM-based AI servers (i.e., GB/VR) rather than HGX products. We now expect GB servers to account for about 85% of this year's Blackwell GPUs (roughly 3.8 million). For HGX, we lower our Blackwell HGX GPU forecast to about 900,000, while Hop ...
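As a rough back-of-the-envelope reading of the rounded figures above, the sketch below lays out the implied Blackwell total and the Hopper remainder; the remainder is my own derivation from rounded numbers, not a figure stated in the JP Morgan report:

```python
# Minimal sketch: Nvidia high-end GPU mix implied by the rounded JP Morgan
# figures quoted above (units: millions of GPUs). The Hopper remainder is a
# derived estimate, not a number from the report.
total_high_end = 5.5
gb_blackwell = 3.8        # GB servers, cited as ~85% of Blackwell GPUs
hgx_blackwell = 0.9       # lowered Blackwell HGX forecast

blackwell_total = gb_blackwell + hgx_blackwell
implied_hopper = total_high_end - blackwell_total

print(f"Blackwell total: ~{blackwell_total:.1f}M of {total_high_end}M high-end GPUs")
print(f"Implied Hopper remainder: ~{implied_hopper:.1f}M")
```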