Semiconductor Industry Observation
About Apple's chips, analysts say: no chance whatsoever
Semiconductor Industry Observation · 2026-02-02 01:33
Core Viewpoint
- Recent rumors suggest that Intel may supply chips for Apple's M series and non-Pro iPhone models, but industry insiders largely dismiss the likelihood of this happening due to technical concerns [2][3].

Group 1: Intel's Potential Role with Apple
- Reports from GF Securities and DigiTimes indicate that Apple is evaluating Intel's 18A-P process for its entry-level M series chips expected to ship in 2027 and for non-Pro iPhone chips in 2028 [2].
- Apple has signed a confidentiality agreement with Intel and received samples of the 18A-P process design kit (PDK) for evaluation [2].
- There are indications that Apple's custom ASIC, developed in collaboration with Broadcom, will use Intel's EMIB packaging technology in 2028 [2].

Group 2: Technical Concerns Regarding Intel's Manufacturing
- Some industry experts are skeptical of Intel's ability to manufacture iPhone chips, citing the company's decision to fully adopt backside power delivery (BSPD) technology in its 18A and 14A nodes, unlike TSMC, which offers both BSPD and non-BSPD options [3][4].
- While BSPD can enhance performance by reducing voltage drop and allowing higher frequencies, its benefits for mobile chips are limited and it may cause significant thermal issues [5].
- The need for additional cooling due to self-heating effects raises doubts about Intel's ability to produce stable iPhone chips in the near term, although M series processors remain a possibility [5].
TSMC's 2nm is being snapped up
Semiconductor Industry Observation · 2026-02-02 01:33
Core Viewpoint
- The global AI and HPC chip competition has officially entered the 2nm era, with TSMC initiating a "preparation mode" for advanced process technology [2].

Group 1: 2nm Process Development
- TSMC's 2nm (N2) process represents a significant shift from FinFET to GAAFET architecture, with client demand exceeding expectations [2].
- Major clients for the 2nm process include Apple and Qualcomm, with general-purpose GPUs and ASICs expected to ramp up production starting next year [2][3].
- TSMC's N2 family is projected to have a larger scale and longer lifecycle than the 3nm process, with mass production ramping in 2026 [3].

Group 2: Advanced Packaging Technologies
- TSMC is simultaneously upgrading its advanced packaging systems to meet the demands of AI chips, which are moving toward multi-chiplet designs and large package sizes [3].
- The company is expected to increase its CoWoS monthly capacity by over 70% this year, while also validating next-generation technologies such as CoWoP and CPO [3].
- The ability to produce high-yield, large system-level packages is critical to the semiconductor ecosystem's resilience [3].
Tower Semiconductor's market cap soars 300%
Semiconductor Industry Observation · 2026-02-02 01:33
Core Viewpoint
- Tower Semiconductor's stock surge has pushed its market capitalization above $15 billion, three times the $5 billion Intel had planned to pay for the company before abandoning the acquisition due to regulatory issues [2][6].

Group 1: Stock Performance and Market Position
- Over the past six months, Tower's stock has risen more than 160%, making it one of the most notable beneficiaries of AI infrastructure [4].
- CEO Russell Ellwanger noted a significant shift in public perception, with increased interest from family and neighbors in Tower's stock [4].
- Tower has transitioned from being viewed as a niche analog chip manufacturer to a key player in AI infrastructure, thanks to the growing demands on data centers [4].

Group 2: Investment and Growth Strategy
- Tower announced a $300 million investment to expand its silicon photonics production line, following an earlier $350 million investment [5].
- The majority of the new capacity will be built at Tower's main production site in Migdal HaEmek, which will become the company's largest photonics center [5].
- The company anticipates that its photonics-related revenue will double, with AI-related products expected to generate nearly $1 billion annually by 2026 [5].

Group 3: Financial Projections
- Tower forecasts record fourth-quarter revenue of $440 million and total 2025 revenue of $1.5 billion, a 14% year-over-year increase [5].
- Earnings are projected to approach $200 million, which is significant for a hardware manufacturer [5].

Group 4: Perspective on Intel Acquisition
- The failed acquisition by Intel is now viewed differently: Tower received $350 million in compensation, maintained its independence, and continued to invest actively during the regulatory review [6].
- This independence has allowed Tower to secure a strong position in a critical area of AI infrastructure [6].
DRAM and NAND prices hit all-time highs
Semiconductor Industry Observation · 2026-02-02 01:33
Remember to star ⭐️ this official account so new posts appear first and you never miss one.

In the memory semiconductor market, the monthly average prices of DRAM and NAND flash continue to rise strongly in tandem. The price of the mainstream DRAM product DDR4 has broken through $11, an all-time high since price tracking began, while NAND flash prices have surged more than 60% in a single month.

According to data released on January 30 by market research firm DRAMeXchange, the average fixed contract price of the mainstream PC DRAM product (DDR4 8Gb 1Gx8) was $11.50 in January, up 23.66% from $9.30 the previous month.

Since last April ($1.65), the average DDR4 price has risen for 10 consecutive months and now stands at its highest level since tracking began in June 2016.

Analysts attribute the rise to a supply shortage of DDR4, an older PC standard, as the spread of artificial intelligence (AI) has prioritized supply of high-value-added server DRAM.

TrendForce, DRAMeXchange's parent company, commented that "DDR4 8Gb module prices have risen 115-120%, with the average price reaching $85," and assessed that the DRAM market has entered a very strong upcycle. The price inversion between DDR5 and DDR4 ...
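As a quick arithmetic sanity check on the contract-price figures quoted by DRAMeXchange above (all numbers are from the article; nothing here is new data), the month-over-month change can be recomputed directly:

```python
# Recompute the DDR4 8Gb 1Gx8 fixed contract price move quoted above.
prev_price = 9.30    # USD, previous month's average fixed contract price
curr_price = 11.50   # USD, January average fixed contract price

mom_change = (curr_price - prev_price) / prev_price * 100
print(f"DDR4 8Gb month-over-month: +{mom_change:.2f}%")  # prints +23.66%
```

The result matches the 23.66% increase reported in the article.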
The vexing memory wall
Semiconductor Industry Observation · 2026-02-02 01:33
Core Insights
- The unprecedented availability of unsupervised training data and the scaling laws of neural networks have driven a dramatic increase in the size and computational demands of large language models (LLMs) [2].
- The primary performance bottleneck is shifting from computational power to memory bandwidth: server hardware's peak floating-point operations per second (FLOPS) have grown 3x every two years, while DRAM bandwidth and interconnect bandwidth have grown only 1.6x and 1.4x, respectively [2][10].
- Overcoming these memory limitations will require redesigning model architectures, training methods, and deployment strategies [2].

Group 1
- The computational requirements for training LLMs have grown 750x every two years, driven by advances in AI accelerators [4].
- Memory and communication bottlenecks are emerging as the dominant challenges in training and serving AI models, with many applications limited by on-chip and chip-to-chip communication rather than compute [4][9].
- The "memory wall" problem, in which memory performance fails to keep pace with computational speed, has been recognized since the 1990s and remains relevant today [5][6].

Group 2
- Over the past 20 years, server-level AI hardware's peak computational capability has increased 60,000x, while DRAM's peak bandwidth has increased only 100x, highlighting the growing gap between compute and memory [8].
- Recent trends in AI have produced unprecedented growth in data volume, model size, and computational resources, with LLMs growing 410x in size every two years [9].
- Even when a model fits on a single chip, data movement between registers, caches, and global memory is becoming a bottleneck, demanding faster data delivery to keep arithmetic units utilized [10].

Group 3
- The article discusses the performance characteristics and bottlenecks of Transformer models, focusing on the differences between encoder and decoder architectures [13].
- Arithmetic intensity, the number of FLOPS performed per byte of memory accessed, is crucial for understanding performance bottlenecks in Transformer models [14].
- Performance analysis of Transformer inference on Intel Gold 6242 CPUs shows that GPT-2's latency is significantly higher than BERT's, indicating that memory operations are a major bottleneck for decoder models [17].

Group 4
- To address memory bottlenecks, the article suggests rethinking AI model design, emphasizing more efficient training methods and reduced reliance on extensive hyperparameter tuning [18].
- Deploying large models for inference remains challenging; potential remedies include model compression through quantization and pruning [25][27].
- AI accelerator design should improve memory bandwidth alongside peak computational capability, as current designs prioritize compute at the expense of memory efficiency [29].
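The encoder-versus-decoder gap above can be illustrated with a back-of-envelope arithmetic-intensity sketch (hypothetical layer shapes and fp16 weights chosen for illustration; these are not figures from the article). An encoder-style batched matmul reuses each weight across many tokens, while decoder-style one-token-at-a-time generation degenerates to a matrix-vector product with almost no reuse:

```python
# Estimate arithmetic intensity (FLOPs per byte of memory traffic) for
# C[m,n] = A[m,k] @ B[k,n], assuming each operand is read/written once.
def matmul_intensity(m: int, k: int, n: int, bytes_per_elem: int = 2) -> float:
    flops = 2 * m * k * n                               # one multiply + one add per MAC
    traffic = bytes_per_elem * (m * k + k * n + m * n)  # read A, read B, write C
    return flops / traffic

# Encoder-like: a whole 512-token sequence hits the weight matrix at once.
enc = matmul_intensity(512, 4096, 4096)   # hundreds of FLOPs/byte -> compute-bound
# Decoder-like: one token per step (m = 1), effectively a matvec.
dec = matmul_intensity(1, 4096, 4096)     # ~1 FLOP/byte -> memory-bandwidth-bound

print(f"encoder-style GEMM: {enc:6.1f} FLOPs/byte")
print(f"decoder-style GEMV: {dec:6.1f} FLOPs/byte")
```

Under these assumed shapes, the decoder-style step performs roughly one FLOP per byte moved, which is why decoder inference latency is dominated by memory operations rather than compute.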
The "moonlighting" semiconductor giants
Semiconductor Industry Observation · 2026-02-01 02:25
In the late 1970s, Ajinomoto, Japan's famous seasonings company, began studying uses for its by-products.

While researching proteins and amino acids (key seasoning ingredients), Ajinomoto's R&D team discovered that a by-product could be turned into a resin-like synthetic material with extremely high insulating properties. The result was a thermosetting film with high durability, low thermal expansion, ease of processing, and other important characteristics, which was named ABF.

In 1996, Intel approached Ajinomoto about using its amino-acid technology to develop a thin-film insulator. The two companies jointly developed FC-BGA (Flip Chip Ball Grid Array), and ABF ultimately became the mainstream solution for FC-BGA products.

No one imagined at the time that a film refined from the "waste" of MSG production would end up monopolizing 99% of the global high-end CPU and GPU packaging market. By 2021, when the global chip shortage hit, lead times for Ajinomoto's ABF material stretched to 30 weeks, and giants such as Intel, AMD, and NVIDIA had to wait in line for supply from this seasonings company.

Looking back over the semiconductor industry's nearly hundred-year history, cross-industry "hidden champions" like Ajinomoto are far from rare.

Over a long period, engineers at Donaldson discovered a striking similarity: the problem semiconductor fabs face is essentially the same one tractors faced a hundred years ago, namely how to prevent ...
Breaking the optical-communications "chokehold": optoelectronic convergence and photonic computing reach mass production
Semiconductor Industry Observation · 2026-02-01 02:25
Core Viewpoint
- The forum titled "Collaborative Innovation Forum from Devices to Networks" aims to address practical challenges in the semiconductor industry, focusing on implementable technology solutions rather than mere concepts [1][10].

Group 1: Event Overview
- The forum will take place on March 18, 2026, at the Shanghai New International Expo Center, featuring over 10 leading companies and three major telecom operators addressing the urgent needs of 6G technology [1].
- Unlike typical slide-deck presentations, the forum will showcase verified, applicable results from experts across academia and industry, targeting critical areas such as compound semiconductors and EDA [2].

Group 2: Agenda Highlights
- The agenda includes presentations on topics such as photonic integrated chips for communication systems and the advantages of silicon capacitors in AI applications [5][6].
- Notable presentations include a practical photonic-integrated-chip solution that can reduce device size by 40% and power consumption by 25%, addressing hardware bottlenecks in the transition from 5G to 6G [6].

Group 3: Demand and Collaboration
- The forum will facilitate direct matching between supply and demand by inviting major telecom operators and leading cloud service providers to seek partnerships [7].
- A closed-door matching session will allow participating companies to submit technology proposals for one-on-one discussions with potential partners, creating significant collaboration opportunities [7].

Group 4: Industry Needs and Opportunities
- Telecom operators are expected to announce procurement needs for 6G integrated communication devices, focusing on domestic suppliers of optical chips and high-power compound semiconductor devices [8].
- Cloud service providers will present collaboration lists for AI computing centers, prioritizing products that domestic companies can deliver quickly [8].

Group 5: Organizational Strengths
- The forum is organized by Semiconductor Industry Observation, which has over 10 years of experience in the semiconductor field and aims to solve real industry problems and gather resources for domestic innovation [10][12].
- The organization reaches more than 950,000 followers across the industry, enabling effective engagement with key stakeholders [12].
Jensen Huang: TSMC needs to step it up
Semiconductor Industry Observation · 2026-02-01 02:25
NVIDIA CEO Jensen Huang hosted senior executives from supply-chain partners at dinner last night, as his "trillion-dollar banquet" returned. Huang said TSMC must work very hard this year because NVIDIA needs a great many wafers. He expects that "TSMC's capacity could grow by more than 100% over the next decade, a very significant expansion, and it would have to double for NVIDIA's demand alone."

Huang once again held the "trillion-dollar banquet" at a nostalgic brick-kiln restaurant in Taipei to dine with supply-chain partners. Attendees included TSMC Chairman C.C. Wei (魏哲家), MediaTek CEO Rick Tsai (蔡力行), Quanta Chairman Barry Lam (林百里), Inventec Chairman 叶力诚, Wistron Chairman Simon Lin (林宪铭), Hon Hai Chairman Young Liu (刘扬伟), Acer Chairman Jason Chen (陈俊圣), SPIL Chairman 蔡祺文, Pegatron Chairman T.H. Tung (童子贤) with co-CEOs 郑光志 and 邓国彦, ASUS Chairman Jonney Shih (施崇棠) with co-CEOs 胡书宾 and 许先越, Wiwynn Chairwoman 洪丽寗, Delta Electronics Chairman 郑平, Compal Chairman 陈瑞聪, and QCT President 杨麒令, among others. After the dinner, Huang personally saw C.C. Wei off, underscoring how much he values TSMC.

Huang said TSMC must work very hard this year because NVIDIA needs a great many wafers and much CoWoS capacity, adding that TSMC has been doing very well. NVIDIA has put Blackwell and Vera Rubin chips into full production; Vera Rubin comprises six different chips, each the most advanced of its kind in the world. Huang stressed that this year NVIDIA ...
HBM has changed
Semiconductor Industry Observation · 2026-02-01 02:25
The path to commercialization for high-bandwidth memory (HBM) is changing. Traditionally, a semiconductor enters mass production only after samples pass the customer's qualification tests. Now, to meet the needs of key customers, some semiconductor makers are proactively starting mass production before qualification is complete.

According to industry sources on the 1st, to satisfy the HBM4 demand involving Samsung Electronics, SK hynix, and NVIDIA, large-scale production of HBM4 has begun even before testing is finished.

SK hynix, the first to disclose its results, said: "Since we established a mass-production system last September, HBM4 has been in volume production at the quantities customers request."

A review of SK hynix's internal and external reports shows the working group characterized this ramp as "at-risk mass production," meaning wafers are committed to volume production before customer qualification is complete.

The reason for taking this risk is the production cycle, i.e., the total time needed to deliver the product. Normally, delivering HBM as a finished product takes about four months. If mass production began only after qualification, it would be nearly impossible to supply HBM in time for NVIDIA's AI accelerator launch schedule next year. Limited capacity and low initial yields make a rapid increase in shipments impossible.

At-risk mass production carries the danger that, if demand proves uncertain or the product has serious defects, suppliers could be left holding inventory. This means ...
Chip giant up 1500% in a year
Semiconductor Industry Observation · 2026-02-01 02:25
Core Viewpoint
- SanDisk's stock price has surged over 1500% year-on-year, driven by record profits and strong demand from AI and data center markets [2][4].

Financial Performance
- SanDisk reported a record profit of $803 million, a 7.7-fold increase year-on-year [2].
- Revenue for Q2 FY2026 reached $3.03 billion, exceeding the high-end forecast of $2.65 billion [3].
- GAAP profit increased by 672% to $804 million, with a profit margin of 26.6% [3][4].
- Gross margin improved to 51.1%, up from 29.8% in the previous quarter [7].

Market Segments
- Revenue from data centers grew by 76%, driven by AI infrastructure demand [12].
- Edge computing revenue increased by 63.2%, while consumer market revenue rose by 51.7% [12].
- SanDisk's data center business is expected to become the largest market for NAND flash by 2026 [13].

Strategic Developments
- SanDisk extended its joint venture agreement with Kioxia until December 31, 2034, with a payment of $1.165 billion for manufacturing services [13].
- The company is focusing on long-term supply agreements with major clients to improve planning and returns [10].
- SanDisk is developing high-bandwidth flash (HBF) technology and has made progress with its BiCS8 QLC storage products [13].

Future Outlook
- The company anticipates significant growth in its data center business in both the short and long term [11].
- SanDisk expects Q3 revenue of around $4.6 billion, a 171% increase year-on-year [14].
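The Q3 guidance above implies a base figure the article does not state. As an illustrative back-of-envelope check (the implied year-ago revenue below is derived, not reported):

```python
# If ~$4.6B of guided Q3 revenue represents a 171% year-over-year
# increase, back out the implied year-ago quarter.
q3_guidance = 4.6   # USD billions, guided Q3 revenue
yoy_growth = 1.71   # +171% year-on-year

implied_prior = q3_guidance / (1 + yoy_growth)
print(f"implied year-ago Q3 revenue: ~${implied_prior:.2f}B")  # ~$1.70B
```

In other words, the guidance implies roughly $1.7 billion of revenue in the comparable year-ago quarter.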