傅里叶的猫
Did OpenAI Sharply Cut Its Spending Target?
傅里叶的猫· 2026-02-21 14:13
Today's news that OpenAI had sharply cut its spending target was discussed very widely, and we covered it in our Knowledge Planet community as soon as it broke. The source of the story appears to be this CNBC report. In the afternoon, domestic Chinese media reposted it as well, but unfortunately they did so without getting the full picture straight.

Where the 1.4 trillion comes from: in October last year, Sam Altman said in a livestream that OpenAI is committed to investing $1.4 trillion to build a total of 30 GW of computing resources. That $1.4 trillion can be understood as CapEx, and it covers not only OpenAI but also its partners. The $600B figure, by contrast, refers to OpenAI's OpEx through 2030, so the two are simply not the same thing.

A quick explanation of the difference between CapEx and OpEx:

Capital expenditure (CapEx): CapEx is money a company spends on big-ticket items that will be used for many years, such as buying machinery, building factories, purchasing servers, or upgrading buildings. This money is not expensed all at once; instead, the purchase is recorded as an asset and charged to costs bit by bit each year through depreciation. In short, it is long-term investment made to earn money in the future: the cash outflow at the time of purchase is large, but current-period profit does not drop sharply, because the cost is spread out over time.

Operating expenditure (OpEx): OpEx is the money a company must spend to keep running day to day, such as employee salaries, rent, utilities ...
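The depreciation mechanics described above can be sketched in a few lines of Python. The dollar figures and the five-year useful life below are illustrative assumptions, not numbers from the article:

```python
# Minimal sketch of CapEx vs. OpEx accounting (hypothetical figures).

def straight_line_depreciation(capex: float, useful_life_years: int) -> list[float]:
    """Spread a capital expenditure evenly over its useful life."""
    return [capex / useful_life_years] * useful_life_years

# Assumption: $100M of servers (CapEx) depreciated over 5 years,
# vs. $20M/year of salaries and utilities (OpEx) expensed immediately.
capex_schedule = straight_line_depreciation(100.0, 5)
opex_per_year = 20.0

for year, dep in enumerate(capex_schedule, start=1):
    # Only the depreciation slice hits each year's P&L, even though
    # the full $100M of cash went out the door in year 1.
    print(f"Year {year}: depreciation {dep:.1f}M + OpEx {opex_per_year:.1f}M "
          f"= {dep + opex_per_year:.1f}M expensed")
```

This is why a large CapEx commitment and an OpEx forecast describe different things: the cash outflow and the reported cost land in different periods.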
A Market Map of Optical Interconnects
傅里叶的猫· 2026-02-21 14:13
Core Insights
- The article discusses the evolution of optical interconnect technology, highlighting three key structural patterns in the market: vertical integration vs. specialization, the scarcity of light generation, and the rise of SiPho foundries [5][6][10].

Group 1: Market Structure
- Vertical integration offers structural advantages during technological transitions, as companies that can design across multiple layers can optimize the entire tech stack [9].
- Companies like Broadcom exemplify vertical integration, appearing across multiple layers of the value chain, while most others focus on specific segments [8].
- The semiconductor industry has historically shown that such advantages may not be permanent, as standardization can lead to the emergence of fabless models [9].

Group 2: Scarcity of Light Generation
- The difficulty of producing light sources (Layer 1 and Layer 0) is highlighted, with InP and GaAs materials requiring specialized technology and equipment [12][13].
- Companies capable of mass-producing high-performance InP lasers are few, creating a concentrated market [13][14].

Group 3: Rise of SiPho Foundries
- Layer 2, which focuses on SiPho foundries, is gaining attention as traditional semiconductor manufacturers like TSMC and GlobalFoundries enter the photonics space [17].
- TSMC's potential to optimize both AI chips and optical interconnects within the same ecosystem could disrupt existing vertical integration advantages [17].

Group 4: Layer Analysis
- Layer 0 involves substrate supply, with companies like AXT benefiting from increased demand for III-V substrates, although geopolitical risks exist due to production in China [21][22].
- Layer 1 is dominated by Coherent and Lumentum, both of which manufacture InP lasers and are expanding production amid high demand [24][25].
- Layer 2 focuses on SiPho foundries, with companies like GlobalFoundries and TSMC leading in manufacturing photonic integrated circuits [27][29].
- Layer 3, represented by DSPs, faces potential obsolescence as CPO technology advances, with companies like Broadcom and Marvell adapting to this shift [33][36].
- Layer 4 sees companies like Innolight and Eoptolink currently leading in the pluggable module market, but their positions may be challenged as the industry shifts towards CPO [40][42].

Group 5: Future Signals
- Key indicators to watch include pJ/bit energy consumption metrics, which reflect technological advancements and efficiency [56].
- Ongoing standardization efforts, such as OIF and UCIe, will shape the future market landscape and influence competitive dynamics [57][59].
- Recent mergers and acquisitions signal strategic directions in the industry, with notable deals like Marvell's acquisition of Celestial AI [60][62].
- The choices made by major cloud service providers like Google and AWS regarding their technology partnerships will ultimately determine market trajectories [63][64].
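The pJ/bit metric cited among the future signals converts directly into link power: multiply energy per bit by the link's bit rate. A quick sketch, where the 1.6 Tb/s rate and the pJ/bit values are illustrative assumptions rather than figures from the article:

```python
# pJ/bit is energy per transmitted bit; pJ/bit * bit rate = power drawn.

def link_power_watts(pj_per_bit: float, gbps: float) -> float:
    """Power (W) = energy per bit (J) * bit rate (bit/s)."""
    return pj_per_bit * 1e-12 * gbps * 1e9

# Assumed example: a 1.6 Tb/s (1600 Gb/s) link at two efficiency points.
print(link_power_watts(15, 1600))  # 24.0 W at 15 pJ/bit
print(link_power_watts(5, 1600))   # 8.0 W at 5 pJ/bit
```

At data center scale the gap between those two numbers, multiplied across millions of links, is why pJ/bit is treated as the headline figure of merit for interconnect technologies.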
Three Hurdles in the Data Center Boom Era
傅里叶的猫· 2026-02-19 15:47
Goldman Sachs recently held an online discussion whose guest was Mark Monroe, former principal engineer on Microsoft's data center advanced development team. He has spent more than 40 years in digital infrastructure and counts as a genuine industry expert. He identified the three fatal choke points of data center expansion: power, water, and people.

In previous articles we have also flagged two other bottlenecks: memory and TSMC's CoWoS capacity.

Power: the first hurdle at the industry's throat

We have written a great deal about the power bottleneck before, and most readers should understand it by now. Monroe was blunt: power is the most critical near-term constraint. Cloud computing and AI inference workloads must sit close to users to keep response times low, so they cluster around major cities. The problem is that these areas are already power-strained, and when data centers arrive, the grid simply cannot cope.

AI training has no such concern. Training a model places almost no demands on location, so it goes wherever the power is, and many training workloads are now migrating to remote regions. The split is quite clear: inference needs speed, training needs power, and each takes what it needs.

So what can be done? Monroe pointed to two directions. The first is "flexible load management": plainly put, data centers voluntarily shed load during peak demand. A Duke University study found that if data centers accept 0.25% annual downtime (i.e., 99.75% uptime), the US grid could host an additional 76 GW of new load; and if they could accept ...
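The Duke study's uptime figure is easier to appreciate in hours. A minimal arithmetic check (the 76 GW figure is from the study as quoted above; the hours calculation is ours):

```python
# Convert an uptime target into annual downtime hours.
HOURS_PER_YEAR = 8760  # 365 days * 24 hours

def downtime_hours(uptime_fraction: float) -> float:
    """Annual downtime implied by an uptime target."""
    return (1.0 - uptime_fraction) * HOURS_PER_YEAR

# 99.75% uptime (0.25% curtailment) is about 21.9 hours/year of shedding.
# Per the Duke study, accepting that lets the US grid absorb ~76 GW
# of new flexible data center load.
print(f"{downtime_hours(0.9975):.1f} hours/year")
```

Put that way, the trade looks cheap: roughly a day's worth of curtailment per year, timed to grid peaks, in exchange for tens of gigawatts of headroom.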
Where Does Memory Go From Here? Bernstein Turns Bearish on Kioxia
傅里叶的猫· 2026-02-14 15:13
Core Viewpoint
- Kioxia's recent financial report indicates a mixed outlook, with strong profit guidance for the next quarter but concerns over its pricing strategy and competitive position in the NAND market [2][4][8].

Financial Performance
- Kioxia reported an operating profit of 142.8 billion yen for Q3, slightly above market expectations, but with an average selling price (ASP) increase of only about 10%, lagging behind competitors like Samsung and SK Hynix [2][6].
- The company guided next-quarter profit to a range of 436 billion to 526 billion yen, significantly above market expectations of 250 billion to 300 billion yen, driven by price increases across all application areas [2][8].

Analyst Opinions
- **Bernstein - Bearish View**
  - Concerns over high valuation and the lack of a DRAM business lead to a cautious outlook on Kioxia's long-term competitiveness [4][5].
  - Kioxia's ASP growth is lagging behind competitors, primarily due to lower pricing for major client Apple [6].
  - Bernstein maintains a low target price of 7,000 yen, implying significant downside risk [7].
- **Bank of America - Bullish View**
  - Upgraded its target price to 32,000 yen, citing strong demand driven by generative AI and limited expansion among global NAND manufacturers [8].
  - Raised its FY2027 profit forecast by 112% to 2.3 trillion yen based on Q3 performance [8].
- **Goldman Sachs - Neutral View**
  - Adjusted its target price to 24,000 yen, indicating limited upside due to the cyclical nature of the NAND industry and existing valuation ceilings [9][10].
  - Highlights risks related to supply-demand dynamics and reliance on major clients, which could affect long-term profitability [10][11].
- **GSR - Bullish View**
  - Notable price increases in data center products, including a 70% hike accepted by Apple, indicate a shift in market perception of Kioxia's pricing power [13].
  - A transition from a volatile commodity model to a stable industrial-product model via long-term agreements should enhance profitability and reduce earnings volatility [13].
A Look at the Newly Revealed Specs of the Moore Threads S5000
傅里叶的猫· 2026-02-14 15:13
Core Viewpoint
- The MTT S5000, developed by Moore Threads, is positioned as a competitive GPU for large model training and inference, showcasing performance that rivals international flagship products and marking a significant advancement in domestic computing power capabilities [1][3].

Group 1: MTT S5000 Performance
- The MTT S5000 delivers single-card AI computing power of 1000 TFLOPS with liquid cooling and 920 TFLOPS with air cooling, alongside 80 GB of memory and 1.6 TB/s of memory bandwidth [4].
- The S5000's performance has been reported to match or even exceed NVIDIA's H100 in certain multi-modal large model fine-tuning tasks [4][6].
- The chip uses the fourth-generation MUSA architecture, optimized for large-scale AI training, and supports full-precision computation from FP8 to FP64 [6].

Group 2: Cluster Performance
- The KUAE ten-thousand-card cluster built on the S5000 reaches a floating-point capability of 10 ExaFLOPS, with an MFU of 60% in dense model training and around 40% in MoE models, while maintaining over 90% effective training time [8].
- The S5000 employs a unique ACE technology for communication tasks, enabling zero-conflict parallel computing and significantly improving model compute utilization [10].

Group 3: Training and Inference Cases
- In January 2026, the Zhiyuan Research Institute completed end-to-end training and alignment verification of the RoboBrain 2.5 model on a thousand-card S5000 cluster, with a training loss difference of only 0.62% versus an NVIDIA H100 cluster [10].
- In December 2025, Moore Threads, in collaboration with SiliconFlow, benchmarked the DeepSeek-V3 671B model on the S5000, achieving record inference throughput of over 4000 tokens/s for Prefill and over 1000 tokens/s for Decode [12].
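The cluster figures above can be cross-checked with simple arithmetic: MFU (Model FLOPs Utilization) is the fraction of peak hardware FLOPS the training run actually uses. A sketch assuming a 10,000-card ("ten-thousand-card") cluster of 1000 TFLOPS cards, which matches the quoted 10 ExaFLOPS peak:

```python
# Sanity-check the cluster numbers: cards * per-card TFLOPS -> peak EFLOPS,
# then scale by MFU to get effective training throughput.

def cluster_peak_exaflops(cards: int, tflops_per_card: float) -> float:
    """Peak FLOPS of a cluster, expressed in ExaFLOPS (1e18 FLOPS)."""
    return cards * tflops_per_card * 1e12 / 1e18

peak = cluster_peak_exaflops(10_000, 1000)  # assumed card count
print(peak)          # 10.0 EFLOPS peak, matching the article
print(peak * 0.60)   # effective EFLOPS at the quoted 60% dense-model MFU
print(peak * 0.40)   # effective EFLOPS at the quoted ~40% MoE MFU
```

The gap between dense and MoE MFU reflects that sparse expert routing adds communication and load-imbalance overheads that dense training avoids.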
The Clear Logic of Liquid Cooling and Gas Turbines
傅里叶的猫· 2026-02-12 15:58
Group 1: Liquid Cooling and Gas Turbines
- Vertiv reported better-than-expected earnings, particularly in orders, pointing to a strong year for liquid cooling technology [1].
- Domestic liquid cooling companies saw significant stock price increases, with leading company Envicool hitting the daily limit [2].
- Siemens Energy's earnings also exceeded expectations, with a record order backlog of €146 billion, driven by high demand for gas turbines and grid technology [3].

Group 2: Autonomous Driving
- The U.S. House Energy and Commerce Committee held a hearing on easing regulations on autonomous vehicle deployment, highlighting concerns about competition with China [4].
- The committee's passage of the Autonomous Driving Bill marks a significant step toward accelerating autonomous driving deployment across the U.S. [4].
- In China, the Ministry of Industry and Information Technology is soliciting public opinions on new mandatory national standards for autonomous driving systems, indicating a proactive approach to regulation [7].

Group 3: Industry Insights and Updates
- The Knowledge Planet platform has upgraded its daily reports to include summaries of news from major international media and insights from analysts across various industries, including memory, autonomous driving, and liquid cooling [8].
Big Tech Scrambles for Compute Over the Spring Festival; Liquid Cooling Leader Beats Expectations
傅里叶的猫· 2026-02-11 13:48
Core Insights
- Major companies are racing to release new models and launch marketing campaigns, producing a tight supply of computing power that is expected to persist for a considerable time [2].

Group 1: Liquid Cooling Market
- The global liquid cooling market is projected to grow 5-10x compared to last year, with many companies expected to receive substantial orders [4].
- Vertiv reported strong order growth, with organic growth of approximately 252% year-over-year and 117% quarter-over-quarter, and an estimated order size between $8.2 billion and $8.4 billion [6].
- The order backlog reached $15 billion, a year-over-year increase of about 109% and a quarter-over-quarter increase of 57% [6].

Group 2: UBS Upgrade on Envicool
- UBS raised its target price for Envicool by 60% to 160 yuan, driven by a significant upward revision of the global liquid cooling market size and an expected CAGR of 51% from 2025 to 2030 [10].
- Envicool's market share is anticipated to increase faster than previously expected, reaching a projected 10% by 2027 [10].
- UBS sees Envicool's core competitive advantage in its full-value-chain liquid cooling capabilities, which improve heat dissipation efficiency [11].

Group 3: Valuation Insights
- UBS argues that Envicool's current valuation is unreasonable and significantly underestimates its growth potential, projecting revenue and EPS CAGRs of 69% and 167% from 2025 to 2027 [12].
- The valuation adjustment reflects confidence in the company's near-term market share gains and the potential for margin improvement from overseas business [12].

Group 4: ByteDance's Liquid Cooling
- ByteDance has increased its global capital expenditure to 300 billion, signaling a significant commitment to liquid cooling technology [13].
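The CAGR figures UBS cites compress multi-year growth into one annualized rate; a few lines make the definition concrete. The function is just the standard formula; the 51%/5-year inputs echo the UBS projection quoted above:

```python
# CAGR (compound annual growth rate): (end / start) ** (1 / years) - 1.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Annualized growth rate implied by start and end values over `years`."""
    return (end_value / start_value) ** (1 / years) - 1

# Inverse check: a 51% CAGR over 2025-2030 (5 years) compounds to ~7.9x.
growth_multiple = (1 + 0.51) ** 5
print(f"{growth_multiple:.1f}x")
print(f"{cagr(1.0, growth_multiple, 5):.2%}")  # recovers the 51% input
```

The compounding is the point: a 51% annual rate sounds smaller than "the market grows nearly eightfold in five years", but they are the same claim.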
With Domestic "Big Compute" in Hand, Pressing the Advantage Toward Trillion-Parameter Models?
傅里叶的猫· 2026-02-10 12:05
Core Viewpoint
- The development of domestic AI computing power is crucial for the evolution of large AI models, with the recent establishment of a significant domestic AI computing pool marking a historical convergence in this field [1][3].

Group 1: Domestic AI Computing Power
- The recently launched "National AI Computing Power Pool" in Zhengzhou has become the largest domestic AI computing pool, with a deployment of 30,000 cards, operational as of February 5 [1].
- Demand from large AI models is driving domestic computing power toward "ten-thousand-card" clusters, as models with trillions of parameters become the new standard [3][4].

Group 2: Growth of Large Models
- By December 2025, the number of large models registered with China's National Cyberspace Administration is expected to reach 748, with many exceeding the trillion-parameter scale [4].
- The scaleX ten-thousand-card super cluster has been deployed for training trillion-parameter models and high-throughput inference, indicating a significant advancement in AI computing capabilities [4].

Group 3: Challenges and Innovations
- Despite the growth of domestic computing power, challenges remain: domestic AI chips started later than NVIDIA GPUs, and open-source models require additional adaptation work [5].
- The current computing ecosystem is fragmented, with barriers between intelligent computing centers limiting the performance of domestic infrastructure in heterogeneous environments [5].

Group 4: Collaborative Innovation
- A consensus is forming around an open collaborative approach, with model vendors and computing infrastructure providers working together on a joint initiative for "domestic large computing power + domestic large models" [6].
- "Open collaboration" is becoming a central theme in the evolution of AI computing in China, potentially restructuring the industry's competitive rules [7].
View Updates: Optical Modules/CPO and CPUs
傅里叶的猫· 2026-02-09 15:57
Group 1: Optical Modules and CPO
- The ongoing debate over optical modules and CPO continues to draw significant attention; CSPs voice support for CPO plans, although their actual interest appears limited [3].
- Market expectations for 800G/1.6T optical module demand growth are strong, with projections of a 60% year-on-year increase in capital expenditure from the top four US cloud service providers by 2026 [7].
- The market for optical circuit switches (OCS) is expected to grow gradually over the coming quarters, with a projected market size exceeding $2 billion by 2030 [7].
- Initial deployment of Co-Packaged Optics (CPO) is anticipated in horizontal architectures, with vertical upgrades expected to begin around the end of 2027 [7].
- Near-Package Optics (NPO) is set to ramp in the second half of 2026, giving cloud service providers a technical path to vertical upgrades in computing power [7].
- The supply of indium phosphide (InP) lasers is expected to remain tight through 2027, with long-term agreements already in place [7].
- Overall, the demand outlook for the optical communication industry in 2027 is optimistic, although the impact on optical modules and copper connections over the next couple of years is expected to be minimal [7].

Group 2: CPU Market Insights
- CPU prices are trending upward, but the actual increases remain relatively moderate [8].
- Intel's inventory is notably tight in Q1, but supply pressure is expected to ease gradually in Q2 [8].
- The surge in CPU demand is driven primarily by the explosive growth of AI applications, yet actual usage patterns suggest an individual sandbox instance may not fully utilize a single CPU core [8].
- Research into the demand from one leading AI company shows that even under conservative estimates, its incremental CPU demand is only around 3,000 to 5,000 units [8].
- Domestic cloud service providers currently run at CPU utilization rates of less than 30%, indicating a significant amount of untapped computing resources in the market [8].
- The current CPU supply shortage is not expected to last long, in contrast with the explosive demand seen in the storage sector [8].

Group 3: Industry Updates
- Recent updates to the daily reports on the knowledge-sharing platform include comprehensive summaries of news and analyst opinions across industries including memory, AI computing, and optical technologies [10].
Seedance 2.0 and the ByteDance Chain
傅里叶的猫· 2026-02-08 15:58
Group 1
- The core point of the article is the significant advancement and commercial potential of ByteDance's Seedance 2.0, which has generated considerable discussion for its ability to move from "generating a scene" to "completing a work" [2][3].
- Seedance 2.0 demonstrates strong determinism in content generation, letting creators precisely control outcomes through integrated visual and auditory signals while improving the naturalness of audio-visual synchronization [3].
- The model's design focuses on reducing uncertainty in generation paths and optimizing token consumption, significantly lowering production costs for video content and making it appealing for e-commerce, short dramas, and advertising [4].

Group 2
- Analysts are optimistic about three areas benefiting from Seedance 2.0: AI content production and distribution, AI infrastructure, and ByteDance's computing power chain [6].
- The recent upgrade of the knowledge platform adds comprehensive daily reports summarizing news and analyst opinions across industries, improving visibility into market trends [6].