CPU
Constellation's Wang on Google-Nvidia Chips Rivalry
Bloomberg Television· 2025-11-26 07:17
AI Chip Landscape
- Tensor Processing Units (TPUs) are purpose-built for AI and deep learning, offering lower total cost and greater power efficiency than GPUs [1]
- Google has been developing TPUs for some time, aiming for efficiency and supply-chain diversification beyond Nvidia [2][3]
- Google's full-stack approach, from chip to application, provides significant economies of scale [5][6]
- Diversifying the chip base is crucial, as different chips excel at different tasks, much like diversifying cloud providers [10][11]

Market Demand and Competition
- The AI market is projected to reach a $7 trillion market cap by 2030, indicating substantial demand [8]
- Demand is large enough to accommodate multiple players, suggesting it is not a zero-sum game between TPUs and GPUs [8][9]
- Hyperscalers that do not compete directly with Google, pharmaceutical giants, energy companies, and governments are potential TPU adopters [13][14]
- AMD and Google are positioned to provide alternatives to Nvidia's dominance in the AI chip market [15]

Google's AI Capabilities
- Gemini 3 is competitive with other leading large language models such as ChatGPT, Claude, and Perplexity, excelling across a range of use cases [16][17]
- Sovereign AI and companies building data centers/physical AI will drive market headlines in 2026 [24]

Nvidia's Outlook
- Models suggest Nvidia has the potential for another $1 trillion in sovereign AI market cap and another $1 trillion in physical AI market cap, potentially peaking around a $6.5 to $7 trillion market cap [22][23]
Lisa Su: Vows to Capture a "Double-Digit" Share of the AI Chip Market; AMD Revenue Seen Growing Over 35% Annually and Profits More Than Doubling by 2030
华尔街见闻· 2025-11-12 10:12
Core Viewpoint
- AMD's CEO, Lisa Su, provided an optimistic outlook for the AI market, projecting accelerated sales growth over the next five years, with a target of a "double-digit" share of the data center AI chip market [1][3]

Financial Goals
- AMD targets a compound annual revenue growth rate (CAGR) exceeding 35% over the next three to five years, with AI data center revenue expected to grow at an average of 80% [1][12]
- The company projects that annual data center chip revenue will reach $100 billion within five years, and profits are expected to more than double by 2030 [1][3]
- AMD's earnings per share (EPS) is expected to rise to $20 within three to five years, far above the current analyst expectation of $2.68 for 2025 [14][15]

Market Size and Growth
- The total addressable market (TAM) for AI data centers is expected to exceed $1 trillion by 2030, up from roughly $200 billion this year, a CAGR of over 40% (a quick arithmetic check follows this summary) [3][16]
- The AI processor market is projected to surpass $500 billion by 2028 [4]

Competitive Positioning
- AMD aims to capture a "double-digit" share of the AI chip sector, currently dominated by NVIDIA, which holds over 90% of the market [9]
- The company emphasizes continued strong demand for AI infrastructure, countering earlier expectations that AI investment would level off [9][10]

Product Development and Strategy
- AMD plans to launch its next-generation MI400 series AI chips in 2026, along with a complete "rack-scale" system to support large-scale AI models [17]
- The company is also strengthening its software ecosystem through strategic acquisitions in the AI software domain [17]

Recent Performance and Market Reaction
- AMD reported a 36% year-over-year revenue increase to $9.246 billion for Q3, with data center revenue growing 22% to $4.3 billion [19]
- Despite positive long-term projections, AMD's stock was volatile, reflecting investor concern about the pace of returns on AI investment [20]
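The TAM figures above are internally consistent; a short check of the compound-growth arithmetic, using the article's round numbers (the exact base and rate are the article's, not AMD's precise guidance), might look like this:

```python
# Quick check of the AI data center TAM growth arithmetic cited above.
tam_2025 = 200e9          # ~$200B TAM "this year" per the article
cagr = 0.40               # ">40%" CAGR, taken as a round 40% for illustration
years = 5                 # through 2030

tam_2030 = tam_2025 * (1 + cagr) ** years
print(f"Implied 2030 TAM: ${tam_2030 / 1e12:.2f}T")   # ~= $1.08T, consistent with ">$1 trillion"

# CAGR required to reach exactly $1T from $200B in five years:
required = (1e12 / tam_2025) ** (1 / years) - 1
print(f"Required CAGR: {required:.1%}")               # ~= 38.0%
```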
Automating Excellence: Transforming Work Through Technology | Tharun Theja S | TEDxVCE
TEDx Talks· 2025-10-13 15:57
[Music] All right. So, let's get started, guys. Thank you. Thank you, everybody. Thanks a lot for very patiently waiting all this while; you've shown the most patience, waiting so long for this day. I'm sure you didn't wait 100% for my talk, but there should be a reason you waited, and I'm trying to do the best that I can so that this really makes sense in your lives. So, with no further ado, I'll get started on the actual topic. Let's start with the cycle of life, where today we start wit ...
X @Avi Chawla
Avi Chawla· 2025-09-21 06:33
PyTorch dataloader has 2 terrible default settings. Fixing them gave me ~5x speedup.

When you train a PyTorch model on a GPU:
- .to(device) transfers the data to the GPU.
- Everything after this executes on the GPU.

This means when the GPU is working, the CPU is idle, and when the CPU is working, the GPU is idle.

Memory pinning optimizes this as follows:
- When the model is trained on the 1st mini-batch, the CPU can transfer the 2nd mini-batch to the GPU.
- This ensures that the GPU does not have to wait for the ne ...
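The post is cut off, but the two DataLoader defaults it most plausibly refers to are `num_workers=0` and `pin_memory=False`. A minimal sketch of changing both, plus `non_blocking=True` on the host-to-device copy, is shown below; the dataset shape, batch size, and worker count are illustrative assumptions, not values from the post.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
dataset = TensorDataset(torch.randn(10_000, 128), torch.randint(0, 10, (10_000,)))

# Defaults: single-process loading, pageable host memory.
slow_loader = DataLoader(dataset, batch_size=256)  # num_workers=0, pin_memory=False

# Tuned: worker processes prepare the next batches while the GPU computes,
# and pinned (page-locked) memory enables asynchronous host-to-device copies.
fast_loader = DataLoader(dataset, batch_size=256, num_workers=4, pin_memory=True)

for x, y in fast_loader:
    # non_blocking=True lets the copy overlap with GPU compute when memory is pinned.
    x = x.to(device, non_blocking=True)
    y = y.to(device, non_blocking=True)
    # ... forward/backward pass here ...
```

The speedup depends on how much of each step is data loading versus compute; the ~5x figure is the author's measurement, not a general guarantee.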
Nvidia's getting into AMD's business with $5B stake in Intel, says Constellation's Ray Wang
CNBC Television· 2025-09-19 11:41
Intel coming off its best day in a long, long time. The stock surged 22% following Nvidia's announcement that it will invest $5 billion in Intel. Joining us now is Ray Wang, Constellation Research founder and chairman. I'm looking over at Becky's chair; there's a lot of room there. Normally you're in here with us. Ray, I put on some extra makeup because you usually take a picture and tweet it out. What happened? Why aren't you here? Hey, I'd love to be in New York. Happy Friday. I ended up i ...
X @郭明錤 (Ming-Chi Kuo)
郭明錤 (Ming-Chi Kuo)· 2025-09-18 14:27
Key Industry Takeaways from Nvidia's $5 Billion Investment in Intel

1. Partnership Could Define and Accelerate the AI PC Landscape
For Nvidia, developing its own Windows-on-ARM processors carries high uncertainty; for Intel, establishing a competitive edge in GPUs is difficult. Teaming up (CPU + GPU) could create powerful synergies and advantages across the PC ecosystem.

2. Significant Synergistic Potential in x86 / Mid & Low-Range / Inference AI Servers
A key trend ahead is enterprises building x86-based / mid ...
X @郭明錤 (Ming-Chi Kuo)
郭明錤 (Ming-Chi Kuo)· 2025-09-18 14:27
Collaboration Synergies
- The Nvidia-Intel collaboration aims to define the AI PC and accelerate its development, leveraging combined CPU and GPU strengths [1]
- Potentially high synergy in x86/mid-to-low-end/inference AI servers, combining Intel's x86 server enterprise customer and channel resources with Nvidia's AI chip technology [2]

Competitive Landscape Impact
- The investment may shift market share among competitors such as AMD in PC, GPU, and x86 server chips, and Broadcom in networking chips [2]

TSMC's Perspective
- TSMC's advanced process technology leadership is expected to hold until at least 2030, unaffected by the Nvidia-Intel collaboration [1]
- TSMC's AI chip orders are unaffected, as AI chips require the most advanced processes [2]
- Overall risk to TSMC is considered controllable, given continued Nvidia and Intel orders and the lower contribution from networking products that use less advanced processes [2]
Optical Expo Observations and Feedback
2025-09-15 01:49
Summary of Key Points from the Conference Call

Industry Overview
- The optical module industry is experiencing high growth, with demand expected to remain saturated from the second half of 2025 into 2026, driven by the introduction of 1.6T solutions, primarily benefiting from the volume ramp of NVIDIA's CX8 network card, with the CX9 network card potentially kicking off 3.2T demand [2][5][20]
- The iteration cycle for optical modules has shortened to roughly two years, favoring leading manufacturers [2][5]

Core Insights and Arguments
- Domestic second-tier optical module manufacturers such as Solstice, Cambridge, and Lantech are seizing the high demand for AI optical modules to penetrate the North American market, despite limited openings given the established suppliers [2][6]
- Domestic optical chip manufacturers are accelerating their technology roadmaps, with notable progress from Yuanjie in CW laser technology and Changguang Huaxin in 100G EML, strengthening their competitiveness [2][7][8]
- The CPC (Copax) plus pluggable optical module solution proposed by Xuchuang is gaining traction, having been adopted by overseas companies such as Broadcom and Marvell, making it a significant competitor in the short term [2][13]

Emerging Technologies
- Liquid cooling products were prominently showcased at the 2025 data center exhibition, indicating readiness for NVIDIA-driven opportunities, with high demand noted [3]
- OCS (Optical Circuit Switching) technology is gaining attention, with Google pushing its development and domestic manufacturers such as Guangku and Lingyun Light showing related products [12]
- NPU (Near-Package Unit) technology is emerging as a promising alternative to CPU, with expectations of earlier market adoption and significant switch demand [11]

Market Dynamics
- Optical module prices are expected to decline less in 2026 thanks to strong demand and tight supply, with shortages of core components such as EML and CW light sources supporting price stability [4][20]
- North American demand for 800G and 1.6T is creating opportunities for domestic manufacturers despite the competitive landscape [6]

Notable Developments
- Changfei Fiber showcased an AI intelligent hub solution and hollow-core fiber products, reaching a milestone with a 100-kilometer hollow-core fiber link demonstrating a loss of 0.089 dB per kilometer, nearing the limits of quartz fiber [4][18][19]
- The rapid development of supernodes in China is being driven by major players such as Huawei and ZTE, indicating a robust growth trend [14]

Conclusion
- The optical module industry is poised for significant change driven by technological advances and market dynamics, with new solutions such as CPC and hollow-core fiber potentially reshaping the competitive landscape and driving growth [21]
X @BREAD | ∑:
BREAD | ∑:· 2025-08-18 21:06
SALT Architecture & Performance
- SALT persists changes to disk asynchronously while keeping the full top-level tree in memory [1]
- The performance of SALT's authenticated data structure is unaffected by the number of key-value pairs or the number of SSDs, because the structure has a fixed size and resides entirely in memory [1]
- SALT is free to choose the best key-value "database", since the underlying key-value store is an orthogonal concern [1]
- In experiments, SALT is CPU-bound, and the CPU cost does not grow with the number of stored key-value pairs [3]
- SALT performs only one update to the underlying key-value engine per account/storage-slot update, which is considered optimal [4]

Comparison with Other Technologies
- The document acknowledges that the comparisons with NOMT and QMDB may be problematic, because they were not benchmarked at the same number of key-value pairs [2]
- The SALT team does not consider optimizing the key-value engine to be their main task [5]
- The SALT team will conduct a proper evaluation when the paper is published [5]

Key-Value Engine & Scalability
- SALT can use any key-value engine, which is considered a major advantage, since it can track the latest advances [4]
- Even if SALT stores 1 billion or more key-value pairs, the key-value engine is unlikely to replace the CPU as the main bottleneck [4]
- Replaying EVM storage/account updates (over 1 billion) on RocksDB achieved hundreds of thousands of writes per second, far above the CPU-induced bottleneck of 87,000 updates per second (a rough sketch of such a measurement follows this list) [4]
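A minimal, hedged sketch of the kind of replay measurement described in the last bullet: stream synthetic account/storage-slot updates into a key-value backend and compare the measured write rate with the CPU-side budget cited in the post (87,000 updates/s). The in-memory dict is a stand-in backend; plugging in RocksDB through Python bindings is assumed rather than shown, to avoid depending on a specific binding's API, and the key/value shapes are illustrative.

```python
import time

def replay_updates(kv_put, n_updates=1_000_000):
    """Apply n_updates synthetic account/storage-slot writes and return updates per second."""
    start = time.perf_counter()
    for i in range(n_updates):
        # 32-byte keys/values, loosely mimicking flat EVM account/slot records.
        key = i.to_bytes(32, "big")
        value = key[::-1]
        kv_put(key, value)
    elapsed = time.perf_counter() - start
    return n_updates / elapsed

store = {}                                   # stand-in for a real key-value engine
rate = replay_updates(store.__setitem__)
cpu_budget = 87_000                          # updates/s bottleneck attributed to the CPU in the post
print(f"KV writes/s: {rate:,.0f} (CPU-side budget: {cpu_budget:,})")
```

The point of the comparison in the post is that as long as the backend's write rate stays well above the CPU-side budget, the key-value engine is not the bottleneck, which is why SALT treats the choice of engine as orthogonal.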