Wafer-Level Integration Technology

Wafer-Level Chips Are the Future
36Kr · 2025-06-29 23:49
Group 1: Industry Overview
- The computational power required for large AI models has increased 1,000-fold in just two years, far outpacing hardware iteration speeds [1]
- Current AI training hardware falls into two main camps: dedicated accelerators built on wafer-level integration technology, and traditional GPU clusters [1][2]

Group 2: Wafer-Level Chips
- Wafer-level chips are seen as a breakthrough, integrating multiple dies on a single wafer to raise bandwidth and reduce latency [3][4]
- A single die tops out at roughly 858 mm², a maximum set by the lithography exposure window [2][3]

Group 3: Key Players
- Cerebras has developed the WSE-3 wafer-level chip on TSMC's 5nm process, featuring 4 trillion transistors and 900,000 AI cores [5][6]
- Tesla's Dojo takes a different approach, integrating 25 proprietary D1 chips on a wafer and delivering 9 PFLOPS of compute [10][11]

Group 4: Performance Comparison
- WSE-3 can train models 10 times larger than GPT-4 and Gemini, with a peak performance of 125 PFLOPS [8][14]
- The WSE-3 offers 880 times the on-chip memory capacity and 7,000 times the memory bandwidth of the NVIDIA H100 [8][13] (see the sketch after this list)

Group 5: Cost and Scalability
- Tesla's Dojo system is estimated to cost between $300 million and $500 million, while Cerebras WSE systems range from $2 million to $3 million [18][19]
- NVIDIA GPUs, while cheaper up front, face long-term operational cost issues due to high energy consumption and performance bottlenecks [18][19]

Group 6: Future Outlook
- The wafer-level chip architecture is considered the highest integration density achievable for a computing node, pointing to significant potential for future AI training hardware [20]
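As a rough sanity check on the memory ratios cited in Group 4, the sketch below recomputes them from publicly quoted spec-sheet figures. The baseline numbers (roughly 50 MB of on-chip SRAM and 3.35 TB/s of HBM bandwidth for the H100; roughly 44 GB of on-wafer SRAM and 21 PB/s of aggregate memory bandwidth for the WSE-3) are assumptions drawn from vendor materials, not from the article itself.

```python
# Sanity check of the "880x on-chip memory" and "7,000x memory bandwidth" claims,
# using assumed spec-sheet figures (not taken from the article).

H100_SRAM_GB = 0.05        # ~50 MB of on-chip L2 SRAM (assumed spec)
H100_MEM_BW_TBPS = 3.35    # ~3.35 TB/s HBM3 bandwidth, SXM variant (assumed spec)

WSE3_SRAM_GB = 44          # ~44 GB of on-wafer SRAM (assumed spec)
WSE3_MEM_BW_TBPS = 21_000  # ~21 PB/s aggregate SRAM bandwidth (assumed spec)

capacity_ratio = WSE3_SRAM_GB / H100_SRAM_GB
bandwidth_ratio = WSE3_MEM_BW_TBPS / H100_MEM_BW_TBPS

print(f"On-chip memory capacity ratio: {capacity_ratio:,.0f}x")   # ~880x
print(f"Memory bandwidth ratio:        {bandwidth_ratio:,.0f}x")  # ~6,269x
```

The capacity ratio lands exactly at 880x; the bandwidth ratio comes out near 6,300x rather than 7,000x, which suggests the cited figure rounds the H100 baseline down to about 3 TB/s.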
In Depth | A Conversation with the Cerebras CEO: In 3-5 Years Our Dependence on Transformers Will Decrease, and NVIDIA's Market Share Will Fall to 50-60%
Z Potentials · 2025-04-06 04:55
Image source: 20VC with Harry Stebbings

Z Highlights: Andrew Feldman is the co-founder and CEO of Cerebras, the world's fastest AI inference and training platform. In this interview, he and 20VC host Harry Stebbings discuss how the AI era is changing chip design requirements, along with broader industry trends.

How AI Is Changing Chip Requirements

Harry: It's so good to see you. I've been looking forward to this conversation for a long time. Eric often mentions you and always speaks very highly of you; thank you so much for joining me.

Andrew: Harry, thanks for having me. It's an honor to be part of this conversation.

Harry: This is going to be a great conversation, and I feel like I'll learn a lot from you today. Let's go back to 2015. What opportunity did you and your team see in AI that led you to found Cerebras?

Andrew: We saw the rise of an emerging workload, which for a computer architect is a dream come true. We had found a new problem worth solving, which meant we might be able to build hardware better suited to it. Back in 2015, my co-founders Gary, Sean, JP, and Michael were among the first to foresee the rise of AI. This ...