Breaking Through the "Storage Wall": Advancing on Three Fronts
TSMC (US:TSM) · 36Kr · 2025-12-31 03:35

Core Insights
- The explosive growth of AI and high-performance computing is driving an exponential increase in computing demand, creating the central challenge known as the "storage wall" [1][2]
- Competition in AI and HPC chips will hinge not only on transistor density and frequency but also on memory-subsystem performance, energy efficiency, and integration innovation [1][4]

Group 1: AI and Computing Demand
- The evolution of AI models has driven a dramatic rise in computational requirements, with model parameters growing from millions to trillions and training computation increasing by more than 10^18 times over the past 70 years [2][4]
- Computational performance has grown far faster than memory bandwidth, creating a "bandwidth wall" that limits overall system performance [4][7]

Group 2: Memory Technology Challenges
- Traditional memory technologies are struggling to meet unprecedented performance, power, and area (PPA) demands from applications ranging from large language models to edge devices [1][4]
- DRAM bandwidth has grown only about 100x over the past 20 years, compared with a roughly 60,000x increase in hardware peak floating-point performance [4][7] (a back-of-the-envelope comparison appears in the first sketch after this summary)

Group 3: TSMC's Strategic Insights
- TSMC emphasizes that memory technology will evolve around "storage-compute synergy," moving from traditional on-chip caches toward integrated memory solutions that improve both performance and energy efficiency [7][11]
- TSMC is focusing on optimizing embedded memory technologies such as SRAM, MRAM, and DCiM to address the challenges posed by AI and HPC demands [11][33]

Group 4: SRAM Technology
- SRAM remains the key technology for high-speed embedded memory, offering low latency, high bandwidth, and low power consumption, making it essential across high-performance chips [12][16]
- SRAM area scaling is critical for optimizing chip performance but faces growing challenges as technology nodes advance to 2nm [12][17]

Group 5: Computing-in-Memory (CIM)
- The CIM architecture integrates computing capability directly into the memory array, sharply reducing the energy consumption and latency associated with data movement [21][24]
- TSMC sees greater potential in DCiM (digital computing-in-memory) than in ACiM (analog computing-in-memory), citing its compatibility with advanced processes and its flexibility in precision control [26][28] (see the DCiM sketch after this summary)

Group 6: MRAM Technology
- MRAM is emerging as a viable alternative to traditional embedded flash, offering non-volatility, high reliability, and durability, which suits automotive electronics and edge AI [33][35]
- TSMC's N16 FinFET embedded MRAM technology meets stringent automotive requirements, showcasing its potential in high-performance applications [39][49]

Group 7: System-Level Integration
- TSMC advocates a system-level approach to memory breakthroughs, emphasizing 3D packaging and chiplet integration to achieve high bandwidth and low latency [50][54] (a rough stacked-memory bandwidth estimate appears after this summary)
- Future AI chips may blur the boundary between memory and computation, with 3D stacking and integrated voltage regulators further improving overall system performance [60][61]

Group 8: Future Outlook
- Storage technology for AI computing is entering a period of comprehensive innovation, with TSMC's roadmap centered on SRAM, MRAM, and DCiM to overcome the "bandwidth wall" and energy-efficiency challenges [62]
- The ability to achieve full-stack optimization, from transistors to systems, will be crucial for leading the next era of AI computing [62]
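The compute-versus-bandwidth gap cited in Groups 1 and 2 is easy to quantify. The sketch below combines the article's 60,000x and 100x growth figures with a roofline-style machine-balance estimate; the absolute accelerator numbers (1 PFLOP/s peak, 3 TB/s of off-chip bandwidth) are hypothetical placeholders for illustration, not figures from the article or from TSMC.

```python
# Illustrative only: quantify the "bandwidth wall" gap described above.
# The 60,000x / 100x growth factors come from the article; the absolute
# accelerator numbers below are hypothetical placeholders.

flops_growth = 60_000      # peak floating-point performance growth over ~20 years (article)
bw_growth = 100            # DRAM bandwidth growth over the same period (article)
print(f"Compute/bandwidth gap widened by ~{flops_growth / bw_growth:.0f}x")

# Machine balance: how many operations a kernel must perform per byte
# fetched from DRAM just to keep the compute units busy (roofline-style).
peak_flops = 1.0e15        # hypothetical 1 PFLOP/s accelerator
dram_bw = 3.0e12           # hypothetical 3 TB/s of off-chip bandwidth
balance = peak_flops / dram_bw
print(f"Required arithmetic intensity: ~{balance:.0f} FLOPs per byte")
```

At such a balance point, memory-bound workloads such as large-model inference leave most of the peak compute idle, which is the practical meaning of the "bandwidth wall."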
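To make the DCiM idea from Group 5 concrete, here is a minimal Python sketch of a bit-serial multiply-accumulate of the kind a digital CIM macro performs: weights sit inside the memory array, activation bit-planes are streamed in one per cycle, a digital adder tree reduces each bit-plane, and a shift-and-add restores the binary weights. The function name `dcim_mac` and all parameters are illustrative assumptions, not TSMC's design.

```python
import numpy as np

def dcim_mac(weights, activations, act_bits=8):
    """Bit-serial multiply-accumulate, as a digital CIM macro might do it.

    weights     -- 1-D int array, conceptually held inside the SRAM array
    activations -- 1-D array of unsigned ints, streamed one bit-plane at a time
    Returns the dot product, accumulated with shift-and-add partial sums.
    """
    acc = 0
    for b in range(act_bits):
        # One bit-plane of the activations enters the array per cycle.
        bit_plane = (activations >> b) & 1
        # Inside the macro: AND weights with the bit-plane, then reduce
        # with a digital adder tree (modelled here as a plain sum).
        partial = int(np.sum(weights * bit_plane))
        # Shift-and-add accumulation restores this bit's binary weight.
        acc += partial << b
    return acc

# Sanity check against an ordinary dot product.
rng = np.random.default_rng(0)
w = rng.integers(-8, 8, size=64)      # signed weights stored in the array
x = rng.integers(0, 256, size=64)     # 8-bit unsigned activations
assert dcim_mac(w, x) == int(np.dot(w, x))
print(dcim_mac(w, x))
```

The appeal of the digital variant, per the article, is that the adder-tree logic scales with the process node and the accumulation precision can be chosen freely, unlike analog charge- or current-summing schemes.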
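For the system-level point in Group 7, a rough estimate shows why 2.5D/3D packaging is tied directly to bandwidth: co-packaging very wide memory interfaces next to the compute die multiplies usable bandwidth. The interface width, per-pin rate, and stack count below are generic HBM-class placeholders, not figures from the article.

```python
# Rough stacked-memory bandwidth estimate (illustrative parameters only;
# the values below are generic HBM-class placeholders, not article data).
interface_width_bits = 1024      # wide interface enabled by 2.5D/3D integration
data_rate_gbps = 6.4             # per-pin transfer rate in Gbit/s
stacks = 6                       # memory stacks co-packaged with the compute die

per_stack_gbs = interface_width_bits * data_rate_gbps / 8   # GB/s per stack
total_gbs = per_stack_gbs * stacks
print(f"~{per_stack_gbs:.0f} GB/s per stack, ~{total_gbs / 1000:.1f} TB/s total")
```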
