SRAM
The claim that "China's chip industry started late but is developing fast" is not accurate
Guan Cha Zhe Wang· 2026-02-01 06:11
Recently, the U.S. House Foreign Affairs Committee passed, by an overwhelming bipartisan margin, a joint proposal that would hand Congress review authority over sales of advanced AI chips to China, regulating them on the model of arms sales. This once again highlights the West's long-standing "chokehold" strategy against China in key technologies.

Facing this external blockade, China's chip industry is accelerating its drive toward self-reliance. Major foundries such as SMIC and Hua Hong are running at full capacity utilization and already hold a leading global share in mature process nodes. Although still restricted at advanced nodes, domestic players are pushing hard on 7nm and even 5nm technology; the chip self-sufficiency rate is rising steadily, and development of some high-end AI and server chips is also accelerating.

What historic changes has the value of chips undergone? What progress has been made against these technology "chokeholds," and what weaknesses remain? On these questions, Observer Net (Guancha) invited Dai Jin, author of 《芯片风云》 and a PhD in physics, for an in-depth discussion.

[Interview/Observer Net: Zheng Lehuan]

Observer Net: Hello, Dr. Dai! In your book 《芯片风云》 you compare chips to "modern oil." Could you briefly talk about the "invisible" value that chips provide in human society today?

Dai Jin: Well, it isn't really invisible. Open up any box of electronics at home and it is full of chips. Take a phone apart, for instance (though don't take your own apart, it's expensive) and look at the many teardown photos online: there are plenty of chips inside, some even labeled with what each one does. ...
The huge profits of SK hynix and Samsung are precisely the opportunity for Chinese memory companies
Hua Er Jie Jian Wen· 2026-01-27 02:12
According to a DIGITIMES report on Tuesday, at a recent next-generation semiconductor technology trends seminar in South Korea, Sungkyunkwan University professor Seok-jun Kwon said the memory industry is in a super-cycle, with DRAM prices surging 300-400% within just a few months. Samsung, SK hynix, and Micron cannot meet global demand, which creates an opening for Chinese memory makers to enter the market.

Kwon stressed that Chinese companies' expansion potential is not limited to consumer retail; it extends to the enterprise market. Even a 5-10% market share would be enough to build momentum for future growth. If Chinese companies can gain more technical experience during this window, it will have a profound impact on the global memory-chip landscape.

Catch-up speed exceeding expectations

Chinese memory makers are catching up with Korea's leaders at a remarkable pace. In early 2025, when SK hynix was supplying HBM3E (fifth-generation high-bandwidth memory) chips, ChangXin Memory Technologies (CXMT) was still developing HBM2; by mid-2025, reports said it had leapt to HBM3 development.

Kwon noted that CXMT's accelerated progress stems not only from its own funding but also from local-government and Huawei investment. New technologies are tested not only at CXMT's own fabs but often at Huawei-invested fabs in Guangdong and Shanghai, letting the company identify viable approaches before scaling production internally. This model has shortened CXMT's R&D cycle and sped the commercialization of its innovations.

The strategic significance of the market window

Memory chi ...
Beijing Junzheng: cooperation with Huali Micro mainly involves SRAM and related products
Mei Ri Jing Ji Xin Wen· 2026-01-26 05:04
NBD AI Flash: An investor asked on the investor-relations platform whether the company's eMMC capacity is sufficient, what products it purchases from Huali Micro, and whether that capacity has been locked in. Beijing Junzheng (300223.SZ) responded on the platform on January 26 that eMMC capacity is currently tight and the company will continue to strengthen cooperation and communication with its suppliers; its cooperation with Huali Micro mainly involves SRAM and related products. (Source: National Business Daily) ...
This innovation breaks the deadlock in memory scaling!
半导体芯闻· 2026-01-23 09:38
Core Insights
- The demand for low-power memory close to computing logic is driven by artificial intelligence workloads, leading to new memory designs and material explorations across various applications [1][11]
- DRAM remains the preferred technology for most applications despite challenges in miniaturization and increasing demand from AI data centers, resulting in an industry-wide memory shortage [1][11]

Group 1: DRAM and Memory Technologies
- The miniaturization of DRAM faces challenges, with designers looking to vertical structures to increase density while avoiding high lithography costs [1]
- Low-leakage transistors are being explored to reduce refresh power in large storage arrays, with materials like IGZO showing promise due to their acceptable carrier mobility and low leakage [1][2]
- Research from Samsung indicates that zinc migration during IGZO annealing can create under-coordinated indium sites that degrade performance, but optimizing electrode materials can mitigate interface migration and oxygen loss [2]

Group 2: Innovations in Oxide Semiconductors
- Researchers from ChangXin Memory Technologies (CXMT) created functional IGZO devices by optimizing deposition processes and reducing hydrogen content, achieving a drive current of 60.9 μA/μm [3]
- Kioxia demonstrated a 3D DRAM oxide channel replacement process that reduces thermal degradation, achieving over 30 μA per cell in prototype storage units [5]
- A hybrid design combining oxide semiconductors and silicon in a 256×256 array improved density by 3.6 times and reduced energy consumption by 15% compared to high-density SRAM [6]

Group 3: Advanced Memory Architectures
- A fully self-aligned design from Georgia Tech improved performance by 10 times and reduced the energy-delay-area product by 75-80% compared to traditional SRAM cells [8]
- Researchers are exploring the integration of transistor-based memory into back-end-of-line processes, balancing the speed and maturity of silicon technology against simpler but lower-performing alternatives [8]
- Non-volatile memory designs using ferroelectric layers with IGZO channels have shown promising durability and performance, with a wide 1.6 V storage window [9]
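The low-leakage transistor work summarized above matters because DRAM refresh power scales roughly with how often cells must be rewritten. A minimal back-of-envelope sketch, with all numbers (cell count, per-refresh energy, retention times) being illustrative assumptions rather than figures from the article:

```python
# Sketch: why low-leakage channels (e.g. IGZO) cut DRAM refresh power.
# Average refresh power ~ (cells refreshed per period) / (retention time).
# All constants below are illustrative assumptions, not measured values.

def refresh_power_w(cells: float, energy_per_refresh_j: float, retention_s: float) -> float:
    """Average refresh power for an array refreshed once per retention period."""
    return cells * energy_per_refresh_j / retention_s

CELLS = 8e9          # hypothetical 8-gigabit array
E_REFRESH = 1e-15    # ~1 fJ per cell refresh (assumed)

si_power = refresh_power_w(CELLS, E_REFRESH, 64e-3)  # silicon: ~64 ms retention
lo_power = refresh_power_w(CELLS, E_REFRESH, 60.0)   # low-leakage: minutes-scale
print(f"Si-class: {si_power*1e3:.3f} mW, low-leakage: {lo_power*1e6:.3f} uW")
```

The point of the sketch is only the ratio: retention stretched from milliseconds to minutes shrinks refresh power by roughly three orders of magnitude, which is why leakage, not mobility, is the headline IGZO metric.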
Jensen Huang: SRAM cannot replace HBM
半导体芯闻· 2026-01-08 10:36
Core Viewpoint
- NVIDIA CEO Jensen Huang emphasized the ongoing necessity of HBM (High Bandwidth Memory) despite the advantages of SRAM (Static Random Access Memory) and other cost-saving solutions for AI workloads, noting that each memory type has its own strengths and limitations [1][2]

Group 1: Memory Technology Insights
- Huang noted that while SRAM can significantly enhance efficiency for certain workloads, its capacity limitations may hinder its effectiveness in large-scale AI production environments compared to HBM, which offers better bandwidth and density [1][2]
- The distribution of workload pressure across memory types and interconnects is dynamic, with demands varying by AI model architecture, such as mixture-of-experts and multi-modal models [2]

Group 2: Market Dynamics and Cost Considerations
- Despite customer complaints about the cost of HBM and GPUs, Huang sees no inherent conflict between those costs and the demand for advanced memory solutions, suggesting the market for NVIDIA's products remains robust [3]
What is SRAM, and how does it differ from HBM?
半导体芯闻· 2026-01-04 10:17
Core Viewpoint
- Nvidia's $20 billion acquisition of Groq's Language Processing Unit (LPU) technology highlights the rising importance of SRAM in the AI, server, and high-performance computing (HPC) sectors, shifting the focus from mere capacity to speed, latency, and energy consumption [1][5]

Group 1: SRAM and HBM Compared
- SRAM (Static Random Access Memory) offers high speed and low latency and is commonly used inside CPUs, GPUs, and AI chips. It is volatile (data is lost when power is off) but requires no refresh, making it well suited to immediate data processing [3][4]
- HBM (High Bandwidth Memory) is an advanced form of DRAM that uses 3D stacking and through-silicon vias (TSVs) to connect multiple memory layers to a logic die, offering high bandwidth (up to several TB/s) and lower power consumption than traditional DRAM, at higher cost and complexity [4][6]

Group 2: Shift in Market Demand
- The focus in AI development has shifted from raw computational power to real-time inference, driven by applications such as voice assistants, translation, customer service, and autonomous systems, where low latency is critical [6]
- Nvidia's acquisition of Groq's technology is not just about enhancing AI accelerator capability; it is fundamentally tied to SRAM's strength in providing extremely low-latency memory access, which is essential for real-time AI applications [5][6]
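The "up to several TB/s" figure for HBM above comes directly from its wide, stacked interface. A minimal sketch of that arithmetic, assuming round HBM3-class numbers (1024-bit bus, ~6.4 Gb/s per pin, 6 stacks per package) that are approximate public figures, not values from the article:

```python
# Back-of-envelope: how HBM reaches multi-TB/s aggregate bandwidth.
# Interface width and pin rate are approximate HBM3-class assumptions.

def stack_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack in GB/s (width x rate, bits -> bytes)."""
    return bus_width_bits * pin_rate_gbps / 8

per_stack = stack_bandwidth_gbps(1024, 6.4)  # ~819 GB/s per stack
total = 6 * per_stack                        # ~4.9 TB/s for a 6-stack package
print(f"per stack: {per_stack:.0f} GB/s, 6 stacks: {total/1000:.1f} TB/s")
```

The same calculation for SRAM goes the other way: on-die SRAM wins not on interface width but on latency (no off-package hop, no refresh), which is the distinction the article draws.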
Breaking through the "storage wall" on three fronts
36Ke· 2025-12-31 03:35
Core Insights
- The explosive growth of AI and high-performance computing is driving an exponential increase in computing demand, leading to a significant challenge known as the "storage wall" [1][2]
- Competition among AI and high-performance computing chips will hinge not only on transistor density and frequency but also on memory-subsystem performance, energy efficiency, and integration innovation [1][4]

Group 1: AI and Computing Demand
- The evolution of AI models has driven a dramatic rise in computational requirements, with parameter counts growing from millions to trillions and training compute increasing more than 10^18-fold over the past 70 years [2][4]
- Computational performance has grown far faster than memory bandwidth, creating a "bandwidth wall" that limits overall system performance [4][7]

Group 2: Memory Technology Challenges
- Traditional memory technologies struggle to meet unprecedented performance, power, and area (PPA) demands from applications ranging from large language models to edge devices [1][4]
- DRAM bandwidth has grown only about 100x over the past 20 years, versus a 60,000x increase in hardware peak floating-point performance [4][7]

Group 3: TSMC's Strategic Insights
- TSMC emphasizes that memory technology will evolve around "storage-compute synergy," transitioning from traditional on-chip caches to integrated memory solutions that improve performance and energy efficiency [7][11]
- TSMC is focused on optimizing embedded memory technologies such as SRAM, MRAM, and DCiM to meet the demands of AI and HPC [11][33]

Group 4: SRAM Technology
- SRAM is identified as the key technology for high-speed embedded memory, offering low latency, high bandwidth, and low power consumption, making it essential to many high-performance chips [12][16]
- SRAM area scaling is critical for optimizing chip performance but faces growing challenges as technology nodes advance to 2nm [12][17]

Group 5: Computing-in-Memory (CIM)
- CIM architecture integrates computing capability directly into memory arrays, sharply reducing the energy consumption and latency of data movement [21][24]
- TSMC believes Digital Computing-in-Memory (DCiM) has greater potential than Analog Computing-in-Memory (ACiM) due to its compatibility with advanced processes and flexible precision control [26][28]

Group 6: MRAM Technology
- MRAM is emerging as a viable alternative to traditional embedded flash memory, offering non-volatility, high reliability, and durability suited to automotive electronics and edge AI [33][35]
- TSMC's N16 FinFET embedded MRAM technology meets stringent automotive requirements, showcasing its potential in high-performance applications [39][49]

Group 7: System-Level Integration
- TSMC advocates a system-level approach to memory breakthroughs, emphasizing 3D packaging and chiplet integration to achieve high bandwidth and low latency [50][54]
- Future AI chips may blur the boundary between memory and computation, with 3D stacking and integrated voltage regulators enhancing overall system performance [60][61]

Group 8: Future Outlook
- Storage technology for AI computing faces a comprehensive wave of innovation, with TSMC's roadmap centered on SRAM, MRAM, and DCiM to overcome the "bandwidth wall" and energy-efficiency challenges [62]
- Full-stack optimization, from transistors to systems, will be crucial to leading the next era of AI computing [62]
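The compute-versus-bandwidth imbalance described above (60,000x compute growth against 100x bandwidth growth) is usually visualized with the roofline model. A minimal sketch, using round illustrative hardware numbers that do not describe any specific chip:

```python
# Roofline-style sketch of the "bandwidth wall": attainable throughput is
# capped by min(peak compute, memory bandwidth x arithmetic intensity).
# The hardware numbers are round illustrative values, not a specific chip.

def attainable_flops(peak_flops: float, mem_bw_bytes: float, intensity: float) -> float:
    """Attainable FLOP/s for a kernel performing `intensity` FLOPs per byte moved."""
    return min(peak_flops, mem_bw_bytes * intensity)

peak = 1000e12  # 1000 TFLOP/s peak compute (illustrative)
bw = 3e12       # 3 TB/s memory bandwidth (illustrative)

# Ridge point: arithmetic intensity needed before a kernel is compute-bound.
ridge = peak / bw                        # ~333 FLOPs per byte
# A memory-bound inference kernel (GEMV-like decode step, ~1 FLOP/byte):
low = attainable_flops(peak, bw, 1.0)    # only 3 TFLOP/s, 0.3% of peak
print(f"ridge: {ridge:.0f} FLOP/byte; at 1 FLOP/byte: {low/peak:.1%} of peak")
```

This is exactly the idle-compute picture the article paints: below the ridge point, adding more peak FLOP/s changes nothing, and only more bandwidth (HBM, CIM, 3D stacking) moves the ceiling.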
Breaking through the "storage wall" on three fronts
半导体行业观察· 2025-12-31 01:40
Core Viewpoint
- The article discusses the exponential growth of AI and high-performance computing, highlighting the emerging challenge of the "storage wall" that limits AI-chip performance due to inadequate memory bandwidth and efficiency [1][2]

Group 1: AI and Storage Demand
- The evolution of AI models has driven a dramatic rise in computational demand, with parameter counts growing from millions to trillions and training compute increasing more than 10^18-fold over the past 70 years [2]
- Any computing system's performance is determined by its peak compute and its memory bandwidth; hardware peak floating-point performance has grown 60,000x over the past 20 years while DRAM bandwidth has grown only 100x [5][8]

Group 2: Memory Technology Challenges
- The rapid growth in compute performance has not been matched by memory bandwidth improvements, creating a "bandwidth wall" that restricts overall system performance [5][8]
- AI inference is particularly affected: memory bandwidth becomes the main bottleneck, leaving computational resources idle while they wait for data [8]

Group 3: Future Directions in Memory Technology
- TSMC emphasizes that memory evolution in the AI and HPC era requires comprehensive optimization across materials, processes, architectures, and packaging [12]
- Future memory architecture will center on "storage-compute synergy," transitioning from traditional on-chip caches to integrated memory solutions that enhance performance and efficiency [12][10]

Group 4: SRAM as a Key Technology
- SRAM is identified as a critical technology for high-performance embedded memory due to its low latency, high bandwidth, and energy efficiency, and is widely used in high-performance chips [13][20]
- TSMC's SRAM technology has evolved through successive process nodes, with ongoing innovations aimed at improving density and efficiency [14][22]

Group 5: Computing-in-Memory (CIM) Innovations
- CIM architecture integrates computing capability directly within memory arrays, significantly reducing data movement and energy consumption [23][26]
- TSMC believes Digital Computing-in-Memory (DCiM) has greater potential than Analog Computing-in-Memory (ACiM) due to its compatibility with advanced processes and flexible precision control [28][30]

Group 6: MRAM Developments
- MRAM is emerging as a viable alternative to traditional embedded flash memory, offering non-volatility, high reliability, and durability suited to automotive electronics and edge AI [35][38]
- TSMC's MRAM technology meets stringent automotive requirements, providing robust performance and longevity [41][43]

Group 7: System-Level Integration
- TSMC advocates a system-level approach to memory-compute integration, using advanced packaging technologies such as 2.5D/3D integration to raise bandwidth and reduce latency [50][52]
- Future AI chips may blur the line between memory and compute, with tightly integrated architectures that optimize energy efficiency and performance [58][60]
Beijing Junzheng: the company's SRAM is mainly aimed at the automotive, industrial, and medical markets
Zheng Quan Ri Bao Wang· 2025-12-30 11:12
Securities Daily Online, December 30: Beijing Junzheng (300223) said on an investor-interaction platform that its SRAM is mainly used in the automotive, industrial, and medical markets, and that its products can replace comparable parts from Renesas and Infineon. ...
Beijing Junzheng (300223.SZ): its SRAM is a standalone chip; the company currently offers no on-chip SRAM IP business
Ge Long Hui· 2025-12-30 08:52
Group 1
- The core point of the article is that Beijing Junzheng (300223.SZ) has clarified that its SRAM is a standalone chip and that it currently does not provide an on-chip SRAM IP business [1]