SRAM
SRAM Just Got Harder
半导体行业观察· 2026-03-27 00:52
SRAM is a critical component of every computing system, but it has failed to keep pace with logic scaling, creating increasingly thorny problems that have grown worse over the past five years.

As early as 1990, Hennessy and Patterson published Computer Architecture: A Quantitative Approach. Even then, the authors clearly recognized that memory capacity and performance would become the key bottleneck for future processing power (see Figure 1). For decades, hardware architectures have sidestepped the problem, typically using SRAM as a cache backed by larger off-chip DRAM. This makes memory appear larger, but it is often far slower. This is the so-called "memory wall."

In all forms of computing, programs and data are stored in static random-access memory (SRAM). The processor reads instructions from this memory, and those instructions tell the processor what operations to perform on data that is also stored there.

SRAM is cheaper than the registers a processor uses for temporary storage. Although a register cell can use the same number of transistors as an SRAM cell, registers rely on more expensive decode and access mechanisms that do not scale as the register file grows.

An SRAM memory consists of an array of storage cells surrounded by circuitry that can read and store data in random order. In many cases the surrounding logic is semi-custom, because it grows with the size of the storage array ...
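The cache-plus-DRAM workaround described above can be quantified with the classic average memory access time (AMAT) model popularized by Hennessy and Patterson. A minimal sketch follows; the latency figures are illustrative assumptions, not measurements of any particular part:

```python
def amat(hit_time_ns: float, miss_rate: float, miss_penalty_ns: float) -> float:
    """Average memory access time: every access pays the SRAM cache hit time,
    and a fraction (miss_rate) of accesses additionally pays the DRAM penalty."""
    return hit_time_ns + miss_rate * miss_penalty_ns

# Illustrative numbers: ~1 ns on-chip SRAM cache, ~100 ns off-chip DRAM.
good_cache = amat(1.0, 0.05, 100.0)  # high hit rate: DRAM "looks" like ~6 ns
bad_cache = amat(1.0, 0.50, 100.0)   # low hit rate: the memory wall reappears
print(good_cache, bad_cache)
```

With a 95% hit rate the slow DRAM is mostly hidden (6 ns average); at 50% the average access time balloons to 51 ns, which is why workloads that defeat caching hit the memory wall directly.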
What Are the Highlights of NVIDIA GTC and the Global Optical Communications Conference?
淡水泉投资· 2026-03-27 00:03
Important note: this material does not constitute any form of offer, commitment, or other legal document from 淡水泉, nor is it professional investment, legal, or financial advice. Past performance does not predict future results. Invest with caution.

This March, the global technology industry hosts two heavyweight events: NVIDIA's GTC conference in San Jose, California, dubbed the "Super Bowl of AI," and OFC, the global optical communications conference, in Los Angeles. What industry trends do these two events signal, and how will they shape the development of the AI industry?

01 AI Compute: The Shift from "Training" to "Inference"

In the early phase of the generative-AI boom that began in 2022, large-model development centered on the "training" stage, i.e., having the model learn knowledge. As training has matured and models have moved into real deployment, the center of gravity of compute workloads has been shifting toward "inference" since 2025, i.e., having trained models apply that knowledge to generate content.

Source: NVIDIA GTC 2026.

This shift directly restructures the demand logic and optimization priorities of compute hardware. Training emphasizes stacking compute and communication across the board to push the limits of model performance, while inference pursues efficiency gains that translate into higher usage and better cost-performance. The core bottleneck moves to communication efficiency and storage fit, pushing compute hardware from "compute-dependent" toward "communication- and storage-dependent."

Storage support

Source: NVIDIA GTC 2026.

Architecture optimization

Architecture optimization focuses on two areas: first, ...
Micron: Future Cars Will Need 300GB of Memory
芯世相· 2026-03-23 06:34
Micron CEO Sanjay Mehrotra said that as automakers roll out vehicles with L4 autonomous-driving capability, cars will eventually need more than 300GB of RAM. According to The Register, Mehrotra made the remarks after Micron released its quarterly earnings.

At the same time, these vehicles contain a range of functions of differing criticality. For example, some infotainment functions are essential for alerting the driver, while others are not. The challenge is to manage the whole vehicle as a single system while also treating it as a "system of systems" in which some functions have higher priority than others. The best way to solve this is to increase bandwidth, reduce latency, and partition more carefully: which components belong where, on which manufacturing process, and at what cost.

David Fritz, vice president of hybrid physical and virtual systems for automotive and aerospace/defense at Siemens EDA, said: "When we talk about something like 10Gb automotive Ethernet with quality-of-service guarantees, a traditional automotive engineer will ask, 'How do I guarantee this signal actually reaches the braking system within 100 milliseconds?'" ...
Micron: DDR5 Margins Now Exceed HBM's
半导体芯闻· 2026-03-20 10:08
Core Viewpoint
- Micron Technology has reported that the profit margins for traditional DRAM, including DDR5, have recently surpassed those of High Bandwidth Memory (HBM), reflecting the impact of long-term contract structures and supply constraints [1][2]

Group 1: Profit Margin Dynamics
- The profit margins for non-HBM products are currently higher than those for HBM, indicating a shift in profitability within the memory market [1]
- HBM's supply is increasingly constrained by long-term agreements, which limit manufacturers' ability to capitalize on rapid price increases [2]
- Traditional DRAM is benefiting from strong demand and limited supply, with average DRAM prices recently rising over 60%, allowing for real-time profit margin reflection [2]

Group 2: Strategic Product Management
- Micron is managing its product portfolio cautiously in response to the growing demand for AI in data centers, rather than solely focusing on profit-driven strategies [2][3]
- A balanced approach is necessary to meet customer needs, particularly in AI server deployments, where both HBM and DDR5 DRAM are required [3]
- The company aims for comprehensive growth across its data center product offerings, including HBM, DDR5, low-power DRAM, SODIMM, and SSDs [3]

Group 3: Long-term Strategy
- Micron's long-term strategy focuses on maintaining a diversified supplier position across multiple industries, which is seen as a key driver for the company's performance and industry growth [3]
GSI Technology Stock: Another Member Of The APU Family In The Plans (NASDAQ:GSIT)
Seeking Alpha· 2026-03-18 06:17
NVIDIA Makes Its Move, and SRAM Returns to Center Stage
36Kr· 2026-03-17 11:08
For the past two years, the global semiconductor industry's spotlight has stayed fixed on HBM. This DRAM, vertically stacked via through-silicon vias, rode NVIDIA's large-scale GPU shipments from a niche product into a supply-constrained "hard currency." Yet in the spring of 2026, a seemingly dated technology term, SRAM (static random-access memory), is returning to center stage at remarkable speed.

To understand the logic behind this comeback, one must first clarify the basic division of labor in the storage hierarchy. In contemporary computing architectures, the storage system forms a pyramid. At the apex is on-chip SRAM, integrated close to CPU and GPU compute cores, offering nanosecond access latency and highly deterministic bandwidth; bandwidth is extremely high, but capacity is tiny and cost is extreme. Below it sit HBM, DRAM, and SSD in turn; each level adds capacity, but latency and bandwidth uncertainty grow with it. In the training-dominated era, large-capacity throughput mattered more than nanosecond response, so HBM dominated. But as AI applications move from the lab to the mass market, and the yardstick of user experience shifts from "how big is the model" to "how fast is the answer," the load-bearing structure of this pyramid is undergoing a profound change.

On March 17, on stage at the SAP Center in San Jose, California, Jensen Huang, in his signature black leather jacket, delivered a two-and-a-half-hour keynote that formally put this trend on record. In the closely watched GTC 2026 keynote, NVIDIA officially unveiled an inference chip integrating the Groq LPU architecture ...
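The storage pyramid described above can be expressed as data: each tier trades latency for capacity, and placement means picking the fastest tier that still fits the working set. The figures below are rough orders of magnitude for illustration only, not vendor specifications:

```python
# Rough, illustrative order-of-magnitude figures for the storage pyramid;
# real parts vary widely by product and generation.
TIERS = [
    # (name, typical latency in ns, typical capacity in bytes)
    ("on-chip SRAM", 1,       50 * 2**20),    # tens of MB, deterministic access
    ("HBM",          100,     192 * 2**30),   # ~hundreds of GB per accelerator
    ("DRAM",         100,     2 * 2**40),     # TB-class per server
    ("SSD",          100_000, 100 * 2**40),   # ~100 TB per node
]

def fastest_tier_with_capacity(working_set_bytes: int) -> str:
    """Pick the lowest-latency tier that can hold the working set."""
    for name, _latency_ns, capacity in TIERS:  # list is ordered fastest-first
        if capacity >= working_set_bytes:
            return name
    raise ValueError("working set larger than any single tier")

print(fastest_tier_with_capacity(10 * 2**20))  # a small hot working set
print(fastest_tier_with_capacity(80 * 2**30))  # large model weights
```

Under these assumed numbers, a 10 MiB hot set lands in SRAM while 80 GiB of weights falls through to HBM, which is exactly the training-versus-inference placement tension the article describes.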
A Major Preview of NVIDIA's GTC Conference: Capital Will Pour into These Three Sectors Next Week
36Kr· 2026-03-13 00:12
Core Insights
- The upcoming GTC 2026 conference hosted by NVIDIA is not just about showcasing new GPUs but aims to redefine the rules of AI infrastructure [9][3]
- The market is focused on whether NVIDIA can transition from a hardware company to a platform company that defines AI infrastructure [3][8]
- The conference will likely influence the valuation of NVIDIA and the entire AI industry chain, potentially leading to significant shifts in investment focus [8][9]

Group 1: Key Themes of GTC 2026
- The core message of GTC is to redefine the game rules of AI infrastructure rather than just showcasing new products [9]
- The focus is shifting from single-chip performance to system-level optimization, emphasizing collaboration among chips, networks, memory, and software [10][11]
- The concept of workload decomposition is crucial, indicating that AI tasks will require specialized resources rather than relying on a single powerful GPU [12][15]

Group 2: System-Level Dominance
- NVIDIA's strategy is to enhance its system-level dominance by optimizing the allocation of resources for different AI tasks [15][17]
- The importance of network and interconnect technology is highlighted, as they will determine the efficiency of AI systems in a multi-chip environment [18][19]
- The investment focus is shifting from GPU performance to the underlying network infrastructure, which is essential for data transfer speed and latency [19][20]

Group 3: Memory Technology and Tokenomics
- NVIDIA is introducing SRAM for low-latency inference while maintaining HBM for large-scale training, indicating a complementary approach rather than a replacement [24][28]
- The focus on tokenomics is crucial, as it addresses the cost-effectiveness of AI infrastructure and the return on investment for new technologies [30][32]
- The expected EPS for NVIDIA in FY2028 is projected to reach $15, suggesting a low forward P/E ratio of 12, indicating significant market undervaluation [32]

Group 4: Future Product Roadmap
- The next-generation products from NVIDIA, such as Blackwell and Rubin, are not just about stronger GPUs but represent a strategic shift toward system-level capabilities [34][36]
- The emphasis will be on how many computations, bandwidth, and memory can be accommodated in a single rack, along with managing power consumption and interconnects [36]
- NVIDIA's goal is to push for infrastructure standardization and systemization, moving from merely selling chips to offering a unique AI system [36]
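The valuation claim above rests on simple arithmetic: a forward P/E is price divided by expected EPS, so a projected FY2028 EPS of $15 at a 12x multiple implies a share price of about $180. A minimal sketch of that calculation (the implied price is derived from the article's figures, not a market quote):

```python
def implied_price(expected_eps: float, forward_pe: float) -> float:
    """Forward P/E = price / expected EPS, so price = expected EPS * forward P/E."""
    return expected_eps * forward_pe

# The article's figures: $15 projected FY2028 EPS at a 12x forward multiple.
print(implied_price(15.0, 12.0))  # 180.0
```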
GSI (NasdaqGS:GSIT) 2026 Conference Transcript
2026-03-10 19:32
GSI Technology, Inc. Conference Call Summary

Company Overview
- **Company**: GSI Technology, Inc. (NasdaqGS:GSIT)
- **Industry**: Semiconductor, specifically high-performance SRAM and Associative Processing Unit (APU) technology
- **Established**: Over 30 years in the semiconductor industry

Core Business and Technology
- GSI is known for high-performance SRAM products used in networking, defense, and demanding applications, which provide a financial foundation for developing next-generation technology, the APU [2][4]
- The APU, specifically the Gemini Two, is designed for edge environments like drones and satellites, focusing on power efficiency and low latency [2][3]
- GSI has invested over $175 million in APU R&D, funded by SRAM product revenues [4]

Financial Performance
- Trailing twelve-month revenues are just under $25 million, with a projected 25% increase in revenue for fiscal 2026 compared to fiscal 2025 [4][5]
- Cash and cash equivalents exceed $70 million, with a market cap around $320 million [6][21]
- Operating expenses are approximately $7 million per quarter, with a notable increase due to IP purchases for the Plato design [21]

Market Opportunities
- The edge AI market is projected to grow from $20 billion to $120 billion by 2030, with GSI targeting a market share of approximately $7 billion [14]
- Applications include drones, SAR satellites, smart cities, and autonomous systems [14][25]
- GSI's APU architecture is designed to minimize data movement, significantly increasing performance per watt, which is critical for edge applications [10][11]

Competitive Advantages
- GSI's APU architecture allows for processing within memory arrays, reducing latency and power consumption compared to traditional architectures [8][9]
- The company has filed 87 patents related to the APU, emphasizing its unique technology [5]
- GSI's SRAM products are considered to be 1-2 generations ahead of competitors, with high average selling prices (ASPs) and gross margins exceeding 90% [22][24]

Government Contracts and Funding
- GSI has secured $4.4 million in Small Business Innovation Research (SBIR) grants, with ongoing projects for the U.S. Army and other defense agencies [18][19]
- Future funding opportunities include a pipeline of $6-$10 million in submitted SBIRs and larger grants from programs like STRATFI and TACFI [20]

Future Developments
- The next-generation APU, named Plato, is designed for large language models (LLMs) at the edge, with a target power consumption of around 10 watts [15][16]
- The design for Plato is expected to be completed by mid-2027, with anticipated market entry in 2028 [16][18]

Summary of Key Metrics
- **Current Revenue**: Just under $25 million
- **Projected Revenue Growth**: 25% increase for fiscal 2026
- **Cash Reserves**: Over $70 million
- **Market Cap**: Approximately $320 million
- **Patents Filed**: 87 related to APU technology
- **Edge AI Market Growth**: From $20 billion to $120 billion by 2030

Conclusion
GSI Technology, Inc. is positioned to capitalize on the growing edge AI market with its innovative APU technology, strong financial foundation, and strategic government partnerships. The company's focus on low power and high performance in edge applications sets it apart from traditional semiconductor competitors.
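The "processing within memory arrays" idea behind associative processing can be illustrated with a toy content-addressable search: rather than shuttling every record to a CPU for comparison, the match is evaluated against all stored words at once. The sketch below is conceptual only and is not GSI's architecture; the data and function names are made up for illustration:

```python
# Toy associative (content-addressable) search: find the stored word nearest
# to a query by Hamming distance, computed with XOR + popcount. A real APU
# evaluates such comparisons in parallel on bit lines inside the memory
# array itself, which is what cuts data movement and power.
def nearest_by_hamming(memory: list[int], query: int) -> int:
    """Return the index of the stored word closest to `query` in Hamming distance."""
    def hamming(a: int, b: int) -> int:
        return bin(a ^ b).count("1")  # number of differing bits
    return min(range(len(memory)), key=lambda i: hamming(memory[i], query))

words = [0b1010_1010, 0b1111_0000, 0b0000_1111]
print(nearest_by_hamming(words, 0b1111_0001))  # index 1: only one bit differs
```

The design point the toy makes is that the comparison touches every word but never moves any of them, which is why performance per watt improves when the comparison hardware lives in the memory array.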
Micron stock soaring 6% today: should you buy before earnings?
Invezz· 2026-03-10 17:20
Core Viewpoint
- Micron Technology's stock has seen a significant increase, driven by bullish analyst commentary and strong demand for memory chips, particularly in the artificial intelligence sector [1]

Analyst Ratings and Price Targets
- Analysts from Citi have raised their price target for Micron from $385 to $430, citing strong demand from hyperscale data center operators and an expected 171% surge in DRAM prices in 2026 due to AI infrastructure demand [1]
- Susquehanna's analyst has increased the price target for Micron from $345 to $525, indicating a positive long-term outlook despite potential margin narrowing [1]
- Aletheia Capital's Warren Lau has set a new high price target of $650 for Micron, reflecting a 70% upside, driven by rising demand for high-bandwidth memory used in AI applications [1]

Earnings Expectations
- Micron is set to report its fiscal second-quarter results on March 18, with expectations of earnings per share at $8.52 and revenue of $18.85 billion [1]
- Analysts predict DRAM prices could rise by as much as 70% in the upcoming quarter, influenced by tight supply and strong demand related to AI infrastructure [1]

Competitive Landscape
- Despite the positive outlook, there are concerns regarding Micron's competitive position, particularly with Nvidia planning to use HBM4 memory from Samsung and SK Hynix for its upcoming AI platform [1]
- Micron is expected to supply HBM4 chips for specific systems focused on inference workloads, which may not be as high-performance as the training systems being developed [1]
SRAM Expands Its Applications in AI Inference; Stacking Solutions Can Help Boost Capacity
Orient Securities· 2026-03-07 07:59
Investment Rating
- The report maintains a "Positive" investment rating for the electronic industry, indicating a favorable outlook for the sector [5]

Core Insights
- SRAM is expanding its applications in AI inference, with stacking solutions aiding in capacity expansion. This presents significant investment opportunities in related companies [3][8]
- The report highlights the potential of SRAM architecture in AI inference, emphasizing its high access speed and low latency, which are critical for data requiring quick access [7]
- The industry is witnessing a trend toward 3D stacking solutions for SRAM, which can enhance density and overcome traditional capacity limitations [7]

Summary by Sections

Investment Recommendations and Targets
- Key companies to watch include:
  - Zhaoyi Innovation (兆易创新) and Beijing Junzheng (北京君正) for customized storage solutions [3][8]
  - Hengshuo Co., Ltd. (恒烁股份) for SRAM-based digital computing solutions [3][8]
  - Changdian Technology (长电科技) and Tongfu Microelectronics (通富微电) for advanced packaging [3][8]
  - Companies like Deep South Circuit (深南电路) and Huitian Technology (沪电股份), which are expected to benefit from NVIDIA's new chip solutions [3][8]
  - Upstream PCB companies such as Shengyi Technology (生益科技) and Nanya Technology (南亚新材) [3][8]

Industry Developments
- NVIDIA is set to unveil new AI inference chip solutions at the GTC 2026 conference, which could drive further SRAM applications [7]
- The report notes that SRAM's architecture is gaining recognition for its potential in AI inference, particularly for small parameter models and intermediate results [7]
- 3D stacking technology, such as AMD's 3D V-Cache, is noted for its ability to significantly increase SRAM cache capacity [7][15]