Compute-in-Memory (存算一体)
Zhicun Technology (知存科技) Launches 2026 Campus Recruitment: This Kind of Semiconductor Talent Will Be in Hot Demand
半导体行业观察· 2025-09-17 01:30
Core Viewpoint - The article discusses the challenges that generative AI models pose to traditional chip architectures and the rise of in-memory computing technology, which significantly enhances AI computing efficiency and is seen as a disruptive technology in the post-Moore era [1][3].

Group 1: In-Memory Computing Technology
- In-memory computing has gained traction because it addresses the "storage wall" and "power wall" issues inherent in the von Neumann architecture, offering a potential severalfold improvement in AI computing efficiency [1][3].
- The in-memory computing chips developed by Zhicun Technology have already served over 30 clients in commercial applications, demonstrating the technology's practical viability [5].

Group 2: Talent Acquisition and Development
- Zhicun Technology has launched its "Genius Doctor Program" for 2026, aiming to attract top talent in semiconductor devices, circuit design, and AI algorithms, a reflection of the industry's talent competition amid rapid technological advancement [1][7].
- The program offers a distinctive growth system with mentorship and rotation across core R&D positions, giving participants end-to-end experience of the technology development process [7][10].

Group 3: Industry Trends and Future Outlook
- The semiconductor industry is expected to face a shortage of over 300,000 professionals by 2025, underscoring the urgency for companies to develop and attract skilled talent [1].
- In-memory computing is at a critical juncture as it transitions from "production validation" to "scale application," a pivotal moment for the industry [12].
[Gold-Medal Minutes Library] AI Chips Drive Strong Growth in Advanced Logic Semiconductor Equipment Orders, with the Two Industry Leaders' Orders Up 40% YoY in H1; This Technology Is Seen as the Core of Next-Generation Packaging
财联社· 2025-09-12 15:11
Core Insights
- Orders for advanced logic semiconductor equipment are growing strongly on the back of AI chips, with the two major industry leaders posting a 40% year-on-year increase in orders in the first half of the year [1]
- NVIDIA's launch of the Rubin CPX is expected to significantly lower token-generation costs, potentially stimulating overall demand for AI applications as the product grows alongside the broader increase in AI workloads [1]
- The rise of AI terminals may upend the traditional separation of "computation" and "storage," pushing "compute-storage integration" and "near-storage computing" to the forefront and driving demand for the corresponding equipment and materials [1]
Half the Industry Is Showing Up! Final Agenda Released for the Hottest AI Chip Summit, with a Concurrent Supernode Workshop Offering a Deep Dive into Huawei's 384
傅里叶的猫· 2025-09-12 10:42
Core Viewpoint - The 2025 Global AI Chip Summit will be held on September 17 in Shanghai under the theme "AI Infrastructure, Smart Chip New World," addressing the new infrastructure wave of the AI era and the breakthroughs of China's chip industry in the age of large models [2][3].

Group 1: Event Overview
- The summit will feature over 180 industry experts sharing insights on cutting-edge research, innovations, and industry trends, making it a significant platform for understanding AI chip developments [2].
- The event will comprise a main forum, specialized forums, technical seminars, and an exhibition area, offering attendees a comprehensive agenda [2][3][5].

Group 2: Main Forum Highlights
- The opening report will be delivered by Professor Wang Zhongfeng on "Shaping the Intelligent Future: Architectural Innovation and Paradigm Shift of AI Chips," discussing ways to overcome bottlenecks in AI chip development [7].
- Key speakers include leaders from major companies such as Huawei and Yuntian Lifei, discussing trends in AI development and the strategic positioning of AI chips [7][8][9].

Group 3: Specialized Forums
- The Large Model AI Chip Forum will address the competitive landscape of large models and the infrastructure AI requires, emphasizing cost-effectiveness as a critical factor [18][19].
- The AI Chip Architecture Innovation Forum will explore new chip architectures, including wafer-scale chips and RISC-V-based solutions, highlighting the need for innovative approaches in the face of technological constraints [22][24].

Group 4: Technical Workshops
- The workshops will focus on topics such as the memory-wall problem of traditional architectures and the importance of storage-computing integration in AI chip design [32][33].
- Experts will discuss advances in DRAM near-memory computing architectures and the challenges of integrating heterogeneous systems for AI applications [34][35].

Group 5: Exhibition Area
- The exhibition will feature over 10 exhibitors, including leading companies such as Achronix and Sunrise, showcasing their latest technologies and solutions in the AI chip sector [3].
Yihualu: The Company Will Continue to Strengthen Competitiveness and Improve Performance in Smart Transportation and Other Businesses
Zheng Quan Ri Bao Wang· 2025-09-11 13:40
Group 1 - The company, Yihualu (300212), is focusing on enhancing its competitiveness and achieving performance improvement in areas such as smart transportation, data elements, and integrated computing and storage [1]
Development of the World's First RISC-V Compute-in-Memory Standard Gets Underway
36Kr· 2025-09-11 10:28
Core Insights - The Chinese chip industry faces three major challenges: limits on advanced process technology, reliance on closed software ecosystems, and bandwidth bottlenecks rooted in traditional architecture [1][2][3][4]

Group 1: Challenges in the Domestic Chip Industry
- The lack of advanced manufacturing processes has created a bottleneck in computing density: domestic 3nm/5nm technologies remain in the R&D phase and cannot meet the demands of large AI models [2]
- The domestic AI chip industry is heavily dependent on Western closed-source ecosystems, particularly CUDA, which dominates AI model training and inference software, leaving high-performance chips at risk of lacking compatible software [3]
- The traditional von Neumann architecture separates computing and storage units, forcing data to shuttle across buses; this "memory wall" bottleneck sharply reduces inference efficiency as model parameters scale to hundreds of billions [4]

Group 2: 3D-CIM Technology as a Solution
- 3D-CIM (3D Compute-in-Memory) technology, introduced by Micronano Core, integrates computing capability within storage to address the exponential growth in computing demand from AI models [5]
- The technology combines SRAM compute-in-memory with DRAM 3D stacking so that computation happens inside the memory, fundamentally eliminating data-transfer overhead; it is seen as a key path for sustaining compute growth in the post-Moore's Law era [5]
- The core breakthrough of 3D-CIM lies in its SRAM compute-in-memory design, which enables in-situ tensor computation, significantly raising computing density and reaching performance comparable to traditional NPUs/GPUs at lower manufacturing cost [5][6]

Group 3: Ecosystem and Application Prospects
- The open, flexible RISC-V architecture complements 3D-CIM technology, meeting the high-parallelism and low-power needs of AI models while alleviating external process restrictions [7]
- Micronano Core is collaborating with upstream and downstream enterprises to promote the ecosystem rollout of 3D-CIM technology and the RISC-V architecture [8]
- The application prospects for 3D-CIM fall into short-, mid-, and long-term phases: initial deployment in edge AI devices, then cloud-based AI model applications, and eventually embodied-intelligence applications [8]
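Group 2's claim that in-situ computation "fundamentally eliminates data-transfer overhead" can be illustrated with a toy model: if the weight matrix stays resident in the compute-in-memory array, only activations and results cross the memory boundary, whereas a conventional design must also fetch every weight across the bus. This is an illustrative sketch under assumed word-level traffic accounting, not Micronano Core's actual design:

```python
# Toy model of compute-in-memory matrix-vector multiply (y = W @ x):
# with weights resident in the array, only the input and output
# vectors move; a von Neumann design also fetches every weight.

def cim_mvm(weight_array, x):
    """In-situ MVM: each row accumulates products where its weights live."""
    y = [sum(w * xi for w, xi in zip(row, x)) for row in weight_array]
    words_moved = len(x) + len(y)          # activations in, results out
    return y, words_moved

def von_neumann_mvm(weight_array, x):
    """Same math, but every weight crosses the memory bus once."""
    y = [sum(w * xi for w, xi in zip(row, x)) for row in weight_array]
    words_moved = len(x) + len(y) + len(weight_array) * len(x)
    return y, words_moved

W = [[1, 2], [3, 4], [5, 6]]   # 3x2 weight array stored in memory
x = [10, 1]

y_cim, traffic_cim = cim_mvm(W, x)
y_vn, traffic_vn = von_neumann_mvm(W, x)
assert y_cim == y_vn == [12, 34, 56]       # identical arithmetic
print(traffic_cim, traffic_vn)             # 5 vs 11 words moved
```

The arithmetic is identical in both cases; only the traffic term changes, which is why the benefit grows with the size of the resident weight array.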
Tech Investment Keywords | The Power of "Tech+": Hardware Edition
Zhong Guo Zheng Quan Bao· 2025-09-03 23:42
Group 1
- The technology sector in A-shares continues to perform well, warranting systematic investment strategies in the technology field [1]
- Underlying hardware systems are crucial for advances in AI and autonomous driving, with computing power likened to the "oil" of the digital age [3]
- China still relies on imports for high-end chips, advanced manufacturing equipment, and key materials, but technological breakthroughs and policy support are accelerating the restructuring of the semiconductor industry [6]

Group 2
- The emerging concept of "storage-compute integration" allows data to be processed at the storage-unit level, addressing the power-consumption and latency challenges of traditional computing architectures [9]
- Advanced packaging technology, particularly optical-electrical co-packaging, is becoming essential for improving data-transmission efficiency and reducing losses, with significant market growth potential [13]
- A focus on semiconductor leaders and technology innovation is critical for capturing investment opportunities in the tech sector [14]
Latest News: Alibaba's Three-Step Strategy to Replace NVIDIA, Raising Its Cambricon GPU Order to 150,000 Units
是说芯语· 2025-08-30 07:46
Core Viewpoint - Alibaba is developing a new generation of AI chips focused on multifunctional inference scenarios, aiming to fill the market gap left by NVIDIA's H20 exit [1][3].

Chip Development and Specifications
- The new chip uses domestic 14nm or more advanced processes, supported by local foundries like Yangtze Memory Technologies, integrating high-density computing units and large-capacity memory with expected LPDDR5X bandwidth exceeding 1TB/s and targeting single-card computing power of 300-400 TOPS (INT8), comparable to the H20's roughly 300 TOPS [1][3].
- Compared with NVIDIA's H20, Alibaba's chip offers full-scenario compatibility, supporting FP8/FP16 mixed-precision computing and seamless integration with the CUDA ecosystem, cutting migration costs by over 70% [3].
- Alibaba has urgently raised its order for the Cambricon Siyuan 370 chip to 150,000 units; the chip is built on a 7nm process with Chiplet technology, integrates 39 billion transistors, and achieves a measured 300 TOPS (INT8) with a 40% improvement in energy efficiency [5].

Market Strategy and Production Capacity
- The Cambricon Siyuan 370 is expected to cover 60% of Alibaba Cloud's inference demand by Q2 2025 and supports multi-card interconnection via PCIe 5.0, facilitating user growth for Tongyi Qianwen [5].
- Alibaba is collaborating with Yangtze Memory Technologies on AI chips focused on overcoming storage bottlenecks, achieving a storage density of 20GB/mm² and read/write speeds of 7000MB/s, a 40% improvement over the previous generation, and expanding local storage capacity to 128GB [5][6].
- To ensure mass production, Alibaba employs a dual-foundry backup strategy, with SMIC's 14nm line handling basic chip production at a stable yield above 95% and a monthly capacity of 50,000 units [6].
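Taking the reported figures at face value, the balance between the chip's compute and its memory bandwidth can be sanity-checked with a roofline-style calculation: 300 TOPS against roughly 1 TB/s implies about 300 INT8 operations must be performed per byte fetched for the chip to stay compute-bound. A quick sketch (the numbers are the article's; the interpretation and the comparison workload are assumptions):

```python
# Arithmetic intensity implied by the reported specs:
# ~300 TOPS (INT8) against >1 TB/s of LPDDR5X bandwidth.
tops = 300e12          # INT8 operations per second
bandwidth = 1e12       # bytes per second (~1 TB/s)

ops_per_byte = tops / bandwidth
print(ops_per_byte)    # 300.0 ops needed per byte fetched to stay busy

# For comparison, an INT8 multiply of two N x N matrices performs
# 2*N^3 ops on 3*N^2 bytes of operands and results:
N = 1024
intensity = (2 * N**3) / (3 * N**2)
print(round(intensity, 1))   # 682.7 ops/byte -> compute-bound at this size
```

Large dense matrix multiplies exceed the required intensity, while memory-bound workloads such as single-token decoding would instead be limited by the 1 TB/s figure.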
Future Roadmap
- Alibaba's three-step strategy:
  - Short-term (2025-2026): focus on 7nm/14nm inference chips to quickly capture market share through ecosystem compatibility [10].
  - Mid-term (2027-2028): launch 4nm training chips targeting 1 EFLOPS of computing power, competing with NVIDIA's H100 [10].
  - Long-term (post-2030): explore disruptive technologies such as photonic computing and integrated storage-computing, with the first commercial photonic AI chip already released and claimed to deliver a 1000x speedup and 90% lower power consumption than GPUs [10].
- Alibaba's path to domestic computing power is a dual battle of technological breakthroughs and ecosystem reconstruction, aiming to break NVIDIA's monopoly through a "compatibility-replacement-surpassing" strategy [10][11].
HBM: The Challenge Doubles
36Kr· 2025-08-19 10:59
Core Insights
- High Bandwidth Memory (HBM) is emerging as a critical component for AI model training and inference; its 3D stacked structure delivers far higher data-transfer rates than traditional memory such as GDDR [1][2]

HBM Market Dynamics
- SK Hynix has established a dominant position in the HBM market, with its market share surpassing Samsung's, which declined from 41% to 17% over the same period [3][4]
- The launch of HBM3E has been pivotal for SK Hynix, attracting major tech companies including AMD, NVIDIA, Microsoft, and Amazon and driving a significant increase in demand [3]
- SK Hynix's DRAM and NAND sales reached approximately 21.8 trillion KRW, surpassing Samsung's 21.2 trillion KRW for the first time [3]

Competitive Landscape
- Samsung is attempting to regain its footing by reviving Z-NAND technology, targeting performance up to 15 times that of traditional NAND and power consumption reduced by up to 80% [6][7]
- NEO Semiconductor has introduced the X-HBM architecture, offering 16 times the bandwidth and 10 times the density of existing memory technologies, aimed at the AI chip market [10]
- Saimemory, a collaboration among SoftBank, Intel, and the University of Tokyo, is developing a new stacked DRAM architecture intended as a direct HBM alternative with significant performance improvements [11]

Innovations and Alternatives
- SanDisk and SK Hynix are collaborating on High Bandwidth Flash (HBF), a new storage architecture for AI applications that combines 3D NAND flash with HBM characteristics [12][13]
- The industry is exploring architectural innovations such as Processing-In-Memory (PIM) to reduce reliance on HBM and improve efficiency [15][16]

Future Trends
- The AI memory market is expected to evolve into a heterogeneous, multi-tiered structure in which HBM focuses on training scenarios while PIM memory serves high-efficiency inference applications [18]
- Demand for HBM, particularly HBM3 and above, is projected to remain strong, with significant price increases already noted in the market [17]
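The bandwidth edge of HBM's 3D stacking comes mainly from interface width: each stack exposes a 1024-bit bus, versus 32 bits for a single GDDR6 device. A rough per-stack calculation (the 9.6 Gb/s pin rate is the HBM3E-class figure; treat the numbers as ballpark):

```python
# Per-stack bandwidth = bus width (bits) * pin rate (Gb/s) / 8.
bus_width_bits = 1024        # HBM interface width per stack
pin_rate_gbps = 9.6          # HBM3E-class pin speed (assumed)

hbm_gbs = bus_width_bits * pin_rate_gbps / 8
print(hbm_gbs)               # 1228.8 GB/s, i.e. ~1.2 TB/s per stack

# A single GDDR6 device, by contrast, has a 32-bit interface:
gddr6_gbs = 32 * 16 / 8      # 16 Gb/s pins -> 64.0 GB/s per device
print(gddr6_gbs)
```

The ~20x per-device gap is why HBM dominates training accelerators despite its higher cost and packaging complexity.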
Compute-in-Memory, Explained in One Article
Hu Xiu· 2025-08-15 06:52
Core Concept - The article explains "Compute In Memory" (CIM), which integrates storage and computation to improve data-processing efficiency and reduce energy consumption [1][20].

Group 1: Background and Need for CIM
- The traditional von Neumann architecture separates storage and computation, creating inefficiencies because data-transfer speeds cannot keep up with processing speeds [2][10]
- The data explosion of the internet era and the rise of AI have exposed the limits of this architecture, giving rise to the "memory wall" and "power wall" challenges [11][12]
- The "memory wall" refers to inadequate data-transfer speeds between storage and processors, while the "power wall" refers to the high energy cost of moving data [13][16]

Group 2: Development of CIM
- Research on CIM dates back to 1969, but significant advances came only in the 21st century with improvements in chip and semiconductor technology [23][26]
- Notable developments include using memristors for logic functions and building CIM architectures for deep learning, which can achieve large reductions in power consumption and increases in speed [27][28]
- The recent surge in AI demand has accelerated CIM development, with numerous startups entering the field alongside established chip makers [30][31]

Group 3: Technical Classification of CIM
- CIM is categorized into three types by the proximity of storage and computation: Processing Near Memory (PNM), Processing In Memory (PIM), and Computing In Memory (CIM) [34][35]
- PNM places storage and computation units close together to improve data-transfer efficiency, while PIM integrates computation capability directly into memory chips [36][40]
- CIM in the strict sense is the true fusion of storage and computation, eliminating the distinction between the two and processing data directly within storage units [43][46]

Group 4: Applications of CIM
- CIM is particularly suited to AI workloads such as natural language processing and intelligent decision-making, where efficiency and energy consumption are critical [61][62]
- It also has potential in AIoT products and high-performance cloud computing, where traditional architectures struggle to meet diverse computational needs [63][66]

Group 5: Market Potential and Challenges
- The global CIM technology market is projected to reach $30.63 billion by 2029, at a compound annual growth rate (CAGR) of 154.7% [79]
- CIM still faces technical challenges in semiconductor processes and in building a supporting ecosystem of design and testing tools [70][72]
- Market challenges include competition with traditional architectures and the need for cost-effective solutions that meet user demands [74][76]
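The "power wall" described in Group 1 rests on a well-known asymmetry: an off-chip DRAM access costs orders of magnitude more energy than the arithmetic performed on the fetched word (the frequently cited 45 nm figures from Horowitz's ISSCC 2014 keynote put a 32-bit DRAM access near 640 pJ versus a few pJ per multiply). A toy energy model; the accounting of one weight fetch per multiply-accumulate is an assumption of the sketch:

```python
# Toy energy model of the "power wall". Per-word costs are rough
# 45 nm figures commonly cited from Horowitz (ISSCC 2014).
E_DRAM_READ = 640.0   # pJ per 32-bit off-chip DRAM access
E_MULT32 = 3.7        # pJ per 32-bit floating-point multiply

def mvm_energy(rows, cols, weights_on_chip):
    """Energy for one rows x cols matrix-vector multiply (pJ)."""
    compute = rows * cols * E_MULT32
    # One weight fetched per MAC unless weights stay resident;
    # the input vector (cols) and result vector (rows) always move.
    fetches = 0 if weights_on_chip else rows * cols
    movement = (fetches + cols + rows) * E_DRAM_READ
    return compute + movement

vn = mvm_energy(1024, 1024, weights_on_chip=False)   # von Neumann style
cim = mvm_energy(1024, 1024, weights_on_chip=True)   # weights resident
print(round(vn / cim, 1))   # ~130x: weight movement dominates
```

Keeping weights resident in or next to the array removes the dominant energy term, which is the essence of the PNM/PIM/CIM spectrum described above.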