AI Storage
AI storage solutions giant files for IPO: valued at RMB 4.58 billion, backed by Tencent, based in Beijing
Gelonghui APP · 2026-01-29 10:08
XSKY (星辰天合) files for IPO, focused on AI storage solutions, after two consecutive years of losses
Gelonghui · 2026-01-29 10:00
Rising storage chip prices have recently been a shared focus of global markets. Samsung Electronics has raised NAND flash supply prices by more than 100% in the first quarter of 2026, with SK Hynix and others following suit. Shares of storage chip companies such as Samsung, SK Hynix, Seagate Technology, Western Digital, SanDisk, and Micron Technology have performed strongly.

Many domestic companies are also benefiting. Storage chip firms including Montage Technology (澜起科技), GigaDevice (兆易创新), Biwin Storage (佰维存储), Demingli (德明利), and Shannon Semiconductor (香农芯创) have successively disclosed 2025 earnings forecasts. Among them, Montage Technology expects 2025 net profit to rise 52% to 66% year on year, and Demingli expects Q4 2025 net profit to grow 645% to 810% quarter on quarter.

Against this backdrop, a company focused on AI storage solutions has recently made a push for a listing on the Hong Kong Stock Exchange.

As of January 20, 2026, under a concert-party agreement, Xu Xin (胥昕), Wang Haomai (王豪迈), and Xingchen Tianshu (星辰天枢), acting as one group of shareholders, jointly control the voting rights attached to approximately 25.72% of the company's total issued shares.

Since its founding, the company has completed eight funding rounds. Investors include Boyu Capital, Northern Light Venture Capital, CRVC, Qiming Venture Partners, Redpoint, Tencent, NEA, CICC Jiazi, Legend Capital, Bohua Capital, Source Code Capital, Kunlun (昆仑), Yishang (毅商), Shanghai Guoxin (上海国鑫), Yunhui (云晖), and Hundsun Technologies. In the December 2022 round, XSKY's post-money valuation was approximately RMB 4.58 billion.

Xu Xin, 35 this year, serves as executive ...
[Industry Research] The hundred-billion-yuan liquid cooling market is poised to take off as domestic supply chains rush in: who can seize the new dividend of the NVIDIA ecosystem? Watch these vendors with full-chain coverage
Yicai · 2025-12-08 11:47
Group 1
- The core viewpoint of the article emphasizes the value of timely, relevant research reports in identifying investment opportunities, particularly in emerging areas such as liquid cooling and AI-driven storage solutions [1]
- The liquid cooling market is projected to reach the hundred-billion-yuan scale, with domestic companies accelerating their entry, highlighting the competitive landscape and the potential beneficiaries within the NVIDIA ecosystem [1]
- AI is driving a new cycle in storage, with HBM (High Bandwidth Memory) expected to grow fivefold over six years, indicating a significant growth opportunity in the equipment sector for key players [1]
GF Securities: inference is driving rapid growth in AI storage; focus on core beneficiaries along the industry chain
Zhitong Finance · 2025-09-23 08:56
Core Insights
- The rapid growth of AI inference applications is sharply increasing reliance on high-performance memory and tiered storage, with HBM, DRAM, SSD, and HDD each playing critical roles in long-context and multimodal inference scenarios [1][2][3]
- As lightweight model deployment drives capacity needs, overall storage demand is expected to surge to hundreds of exabytes (EB) [1][3]

Group 1: Storage in AI Servers
- Storage in AI servers primarily comprises HBM, DRAM, and SSD; moving down this hierarchy, performance decreases while capacity grows and cost per bit falls [1]
- Frequently accessed or mutable data is kept in the higher tiers, such as CPU/GPU caches, HBM, and DRAM, while infrequently accessed or long-term data is moved to lower tiers such as SSD and HDD (see the placement sketch after these lists) [1]

Group 2: Tiered Storage for Efficient Computing
- HBM is integrated within GPUs to provide high-bandwidth temporary buffering for weights and activation values, supporting parallel computing and low-latency inference [2]
- DRAM serves as system memory, holding intermediate data, batch-processing queues, and model I/O, and facilitating efficient data transfer between CPU and GPU [2]
- Local SSDs handle real-time loading of model parameters and data to meet high-frequency read/write needs, while HDDs offer economical bulk capacity for raw data and historical checkpoints [2]

Group 3: Growth Driven by Inference Needs
- Memory benefits from long-context and multimodal inference demand: high-bandwidth, large-capacity memory reduces access latency and improves parallel efficiency [3]
- For example, the Mooncake project achieved leaps in computational efficiency through resource reconstruction, and a range of hardware upgrades now support high-performance inference for complex models [3]
- Based on the report's key assumptions, the storage capacity required for ten Google-scale inference applications by 2026 is estimated at 49 EB; a back-of-envelope illustration of this kind of arithmetic follows below [3]
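The tiering rule sketched in Groups 1 and 2 amounts to a placement policy keyed on access frequency and mutability. The Python sketch below is a minimal illustration of that idea, not code from the GF Securities report; the tier names follow the article, while the thresholds and example items are hypothetical values chosen for demonstration.

```python
from dataclasses import dataclass

@dataclass
class DataItem:
    name: str
    accesses_per_sec: float  # observed access frequency
    mutable: bool            # rewritten frequently?

def place(item: DataItem) -> str:
    """Toy placement policy mirroring the article's tiering:
    hot or mutable data stays high (HBM/DRAM), cold data sinks
    to SSD and eventually HDD. Thresholds are invented."""
    if item.mutable or item.accesses_per_sec > 1e3:
        return "HBM"   # in-GPU buffering for weights/activations
    if item.accesses_per_sec > 1.0:
        return "DRAM"  # system memory: queues, intermediates, model I/O
    if item.accesses_per_sec > 1e-3:
        return "SSD"   # real-time loading of parameters and data
    return "HDD"       # economical bulk store: raw data, old checkpoints

for d in [
    DataItem("activation tensors", 1e5, True),
    DataItem("inference batch queue", 50.0, False),
    DataItem("current model checkpoint", 0.1, False),
    DataItem("raw training corpus shard", 1e-6, False),
]:
    print(f"{d.name:26s} -> {place(d)}")
```

Running it places the activation tensors in HBM, the batch queue in DRAM, the current checkpoint on SSD, and the corpus shard on HDD, matching the hot-to-cold flow the report describes.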
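The 49 EB figure rests on assumptions the summary does not spell out. As a rough sanity check on the order of magnitude, the arithmetic below multiplies per-application query volume by context length and a per-token storage footprint; every parameter is a hypothetical placeholder of ours, not a number taken from GF Securities.

```python
# Back-of-envelope sizing for inference-driven storage demand.
# All parameters are hypothetical placeholders chosen only to land in
# the same order of magnitude as the report's 49 EB estimate.
EB = 10**18  # bytes per exabyte

apps = 10                  # "ten Google-level inference applications"
queries_per_day = 1e10     # placeholder: roughly Google-scale daily queries
tokens_per_query = 4_000   # placeholder: long-context average per query
bytes_per_token = 128_000  # placeholder: KV-cache and log footprint per token

total_bytes = apps * queries_per_day * tokens_per_query * bytes_per_token
print(f"~{total_bytes / EB:.0f} EB")  # ~51 EB, the same order as the cited 49 EB
```

With these placeholders the total comes out near 51 EB; the point is not the exact figure but that tens of exabytes fall out naturally once query volume, context length, and per-token footprint are multiplied together.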