NPU

Chip concept stocks strengthen; multiple STAR Market chip ETFs rise more than 5%
Sou Hu Cai Jing· 2025-08-27 06:20
In brokers' view, the global semiconductor market continues to expand: the World Semiconductor Trade Statistics (WSTS) organization forecasts a 2025 market size of US$700.874 billion, up 11.2%, driven mainly by logic chips and memory chips. On-device AI applications are penetrating rapidly, NPUs' low power consumption makes them an ideal choice for edge devices, and successive generations of wireless connectivity technology are driving IoT growth. A wave of industry M&A and consolidation is under way, covering materials, equipment, EDA, packaging and other fields, with companies using horizontal acquisitions to expand scale and vertical acquisitions to round out their supply chains. (Mei Ri Jing Ji Xin Wen)

STAR Market chip concept stocks strengthened, with Montage Technology up more than 9% and Cambricon-U and Bestechnic up more than 8%. Tracking the move, multiple STAR Market chip ETFs rose more than 5%.

| Code | Name | Price | Change | Change % ▼ |
| --- | --- | --- | --- | --- |
| 589100 | 科创芯片ETF国泰 | 1.393 | 0.070 | 5.29% |
| 588890 | 科创芯片ETF南方 | 2.430 | 0.120 | 5.19% |
| 588290 | 科创芯片ETF基金 | 2.115 | 0.102 | 5.07% |
| 588810 | 科创芯片ETF富国 | 1.524 | 0.075 | 5.18% |
| 588990 | ... | | | |
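As a quick sanity check on the WSTS forecast cited above (US$700.874 billion for 2025, up 11.2%), the implied 2024 base can be backed out; this is a minimal sketch using only the two figures quoted in the article:

```python
# WSTS figures as quoted in the article
forecast_2025 = 700.874   # billion USD, 2025 forecast
growth_rate = 0.112       # 11.2% year-on-year growth

# Implied 2024 market size consistent with the quoted forecast and growth rate
implied_2024 = forecast_2025 / (1 + growth_rate)
print(f"Implied 2024 market size: {implied_2024:.1f} billion USD")  # ~630.3
```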
Intellifusion (云天励飞): advancing R&D on a next-generation high-performance NPU better suited to AI inference applications
Mei Ri Jing Ji Xin Wen· 2025-08-26 08:01
Mei Ri Jing Ji Xin Wen AI flash, August 26: Intellifusion (云天励飞) stated on an investor-interaction platform that the company has long focused on the R&D, design and commercialization of AI inference chips, and was among the first companies worldwide to propose, and commercially deploy, the concept of an NPU-driven AI inference chip. The company has completed development of its fourth-generation NPU and is now advancing R&D on a next-generation high-performance NPU that will be better suited to AI inference applications. ...
Minsheng Securities - VeriSilicon - 688521 - 2025 interim report review: order backlog hits successive record highs, domestic ASIC leader accelerates its takeoff - 250825
Xin Lang Cai Jing· 2025-08-25 21:09
Revenue grew rapidly and losses narrowed sharply quarter on quarter. In Q2 2025 alone, revenue was RMB 584 million, up 49.90% QoQ, with the loss narrowing substantially; growth was driven mainly by IP licensing-fee revenue and mass-production business revenue. By business line... Cloud side empowers data centers/servers; device side opens up incremental markets. Cloud side: the company's VPU, NPU and GPGPU IP are widely used in the data-center/server market, and it has launched scalable high-performance GPGPU-AI compute IP in deep cooperation with several leading AI-compute customers... Counter-cyclical expansion with ample talent reserves. Based on its judgment of the industry cycle, the company believes a sizable talent pool helps it seize the initiative and strengthen its competitiveness; its strategy of recruiting top professionals counter-cyclically should position it to respond quickly when the industry recovers... Investment view: we forecast 2025/26/27 revenue of RMB 3.324/4.315/5.535 billion, implying P/S of 25.0/19.2/15.0 at the current price. The company's technical accumulation and customer base in the ASIC business are solid, and with the AI and custom-chip wave continuing... Risk warnings: risk of insufficient product R&D iteration; risk of downstream demand fluctuations; risk of intensifying market competition. Event: VeriSilicon released its 2025 interim report on the evening of August 22; H1 2025 revenue was RMB 974 million, up 4.49% YoY; ...
VeriSilicon (688521): order backlog hits successive record highs, domestic ASIC leader accelerates its takeoff
Minsheng Securities· 2025-08-25 11:37
VeriSilicon (688521.SH) 2025 interim report review: order backlog hits successive record highs, domestic ASIC leader accelerates its takeoff. August 25, 2025.

➢ Event: VeriSilicon released its 2025 interim report on the evening of August 22. H1 2025 revenue was RMB 974 million, up 4.49% year on year; net profit attributable to shareholders was a loss of RMB 320 million.

➢ Revenue grew rapidly and losses narrowed sharply quarter on quarter. In Q2 2025 alone, revenue was RMB 584 million, up 49.90% QoQ, with the loss narrowing substantially; growth was driven mainly by IP licensing-fee revenue and mass-production business revenue. By business line, Q2 IP licensing-fee revenue was RMB 187 million, up 99.63% QoQ and 16.97% YoY; Q2 mass-production revenue was RMB 261 million, up 79.01% QoQ and 11.65% YoY. The company signed RMB 1.182 billion in new orders in Q2 2025, up nearly 150% quarter on quarter. Its order backlog has remained at a high level for seven consecutive quarters and set another company record: at the end of Q2 2025 the backlog stood at RMB 3.025 billion, up RMB 569 million from the end of Q1 2025, a QoQ increase of 23.17%. The company's technical capabilities lead the industry, ...
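The backlog figures in the report are internally consistent; here is a minimal sketch checking the quarter-on-quarter growth rate from the two reported numbers (figures in RMB 100 million, 亿元, as reported):

```python
# Order backlog figures from VeriSilicon's Q2 2025 interim report (RMB 100 million)
q2_backlog = 30.25        # backlog at end of Q2 2025
increase_from_q1 = 5.69   # increase versus end of Q1 2025

q1_backlog = q2_backlog - increase_from_q1
qoq_growth = increase_from_q1 / q1_backlog

print(f"Q1 backlog:  {q1_backlog:.2f}")   # 24.56
print(f"QoQ growth:  {qoq_growth:.2%}")   # 23.17%, matching the report
```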
Chip concept stocks diverge; multiple STAR Market chip and semiconductor-equipment ETFs fall more than 2%
Mei Ri Jing Ji Xin Wen· 2025-08-25 05:49
In brokers' view, the global semiconductor market continues to expand: the World Semiconductor Trade Statistics (WSTS) organization forecasts a 2025 market size of US$700.874 billion, up 11.2%, driven mainly by logic chips and memory chips. On-device AI applications are penetrating rapidly, NPUs' low power consumption makes them an ideal choice for edge devices, and successive generations of wireless connectivity technology are driving IoT growth. A wave of industry M&A and consolidation is under way, covering materials, equipment, EDA, packaging and other fields, with companies using horizontal acquisitions to expand scale and vertical acquisitions to round out their supply chains. (Source: Mei Ri Jing Ji Xin Wen)

Chip concept stocks diverged after surging early in the session: Hygon Information rose more than 10% and Cambricon-U more than 4%, while AMEC and SmartSens-W fell more than 3%, and SMIC and National Silicon Industry Group fell more than 2%. Tracking the move, multiple STAR Market chip and semiconductor-equipment ETFs fell more than 2%.

| Code | Name | Price | Change | Change % ▲ |
| --- | --- | --- | --- | --- |
| 588780 | 主 科创芯片设计ETF | 1.435 | -0.072 | -4.78% |
| 588920 | 主 科创芯片ETF指数 | 1.315 | -0.060 | -4.36% |
| 588890 | 主 科创芯片ETF南方 | 2.285 | -0.087 | -3.67% | ...
Dongwu Securities gives Rockchip a Buy rating; 2025 interim results review: H1 2025 revenue and profit grew strongly, with the AIoT product matrix and ecosystem in resonance
Mei Ri Jing Ji Xin Wen· 2025-08-18 15:21
Group 1
- The core viewpoint of the report is that Dongwu Securities has given a "buy" rating for Rockchip (603893.SH) based on strong revenue growth and enhanced profitability in the first half of 2025 [2]
- The company's flagship products are experiencing steady growth, and the new NPU is actively expanding into edge AI scenarios [2]
- Rockchip is accelerating its product innovation and expanding its full-scenario AIoT chip layout [2]
Processor chips: a free-for-all
半导体芯闻· 2025-08-18 10:48
Core Viewpoint
- The article discusses the evolving landscape of artificial intelligence (AI) processing solutions, highlighting the need for companies to balance current performance with future adaptability in AI models and methods. Various processing units such as GPUs, ASICs, NPUs, and FPGAs are being utilized across different applications, from high-end smartphones to low-power edge devices [1][12].

Summary by Sections

AI Processing Units
- Companies are exploring a range of processing units for AI tasks, including GPUs, ASICs, NPUs, and DSPs, each with unique advantages and trade-offs in terms of power consumption, performance, flexibility, and cost [1][2].
- GPUs are favored in data centers for their scalability and flexibility, but their high power consumption limits their use in mobile devices [2].
- NPUs are optimized for AI tasks, offering low power and low latency, making them suitable for mobile and edge devices [2].
- ASICs provide the highest efficiency and performance for specific tasks but lack flexibility and have high development costs, making them ideal for large-scale, targeted deployments [3].

Custom Silicon
- The trend towards custom silicon is growing, with major tech companies like NVIDIA, Microsoft, and Google investing in tailored chips to optimize performance for their specific software needs [4].
- Custom AI accelerators can provide significant advantages, but they require a robust ecosystem to support software development and deployment [4].

Flexibility and Adaptability
- The rapid evolution of AI algorithms necessitates flexible hardware solutions that can adapt to new models and use cases, as traditional ASICs may struggle to keep pace with these changes [4][5].
- The need for adaptable architectures is emphasized, as AI capabilities may grow exponentially, putting pressure on decision-makers to choose the right processing solutions [4][5].

Role of DSPs and FPGAs
- DSPs are increasingly being replaced or augmented by AI-specific processors, enhancing capabilities in areas like audio processing and motion detection [7].
- FPGAs are seen as a flexible alternative, allowing for algorithm updates without the need for complete hardware redesigns, thus combining the benefits of ASICs and general-purpose processors [8].

Edge Device Applications
- Low-power edge devices are utilizing MCUs equipped with DSPs and NPUs to meet specific processing needs, differentiating them from high-performance mobile processors [10].
- The integration of AI capabilities into edge devices is becoming more prevalent, with companies developing specialized MCUs for machine learning and context-aware applications [10][11].

Conclusion
- The edge computing landscape is characterized by a complex mix of specialized and general-purpose processors, with a trend towards customization and fine-tuning for specific workloads [12].
CEVA (CEVA) - 2025 Q2 - Earnings Call Transcript
2025-08-11 13:30
Financial Data and Key Metrics Changes
- Revenue for Q2 2025 was $25.7 million, down 10% from $28.4 million in Q2 2024 [15]
- Licensing and related revenue totaled $15 million, representing 59% of total revenue, reflecting a 13% year-over-year decline [15][16]
- Royalty revenue for the quarter was $10.7 million, accounting for 41% of total revenues, with a 16% sequential increase but a 5% year-over-year decrease [17][18]
- GAAP net loss for Q2 was $3.7 million, with a diluted loss per share of $0.15, compared to a net loss of $0.3 million and diluted loss per share of $0.01 in the same period last year [19]

Business Line Data and Key Metrics Changes
- The company secured 13 license agreements, including five first-time customers and four OEM customers, indicating strong licensing execution [4]
- Royalty revenue saw a sequential growth of 16%, driven by increased shipments from consumer and smartphone customers [11]
- Consumer IoT shipments were up 21% sequentially and 60% year-over-year, reflecting strong demand [12]

Market Data and Key Metrics Changes
- Shipments by CEVA's licensees during Q2 2025 were 488 million units, up 16% sequentially and 6% year-over-year [20]
- Cellular IoT shipments reached an all-time record high at 66 million units, up 66% year-over-year [21]
- WiFi shipments were 62 million units, up 80% from 35 million units a year ago, with WiFi 6 shipments up 113% year-over-year [21]

Company Strategy and Development Direction
- The company aims to expand its NPU business into infrastructure and data center markets, indicating a strategic shift towards AI integration [9][13]
- CEVA is focused on deepening relationships through multiple IP agreements, enhancing product capabilities and increasing royalty per device [6][10]
- The company views the milestone of over 20 billion devices shipped as a launchpad for future growth in the Smart Edge Era [14]

Management's Comments on Operating Environment and Future Outlook
- Management expressed optimism about the licensing pipeline and potential deal flow, particularly around Edge AI prospects [23]
- The company anticipates stronger royalty revenue in the second half of the year due to seasonality and new product deployments [23][24]
- Management reiterated confidence in achieving a double-digit percentage increase in non-GAAP net income and fully diluted non-GAAP EPS relative to 2024 [25]

Other Important Information
- Total GAAP operating expenses for Q2 were $26.6 million, above guidance due to higher employee-related benefits [18]
- The company repurchased 300,000 shares for approximately $6.2 million during the quarter [22]
- CEVA's cash and cash equivalents were approximately $157 million as of June [22]

Q&A Session Summary

Question: Will increased licensing in NPUs lead to higher royalty revenues?
- Management confirmed that higher complexity in technology will lead to better economics and a meaningful increase in royalty per unit as these devices are deployed [28][29]

Question: What is the expected timing for royalties from more complex designs?
- Management indicated that the time from licensing to royalty reporting is typically 18 to 24 months, but may be shorter for consumer devices due to rapid market needs [30][31]

Question: What is the outlook for flagship smartphone customers in 2026?
- Management did not provide specific guidance for 2026 but expressed confidence in technology penetration and expected strong performance in the second half of the year [39][40]

Question: What is the scalability of CEVA's AI offerings?
- Management highlighted the scalability of their NPU solutions and the comprehensive software stack provided to customers, which supports various applications including edge and cloud inference [42][44]

Question: What contributed to the decline in Bluetooth shipments this quarter?
- Management noted that the decline was not due to specific issues but expected good sequential growth in the second half of the year as new Bluetooth technologies are adopted [57][58]
Chip giants battle for NPU supremacy
半导体行业观察· 2025-08-10 01:52
Core Viewpoint
- The integration of Neural Processing Units (NPU) in laptops enhances the efficiency of AI tasks, improving performance and battery life while reducing the load on CPUs and GPUs [1][2][5].

Group 1: NPU Functionality and Benefits
- NPU is designed to handle AI tasks such as background blurring and real-time subtitles, allowing CPUs to focus on other processes, which results in smoother multitasking [2][3].
- The use of NPU leads to significant improvements in application responsiveness and overall system performance, especially when running AI-related applications [2][5].
- With NPU, AI functionalities can operate directly on the device without relying on cloud services, ensuring faster processing and enhanced privacy [4][5].

Group 2: Market Trends and Developments
- Major chip manufacturers like Intel and AMD are integrating NPU into their processors, with examples including Intel's Core Ultra series and AMD's Ryzen AI series [7][8].
- Dell has introduced the Pro Max Plus laptop featuring Qualcomm's AI 100 PC inference card, claiming it to be the first workstation with an enterprise-level independent NPU [8].
- Emerging companies like Encharge AI are also developing independent NPU solutions, indicating a growing trend towards specialized AI processing capabilities in PCs [8][9].

Group 3: Future Prospects
- AMD is exploring the potential of dedicated NPU chips as alternatives to GPUs for AI workloads, with discussions ongoing with OEMs about their use cases [9][10].
- The integration of AI engines from acquisitions, such as Xilinx, is expected to enhance the performance of future NPU products from AMD [10][11].
- The industry is focused on ensuring that independent NPU solutions consume less energy than traditional GPUs, which is crucial for widespread adoption [11].
Why does the Thor chip keep a GPU while also having an NPU?
理想TOP2· 2025-08-02 14:46
Core Viewpoint
- Pure GPU can achieve basic functions for low-level autonomous driving but has significant shortcomings in processing speed and energy consumption, making it unsuitable for higher-level autonomous driving needs [4][40].

Group 1: GPU Limitations
- Pure GPU can handle certain parallel computing tasks required for autonomous driving, such as sensor data fusion and image recognition, but is primarily designed for graphics rendering, leading to limitations [4][6].
- Early autonomous driving tests using pure GPU solutions, like the NVIDIA GTX 1080, showed a detection delay of approximately 80 milliseconds, which poses safety risks at high speeds [5].
- The data processing load for L4 autonomous vehicles, which generate about 5-10GB of data per second, requires multiple GPUs to work together, increasing power consumption significantly [6][9].

Group 2: NPU and TPU Advantages
- NPU is specifically designed for neural network computations, featuring a large number of MAC (Multiply-Accumulate) units, which optimize matrix multiplication and accumulation operations [10][19].
- TPU, developed by Google, utilizes a systolic array architecture that enhances data reuse and reduces external memory access, achieving higher efficiency in large matrix operations compared to GPU [12][19].
- NPU and TPU architectures are more efficient for neural network inference, with NPU showing a significant reduction in energy consumption compared to GPU [36][40].

Group 3: Cost and Efficiency Comparison
- In terms of energy efficiency, NPU's performance is 2.5 to 5 times better than that of GPU, with lower power consumption for equivalent AI computing power [36][40].
- The cost of NPU solutions is significantly lower than pure GPU solutions, with NPU hardware costs being only 12.5% to 40% of those for pure GPU setups [37][40].
- For example, achieving 144 TOPS of AI computing power with a pure GPU solution requires multiple GPUs, leading to a total cost of around $4000, while a solution with NPU can cost about $500 [37][40].

Group 4: Hybrid Solutions
- NVIDIA's Thor chip integrates both GPU and NPU to leverage their strengths, allowing for efficient task division and compatibility with existing software, thus reducing development time and costs [33][40].
- The collaboration between GPU and NPU in autonomous driving systems enhances overall efficiency by avoiding frequent data transfers between different chips, resulting in a 40% efficiency improvement [33][40].
- The future trend in autonomous driving technology is expected to favor hybrid solutions that combine NPU and GPU capabilities to meet the demands of high-level autonomous driving while maintaining cost-effectiveness [40].
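The MAC (multiply-accumulate) operation mentioned above is the primitive an NPU implements in hardware. A minimal Python sketch makes it concrete (illustrative only: a real NPU executes thousands of these MAC steps in parallel across fixed-function units fed by a systolic or similar dataflow, not sequential loops):

```python
def matmul_mac(a, b):
    """Naive matrix multiply written as explicit multiply-accumulate (MAC)
    steps, counting how many MAC operations the computation requires."""
    rows, inner, cols = len(a), len(b), len(b[0])
    out = [[0] * cols for _ in range(rows)]
    mac_count = 0
    for i in range(rows):
        for j in range(cols):
            acc = 0
            for k in range(inner):
                acc += a[i][k] * b[k][j]  # one MAC: multiply, then accumulate
                mac_count += 1
            out[i][j] = acc
    return out, mac_count

# A 2x3 by 3x2 multiply needs 2 * 2 * 3 = 12 MACs; an accelerator rated at
# 144 TOPS performs on the order of 144 trillion such operations per second.
result, macs = matmul_mac([[1, 2, 3], [4, 5, 6]],
                          [[7, 8], [9, 10], [11, 12]])
```

This loop structure is also why NPUs beat GPUs on efficiency for inference: the access pattern is fixed and regular, so dedicated MAC arrays can skip the general-purpose scheduling and graphics machinery a GPU carries.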