Guoxin Technology: Newly Developed DPNPU Neural Network Processor IP Passes Internal Testing

Core Insights

- Guoxin Technology (688262.SH) has successfully completed internal testing of its newly developed neural network processor IP, the DPNPU (Dataflow Parallel NPU), aimed at high-performance AI processing for edge and endpoint computing [1]
- The DPNPU is optimized for the complex and variable computing tasks of AI applications, targeting the best balance between power consumption, performance, and flexibility [1]

Technical Specifications

- The DPNPU supports flexible computing-power configurations from 0.5 to 4.8 TOPS, scaling linearly to provide customized AI computing solutions for various scenarios [2]
- It uses an innovative open architecture compliant with the RISC-V instruction set architecture (ISA), featuring a dedicated Task Distribution & Synchronization (TDS) hardware scheduling engine for efficient task management and data-flow control [2]
- It integrates over 90 neural network operators covering CNN and RNN architectures, supports RNN variants such as LSTM and GRU, and leaves room for adapting to future AI models [2]
- It supports post-training quantization (PTQ) techniques, including symmetric, asymmetric, layer-wise, and channel-wise quantization, preserving model accuracy while significantly reducing compute and storage requirements [2]

Software Ecosystem

- To lower the development threshold for AI applications, Guoxin Technology has built a complete software ecosystem around the DPNPU, named CCore NPU Studio, comprising a full suite of tools, drivers, and runtime software [3]
- CCore NPU Studio provides end-to-end model deployment capabilities, including model conversion, preprocessing, quantization, compilation, and simulation tools [3]
- Runtime support for the DPNPU comprises inference-framework software and various extended soft-operator libraries; the driver is compatible with mainstream CPU platforms such as RISC-V and supports different application environments [3]

Market Positioning

- The DPNPU architecture has been validated for feasibility, energy efficiency, and its software stack, laying a foundation for the continued development of NPU technology and the advancement of edge and endpoint AI chip applications [3]
- Compared with cloud-based AI, edge and endpoint AI offer significant advantages such as real-time response, data privacy protection, and low network dependency, which in turn demand higher energy efficiency and computing density from chips [3]
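The announcement names the PTQ variants the DPNPU supports but does not publish its quantization kernels. As a general illustration of what symmetric, asymmetric, and channel-wise post-training quantization mean, here is a minimal NumPy sketch (the function names and 8-bit settings are illustrative assumptions, not Guoxin APIs):

```python
import numpy as np

def quantize_symmetric(w, bits=8):
    # Symmetric PTQ: zero-point fixed at 0, one scale from max |w| (layer-wise).
    qmax = 2 ** (bits - 1) - 1                    # 127 for int8
    scale = np.max(np.abs(w)) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def quantize_asymmetric(w, bits=8):
    # Asymmetric PTQ: a zero-point shifts the integer range to cover [min, max].
    qmin, qmax = 0, 2 ** bits - 1                 # uint8 range
    scale = (w.max() - w.min()) / (qmax - qmin)
    zero_point = int(round(qmin - w.min() / scale))
    q = np.clip(np.round(w / scale) + zero_point, qmin, qmax).astype(np.uint8)
    return q, scale, zero_point

def quantize_per_channel(w, bits=8, axis=0):
    # Channel-wise symmetric PTQ: one scale per output channel, usually a
    # tighter fit (lower error) than a single layer-wise scale.
    qmax = 2 ** (bits - 1) - 1
    reduce_axes = tuple(i for i in range(w.ndim) if i != axis)
    scales = np.max(np.abs(w), axis=reduce_axes) / qmax
    shape = [1] * w.ndim
    shape[axis] = -1
    q = np.clip(np.round(w / scales.reshape(shape)), -qmax - 1, qmax).astype(np.int8)
    return q, scales
```

Dequantizing (`q * scale`, or `(q - zero_point) * scale` in the asymmetric case) recovers the weights to within one quantization step, which is why PTQ can shrink storage 4x versus float32 while keeping model accuracy close to the original.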
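The operator list mentions LSTM and GRU support without describing the operators themselves. For readers unfamiliar with what such an NPU operator computes, a single LSTM step can be sketched in NumPy as follows; the stacked `[input, forget, cell, output]` gate layout is a common convention assumed here, not a documented DPNPU detail:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h_prev, c_prev, W, U, b):
    """One LSTM time step.

    x: input (n_in,); h_prev, c_prev: previous hidden/cell state (n_h,);
    W: (4*n_h, n_in), U: (4*n_h, n_h), b: (4*n_h,) hold the four gates
    stacked in [i, f, g, o] order (an assumed convention).
    """
    n_h = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = sigmoid(z[0 * n_h:1 * n_h])   # input gate
    f = sigmoid(z[1 * n_h:2 * n_h])   # forget gate
    g = np.tanh(z[2 * n_h:3 * n_h])   # candidate cell state
    o = sigmoid(z[3 * n_h:4 * n_h])   # output gate
    c = f * c_prev + i * g            # new cell state
    h = o * np.tanh(c)                # new hidden state
    return h, c
```

An accelerator exposes this whole recurrence as one hardware operator so the per-gate matrix multiplies and element-wise activations run without round-trips to the host CPU; a GRU operator is analogous with three gates instead of four.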
