Native FP8

What is the difference between Deepseek V3.1's UE8M0 FP8 and Nvidia's FP8 formats?
傅里叶的猫 · 2025-08-24 12:31
Core Viewpoint
- The introduction of UE8M0 FP8 by Deepseek for upcoming domestic chips signifies a strategic move to enhance compatibility and efficiency in the Chinese AI ecosystem, addressing the specific requirements of domestic hardware [5][10][12].

Group 1: UE8M0 and FP8 Concept
- FP8 is an 8-bit floating-point format that cuts memory usage by roughly 75% compared with 32-bit formats, improving computational speed and efficiency for large-model training and inference [7][13].
- UE8M0 is an exponent-only encoding (8 exponent bits, no mantissa) used as the per-block scale for FP8 tensor data, chosen to ease compatibility with domestic chips; it differs from Nvidia's E4M3 and E5M2 element formats, which trade off precision against dynamic range [9][10] (see the decoding sketch at the end of this section).
- The Open Compute Project (OCP) defines UE8M0 as part of its MXFP8 microscaling formats, aiming to standardize FP8 usage across hardware platforms [8].

Group 2: Strategic Importance of UE8M0
- The development of UE8M0 is crucial for ensuring that domestic chips can use FP8 effectively without relying on foreign standards, reducing dependency on Nvidia's technology [12].
- Deepseek's integration of UE8M0 into its model development process aims to ensure that models run stably on upcoming domestic chips, smoothing the transition from development to deployment [11][12].
- The goal of UE8M0 is not to outperform foreign FP8 standards but to give domestic chips a viable path to FP8 efficiency [14].

Group 3: Performance and Limitations
- FP8 with UE8M0 scaling saves roughly 75% of memory compared with FP32, allowing larger models or more concurrent requests during inference [13] (see the back-of-envelope calculation at the end of this section).
- Inference throughput with FP8 can be about twice that of BF16, which is particularly beneficial for large-scale AI workloads [13].
- UE8M0 is not a one-size-fits-all solution: some computations still require higher-precision formats such as BF16 or FP16, and careful calibration is needed to avoid errors at extreme values [15].
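
To make the format differences concrete, here is a minimal decoding sketch in Python, assuming the bit layouts from the OCP MX specification (UE8M0: unsigned, 8 exponent bits, no mantissa, bias 127) and the standard FP8 element layouts Nvidia supports (E4M3: bias 7, no infinities; E5M2: bias 15, IEEE-like specials). The function names are illustrative, not from any particular library.

```python
import math

def decode_ue8m0(byte: int) -> float:
    """UE8M0: unsigned, 8 exponent bits, 0 mantissa bits (OCP MX scale format).
    Every value is an exact power of two; the all-ones pattern encodes NaN."""
    if byte == 0xFF:
        return math.nan
    return 2.0 ** (byte - 127)

def decode_e4m3(byte: int) -> float:
    """E4M3: 1 sign, 4 exponent bits (bias 7), 3 mantissa bits.
    More mantissa precision, narrower dynamic range; no infinities, one NaN pattern."""
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 3) & 0x0F
    man = byte & 0x07
    if exp == 0x0F and man == 0x07:      # S.1111.111 is NaN
        return math.nan
    if exp == 0:                         # subnormal: no implicit leading 1
        return sign * (man / 8.0) * 2.0 ** (1 - 7)
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - 7)

def decode_e5m2(byte: int) -> float:
    """E5M2: 1 sign, 5 exponent bits (bias 15), 2 mantissa bits.
    Wider dynamic range, less precision; IEEE-like inf/NaN encodings."""
    sign = -1.0 if byte & 0x80 else 1.0
    exp = (byte >> 2) & 0x1F
    man = byte & 0x03
    if exp == 0x1F:
        return sign * math.inf if man == 0 else math.nan
    if exp == 0:
        return sign * (man / 4.0) * 2.0 ** (1 - 15)
    return sign * (1.0 + man / 4.0) * 2.0 ** (exp - 15)

print(decode_ue8m0(130))   # 8.0     -> pure power-of-two block scale (2**3)
print(decode_e4m3(0x7E))   # 448.0   -> largest finite E4M3 value
print(decode_e5m2(0x7B))   # 57344.0 -> largest finite E5M2 value
```

Because a UE8M0 scale is always an exact power of two, applying it to a block of E4M3 or E5M2 elements is just an exponent adjustment rather than a full floating-point multiply, which is part of what makes it attractive for hardware.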
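
As a back-of-envelope check on the memory and throughput figures above, the sketch below compares weight footprints at 4, 2, and 1 byte per parameter. The 671B parameter count (a DeepSeek-V3-class model) and the weights-only assumption are mine, used purely for illustration.

```python
def weight_memory_gib(n_params: float, bytes_per_param: int) -> float:
    """Memory for the weights alone, ignoring activations, optimizer state and KV cache."""
    return n_params * bytes_per_param / 1024**3

N = 671e9  # illustrative parameter count (DeepSeek-V3-class model)
for name, nbytes in [("FP32", 4), ("BF16", 2), ("FP8", 1)]:
    print(f"{name}: {weight_memory_gib(N, nbytes):,.0f} GiB")

# FP32: 2,500 GiB   BF16: 1,250 GiB   FP8: 625 GiB
# FP8 weights are 1/4 of the FP32 footprint (the ~75% saving) and 1/2 of the
# BF16 footprint, which is where the roughly 2x inference-throughput estimate
# comes from when decoding is memory-bandwidth bound.
```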