Fourth Paradigm (06682.HK) launches "Virtual VRAM", a virtual VRAM expansion card billed as a breakthrough in GPU resource utilization

Core Insights
- The rapid development of large AI models has made GPU memory capacity a critical bottleneck for training and inference efficiency [1][3]
- Fourth Paradigm has launched "Virtual VRAM", a plug-in virtual memory expansion card that turns host physical memory into a dynamically scheduled VRAM buffer pool, enabling elastic expansion of GPU compute resources (a general-idea sketch of this kind of technique appears at the end of this summary) [1][2]

Group 1: Product Features
- "Virtual VRAM" can expand the effective memory of a single GPU card to as much as 256GB, significantly extending the capabilities of existing NVIDIA GPUs [2]
- Users can run larger-scale AI training and inference workloads without replacing hardware, avoiding the added cost of purchasing new GPUs [2]
- The product supports physical machines, Docker containers, and Kubernetes, and can be deployed without code modification [2]

Group 2: Market Implications
- As the number and scale of AI models continue to grow, memory capacity has become a key factor in how enterprises build AI capability and control cost [3]
- The new product is positioned as a cost-effective compute-expansion option, helping users maintain high performance while reducing costs [3]
- Fourth Paradigm plans to collaborate with more memory manufacturers to further optimize and popularize AI infrastructure [3]
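The announcement does not disclose how "Virtual VRAM" schedules host memory behind the GPU, so the following is only a minimal CUDA sketch of the general technique it resembles: oversubscribing GPU memory with host RAM via CUDA Unified Memory, where the driver pages data between system RAM and VRAM on demand. This is not Fourth Paradigm's code or API, and the 48 GiB allocation size is a hypothetical figure chosen to exceed a typical 24 GB card.

```cuda
// Sketch: oversubscribing GPU memory with host RAM through CUDA Unified Memory.
// Illustrative only; it does NOT reflect Fourth Paradigm's implementation.
#include <cstdio>
#include <cuda_runtime.h>

// Grid-stride kernel that touches every element, forcing pages to migrate
// between host RAM and VRAM as the GPU accesses them.
__global__ void scale(float* data, size_t n, float factor) {
    size_t stride = (size_t)gridDim.x * blockDim.x;
    for (size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x; i < n; i += stride)
        data[i] *= factor;
}

int main() {
    size_t free_b = 0, total_b = 0;
    cudaMemGetInfo(&free_b, &total_b);
    printf("Physical VRAM: %.1f GiB total, %.1f GiB free\n",
           total_b / 1073741824.0, free_b / 1073741824.0);

    // Hypothetical 48 GiB allocation: larger than the card's physical VRAM,
    // backed by host RAM and paged in by the driver on demand.
    size_t n = (48ull << 30) / sizeof(float);
    float* data = nullptr;
    if (cudaMallocManaged(&data, n * sizeof(float)) != cudaSuccess) {
        printf("Oversubscribed allocation failed on this platform\n");
        return 1;
    }

    scale<<<4096, 256>>>(data, n, 2.0f);   // pages migrate chunk by chunk
    cudaDeviceSynchronize();

    cudaFree(data);
    return 0;
}
```

The trade-off in any scheme like this is that pages resident in host RAM are reached over PCIe or NVLink rather than from on-card memory, so how aggressively hot data is kept resident on the GPU, presumably the job of the product's dynamic scheduling, largely determines how much of the card's native performance is preserved.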