Fourth Paradigm (06682) Launches "Virtual VRAM" Virtual Memory Expansion Card, a Breakthrough in GPU Resource Utilization

Core Insights
- The rapid development of large AI models has made GPU memory capacity a critical bottleneck for training and inference efficiency [1]
- Fourth Paradigm has launched "Virtual VRAM", a plug-in virtual memory expansion card that turns system memory into a dynamically scheduled buffer pool, enabling elastic expansion of GPU memory resources (a generic illustration of this host-memory-pooling idea appears in the sketch after this summary) [1][2]
- The product is positioned to address the high cost of traditional GPU memory expansion and to improve the scalability and multi-tasking capability of AI models [1][3]

Company Overview
- "Virtual VRAM" can expand the virtual memory capacity of a single GPU card to as much as 256GB, significantly improving the usable capacity of existing NVIDIA GPUs without any hardware changes [2]
- The product targets two main scenarios: relieving memory shortages when a workload runs on a single card, and allowing multiple models to be deployed on the same GPU under low load [2]

Industry Implications
- As the number of AI models and their parameter scales continue to grow rapidly, memory capacity has become a key factor in how enterprises build AI capability and control cost [3]
- "Virtual VRAM" is expected to provide a cost-effective way to expand compute capacity, helping users maintain high performance while reducing spend [3]
- Fourth Paradigm plans to collaborate with more memory manufacturers to further optimize AI infrastructure and broaden its adoption [3]
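The article does not describe Fourth Paradigm's actual mechanism or API, but the general idea of using host memory as an elastic backing pool for GPU memory can be illustrated with standard CUDA managed (unified) memory, which on Linux with Pascal-or-newer NVIDIA GPUs lets a single device oversubscribe its physical VRAM while the driver pages data between host RAM and the GPU on demand. The sketch below is an assumption-laden analogy, not the vendor's implementation; the 32 GiB allocation size is an arbitrary value chosen to exceed typical VRAM.

```cuda
// Minimal sketch (NOT Fourth Paradigm's implementation): CUDA managed memory
// lets one GPU oversubscribe its physical VRAM, with pages migrating between
// host RAM and the device on demand -- conceptually similar to treating
// system memory as an elastic buffer pool behind the GPU.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, size_t n, float factor) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    // Assumed size: pick something larger than the GPU's physical VRAM
    // (here 32 GiB) so the allocation must spill into host memory.
    const size_t bytes = 32ull << 30;
    const size_t n = bytes / sizeof(float);

    float *buf = nullptr;
    if (cudaMallocManaged(&buf, bytes) != cudaSuccess) {
        fprintf(stderr, "managed allocation failed\n");
        return 1;
    }

    // Hint: keep pages resident in host RAM by default; the driver migrates
    // them to the GPU only as kernels touch them.
    cudaMemAdvise(buf, bytes, cudaMemAdviseSetPreferredLocation, cudaCpuDeviceId);

    for (size_t i = 0; i < n; ++i) buf[i] = 1.0f;  // first touch on the host

    unsigned blocks = (unsigned)((n + 255) / 256);
    scale<<<blocks, 256>>>(buf, n, 2.0f);          // pages migrate on demand
    cudaDeviceSynchronize();

    printf("buf[0] = %.1f\n", buf[0]);             // expect 2.0
    cudaFree(buf);
    return 0;
}
```

In this generic form, performance depends heavily on page-migration traffic over PCIe or NVLink; a production scheme along the lines the article describes would presumably add its own scheduling and prefetching on top of the raw host-memory pool.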