AI Compute Treasury Strategy
VCI Global Launches AI Compute Treasury Strategy Built on NVIDIA GPU Infrastructure
Globenewswire · 2026-03-13 01:56
Core Insights
- VCI Global Limited has launched its AI Compute Treasury strategy to accumulate and deploy high-performance GPU infrastructure to meet the growing global demand for AI inference workloads [1][2]
- The global AI infrastructure market is projected to reach approximately $394.5 billion by 2030, with a compound annual growth rate (CAGR) of 19.4% from 2024 to 2030 [2]
- The AI Compute Treasury strategy is designed around a scalable AI infrastructure flywheel model that aims to generate continuous demand and revenue from AI-driven applications [2][3]

Company Strategy
- The AI Compute Treasury strategy will focus on gradually accumulating GPU infrastructure assets specifically for AI inference, the deployment phase of trained AI models [1][2]
- The strategy involves a flywheel model that includes capital investment in GPU assets, providing AI compute to enterprises and developers, expanding adoption of AI workloads, generating continuous revenue, and reinvesting in more GPU infrastructure [2][3]

Market Context
- The AI inference market is expected to grow to nearly $255 billion by 2030, driven by the rapid deployment of generative AI and real-time enterprise AI applications [2]
- VCI Global views AI compute infrastructure as a strategically significant long-term asset class that can generate sustained demand from AI-driven applications [2]

Infrastructure Development
- VCI Global has recently launched the AI GPU Lounge, a collaborative platform providing high-performance GPU infrastructure for AI development and inference [3]
- The AI GPU Lounge is the first operational platform in VCI Global's broader AI infrastructure strategy, enabling enterprises and developers to access scalable AI compute resources [3]

Leadership Perspective
- Chief Technology Officer Jason Thye emphasized that AI is entering a phase of large-scale deployment, leading to rapidly expanding demand for efficient AI inference compute [4]
- The company aims to build GPU-driven compute gradually to support enterprises, developers, and next-generation AI applications [4]