Nvidia to invest $26 billion in open-source AI large models over the next five years, far exceeding the $3 billion OpenAI spent training GPT-4; the first AI models are expected as early as late 2026 to early 2027

Core Insights
- Nvidia plans to invest $26 billion (approximately 178.8 billion RMB) over the next five years to advance the development of open-source AI large models, significantly exceeding the $3 billion spent by OpenAI to train GPT-4 [1]
- This investment marks Nvidia's strategic shift from being a "chip manufacturer" to a "full-stack AI leading laboratory" [1]

Investment Strategy
- The $26 billion investment will not focus solely on a single model but will cover the entire industry chain of open-source AI large models, with funds expected to be deployed gradually over the next 18 to 24 months [1]
- The first self-developed open-source AI models are anticipated to be released by the end of 2026 or early 2027 [1]

Technical Approach
- Nvidia has chosen an "open-weight" model, a middle ground between OpenAI's fully closed-source approach and Meta's fully open-source Llama series [1]
- Key parameters (weights) of the models will be made public, allowing businesses and developers to download and run them on their own devices or private clouds, addressing needs for data privacy, customization, and cost control [1]
- However, the training data and code may not be fully disclosed [1]

Model Development Focus
- Nvidia will concentrate on developing cutting-edge multimodal and multi-domain large models, covering areas such as language, code, scientific computing, and intelligent agents [2]
- The company has secretly completed pre-training of a 550 billion parameter super-large model, which serves as a technical validation and stress test for subsequent open-source model development [2]
