NVIDIA Open-Sources a 9B-Parameter Small Model, 6x Faster Than Qwen3
量子位 (QbitAI) · 2025-08-19 05:25
Core Insights

- The article covers the rise of small AI models, centered on NVIDIA's launch of a new small language model, Nemotron Nano v2, designed to perform complex reasoning tasks efficiently [1][3][7].

Group 1: Model Features and Performance

- Nemotron Nano v2 is a 9-billion-parameter model that matches or exceeds the accuracy of the leading open-source model Qwen3-8B on complex reasoning benchmarks while running 6 times faster [1][7].
- The model supports a "reasoning trace" feature: it generates its reasoning process before producing a final answer, which improves response quality, especially on complex tasks [8][11].
- Users can control the "thinking budget" by specifying the number of tokens the model may spend on reasoning, balancing response quality against inference cost [10][12].

Group 2: Training and Data

- The model was pre-trained on over 20 trillion tokens using FP8 precision and a Warmup-Stable-Decay learning-rate schedule [19].
- Post-training combined several techniques, including supervised fine-tuning and reinforcement learning from human feedback; about 5% of the data contained intentionally truncated reasoning traces [21].
- NVIDIA has also released a significant portion of the training data, including a diverse pre-training dataset of 66 trillion tokens spanning multiple categories [26][23].

Group 3: Open Source Strategy

- NVIDIA's open-source strategy, built around the Nemotron ecosystem, contrasts with other tech giants' shift toward closed-source models [30][32].
- The company's substantial open-sourcing push may reshape the competitive landscape of AI development [29][33].
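The "thinking budget" described in Group 1 can be pictured as a generation loop that ends the reasoning phase once a token cap is reached. This is a minimal sketch only: the `</think>` terminator token, the `next_token` callback, and the loop shape are illustrative assumptions, not NVIDIA's actual inference API.

```python
# Sketch of a "thinking budget" control loop. Assumes the model emits a
# terminator token (here "</think>") when it finishes reasoning on its own;
# `next_token` stands in for one decoding step of a real model.

from typing import Callable, List, Tuple

def reason_with_budget(next_token: Callable[[List[str]], str],
                       budget: int,
                       terminator: str = "</think>") -> Tuple[List[str], bool]:
    """Collect reasoning tokens until the model stops or the budget runs out.

    Returns the trace and whether the model finished on its own; if the
    budget is exhausted, the caller can force the terminator and move on
    to generating the final answer.
    """
    trace: List[str] = []
    for _ in range(budget):
        tok = next_token(trace)
        if tok == terminator:
            return trace, True      # model finished within budget
        trace.append(tok)
    return trace, False             # budget exhausted; truncate here

# Example with a toy "model" that reasons for five tokens, then stops:
toy = lambda trace: "step" if len(trace) < 5 else "</think>"
full, done = reason_with_budget(toy, budget=10)   # finishes on its own
cut, done2 = reason_with_budget(toy, budget=3)    # cut off after 3 tokens
```

Training on some intentionally truncated reasoning traces (the roughly 5% mentioned in Group 2) would plausibly make a model robust to this kind of forced cutoff, though the article does not spell out the exact mechanism.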
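The Warmup-Stable-Decay schedule mentioned in the training section can be sketched as a piecewise learning-rate function. The linear warmup and decay shapes and all step counts below are assumptions for illustration; the article does not give NVIDIA's exact settings.

```python
# Sketch of a Warmup-Stable-Decay (WSD) learning-rate schedule: linear
# warmup to a peak, a constant plateau, then a linear decay at the end.
# Exact phase shapes and lengths are illustrative assumptions.

def wsd_lr(step: int, total_steps: int, warmup_steps: int,
           decay_steps: int, peak_lr: float, min_lr: float = 0.0) -> float:
    """Learning rate at `step` under a WSD schedule."""
    if step < warmup_steps:                       # warmup phase
        return peak_lr * step / warmup_steps
    decay_start = total_steps - decay_steps
    if step < decay_start:                        # stable plateau
        return peak_lr
    frac = (step - decay_start) / decay_steps     # decay phase
    return peak_lr + (min_lr - peak_lr) * min(frac, 1.0)
```

For example, with `total_steps=1000`, `warmup_steps=100`, and `decay_steps=200`, the rate ramps to the peak over the first 100 steps, holds flat until step 800, then falls linearly to `min_lr`. The stable plateau is what distinguishes WSD from cosine schedules: training can be extended or checkpointed mid-run before committing to the final decay.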