New Work from NVIDIA's Song Han Team: Efficient Language Models with Post Neural Architecture Search
Nvidia (US:NVDA) | QbitAI · 2025-08-26 08:11

**Core Insights**
- The article covers the launch of Jet-Nemotron, a new family of efficient language models built with Post Neural Architecture Search (PostNAS), which outperforms existing models across a range of benchmarks while delivering large throughput gains [1][6][24].

**Performance Metrics**
- Jet-Nemotron-2B delivers 47 times the generation throughput of Qwen3-1.7B-Base, with its KV cache reduced to 1/47 of the size [3].
- On mathematical tasks, Jet-Nemotron-2B reaches an average accuracy of 49.6, surpassing Qwen3-1.7B-Base by 6.3 points while running 47 times faster [26].
- On commonsense reasoning tasks, Jet-Nemotron-2B reaches an average accuracy of 62.0, ahead of all baseline models [30].
- On retrieval tasks, Jet-Nemotron-2B outperforms all baseline models except Qwen3-1.7B-Base [33].
- Jet-Nemotron-4B posts the highest average accuracy at 76.2 while keeping a 21-times speed advantage over Qwen3 [34].

**Model Architecture**
- Jet-Nemotron is built with Post Neural Architecture Search, which optimizes the placement of full-attention layers and selects the best-performing linear attention modules [6][10].
- The model introduces a new linear attention module, JetBlock, whose kernel generator produces convolution kernels dynamically, conditioned on the input [17][18].
- A hardware-aware architecture search tunes model hyperparameters for higher accuracy without sacrificing throughput [19][22].

**Future Developments**
- The research team plans to release the code and models on GitHub after a legal compliance review [23].
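The efficiency lever behind the cache reduction described above is linear attention: unlike softmax attention, it can be computed with a recurrent state of constant size, so the per-token memory does not grow with sequence length. A minimal NumPy sketch of causal linear attention, assuming a simple ReLU-style feature map `phi` (an illustrative choice, not the kernels used in Jet-Nemotron):

```python
import numpy as np

def linear_attention(Q, K, V):
    """Causal linear attention with a constant-size recurrent state.

    A minimal sketch (not the Jet-Nemotron implementation): the softmax
    kernel is replaced by a positive feature map phi, so a fixed-size
    running state S replaces the growing KV cache.
    """
    phi = lambda x: np.maximum(x, 0) + 1e-6     # assumed feature map
    n, d = Q.shape
    S = np.zeros((d, V.shape[1]))               # running sum of outer(phi(k), v)
    z = np.zeros(d)                             # running sum of phi(k)
    out = np.empty_like(V)
    for t in range(n):
        q, k = phi(Q[t]), phi(K[t])
        S += np.outer(k, V[t])                  # state update: O(d * d_v), not O(t)
        z += k
        out[t] = (q @ S) / (q @ z + 1e-6)       # normalized attention output
    return out
```

Because only `S` and `z` are carried between tokens, generation memory stays constant, which is the mechanism behind the 1/47 cache figure reported above.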
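The description of JetBlock — a kernel generator that produces convolution kernels dynamically from the input — can be illustrated with a toy sketch. Here `W_gen`, the shared per-token kernel, and the zero-padding scheme are all illustrative assumptions, not the released design:

```python
import numpy as np

def dynamic_causal_conv(X, W_gen, ksize=4):
    """Toy sketch of input-conditioned causal convolution, in the spirit of
    JetBlock's kernel generator.

    A hypothetical linear map W_gen turns each token's features into a
    conv kernel of length ksize, which mixes that token with its ksize-1
    predecessors. Shapes and names are illustrative.
    """
    n, d = X.shape
    kernels = X @ W_gen                             # (n, ksize): one kernel per token
    Xp = np.vstack([np.zeros((ksize - 1, d)), X])   # left-pad so conv stays causal
    out = np.empty_like(X)
    for t in range(n):
        window = Xp[t:t + ksize]                    # the last ksize tokens up to t
        out[t] = kernels[t] @ window                # input-conditioned mixing
    return out
```

The point of the dynamic kernel is that the mixing weights depend on the content at each position, unlike a static convolution whose kernel is fixed after training.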
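The hardware-aware search summarized above can be read as a constrained optimization: among candidate hyperparameter settings, keep only those that meet a throughput budget, then pick the most accurate. A deliberately tiny sketch, with hypothetical `measure_throughput` and `eval_accuracy` callbacks standing in for real on-device profiling and benchmark evaluation:

```python
def hardware_aware_search(configs, measure_throughput, eval_accuracy, budget):
    """Toy sketch of hardware-aware architecture search (not the PostNAS
    algorithm itself): filter configs by a throughput budget, then return
    (best_config, best_accuracy) among the survivors, or None if none pass.
    """
    best = None
    for cfg in configs:
        if measure_throughput(cfg) < budget:    # discard configs that are too slow
            continue
        acc = eval_accuracy(cfg)
        if best is None or acc > best[1]:       # keep the most accurate survivor
            best = (cfg, acc)
    return best
```

The design choice this captures is the one stated in the article: accuracy is maximized subject to a fixed throughput floor, rather than trading the two off freely.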