Kimi founder Yang Zhilin: AI R&D will enter an AI-led era
Phoenix Finance (凤凰网财经) · 2026-03-29 10:49
Core Insights
- The essence of large models is the conversion of energy into intelligence, and scalability is a core foundation of AI development. Scalability, however, is not simply a matter of brute-force compute and energy; its focus is on improving efficiency [1][3].

Group 1: Scalability Strategy
- Kimi's scalability strategy is built around three directions: Token efficiency, long context, and Agent swarm technology, with the aim of extracting the most intelligence from limited resources [1][3].
- Improving Token efficiency means using better network architectures and optimizers so that the model learns more intelligence from the same amount of data [3].
- Kimi's proprietary Kimi Linear architecture strengthens long-context capability, allowing models to reach lower loss on longer inputs and to support more complex task execution [3] (see the linear-attention sketch after this summary).

Group 2: Evolution of Model Training
- Large-model training has evolved through three stages: it initially relied on natural internet data with minimal human annotation, and by 2025 it had moved to large-scale reinforcement learning systems in which human-defined tasks are improved through reinforcement learning [3].
- In the third, near-future stage, AI will increasingly lead research and development: researchers equipped with vast Token budgets will let AI autonomously synthesize new tasks, construct new environments, and define suitable reward functions [3] (a toy sketch of such a loop follows this summary).
- This shift is expected to accelerate the pace of research and development across the AI field [3].
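The article does not describe Kimi Linear's internals, so the sketch below is only a hedged illustration of why linear-attention-style architectures help at long context: a generic kernelized linear attention whose cost grows linearly in sequence length n, shown next to standard softmax attention, which grows quadratically. The feature map, shapes, and function names are illustrative assumptions, not Kimi's actual design.

```python
import numpy as np

def feature_map(x):
    # elu(x) + 1: keeps features positive, a common choice in linear attention
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(q, k, v):
    """Non-causal linear attention: O(n * d^2) time, O(d^2) state.

    q, k: (n, d) queries and keys; v: (n, d_v) values.
    """
    q, k = feature_map(q), feature_map(k)
    kv = k.T @ v                   # (d, d_v): one summary of all key-value pairs
    z = q @ k.sum(axis=0)          # (n,): per-query normaliser
    return (q @ kv) / z[:, None]   # (n, d_v)

def softmax_attention(q, k, v):
    """Standard softmax attention for comparison: O(n^2 * d) time and memory."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d = 4096, 64
    q, k, v = (rng.standard_normal((n, d)) for _ in range(3))
    print(linear_attention(q, k, v).shape)  # (4096, 64); cost stays linear in n
```

Because the key-value summary `kv` has a fixed size regardless of input length, doubling the context roughly doubles compute instead of quadrupling it, which is the property that makes lower loss on longer inputs affordable.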
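To make the AI-led training loop concrete, the toy Python sketch below has a "researcher" component synthesize tasks, attach a checking environment and a reward function to each, and drive a crude reinforcement-style update of a learner. Every name and mechanism here is an illustrative assumption, not a description of Moonshot's actual system.

```python
import random
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class SyntheticTask:
    prompt: str
    check: Callable[[int], bool]     # the "environment": verifies a candidate answer
    reward: Callable[[bool], float]  # the reward function defined for this task

def synthesize_tasks(rng: random.Random, n: int) -> List[SyntheticTask]:
    """Stand-in for a model proposing new tasks, environments and rewards on its own."""
    tasks = []
    for _ in range(n):
        a, b = rng.randint(1, 99), rng.randint(1, 99)
        tasks.append(SyntheticTask(
            prompt=f"{a}+{b}",
            check=lambda ans, target=a + b: ans == target,
            reward=lambda ok: 1.0 if ok else -0.1,
        ))
    return tasks

def learner_policy(prompt: str, skill: float, rng: random.Random) -> int:
    """Stand-in learner: answers correctly with probability `skill`."""
    a, b = map(int, prompt.split("+"))
    return a + b if rng.random() < skill else rng.randint(0, 200)

def rl_round(skill: float, tasks: List[SyntheticTask], rng: random.Random) -> float:
    """One reinforcement round: collect rewards, then nudge the learner's skill."""
    total = sum(t.reward(t.check(learner_policy(t.prompt, skill, rng))) for t in tasks)
    avg = total / len(tasks)
    return min(1.0, skill + 0.05 * avg)  # toy "policy update" driven by average reward

if __name__ == "__main__":
    rng = random.Random(0)
    skill = 0.2
    for step in range(10):
        skill = rl_round(skill, synthesize_tasks(rng, 32), rng)
        print(f"round {step}: skill={skill:.2f}")
```

The point of the sketch is only the division of labour: humans supply compute (Tokens) and oversight, while task synthesis, environment construction, and reward definition move inside the loop.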