A New Path to Stable Reinforcement Learning for Large Language Models: Geometric-Mean Policy Optimization (GMPO)
机器之心· 2025-08-13 00:52
Main authors of this article: Zhao Yuzhong, PhD candidate at the University of Chinese Academy of Sciences and intern at Microsoft Research Asia (MSRA), whose research focuses on multimodal learning and language-model post-training; Liu Yue, also studying at the University of Chinese Academy of Sciences. Advisors: Wan Fang, associate professor and doctoral supervisor at the School of Computer Science, University of Chinese Academy of Sciences; Ye Qixiang, professor and doctoral supervisor at the School of Electronics, University of Chinese Academy of Sciences; Cui Lei, Principal Research Manager in the General Artificial Intelligence (GenAI) group at Microsoft Research Asia; Wei Furu, Distinguished Scientist in the GenAI group at Microsoft Research Asia.

In recent years, reinforcement learning (RL) has achieved notable success in fine-tuning large language models (LLMs), especially in improving reasoning ability. Traditional RL methods such as Proximal Policy Optimization (PPO) and its variants, including Group Relative Policy Optimization (GRPO), have shown strong potential on complex reasoning tasks. Yet although they perform well in many settings, they still suffer from training instability, particularly when rewards carry extreme importance weights. Geometric-Mean Policy Optimization (GMPO), a stabilized variant of GRPO, addresses this problem. This article takes a deep dive into GM ...
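The stabilization idea can be sketched numerically: GRPO-style objectives aggregate token-level importance-weighted terms with an arithmetic mean, which a single extreme importance ratio can dominate, whereas a geometric mean (the GMPO approach) damps such outliers. A minimal illustration, with function names of our own choosing rather than from the paper:

```python
import math

def grpo_style_mean(ratios):
    # Arithmetic mean of token-level importance ratios (GRPO-style aggregation):
    # a single extreme ratio can dominate the whole update.
    return sum(ratios) / len(ratios)

def gmpo_style_mean(ratios):
    # Geometric mean of the same ratios (GMPO-style aggregation), computed in
    # log space for numerical stability; outliers are strongly damped.
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

ratios = [1.0, 1.1, 0.9, 50.0]  # one extreme importance weight among normal ones
print(grpo_style_mean(ratios))  # 13.25 -- dragged far above 1 by the outlier
print(round(gmpo_style_mean(ratios), 3))  # 2.652 -- close to the typical ratio
```

The log-space form also mirrors how a geometric-mean objective yields bounded per-token gradients, which is the intuition behind the stability claim.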
Li Auto VLA Experience Notes from August 8, 2025 (Including Group Members Who Have Tried Tesla's North American FSD)
理想TOP2· 2025-08-12 13:50
Core Insights
- The article compares the performance and user experience of Li Auto's VLA (Vision-Language-Action) driving system with Tesla's FSD (Full Self-Driving), noting that while VLA shows promise, it still falls short of FSD's seamless experience in certain scenarios [1][2][3].

Experience Evaluation
- The experience is divided into three parts: driving in a controlled environment with no driver present, a one-hour public road test, and a two-hour self-selected route test [1].
- Feedback indicates that VLA provides a comfortable and efficient experience, particularly in controlled environments, but its performance in more complex road scenarios remains to be fully evaluated [2][3].

User Feedback
- Users noted a marked difference in VLA's braking behavior, describing it as smooth and seamless compared to traditional driving, which enhances the perception of safety and comfort [3][4].
- The article argues that the initial goal for autonomous driving systems should be to outperform 80% of average drivers before aiming for higher benchmarks [4][5].

Iteration Potential
- VLA is believed to have substantial room for improvement over its predecessor, VLM, with potential advances in four key areas: simulation data efficiency, maximizing existing hardware capabilities, enhancing model performance through reinforcement learning, and improving voice-control experiences [6][7].
- The shift to reinforcement learning for VLA allows targeted optimization for specific driving challenges, a limitation of previous models [8][9].

User Experience and Product Development
- The importance of user experience is highlighted, with the assertion that in the AI era, product experience can be as crucial as technical capability [10].
- VLA's voice-control feature is seen as a significant enhancement, enabling personalized driving experiences based on user preferences, which could improve overall satisfaction [10].
Li Auto's VLA "Long March"
Jing Ji Guan Cha Wang· 2025-08-12 10:04
Core Insights
- The core philosophy of Li Auto's CEO, Li Xiang, emphasizes a long-term approach to success, advocating patience and resilience in the face of industry challenges [1]
- The launch event for the Li Auto i8 highlighted the introduction of the VLA driver model, which reflects the company's commitment to long-term innovation rather than short-term gains [1][3]

Group 1: VLA Driver Model
- The VLA driver model distinguishes itself from traditional end-to-end architectures by using reinforcement learning to deepen the machine's understanding of driving decisions [4][11]
- VLA aims to significantly improve safety, targeting an accident rate of one per 600 million kilometers, compared with current figures of 350-400 million kilometers for Li Auto's assisted driving [4][8]
- VLA's ability to adapt to individual driving styles through continuous learning is a key feature, allowing a personalized driving experience [4][8]

Group 2: Testing and Efficiency
- Li Auto has opted for simulation testing over extensive real-world testing, logging over 40 million kilometers of simulated driving by mid-2025, with daily peaks of 300,000 kilometers [5][9]
- The company has focused on building a robust simulation environment to address the limits of real-world testing, which cannot fully replicate extreme driving scenarios [9][10]
- Testing efficiency is a critical factor in VLA's development, with a strong emphasis on transforming research and development workflows [5][9]

Group 3: Technical Challenges
- Developing the VLA model requires overcoming significant challenges in data, algorithms, computing power, and engineering capability [19]
- The company has accumulated 4.3 billion kilometers of assisted-driving data and 1.2 billion kilometers of valid feedback data, which are essential for refining the VLA model [9]
- The VLA architecture is designed to provide logical reasoning capabilities, addressing the shortcomings of traditional end-to-end models [11][12]

Group 4: Market Response and Future Goals
- Market response to the VLA model has been positive, with a 72.4% trial rate and a 92% satisfaction rate reported for Li Auto's intelligent driving features [8]
- Li Auto aims to extend its MPI takeover mileage to 400-500 kilometers by the end of 2025, with aspirations of reaching 1,000 kilometers in the near future [8]
- The company's commitment to long-term innovation shows in strategic decisions that prioritize safety and effective computing power over immediate performance metrics [25][26]
Making Reinforcement Learning Lightning-Fast: FlashRL Delivers Blazing Rollouts with a Single Command, Now Fully Open-Sourced
机器之心· 2025-08-12 09:51
Core Viewpoint
- The article discusses the development of FlashRL, an open-source reinforcement learning solution that uses quantized rollouts without sacrificing downstream performance, addressing the rollout-training mismatch through Truncated Importance Sampling (TIS) [4][16][37]

Group 1: DAPO and Rollout Challenges
- DAPO, developed by Tsinghua AIR and ByteDance, is an open-source SOTA system for large-scale LLM reinforcement learning, achieving a score of 50 on the AIME 2024 benchmark with the Qwen2.5-32B model [1]
- The research team identified rollout generation as a major bottleneck in reinforcement learning training, consuming approximately 70% of total training time [3]
- Applying 8-bit quantization during rollout generation, combined with TIS, significantly accelerates the process while maintaining downstream performance [3][4]

Group 2: FlashRL Implementation
- FlashRL is the first open-source reinforcement learning implementation to apply INT8/FP8 during the rollout phase, matching BF16 performance without any loss [4][15]
- TIS mitigates the rollout-training mismatch, allowing quantized rollout training to reach performance comparable to BF16 rollout training, and even to surpass naive BF16 rollout training [16][37]
- FlashRL supports online quantization and integrates with existing inference engines such as vLLM, extending their capabilities to models with parameter updates [22]

Group 3: Performance and Acceleration
- FlashRL's INT8 rollout provides up to 1.7x throughput improvement while retaining the advantages of reinforcement learning [23]
- In standard environments, the speedup from 8-bit quantization is more pronounced for larger models, reaching up to 1.75x for the 32B model compared to BF16 [29]
- In memory-constrained environments, INT8 quantization can deliver over 3x faster generation, highlighting its potential for larger models [34]

Group 4: Validation and Usage
- FlashRL's effectiveness was validated by training the DAPO-32B model, demonstrating that INT8 rollout significantly improves training speed without compromising AIME benchmark accuracy [36][37]
- FlashRL can be adopted with a single command, letting users integrate it into their RL training without code modifications [41]
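The correction at the heart of this approach can be sketched briefly: when the rollout policy (quantized) differs from the training policy (BF16), each token's loss term is reweighted by a truncated importance ratio. The function names and cap value below are illustrative assumptions, not FlashRL's actual API:

```python
import math

def tis_weight(logp_train, logp_rollout, cap=2.0):
    # Truncated importance-sampling weight for one sampled token:
    # exp(logp_train - logp_rollout) corrects for the mismatch between the
    # (quantized) rollout policy and the training policy, and min(ratio, cap)
    # bounds the variance that large ratios would otherwise introduce.
    return min(math.exp(logp_train - logp_rollout), cap)

def tis_corrected_loss(token_losses, logps_train, logps_rollout, cap=2.0):
    # Reweight each token's policy-gradient loss term by its truncated ratio.
    weights = [tis_weight(t, r, cap) for t, r in zip(logps_train, logps_rollout)]
    return sum(w * l for w, l in zip(weights, token_losses)) / len(token_losses)

# A token the rollout policy under-sampled is up-weighted but clipped at the
# cap; one it over-sampled is down-weighted.
print(round(tis_weight(math.log(0.5), math.log(0.25)), 6))  # 2.0 (at the cap)
print(round(tis_weight(math.log(0.25), math.log(0.5)), 6))  # 0.5
```

When the two policies agree, every weight is 1 and the loss reduces to the uncorrected objective, which is why the correction is safe to leave on.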
A Deep Dive into the GPT-5 Launch: The Backlash of Overmarketing and AI's Technical Impasse
Tai Mei Ti APP· 2025-08-12 03:18
Core Viewpoint
- The release of GPT-5 by OpenAI has faced significant criticism from users, leading to the reinstatement of GPT-4o for paid users. Expectations for GPT-5 were high, but the actual advances were perceived as underwhelming compared to the leap from GPT-3 to GPT-4. The release highlighted various technical challenges and a shift in focus toward market competition and applications in specific sectors such as education, healthcare, and programming [1][3][4].

Group 1: Technical Challenges and Product Development
- The development of GPT-5 encountered numerous technical bottlenecks, including data scarcity and model failures, raising concerns about OpenAI's ability to innovate [3][6][41].
- GPT-5 is speculated to be a "unifying system" that integrates various capabilities but relies on a "Real-time Model Router" to connect different sub-models, rather than being a groundbreaking single model [6][7].
- The reliance on existing technologies for the routing system has led to skepticism about GPT-5's novelty, with some experts suggesting it should be considered an incremental improvement rather than a significant upgrade [7][10].

Group 2: Market Implications and Application Areas
- OpenAI is targeting three main verticals for GPT-5: education, healthcare, and programming, indicating a strategic shift toward commercial applications [13][14].
- The education sector is particularly highlighted, with concerns that ChatGPT could disrupt existing educational platforms, as evidenced by the stock fluctuations of language-learning companies during the GPT-5 announcement [16][17].
- In healthcare, GPT-5 is positioned to help patients understand complex medical information, potentially transforming patient-doctor interactions and empowering patients with knowledge [19][20].

Group 3: User Experience and Feedback
- User feedback has been largely negative, with many expressing dissatisfaction over the perceived loss of customization and GPT-5's effectiveness relative to GPT-4o, leading to calls for the return of the previous model [10][12].
- OpenAI's CEO has acknowledged the need for more customizable features and ongoing improvements to GPT-5 in response to user concerns [12][29].

Group 4: Future Directions and Innovations
- The article discusses potential future directions for AI development, including reinforcement learning, multimodal capabilities, and alternative architectures such as the Joint Embedding Predictive Architecture (JEPA) to overcome the limitations of current transformer-based models [46][57][62].
- The industry is at a critical juncture: the need for breakthroughs in AI technology grows increasingly urgent as existing models face diminishing returns in performance [41][63].
The Essence of Li Auto's VLA | Reinforcement-Learning-Dominated Next Action Token Prediction
自动驾驶之心· 2025-08-11 23:33
Core Insights
- The article discusses the potential and understanding of AI, focusing on the concept of "predicting the next token" and its implications for AI capabilities and consciousness [2][3][18].

Group 1: Understanding AI and Token Prediction
- Different interpretations of "predicting the next token" reflect varying understandings of the potential and essence of LLMs (Large Language Models) and AI [2].
- Those who view "predicting the next token" as more than a statistical distribution are more likely to recognize the significant potential of LLMs and AI [2][18].
- The article argues that the contributions of companies like Li Auto to AI development are often underestimated due to a lack of deep understanding of AI's capabilities [2][19].

Group 2: Ilya's Contributions and Perspectives
- Ilya, a prominent figure in AI, has been instrumental in several key advances in the field, including deep learning and reinforcement learning [4][5][6].
- His view of "predicting the next token" challenges the notion that it cannot surpass human performance, suggesting that a sufficiently advanced neural network could extrapolate the behavior of hypothetical individuals with superior capabilities [8][9][18].

Group 3: Li Auto's VLA and AI Integration
- Li Auto's VLA (Vision-Language-Action) model operates by continuously predicting the next action token from sensor inputs, reflecting a deeper understanding of the physical world rather than mere statistical analysis [19][20].
- The reasoning process of Li Auto's VLA is likened to consciousness, differing from traditional chatbots: it operates in real time and ceases when the system is turned off [21][22].
- The article posits that Li Auto's integration of AI software and hardware is at a high level, which is often overlooked by those in the industry [29].

Group 4: Reinforcement Learning in AI Applications
- The article asserts that assisted driving is better suited to reinforcement learning than chatbots, because the reward functions in driving are clearer and better defined [24][26].
- The underlying capabilities required for AI software and hardware development differ significantly: software allows rapid iteration and testing, unlike hardware [28].
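The "continuously predicting the next action token" process described in this article can be sketched as an abstract autoregressive loop. Every name here is an illustrative placeholder of ours, not Li Auto's implementation:

```python
def drive_loop(encode_sensors, next_action_token, act, horizon=10):
    # Minimal autoregressive control loop: each step conditions on fresh
    # sensor context plus the action tokens already emitted, predicts the
    # next action token, and executes it. The loop (and thus the "reasoning")
    # stops when the system stops calling it.
    history = []
    for _ in range(horizon):
        context = encode_sensors() + history
        token = next_action_token(context)
        act(token)
        history.append(token)
    return history

# Stub callables just to show the loop's shape: the dummy "policy" returns
# the context length as its token.
tokens = drive_loop(lambda: [0], lambda ctx: len(ctx), lambda t: None)
print(tokens)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

The structural point is that, unlike a chatbot turn, the context is refreshed from sensors on every step, so prediction is continuous rather than prompt-bounded.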
Closed-Loop Collision Rate Slashed by 50%! DistillDrive: A New End-to-End Approach with Heterogeneous Multimodal Distillation
自动驾驶之心· 2025-08-11 23:33
Core Insights
- The article discusses the development of DistillDrive, an end-to-end autonomous driving model that reduces collision rates by 50% and improves closed-loop performance by 3 percentage points compared to baseline models [2][7].

Group 1: Model Overview
- DistillDrive uses a knowledge-distillation framework to enhance multi-modal motion feature learning, addressing the tendency of existing models to over-focus on ego-vehicle status [2][6].
- The model incorporates a structured scene representation as a teacher model, leveraging diverse planning instances for multi-objective learning [2][6].
- Reinforcement learning is introduced to optimize the mapping from states to decisions, while generative modeling is used to construct planning-oriented instances [2][6].

Group 2: Experimental Validation
- The model was validated on the nuScenes and NAVSIM datasets, demonstrating a 50% reduction in collision rates and a 3-point improvement in performance metrics [7][37].
- The nuScenes dataset consists of 1,000 driving scenes, while the NAVSIM dataset enhances perception evaluation with high-quality annotations and complex scenarios [33][36].

Group 3: Performance Metrics
- DistillDrive outperformed existing models, achieving lower collision rates and reduced L2 error compared to SparseDrive, indicating the effectiveness of diversified imitation learning [37][38].
- The teacher model exhibited superior performance, confirming the effectiveness of reinforcement learning in optimizing the state space [37][39].

Group 4: Future Directions
- Future work aims to integrate world models with language models to further enhance planning performance, and to employ more effective reinforcement-learning methods [54][55].
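The teacher-student setup described above can be sketched with a standard distillation loss that pushes the student's distribution over candidate plans toward the teacher's. This is a generic illustration under our own assumptions, not DistillDrive's actual formulation:

```python
import math

def kl_divergence(teacher_probs, student_probs, eps=1e-8):
    # KL(teacher || student) over a discrete set of candidate plans:
    # the student is pushed to match the teacher's multi-modal distribution
    # instead of collapsing onto a single ego-centric mode.
    return sum(t * math.log((t + eps) / (s + eps))
               for t, s in zip(teacher_probs, student_probs))

def distill_loss(teacher_probs, student_probs, imitation_loss, alpha=0.5):
    # Blend the distillation term with the student's own imitation loss.
    return alpha * kl_divergence(teacher_probs, student_probs) \
        + (1.0 - alpha) * imitation_loss

# A student that already matches the teacher pays only the imitation term.
print(round(distill_loss([0.7, 0.2, 0.1], [0.7, 0.2, 0.1], imitation_loss=0.4), 6))  # 0.2
```

The `alpha` knob trades off copying the teacher's diverse planning modes against fitting the expert trajectory directly; both hyperparameters here are placeholders.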
Trading Accumulated Time for Breakthroughs: Moonshot AI Focuses on Artificial General Intelligence
Jing Ji Ri Bao· 2025-08-11 22:12
Core Insights
- Moonshot AI, based in Beijing, is gaining attention for its open-source model Kimi K2, which ranked fifth globally upon its launch in July 2025 [1]
- The company's mission is to explore the limits of intelligence and make AI universally accessible [1]

Company Overview
- Founded in April 2023 by a team with extensive experience in natural language processing (NLP), Moonshot AI aims to discover transformative possibilities in artificial intelligence [1]
- The company has approximately 300 employees, a significant portion of them young post-90s talent [2]

Product Development
- Kimi K2, a trillion-parameter model, has a distinctive ability to handle long texts, supporting up to 200,000 Chinese characters [2][5]
- The Kimi intelligent assistant launched in October 2023, followed by several product releases, including the Kimi browser assistant and Kimi-Researcher [2]

Technical Innovations
- Kimi K2's architecture handles complex tasks at a lower cost, with only 32 billion active parameters [3]
- The model has excelled in various benchmarks, particularly programming, tool use, and mathematical reasoning [6]

User Engagement
- Kimi's long-text capability drove a significant increase in adoption, with user numbers growing from hundreds of thousands to tens of millions in 2024 [5]
- The model is designed to be user-friendly, allowing non-programmers to use its capabilities effectively [7]

Future Aspirations
- Moonshot AI aims to create a general-purpose AI that surpasses human intelligence, focusing on developing versatile skills that reinforce one another [8]
- The company emphasizes building a strong foundational model before releasing products, ensuring robust performance and capabilities [8]
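The "trillion total parameters, only 32 billion active" figure is characteristic of a mixture-of-experts design, where a router activates just a few experts per token. A generic top-k routing sketch, which we offer as an illustration rather than Kimi K2's actual implementation:

```python
def top_k_experts(router_scores, k=2):
    # Select the k highest-scoring experts for one token. Only the chosen
    # experts run their feed-forward pass, so the parameters active per
    # token are a small fraction of the model's total parameter count.
    ranked = sorted(range(len(router_scores)),
                    key=lambda i: router_scores[i], reverse=True)
    return ranked[:k]

scores = [0.10, 0.70, 0.05, 0.90]  # one router score per expert
print(top_k_experts(scores))  # [3, 1]
```

With, say, 64 experts of equal size and k=2, compute per token scales with 2/64 of the expert parameters, which is how total capacity and active cost can diverge by an order of magnitude.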
Doubting the VLA Model and Claiming AI Falls Far Short? An Industry Practitioner Responds to Unitree's Wang Xingxing
第一财经· 2025-08-11 14:51
Core Viewpoint
- The article discusses the skepticism of Wang Xingxing, CEO of Unitree, regarding the VLA (Vision-Language-Action) model, suggesting that the robotics industry is overly focused on data while AI still lacks sufficient embodied intelligence [3][4].

Group 1: Challenges in Robotics
- The traditional robotics industry faces three core challenges: perception limitations, decision-making gaps, and generalization bottlenecks [6][7].
- Current robots often rely on preset rules for task execution, making it difficult to understand complex, dynamic environments [6].
- In multi-task switching, traditional robots frequently require human intervention for reprogramming or strategy adjustment [6].
- Robots need extensive retraining and debugging when confronted with new tasks or scenarios [6].

Group 2: Need for Model Reconstruction
- There are calls within the industry to reconstruct the VLA model and seek new paradigms for embodied intelligence [5][7].
- Jiang Lei emphasizes the need for a complete system integrating both hardware and software, rather than relying merely on large language models [6].
- The current research landscape is fragmented: large-language-model researchers focus solely on language, while edge intelligence concentrates on smaller models [6].

Group 3: Future Directions
- Jiang Lei proposes exploring cloud-edge collaboration to create a comprehensive deployment architecture for humanoid robots [6].
- The ideal "brain" model for humanoid robots should possess full parameter capability, while the on-robot "small brain" model must achieve breakthroughs in size and real-time performance [6].
- The industry is optimistic about humanoid robots becoming a significant sector, with this year referred to as the year of mass production for humanoid robots [7].
Everything About AI Infra
Hu Xiu· 2025-08-11 10:50
Group 1
- The core concept of AI Infrastructure (AI Infra) encompasses both hardware and software components [2][3]
- Hardware includes AI chips, GPUs, and switches, while the software layer can be likened to cloud computing, divided into three layers: IaaS, PaaS, and an optimization layer for training and inference frameworks [3][4][5]
- The rise of large models has created significant opportunities for AI Infra professionals, marking a pivotal moment similar to the early days of search engines [8][12]

Group 2
- AI Infra professionals are increasingly recognized as essential to the success of AI models, with their role evolving from support to a core component of model capability [102][106]
- The performance of AI models is heavily influenced by the efficiency of the underlying infrastructure, with metrics such as model response latency and GPU utilization being critical [19][40]
- Companies must weigh the cost-effectiveness of building their own infrastructure against using cloud services, as optimizing infrastructure can yield substantial savings [22][24]

Group 3
- The distinction between traditional infrastructure and AI Infra lies in their specific hardware and network requirements, with AI Infra relying primarily on GPUs [14][15]
- Future AI Infra professionals will likely come both from new engineers and from those transitioning out of traditional infrastructure roles, underscoring the importance of accumulated knowledge [16][18]
- Collaboration between algorithm developers and infrastructure engineers is crucial, as both must work together to optimize model performance and efficiency [56][63]

Group 4
- The emergence of third-party companies in the AI Infra space is driven by the need for diverse API offerings, although their long-term viability depends on unique value propositions [26][29]
- Open-source models can stimulate advances in AI Infra by encouraging optimization efforts, but excessive focus on popular models may hinder innovation [84][87]
- The integration of domestic chips into AI Infra solutions is a growing area of interest, with efforts to enhance their competitiveness through tailored model designs [85][97]
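Since model response latency is cited as a critical infrastructure metric, a minimal nearest-rank percentile helper of the kind used for p99 latency dashboards may make the metric concrete; this is our own illustration, not from the article:

```python
import math

def latency_percentile(latencies_ms, p):
    # Nearest-rank percentile: sort the samples and take the ceil(p*n)-th.
    # p99 exposes tail latency that an average would hide entirely.
    ranked = sorted(latencies_ms)
    idx = max(0, min(len(ranked) - 1, math.ceil(p * len(ranked)) - 1))
    return ranked[idx]

samples = [12, 15, 11, 230, 14, 13, 16, 12, 18, 17]  # response times in ms
print(latency_percentile(samples, 0.50))  # 14
print(latency_percentile(samples, 0.99))  # 230
```

The gap between the median (14 ms) and p99 (230 ms) in the toy sample is exactly the kind of tail behavior that drives infrastructure optimization work.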