LLMs Can Now Update Their Own Weights, with Major Gains in Self-Adaptation and Knowledge Integration. Is AI Waking Up?
机器之心· 2025-06-14 04:12
Core Insights
- The article surveys the fast-growing body of research on AI self-evolution, highlighting frameworks and models that aim to let AI systems improve themselves autonomously [1][2].

Group 1: AI Self-Evolution Frameworks
- Notable recent frameworks for AI self-improvement include the "Darwin-Gödel Machine" (DGM), "Self-Reinforcement Training" (SRT), "MM-UPT" for multimodal large models, and "UI-Genie" [1].
- OpenAI CEO Sam Altman envisions a future in which humanoid robots autonomously manufacture more robots and the essential infrastructure around them, a significant leap in AI capabilities [1].
- A recent MIT paper, "Self-Adapting Language Models," introduces SEAL (Self-Adapting LLMs), a framework that lets a language model update its own weights using training data it generates itself [2][4].

Group 2: SEAL Methodology
- SEAL trains a self-editing mechanism with reinforcement learning: the model generates its own training data ("self-edits"), updates its weights on that data, and is rewarded when the update improves downstream performance [10][12].
- The framework consists of two nested loops: an outer reinforcement learning loop that optimizes how self-edits are generated, and an inner update loop that applies them to the model's parameters (see the code sketch after this summary) [13][15].
- Concretely, training alternates between generating self-edits and running supervised fine-tuning on them to update the parameters, improving the model's adaptability to new tasks [18][19].

Group 3: Experimental Results
- In few-shot learning experiments, SEAL reached a 72.5% success rate, far outperforming baselines that achieved 0% and 20% [34][36].
- On knowledge integration tasks, SEAL improved accuracy to 47.0% in the single-passage setting and 43.8% in continued pretraining, surpassing the other training methods compared [38][40].
- These results indicate that SEAL's reinforcement learning approach produces more effective self-edits and stronger overall model performance [43].
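To make the two-loop structure concrete, below is a minimal Python sketch of the nested training scheme described in Group 2. Everything here is a hypothetical stand-in: `generate_self_edit`, `finetune`, and `evaluate` are stub functions, the dictionary-based `model` replaces real weights, and the accept-if-improved filter is a simplification of the paper's reinforcement learning objective, not MIT's actual implementation.

```python
# Minimal sketch of SEAL's two nested loops (hypothetical stubs, not the paper's code).
import copy
import random

random.seed(0)

def generate_self_edit(model, context):
    """Hypothetical: the model writes its own training data (a 'self-edit')
    for the given context, e.g. restatements or augmented examples."""
    return [f"{context} :: variant {i}" for i in range(3)]

def finetune(model, self_edit):
    """Hypothetical inner update loop: supervised fine-tuning on the
    self-edit; returns a model with updated weights."""
    updated = copy.deepcopy(model)
    updated["steps"] += len(self_edit)  # stand-in for gradient updates
    return updated

def evaluate(model, task):
    """Hypothetical downstream evaluation; in the paper this is accuracy on
    held-out questions about the new task or passage."""
    return random.random() * 0.1 + 0.01 * model["steps"]

model = {"steps": 0}
task = "new passage to integrate"
baseline = evaluate(model, task)

# Outer RL loop: propose a self-edit, apply the inner weight update, and
# keep the update only if downstream performance improves (a simplified
# stand-in for the paper's RL objective over self-edit generation).
for step in range(5):
    self_edit = generate_self_edit(model, task)   # model proposes its own data
    candidate = finetune(model, self_edit)        # inner loop: SFT weight update
    score = evaluate(candidate, task)
    if score > baseline:                          # reward signal: did it help?
        model, baseline = candidate, score

print(f"score after self-editing: {baseline:.3f}")
```

The design point the sketch preserves is that the reward is measured only after the inner weight update, so the outer loop selects for self-edits that actually make fine-tuning effective, rather than for text that merely looks plausible.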