Artificial Superintelligence (ASI)

A 10,000-word deep dive! The first survey of self-evolving agents: the road to artificial superintelligence
自动驾驶之心· 2025-07-31 23:33
Core Insights
- The article discusses the transition from static large language models (LLMs) to self-evolving agents that can adapt and learn continuously from interactions with their environment, with artificial superintelligence (ASI) as the long-term goal [3][5][52]
- It frames self-evolving agents around three fundamental questions — what to evolve, when to evolve, and how to evolve — providing a structured framework for understanding and designing these systems [6][52]

Group 1: What to Evolve
- Self-evolving agents can improve various components such as models, memory, tools, and workflows to enhance performance and adaptability [14][22]
- The evolution of agents is categorized into four pillars: cognitive core (model), context (instructions and memory), external capabilities (tool creation), and system architecture [22][24]

Group 2: When to Evolve
- Self-evolution occurs in two main time modes: intra-test-time self-evolution, which happens during task execution, and inter-test-time self-evolution, which occurs between tasks (see the sketch after this summary) [26][27]
- The article outlines three basic learning paradigms relevant to self-evolution: in-context learning (ICL), supervised fine-tuning (SFT), and reinforcement learning (RL) [27][28]

Group 3: How to Evolve
- The article discusses various methods for self-evolution, including reward-based evolution, imitation and demonstration learning, and population-based approaches [32][36]
- It highlights the importance of continuous learning from real-world interactions, seeking feedback, and adjusting strategies in dynamic environments [30][32]

Group 4: Evaluation of Self-evolving Agents
- Evaluating self-evolving agents presents unique challenges, requiring assessments that capture adaptability, knowledge retention, and long-term generalization [40]
- The article calls for dynamic evaluation methods that reflect the ongoing evolution and distinct contributions of individual agents in multi-agent systems [40][51]

Group 5: Future Directions
- Deploying personalized self-evolving agents is identified as a critical goal, focusing on accurately capturing user behavior and preferences over time [43]
- Open challenges include ensuring that self-evolving agents do not reinforce existing biases and developing adaptive evaluation metrics that reflect their dynamic nature [44][45]
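To make the survey's what/when/how framing concrete, here is a minimal Python sketch of my own (illustrative only, not code from the article): the retry loop inside `solve` stands in for intra-test-time refinement, and the memory update after each task stands in for inter-test-time evolution. The class, method names, and memory format are hypothetical placeholders.

```python
# A minimal, self-contained sketch (illustrative only, not from the survey).
# It contrasts the two evolution timings the summary describes:
#   - intra-test-time: the agent refines its answer while working on one task
#   - inter-test-time: the agent updates its memory between tasks
# All names (SelfEvolvingAgent, critique, the memory format) are hypothetical.

from dataclasses import dataclass, field
from typing import List


@dataclass
class SelfEvolvingAgent:
    memory: List[str] = field(default_factory=list)  # lessons carried across tasks

    def model(self, prompt: str) -> str:
        # Stand-in for an LLM call; a real agent would query a model here.
        return f"answer[{len(prompt)} chars, {len(self.memory)} lessons]"

    def critique(self, answer: str) -> bool:
        # Stand-in self-evaluation; real agents might run unit tests or a reward model.
        return len(answer) > 10

    def solve(self, task: str, max_rounds: int = 3) -> str:
        prompt = task + "".join("\nhint: " + m for m in self.memory)
        answer = self.model(prompt)
        # Intra-test-time self-evolution: iterative self-refinement within the task.
        for _ in range(max_rounds):
            if self.critique(answer):
                break
            answer = self.model(prompt + "\nrevise: " + answer)
        # Inter-test-time self-evolution: distill the episode into reusable memory.
        self.memory.append(f"lesson from task {task!r}")
        return answer


agent = SelfEvolvingAgent()
for task in ["plan a warehouse route", "summarize a failure log"]:
    print(agent.solve(task))
```

A real system would replace the stubs with an actual model call, a learned or rule-based critic, and one of the update mechanisms the survey catalogs (reward-based, imitation-based, or population-based).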
OpenAI counter-poaches four senior engineers from Tesla, xAI, and Meta, targeting Stargate
机器之心· 2025-07-09 04:23
Core Viewpoint
- The article discusses the intense competition for AI talent between major companies such as OpenAI and Meta, highlighting recent talent acquisitions and their implications for the industry [1][2][8]

Group 1: Talent Acquisition
- OpenAI has recently hired four prominent engineers from competitors, including David Lau, former vice president of software engineering at Tesla, along with engineers from xAI and Meta [3][5][6]
- Meta has aggressively recruited at least seven employees from OpenAI, offering high salaries and substantial computational resources to support their research [8][18]
- The competition for talent has escalated, with OpenAI Chief Research Officer Mark Chen expressing a strong commitment to countering Meta's recruitment efforts [19]

Group 2: Strategic Initiatives
- OpenAI's scaling team, which the new hires are joining, is focused on building AI infrastructure, including a major joint project named "Stargate" aimed at developing a supercomputer with a projected cost of $115 billion [7]
- The new hires emphasize the role of infrastructure in bridging research and practical applications, with Uday Ruddarraju describing Stargate as a "moonshot" project [7][8]
- The competition has prompted OpenAI to reconsider its compensation strategy in order to retain top talent amid Meta's aggressive recruitment [8]

Group 3: Industry Context
- Competition for AI talent has surged since the launch of ChatGPT in late 2022, with companies re-evaluating their hiring practices to secure leading researchers [13][15]
- Discussions around achieving artificial superintelligence (ASI) have become more prevalent, indicating a shift in focus toward more ambitious technological goals [14]
- The article notes that scaling is central to AI development, as training with more data and computational power reliably improves model performance (see the note after this summary) [16][17]
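As background for the last point (my addition, not from the article): empirical scaling studies such as Hoffmann et al.'s Chinchilla work model test loss as a power law in parameter count N and training tokens D, which is why adding data and compute predictably improves performance:

$$
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}, \qquad C \approx 6ND,
$$

where E is the irreducible loss, A, B, α, β are fitted constants, and C is the training compute in FLOPs; for a fixed compute budget, loss is minimized by growing N and D roughly in proportion.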