Artificial Superintelligence (ASI)
From Natural Selection to Intelligent Evolution: The First Survey of Self-Evolving Agents and the Road to ASI
机器之心· 2025-08-12 09:51
Core Insights
- The article examines the limitations of static large language models (LLMs) and introduces self-evolving agents as a new paradigm in artificial intelligence [2]
- Researchers from Princeton University and other top institutions have published a comprehensive survey that establishes a unified theoretical framework for self-evolving agents, aiming to pave the way toward artificial general intelligence (AGI) and artificial superintelligence (ASI) [2][32]

Definition and Framework
- The survey gives a formal definition of self-evolving agents, laying a mathematical foundation for research and discussion in the field [5]
- It builds a complete framework for analyzing and designing self-evolving agents along four dimensions: What, When, How, and Where [8]

What to Evolve?
- Four core pillars of self-improvement within the agent system are identified: models, context, tools, and architecture (illustrated in the sketch after this summary) [11]
- For models, evolution can occur at two levels: optimizing decision policies and accumulating experience through interaction with the environment [13]
- Context evolution covers dynamic memory management and automated prompt optimization [13]
- Tool evolution covers creating new tools, mastering existing tools, and managing tool selection efficiently [13]
- Architecture evolution can target both single-agent and multi-agent systems to optimize workflows and collaboration [14]

When to Evolve?
- Evolution timing determines the relationship between learning and task execution and falls into two main modes: intra-test-time and inter-test-time self-evolution [17]
- Intra-test-time self-evolution occurs during task execution, allowing agents to adapt in real time [20]
- Inter-test-time self-evolution happens after task completion, with agents iterating on their capabilities based on accumulated experience [20]

How to Evolve?
- Evolution can be driven by several methodologies, including reward-based evolution, imitation learning, and population-based methods [21][22]

Where to Evolve?
- Self-evolving agents can evolve in general domains to broaden versatility, or specialize in domains such as coding, GUI interaction, finance, medical applications, and education [25]

Evaluation and Future Directions
- The survey stresses the need for dynamic evaluation metrics for self-evolving agents, focusing on adaptability, knowledge retention, generalization, efficiency, and safety [28]
- Future challenges include developing personalized AI agents, improving generalization and cross-domain adaptability, ensuring safety and controllability, and exploring multi-agent ecosystems [32]
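To make the pillars and timing modes above concrete, here is a minimal Python sketch, not taken from the survey: every class, function, and parameter name is hypothetical. It shows an agent whose model parameters, context, tools, and architecture are explicit fields, with one adaptation step during a task (intra-test-time) and one between tasks (inter-test-time).

```python
# Illustrative sketch (not from the survey) of the four "what to evolve"
# pillars -- model, context, tools, architecture -- and the two evolution
# timings (intra- vs. inter-test-time). All names here are hypothetical.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class AgentState:
    model_params: Dict[str, float] = field(default_factory=lambda: {"temperature": 0.7})
    context: List[str] = field(default_factory=list)          # memory + prompt snippets
    tools: Dict[str, Callable[[str], str]] = field(default_factory=dict)
    architecture: str = "single-agent"                        # could become "multi-agent"


def act(state: AgentState, task: str) -> str:
    """Produce an answer using whatever tools and context are available."""
    tool_out = [tool(task) for tool in state.tools.values()]
    return f"answer({task}) using {len(state.context)} memories, {tool_out}"


def intra_test_time_evolve(state: AgentState, feedback: str) -> None:
    """Adapt *during* a task, e.g. by folding feedback into working context."""
    state.context.append(f"feedback: {feedback}")


def inter_test_time_evolve(state: AgentState, reward: float) -> None:
    """Adapt *between* tasks, e.g. by nudging a decision parameter on reward."""
    state.model_params["temperature"] *= 0.9 if reward < 0.5 else 1.0


if __name__ == "__main__":
    agent = AgentState(tools={"search": lambda q: f"docs-for:{q}"})
    print(act(agent, "summarize the survey"))
    intra_test_time_evolve(agent, "answer too vague, cite sections")
    print(act(agent, "summarize the survey"))
    inter_test_time_evolve(agent, reward=0.3)
    print(agent.model_params)
```

In a real system the intra-test-time step would refine working memory or plans mid-task, while the inter-test-time step would fine-tune or re-prompt the model from accumulated rewards; this sketch only fixes the vocabulary.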
A Conversation with Kevin Kelly: Don't Worry Too Much; Once AI Grows Stronger, Humans Need Only Focus on "Play"
36Kr· 2025-08-01 10:55
Group 1
- The article presents Kevin Kelly's vision of a future shaped by AI, particularly as laid out in his new book "2049: The Possible 10,000 Days" [1][2]
- Kelly emphasizes the notion of "alien intelligence" rather than AGI, suggesting that AI will coexist with humans as a different form of intelligence rather than a superior one [6][7][10]
- He introduces the idea of a "mirror world", a virtual dimension layered over reality that allows humans and AI to interact and collaborate [16][17][18]

Group 2
- Kelly is skeptical that current AI models will achieve AGI, arguing that intelligence is a complex compound requiring more than just scaling existing models [4][5][13]
- He believes network effects will leave the AI space with a few dominant companies, leading to a natural monopoly [26][27]
- The article highlights AI's potential to enhance human creativity and collaboration, suggesting that human value will increase as AI takes over routine tasks [29][35][36]

Group 3
- The discussion stresses the importance of learning how to learn in an AI-driven future, emphasizing lifelong learning and adaptability [41][42]
- Kelly argues that human interaction will remain valuable even in a world with advanced AI, as the essence of human presence becomes a rare commodity [35][38]
- The article raises concerns about privacy and trust in AI systems, suggesting that people may willingly trade privacy for personalized services [38][39]
A 10,000-Word Deep Dive: The First Survey on Agent Self-Evolution and the Road to Artificial Superintelligence
自动驾驶之心· 2025-07-31 23:33
Core Insights
- The article traces the transition from static large language models (LLMs) to self-evolving agents that adapt and learn continuously from interactions with their environment, with artificial superintelligence (ASI) as the long-term aim [3][5][52]
- It centers on three fundamental questions about self-evolving agents: what to evolve, when to evolve, and how to evolve, providing a structured framework for understanding and designing these systems [6][52]

Group 1: What to Evolve
- Self-evolving agents can improve components such as models, memory, tools, and workflows to enhance performance and adaptability [14][22]
- Agent evolution is organized around four pillars: the cognitive core (model), context (instructions and memory), external capabilities (tool creation), and system architecture [22][24]

Group 2: When to Evolve
- Self-evolution occurs in two main time modes: intra-test-time self-evolution, which happens during task execution, and inter-test-time self-evolution, which occurs between tasks [26][27]
- The article outlines three basic learning paradigms relevant to self-evolution: in-context learning (ICL), supervised fine-tuning (SFT), and reinforcement learning (RL) [27][28]

Group 3: How to Evolve
- The article surveys methods for self-evolution, including reward-based evolution, imitation and demonstration learning, and population-based approaches (see the sketch after this summary) [32][36]
- It highlights continuous learning from real-world interactions, actively seeking feedback, and adjusting strategies in dynamic environments [30][32]

Group 4: Evaluation of Self-evolving Agents
- Evaluating self-evolving agents presents unique challenges, requiring assessments that capture adaptability, knowledge retention, and long-term generalization [40]
- The article calls for dynamic evaluation methods that reflect ongoing evolution and the diverse contributions of agents in multi-agent systems [51][40]

Group 5: Future Directions
- Deploying personalized self-evolving agents is identified as a critical goal, with a focus on accurately capturing user behavior and preferences over time [43]
- Open challenges include preventing self-evolving agents from reinforcing existing biases and developing adaptive evaluation metrics that reflect their dynamic nature [44][45]
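As one concrete illustration of the reward-based, inter-test-time style of evolution named above, the following hypothetical Python sketch banks scored trajectories after each task and reuses only the best ones as few-shot exemplars for future prompts. The class, reward values, and task strings are invented for illustration and are not drawn from the survey.

```python
# Hypothetical sketch of reward-based, inter-test-time self-evolution:
# after each task the agent stores (reward, task, trajectory) and keeps only
# the highest-reward experiences as exemplars for future prompts.
# Names and the scoring scheme are illustrative, not from the survey.
import heapq
from typing import List, Tuple

Experience = Tuple[float, str, str]  # (reward, task, trajectory)


class ExperienceBank:
    def __init__(self, capacity: int = 3) -> None:
        self.capacity = capacity
        self._bank: List[Experience] = []   # min-heap keyed on reward

    def add(self, reward: float, task: str, trajectory: str) -> None:
        heapq.heappush(self._bank, (reward, task, trajectory))
        if len(self._bank) > self.capacity:
            heapq.heappop(self._bank)       # drop the lowest-reward experience

    def as_exemplars(self) -> str:
        """Render the retained experiences as few-shot context for the next task."""
        lines = [f"Task: {t}\nTrajectory: {tr}\n(reward={r:.2f})"
                 for r, t, tr in sorted(self._bank, reverse=True)]
        return "\n\n".join(lines)


if __name__ == "__main__":
    bank = ExperienceBank(capacity=2)
    bank.add(0.4, "fix failing unit test", "ran tests -> patched off-by-one")
    bank.add(0.9, "write SQL query", "inspected schema -> joined on user_id")
    bank.add(0.7, "refactor module", "extracted helper -> re-ran linter")
    print(bank.as_exemplars())   # only the two best experiences survive
```

An imitation- or population-based variant would, roughly, replace the reward heap with expert demonstrations or with a pool of competing agent configurations; the surveyed methods themselves are considerably richer than this toy loop.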
OpenAI Counter-Poaches Four Senior Engineers from Tesla, xAI, and Meta, Targeting Stargate
机器之心· 2025-07-09 04:23
Core Viewpoint
- The article covers the intense competition for AI talent between major companies such as OpenAI and Meta, highlighting recent talent acquisitions and their implications for the industry [1][2][8].

Group 1: Talent Acquisition
- OpenAI has recently hired four prominent engineers from competitors, including David Lau, former VP of software engineering at Tesla, along with others from xAI and Meta [3][5][6].
- Meta has aggressively recruited at least seven employees from OpenAI, offering high salaries and substantial computational resources to support their research [8][18].
- The competition for talent has escalated, with OpenAI's Chief Research Officer Mark Chen expressing a strong commitment to countering Meta's recruitment efforts [19].

Group 2: Strategic Initiatives
- OpenAI's expansion team, which includes the new hires, focuses on building AI infrastructure, including a major joint project named "Stargate" aimed at developing a supercomputer with a projected cost of $115 billion [7].
- The new hires emphasize the role of infrastructure in bridging research and practical applications, with Uday Ruddarraju describing Stargate as a "moonshot" project [7][8].
- The competition has prompted OpenAI to reconsider its compensation strategies in order to retain top talent amid Meta's aggressive recruitment [8].

Group 3: Industry Context
- The AI industry has seen a surge in talent competition since the launch of ChatGPT in late 2022, with companies re-evaluating their hiring practices to secure leading researchers [13][15].
- Discussions around achieving "Artificial Superintelligence (ASI)" have become more prevalent, signaling a shift in focus toward breakthrough technological advances [14].
- The article notes that scaling is crucial to AI development, as using more data and computational power improves model performance [16][17].