Science Robotics blockbuster: with just 2 hours of training, robots' flexible-assembly skills approach top human level
机器人大讲堂· 2025-09-06 11:43
Core Insights
- The article discusses the challenges in robotic manipulation and introduces a new system called HIL-SERL that significantly improves the efficiency and effectiveness of robotic training in real-world scenarios [1][2].

Traditional Methods Challenges
- Traditional robotic control methods require extensive engineering design or imitation learning, which often lack adaptability and struggle in new environments [1].
- These methods fail to achieve human-level proficiency and speed, leading to inefficiencies in real-world applications [1].

HIL-SERL System Overview
- The HIL-SERL system, developed by a research team at UC Berkeley, allows robots to learn complex tasks with only 1 to 2.5 hours of real-world training, achieving near-perfect success rates and surpassing human execution speeds [2][3].
- The system combines human guidance with autonomous exploration, creating an efficient and safe learning loop [3].

System Architecture
- HIL-SERL consists of three core components: an executor process, a learner process, and a replay buffer integrated within the learner [4].
- It employs off-policy reinforcement learning to optimize the behavior policy from historical data, allowing robots to learn from human demonstrations and assess how much each action contributes to achieving the goal [4].

Performance in Multi-Task Scenarios
- The system was tested on challenging tasks such as precision assembly, dual-arm coordination, and dynamic manipulation, demonstrating its versatility [5][8].
- In precision assembly tasks, robots achieved sub-millimeter accuracy, while in dual-arm coordination tasks they effectively managed complex operations requiring synchronized movements [8].

Results and Adaptability
- After 1 to 2.5 hours of training, robots achieved nearly 100% success rates and executed tasks 1.8 times faster than traditional imitation learning methods, whose average success rate was 49.7% [9].
- The robots exhibited remarkable adaptability, successfully adjusting to unexpected situations such as misalignments or disturbances, showcasing their ability to learn from real-time feedback [12].

Learning Mechanism
- HIL-SERL's adaptability stems from its ability to evolve different control strategies based on task requirements, allowing for real-time adjustments and corrections [13][16].
- For high-precision tasks, the system employs a closed-loop reactive strategy, while for dynamic tasks it uses an open-loop predictive strategy, reflecting high confidence in executing planned actions [13].

Conclusion
- The research highlights the potential of HIL-SERL to overcome traditional reinforcement learning limitations, enabling efficient learning of complex skills in real-world environments [14].
- This advancement opens new avenues for industrial applications, particularly in flexible manufacturing sectors requiring small-batch production [14].
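To make the executor/learner/replay-buffer split and the human-in-the-loop idea concrete, here is a minimal single-process sketch in Python. It is not the HIL-SERL implementation: the toy environment, the proportional-controller policy, the `human_override` hook, and the empty `learner_update` are illustrative stand-ins for the paper's real robot, learned policy, teleoperation interface, and off-policy update.

```python
# Minimal human-in-the-loop off-policy RL sketch (illustrative only; not HIL-SERL code).
# A toy 1-D "reach the target" task stands in for the robot.
import random
from collections import deque

class ToyReachEnv:
    """State is the signed distance to a target; actions nudge the gripper."""
    def __init__(self):
        self.pos, self.target = 0.0, 1.0
    def reset(self):
        self.pos = 0.0
        return self.target - self.pos
    def step(self, action):
        self.pos += action
        err = abs(self.target - self.pos)
        done = err < 0.05
        reward = 1.0 if done else -err          # sparse success bonus plus shaped penalty
        return self.target - self.pos, reward, done

def policy(state):
    """Placeholder for the learned policy: a noisy proportional controller."""
    return 0.3 * state + random.gauss(0, 0.05)

def human_override(state):
    """Stand-in for the human teleoperator, who occasionally takes over."""
    if random.random() < 0.1:                   # intervene on roughly 10% of steps
        return 0.5 * state                      # corrective command
    return None

def learner_update(batch):
    """Placeholder for an off-policy update (e.g. a soft actor-critic step)."""
    pass                                        # gradient computation would go here

buffer = deque(maxlen=10_000)                   # replay buffer shared with the learner
env = ToyReachEnv()

for episode in range(50):                       # "executor": collects real-world experience
    state = env.reset()
    for _ in range(200):                        # cap episode length
        human_action = human_override(state)
        action = human_action if human_action is not None else policy(state)
        next_state, reward, done = env.step(action)
        # Interventions are stored and flagged so the learner can weight human
        # corrections differently from autonomous actions.
        buffer.append((state, action, reward, next_state, done, human_action is not None))
        state = next_state
        if len(buffer) >= 64:                   # "learner": off-policy updates from the buffer
            learner_update(random.sample(list(buffer), 64))
        if done:
            break
print(f"collected {len(buffer)} transitions")
```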
To overtake in the current "version", what kind of "Environment" does an Agent need?
机器之心· 2025-09-06 07:00
Core Viewpoint
- The article discusses the recent transformation of AI startup you.com from a search engine to an AI infrastructure company following a $100 million Series C funding round. This shift aligns with the "product-driven infrastructure" strategy and reflects a broader trend of commercializing Agentic AI from laboratory settings [1].

Group 1: Agent Environment and Its Evolution
- The focus of artificial intelligence is shifting from content creation to goal-driven, autonomous Agentic AI, driven by rapid advancements in the field [4].
- AI agents are expected to become the new interface for human-computer interaction, allowing users to issue commands in natural language without needing to write code [5].
- Companies like Cursor, Bolt, and Mercor have achieved significant revenue growth by leveraging distinctive intelligent agent products [6].

Group 2: Development of Agent Environment
- The development of a suitable "Agent Environment" is crucial for modern intelligent applications, balancing the need for freedom in code execution with security and isolation [7].
- Companies like E2B and Modal Labs are providing secure, isolated cloud environments (sandboxes) for running AI-generated code [7].
- The concept of the Agent Environment can be traced back to reinforcement learning, where it serves as a simulated space for training agents through trial and error [8].

Group 3: Real-World Application and Safety
- As LLM-based agents advance, the requirements on their environments are evolving from training spaces to operational zones, necessitating safe access to real-world tools [9].
- Different types of agents require distinct environments, such as physical environments for robots and digital environments for virtual assistants [10].
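Services such as E2B and Modal Labs provide full cloud sandboxes; the snippet below is only a rough local approximation of the same idea using nothing but the Python standard library. The `run_in_sandbox` helper is an invented name, and a subprocess with a timeout is far weaker isolation than the container-level sandboxing these providers offer.

```python
# A minimal local stand-in for an "agent sandbox": run untrusted, model-generated
# Python in a separate, isolated interpreter with a hard timeout. Real services
# add containers, filesystem/network isolation, and resource quotas on top.
import subprocess, sys, tempfile, textwrap

def run_in_sandbox(generated_code: str, timeout_s: float = 5.0) -> dict:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(generated_code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, "-I", path],   # -I: isolated mode, ignores user env and site dirs
            capture_output=True, text=True, timeout=timeout_s,
        )
        return {"stdout": proc.stdout, "stderr": proc.stderr, "exit": proc.returncode}
    except subprocess.TimeoutExpired:
        return {"stdout": "", "stderr": "timed out", "exit": -1}

if __name__ == "__main__":
    snippet = textwrap.dedent("""
        total = sum(i * i for i in range(10))
        print(total)
    """)
    print(run_in_sandbox(snippet))
```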
In depth | OpenAI co-founder: GPT-5's breakthrough is that intelligence has begun to touch genuinely deep cognitive domains; the ideal is to default to our automatic selection rather than manual configuration
Z Potentials· 2025-09-06 04:40
Core Insights
- OpenAI has released GPT-5 and GPT-OSS, marking significant advancements in AI technology and accessibility [4][3].
- GPT-5 is the first hybrid model, designed to enhance user experience by automatically selecting model architectures [5][6].
- The evolution of OpenAI's reasoning capabilities has transitioned from simple next-token prediction to more complex reasoning paradigms [9][10].

Group 1: OpenAI's Technological Advancements
- The release of GPT-5 and GPT-OSS has seen millions of downloads within days, showcasing the demand for these technologies [4].
- GPT-5's breakthrough lies in its ability to engage in deep cognitive tasks, surpassing the limitations of its predecessor, GPT-4 [24][25].
- The model's training has shifted from a one-time training approach to a more iterative reasoning-training cycle, enhancing its learning efficiency [9][10].

Group 2: Learning Mechanisms and Challenges
- OpenAI emphasizes the importance of real-world experience for models to develop generalization capabilities, highlighting the limitations of purely theoretical training [6][15].
- The company is exploring the potential of real-time online learning, aiming to allow models to adapt continuously during operation [10][11].
- Current bottlenecks in AI development are primarily related to computational power, which is essential for enhancing model capabilities [11][12].

Group 3: Future Directions and Applications
- OpenAI is focused on creating models that can assist in complex problem-solving, with applications in various fields, including mathematics and biology [25][22].
- The company aims to improve the integration of AI into real-world applications, ensuring that models can handle the complexities of diverse environments [27][30].
- OpenAI's vision includes making AI technology accessible to a broader audience, with plans for aggressive pricing strategies to enhance adoption [39][40].
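OpenAI has not published how GPT-5's automatic model selection works. The sketch below only illustrates the general routing pattern implied by a "hybrid model": the model names and the length/keyword heuristic are hypothetical placeholders, not OpenAI's logic.

```python
# Hypothetical illustration of a "hybrid model" router. Model names and the
# heuristic are invented; GPT-5's actual selection mechanism is not public.
REASONING_HINTS = ("prove", "derive", "step by step", "debug", "optimize", "why")

def route(prompt: str) -> str:
    """Pick a backend: a fast responder for simple queries, a slower reasoner otherwise."""
    long_prompt = len(prompt.split()) > 120
    needs_reasoning = any(hint in prompt.lower() for hint in REASONING_HINTS)
    return "deep-reasoning-model" if (long_prompt or needs_reasoning) else "fast-chat-model"

print(route("What's the capital of France?"))                                   # fast-chat-model
print(route("Prove that the sum of two even numbers is even, step by step."))  # deep-reasoning-model
```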
After a month of silence, openPangu's performance jumps 8%! Huawei's 1B open-source model has arrived
机器之心· 2025-09-05 04:31
Core Viewpoint
- Huawei's Pangu Embedded-1B model represents a significant advancement in edge AI, enabling powerful AI capabilities on resource-constrained devices and paving the way for intelligent upgrades across industries [1][5].

Group 1: Model Performance and Efficiency
- The openPangu Embedded-1B model, with 1 billion parameters, sets a new state-of-the-art (SOTA) record in performance and efficiency, demonstrating that smaller models can deliver substantial capabilities [2][3].
- The model's overall average score reached 63.90, surpassing similar models and matching larger models like Qwen3-1.7B, showcasing its parameter efficiency [3][4].
- In mathematical reasoning, the model scored 82.76% on the GSM8K benchmark and 81.83% on the MATH dataset, significantly outperforming its peers [3][4].

Group 2: Technical Innovations
- The model employs software-hardware co-design, optimizing its architecture to match the characteristics of Ascend hardware and ensure efficient resource utilization [9][10].
- A two-stage curriculum learning approach is used to strengthen the model's reasoning capabilities, mimicking a human-like learning progression [15][16].
- The introduction of offline on-policy knowledge distillation allows for a more flexible and effective training process, improving the model's accuracy and generalization [18][19].

Group 3: Reinforcement Learning and Future Directions
- The model incorporates a multi-source reward reinforcement learning mechanism, enhancing its performance through targeted feedback based on task complexity [22][25].
- Future work aims to integrate fast and slow thinking within a single model, allowing responses to adapt to problem difficulty and improving both speed and accuracy [29][30].
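The report does not include code, so as a hedged illustration of the distillation ingredient mentioned above (a student matching a teacher's softened output distribution), here is the standard temperature-scaled KL distillation loss in PyTorch. It is a generic textbook formulation, not Huawei's offline on-policy distillation pipeline.

```python
# Generic knowledge-distillation loss (standard technique, not openPangu's code):
# the student is trained to match the teacher's temperature-softened outputs.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """KL divergence between teacher and student distributions, softened by temperature."""
    t = temperature
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    # The t*t factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * (t * t)

# Toy usage: a batch of 4 examples over a vocabulary of 10 tokens.
student = torch.randn(4, 10, requires_grad=True)
teacher = torch.randn(4, 10)
loss = distillation_loss(student, teacher)
loss.backward()
print(float(loss))
```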
A view of the technical development path of embodied intelligence, drawn from nearly 1,000 papers!
具身智能之心· 2025-09-05 00:45
Core Insights
- The article discusses the evolution and challenges of embodied intelligence, emphasizing the need for a comprehensive understanding of its development, the issues it faces, and its future directions [3][4].

Group 1: Robotic Manipulation
- The survey on robotic manipulation highlights the transition from mechanical programming to embodied intelligence, focusing on the evolution from simple grippers to dexterous multi-fingered hands [5][6].
- Key challenges in dexterous manipulation include data collection methods such as simulation, human demonstration, and teleoperation, as well as skill-learning frameworks like imitation learning and reinforcement learning [5][6].

Group 2: Navigation and Manipulation
- The discussion on robotic navigation emphasizes the role of physics simulators in mitigating the high cost and scarcity of real-world training data, with a focus on Sim-to-Real transfer challenges [9][15].
- The evolution of navigation techniques is outlined as a transition from explicit memory to implicit memory, and the role of various simulators in narrowing the Sim-to-Real gap is analyzed [15][16].

Group 3: Multimodal Large Models
- The exploration of embodied multimodal large models (EMLMs) reveals their potential to bridge gaps between perception, cognition, and action, driven by advancements in large-model technology [17][19].
- Challenges identified include cross-modal alignment difficulties, high computational resource demands, and weak domain generalization [19].

Group 4: Teleoperation and Data Collection
- The survey on teleoperation of humanoid robots discusses the integration of human cognition with robotic capabilities, particularly in hazardous environments, while addressing challenges such as high degrees of freedom and communication limitations [29][30].
- Key components of teleoperation systems include human state measurement, motion retargeting, and multimodal feedback mechanisms [30][33].

Group 5: Vision-Language-Action Models
- The analysis of Vision-Language-Action (VLA) models covers their evolution from cross-modal learning architectures to the integration of visual language models and action planners [33][36].
- The article identifies core challenges in real-time control, multimodal action representation, and system scalability, and proposes future directions for adaptive AI and cross-entity generalization [36][41].
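As a small illustration of the Sim-to-Real theme in Group 2, the sketch below shows domain randomization, a common ingredient for narrowing the Sim-to-Real gap: simulator parameters are resampled every episode so the policy cannot overfit to one configuration. The parameter ranges and the placeholder `run_episode` are assumptions for illustration, not taken from any of the surveyed systems.

```python
# Illustrative domain-randomization loop (generic Sim-to-Real ingredient, not a
# specific surveyed system): physics and sensing parameters change each episode.
import random

def sample_sim_params():
    return {
        "friction": random.uniform(0.4, 1.2),
        "object_mass_kg": random.uniform(0.05, 0.5),
        "latency_ms": random.uniform(0.0, 40.0),
        "camera_jitter_px": random.uniform(0.0, 3.0),
    }

def run_episode(params):
    """Placeholder: would configure the simulator with `params` and roll out the policy."""
    return {"success": random.random() < 0.5, "params": params}

results = [run_episode(sample_sim_params()) for _ in range(5)]
for r in results:
    print(r["success"], {k: round(v, 2) for k, v in r["params"].items()})
```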
GPT-5 criticized as overhyped and behind on performance; an OpenAI co-founder reveals why: we kept it in an "ivory tower" with too little contact with the real world
AI前线· 2025-09-04 06:30
Core Insights
- OpenAI is shifting its focus from consumer markets to enterprise markets with the launch of GPT-5, despite initial setbacks in its release [2][5].
- GPT-5 has received positive feedback from enterprise users, indicating its potential in the corporate sector [5][24].
- The pricing strategy for GPT-5 is competitive, with significant cost reductions over time making it more accessible for businesses [34][35].

Summary by Sections

OpenAI's Market Shift
- Sam Altman aims to capitalize on the enterprise market with GPT-5, moving beyond the consumer-focused ChatGPT [2].
- Initial criticism of GPT-5 led to a temporary rollback to GPT-4 for paid users, but the model is designed for enterprise applications [2][5].

Enterprise Adoption
- Companies like Cursor, Vercel, and Factory have adopted GPT-5 as their default model, citing improvements in speed, performance, and cost [2][3].
- Box's CEO described GPT-5 as a breakthrough in reasoning capabilities, surpassing previous systems [3].
- JetBrains has integrated GPT-5 into its AI Assistant, highlighting its efficiency in generating tools quickly [3][4].

Technical Developments
- OpenAI's Greg Brockman discussed the evolution of reasoning in AI models, emphasizing the importance of reinforcement learning for reliability [8][10].
- The transition from offline to online learning is noted as a significant shift in AI training methodology [10][12].

Cost Efficiency
- OpenAI has achieved a 1000-fold reduction in model costs over two and a half years, enhancing accessibility for users [34][35].
- The company continues to focus on improving computational efficiency and model architecture to further reduce costs [35].

Future Directions
- The potential for GPT-5 to serve as a collaborative partner in research and development is highlighted, with implications for various fields including mathematics and biology [22][21].
- OpenAI is exploring the integration of AI models into real-world applications, aiming to enhance productivity and problem-solving capabilities [24][40].
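The interview only describes the offline-to-online shift at a high level. As a hedged toy illustration of the distinction, the snippet below contrasts a one-shot batch fit with incremental updates on a stream of incoming examples; it is a deliberately simple linear-regression example and does not represent OpenAI's training pipeline.

```python
# Toy contrast between offline (one-shot batch) and online (streaming) learning.
# Purely illustrative; unrelated to OpenAI's actual methodology.
import random

def make_point():
    x = random.uniform(-1, 1)
    return x, 3.0 * x + random.gauss(0, 0.05)       # ground-truth slope is 3

# Offline: fit once on a fixed dataset, then the model is frozen.
data = [make_point() for _ in range(1000)]
w_offline = sum(x * y for x, y in data) / sum(x * x for x, _ in data)

# Online: keep updating from the live stream, one example at a time.
w_online, lr = 0.0, 0.1
for step in range(1000):
    x, y = make_point()                             # data arrives during operation
    w_online += lr * (y - w_online * x) * x         # single SGD step per example

print(f"offline fit: {w_offline:.2f}, online fit after stream: {w_online:.2f}")
```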
With the spirit of young tech innovation as a bridge, the "South by Southwest" (SXSW) tech and arts festival sends an invitation to the 2025 Bund Conference
Jing Ji Guan Cha Wang· 2025-09-04 04:50
Core Insights
- The 2025 Bund Conference will take place from September 10 to 13 in Shanghai, showcasing cutting-edge technology and cross-disciplinary creativity and attracting global attention [1].
- The conference has received a special video message from SXSW, highlighting the inspiration drawn from the Bund Conference and the vibrant creativity of China's younger generation [1][2].
- The Bund Conference aims to connect advanced technology with everyday life, featuring various interactive exhibits and discussions on significant scientific topics [3][4].

Group 1: Event Overview
- The Bund Conference is recognized as a high-level global fintech and frontier-technology event, with the 2025 theme being "Reshaping Innovation Growth" [4].
- The event will include a main forum, over 40 open insight forums, a global theme day, 18 Creator innovation stages, nearly 10,000 square meters of technology exhibitions, a technology market, and a technology innovation competition [4].

Group 2: Participation and Engagement
- Nearly 20,000 young tech talents from more than 10 countries, including China, the US, the UK, and Australia, have registered to participate in activities such as the Innovators Stage and the AI Innovation Competition [2].
- The conference will feature prominent figures in AI, including Turing Award winner Richard Sutton and other leading young AI innovators [2].

Group 3: Technological Innovations
- The technology exhibition will demonstrate practical applications of advanced medical services, such as health-management tools and interactive robotics [3].
- The conference emphasizes that technology shapes daily life and creative expression, echoing the views expressed by SXSW's Neil Minocha [3].
Four of Apple's AI heavyweights depart, three of them Chinese
36Kr· 2025-09-04 02:13
The high-profile Meta talent raid from a while back makes it easy to forget one thing: AI talent has always been highly mobile, and "being poached with a big salary" has never been the only reason people leave.

Bloomberg's well-known reporter Mark Gurman broke the news that Apple has lost another four AI leaders: Jian Zhang, Apple's chief AI researcher for robotics, and three researchers from Apple's foundation models team, Nan Du, Zhao Meng, and John Peebles.

At least two things stand out. First, the departures are concentrated: three of the four came from the foundation models team. Second, the proportion of Chinese researchers remains high: of the four, everyone except John Peebles is Chinese.

This looks a lot like Meta's usual poaching pattern, but this time Meta actually has little to do with it: of the four, only Jian Zhang went to Meta. Nan Du and John Peebles went to OpenAI, while Zhao Meng joined Anthropic.

Meta poached Apple's robotics AI leader. From joining in 2005 to his departure now, Jian Zhang served Apple for a full decade. His LinkedIn profile shows that by the time he left he was head of robotics research in Apple's Artificial Intelligence and Machine Learning (AIML) division. Unlike Tesla's humanoid-robot project, robotics is a key component of Apple's future product line. According to Bloomberg, Apple has a series of devices ...
Songyan Power: the robot "kid prodigy" that ran out of the Tsinghua campus
Xin Jing Bao· 2025-09-03 02:02
Group 1
- The company, Songyan Power, has achieved significant recognition in the humanoid robotics field, winning multiple awards including runner-up in the first humanoid robot marathon and gold medals in gymnastics and long jump at the World Humanoid Robot Games [1][4].
- Founded by Jiang Zheyuan, who dropped out of Tsinghua University, the company has grown from a small startup into a notable player in the robotics industry, with a focus on developing humanoid robots such as the N2 and E1 [2][3].
- The N2 robot, priced at several tens of thousands of yuan, has gained substantial market traction, with total orders exceeding 2,500 units and contract value surpassing 100 million yuan, positioning Songyan Power as a leading humanoid-robot manufacturer [3][4].

Group 2
- Songyan Power's strategy includes a focus on diverse application scenarios for its robots, targeting sectors such as education, research, cultural tourism, and commercial performances, with plans to expand into overseas markets [5][4].
- The company has developed the "Xiao Nuo" robot, which features over 30 degrees of freedom for facial expressions, aimed at elderly care, exhibition reception, and psychological counseling [5].
- Beijing's supportive policies and initiatives since 2019 have fostered a robust robotics ecosystem, contributing to 50% year-on-year growth in the city's robotics industry revenue in 2024 [5][6].
A new paradigm for robotic manipulation: a systematic survey of VLA models | Jinqiu Select
锦秋集· 2025-09-02 13:41
Core Insights
- The article discusses the emergence of Vision-Language-Action (VLA) models based on large Vision-Language Models (VLMs) as a transformative paradigm in robotic manipulation, addressing the limitations of traditional methods in unstructured environments [1][4][5].
- It highlights the need for a structured classification framework to mitigate research fragmentation in the rapidly evolving VLA field [2].

Group 1: New Paradigm in Robotic Manipulation
- Robotic manipulation is a core challenge at the intersection of robotics and embodied AI, requiring a deep understanding of visual and semantic cues in complex environments [4].
- Traditional methods rely on predefined control strategies, which struggle in unstructured real-world scenarios, revealing limitations in scalability and generalization [4][5].
- The advent of large VLMs has provided a revolutionary approach, enabling robots to interpret high-level human instructions and generalize to unseen objects and scenes [5][10].

Group 2: VLA Model Definition and Classification
- VLA models are defined as systems that use a large VLM to understand visual observations and natural-language instructions, followed by a reasoning process that generates robotic actions [6][7].
- VLA models are categorized into two main types, Monolithic Models and Hierarchical Models, each with distinct architectures and functionality [7][8].

Group 3: Monolithic Models
- Monolithic VLA models can be implemented as single-system or dual-system architectures, integrating perception and action generation into a unified framework [14][15].
- Single-system models process all modalities together, while dual-system models separate reflective reasoning from reactive behavior, enhancing efficiency [15][16].

Group 4: Hierarchical Models
- Hierarchical models consist of a planner and a policy, allowing independent operation and modular design, which enhances flexibility in task execution [43].
- These models can be further divided into Planner-Only and Planner+Policy categories, with the former focusing solely on planning and the latter integrating action execution [43][44].

Group 5: Advancements in VLA Models
- Recent advancements in VLA models include enhanced perception modalities, such as 3D and 4D perception, as well as the integration of tactile and auditory information [22][23][24].
- Efforts to improve reasoning capabilities and generalization are crucial for enabling VLA models to perform complex tasks in diverse environments [25][26].

Group 6: Performance Optimization
- Performance optimization in VLA models focuses on enhancing inference efficiency through architectural adjustments, parameter optimization, and inference-acceleration techniques [28][29][30].
- Dual-system models have emerged to balance deep reasoning with real-time action generation, facilitating smoother deployment in real-world scenarios [35].

Group 7: Future Directions
- Future research directions include the integration of memory mechanisms, 4D perception, efficient adaptation, and multi-agent collaboration to further enhance VLA model capabilities [1][6].
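The survey describes architectures rather than code. As one way to phrase the Planner+Policy split it categorizes in Group 4, here is a minimal Python skeleton; every class, method, and signature is invented for illustration and does not correspond to any specific VLA system.

```python
# Skeleton of the hierarchical "planner + policy" pattern described in the survey.
# All names and signatures are invented for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class SubGoal:
    description: str                  # e.g. "grasp the red mug by the handle"

class Planner:
    """High-level module (typically a large VLM): instruction + observation -> sub-goals."""
    def plan(self, instruction: str, observation) -> List[SubGoal]:
        # A real planner would prompt a VLM; here we return a fixed decomposition.
        return [SubGoal("locate target object"), SubGoal("grasp it"), SubGoal("place it at goal")]

class Policy:
    """Low-level module: sub-goal + current observation -> motor command."""
    def act(self, subgoal: SubGoal, observation) -> list:
        # A real policy would run a learned visuomotor controller at high frequency.
        return [0.0] * 7              # placeholder 7-DoF joint command

def run_task(instruction: str, get_observation):
    planner, policy = Planner(), Policy()
    obs = get_observation()
    for subgoal in planner.plan(instruction, obs):
        for _ in range(3):            # a few control steps per sub-goal
            command = policy.act(subgoal, obs)   # would be sent to the robot
            obs = get_observation()
    return "done"

print(run_task("put the mug on the shelf", get_observation=lambda: None))
```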