Reinforcement Learning

In Depth | OpenAI Co-founder: GPT-5's breakthrough is that intelligence has begun to reach genuinely deep cognitive territory; the ideal state is to default to our automatic selection rather than manual configuration
Z Potentials· 2025-09-06 04:40
Core Insights
- OpenAI has released GPT-5 and GPT-OSS, marking significant advancements in AI technology and accessibility [4][3]
- GPT-5 is the first hybrid model, designed to enhance user experience by automatically selecting model architectures (see the router sketch after this summary) [5][6]
- The evolution of OpenAI's reasoning capabilities has transitioned from simple next-token prediction to more complex reasoning paradigms [9][10]

Group 1: OpenAI's Technological Advancements
- The release of GPT-5 and GPT-OSS has seen millions of downloads within days, showcasing the demand for these technologies [4]
- GPT-5's breakthrough lies in its ability to engage in deep cognitive tasks, surpassing the limitations of its predecessor, GPT-4 [24][25]
- The model's training has shifted from a one-time training approach to a more iterative reasoning-training cycle, enhancing its learning efficiency [9][10]

Group 2: Learning Mechanisms and Challenges
- OpenAI emphasizes the importance of real-world experience for models to develop generalization capabilities, highlighting the limitations of purely theoretical training [6][15]
- The company is exploring the potential of real-time online learning, aiming to allow models to adapt continuously during operation [10][11]
- Current bottlenecks in AI development are primarily related to computational power, which is essential for enhancing model capabilities [11][12]

Group 3: Future Directions and Applications
- OpenAI is focused on creating models that can assist in complex problem-solving, with applications in various fields, including mathematics and biology [25][22]
- The company aims to improve the integration of AI into real-world applications, ensuring that models can handle the complexities of diverse environments [27][30]
- OpenAI's vision includes making AI technology accessible to a broader audience, with plans for aggressive pricing strategies to enhance adoption [39][40]
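The "automatic selection" described above can be pictured as a lightweight router placed in front of a fast model and a slower reasoning model. The sketch below is purely illustrative: the model names, the difficulty heuristic, and the length threshold are assumptions for this example, not OpenAI's actual routing logic.

```python
# Minimal sketch of an automatic model router (hypothetical, not OpenAI's real logic).
# A cheap heuristic decides whether a request goes to a fast model or a slower
# reasoning model, so users do not have to pick a model manually.

FAST_MODEL = "fast-model"            # hypothetical name for a low-latency model
REASONING_MODEL = "reasoning-model"  # hypothetical name for a deliberate, slower model

REASONING_HINTS = ("prove", "step by step", "debug", "optimize", "why")

def route(prompt: str) -> str:
    """Return the model name that should handle this prompt."""
    looks_hard = len(prompt) > 400 or any(h in prompt.lower() for h in REASONING_HINTS)
    return REASONING_MODEL if looks_hard else FAST_MODEL

if __name__ == "__main__":
    print(route("What is the capital of France?"))                   # -> fast-model
    print(route("Prove step by step that sqrt(2) is irrational."))   # -> reasoning-model
```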
After a month of silence, openPangu's performance surges 8%! Huawei's 1B open-source model has arrived
机器之心· 2025-09-05 04:31
Core Viewpoint
- Huawei's openPangu Embedded-1B model represents a significant advancement in edge AI, enabling powerful AI capabilities on resource-constrained devices and paving the way for intelligent upgrades in various industries [1][5]

Group 1: Model Performance and Efficiency
- The openPangu Embedded-1B model, with 1 billion parameters, sets a new state-of-the-art (SOTA) record in performance and efficiency, demonstrating that smaller models can deliver substantial capabilities [2][3]
- The model's overall average score reached 63.90, surpassing models of similar size and matching larger models such as Qwen3-1.7B, showcasing its parameter efficiency [3][4]
- In mathematical reasoning, the model scored 82.76% on the GSM8K benchmark and 81.83% on the MATH dataset, significantly outperforming its peers [3][4]

Group 2: Technical Innovations
- The model employs a software-hardware co-design, optimizing its architecture to match the characteristics of Ascend hardware and ensuring efficient resource utilization [9][10]
- A two-stage curriculum learning approach is used to strengthen the model's reasoning capabilities, simulating a human-like learning process [15][16]
- Offline on-policy knowledge distillation allows a more flexible and effective training process, improving the model's accuracy and generalization (a distillation-loss sketch follows this summary) [18][19]

Group 3: Reinforcement Learning and Future Directions
- The model incorporates a multi-source reward reinforcement learning mechanism, improving performance through feedback targeted to task complexity [22][25]
- Future work aims to integrate fast and slow thinking within a single model, allowing adaptive responses based on problem difficulty and improving both speed and accuracy [29][30]
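The summary does not spell out the openPangu distillation recipe. As a rough anchor for what a knowledge-distillation objective looks like in general, here is a minimal token-level KL sketch in PyTorch; the tensor shapes, temperature, and random placeholder tensors are assumptions for this example, not the paper's exact method.

```python
# Sketch of a token-level knowledge-distillation loss (illustrative; not the exact
# openPangu Embedded-1B recipe). The student is trained to match the teacher's
# output distribution using a temperature-scaled KL term.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL(teacher || student) over tokens; logits have shape [batch, seq_len, vocab]."""
    t = temperature
    student_logp = F.log_softmax(student_logits / t, dim=-1)
    teacher_p = F.softmax(teacher_logits / t, dim=-1)
    # batchmean KL, rescaled by t^2 as is conventional for distillation
    return F.kl_div(student_logp, teacher_p, reduction="batchmean") * (t * t)

# Example with random tensors standing in for real model outputs.
student = torch.randn(2, 8, 1000, requires_grad=True)
teacher = torch.randn(2, 8, 1000)
loss = distillation_loss(student, teacher)
loss.backward()
```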
The technical development roadmap of embodied intelligence, seen through nearly 1,000 papers!
具身智能之心· 2025-09-05 00:45
Core Insights
- The article discusses the evolution and challenges of embodied intelligence, emphasizing the need for a comprehensive understanding of its development, the issues it faces, and future directions [3][4]

Group 1: Robotic Manipulation
- The survey on robotic manipulation highlights the transition from mechanical programming to embodied intelligence, focusing on the evolution from simple grippers to dexterous multi-fingered hands [5][6]
- Key challenges in dexterous manipulation include data collection methods such as simulation, human demonstration, and teleoperation, as well as skill learning frameworks like imitation learning and reinforcement learning (a behavior-cloning sketch follows this summary) [5][6]

Group 2: Navigation and Manipulation
- The discussion of robotic navigation emphasizes the importance of physics simulators in addressing the high cost and scarcity of real-world training data, with a focus on Sim-to-Real transfer challenges [9][15]
- The evolution of navigation techniques is outlined, transitioning from explicit memory to implicit memory, and the role of various simulators in narrowing the Sim-to-Real gap is analyzed [15][16]

Group 3: Multimodal Large Models
- The exploration of embodied multimodal large models (EMLMs) reveals their potential to bridge gaps between perception, cognition, and action, driven by advancements in large model technologies [17][19]
- Identified challenges include cross-modal alignment difficulties, high computational resource demands, and weak domain generalization [19]

Group 4: Teleoperation and Data Collection
- The survey on teleoperation of humanoid robots discusses the integration of human cognition with robotic capabilities, particularly in hazardous environments, while addressing challenges such as high degrees of freedom and communication limitations [29][30]
- Key components of teleoperation systems include human state measurement, motion retargeting, and multimodal feedback mechanisms [30][33]

Group 5: Vision-Language-Action Models
- The analysis of Vision-Language-Action (VLA) models covers their evolution from cross-modal learning architectures to the integration of visual language models and action planners [33][36]
- The article identifies core challenges in real-time control, multimodal action representation, and system scalability, while proposing future directions for adaptive AI and cross-entity generalization [36][41]
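As a concrete anchor for the "imitation learning" skill-learning framework mentioned above, here is a minimal behavior-cloning loop. The network size, observation and action dimensions, and the random stand-in demonstration data are placeholders, not any surveyed system's design.

```python
# Minimal behavior-cloning sketch for manipulation skill learning (illustrative only:
# the policy network, observation dimensions, and dataset are placeholders).
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM = 32, 7   # e.g., proprioception features -> a 7-DoF arm command

policy = nn.Sequential(nn.Linear(OBS_DIM, 128), nn.ReLU(), nn.Linear(128, ACT_DIM))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Stand-in for demonstrations collected via teleoperation or human video.
demo_obs = torch.randn(256, OBS_DIM)
demo_act = torch.randn(256, ACT_DIM)

for epoch in range(10):
    pred = policy(demo_obs)
    loss = nn.functional.mse_loss(pred, demo_act)   # match the demonstrated actions
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```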
GPT-5 criticized as overhyped and lagging in performance; an OpenAI co-founder explains why: we kept it in an "ivory tower" with too little contact with the real world
AI前线· 2025-09-04 06:30
Core Insights
- OpenAI is shifting its focus from consumer markets to enterprise markets with the launch of GPT-5, despite initial setbacks in its release [2][5]
- GPT-5 has received positive feedback from enterprise users, indicating its potential in the corporate sector [5][24]
- The pricing strategy for GPT-5 is competitive, with significant cost reductions over time making it more accessible for businesses (see the back-of-the-envelope calculation after this summary) [34][35]

Summary by Sections

OpenAI's Market Shift
- Sam Altman aims to capitalize on the enterprise market with GPT-5, moving beyond the consumer-focused ChatGPT [2]
- Initial criticism of GPT-5 led to a temporary rollback to GPT-4 for paid users, but the model is designed for enterprise applications [2][5]

Enterprise Adoption
- Companies such as Cursor, Vercel, and Factory have adopted GPT-5 as their default model, citing improvements in speed, performance, and cost [2][3]
- Box's CEO described GPT-5 as a breakthrough in reasoning capabilities, surpassing previous systems [3]
- JetBrains has integrated GPT-5 into its AI Assistant, highlighting its efficiency in generating tools quickly [3][4]

Technical Developments
- OpenAI's Greg Brockman discussed the evolution of reasoning in AI models, emphasizing the importance of reinforcement learning for reliability [8][10]
- The transition from offline to online learning is noted as a significant shift in AI training methodology [10][12]

Cost Efficiency
- OpenAI has achieved a 1000-fold reduction in model costs over two and a half years, enhancing accessibility for users [34][35]
- The company continues to focus on improving computational efficiency and model architecture to further reduce costs [35]

Future Directions
- GPT-5's potential as a collaborative partner in research and development is highlighted, with implications for various fields including mathematics and biology [22][21]
- OpenAI is exploring the integration of AI models into real-world applications, aiming to enhance productivity and problem-solving capabilities [24][40]
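The 1000-fold cost reduction over two and a half years cited above implies a steep compound rate. A quick back-of-the-envelope calculation, assuming purely for illustration a constant exponential rate of decline (the article does not state one):

```python
# Back-of-the-envelope: what a 1000x cost reduction over 2.5 years implies,
# assuming (for illustration only) a constant exponential rate of decline.
total_factor = 1000.0
years = 2.5
per_year = total_factor ** (1 / years)           # ~15.8x cheaper each year
per_month = total_factor ** (1 / (years * 12))   # ~1.26x cheaper each month (~21% drop)
print(f"per year: {per_year:.1f}x, per month: {per_month:.2f}x")
```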
With a youthful spirit of technological innovation as a bridge, the "South by Southwest" (SXSW) tech and arts festival sends an invitation to the 2025 Bund Conference
Jing Ji Guan Cha Wang· 2025-09-04 04:50
The 2025 Inclusion·Bund Conference will be held from September 10 to 13 at the Huangpu Expo Park in Shanghai. In recent years, the Bund Conference has drawn global attention for the frontier technology and cross-disciplinary creativity it showcases.

The conference's technology exhibitions and Innovators' Stage turn grand technological themes into concrete experiences connected to ordinary people's daily lives. In the technology exhibition area, Ant Group's AI health manager AQ provides 360-degree health-manager services such as one-tap photo recognition of medicine boxes, reading physical-exam reports, symptom consultation, sleep management, skin management, and lung-function testing, bringing advanced medical services into everyday life. In Robot Town, visitors can watch futuristic interactions such as robot boxing, martial arts, and soccer matches, and try future lifestyles such as low-altitude aircraft and autonomous driving. On the Innovators' Stage, audiences can meet scientists face to face, learn about cutting-edge fields such as controlled nuclear fusion, and glimpse the dawn of next-generation energy. Tech enthusiasts are also using AI to recreate taste and touch, and turning urban noise into creative DJ sets intended to help prevent Alzheimer's disease; these seemingly "fanciful" projects are precisely the youthful character of the Bund Conference.

A representative of the Bund Conference organizing committee said, "Technology is not just chips and rockets; it can also be an AI life companion that responds on call, or an AI assistant that reminds the elderly to take their medication on time." This echoes the point 尼尔·米诺查 made in the video, that "technology is not only about grand visions; it also shapes daily life and creative expression."

As the 2025 Bund Conference opens ...
Four AI heavyweights leave Apple, three of them Chinese
36Kr· 2025-09-04 02:13
Meta's much-publicized talent raid a while back makes it easy to forget one thing: AI talent has always been highly mobile, and "being poached with a big pay package" has never been the only reason people leave.

Bloomberg reporter Mark Gurman reports that Apple has lost another four AI leaders: Jian Zhang, Apple's lead AI researcher for robotics, and three researchers from Apple's foundation models team, Nan Du, Zhao Meng, and John Peebles.

At least two things stand out. First, the departures are concentrated: three of the four came from the foundation models team. Second, the share of Chinese researchers remains high: all but John Peebles are Chinese.

This looks like Meta's usual poaching pattern, but this time Meta has little to do with it: of the four, only Jian Zhang went to Meta. Nan Du and John Peebles went to OpenAI, while Zhao Meng joined Anthropic.

Meta poached Apple's robotics AI lead

Jian Zhang joined Apple in 2015 and, by the time of his departure, had spent a full decade there. According to his LinkedIn profile, he left as head of robotics research in Apple's Artificial Intelligence and Machine Learning (AIML) group.

Unlike Tesla's humanoid-robot project, robotics is a key component of Apple's future product line. According to Bloomberg, Apple has a series of devices ...
Songyan Power: the robot "kid prodigy" that ran out of the Tsinghua campus
Xin Jing Bao· 2025-09-03 02:02
Group 1
- The company, Songyan Power, has achieved significant recognition in the humanoid robotics field, winning multiple awards including runner-up in the first humanoid robot marathon and gold medals in gymnastics and long jump at the World Humanoid Robot Games [1][4]
- Founded by Jiang Zheyuan, who dropped out of Tsinghua University, the company has grown from a small startup into a notable player in the robotics industry, with a focus on developing humanoid robots such as N2 and E1 [2][3]
- The N2 robot, priced at several tens of thousands of yuan, has gained substantial market traction, with total orders exceeding 2,500 units and a contract value surpassing 100 million yuan, positioning Songyan Power as a leading humanoid robot manufacturer [3][4]

Group 2
- Songyan Power's strategy targets diverse application scenarios for its robots, including education, research, cultural tourism, and commercial performances, with plans to expand into overseas markets [5][4]
- The company has developed the "Xiao Nuo" robot, which features more than 30 degrees of freedom for facial expressions, aimed at applications in elderly care, exhibition reception, and psychological counseling [5]
- Beijing's supportive policies and initiatives since 2019 have fostered a robust robotics ecosystem, contributing to 50% year-on-year growth in the city's robotics industry revenue in 2024 [5][6]
A new paradigm for robotic manipulation: a systematic survey of VLA models | Jinqiu Select
锦秋集· 2025-09-02 13:41
Core Insights
- The article discusses the emergence of Vision-Language-Action (VLA) models based on large Vision-Language Models (VLMs) as a transformative paradigm in robotic manipulation, addressing the limitations of traditional methods in unstructured environments [1][4][5]
- It highlights the need for a structured classification framework to mitigate research fragmentation in the rapidly evolving VLA field [2]

Group 1: New Paradigm in Robotic Manipulation
- Robotic manipulation is a core challenge at the intersection of robotics and embodied AI, requiring deep understanding of visual and semantic cues in complex environments [4]
- Traditional methods rely on predefined control strategies, which struggle in unstructured real-world scenarios, revealing limitations in scalability and generalization [4][5]
- The advent of large VLMs has provided a revolutionary approach, enabling robots to interpret high-level human instructions and generalize to unseen objects and scenes [5][10]

Group 2: VLA Model Definition and Classification
- VLA models are defined as systems that use a large VLM to understand visual observations and natural language instructions, followed by a reasoning process that generates robotic actions [6][7]
- VLA models are categorized into two main types, Monolithic Models and Hierarchical Models, each with distinct architectures and functionality [7][8]

Group 3: Monolithic Models
- Monolithic VLA models can be implemented as single-system or dual-system architectures, integrating perception and action generation into a unified framework [14][15]
- Single-system models process all modalities together, while dual-system models separate reflective reasoning from reactive behavior, enhancing efficiency [15][16]

Group 4: Hierarchical Models
- Hierarchical models consist of a planner and a policy, allowing independent operation and a modular design that increases flexibility in task execution (a minimal interface sketch follows this summary) [43]
- These models can be further divided into Planner-Only and Planner+Policy categories, with the former focusing solely on planning and the latter integrating action execution [43][44]

Group 5: Advancements in VLA Models
- Recent advancements include enhanced perception modalities such as 3D and 4D perception, as well as the integration of tactile and auditory information [22][23][24]
- Efforts to improve reasoning capabilities and generalization are crucial for enabling VLA models to perform complex tasks in diverse environments [25][26]

Group 6: Performance Optimization
- Performance optimization focuses on inference efficiency through architectural adjustments, parameter optimization, and inference-acceleration techniques [28][29][30]
- Dual-system models have emerged to balance deep reasoning with real-time action generation, facilitating smoother deployment in real-world scenarios [35]

Group 7: Future Directions
- Future research directions include memory mechanisms, 4D perception, efficient adaptation, and multi-agent collaboration to further enhance VLA model capabilities [1][6]
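To make the Planner+Policy split concrete, here is a minimal interface sketch. Every class name, signature, and the naive instruction-splitting planner are illustrative placeholders, not the design of any specific VLA system surveyed above.

```python
# Interface sketch of a hierarchical VLA design: a planner decomposes the instruction
# into sub-goals, and a low-level policy turns each sub-goal plus the current image
# into motor actions. All names and signatures here are illustrative placeholders.
from dataclasses import dataclass
from typing import List

@dataclass
class SubGoal:
    description: str          # e.g. "grasp the red mug"

class Planner:
    def plan(self, instruction: str, image) -> List[SubGoal]:
        # A real system would query a vision-language model here.
        return [SubGoal(step.strip()) for step in instruction.split(" then ")]

class Policy:
    def act(self, subgoal: SubGoal, image) -> List[float]:
        # A real system would run a learned action head; return a dummy 7-DoF command.
        return [0.0] * 7

def run_episode(instruction: str, get_image, planner: Planner, policy: Policy):
    for subgoal in planner.plan(instruction, get_image()):
        action = policy.act(subgoal, get_image())
        # send `action` to the robot controller here
        print(subgoal.description, action)

run_episode("pick up the cup then place it on the tray",
            get_image=lambda: None, planner=Planner(), policy=Policy())
```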
Performance approaching the strongest closed-source models: Tongyi Lab open-sources Mobile-Agent-v3, setting new SOTA on 10 GUI benchmarks
机器之心· 2025-09-02 03:44
Core Viewpoint
- The article highlights the launch of GUI-Owl and Mobile-Agent-v3, advanced open-source models for GUI automation, showcasing superior performance compared to existing models and emphasizing their capabilities across diverse environments [1][29]

Group 1: Key Achievements
- GUI-Owl has achieved state-of-the-art (SOTA) performance on both Android and desktop platforms, with the 32B model surpassing top closed-source models in multiple evaluations [21][29]
- The models are designed to operate in a cloud environment, allowing dynamic task execution and data collection across multiple operating systems, including Android, Ubuntu, macOS, and Windows [11][29]

Group 2: Technical Innovations
- The system employs a self-evolving data production pipeline that minimizes human involvement in generating high-quality training data, allowing the models to iteratively optimize themselves [11][14]
- GUI-Owl's capabilities include advanced UI element grounding, long-horizon task planning, and robust reasoning, enabling it to understand and execute complex tasks effectively [16][20]

Group 3: Reinforcement Learning Framework
- A scalable reinforcement learning (RL) system has been developed to enhance the model's stability and adaptability in real-world environments, allowing it to learn continuously from its interactions [22][26]
- The introduction of the Trajectory-aware Relative Policy Optimization (TRPO) algorithm addresses the challenges of sparse and delayed reward signals in GUI automation tasks, improving learning efficiency (an illustrative sketch of trajectory-level advantages follows this summary) [26]

Group 4: Conclusion
- The release of GUI-Owl and Mobile-Agent-v3 represents a significant advance in open-source GUI automation, providing a powerful tool for various applications while reducing deployment and resource costs [29]
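The summary does not spell out the Trajectory-aware Relative Policy Optimization formulation. The sketch below only illustrates one common way of handling sparse trajectory-level rewards: normalizing returns within a group of rollouts for the same task and broadcasting the result to every step. This is an assumption for illustration, not the paper's algorithm.

```python
# Illustrative sketch: converting sparse, trajectory-level rewards into per-step
# advantages by normalizing returns across a group of rollouts for the same task.
# This is an assumption about the general idea, not the paper's TRPO algorithm.
import torch

def group_relative_advantages(traj_rewards: torch.Tensor,
                              traj_lengths: torch.Tensor):
    """traj_rewards: [num_trajectories] final reward per rollout (e.g. task success).
    traj_lengths: [num_trajectories] number of steps in each rollout.
    Returns one advantage tensor per trajectory, broadcast over its steps."""
    mean, std = traj_rewards.mean(), traj_rewards.std().clamp_min(1e-6)
    normalized = (traj_rewards - mean) / std           # relative quality within the group
    return [normalized[i].expand(int(traj_lengths[i])) for i in range(len(traj_rewards))]

rewards = torch.tensor([1.0, 0.0, 0.0, 1.0])   # e.g. 2 of 4 rollouts completed the GUI task
lengths = torch.tensor([5, 3, 4, 6])
for adv in group_relative_advantages(rewards, lengths):
    print(adv)
```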
XDog: a low-cost embodied research platform, quadruped robot dog + single arm (with VLA / reinforcement learning / simulation / sim2real tutorials)
具身智能之心· 2025-09-02 02:00
Core Viewpoint
- XDog is a low-cost, multifunctional quadruped robot dog and robotic arm development platform designed for embodied-AI developers, with a comprehensive curriculum for research and learning in robotics [1][2]

Hardware Overview
- XDog integrates functionality such as voice control, sim2real, real2sim, target recognition and tracking, autonomous robotic-arm grasping, and reinforcement-learning gait control, covering most of the technology stack for embodied lower-limb control [2][5]
- The robot dog measures 25 cm x 20 cm x 30 cm, weighs 7.0 kg, and reaches a maximum speed of 7.2 km/h with a maximum rotation speed of 450 degrees per second [3][11]
- The main control chip is the Allwinner H616, featuring a quad-core 1.6 GHz CPU, 4 GB RAM, and 32 GB storage [4][5]
- The robotic arm can reach a maximum height of 0.85 m and has a grasping range of 0.4 m around its base [7]

Software and Functionality
- The system supports several control methods, including voice control over TCP, keyboard control, visual control, and reinforcement learning for autonomous movement (a minimal velocity-command sketch follows this summary) [15][17]
- Development is based on ROS1 with Python as the primary programming language; a GPU of at least 2080 Ti class is recommended for inference [16][24]
- The platform includes a comprehensive curriculum covering topics from basic ROS knowledge to advanced reinforcement-learning principles and practical applications [22][23]

Team and Support
- The project is led by a team of experienced instructors responsible for project advancement, technical support, and course development [22]
- After-sales service is provided for one year after delivery, with video and source-code access granted immediately upon hardware receipt [26]

Delivery and Consultation
- The delivery cycle is expected to complete within three weeks after payment [25]
- For further inquiries, prospective customers can consult the assistant via WeChat [27]
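Keyboard or voice teleoperation of this kind usually sits on top of a velocity-command publisher. The rospy sketch below shows such a node under common ROS conventions; the /cmd_vel topic and Twist message are assumptions, not necessarily XDog's actual control interface.

```python
# Minimal rospy sketch of a velocity-command publisher, the kind of node that
# keyboard teleoperation for a quadruped typically builds on. The "/cmd_vel" topic
# and the Twist message are common ROS conventions; XDog's real interface may differ.
import rospy
from geometry_msgs.msg import Twist

def main():
    rospy.init_node("simple_teleop")
    pub = rospy.Publisher("/cmd_vel", Twist, queue_size=1)
    rate = rospy.Rate(10)      # publish at 10 Hz
    cmd = Twist()
    cmd.linear.x = 0.3         # walk forward at 0.3 m/s
    cmd.angular.z = 0.0        # no turning
    while not rospy.is_shutdown():
        pub.publish(cmd)
        rate.sleep()

if __name__ == "__main__":
    main()
```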