Reinforcement Learning
Meta's 10,000-citation reinforcement learning heavyweight walks out, using Zuck's own words as his farewell message. That stings
36Kr · 2025-08-27 06:48
Core Viewpoint
- The departure of Rishabh Agarwal from Meta has raised concerns about employee retention and morale within the company, especially as he was a key figure in the reinforcement learning domain and had made significant contributions during his tenure [1][3][15].

Group 1: Rishabh Agarwal's Background and Contributions
- Rishabh Agarwal has a strong academic and professional background in reinforcement learning, with over 10,000 citations of his work and an h-index of 34 [5][6].
- He was involved in the development of significant models such as Gemini 1.5 and Gemma 2 during his time at Google and later at Meta [3][11].
- His paper "Deep Reinforcement Learning at the Edge of the Statistical Precipice" won a NeurIPS Outstanding Paper Award in 2021, highlighting his expertise in the field [11][13].

Group 2: Implications of His Departure
- Agarwal's exit is seen as part of a broader trend of experienced employees leaving Meta, which may be linked to internal conflicts over compensation disparities between new hires and long-term staff [15][17].
- The departure of Agarwal and other senior employees could impact Meta's research capabilities and innovation in artificial intelligence [1][15].
- There is speculation that Agarwal may pursue entrepreneurial ventures, indicating a potential shift in the competitive landscape of AI research [14].

Group 3: Company Culture and Employee Morale
- The recruitment drive at Meta has reportedly created friction among employees, leading to threats of resignation from some researchers [17].
- The situation reflects a challenging environment for Meta as it attempts to balance attracting new talent with retaining its existing workforce [17].
Seven years in the making: Li Hang's new book "Machine Learning Methods (2nd Edition)" is out, now with reinforcement learning; 20 copies to give away
机器之心· 2025-08-27 03:18
Core Viewpoint
- The article discusses the release of the second edition of "Machine Learning Methods" by Li Hang, which expands beyond traditional machine learning to include deep learning and reinforcement learning, addressing the growing interest in these areas within the AI community [4][5][22].

Summary by Sections

Overview of the Book
- The new edition includes significant updates and additions, particularly in reinforcement learning, which has been gaining attention in AI applications [4][5].
- The book is structured into four main parts: supervised learning, unsupervised learning, deep learning, and reinforcement learning, providing a comprehensive framework for readers [5][22].

Supervised Learning
- The first part covers key supervised learning methods such as linear regression, the perceptron, support vector machines, maximum entropy models, logistic regression, boosting methods, hidden Markov models, and conditional random fields [7].

Unsupervised Learning
- The second part focuses on unsupervised learning techniques, including clustering, singular value decomposition, principal component analysis, Markov chain Monte Carlo methods, the EM algorithm, latent semantic analysis, and latent Dirichlet allocation [8].

Deep Learning
- The third part introduces major deep learning methods, such as feedforward neural networks, convolutional neural networks, recurrent neural networks, Transformers, diffusion models, and generative adversarial networks [9].

Reinforcement Learning
- The fourth part details reinforcement learning methods, including Markov decision processes, multi-armed bandit problems, proximal policy optimization, and deep Q-networks [10].
- The book aims to provide a systematic introduction to reinforcement learning, which previous textbooks covered only sparsely [4][10].

Learning Approach
- Each chapter presents one or two machine learning methods, explaining models, strategies, and algorithms clearly, supported by mathematical derivations to enhance understanding [12][19].
- The book is designed for university students and professionals, assuming a background in calculus, linear algebra, probability and statistics, and computer science [22].

Author Background
- Li Hang, the author, is a recognized expert in the field, with a background in natural language processing, information retrieval, machine learning, and data mining [24].
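The multi-armed bandit problem covered in the book's reinforcement learning part can be illustrated with a minimal example. Below is a sketch of an epsilon-greedy agent on a Bernoulli bandit; the function name, arm probabilities, and hyperparameters are illustrative choices, not taken from the book.

```python
import random

def epsilon_greedy_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Run epsilon-greedy on a Bernoulli multi-armed bandit.

    Value estimates use the incremental mean: Q[a] += (r - Q[a]) / N[a].
    Returns the estimated value of each arm after `steps` pulls.
    """
    rng = random.Random(seed)
    n_arms = len(true_means)
    Q = [0.0] * n_arms   # estimated value per arm
    N = [0] * n_arms     # pull count per arm
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(n_arms)                   # explore
        else:
            a = max(range(n_arms), key=Q.__getitem__)   # exploit
        reward = 1.0 if rng.random() < true_means[a] else 0.0
        N[a] += 1
        Q[a] += (reward - Q[a]) / N[a]                  # incremental mean update
    return Q

estimates = epsilon_greedy_bandit([0.2, 0.5, 0.8])
best = max(range(3), key=estimates.__getitem__)
```

With enough pulls the agent concentrates on the best arm while the epsilon fraction of random pulls keeps the other estimates from going stale.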
Teaching robots hand in hand: Stanford proposes the RTR framework, letting a robotic arm assist real-machine training of humanoid robots
机器之心· 2025-08-27 00:46
Core Viewpoint
- The application of reinforcement learning (RL) algorithms to humanoid robot motion control is emerging as a key research area, with a focus on the "Sim-to-Real" paradigm, which trains general control models in diverse simulated environments so they can adapt to the real world [2][3].

Group 1: Current Challenges and Innovations
- Existing methods primarily rely on domain randomization to train models in simulation, achieving impressive results across varied tasks but often sacrificing performance in specific real-world environments [2][3].
- Recent efforts have begun to explore fine-tuning models with limited real-world data after simulation pre-training, with notable contributions from institutions such as NVIDIA and CMU [3].
- Conducting RL training directly in real environments has been a significant barrier due to the instability of humanoid robots: minor errors can cause hardware damage [3].

Group 2: Proposed Solution - RTR System
- The RTR (Robot-Trains-Robot) system introduces a novel approach in which a "teacher" robotic arm guides a "student" humanoid robot through online reinforcement learning, inspired by how human parents teach infants to walk [4][6].
- The teacher arm plays multiple roles: it provides safety support, helps reset the student after failures, collects valuable training data, and sets a curriculum to improve learning efficiency [5][6].

Group 3: Hardware and Algorithm Design
- The hardware setup pairs a teacher and a student robot: the teacher is a UR5 robotic arm equipped with force-torque sensors, and the student is based on the open-source ToddlerBot [8][9].
- The algorithm follows a three-stage Sim-to-Real process: training adaptable policies in simulation, optimizing a general initial latent variable, and performing online fine-tuning in the real world with minimal data [9][11].

Group 4: Experimental Validation
- Experiments demonstrated the effectiveness of the RTR system on tasks such as walking and swinging, showing that the teacher's compliant assistance significantly improves learning outcomes compared to fixed supports [15][19].
- The proposed latent-variable fine-tuning method outperformed traditional methods in data efficiency and final performance, doubling walking speed with just 20 minutes of real-world training [15][18].

Group 5: Future Prospects
- The RTR framework not only addresses current challenges in deploying humanoid robots but also introduces a new paradigm of physically assisted real-world learning, with potential applications to larger humanoid robots and other complex robotic systems [17].
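The latter two stages of the Sim-to-Real recipe summarized above (optimizing an initial latent variable, then fine-tuning it with minimal real-world data) can be sketched as a black-box search over the latent while the policy itself stays frozen. This is a hypothetical simplification, not the paper's actual algorithm: `evaluate` stands in for a real-robot rollout, and the quadratic objective is a toy substitute for episodic return.

```python
import random

def finetune_latent(evaluate, z_init, iters=200, sigma=0.1, seed=0):
    """Hill-climb a latent vector z to maximize episodic return.

    `evaluate(z)` is assumed to roll out the frozen, latent-conditioned
    policy and return the episode's total reward. Only z is adapted,
    which is why very little real-world data is needed.
    """
    rng = random.Random(seed)
    z, best = list(z_init), evaluate(z_init)
    for _ in range(iters):
        cand = [zi + rng.gauss(0.0, sigma) for zi in z]  # perturb the latent
        score = evaluate(cand)
        if score > best:                                 # keep only improvements
            z, best = cand, score
    return z, best

# Toy stand-in objective: pretend returns peak at z* = (0.3, -0.7).
target = (0.3, -0.7)
reward = lambda z: -sum((zi - ti) ** 2 for zi, ti in zip(z, target))
z_opt, ret = finetune_latent(reward, [0.0, 0.0])
```

Because each iteration costs one rollout, this kind of latent search keeps real-robot interaction time to minutes rather than hours.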
In a single day, Meta loses two key players. Is Zuck's cash superpower wearing off?
机器之心· 2025-08-26 08:53
Core Viewpoint
- Meta is experiencing significant talent attrition, particularly among top AI researchers, due to internal management issues and a lack of alignment with the company's vision and culture [1][9][39].

Group 1: Talent Departure
- Two senior researchers, Rishabh Agarwal and Bert Maher, recently announced their departure from Meta; Agarwal's destination is undisclosed, while Maher is joining Anthropic [3][24].
- Agarwal's exit illustrates that even high salaries cannot retain top talent: he cites Zuckerberg's own advice about taking risks in a rapidly changing world [14][39].
- Maher, who worked at Meta for 12 years, contributed to major projects such as PyTorch and HHVM, so his departure represents a loss of deep expertise [25][27].

Group 2: Internal Management Issues
- Meta's internal management culture is cited as a reason for its employee retention rate of 64%, compared to Anthropic's 80% [30][33].
- Complaints from former employees, including John Carmack and Tijmen Blankevoort, point to poor resource utilization, performance-review pressure, and internal competition [33][34].
- The lack of a strong CTO to balance the CEO's power is seen as a risk to the company's future stability [11].

Group 3: Cultural Misalignment
- Many top researchers are leaving Meta because its focus on speed and profitability conflicts with their values of safety, independence, and long-term research [39][40].
- The absence of a compelling mission at Meta makes it hard for some employees to justify staying, as exemplified by Tesla engineer Yun-Ta Tsai's decision to remain with his current employer for its meaningful goals [40][42].
- The perception that Meta's culture prioritizes financial gain over meaningful work is making potential recruits reluctant to join [39][42].
Meta's 10,000-citation reinforcement learning heavyweight walks out! Using Zuck's own words as his farewell message. That stings
量子位· 2025-08-26 04:36
Core Viewpoint
- The departure of Rishabh Agarwal from Meta highlights a potential trend of employee attrition within the company, raising concerns about internal conflicts and employee satisfaction amid a hiring spree [1][22][24].

Group 1: Rishabh Agarwal's Departure
- Rishabh Agarwal, a prominent figure in reinforcement learning at Meta, is leaving the company after 7.5 years, expressing a desire to explore a completely different path [1][17].
- His contributions include significant work on models such as Gemini 1.5 and Gemma 2, and he received a NeurIPS Outstanding Paper Award in 2021 for his research on statistical instability in deep reinforcement learning [4][13][14].
- Agarwal's next steps remain uncertain, though speculation suggests he may turn to entrepreneurship [17].

Group 2: Employee Turnover at Meta
- Agarwal's exit is part of a broader trend: another employee with 12 years at Meta also announced his departure, joining competitor Anthropic [18][19].
- Reports indicate that tensions over salary disparities between new and long-tenured employees have bred dissatisfaction, prompting some researchers to threaten resignation [23][24].
- The current hiring surge at Meta may be exacerbating these internal conflicts, contributing to the exodus of experienced employees [22][24].
New agent automatically operates phones and computers, sweeping open-source SOTA on 10 leaderboards | Tongyi Lab
量子位· 2025-08-25 23:05
Core Viewpoint
- The article discusses the launch of the Mobile-Agent-v3 framework by Tongyi Lab, which achieves state-of-the-art (SOTA) performance in automating tasks on mobile and desktop platforms, performing complex tasks through a multi-agent system [2][9].

Group 1: Framework and Capabilities
- Mobile-Agent-v3 can execute complex tasks autonomously from a single command and switch roles seamlessly within its multi-agent framework [3][9].
- It achieves SOTA performance across ten major GUI benchmarks, demonstrating both foundational capability and reasoning generalization [9][11].

Group 2: Data Production and Model Training
- The framework relies on a robust cloud infrastructure built on Alibaba Cloud, enabling large-scale parallel task execution and data collection [11][13].
- A self-evolving data production pipeline automates data collection and model optimization, forming a feedback loop for continuous improvement [13][15].
- The model is trained on high-quality trajectory data generated from a combination of historical task data and large-scale pre-trained language models [22][23].

Group 3: Task Execution and Understanding
- The framework emphasizes precise localization of interface elements, allowing the AI to understand the graphical interface effectively [18][19].
- It incorporates complex task planning, enabling the AI to strategize before acting and to handle long-horizon, cross-application tasks [21][22].
- The model understands the causal relationship between actions and interface changes, which is crucial for effective task execution [24][25].

Group 4: Reinforcement Learning and Performance
- The Mobile-Agent team employs reinforcement learning (RL) to improve the model's decision-making through real-time interaction [28][29].
- An innovative TRPO algorithm addresses the sparse, delayed reward signals typical of GUI tasks, significantly improving learning efficiency [31][36].
- The framework shows a performance gain of nearly 8 percentage points in dynamic environments, indicating its self-evolution potential [36][40].

Group 5: Multi-Agent Collaboration
- Mobile-Agent-v3 supports multi-agent collaboration, with different agents handling task execution, planning, reflection, and memory [33][34].
- This collaborative design creates a closed-loop enhancement pipeline, improving the overall efficiency and effectiveness of task execution [34][35].
- It enables the AI to act with purpose, adjust based on feedback, and retain critical information for future tasks [35][36].
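The sparse, delayed rewards mentioned above are the core difficulty of RL for GUI agents: most steps yield zero reward, with success signaled only at the episode's end. A standard way to propagate that terminal signal backward (a generic technique, not the article's specific TRPO variant) is a discounted return-to-go for each step:

```python
def returns_to_go(rewards, gamma=0.99):
    """Discounted return-to-go for each step of one trajectory.

    For a sparse GUI task, `rewards` is typically all zeros except a
    terminal success bonus; discounting propagates that signal backward
    so every step receives a credit-assignment target.
    """
    G, out = 0.0, []
    for r in reversed(rewards):
        G = r + gamma * G   # G_t = r_t + gamma * G_{t+1}
        out.append(G)
    return out[::-1]

# Sparse episode: reward only on the final (successful) step.
traj = [0.0] * 4 + [1.0]
targets = returns_to_go(traj)
```

Each earlier step now receives a discounted share of the terminal reward, giving the policy a per-step learning signal even when the environment provides none.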
1-on-1 paper tutoring in VLA / reinforcement learning / VLN
具身智能之心· 2025-08-25 06:00
Group 1
- The article announces 1-on-1 paper guidance in the field of embodied intelligence, focusing on three areas: VLA, reinforcement learning, and sim2real [1]
- The guidance primarily targets submissions to major conferences such as CVPR, ICCV, ECCV, ICLR, CoRL, ICML, and ICRA [1]
- The instructors are active in academic research on embodied intelligence and bring innovative ideas [1]

Group 2
- Interested readers can add the listed WeChat contact or scan a QR code to inquire about the tutoring [2]
What are the entry points for moving from autonomous driving to embodied intelligence?
自动驾驶之心· 2025-08-24 23:32
Core Viewpoint
- The article discusses the transition from autonomous driving to embodied intelligence, highlighting the similarities and differences in algorithms and tasks between the two fields [1].

Group 1: Algorithm and Task Comparison
- Embodied intelligence largely inherits the algorithms used in robotics and autonomous driving, including training and fine-tuning methods as well as large models [1].
- Notable differences lie in specific tasks, including data collection methods and a greater emphasis on execution hardware and mechanical structure [1].

Group 2: Community and Learning Resources
- A full-stack learning community named "Embodied Intelligence Heart" has been established to share knowledge on algorithms, data collection, and hardware solutions for embodied intelligence [1].
- The community's focus areas include VLA, VLN, Diffusion Policy, reinforcement learning, robotic arm grasping, pose estimation, robot simulation, multimodal large models, chip deployment, sim2real, and robot hardware structure [1].
Heavyweight: Zhejiang University's new survey decodes 40+ years of legged robot evolution and future challenges
机器人大讲堂· 2025-08-24 13:15
Recently, a research team from the State Key Laboratory of Fluid Power and Mechatronic Systems at Zhejiang University published a systematic review in the international journal Cyborg and Bionic Systems, comprehensively surveying the evolution and open challenges of single-legged robots across structural design, modeling methods, and control strategies.

The paper, titled "Bridging the Gap to Bionic Motion: Challenges in Legged Robot Limb Units Design, Modeling, and Control" and written by a team led by an academician of the Chinese Academy of Engineering, systematically explores the key paths to achieving "bionic motion," offering new perspectives on the fundamental problem of making robots walk as nimbly as living creatures.

The review's distinctive value is twofold: it traces more than forty years of evolution from simple telescopic structures to complex articulated systems, and, more importantly, it reveals the scientific significance of the single-legged robot as the "basic unit" of multi-legged machines. By focusing on the essence of legged locomotion while keeping system complexity low, this line of work laid the theoretical groundwork for successful commercial quadrupeds such as Boston Dynamics' Spot and DEEP Robotics' Jueying.

Paper link: https://spj.science.org/doi/10.34133/cbsystems.0365

▍ Why start from ...
After a year and a half forging agents at OpenAI, he returned to China to build the first open-source agent training framework. Yet this 30-year-old Tsinghua prodigy says: a startup's fate isn't decided by technology
AI前线· 2025-08-23 05:32
Core Viewpoint
- The article highlights the journey and achievements of Wu Yi, a prominent figure in AI and reinforcement learning, emphasizing his contributions to the field and the unique positioning of his startup, BianSai Technology, which focuses on the AReaL framework for training large models [2][4][8].

Group 1: Career and Achievements
- Wu Yi has a distinguished background as an ACM world medalist and a coach for the IOI team, with significant experience at Facebook, ByteDance, and OpenAI [2][4].
- His startup, BianSai Technology, was acquired by Ant Group in 2024, and the team has developed a unique asynchronous reinforcement learning framework called AReaL, which has gained traction on GitHub with 2.4k stars [2][4][8].

Group 2: Insights from OpenAI Experience
- Wu Yi's decision to join OpenAI was somewhat serendipitous: he initially aimed for Google Brain but found OpenAI more accommodating thanks to its non-profit structure [4][5].
- He emphasizes evidence-driven decision-making in AI development, advocating a flexible approach that allows rapid adjustment as new findings emerge [5][13].

Group 3: Reinforcement Learning and Competitions
- Wu Yi discusses the differing performance of AI models in competitions like IOI and CCPC, attributing failures to the readiness of the models rather than inherent limitations of AI [6][7].
- He likens AI's role in competitive programming to sports, where psychological factors and skill both play significant roles [6][7].

Group 4: AReaL Framework and Market Position
- AReaL is positioned as a unique framework for training agent models; Wu Yi asserts it currently has no direct competitors in this space [2][33][36].
- The framework aims to make training agent models faster and more effective, with a focus on user-friendliness and performance [36][37].

Group 5: Future Directions and Challenges
- Wu Yi anticipates that multi-agent systems will grow in importance as agent workflows become more complex, opening new opportunities for algorithm development [41][42].
- He is confident that agent technology will evolve into a mainstream form of AI interaction, moving toward more autonomous and proactive roles [42].
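The asynchronous design attributed to AReaL above decouples trajectory generation from training, so slow environments never block updates. A minimal thread-and-queue sketch of that producer-consumer pattern follows; it is a conceptual illustration only, with placeholder trajectories instead of real rollouts, and AReaL's actual architecture is far more elaborate.

```python
import queue
import threading

def run_async_rl(n_workers=3, episodes_per_worker=5):
    """Decoupled rollout/training loop.

    Worker threads push trajectories into a shared buffer asynchronously
    while a single learner thread consumes them, standing in for the
    actor/learner split used by asynchronous RL training systems.
    """
    buf = queue.Queue()
    processed = []

    def worker(wid):
        for ep in range(episodes_per_worker):
            buf.put((wid, ep, [0.0, 1.0]))   # placeholder trajectory

    def learner(total):
        for _ in range(total):
            traj = buf.get()                 # blocks until data arrives
            processed.append(traj)           # stand-in for a gradient step

    workers = [threading.Thread(target=worker, args=(i,)) for i in range(n_workers)]
    learn = threading.Thread(target=learner, args=(n_workers * episodes_per_worker,))
    for t in workers:
        t.start()
    learn.start()
    for t in workers:
        t.join()
    learn.join()
    return processed

trajs = run_async_rl()
```

The learner never waits for any single worker to finish an episode, which is the property that makes asynchronous training attractive when environment steps (here, agent tool calls or GUI interactions) are slow.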