Robustness
Moving with the Times: GF Fund's Yang Dong Team Builds a Toolbox for All Market Cycles
Di Yi Cai Jing Zi Xun· 2025-12-25 03:56
At the end of 2025, the A-share market is entering a new phase amid the rapid "gear shifts" of style rotation. Looking back over the past few years: the return to value in 2022-2023, the barbell strategy of 2024, the growth bull market of 2025... each sharp style switch has posed a serious challenge to single-strategy investors.

Meanwhile, China's public fund industry is undergoing a profound reform toward high-quality development. Regulators continue to push the industry from a scale-driven orientation toward one centered on investors' interests, and one core requirement is the pursuit of more sustainable investment returns, making hard-won excess returns more predictable and more stable.

Facing this complex and volatile market environment, GF Fund, with twenty-two years of experience in active equity investing, has spent nearly four years cultivating an all-weather style-strategy team built around multi-strategy investing and quantitative empowerment of active management. The team currently has six members with an average of more than 10 years of industry experience. It is led by Yang Dong, assistant general manager of GF Fund, who has 19 years in the securities industry and more than 16 years in investment management, and it participates in product management across asset allocation, mid-level industry comparison, stock selection, and strategy construction.

As a hybrid "discretionary + quantitative" team, the group led by Yang Dong is committed not only to improving portfolio returns but also to solving a core question: how, amid market turbulence, can it offer "all-weather style strategy" products that deliver stable, sustainable, and systematic excess returns? This is also their practice of, and reflection on, the industry's requirements for high-quality development. 01 A Veteran's Evolution: From Individual Diligence to Team Intelligence ...
NVIDIA Open-Sources Autonomous Driving Software: Will Chinese Automakers Take It?
汽车商业评论· 2025-12-03 23:07
Core Insights - The article discusses the launch of the Alpamayo-R1 model by NVIDIA, the world's first open-source vision-language-action (VLA) model designed for autonomous driving scenarios, which enhances decision-making through "chain reasoning" [5][10][12] - The model significantly improves safety in complex long-tail scenarios, achieving a 12% increase in planning accuracy, a 35% reduction in accident rates, and a 25% decrease in near-miss incidents [10][12] - NVIDIA's strategy includes expanding its ecosystem influence by providing open-source technology, allowing automakers to quickly assemble autonomous driving systems [14][16] Technical Advancements - The Alpamayo-R1 model converts sensor data into natural-language scene descriptions, enabling step-by-step reasoning similar to that of human drivers, as sketched below [5][10] - The model's low-latency response of 99 milliseconds supports its effectiveness in real-time decision-making [10] - The accompanying Cosmos developer toolchain offers resources for data construction, scene generation, and model evaluation, facilitating model fine-tuning and deployment [12] Strategic Considerations - NVIDIA's move to open-source its core algorithms is seen as a strategic effort to solidify its market position and drive demand for its hardware, such as the Orin/Thor automotive-grade chips [14][16] - The initiative is expected to establish industry standards for safety and evaluation, aligning with global regulatory demands for transparency in autonomous driving [19] - The shift from closed to open-source models in the autonomous driving sector may trigger a new wave of open-source development, as decision-making algorithms become critical competitive factors [24] Industry Impact and Opportunities - NVIDIA's open-source approach intensifies competition between open-source and closed-source ecosystems in the autonomous driving industry [21][24] - Chinese automakers, heavily reliant on NVIDIA's platforms, stand to benefit from the open-source tools for local algorithm development and scene tuning [26][27] - However, the industry faces challenges, including a significant talent gap in autonomous driving engineering, with a projected shortfall of over one million professionals by 2025 [29][30]
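The "chain reasoning" pattern in the Technical Advancements bullets (sensor data to a natural-language scene description, then step-by-step reasoning, then an action) can be made concrete with a small sketch. Everything below is a hypothetical toy, not the actual Alpamayo-R1 API, which the article does not document:

```python
# A toy illustration of the three-stage chain-reasoning pattern described
# above. Every name and number here is a hypothetical stand-in.
from dataclasses import dataclass

@dataclass
class Action:
    steering: float  # normalized steering command, -1 (left) to 1 (right)
    accel: float     # normalized longitudinal command, -1 (brake) to 1 (throttle)

def describe_scene(obstacle_dist_m: float, obstacle_moving: bool) -> str:
    # Stage 1: compress (toy) perception output into natural language.
    motion = "moving" if obstacle_moving else "stationary"
    return f"{motion} obstacle {obstacle_dist_m:.0f} m ahead in ego lane"

def reason(scene: str) -> list[str]:
    # Stage 2: an explicit, human-readable reasoning chain; the article
    # credits this step with better handling of long-tail scenarios.
    steps = [f"observation: {scene}"]
    if scene.startswith("moving"):
        steps.append("the obstacle may cut across our path; reduce speed early")
    else:
        steps.append("the obstacle will not clear the lane; plan a full stop")
    return steps

def act(steps: list[str]) -> Action:
    # Stage 3: map the conclusion of the chain to a control action.
    brake = -1.0 if "full stop" in steps[-1] else -0.4
    return Action(steering=0.0, accel=brake)

if __name__ == "__main__":
    scene = describe_scene(obstacle_dist_m=18, obstacle_moving=False)
    for step in reason(scene):
        print(step)
    print(act(reason(scene)))  # Action(steering=0.0, accel=-1.0)
```

The point of the pattern is that the intermediate reasoning chain is readable by humans, which is what the article credits for the model's improved behavior in rare, long-tail situations.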
Li Auto Shares Closed-Loop Reinforcement Learning Training Framework for Autonomous Driving
理想TOP2· 2025-11-27 16:10
Core Viewpoint - The article discusses advancements in autonomous driving through the introduction of the AD-R1 framework, which uses closed-loop reinforcement learning to enhance the safety and robustness of end-to-end autonomous driving systems, addressing the limitations of existing world models in predicting dangerous outcomes [2][4]. Group 1: Closed-Loop vs. Open-Loop Systems - Open-loop systems rely on offline data and static playback, while closed-loop systems interact dynamically with the environment, allowing real-time adjustments to the vehicle's trajectory [1]. - The AD-R1 framework represents a significant step in closed-loop reinforcement learning for autonomous driving [1]. Group 2: Challenges in Imitation Learning - Imitation learning faces two main challenges: distribution shift, caused by long-tail scenarios unseen in training, and the lack of negative feedback, which makes it difficult for the model to learn from mistakes [3]. - Optimistic bias is identified as a systemic flaw in applying reinforcement learning to autonomous driving: a world model may predict unrealistically safe outcomes even for unsafe actions [3]. Group 3: AD-R1 Framework Components - The AD-R1 framework includes two core components: an impartial world model and reinforcement learning based on imagined futures [4]. - The impartial world model employs counterfactual data synthesis to teach the model the consequences of unsafe driving behaviors [4]. Group 4: Model Training and Evaluation - The training process involves sampling candidate trajectories, imagining each one's future with the impartial world model, scoring trajectories based on predicted outcomes, and updating the policy with the GRPO algorithm, as sketched after this summary [8]. - 3D/4D voxel outputs allow detailed reward calculations, improving the evaluation of collision severity and checking that the vehicle stays stably on the road surface [8]. Group 5: Additional Features - Trajectory-aware gating keeps the model focused on features relevant to the driving path, while an ego-trajectory fidelity loss penalizes deviations from the input control commands [6]. - The framework also includes volumetric collision penalties and vertical clearance checks to enhance safety in complex environments [8].
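The training loop in Group 4 maps naturally onto a group-relative policy update. Below is a minimal sketch of one such step, assuming hypothetical `policy.sample` and `world_model.score` interfaces; Li Auto's actual implementation is not published in the article, so only the GRPO advantage computation itself should be read as the standard technique:

```python
# A sketch of one AD-R1-style training step, under assumed interfaces:
#   policy.sample(obs, n)        -> (list of trajectories, tensor of log-probs)
#   world_model.score(obs, traj) -> scalar reward tensor (e.g. collision
#   severity and road-surface stability from imagined 3D/4D voxel rollouts)
import torch

def grpo_step(policy, world_model, obs, optimizer, group_size: int = 8) -> float:
    # 1) Sample a group of candidate trajectories for the same observation.
    trajs, logps = policy.sample(obs, n=group_size)

    # 2) "Imagine" each trajectory's future with the impartial world model
    #    and score the predicted outcome.
    rewards = torch.stack([world_model.score(obs, t) for t in trajs])

    # 3) Group-relative advantage: how much better each candidate is than
    #    its siblings, normalized within the group (the core of GRPO).
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)

    # 4) Policy-gradient update weighted by the group-relative advantage.
    loss = -(adv.detach() * logps).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A design note on why GRPO fits this setting: it avoids training a separate value network by normalizing rewards within each sampled group, and since all candidates in a group share the same observation, their relative scores are directly comparable.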
Robot Combat Matches: Still Dependent on Human Remote Control?
Hu Xiu· 2025-05-28 02:22
Core Insights - The article discusses the inaugural "CMG World Robot Competition Series," featuring humanoid robots in combat and showcasing advances in motion control and balance [2][5]. Group 1: Event Overview - The competition is the first of its kind globally, with humanoid robots as the main participants in a combat sport [2]. - Four teams controlled the Unitree G1 humanoid robot, which stands 1.3 meters tall, weighs 35 kilograms, and has 29 degrees of freedom [5]. Group 2: Technology and Control - The competition relied primarily on remote control, so the operator's reaction time mattered alongside the robot's algorithms [3][10]. - Current remote-control technology is likened to the robot's "small brain" (low-level motion execution), while non-remote operation, which requires advanced capabilities such as visual recognition and real-time decision-making, is compared to the "big brain" [3][11]. Group 3: Performance Metrics - Scoring was based on effective strikes, with different point values awarded for hits to different body parts [5]. - A robot's ability to get back up within 8 seconds of a fall was a critical performance metric, testing both hardware and software resilience (see the toy sketch following this summary) [8][9]. Group 4: Robustness and Materials - "Robustness" is highlighted as a key performance indicator, referring to the robot's ability to maintain stability and performance under various disturbances [6][7]. - The robots are built from lightweight materials such as carbon fiber and aluminum alloys, increasing strength while reducing weight [9]. Group 5: Future Developments - Experts predict that fully autonomous control in complex scenarios may take another 3 to 5 years, with significant challenges remaining in real-time perception and decision-making algorithms [4][14]. - Advanced hardware such as high-precision sensors and AI chips is essential for the evolution of non-remote operation, but these components significantly increase costs [13].
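The scoring and fall-recovery rules in Group 3 reduce to a few lines of logic. A toy sketch follows; the article cites the 8-second window but gives neither the per-region point values nor the consequence of exceeding the limit, so those parts are placeholders:

```python
# A toy sketch of the match rules summarized above: points per effective
# strike by body region, plus the 8-second get-up rule.
STRIKE_POINTS = {"head": 3, "torso": 2, "limb": 1}  # placeholder values
GET_UP_LIMIT_S = 8.0  # the recovery window the article cites

def score_strike(region: str) -> int:
    """Points awarded for one effective strike on the given body region."""
    return STRIKE_POINTS.get(region, 0)

def fall_outcome(time_to_stand_s: float) -> str:
    """Whether a fallen robot stood back up within the limit; the 'round lost'
    consequence is a placeholder, not a rule stated in the article."""
    return "recovered" if time_to_stand_s <= GET_UP_LIMIT_S else "round lost"

if __name__ == "__main__":
    print(score_strike("head") + score_strike("torso"))  # 5 with placeholder values
    print(fall_outcome(6.5))  # recovered
    print(fall_outcome(9.0))  # round lost
```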