The Road from Simulation to Live Trading: A Conversation with Yang Yanjunze, Winner of the Million Simulated Account Assessment
Group 1
- The core trading principle Yang Yanjunze emphasizes is focusing on intraday trading with strict risk control, avoiding overnight positions because of external risks [2]
- Yang primarily trades stock index futures for their good liquidity and fast market rhythm, which allow him to capture opportunities quickly while keeping positions light and leverage within safe limits [2]
- He highlights the importance of keeping a clear mindset as market conditions change, focusing on the ability to secure profits after a trend ends rather than merely capitalizing on short-term gains [3]

Group 2
- Yang's growth as a trader involved rigorous training and frequent simulations, with painful experiences regarded as crucial for rapid progress [4]
- He chose the Jinpan Shou platform for its emphasis on risk control, ongoing support for trader development, and flexible assessment rules that align with real trading conditions [4]
- Despite passing the million simulated account assessment, Yang keeps a humble and clear-eyed perspective, viewing the real challenge as beginning with live trading and aspiring to manage larger funds for stable asset growth [4][5]
Exploring the Way of Trading, Heading Together to Xi'an →
Qi Huo Ri Bao · 2025-11-03 23:49
Core Insights
- The 19th National Futures (Options) Live Trading Competition and the 12th Global Derivatives Live Trading Competition award ceremony will be held on November 15 in Xi'an, attracting participants eager to learn and network in a volatile market environment [1][2]
- The event serves multiple purposes, recognizing outstanding traders and providing a platform for knowledge sharing and experience exchange among industry professionals [1][2]

Group 1
- Current market volatility has made traditional trading strategies less effective, prompting traders to seek new insights and strategies for stable profits [1][2]
- Participants, including newcomers to the futures industry, express a desire to learn advanced risk-management concepts and trading systems from experienced peers [1][2]

Group 2
- Industry experts emphasize strategy optimization and risk control for individual investors, while companies should focus on integrating finance with industry and developing green-finance initiatives [2]
- Success in the competition increasingly depends on combining professional knowledge with practical experience, highlighting the need to integrate both deeply in futures trading [2]
- The event is anticipated by stakeholders ranging from individual investors seeking to deepen their knowledge to companies pursuing industry-finance integration [2]
Trajectory Planning Based on Deep Reinforcement Learning
自动驾驶之心 · 2025-08-28 23:32
Core Viewpoint
- The article discusses the advancements and potential of reinforcement learning (RL) in autonomous driving, tracing its evolution and comparing it with other learning paradigms such as supervised learning and imitation learning [4][7][8]

Summary by Sections

Background
- The article notes the recent industry focus on new technological paradigms such as VLA and reinforcement learning, with interest in RL growing after milestones like AlphaZero and ChatGPT [4]

Supervised Learning
- In autonomous driving, perception tasks such as object detection are framed as supervised learning: a model is trained on labeled data to map inputs to outputs [5]

Imitation Learning
- Imitation learning trains models to replicate actions from observed behavior, much as a child learns from adults; it is the primary learning objective in end-to-end autonomous driving [6]

Reinforcement Learning
- Reinforcement learning differs from imitation learning by learning through interaction with the environment, using feedback from task outcomes to optimize the model; it is particularly suited to sequential decision-making tasks in autonomous driving [7]

Inverse Reinforcement Learning
- Inverse reinforcement learning addresses the difficulty of hand-designing reward functions for complex tasks by learning a reward model from user feedback, which can then guide the main model's training [8]

Basic Concepts of Reinforcement Learning
- Key concepts include policies, rewards, and value functions, which are essential for understanding how RL operates in autonomous driving contexts [14][15][16]

Markov Decision Process
- The Markov decision process is presented as the framework for modeling sequential tasks, applicable to a wide range of autonomous driving scenarios [10]
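The core objects above (states, actions, rewards, a policy, and a value function over an MDP) can be made concrete with a minimal sketch. The environment below is a hypothetical 4-state chain, not anything from the article: value iteration applies the Bellman optimality backup until the value function converges, and the greedy policy is then read off from it.

```python
import numpy as np

# Toy deterministic chain MDP (illustrative assumption, not from the article):
# states 0..3, actions 0 = left / 1 = right; entering terminal state 3 pays +1.
N_STATES, GAMMA = 4, 0.9

def step(state, action):
    """Deterministic transition: returns (next_state, reward)."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 and state != N_STATES - 1 else 0.0
    return nxt, reward

def value_iteration(tol=1e-8):
    """Bellman optimality backup: V(s) <- max_a [ r(s,a) + gamma * V(s') ]."""
    V = np.zeros(N_STATES)
    while True:
        V_new = V.copy()
        for s in range(N_STATES - 1):            # terminal state keeps V = 0
            V_new[s] = max(step(s, a)[1] + GAMMA * V[step(s, a)[0]] for a in (0, 1))
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

V = value_iteration()
# Greedy policy: pick the action with the highest one-step lookahead value.
greedy = [max((0, 1), key=lambda a: step(s, a)[1] + GAMMA * V[step(s, a)[0]])
          for s in range(N_STATES - 1)]
print(V, greedy)   # V = [0.81, 0.9, 1.0, 0.0]; policy moves right at every state
```

Note how the discount factor gamma makes the value of a state shrink with its distance from the reward, which is exactly what the greedy policy exploits.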
Common Algorithms
- Foundational algorithm families are discussed, including dynamic programming, Monte Carlo methods, and temporal-difference learning [26][30]

Policy Optimization
- The article distinguishes on-policy from off-policy algorithms, weighing their respective trade-offs in training stability and data utilization [27][28]

Advanced Reinforcement Learning Techniques
- Techniques such as DQN, TRPO, and PPO are introduced, showing how they improve training stability and efficiency in reinforcement learning applications [41][55]

Application in Autonomous Driving
- The article emphasizes reward design and closed-loop training in autonomous driving, where the vehicle's actions influence the environment, necessitating sophisticated modeling techniques [60][61]

Conclusion
- The rapid development of reinforcement learning algorithms and their application in autonomous driving is underscored, with readers encouraged to engage with the technology hands-on [62]
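As a hands-on starting point for the algorithm families above, here is a minimal tabular Q-learning sketch: a temporal-difference method whose greedy max target makes it off-policy, while an epsilon-greedy behaviour policy handles exploration. The chain environment and all constants are illustrative assumptions, not the article's setup.

```python
import random

# Same style of toy chain task as before (illustrative assumption):
# states 0..3, actions 0 = left / 1 = right; entering terminal state 3 pays +1.
N_STATES, ALPHA, GAMMA, EPS = 4, 0.5, 0.9, 0.2

def step(state, action):
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 and state != N_STATES - 1 else 0.0
    return nxt, reward

Q = [[0.0, 0.0] for _ in range(N_STATES)]
random.seed(0)
for _ in range(500):                        # episodes
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy behaviour policy; the update target below is greedy,
        # so the data-collecting policy and the learned policy differ (off-policy).
        a = random.randrange(2) if random.random() < EPS else max((0, 1), key=lambda x: Q[s][x])
        s2, r = step(s, a)
        # One-step TD update toward the bootstrapped target r + gamma * max_a' Q(s', a').
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max((0, 1), key=lambda x: Q[s][x]) for s in range(N_STATES - 1)]
print(policy)   # learned greedy policy heads right toward the reward: [1, 1, 1]
```

Unlike the value-iteration setting, no transition model is consulted here: the agent learns purely from sampled interaction, which is the property that makes TD methods the bridge to the deep RL algorithms (DQN, PPO) named above.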