Quant Musings Series No. 19: Three Ways to Respond to the Failure of AI Stock-Selection Models
SINOLINK SECURITIES· 2025-12-30 08:53
Group 1
- The report's core view: the A-share market style shifted sharply from "value/low volatility" to "small-cap/momentum" in 2024, then converged further to "consensus growth" in 2025, producing a pronounced mean-reversion effect as market-capitalization factors became overcrowded [2][13]
- During the extreme market conditions of August–September 2025, mainstream AI strategies failed to adapt to the rapid style shift, suffering significant net-value drawdowns that were highly correlated with small-cap factor reversals [2][17]
- Both traditional linear multi-factor models and advanced AI strategies saw a notable decline in excess returns under these extreme conditions, with AI strategies hurt more than traditional ones because of their reliance on historical data paths [2][17]

Group 2
- The report discusses strategy homogeneity across the industry: the widespread use of models such as GRU and LightGBM has produced highly correlated factors across institutions, amplifying systemic risk during market reversals [3][24]
- It emphasizes that the mismatch between training-sample distributions and extreme market conditions is a critical driver of AI model failure, since these models struggle to capture asset-linkage patterns during rare events [3][35]

Group 3
- An external risk-control system, independent of the stock-selection models, has been developed to address the shortcomings of traditional timing strategies; it uses a standardized three-layer processing workflow to generate clear long/short signals [4][40]
- Empirical backtesting of this timing framework shows significant improvement in annualized return and drawdown control: the composite strategy on the CSI A500 index reaches a 10.61% annualized return with maximum drawdown reduced to 11.82% [4][45]

Group 4
- The report outlines targeted optimizations of the core AI models. The LightGBM model is enhanced with a "high-quality sample weighting" mechanism and Huber Loss to reduce sensitivity to outliers, yielding a significant reduction in maximum drawdown [5][61]
- For the GRU model, the introduction of Attention Pooling and a memory module trained with CVaR Loss improves the model's ability to exploit historical information, producing a substantial increase in excess returns and a decrease in maximum drawdown [5][67]
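The report does not spell out what the three layers of the timing workflow contain. As a purely hypothetical illustration of how a layered long/short pipeline can be structured (the smoothing windows, the trend feature, and the zero threshold below are all invented, not the report's method):

```python
import numpy as np

def timing_signal(prices, fast=5, slow=20):
    """Hypothetical three-layer timing pipeline (illustrative only):
    Layer 1 denoises the raw series, Layer 2 extracts a trend feature,
    Layer 3 thresholds the feature into a discrete long/short signal."""
    p = np.asarray(prices, dtype=float)
    # Layer 1: denoise with a short moving average
    f = np.convolve(p, np.ones(fast) / fast, mode="valid")
    # Layer 2: trend feature = fast average minus slow average
    s = np.convolve(p, np.ones(slow) / slow, mode="valid")
    feat = f[-len(s):] - s
    # Layer 3: threshold into a clear long (+1) / short (-1) signal
    return np.where(feat > 0, 1, -1)

rng = np.random.default_rng(42)
prices = 100 + np.cumsum(rng.normal(0.1, 1.0, size=250))  # synthetic series
sig = timing_signal(prices)
```

The point of the layered design is separation of concerns: denoising, feature construction, and signal discretization can each be swapped out or tuned independently of the stock-selection model they sit on top of.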
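The LightGBM enhancement in Group 4 combines two ideas: Huber Loss (quadratic near zero, linear in the tails, so extreme residuals are penalized linearly rather than quadratically) and per-sample weights that down-weight low-quality samples. A minimal NumPy sketch of the loss mechanism itself, with invented values for delta, the data, and the weights (in LightGBM these would instead be set via the training objective and sample weights):

```python
import numpy as np

def weighted_huber_loss(y_true, y_pred, weights, delta=1.0):
    """Weighted Huber loss: quadratic for |residual| <= delta, linear
    beyond it, averaged with per-sample quality weights."""
    r = y_true - y_pred
    quad = 0.5 * r**2
    lin = delta * (np.abs(r) - 0.5 * delta)
    per_sample = np.where(np.abs(r) <= delta, quad, lin)
    return np.average(per_sample, weights=weights)

y_true = np.array([0.0, 0.1, -0.1, 5.0])   # last point is an outlier
y_pred = np.zeros(4)
w_flat = np.ones(4)
w_qual = np.array([1.0, 1.0, 1.0, 0.2])    # down-weight the noisy sample

mse   = np.average(0.5 * (y_true - y_pred)**2, weights=w_flat)  # 3.1275
huber = weighted_huber_loss(y_true, y_pred, w_flat)             # 1.1275
both  = weighted_huber_loss(y_true, y_pred, w_qual)             # 0.2844
```

The outlier dominates the squared loss; Huber caps its contribution, and the quality weight shrinks it further, which is the mechanism behind the drawdown reduction the report attributes to this change.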
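Attention Pooling, named in Group 4 as the GRU readout change, replaces "take the last hidden state" with a learned weighted average over all time steps. A NumPy sketch of the pooling step alone, using random stand-ins for the GRU hidden states and the scoring vector (both would be learned in the actual model):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 20, 8                      # sequence length, hidden size
H = rng.normal(size=(T, d))       # stand-in for GRU hidden states h_1..h_T

# Attention pooling: score each step, softmax to weights, weighted sum.
w = rng.normal(size=d)            # scoring vector (learned in practice)
scores = np.tanh(H @ w)
alpha = np.exp(scores) / np.exp(scores).sum()   # weights sum to 1
pooled = alpha @ H                # (d,) summary using every time step

last = H[-1]                      # conventional last-step readout, for contrast
```

Because every step contributes in proportion to its learned relevance, informative early observations are not squeezed through the final hidden state alone, which is consistent with the report's claim of better use of historical information.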
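The CVaR Loss mentioned alongside the GRU memory module targets the tail of the loss distribution rather than its mean. A generic sketch of the idea (the report does not give its exact formulation; the alpha level and toy losses here are illustrative):

```python
import numpy as np

def cvar_loss(per_sample_losses, alpha=0.05):
    """Conditional Value-at-Risk of the losses: the mean of the worst
    alpha-fraction of per-sample losses. Minimizing it forces the model
    to improve its tail scenarios, not just its average case."""
    losses = np.sort(np.asarray(per_sample_losses, dtype=float))
    k = max(1, int(np.ceil(alpha * losses.size)))
    return losses[-k:].mean()      # average of the k largest losses

losses = np.concatenate([np.full(95, 0.1), np.full(5, 2.0)])
mean_loss = losses.mean()                  # 0.195 — tail barely visible
tail_loss = cvar_loss(losses, alpha=0.05)  # 2.0   — tail fully exposed
```

A mean-loss objective would barely register the five bad samples; the CVaR objective is driven entirely by them, which matches the report's motivation of making the model robust to rare extreme-market events.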