Deep Learning Models
Deep Learning Model Can Predict Minute-by-Minute Developmental Changes in Cells, Laying the Foundation for a "Digital Embryo"
Ke Ji Ri Bao· 2025-12-26 00:37
According to the team, MultiCell is the first algorithm able to predict every class of cell behavior at single-cell precision during multicellular self-assembly. Because it can capture subtle differences in cell dynamics, it may eventually support early diagnosis and drug screening.

A joint team from MIT, the University of Michigan, and Northeastern University published a paper in the latest issue of Nature Methods introducing a geometric deep learning model named "MultiCell". For the first time, the model predicts, at single-cell resolution, how each cell's behavior changes minute by minute during fruit fly embryonic development. Building on this, a general-purpose model of multicellular development could be designed, yielding a "digital embryo" usable for drug screening and even for guiding the design of artificial tissues.

How an embryo turns from a clump of cells into a complete organism with a head, a tail, and organs has been a central puzzle of developmental biology for a century. Scientists have long known that cells divide, move, and fold, but what a particular cell will do in the next minute has remained hard to predict.

The model was trained and tested on four-dimensional whole-embryo data with sub-micron resolution and a high frame rate; each embryo contains roughly 5,000 cells with annotated boundaries and nuclei. In tests, the model not only judges whether a cell will exhibit a particular behavior but also predicts how many minutes later that behavior will occur. The team draws an analogy with AlphaFold's protein-structure prediction: AlphaFold infers a protein's three-dimensional structure from its amino-acid sequence ...
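The article does not describe MultiCell's internals, but "geometric deep learning at single-cell resolution" can be illustrated as node-level prediction on a cell-neighborhood graph. The sketch below is a toy framing under that assumption; the layer sizes, feature names, and behavior classes are hypothetical and are not taken from the paper.

```python
# Illustrative sketch only: per-cell behavior prediction framed as node
# classification on a cell-contact graph. Not the MultiCell architecture.
import torch
import torch.nn as nn

class CellGraphLayer(nn.Module):
    """One message-passing step: each cell aggregates its neighbors' features."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, x, adj):
        # x: (n_cells, dim) per-cell features (e.g. shape, position, motion)
        # adj: (n_cells, n_cells) binary contact matrix between neighboring cells
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neighbor_mean = adj @ x / deg                      # average over contacting cells
        return torch.relu(self.update(torch.cat([x, neighbor_mean], dim=-1)))

class BehaviorPredictor(nn.Module):
    """Predicts, for every cell, a behavior class and the minutes until it occurs."""
    def __init__(self, in_dim=16, hidden=64, n_behaviors=4):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden)
        self.layers = nn.ModuleList([CellGraphLayer(hidden) for _ in range(3)])
        self.behavior_head = nn.Linear(hidden, n_behaviors)  # e.g. divide, move, fold, rest
        self.timing_head = nn.Linear(hidden, 1)              # minutes until the behavior

    def forward(self, x, adj):
        h = torch.relu(self.embed(x))
        for layer in self.layers:
            h = layer(h, adj)
        return self.behavior_head(h), self.timing_head(h)

# Toy usage: 5,000 cells with 16 geometric features and a random contact graph.
cells = torch.randn(5000, 16)
contacts = (torch.rand(5000, 5000) < 0.001).float()
logits, minutes = BehaviorPredictor()(cells, contacts)
```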
Quantitative Learning Notes No. 1: Forecasting the 10-Year Treasury Bond Yield with a Stacked LSTM Model
EBSCN· 2025-12-15 07:56
December 15, 2025 | Macro Research
Forecasting the 10-Year Treasury Bond Yield with a Stacked LSTM Model — Quantitative Learning Notes No. 1
Key Points
1. Financial time-series forecasting and neural network models
Financial time-series forecasting has evolved through three main stages: traditional econometric models, traditional machine learning models, and deep learning models. Deep learning models adapt well to the non-stationarity, nonlinearity, high noise, and long memory that characterize financial time series, and they are now one of the mainstream approaches to financial time-series forecasting.
Refine the model design. Building on the existing model, adjust and optimize the time window, data processing, network architecture, and training strategy.
Use multi-dimensional inputs. Extend the inputs from a single yield series to macro, market, and sentiment variables, so that forecasts better reflect economic logic and capture information more comprehensively.
Build hybrid models. Combine the LSTM with traditional econometric models or other machine learning models, e.g. ARIMAX-LSTM or CNN-LSTM-ATT, to exploit the strengths of different models, offset the weaknesses of a standalone LSTM, and improve forecast accuracy.
Introduce a rolling backtest. Use a rolling-window backtest in which a fixed-length sample window moves forward through time, so the model is continuously re-estimated and keeps producing forecasts; this helps it adapt to market changes and improves robustness (see the sketch below).
4. Risk warnings
The simple model structure currently leads to relatively large forecast errors; ...
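The rolling-window mechanism in the last point can be sketched as a simple loop: re-fit on the latest fixed-length window, forecast the next horizon, then roll forward. The window lengths, the model, and the synthetic data below are assumptions for illustration, not the report's actual settings.

```python
# Minimal rolling-window backtest loop of the kind the report describes.
import numpy as np

def rolling_backtest(series, fit_model, train_window=500, horizon=5, step=5):
    """Re-fit on a fixed-length trailing window, forecast the next `horizon`
    steps, then roll the window forward and repeat."""
    forecasts, actuals = [], []
    start = 0
    while start + train_window + horizon <= len(series):
        train = series[start : start + train_window]
        future = series[start + train_window : start + train_window + horizon]
        model = fit_model(train)                 # re-estimate on the latest window only
        forecasts.append(model(horizon))         # predict the next `horizon` observations
        actuals.append(future)
        start += step                            # roll the window forward
    forecasts, actuals = np.array(forecasts), np.array(actuals)
    mae = np.abs(forecasts - actuals).mean()
    return forecasts, actuals, mae

# Toy usage with a naive "carry the last value forward" model.
rng = np.random.default_rng(0)
yields = np.cumsum(rng.normal(scale=0.02, size=1500)) + 2.0   # synthetic yield path, in %
naive = lambda train: (lambda h: np.full(h, train[-1]))
_, _, mae = rolling_backtest(yields, naive)
print(f"rolling MAE: {mae * 100:.2f} bp")
```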
Quantitative Learning Notes No. 1: Forecasting the 10-Year Treasury Bond Yield with a Stacked LSTM Model
EBSCN· 2025-12-15 06:53
Macro Research | Forecasting the 10-Year Treasury Bond Yield with a Stacked LSTM Model — Quantitative Learning Notes No. 1 | December 15, 2025
2. Forecasting Treasury yields with a stacked LSTM model
Build hybrid models. Combine the LSTM with traditional econometric models or other machine learning models, e.g. ARIMAX-LSTM or CNN-LSTM-ATT, to exploit the strengths of different models, offset the weaknesses of a standalone LSTM, and improve forecast accuracy.
This report adopts a classic, robust architecture of three stacked LSTM layers with Dropout regularization to build a 10-year Treasury yield forecasting model, as a first exploration of deep learning models in fixed-income quantitative research (a minimal architecture sketch follows this summary).
The data cover the 10-year Treasury yield from the beginning of 2021 to the data-retrieval date (December 12, 2025). Time-series samples use the first differences of the yield over the past 60 trading days as input features and the first differences over the coming week as the prediction target. The resulting LSTM network is of moderate complexity, with roughly 130,000 trainable parameters; the best model emerged at the 27th training epoch, with a mean absolute error of 1.43 bp on the test set. The best model's forecast for the 10-year Treasury yield this week (December 15-19, 2025) ...
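The architecture described above can be sketched directly: 60 days of first-differenced yields in, the next week's (five trading days') first differences out, three stacked LSTM layers with Dropout. The layer widths, dropout rate, and optimizer below are assumptions; the report does not disclose its exact hyperparameters.

```python
# Minimal sketch of a three-layer stacked LSTM + Dropout yield forecaster.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

LOOKBACK, HORIZON = 60, 5   # 60 trading days in, one week (5 days) out

model = keras.Sequential([
    layers.Input(shape=(LOOKBACK, 1)),
    layers.LSTM(64, return_sequences=True),
    layers.Dropout(0.2),
    layers.LSTM(64, return_sequences=True),
    layers.Dropout(0.2),
    layers.LSTM(32),
    layers.Dropout(0.2),
    layers.Dense(HORIZON),   # the 5 future first differences
])
model.compile(optimizer="adam", loss="mae")

def make_samples(yields):
    """Turn a yield series into (60-day diff window, next-5-day diff) pairs."""
    d = np.diff(yields)
    n = len(d) - LOOKBACK - HORIZON + 1
    X = np.array([d[i : i + LOOKBACK] for i in range(n)])
    y = np.array([d[i + LOOKBACK : i + LOOKBACK + HORIZON] for i in range(n)])
    return X[..., None], y

# Toy usage on a synthetic yield path.
X, y = make_samples(np.cumsum(np.random.normal(scale=0.02, size=1300)) + 2.0)
model.fit(X, y, epochs=1, batch_size=32, verbose=0)
```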
Which AI Article Paraphrasing Tool Is Best? An In-Depth Review to Help You Choose
Sou Hu Cai Jing· 2025-12-14 16:14
In an era where content-creation efficiency comes first, are you looking for a tool that can genuinely automate the whole pipeline from collection to publishing and free up your hands and your time? Among the flood of AI writing tools, most serve a single function, either generating or rewriting, and struggle to meet content operators' needs for large-scale, high-quality, multi-channel distribution. Drawing on the core needs of the industry, this article reviews several mainstream "AI article paraphrasing" tools in depth to help you find the "content factory" that suits you best.

AI article paraphrasing essentially uses artificial intelligence to perform semantic understanding, structural analysis, and linguistic recombination on existing text, generating new text that keeps the core information while expressing it differently. The process relies on natural language processing (NLP) and deep learning models (such as the Transformer architecture). According to the 2024 AI-Generated Content (AIGC) White Paper released by the China AI Industry Development Alliance, text-generation technology has progressed from early template filling and simple substitution to today's deep semantic understanding and creative paraphrasing. A study published in the Nature sub-journal Nature Machine Intelligence notes that modern large language models can alter more than 70% of the vocabulary and sentence structure in paraphrasing tasks while preserving the original factual information, effectively evading simple duplicate-detection checks.

In this review, we compare the tools on degree of automation, feature integration, originality of output, publishing flexibility, and overall cost-effectiveness ...
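The Transformer-based paraphrasing the article alludes to can be sketched with the Hugging Face `transformers` library. The model identifier below is a placeholder, not one of the reviewed commercial tools; any sequence-to-sequence checkpoint fine-tuned for paraphrasing would slot in.

```python
# Minimal sketch of transformer-based paraphrasing; the model name is a
# hypothetical placeholder and must be replaced with a real checkpoint.
from transformers import pipeline

paraphraser = pipeline("text2text-generation", model="your-org/your-paraphrase-model")

def paraphrase(text, n_variants=3):
    """Generate several rewordings that keep the core information of `text`."""
    outputs = paraphraser(
        text,
        num_return_sequences=n_variants,
        num_beams=max(4, n_variants),   # beam search so the variants differ
        max_length=256,
    )
    return [o["generated_text"] for o in outputs]

for variant in paraphrase("Deep learning models are now a mainstream tool for time-series forecasting."):
    print(variant)
```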
China Post Factor Weekly: Deep Learning Models See Significant Drawdowns; High Volatility Dominates - 20250901
China Post Securities· 2025-09-01 05:47
Quantitative Models and Construction

1. Model Name: barra1d
- **Model Construction Idea**: This model is part of the GRU factor family and is designed to capture short-term market dynamics through daily data inputs[4][6][8]
- **Model Construction Process**: The barra1d model uses daily market data to calculate factor exposures and returns. It applies industry-neutralization and standardization processes to ensure comparability across stocks. The model is rebalanced monthly, selecting the top 10% of stocks with the highest factor scores for long positions and the bottom 10% for short positions, with equal weighting[17][28][29] (a code sketch of this construction appears after the factor results below)
- **Model Evaluation**: The barra1d model demonstrated strong performance in multiple stock pools, showing resilience in volatile market conditions[4][6][8]

2. Model Name: barra5d
- **Model Construction Idea**: This model extends the barra1d framework to a five-day horizon, aiming to capture slightly longer-term market trends[4][6][8]
- **Model Construction Process**: Similar to barra1d, the barra5d model uses five-day aggregated data for factor calculation. It follows the same industry-neutralization, standardization, and rebalancing processes as barra1d[17][28][29]
- **Model Evaluation**: The barra5d model experienced significant drawdowns in recent periods, indicating sensitivity to market reversals[4][6][8]

3. Model Name: open1d
- **Model Construction Idea**: This model focuses on open-price data to identify short-term trading opportunities[4][6][8]
- **Model Construction Process**: The open1d model calculates factor exposures based on daily opening prices. It applies the same industry-neutralization and rebalancing methodology as other GRU models[17][28][29]
- **Model Evaluation**: The open1d model showed moderate performance, with some drawdowns in recent periods[4][6][8]

4. Model Name: close1d
- **Model Construction Idea**: This model emphasizes closing-price data to capture end-of-day market sentiment[4][6][8]
- **Model Construction Process**: The close1d model uses daily closing prices for factor calculation. It follows the same construction and rebalancing methodology as other GRU models[17][28][29]
- **Model Evaluation**: The close1d model demonstrated stable performance, with positive returns in certain stock pools[4][6][8]

---

Model Backtesting Results (all figures from [29][30])

| Model | Weekly Excess Return | Monthly Excess Return | Year-to-Date Excess Return |
|---|---|---|---|
| barra1d | +0.57% | +0.75% | +4.38% |
| barra5d | -2.17% | -3.76% | +4.13% |
| open1d | -0.97% | -2.85% | +4.20% |
| close1d | -1.68% | -4.50% | +1.90% |

---

Quantitative Factors and Construction

1. Factor Name: Beta
- **Factor Construction Idea**: Measures the historical market sensitivity of a stock[15]
- **Factor Construction Process**: Calculated as the regression coefficient of a stock's returns against market returns over a specified period[15]

2. Factor Name: Size
- **Factor Construction Idea**: Captures the size effect, where smaller firms tend to outperform larger ones[15]
- **Factor Construction Process**: Defined as the natural logarithm of total market capitalization[15]
3. Factor Name: Momentum
- **Factor Construction Idea**: Identifies stocks with strong recent performance[15]
- **Factor Construction Process**: Combines historical excess return mean, volatility, and cumulative deviation into a weighted formula: $Momentum = 0.74 \times \text{Volatility} + 0.16 \times \text{Cumulative Deviation} + 0.10 \times \text{Residual Volatility}$[15]

4. Factor Name: Volatility
- **Factor Construction Idea**: Measures the risk or variability in stock returns[15]
- **Factor Construction Process**: Weighted combination of historical residual volatility and other measures[15]

5. Factor Name: Valuation
- **Factor Construction Idea**: Captures the value effect, where undervalued stocks tend to outperform[15]
- **Factor Construction Process**: Defined as the inverse of the price-to-book ratio[15]

6. Factor Name: Liquidity
- **Factor Construction Idea**: Measures the ease of trading a stock[15]
- **Factor Construction Process**: Weighted combination of turnover rates over monthly, quarterly, and yearly horizons: $Liquidity = 0.35 \times \text{Monthly Turnover} + 0.35 \times \text{Quarterly Turnover} + 0.30 \times \text{Yearly Turnover}$[15]

7. Factor Name: Profitability
- **Factor Construction Idea**: Identifies stocks with strong earnings performance[15]
- **Factor Construction Process**: Weighted combination of various profitability metrics, including analyst forecasts and financial ratios[15]

8. Factor Name: Growth
- **Factor Construction Idea**: Captures the growth potential of a stock[15]
- **Factor Construction Process**: Weighted combination of earnings and revenue growth rates[15]

---

Factor Backtesting Results (Beta, Size, Valuation, Liquidity, and Profitability figures from [21]; Momentum, Volatility, and Growth figures from [24])

| Factor | Weekly Return | Monthly Return | Year-to-Date Return |
|---|---|---|---|
| Beta | +0.14% | +1.65% | +5.29% |
| Size | +0.36% | +1.00% | +6.37% |
| Momentum | +2.21% | +8.80% | +23.30% |
| Volatility | +2.82% | +12.29% | +25.25% |
| Valuation | +1.47% | +2.30% | -2.26% |
| Liquidity | +1.80% | +5.91% | +19.70% |
| Profitability | +4.57% | +7.53% | +27.56% |
| Growth | +2.76% | +6.51% | +14.51% |
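The construction process described for the GRU models, standardize, neutralize by industry, then hold the top decile long and the bottom decile short with equal weights, can be sketched as below. Column names and the toy cross-section are assumptions for illustration, not the report's data.

```python
# Illustrative sketch of industry-neutralized factor scoring and a top/bottom
# decile equal-weight long-short portfolio at one rebalance date.
import numpy as np
import pandas as pd

def neutralize_and_standardize(df):
    """df columns: 'factor', 'industry'. Returns industry-neutralized z-scores."""
    def zscore(s):
        return (s - s.mean()) / s.std(ddof=0)
    return df.groupby("industry")["factor"].transform(zscore)

def long_short_weights(scores, quantile=0.10):
    """Equal-weight long the top decile and short the bottom decile of scores."""
    hi, lo = scores.quantile(1 - quantile), scores.quantile(quantile)
    w = pd.Series(0.0, index=scores.index)
    longs, shorts = scores >= hi, scores <= lo
    w[longs] = 1.0 / longs.sum()
    w[shorts] = -1.0 / shorts.sum()
    return w

# Toy monthly rebalance on a random cross-section of 500 stocks.
rng = np.random.default_rng(0)
universe = pd.DataFrame({
    "factor": rng.normal(size=500),
    "industry": rng.integers(0, 30, size=500),
    "next_month_return": rng.normal(0.01, 0.08, size=500),
})
scores = neutralize_and_standardize(universe)
weights = long_short_weights(scores)
print("long-short return:", (weights * universe["next_month_return"]).sum())
```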
Treasury Bond Futures Series Report: Applying a Multi-Channel Deep Learning Model to Factor Timing for Treasury Bond Futures
Guo Tai Jun An Qi Huo· 2025-08-28 08:42
1. Report Industry Investment Rating
No relevant content provided.

2. Core Viewpoints of the Report
- The report proposes a dual-channel deep-learning model (LSTM and GRU) that integrates daily-frequency and minute-frequency data. It captures market information on different time scales, significantly improves the out-of-sample prediction accuracy and stability of the strategy (especially during market downturns), and offers a new, strongly generalizing approach to rebuilding the quantitative timing system for the bond market [2].
- The dual-channel model shows excellent generalization and robustness in out-of-sample tests and maintains a high win rate in bear markets, effectively offsetting the tendency of traditional factors to fail in market downturns [3].
- Within the multi-factor timing framework, the weight of deep-learning factors should be kept relatively low, with machine-learning factors playing a supplementary role, so as to combine interpretability with improved performance [43][44].

3. Summary by Relevant Catalogs

3.1 Deep-Learning Model Introduction
- Traditional quantitative factors in the bond market have weakened in recent years, so bond-market quantitative factors need to be rebuilt and re-mined. Deep-learning methods can uncover complex relationships in the data, and RNN, LSTM, and GRU are considered suitable for the Treasury bond futures timing task [7][8].
- An RNN can process time-series data but suffers from vanishing gradients on long sequences [9].
- An LSTM addresses the vanishing-gradient problem with a cell state and three gating units, allowing it to learn long-range dependencies in sequences [15].
- A GRU simplifies the LSTM structure, reduces the number of learnable parameters, and offers high parameter efficiency and fast training [19].
- A dual-channel model is designed to process daily-frequency and minute-frequency data simultaneously, extracting features on different time scales to predict the daily-frequency returns of Treasury bond futures while reducing the risk of overfitting [22] (a minimal architecture sketch follows this summary).

3.2 Treasury Bond Futures Timing Test

3.2.1 Backtesting Settings
- The target variable is the open-to-open return of the 10-year Treasury bond futures. The backtest runs from January 2016 to August 2025 with daily rebalancing, 100% margin, 1x leverage, and a round-trip fee of 0.01% [25][26][27].

3.2.2 Daily-Frequency Channel Model
- The single daily-frequency channel model built on daily features performs well in-sample but poorly out-of-sample, showing clear overfitting [33].

3.2.3 Dual-Channel Model
- The dual-channel model fuses multi-frequency time-series information. Adding minute-frequency information significantly improves out-of-sample prediction, enhances generalization and stability, and maintains a relatively high win rate on both long and short positions [40][41][42].

3.3 Deep-Learning Allocation in the Multi-Factor Framework
- Deep-learning factors in the multi-factor timing framework perform well but carry overfitting risks and lack interpretability. Their weight should therefore be kept relatively low, with machine-learning factors playing a supplementary role [43][44].

3.4 Conclusion
- The report explores the application of deep-learning models to quantitative timing of Treasury bond futures and proposes a dual-channel deep-learning framework based on multi-frequency data fusion, which can effectively improve the performance of multi-factor strategies [45].
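A minimal sketch of the dual-channel idea: one branch reads a daily-frequency feature sequence, the other a minute-frequency sequence, and the fused representation predicts the next daily return. Sequence lengths, layer sizes, and the LSTM/GRU split between branches are assumptions for illustration, not the report's exact settings.

```python
# Illustrative dual-channel (daily + minute frequency) return predictor.
from tensorflow import keras
from tensorflow.keras import layers

DAILY_STEPS, DAILY_FEATS = 20, 8      # e.g. 20 days of 8 daily factors
MINUTE_STEPS, MINUTE_FEATS = 240, 4   # e.g. one day of minute bars, 4 features

daily_in = keras.Input(shape=(DAILY_STEPS, DAILY_FEATS), name="daily")
minute_in = keras.Input(shape=(MINUTE_STEPS, MINUTE_FEATS), name="minute")

daily_feat = layers.LSTM(32)(daily_in)    # daily-frequency channel
minute_feat = layers.GRU(32)(minute_in)   # minute-frequency channel

fused = layers.Concatenate()([daily_feat, minute_feat])
fused = layers.Dropout(0.2)(fused)
out = layers.Dense(1, name="next_day_return")(fused)

model = keras.Model(inputs=[daily_in, minute_in], outputs=out)
model.compile(optimizer="adam", loss="mse")
model.summary()
```

Fusing the two branches only after each has its own recurrent encoder is what lets the model weight coarse daily features and fine intraday features separately, which is the mechanism the report credits for the improved out-of-sample stability.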
Cell Press Journal: 舒妮/黄伟杰 Team Publishes a Review on AI-Empowered Multimodal Imaging for Precision Medicine in Neuropsychiatric Disorders
生物世界· 2025-05-26 23:57
Core Viewpoint
- The integration of multimodal neuroimaging and artificial intelligence (AI) is revolutionizing the early diagnosis and personalized treatment of neuropsychiatric disorders, addressing the challenges posed by their complex pathology and clinical heterogeneity [2][6].

Multimodal Neuroimaging: A Comprehensive Brain Examination
- Traditional single-modality brain examinations are limited, while multimodal imaging can decode the brain from structural, functional, and molecular dimensions, enabling early intervention [7][8].
- Structural imaging (e.g., MRI) reveals brain tissue volume and cortical thickness, functional imaging (e.g., fMRI, EEG) captures neuronal activity, and molecular imaging (e.g., PET) tracks pathological markers like amyloid proteins, providing early warnings for conditions like Alzheimer's disease [9].

AI as a Puzzle Solver
- AI demonstrates three key capabilities in handling vast heterogeneous data: feature fusion (early, mid, and late fusion), deep learning models, and clinical prediction tools, significantly enhancing diagnostic accuracy [12][13].
- For instance, multimodal AI models have improved early Alzheimer's diagnosis accuracy to 92.7%, surpassing single-modality methods by over 15% [13].

Practical Achievements: AI's Impact
- AI has achieved high diagnostic accuracy, distinguishing Alzheimer's from Lewy body dementia at 87% and predicting epileptic seizures with over 98% accuracy [14].
- It can predict the efficacy of depression medications with 89% accuracy and assess rates of cognitive decline [15].
- AI identified three subtypes among over 2,000 bipolar disorder patients, guiding personalized treatment approaches [16].

Challenges and Breakthroughs: Path to Clinical Application
- The integration of multimodal neuroimaging data faces challenges such as data availability, heterogeneity, and AI model interpretability, compounded by issues like class imbalance, algorithmic bias, and data privacy [20].
- Addressing these challenges is crucial for developing robust AI models based on multimodal neuroimaging [20].

Future Research Directions
- The future of AI in neuropsychiatric disorders includes developing transformer models for cross-modal data processing, dynamically monitoring brain network changes, and creating lightweight models for clinical use [23][24].
- Despite significant advancements, further exploration of clinical effectiveness and usability is needed to transition from research to practical applications [24].
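The early- versus late-fusion distinction the review highlights can be shown with a toy example: early fusion concatenates features from two modalities before training a single classifier, while late fusion trains a classifier per modality and averages their predicted probabilities. The feature dimensions, modalities, and classifiers below are illustrative assumptions, not the review's models.

```python
# Toy contrast of early vs. late fusion for two made-up imaging modalities.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
mri = rng.normal(size=(n, 30))   # e.g. cortical-thickness features (structural)
pet = rng.normal(size=(n, 10))   # e.g. amyloid-uptake features (molecular)
y = rng.integers(0, 2, size=n)   # toy diagnosis label

# Early fusion: concatenate modality features, then train one classifier.
early = LogisticRegression(max_iter=1000).fit(np.hstack([mri, pet]), y)

# Late fusion: one classifier per modality, then average predicted probabilities.
clf_mri = LogisticRegression(max_iter=1000).fit(mri, y)
clf_pet = LogisticRegression(max_iter=1000).fit(pet, y)
late_prob = 0.5 * clf_mri.predict_proba(mri)[:, 1] + 0.5 * clf_pet.predict_proba(pet)[:, 1]
late_pred = (late_prob > 0.5).astype(int)

print("early-fusion train accuracy:", early.score(np.hstack([mri, pet]), y))
print("late-fusion train accuracy:", (late_pred == y).mean())
```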