Digital Technology Risk Exposure Data of Listed Companies (2007-2024)
Sou Hu Cai Jing · 2025-12-10 07:57
Group 1
- The article presents data on listed companies' exposure to digital technology risks from 2007 to 2024, using FinBERT, a BERT-based financial sentiment model, to analyze the Management Discussion and Analysis (MD&A) sections of annual reports for sentiment related to digital technology security [2][3]
- The methodology identifies text relevant to digital technology risks, constructs a keyword list based on existing guidelines, and extracts the sentences that reflect these risks [3][4]
- A training dataset is created by annotating a sample of sentences to determine whether each indicates risk exposure or a preventive measure, using a combination of AI models to improve labeling accuracy [4][5]

Group 2
- The final exposure level of digital technology risk is defined as the maximum negative-sentiment probability of disclosed risks minus the average positive-sentiment probability of preventive measures, yielding specific indicators for data-security and cyber-risk exposure [6]
- The effectiveness of the digital technology risk exposure indicators is validated by examining their correlation with other risk types, revealing a significant positive relationship with financial and operational risks [7][8]
- The model's accuracy in sentiment analysis of digital technology risks is confirmed through random sampling and manual review, showing high performance, especially on clearly polarized sentences [8]
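The keyword filtering and exposure formula described above can be sketched in a few lines of Python. This is an illustrative reconstruction only: the keyword list and the sentence-level sentiment probabilities (which would come from a FinBERT classifier in the actual study) are mocked, and the function names are not from the source.

```python
# Assumed, abbreviated keyword list; the study builds one from existing guidelines.
DIGITAL_RISK_KEYWORDS = {"data breach", "cyber attack", "network security"}

def contains_risk_keyword(sentence: str) -> bool:
    """Keep only MD&A sentences that mention a digital-technology risk term."""
    s = sentence.lower()
    return any(k in s for k in DIGITAL_RISK_KEYWORDS)

def exposure_score(risk_neg_probs, preventive_pos_probs):
    """Exposure = max negative-sentiment probability over disclosed-risk
    sentences minus the mean positive-sentiment probability over
    preventive-measure sentences, per the definition in the summary."""
    if not risk_neg_probs:
        return 0.0  # no disclosed risk sentences -> no measured exposure
    max_neg = max(risk_neg_probs)
    mean_pos = (sum(preventive_pos_probs) / len(preventive_pos_probs)
                if preventive_pos_probs else 0.0)
    return max_neg - mean_pos

# Illustrative probabilities only:
score = exposure_score([0.91, 0.74], [0.60, 0.80])
print(round(score, 2))  # 0.91 - 0.70 = 0.21
```

A firm that discloses strongly negative risk language but also describes credible preventive measures thus scores lower than one with the same risk language and no mitigation discussion.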
AI Empowers Asset Allocation (Part 18): LLMs Advance the Integration of Asset Allocation and Investment
Guoxin Securities · 2025-10-29 14:43
Group 1: Core Conclusions
- LLMs reshape the information foundation of asset allocation, enabling the absorption of unstructured information such as sentiment, policies, and financial reports that traditional quantitative strategies have struggled to use [1][11]
- Effective implementation relies on a collaborative "LLM + real-time data + optimizer" mechanism: the LLM handles cognition and reasoning, external APIs and RAG supply real-time information, and numerical optimizers compute the portfolio weights [1][12]
- LLMs have established operational pathways in sentiment-signal extraction, financial-report analysis, investment reasoning, and agent construction, providing a realistic basis for enhancing traditional asset-allocation systems [1][3]

Group 2: Information Advantage Reconstruction
- LLMs enable efficient extraction, quantification, and embedding of soft information such as sentiment, financial reports, and policy texts into allocation models, significantly improving market-expectation perception and strategy sensitivity [2][11]
- The modular design of LLM, APIs, RAG, and numerical optimizers improves strategy stability and interpretability while remaining highly scalable for multi-asset allocation [2][12]
- A complete chain of capabilities from signal extraction to agent execution has been formed, demonstrating the LLM's role in quantitative factor extraction and allocation [2][20]

Group 3: Case Studies
- The first two case studies show how sentiment and financial-report signals can be transformed into quantitative factors for asset allocation, improving strategy sensitivity and foresight [20][21]
- The third case study constructs a complete investment-agent process, emphasizing collaboration among the LLM, real-time data sources, and numerical optimizers in a full chain from information to signal to optimization to execution [20][31]

Group 4: Future Outlook
- The integration of LLMs with reinforcement learning, Auto-Agent, multi-agent systems, and personalized research platforms will drive asset allocation from tool-level usage toward systematic, intelligent operation, becoming a core technological path for buy-side institutions to build information advantages and strategic moats [3][39]
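The "LLM + real-time data + optimizer" division of labor can be sketched minimally: the LLM produces per-asset sentiment scores (mocked here), and a simple numerical optimizer turns them into weights. This is a hypothetical illustration, not the report's actual pipeline; the function, its parameters, and the volatility-penalized weighting rule are all assumptions.

```python
def sentiment_to_weights(sentiment, volatility, risk_aversion=2.0):
    """Map LLM sentiment scores to long-only portfolio weights.

    Raw tilt = sentiment / (risk_aversion * vol^2), a simplified
    risk-adjusted signal; negative tilts are clipped to zero and the
    remainder is normalized to sum to 1.
    """
    raw = [max(s / (risk_aversion * v * v), 0.0)
           for s, v in zip(sentiment, volatility)]
    total = sum(raw)
    if total == 0.0:
        n = len(sentiment)
        return [1.0 / n] * n  # all signals non-positive -> equal weight
    return [r / total for r in raw]

# Mocked LLM sentiment output for [equities, bonds, gold]:
weights = sentiment_to_weights([0.6, 0.2, -0.1], [0.20, 0.05, 0.15])
print([round(w, 3) for w in weights])  # [0.158, 0.842, 0.0]
```

Note how the low-volatility bond sleeve dominates despite its weaker sentiment score: the optimizer, not the LLM, decides the final sizing, which is the stability and interpretability argument made in Group 2.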