Core Viewpoint
- The integration of large language models (LLMs) into the banking sector is driving digital transformation, but the inherent opacity of these models poses significant explainability challenges, necessitating a transparent and trustworthy AI application framework to ensure safe and compliant operations [3][4].

Regulatory Constraints on Explainability
- Financial regulators increasingly emphasize transparency in AI models, requiring banks to disclose decision-making processes to meet compliance standards and protect consumer rights; this is the primary external constraint on LLM applications [6].
- In scenarios that directly affect customer rights, such as credit approval, algorithmic decisions must provide clear justifications to ensure fairness and accountability. The EU's General Data Protection Regulation (GDPR) mandates transparency in automated decision-making, and Chinese regulators likewise require banks to explain why credit applications are rejected [7].
- Global regulatory trends are converging on mandatory AI explainability: Singapore's FEAT principles and China's guidelines emphasize fairness, ethics, accountability, and transparency, and the EU AI Act will impose strict transparency and explainability obligations on high-risk financial AI systems [8].

Technical Explainability Challenges of LLMs
- The architecture and operational mechanisms of LLMs inherently limit technical explainability: their complex structures and vast parameter counts create a "black box" effect [10].
- The attention mechanism, once thought to offer a window into model behavior, has been shown to correlate only weakly with the features that actually drive predictions, undermining its reliability as an explanation tool. The sheer scale of parameters also overwhelms traditional explanation algorithms, which struggle to analyze such high-dimensional models [11].
- "Hallucination," where LLMs generate plausible but factually incorrect content, further exacerbates the explainability problem: such outputs cannot be traced back to reliable inputs or training data, creating significant risks in financial contexts [12].
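The claim that attention weights correlate only weakly with feature importance can be made concrete with a minimal NumPy sketch. All numbers here are illustrative (not from the article): a single attention head assigns its highest weight to a token whose value vector is near zero, so a contribution-based measure of importance ranks the tokens differently than the attention weights do.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy single-head attention over 3 tokens; query-key logits chosen so
# that token 0 receives the most attention (illustrative values).
scores = np.array([2.0, 0.8, 0.1])
attn = softmax(scores)                 # attention weights, sum to 1

# Value vectors: token 0 has a zero value vector, so despite its high
# attention weight it contributes nothing to the output.
V = np.array([[0.0, 0.0],
              [5.0, 5.0],
              [1.0, 1.0]])

output = attn @ V                      # attention-weighted sum of values
pred = output.sum()                    # scalar "prediction"

# Contribution-based importance: how much each token's weighted value
# vector actually adds to the prediction.
contrib = np.abs([(attn[i] * V[i]).sum() for i in range(len(V))])

print("attention weights :", np.round(attn, 3))
print("contribution score:", np.round(contrib, 3))
print("attention ranks token", attn.argmax(),
      "highest; contribution ranks token", contrib.argmax(), "highest")
```

Here attention points at token 0, while the prediction is in fact driven by token 1: an auditor reading only the attention map would explain the decision by the wrong input, which is precisely why regulators and researchers treat raw attention as an unreliable explanation in high-stakes settings like credit approval.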
Explainability Challenges for Commercial Banks Applying Large Language Models | Finance & Technology
Tsinghua Financial Review · 2025-09-07 10:13