BIS

Occasional paper: Managing explanations - how regulators address the AI explainability problem
BIS · 2025-09-10 08:06
Investment Rating
- The report does not provide a specific investment rating for the industry

Core Insights
- The increasing adoption of artificial intelligence (AI) in financial institutions is transforming operations, risk management, and customer interactions, but the limited explainability of complex AI models poses significant challenges for both financial institutions and regulators [7][9]
- Explainability is crucial for transparency, accountability, regulatory compliance, and consumer trust, yet complex AI models such as deep learning and large language models (LLMs) are often difficult to interpret [7][9]
- Robust model risk management (MRM) practices are needed for AI, balancing explainability against model performance while ensuring risks are adequately assessed and managed [9][19]

Summary by Sections

Introduction
- AI models are increasingly applied across all business activities in financial institutions, with a more cautious approach in customer-facing applications [11]
- The report highlights the importance of explainability in AI models, particularly for critical business activities [12]

MRM and Explainability
- Existing MRM guidelines are often high-level and may not adequately address the specific challenges posed by advanced AI models [19][22]
- The report discusses the need for clearer articulation of explainability concepts within existing MRM requirements to better accommodate AI models [19][22]

Challenges in Implementing Explainability Requirements
- Financial institutions face challenges in meeting existing regulatory requirements for AI model explainability, particularly with complex models such as deep neural networks [40][56]
- The report emphasizes the need for explainability requirements tailored to the audience, such as senior management, consumers, or regulators [58]

Potential Adjustments to MRM Guidelines
- The report suggests adjustments to MRM guidelines to better address the unique challenges posed by AI models, including clearer definitions and expectations regarding model changes [59][60]

Conclusion
- Overcoming explainability challenges is crucial for financial institutions to leverage AI effectively while maintaining regulatory compliance and managing risks [17][18]
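For readers unfamiliar with what post-hoc explainability looks like in practice, the sketch below applies SHAP feature attributions to an otherwise opaque classifier. It is a minimal illustration only, not a method from the report: the synthetic credit-style features, the scikit-learn gradient-boosted model, and the use of the shap library are all assumptions made for this example.

```python
# Minimal, illustrative sketch of post-hoc explainability (SHAP attributions).
# All data, feature names and model choices are hypothetical assumptions for
# this example; they are not taken from the BIS report.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for a credit dataset: four made-up applicant features.
feature_names = ["income", "debt_ratio", "age", "tenure"]
X = rng.normal(size=(1000, 4))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# An opaque-ish model standing in for the complex AI models discussed above.
model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer decomposes each prediction into per-feature contributions,
# one common way to give model owners, auditors or supervisors a handle on
# why an individual decision came out the way it did.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])

for i, row in enumerate(shap_values):
    name, value = max(zip(feature_names, row), key=lambda t: abs(t[1]))
    print(f"applicant {i}: most influential feature = {name} ({value:+.3f})")
```

Attributions of this kind serve only one audience and one notion of explanation; as the report notes, what counts as an adequate explanation differs for senior management, consumers and regulators.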
The application of generative artificial intelligence in central banking
BIS · 2025-03-11 06:20
Data Science in Central Banking
Closing remarks by Paolo Angelini, Deputy Governor of the Bank of Italy
4th Irving Fisher Committee and Bank of Italy Workshop, Rome, 20 February 2025
Good afternoon ladies and gentlemen, I am happy to be here with you today as we mark the conclusion of this workshop on data science in central banking. As highlighted in the Irving Fisher Committee Annual Report for 2024, launching this periodic workshop series jointly with the Bank of Italy, back in 2019, was a far-sighted str ...
The rise of generative AI (2024): analysing exposure, substitution effects and inequality impacts on the US labour market
BIS · 2025-01-03 01:35
BIS Working Papers No 1207
The rise of generative AI: modelling exposure, substitution, and inequality effects on the US labour market
by Raphael Auer, David Köpfer, Josef Švéda
Monetary and Economic Department, September 2024
JEL classification: E24, E51, G21, G28, J23, J24, M48, O30, O33
Keywords: Labour market, Artificial intelligence, Employment, Inequality, Automation, ChatGPT, GPT, LLM, Wage, Technology
This publication is available on the BIS website (www.bis.org). BIS Working Papers are written by me ...
Project Nexus 2024 report: enabling instant cross-border payments
BIS · 2024-07-25 06:00
BIS Innovation Hub, July 2024
Project Nexus: Enabling instant cross-border payments
BANK NEGARA MALAYSIA
BANK OF THAILAND
This publication is available at bis.org. Publication date: July 2024.
© Bank for International Settlements 2024. All rights reserved. Use of this publication is subject to the terms and conditions of use published on bis.org. Brief excerpts may be reproduced or translated provided the source is stated.
BIS Innovation Hub Project Nexu ...