Core Viewpoint
- The emerging technology of artificial intelligence (AI) presents significant challenges for the international community, particularly in the context of regulation and governance [1][3].

Regulatory Framework
- AI systems are sorted into four risk tiers: unacceptable risk (e.g., social credit scoring systems, which are banned outright), high risk (systems with a significant impact on health, safety, or fundamental rights), limited risk (users must be informed that they are interacting with AI), and minimal or no risk (no regulatory requirements) [1][4]. An illustrative sketch of this taxonomy appears at the end of this article.
- The European Union is regarded as a leader in AI regulation, with the 2024 EU Artificial Intelligence Act establishing a comprehensive regulatory framework [1][4].

AI Projects in Financial Institutions
- Financial institutions, including the Banque de France (the French central bank), are actively pursuing AI projects, particularly in anti-money-laundering and counter-terrorism-financing work [3].
- The EU aims to build a trustworthy, comprehensive AI framework with unified rules from the outset [3].

Risks Associated with AI
- Three additional risks of AI systems were identified:
  - Cyber risk: financial institutions, given their interconnectedness, are the target of about half of global cyber attacks [3].
  - Concentration risk among service providers, which can create operational risk and synchronized market reactions, raising the likelihood of market disruptions [3].
  - Explainability risk: relying on AI for decision-making without human verification can lead to litigation, liability exposure, and inconsistent decisions [4].

Conclusion on AI Regulation
- AI is viewed as a double-edged sword: it strengthens regulators' ability to monitor risks while also amplifying the potential impact of those same risks [4].
- The EU's proactive stance on AI regulation is stringent by design, aiming to set standards that other countries can follow to ensure responsible AI development [4].
Why Is the EU Ahead on AI Regulation? The Banque de France's Deputy Governor Explains
Di Yi Cai Jing·2025-10-23 10:54
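
To make the four-tier taxonomy in the Regulatory Framework section concrete, here is a minimal, hypothetical Python sketch. The tier names and the example systems follow the summary above; the `OBLIGATIONS` mapping, the `obligation_for` helper, and the wording of the high-risk obligation are illustrative assumptions, not text from the Act.

```python
from enum import Enum


class RiskTier(Enum):
    """The four risk tiers described in the article's summary of the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # e.g., social credit scoring systems (banned)
    HIGH = "high"                  # significant impact on health, safety, or rights
    LIMITED = "limited"            # users must be told they are interacting with AI
    MINIMAL = "minimal"            # no regulatory requirements


# Hypothetical mapping from tier to a headline obligation. The entry for HIGH
# is an assumption for illustration; the article names only the impact criterion.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited",
    RiskTier.HIGH: "strict requirements before deployment (assumed)",
    RiskTier.LIMITED: "disclose AI interaction to users",
    RiskTier.MINIMAL: "none",
}


def obligation_for(tier: RiskTier) -> str:
    """Look up the headline obligation for a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    for tier in RiskTier:
        print(f"{tier.value:>12}: {obligation_for(tier)}")
```

The point of modeling the tiers as an enum rather than free-form strings is that the set is closed: the Act defines exactly four levels, so an exhaustive mapping like `OBLIGATIONS` can be checked at a glance for completeness.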