Who Is Liable When AI Causes Harm? Finoverse CEO: Responsibility Should Be Shared by Multiple Parties
Mei Ri Jing Ji Xin Wen · 2025-11-17 13:25
Core Insights
- The adoption rate of artificial intelligence (AI) has increased significantly since the release of ChatGPT in 2022, but trust in AI systems remains a critical challenge, with only 46% of respondents willing to trust AI [1]
- Many advanced AI models are perceived as "black boxes," making it difficult even for their developers to fully understand their decision-making logic and raising concerns about transparency and trust [1]
- Establishing trust in AI requires strong human oversight, transparent data usage, and enterprise-level testing; responsible AI should enhance human outcomes rather than replace human judgment [1]

Responsibility and Governance
- When an AI system causes severe errors, responsibility should be shared among developers, deployers, and users, much as traffic safety relies on the cooperation of drivers, pedestrians, and regulators [2]
- The rapid advancement of AI capabilities raises concerns about malicious use and a potential loss of human control, making governance necessary to ensure that technological development serves human welfare [2]
- A flexible, risk-based governance model is essential: stricter regulation in sensitive areas such as finance, healthcare, and public safety, with more freedom for low-risk applications under clear ethical guidelines [3]