Spreading Large-Model Compliance Risk: First Generative AI Infringement Liability Insurance Policies Launched
Nan Fang Du Shi Bao·2025-05-22 09:41

Core Viewpoint
- The first generative AI content infringement liability insurance in China has been launched in Wuxi, providing risk coverage for AI-generated content and addressing potential infringement of copyright, portrait rights, and reputation rights [1][2]

Group 1: Insurance Product Details
- China Pacific Insurance's Wuxi branch signed a liability insurance agreement with Wuxi Xuelang Digital Technology Co., providing 700,000 yuan in risk coverage for the company's self-developed Xuelang Industrial Model [1]
- The insurance is designed to cover unintentional infringement of third-party rights during use of the AI model, with a one-year policy period from May 22, 2025 to May 21, 2026 and a premium of 15,000 yuan (see the note at the end for the rough premium rate these figures imply) [1]
- Claims will be settled on the basis of litigation costs actually incurred by the insured and compensation amounts determined by court judgments or arbitration awards [1]

Group 2: Market Demand and Industry Context
- Demand for generative AI infringement liability insurance has grown amid a series of infringement disputes since 2024, most notably the "AI-generated Ultraman infringing images" case in January 2025, which raised awareness of the responsibilities borne by AI service providers [2]
- Xuelang Digital Technology said the product addresses an industry pain point and helps generative AI service companies mitigate the risks they face [2]
- Insurance companies have shown strong willingness to underwrite this new line of coverage, with the premium rate negotiated to reflect the actual risks faced by Xuelang Digital Technology [2]

Group 3: Insurance Coverage Scope
- Traditional liability insurance focuses mainly on hardware failures or data breaches and is ill-suited to disputes over AI-generated content infringement [3]
- The new product covers the entire AI training and inference process, easing technology companies' concerns about innovation and establishing a risk mitigation mechanism for AI users [3]
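
Note: for context, a back-of-the-envelope sketch of the annual premium rate implied by the reported figures (a 15,000 yuan premium against 700,000 yuan of coverage), assuming a single flat-rate policy with no deductible, sub-limits, or fees, none of which the report specifies:

  implied annual premium rate = 15,000 yuan / 700,000 yuan ≈ 2.14%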