Transparency Evaluation of 15 Large AI Models: Two Let Users Withdraw Their Data from AI Training
Nan Fang Du Shi Bao · 2025-12-19 01:28
Core Insights
- The report highlights a significant lack of transparency among the 15 tested domestic AI models, with only DeepSeek disclosing the general sources of its training data [1][3]
- The report emphasizes the importance of enhancing transparency in AI services to ensure fairness, avoid bias, and meet legal compliance requirements [2][10]

Group 1: Transparency and Data Disclosure
- Among the 15 AI models tested, only DeepSeek provided information about its training data sources, which include publicly available information and data obtained through third-party collaborations [3][4]
- The average transparency score across the models was 60.2, indicating room for improvement in areas such as training data sources, user data withdrawal mechanisms, and copyright protection [3][10]
- The report calls for continuous enhancement of model transparency to facilitate compliance assessments by external stakeholders [3][10]

Group 2: User Empowerment and Data Management
- Two models, DeepSeek and Tencent Yuanbao, offer an "opt-out" switch that lets users choose whether their data may be used for model training [5][6]
- Five models provide mechanisms for users to withdraw consent for their data to be used in model optimization, although completely erasing data already integrated into model parameters remains technically challenging [5][6]
- The report suggests prioritizing user empowerment and respect for user rights, drawing on successful international practices [8][10]

Group 3: AI Content Generation and Identification
- All tested models have implemented AI-generated content labeling, a significant improvement over previous assessments [9][10]
- While the models have improved in disclosing content sources, they still lack features such as "rest reminders" for prolonged interactions, which some international models offer [12][13]
- The report advocates responsible, phased disclosures to enhance service transparency and educate users about generative AI [13]