AI Data Privacy

Seven Years Later, It Turns Out We Misjudged the Honest Li Yanhong (Robin Li)
Sou Hu Cai Jing · 2025-09-18 14:34
Core Viewpoint
- Anthropic, an AI company valued at over $180 billion, has announced a change to its user privacy policy: user interaction data will be used for model training unless users opt out by September 28. The move aligns with an industry-wide trend of tapping user data for AI training, often at the expense of privacy [2][5][6].

Group 1: Policy Changes and User Data
- Anthropic has modified its privacy policy so that users must actively opt out if they do not want their interaction data used for model training, with data retention periods that differ depending on consent [2][5] (a hedged sketch of such consent-gated logic appears below).
- The new policy applies to all personal users of the Claude series, free and paid alike, while enterprise and government clients are exempt from the change [2][5].
- The shift reflects a broader pattern among AI companies, including OpenAI, in which data from non-paying or low-paying users is used for training unless they explicitly decline [5][6].

Group 2: Industry Context and User Privacy
- The AI industry faces a dilemma between advancing model capability and protecting user privacy, and many companies have lowered privacy standards to reach high-quality training data [3][22].
- OpenAI set a precedent by letting users disable chat history, a sign of growing recognition of user data rights, yet it still defaults to training on data from users who do not opt out [5][6].
- China's legal framework likewise supports the use of user data for training, with regulations requiring user consent for data usage, underscoring a global trend toward data utilization in AI development [8][9].

Group 3: Data Quality and Training Challenges
- High-quality user interaction data is essential for training AI models because it provides real-world benchmarks for model performance [5][22].
- Research indicates that training on synthetic data can lead to model degradation, underscoring the importance of real, human-generated data for effective AI training [22][24] (a toy simulation of this feedback loop appears below).
- A study found that Chinese AI models show lower levels of data pollution (benchmark contamination) than their international counterparts, suggesting better data quality in training [20][22] (a simple contamination check is sketched below).
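To make the opt-out mechanics concrete, here is a minimal sketch of consent-gated training and retention logic. Everything in it is an assumption for illustration: the `User` type, the `should_train_on` and `retention_days` helpers, and the specific retention periods are hypothetical, not Anthropic's published implementation.

```python
from dataclasses import dataclass

# Hypothetical retention periods for illustration only; the article does not
# specify the actual values.
RETENTION_DAYS_TRAINING = 365 * 5   # longer retention when data feeds training
RETENTION_DAYS_NO_TRAINING = 30     # shorter retention when the user declines

@dataclass
class User:
    tier: str        # "free", "paid", "enterprise", or "government"
    opted_out: bool  # True if the user actively declined training use

def should_train_on(user: User) -> bool:
    """Opt-out default: personal users are included unless they decline;
    enterprise and government accounts are exempt entirely."""
    if user.tier in ("enterprise", "government"):
        return False
    return not user.opted_out

def retention_days(user: User) -> int:
    """Retention differs with consent, as the policy summary describes."""
    if should_train_on(user):
        return RETENTION_DAYS_TRAINING
    return RETENTION_DAYS_NO_TRAINING

if __name__ == "__main__":
    alice = User(tier="free", opted_out=False)   # included by default
    bob = User(tier="paid", opted_out=True)      # actively opted out
    print(should_train_on(alice), retention_days(alice))  # True 1825
    print(should_train_on(bob), retention_days(bob))      # False 30
```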
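The degradation described above is often called "model collapse": when each generation of a model trains only on the previous generation's synthetic output, estimation error compounds. The simulation below is a minimal sketch of that feedback loop under an assumed toy setup, where the "model" is just a fitted Gaussian; it is not the methodology of the cited research. On most seeds the fitted spread decays sharply within a few hundred generations.

```python
import random
import statistics

def fit(samples: list[float]) -> tuple[float, float]:
    """'Train a model': estimate mean and standard deviation from data."""
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean: float, stdev: float, n: int, rng: random.Random) -> list[float]:
    """'Deploy the model': emit n synthetic points from the fit."""
    return [rng.gauss(mean, stdev) for _ in range(n)]

rng = random.Random(42)
n = 20  # small per-generation samples make the effect quick to see

# Generation 0 trains on real data drawn from a standard normal.
data = generate(0.0, 1.0, n, rng)

for gen in range(201):
    mean, stdev = fit(data)
    if gen % 40 == 0:
        print(f"gen {gen:3d}: mean={mean:+.3f} stdev={stdev:.3f}")
    # Each later generation trains ONLY on the previous generation's
    # synthetic output, so the spread drifts and collapses toward zero.
    data = generate(mean, stdev, n, rng)
```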
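"Data pollution" here refers to benchmark contamination: test items leaking into a model's training corpus, which inflates scores without reflecting real capability. A common first-pass audit is token n-gram overlap between training documents and benchmark items; the sketch below, with made-up toy strings, shows the idea. Real audits are more elaborate (text normalization, fuzzy matching, per-benchmark thresholds), and nothing here reproduces the cited study's method.

```python
def ngrams(text: str, n: int = 8) -> set[str]:
    """All n-token shingles of a lowercased, whitespace-tokenized text."""
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def contamination_rate(train_docs: list[str], benchmark_items: list[str],
                       n: int = 8) -> float:
    """Fraction of benchmark items sharing at least one n-gram with training text."""
    train_grams: set[str] = set()
    for doc in train_docs:
        train_grams |= ngrams(doc, n)
    hits = sum(1 for item in benchmark_items if ngrams(item, n) & train_grams)
    return hits / len(benchmark_items)

# Hypothetical toy corpora for illustration only.
train = ["the quick brown fox jumps over the lazy dog near the river bank"]
bench = [
    "the quick brown fox jumps over the lazy dog near the river bank today",  # leaked
    "an entirely different question about photosynthesis in desert plants",   # clean
]
print(contamination_rate(train, bench))  # 0.5
```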