50 Experts Gather in the "Ice City" to Explore New Paths for Security Protection in the AI Era
Zhong Guo Xin Wen Wang· 2025-11-30 06:23
Core Insights
- The 2025 Northeast Academic Seminar on Information Network Security was held in Harbin, focusing on challenges and advances in cybersecurity related to artificial intelligence and industrial internet security [1][2].

Group 1: Seminar Overview
- The seminar gathered 50 experts from more than 20 universities and research institutions to discuss cutting-edge topics such as industrial internet security, AI data privacy, and deepfakes [1][2].
- The event was organized by the Ministry of Public Security's Third Research Institute and co-hosted by several universities and professional committees [5].

Group 2: Key Discussions
- Professor Yao Yu of Northeastern University presented on the challenges of industrial internet security and introduced the "Listening" security capability system, highlighting AI's potential in industrial safety [2].
- Professor Lv Hongwu of Harbin Engineering University reviewed progress in AI-driven traffic classification methods and the challenges of imbalanced traffic, long-sequence dependencies, and high labeling costs [2].
- A roundtable discussion covered model governance, security evaluation systems, AI content safety, and data security, with experts proposing the establishment of a secure and controllable large-model system and the improvement of model security standards [2].

Group 3: Specialized Forums
- A sub-forum at Heilongjiang University focused on critical issues in content security and trusted computing, including image authenticity verification, risk assessment of deepfakes, and methods for detecting visual content tampering [5].
- The forum also explored key technologies in federated learning and efficient data-structure design for trusted execution environments [5].
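The digest above only names imbalanced traffic as a challenge for AI-driven traffic classification; it does not describe the seminar's actual methods. As a minimal illustrative sketch (not from the seminar), one standard mitigation is inverse-frequency class weighting, which upweights rare classes (e.g. attack flows) in the training loss. The flow labels below are hypothetical toy data:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency weights: w_c = N / (K * n_c),
    where N = total samples, K = number of classes, n_c = count of class c.
    Rare classes receive proportionally larger weights."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * n_c) for c, n_c in counts.items()}

# Hypothetical flow labels: benign web traffic dominates, attacks are rare.
labels = ["web"] * 90 + ["dns"] * 8 + ["attack"] * 2
weights = balanced_class_weights(labels)
# The rare "attack" class gets the largest weight, so a classifier's loss
# function penalizes its misclassification more heavily.
```

In practice these weights would be passed to a loss function (for instance, the `weight` argument of a weighted cross-entropy loss) rather than used on their own.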
Seven Years On, It Turns Out We Misjudged the Honest Li Yanhong (Robin Li)
Sou Hu Cai Jing· 2025-09-18 14:34
Core Viewpoint
- Anthropic, an AI company valued at over $180 billion, has announced a change to its user privacy policy allowing user interaction data to be used for model training unless users opt out by September 28. The move aligns with an industry trend of tapping user data for AI training, often at the expense of privacy [2][5][6].

Group 1: Policy Changes and User Data
- Anthropic's revised privacy policy requires users to actively opt out if they do not want their interaction data used for model training, with data-retention periods differing according to user consent [2][5].
- The new policy applies to all personal users of the Claude series, both free and paid, while enterprise and government clients are exempt from the change [2][5].
- The shift reflects a broader trend among AI companies, including OpenAI, of using data from non-paying or low-paying users for training unless they explicitly decline [5][6].

Group 2: Industry Context and User Privacy
- The AI industry faces a dilemma between enhancing AI capabilities and protecting user privacy, with many companies lowering privacy standards to obtain high-quality training data [3][22].
- OpenAI set a precedent by allowing users to disable chat history, signaling growing recognition of user data rights, yet it still defaults to using data from users who do not opt out [5][6].
- China's legal framework supports the use of user data for training, with regulations requiring user consent for data usage, reflecting a global trend toward data utilization in AI development [8][9].

Group 3: Data Quality and Training Challenges
- High-quality user interaction data is essential for training AI models, as it provides real-world benchmarks for model performance [5][22].
- Research indicates that training on synthetic data can lead to model degradation, underscoring the importance of real human-generated data for effective AI training [22][24].
- One study found that Chinese AI models show lower levels of data pollution than their international counterparts, suggesting better data quality in their training processes [20][22].