AI Data Privacy
50 Experts Gather in the "Ice City" to Explore New Paths for Security Protection in the AI Era
Zhong Guo Xin Wen Wang · 2025-11-30 06:23
Harbin, November 30 (China News Network; Shi Yifu, Zhong Yang) — The 2025 "Information Network Security" Northeast China Academic Symposium was held in Harbin on November 29. Fifty network-security experts and scholars from more than 20 universities and research institutes across the country held in-depth exchanges on frontier topics including industrial internet security, AI data security and privacy protection, large-model security, and deepfakes.

Against a backdrop of widespread AI adoption and continually escalating network threats, issues such as industrial internet security, human-factor risk, AI data privacy, and chip vulnerabilities have become increasingly prominent. Professor Yao Yu of Northeastern University addressed the challenges facing industrial internet security, presented the "Diting" industrial internet security capability system and its results in real-world deployments, and explored AI's potential in industrial security.

In addition, the conference set up a sub-forum at Heilongjiang University to share work on key topics in content security and trusted computing, covering image-provenance authentication techniques that combine hardware fingerprints, risk assessment and governance approaches for deepfakes, methods for secure visual content generation and tamper detection, and key techniques for federated learning in heterogeneous settings along with trusted-execution-environment-oriented ...

[Photo: venue of the 2025 "Information Network Security" Northeast China Academic Symposium. Photo by Zhang Liping]
[Photo: round-table discussion on security large models. Photo by Zhang Liping]

Source: China News Network. Editor: Guo Jinjia.
Seven Years Later, It Turns Out the "Honest Man" Robin Li (Li Yanhong) Was Misunderstood
Sou Hu Cai Jing · 2025-09-18 14:34
Core Viewpoint
- Anthropic, an AI company valued at over $180 billion, has announced a change to its user privacy policy, allowing user interaction data to be used for model training unless users opt out by September 28. This move aligns with industry trends in which user data is increasingly utilized for AI training, often at the expense of privacy [2][5][6].

Group 1: Policy Changes and User Data
- Anthropic has modified its privacy policy to require users to actively opt out if they do not want their interaction data used for model training, with data retention periods differing based on user consent [2][5].
- The new policy applies to all personal users of the Claude series, both free and paid, while enterprise and government clients are exempt from the change [2][5].
- The shift reflects a broader trend among AI companies, including OpenAI, in which data from non-paying or low-paying users is used for training unless they explicitly decline [5][6].

Group 2: Industry Context and User Privacy
- The AI industry faces a dilemma between enhancing AI capabilities and protecting user privacy, with many companies lowering privacy standards to access high-quality training data [3][22].
- OpenAI set a precedent by allowing users to disable chat history, signaling a growing recognition of user data rights, yet it still defaults to using data from users who do not opt out [5][6].
- The legal framework in China supports the use of user data for training, with regulations requiring user consent for data usage, highlighting a global trend toward data utilization in AI development [8][9].

Group 3: Data Quality and Training Challenges
- High-quality user interaction data is essential for training AI models, as it provides real-world benchmarks for model performance [5][22].
- Research indicates that training on synthetic data can lead to model degradation, underscoring the importance of real human-generated data for effective AI training [22][24].
- A study found that Chinese AI models have lower levels of data pollution than their international counterparts, suggesting better data quality in their training processes [20][22].