Group 1
- The core issue of AI-generated content, particularly the risks associated with AI face-swapping technology, has gained significant attention following an incident involving actor Wen Zhengrong [1]
- Hu Zhenquan, president of 360 Digital Security Group, highlighted the challenges in identifying AI-generated content due to its realism, indicating a need for improved detection technologies [1][3]
- The 2025 World Internet Conference in Wuzhen served as a platform for the release of the "Large Model Security White Paper," which outlines the security vulnerabilities associated with AI large models [3][4]

Group 2
- The white paper identified 281 security vulnerabilities, of which 177 were unique to large models, representing over 60% of the total [3]
- Five key risk categories threatening large model security were outlined: infrastructure security risks, content security risks, data and knowledge base security risks, user-end security risks, and the complexities arising from the interconnection of these risks [4]
- The proposed dual governance strategy pairs "external security," focused on protecting the model, with "native platform security," which embeds security capabilities within core components [4]

Group 3
- Despite the controversies surrounding AI intelligent agents, Hu Zhenquan expressed optimism about their future, likening their current stage to the early days of personal computers [5]
- He emphasized that intelligent agents, as essential carriers for large model applications, are expected to evolve and become mainstream in AI applications [5]
- The development of intelligent agents is anticipated to bring significant advancements in efficiency and capability in the near future [5]
360's Hu Zhenquan on the chaos of AI face-swapping: hard to see through with current recognition and identification technology