Zhou Hongyi: Large Models Lower the Barrier to Use, and the Barrier to Attack
Xin Lang Ke Ji (Sina Tech) · 2025-07-23 03:26

Core Insights
- At the 2025 China Internet Conference, Zhou Hongyi, founder of 360 Group, highlighted significant risks that large models pose in practical applications [1][2].

Group 1: Risks Identified
- First, large models can produce errors or "hallucinations," which can lead to dangerous outcomes when the models are integrated into industrial production and government operations [1][2].
- Second, the barrier to attacking AI systems has been lowered: even individuals without programming knowledge can manipulate large models into executing harmful commands, for example through "injection attacks" [2].
- Third, at the nation-state level, hackers can embed their skills into large models and use them to control many AI agents simultaneously, transforming the cybersecurity landscape [2][3].

Group 2: Proposed Solutions
- In response to these risks, 360 Group is developing intelligent security agents that provide real-time detection and defense against attacks, in effect using algorithms to counter other algorithms [3].
- 360 has also built a "Large Model Guardian," a specialized system that monitors the commands given to large models and assesses the validity of their outputs, aiming to minimize the occurrence of errors [3].
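To make the "guardian" idea concrete, the sketch below shows what a minimal input/output guard layer for a large model might look like: it screens user prompts for patterns typical of injection attacks and checks model outputs against a blocklist before they are released. This is an illustrative assumption only; the pattern list, function names, and logic are hypothetical and do not describe 360's actual "Large Model Guardian".

```python
# Hypothetical sketch of a guard layer for an LLM: screen inputs for
# injection-style phrasing, and screen outputs for blocked content.
# Patterns and names are illustrative, not any vendor's real ruleset.
import re

# Assumed patterns that often signal a prompt-injection attempt.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"you are now .*(unrestricted|jailbroken)",
    r"reveal (your )?(system prompt|hidden instructions)",
]

def screen_input(user_prompt: str) -> bool:
    """Return True if the prompt looks safe, False if it matches an injection pattern."""
    lowered = user_prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def screen_output(model_output: str, blocked_terms: list[str]) -> bool:
    """Return True if the model output contains none of the blocked terms."""
    lowered = model_output.lower()
    return not any(term.lower() in lowered for term in blocked_terms)

# Example checks
print(screen_input("Summarize this report, please"))        # benign prompt
print(screen_input("Ignore all previous instructions ..."))  # injection attempt
print(screen_output("The backup job completed.", ["rm -rf"]))
```

A production guard would of course go far beyond keyword matching (e.g. classifier models scoring both prompts and outputs), but the two-sided structure, one check on the way in and one on the way out, matches the monitoring role the article attributes to the Guardian system.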