大语言模型运行安全

Search documents
当Search Agent遇上不靠谱搜索结果,清华团队祭出自动化红队框架SafeSearch
机器之心· 2025-10-16 07:34
Core Insights - The article discusses the vulnerabilities of large language model (LLM)-based search agents, emphasizing that while they can access real-time information, they are susceptible to unreliable web sources, which can lead to the generation of unsafe outputs [2][7][26]. Group 1: Search Agent Vulnerabilities - A real-world case is presented where a developer lost $2,500 due to a search error involving unreliable code from a low-quality GitHub page, highlighting the risks associated with trusting search results [4]. - The research identifies that 4.3% of nearly 9,000 search results from Google were deemed suspicious, indicating a prevalence of low-quality websites in search results [11]. - The study reveals that search agents are not as robust as expected, with a significant percentage of unsafe outputs generated when exposed to unreliable search results [12][26]. Group 2: SafeSearch Framework - The SafeSearch framework is introduced as a method for automated red-teaming to assess the safety of LLM-based search agents, focusing on five types of risks including harmful outputs and misinformation [14][21]. - The framework employs a multi-stage testing process to generate high-quality test cases, ensuring comprehensive coverage of potential risks [16][19]. - SafeSearch aims to enhance transparency in the development of search agents by providing a quantifiable and scalable safety assessment tool [37]. Group 3: Evaluation and Results - The evaluation of various search agent architectures revealed that the impact of unreliable search results varies significantly, with the GPT-4.1-mini model showing a 90.5% susceptibility in a search workflow scenario [26][36]. - Different LLMs exhibit varying levels of resilience against risks, with GPT-5 and GPT-5-mini demonstrating superior robustness compared to others [26][27]. - The study concludes that effective filtering methods can significantly reduce the attack success rate (ASR), although they cannot eliminate risks entirely [36][37]. Group 4: Implications and Future Directions - The findings underscore the importance of systematic evaluation in ensuring the safety of search agents, as they are easily influenced by low-quality web content [37]. - The article suggests that the design of search agent architectures can significantly affect their security, advocating for a balance between performance and safety in future developments [36][37]. - The research team hopes that SafeSearch will become a standardized tool for assessing the safety of search agents, facilitating their evolution in both performance and security [37].