Claude Cracks a 20-Year-Old Vulnerability in 90 Minutes: a 50k-Star "Secure" System Falls From Grace, and Even the Linux Kernel Isn't Spared
量子位· 2026-03-29 05:28
Core Viewpoint
- The rapid advancement of large language models (LLMs) has enabled them to autonomously discover and exploit zero-day vulnerabilities in software, significantly changing the cybersecurity landscape [13][14].

Group 1: Vulnerability Discovery
- Anthropic's model, Claude, identified its first high-risk vulnerability in Ghost CMS within 90 minutes, one that allowed unauthorized access to sensitive data [3][21].
- Claude has autonomously identified and verified over 500 high-risk security vulnerabilities in open-source software libraries that had previously gone unnoticed by the community and by professional tools [21][22].
- The discovered vulnerabilities include a SQL injection flaw in Ghost CMS and multiple remotely exploitable buffer overflow vulnerabilities in the Linux kernel [26][29].

Group 2: Implications for Cybersecurity
- The ability of AI to find vulnerabilities that humans typically struggle to detect poses a significant security risk, since attackers could leverage similar models to exploit them [12][39].
- The time from vulnerability discovery to exploitation has shrunk from months to mere hours, creating unprecedented challenges for defenders [45].
- The rapid evolution of LLM capabilities suggests that within a year even average models may be able to perform similar tasks, raising concerns that attackers can now operate faster than defenders [37][41].

Group 3: Call to Action
- The cybersecurity community urgently needs to address the security implications of LLMs while it is still within the critical window for developing effective defenses [46].
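The article names a SQL injection flaw in Ghost CMS but gives no technical details of it. For readers unfamiliar with the vulnerability class, here is a minimal, self-contained sketch of how SQL injection works in general; every table, function, and payload below is hypothetical and illustrative, not Ghost CMS's actual code.

```python
import sqlite3

# Minimal in-memory database standing in for a CMS user table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

def find_user_vulnerable(name: str):
    # BUG: user input is concatenated into the SQL string, so a crafted
    # input can rewrite the query -- the classic SQL injection pattern.
    query = f"SELECT name, secret FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # FIX: a parameterized query treats the input purely as data.
    return conn.execute(
        "SELECT name, secret FROM users WHERE name = ?", (name,)
    ).fetchall()

# The injected payload makes the WHERE clause always true, dumping
# every row -- including secrets the caller should never see.
payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # leaks all rows
print(find_user_safe(payload))        # matches nothing
```

The fix shown is the standard one: parameterized queries keep attacker-controlled input out of the query's structure, which is why injection flaws like the one reported are considered both severe and easily preventable.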