Core Viewpoint - ChatGPT has a significant security vulnerability known as "zero-click attack," allowing attackers to steal sensitive data without user interaction [1][2][5]. Group 1: Attack Mechanism - The vulnerability arises when ChatGPT connects to third-party applications, where attackers can inject malicious prompts into documents uploaded by users [9][10]. - Attackers can embed invisible payloads in documents, prompting ChatGPT to inadvertently send sensitive information to the attacker's server [14][18]. - The attack can be executed by malicious insiders who can easily manipulate accessible documents, increasing the likelihood of successful indirect prompt injection [16][17]. Group 2: Data Exfiltration - Attackers can use image rendering to exfiltrate data, embedding sensitive information in image URL parameters that are sent to the attacker's server when ChatGPT renders the image [20][24]. - The process involves instructing ChatGPT to search for API keys in connected services like Google Drive and send them to the attacker's endpoint [29][30]. Group 3: OpenAI's Mitigation Efforts - OpenAI is aware of the vulnerability and has implemented measures to check URLs for safety before rendering images [32][33]. - However, attackers have found ways to bypass these measures by using trusted services like Azure Blob for image hosting, which logs requests and parameters [37][38]. Group 4: Broader Implications and Recommendations - The security issue poses a significant risk to enterprises, potentially leading to the leakage of sensitive documents and data [46]. - Experts recommend strict access controls, monitoring solutions tailored for AI activities, and user education on the risks of uploading unknown documents [48].
ChatGPT惊现“零点击攻击”,API密钥被轻松泄露,OpenAI暂未解决
量子位·2025-08-12 09:35