零点击攻击
Search documents
ChatGPT惊现“零点击攻击”,API密钥被轻松泄露,OpenAI暂未解决
3 6 Ke· 2025-08-12 10:08
Core Viewpoint - ChatGPT has a significant security vulnerability known as "zero-click attack," allowing attackers to steal sensitive data without user interaction, including API keys [1][3]. Attack Chain Formation - The vulnerability arises when ChatGPT connects to third-party applications, where attackers can inject malicious prompts into documents uploaded by users [6][10]. - Attackers can embed invisible payloads in documents, prompting ChatGPT to inadvertently send sensitive information to their servers [10][12]. Invasion Process - Users upload documents to ChatGPT for analysis, which can lead to the execution of malicious commands embedded within those documents [7][10]. Data Exfiltration Method - Attackers can embed sensitive data, such as API keys, into image URLs, which are rendered by ChatGPT, allowing for immediate data transmission to the attackers' servers without user interaction [13][18]. OpenAI's Mitigation Measures - OpenAI has implemented measures to check URLs for safety before rendering images, aiming to prevent data leaks [19][20]. - However, attackers have found ways to bypass these measures by using trusted services like Azure Blob for data exfiltration [21][22]. Broader Security Implications - The vulnerability poses a significant risk to enterprises, as malicious insiders could easily exploit it to access and contaminate sensitive documents [12][25]. - Traditional security training may not effectively mitigate this risk, as the attack does not require user interaction [25][26]. Expert Recommendations - Experts suggest implementing strict access controls for AI connectors, deploying monitoring solutions tailored for AI activities, and educating users about the risks of uploading unknown documents [25][27].
ChatGPT惊现“零点击攻击”,API密钥被轻松泄露,OpenAI暂未解决
量子位· 2025-08-12 09:35
Core Viewpoint - ChatGPT has a significant security vulnerability known as "zero-click attack," allowing attackers to steal sensitive data without user interaction [1][2][5]. Group 1: Attack Mechanism - The vulnerability arises when ChatGPT connects to third-party applications, where attackers can inject malicious prompts into documents uploaded by users [9][10]. - Attackers can embed invisible payloads in documents, prompting ChatGPT to inadvertently send sensitive information to the attacker's server [14][18]. - The attack can be executed by malicious insiders who can easily manipulate accessible documents, increasing the likelihood of successful indirect prompt injection [16][17]. Group 2: Data Exfiltration - Attackers can use image rendering to exfiltrate data, embedding sensitive information in image URL parameters that are sent to the attacker's server when ChatGPT renders the image [20][24]. - The process involves instructing ChatGPT to search for API keys in connected services like Google Drive and send them to the attacker's endpoint [29][30]. Group 3: OpenAI's Mitigation Efforts - OpenAI is aware of the vulnerability and has implemented measures to check URLs for safety before rendering images [32][33]. - However, attackers have found ways to bypass these measures by using trusted services like Azure Blob for image hosting, which logs requests and parameters [37][38]. Group 4: Broader Implications and Recommendations - The security issue poses a significant risk to enterprises, potentially leading to the leakage of sensitive documents and data [46]. - Experts recommend strict access controls, monitoring solutions tailored for AI activities, and user education on the risks of uploading unknown documents [48].
ChatGPT 连接器被曝漏洞:无需用户操作即可窃取敏感数据
Huan Qiu Wang Zi Xun· 2025-08-07 08:10
Core Insights - Security researchers have disclosed a vulnerability in OpenAI's Connectors that allows attackers to extract sensitive information from Google Drive accounts without user interaction [1][3] - The vulnerability is classified as a "zero-click" attack, requiring only the user's email and shared documents to execute [3] - OpenAI has implemented mitigation measures after being informed of the vulnerability earlier this year, although they have not publicly commented on the issue [3] Company Overview - Connectors is a feature launched by OpenAI for ChatGPT, enabling users to integrate tools and data, search files, pull real-time data, and reference content [3] - The feature currently supports at least 17 different services [3] Security Implications - The attack can only extract limited data per instance and cannot remove entire documents [3] - The rapid response from OpenAI indicates a proactive approach to security following the discovery of the vulnerability [3]