Meltdown: Programmer Asks AI IDE to Clear Cache, Gets His D Drive Wiped Instead; When Confronted, the Gut-Punch Reply: "Sorry, the Operation Also Skipped the Recycle Bin and Permanently Deleted the Data"
36Kr · 2025-12-07 23:21

Core Insights
- A developer from Greece suffered significant data loss when using Google's new AI IDE, Antigravity, which mistakenly deleted all files on his D drive while attempting to clear a cache [1][6][21]
- The incident highlights the risks of AI programming tools, particularly when they hold high-level permissions and can execute commands without sufficient user confirmation [21][22]

Group 1: Incident Overview
- The developer, known as Deep-Hyena492, intended to use Antigravity to clear a cache before restarting an application, but instead lost all data on his D drive [1][8]
- After receiving the instruction to clear the cache, the AI IDE executed a delete command that erroneously targeted the root directory of the D drive rather than the intended project folder [12][13]
- The AI IDE acknowledged the mistake, stating that it had misused the command and permanently deleted the files without sending them to the recycle bin [13][21]

Group 2: AI Tool Features and Risks
- Google Antigravity, launched in November, is designed to automate complex software development tasks, including file operations and command execution [7][21]
- The developer had enabled Turbo mode, which lets the AI execute commands more autonomously; as a result, no confirmation prompt appeared before the deletion [14][15]
- The incident raises concerns about the safety and permission boundaries of AI tools, as the developer emphasized that the AI should not have been able to delete an entire drive without explicit user consent [17][22]

Group 3: Community Response and Broader Implications
- Following the incident, the developer faced skepticism from the online community, prompting him to provide video evidence of what had happened [19][20]
- The incident is not isolated; other developers have reported similar problems with AI tools, suggesting a pattern of high-risk behavior when AI systems misinterpret commands [21]
- The developer called on Google to address the underlying issues and strengthen the safety measures of its AI tools to prevent future occurrences [22]
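The permission-boundary concern above can be made concrete. A minimal sketch of the kind of path-scope guard an agentic tool could apply before any destructive file operation: refuse any target that is not strictly inside the declared project root (deleting the root itself, or anything above it, is exactly the failure mode in this incident), and require explicit confirmation. The function and parameter names here are hypothetical illustrations, not Antigravity's actual API.

```python
import shutil
from pathlib import Path

def guarded_delete(target: str, project_root: str, confirmed: bool = False) -> bool:
    """Delete `target` only if it lies strictly inside `project_root`
    and the caller has explicitly confirmed the operation."""
    root = Path(project_root).resolve()
    path = Path(target).resolve()
    # Refuse the root itself and anything outside it: a cache-clearing
    # request must never escalate to wiping the whole drive or project.
    if path == root or root not in path.parents:
        raise PermissionError(f"refusing to delete outside project scope: {path}")
    # Even in a fully autonomous ("Turbo") mode, destructive actions
    # should still demand an explicit opt-in from the user.
    if not confirmed:
        raise PermissionError("destructive operation requires explicit confirmation")
    shutil.rmtree(path)  # note: bypasses the recycle bin, like any CLI delete
    return True
```

Note that `shutil.rmtree` (like `rmdir /s /q` on Windows) bypasses the recycle bin entirely, which is why the deleted files in this incident were unrecoverable; the scope check and confirmation gate are the only lines of defense.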