Meta fixes bug that could leak users' AI prompts and generated content

Core Insights
- Meta has addressed a security vulnerability that allowed users of its AI chatbot to access private prompts and AI-generated responses of other users [1][4]
- The bug was identified by Sandeep Hodkasia, who received a $10,000 bug bounty for reporting it [1][2]
- Meta confirmed the fix was deployed on January 24, 2025, and found no evidence of malicious exploitation of the bug [1][4]

Security Vulnerability Details
- The vulnerability arose from how Meta AI managed user prompts, allowing unauthorized access to other users' data [2][3]
- The unique numbers assigned to prompts and responses were "easily guessable," which could enable automated tools to scrape other users' data [3]

Context and Implications
- The incident highlights the ongoing security and privacy challenges tech companies face as they develop AI products [4]
- Meta AI's standalone app also faced issues at launch, with users unintentionally sharing private conversations [5]
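The "easily guessable" identifiers describe a classic insecure-direct-object-reference (IDOR) pattern: records keyed by sequential numbers with no ownership check, so an attacker can enumerate neighboring IDs. The sketch below is purely illustrative, not Meta's actual code; `PromptStore`, `safe_save`, and `safe_fetch` are hypothetical names showing the flaw and one common mitigation (unguessable IDs plus a server-side authorization check).

```python
import secrets

# Hypothetical in-memory store mimicking the flawed pattern:
# prompts keyed by sequential integers, no ownership check on reads.
class PromptStore:
    def __init__(self):
        self._next_id = 1000
        self._prompts = {}

    def save(self, owner, text):
        pid = self._next_id          # sequential -> easily guessable
        self._next_id += 1
        self._prompts[pid] = (owner, text)
        return pid

    def fetch(self, pid):
        # Flaw: returns any record to any caller (no authorization)
        return self._prompts.get(pid)

store = PromptStore()
alice_id = store.save("alice", "my private prompt")

# An attacker who knows one ID can simply enumerate neighbors:
leaked = [store.fetch(i) for i in range(alice_id - 2, alice_id + 3)
          if store.fetch(i) is not None]

# Mitigation sketch: unguessable identifier + ownership check.
def safe_save(store_dict, owner, text):
    pid = secrets.token_urlsafe(16)  # ~128 random bits, not enumerable
    store_dict[pid] = (owner, text)
    return pid

def safe_fetch(store_dict, caller, pid):
    record = store_dict.get(pid)
    if record is None or record[0] != caller:
        return None                  # deny: missing or not the owner
    return record[1]
```

Random tokens alone only raise the cost of guessing; the authorization check in `safe_fetch` is what actually closes the hole, since the article notes the root cause was how access to other users' data was mediated, not just the ID format.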