Dare You Still Use Them? Over Half of AI Plugins Are Quietly Collecting Your Private Data
36Kr · 2026-02-09 03:10

Core Insights
- The article highlights the privacy risks of AI browser plugins: an Incogni study found that over half of the sampled Chrome AI plugins collect user data, and nearly one-third target personally identifiable information (PII) [1][3].

Group 1: AI Plugin Data Collection
- Incogni analyzed 442 AI-labeled plugins and found that many request scripting permissions, which allow them to read user input and alter webpage content [3].
- High-risk categories include programming assistants, math tools, meeting assistants, and voice-transcription plugins; notable examples are Grammarly and Quillbot [3].
- Current AI deployment relies heavily on cloud services, which makes plugins a convenient way for users to access AI capabilities without complex installations [3].

Group 2: Data Scarcity and AI Development
- The article warns of a looming "data drought" for AI companies, citing predictions that high-quality text data on the internet will be exhausted by 2028 and that machine-learning datasets may run out as early as 2026 [5].
- Synthetic data has emerged as a stopgap, but it has proven inadequate in practice, leading to problems such as underfitting and model failures [5].
- Media and content platforms are becoming aware of the value of their data, sparking legal battles with AI companies over data-usage rights [5].

Group 3: Privacy Concerns and User Choices
- Browser plugin stores are far less regulated than mobile app stores, allowing malicious plugins to bypass oversight [7].
- AI plugins are primarily distributed through personal blogs, AI community links, and GitHub, as developers prioritize speed over regulatory compliance [9].
- Users face a dilemma over whether to trade privacy for convenience; with over 50% of AI plugins collecting user data, the problem is widespread [12].
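The scripting-permission risk described in Group 1 can be inspected directly from a plugin's manifest.json before installing it. Below is a minimal Python sketch of such a check: the permission keys (`scripting`, `tabs`, `host_permissions`, `<all_urls>`) are real Chrome Manifest V3 fields, but the risk descriptions and the `audit_manifest` helper are illustrative assumptions, not Incogni's actual audit methodology.

```python
import json

# Permissions that a privacy audit might treat as high risk.
# The keys are real Chrome Manifest V3 permission names; the
# descriptions are an illustrative assumption for this sketch.
RISKY_PERMISSIONS = {
    "scripting": "can inject scripts that read user input and alter pages",
    "tabs": "can see the URL and title of every open tab",
    "webRequest": "can observe the browser's network requests",
    "clipboardRead": "can read the clipboard",
}

def audit_manifest(manifest: dict) -> list[str]:
    """Return human-readable warnings for risky entries in a manifest.json."""
    warnings = []
    for perm in manifest.get("permissions", []):
        if perm in RISKY_PERMISSIONS:
            warnings.append(f"{perm}: {RISKY_PERMISSIONS[perm]}")
    # "<all_urls>" grants access to every site the user visits.
    if "<all_urls>" in manifest.get("host_permissions", []):
        warnings.append("<all_urls>: can run on every website you visit")
    return warnings

# Hypothetical manifest resembling a writing-assistant plugin.
manifest = json.loads("""{
    "manifest_version": 3,
    "name": "Example AI Writing Assistant",
    "permissions": ["scripting", "storage"],
    "host_permissions": ["<all_urls>"]
}""")

for warning in audit_manifest(manifest):
    print("WARNING -", warning)
```

Run against the example manifest, the sketch flags both the `scripting` permission and the `<all_urls>` host pattern, which together describe exactly the capability the study calls out: reading what you type and rewriting the pages you see.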