Core Viewpoint
- The recent release of multiple group standards highlights concerns about AI agents misusing sensitive permissions, underscoring the need for developers to address the data-security and privacy risks that come with their operation [1]

Group 1: AI Technology and Privacy Risks
- AI agents use two main technical paths to execute user commands on mobile devices: API calls and visual recognition. API calls run more smoothly but depend on cooperation from third-party apps [2]
- Visual recognition requires sensitive permissions such as accessibility and screen recording, which let an AI agent read screen content and monitor user actions, raising significant privacy concerns [3]

Group 2: Group Standards and Regulations
- On June 13, the Guangdong Provincial Standardization Association released the group standard "Safety Requirements for AI Task Execution," giving AI developers and operators clear guidelines [4]
- The standard prohibits AI agents from using accessibility permissions to operate third-party apps and mandates a "dual authorization" process: both the third-party app and the user must consent before a task is executed [7]
- The standard also requires that AI decision-making algorithms be fair, just, and transparent, and that they avoid interfering with user choices [8]

Group 3: Comparison with Previous Standards
- The earlier "Mobile Internet Service Accessibility Safety Requirements" overlaps with the new AI standard but permits conditional use of accessibility permissions when user consent is obtained [8][9]
- The earlier standard requires AI vendors to guide users through the relevant permissions and privacy policies before enabling accessibility services, ensuring informed consent [9]
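The "dual authorization" requirement described above can be pictured as a simple gate that checks two independent consents before an agent acts. The sketch below is illustrative only: the class and function names (`ConsentRegistry`, `may_execute`) are hypothetical and not taken from the standard, which does not prescribe any particular API.

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    """Hypothetical record of consents granted to an AI agent.

    Under the 'dual authorization' idea, a task on a third-party app
    may proceed only if BOTH the app vendor and the user have agreed.
    """
    app_grants: set = field(default_factory=set)    # apps that opted in to agent control
    user_grants: set = field(default_factory=set)   # (app, task) pairs the user approved

def may_execute(registry: ConsentRegistry, app: str, task: str) -> bool:
    # Gate: third-party app consent AND per-task user consent must both hold.
    return app in registry.app_grants and (app, task) in registry.user_grants

# Usage sketch with made-up names
reg = ConsentRegistry()
reg.app_grants.add("example_pay_app")
reg.user_grants.add(("example_pay_app", "order_coffee"))

print(may_execute(reg, "example_pay_app", "order_coffee"))    # both consents present
print(may_execute(reg, "example_pay_app", "transfer_money"))  # user never approved this task
```

The design point is that neither consent alone is sufficient: removing either grant makes the gate fail, mirroring the standard's requirement that app-side and user-side authorization are separate checks.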
AI agents taking over phones raise privacy concerns; multiple safety standards draw red lines
Nan Fang Du Shi Bao·2025-06-17 03:37