Behavioral Biometrics
Coming for You Through the Network Cable, Literally: OpenAI Quietly Launches "Anti-Addiction" Controls Overnight, with GPT Wired Directly to the Police
36Kr · 2026-01-22 13:16
Core Viewpoint
- OpenAI has implemented a "minor protection system" in ChatGPT that uses behavioral biometrics to infer users' mental age from their interaction patterns, restricting anyone the model deems "underage" [1][2][3]

Group 1: System Implementation
- OpenAI has deployed a real-time age-prediction classifier that ignores the user's registered birth date and instead infers maturity from behavioral patterns (a hypothetical sketch of such a classifier follows this summary) [3]
- The system flags "immature traits" such as limited vocabulary and emotional outbursts, categorizing users by their interaction style [3][5]
- Users exhibiting behavior typical of minors, such as late-night browsing or frequent questioning, may be classified as underage and lose access to certain features [3][6]

Group 2: User Experience and Consequences
- Users classified as minors face significant limitations, including restrictions on discussing adult topics and on certain functionality [7][8]
- To regain full access, users must submit a government ID and a real-time facial scan, raising privacy concerns [7][8]
- OpenAI acknowledges that protecting minors this way will misclassify some adults, because the system prioritizes safety over accuracy [6][8]

Group 3: Monitoring and Intervention
- OpenAI has introduced a "crisis real-time intervention" protocol that scans user interactions for specific emotional keywords and can escalate severe cases to law enforcement [9][11]
- This monitoring blurs the line between service provider and overseer, fundamentally altering the nature of human-AI interaction [11]

Group 4: Societal Implications
- The system amounts to a Silicon Valley version of a "social credit system," in which privacy is traded for limited digital rights under the guise of protection [12][14]
- The situation mirrors past criticisms of similar systems in other regions, marking a shift in the narrative around digital privacy and user autonomy [12][14]
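OpenAI has not published how its age classifier works, so the sketch below is purely illustrative: it combines the weak behavioral signals the article lists (limited vocabulary, emotional outbursts, late-night usage, frequent questioning) into a single "likely minor" score. Every feature name, weight, and threshold is an assumption invented for this example, not OpenAI's method.

```python
# Hypothetical behavior-based age signal. OpenAI's real classifier is
# unpublished; every feature, weight, and threshold here is invented.
from dataclasses import dataclass


@dataclass
class SessionFeatures:
    avg_message_length: float   # mean tokens per user message
    unique_token_ratio: float   # vocabulary diversity, 0..1
    exclamation_rate: float     # '!' characters per message
    late_night_fraction: float  # share of messages sent 23:00-05:00
    question_rate: float        # questions per message


def minor_likelihood(f: SessionFeatures) -> float:
    """Combine weak behavioral signals into a 0..1 'likely minor' score.

    A production system would learn these weights from labeled data;
    here they are hand-picked purely to make the example concrete.
    """
    score = 0.0
    score += 0.25 * (1.0 - min(f.unique_token_ratio / 0.6, 1.0))   # limited vocabulary
    score += 0.20 * min(f.exclamation_rate / 2.0, 1.0)             # emotional outbursts
    score += 0.20 * f.late_night_fraction                          # late-night usage
    score += 0.20 * min(f.question_rate / 1.5, 1.0)                # frequent questioning
    score += 0.15 * (1.0 - min(f.avg_message_length / 40.0, 1.0))  # short messages
    return score


# A low cutoff is one concrete way to "prioritize safety over accuracy":
# it catches more real minors at the cost of misclassifying more adults.
THRESHOLD = 0.45  # hypothetical

if __name__ == "__main__":
    session = SessionFeatures(
        avg_message_length=12.0,
        unique_token_ratio=0.35,
        exclamation_rate=1.8,
        late_night_fraction=0.7,
        question_rate=1.2,
    )
    score = minor_likelihood(session)
    print(f"minor score = {score:.2f}, restricted = {score >= THRESHOLD}")
```

The threshold choice is where the article's accuracy complaint lives: lowering it restricts more adults by mistake, while raising it misses more actual minors.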
You Think You're Clicking "Traffic Lights" to Verify Your Identity; You're Actually Doing Unpaid Work for AI
机器之心 · 2025-11-12 13:23
Core Viewpoint
- The article traces the evolution of CAPTCHA from simple text challenges to image-labeling tasks and now to behavior-based scoring, and examines what that shift means for AI training and privacy [9][19][25]

Group 1: Evolution of CAPTCHA
- CAPTCHA, short for "Completely Automated Public Turing test to tell Computers and Humans Apart," was designed to stop bots from performing automated tasks [9]
- The first generation used distorted text that machines struggled to read, but advances in AI drove machine accuracy on these challenges sharply upward [15][16]
- reCAPTCHA v2 asked users to identify images such as cars and traffic lights, and in doing so quietly supplied labeled training data for Google's autonomous-driving AI [19][20]

Group 2: AI and Human Labor
- The article estimates that the collective human effort spent solving CAPTCHAs has generated more than $6.1 billion in value, as users unknowingly transcribed historical documents and trained AI systems [20]
- As AI improved, traditional CAPTCHAs lost their effectiveness, prompting reCAPTCHA v3, which drops visible challenges entirely and scores users on behavioral biometrics (a verification sketch follows this summary) [25][26]

Group 3: Privacy and Ethical Concerns
- The shift to behavior-based scoring in reCAPTCHA v3 raises significant privacy issues: it requires extensive monitoring of user interactions, which some critics liken to spyware [27][28]
- The article highlights a paradox: privacy-protective habits such as using a VPN or clearing cookies lower a user's trust score, making them look more like a bot [28]
- Future CAPTCHAs may flip the test, looking for the errors an AI would make rather than posing tasks only humans could once solve [30][31]
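Unlike OpenAI's classifier, reCAPTCHA v3's server side is publicly documented, so a concrete sketch is possible: the page obtains a token via grecaptcha.execute(), and the backend posts it to Google's siteverify endpoint, which returns a behavioral trust score between 0.0 and 1.0. The endpoint and response fields below follow Google's published API; the action name and the 0.5 cutoff are per-site deployment choices, used here only as plausible defaults.

```python
# Server-side verification of a reCAPTCHA v3 token against Google's
# documented siteverify endpoint. v3 shows no challenge at all: the
# score is derived from behavioral signals collected on the page.
import requests  # pip install requests

SITEVERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"


def verify_recaptcha(token: str, secret_key: str,
                     expected_action: str = "login",
                     min_score: float = 0.5) -> bool:
    """Return True if Google scores this interaction as human enough."""
    resp = requests.post(
        SITEVERIFY_URL,
        data={"secret": secret_key, "response": token},
        timeout=5,
    )
    result = resp.json()
    # success=False means the token was invalid or expired, not "bot".
    if not result.get("success"):
        return False
    # Reject tokens minted for a different page action (e.g. replayed tokens).
    if result.get("action") != expected_action:
        return False
    # score is in [0.0, 1.0]; the site operator picks the threshold.
    return result.get("score", 0.0) >= min_score
```

The min_score cutoff is exactly where the article's privacy paradox bites: a user behind a VPN with cleared cookies supplies fewer behavioral signals, tends to receive a lower score, and is treated as a bot despite doing nothing wrong.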