You Think Clicking the "Traffic Lights" Verifies Your Identity; Actually You're Working for AI for Free
36Kr · 2025-11-12 23:46
Core Viewpoint
- The article traces the evolution of CAPTCHA systems from simple text-based challenges to complex image-recognition tasks, and shows how these systems inadvertently train AI models, particularly for Google projects such as Waymo.

Group 1: CAPTCHA Evolution
- CAPTCHA, short for "Completely Automated Public Turing test to tell Computers and Humans Apart," was initially designed to prevent bots from performing automated tasks online [11].
- The first version used distorted text that was difficult for machines to read, but advances in AI rendered it obsolete: by 2014, Google's own AI solved these challenges with 99.8% accuracy [15][16].
- reCAPTCHA v2 required users to identify objects in images, which simultaneously trained Google's autonomous-driving AI by collecting data on cars, traffic signals, and pedestrian crossings [19].

Group 2: AI Training and Data Collection
- Users unknowingly contributed to a massive data-collection effort; estimates put the value of this "human computation" over the years at more than $6.1 billion [19].
- By 2024, research indicated that AI could solve reCAPTCHA v2 challenges with 100% accuracy, calling the effectiveness of these systems into question [20][22].
- The underlying purpose of reCAPTCHA v2 shifted from merely distinguishing humans from bots to analyzing user behavior and collecting privacy-sensitive data, with implications for user privacy and data security [22][25].

Group 3: Future of CAPTCHA
- reCAPTCHA v3 moves to behavioral biometrics, monitoring user interactions to assign each user a credibility score, which makes the system largely invisible to users [23][24].
- Such extensive monitoring raises privacy concerns: it conflicts with regulations like GDPR and creates a paradox in which efforts to protect privacy can lead to lower credibility scores [25].
- Future CAPTCHA systems may focus on identifying errors that AI would make rather than on traditional human problem-solving tasks, indicating a shift in how these systems will function [27][28].
You Think Clicking the "Traffic Lights" Verifies Your Identity; Actually You're Working for AI for Free
机器之心 (Synced) · 2025-11-12 13:23
Core Viewpoint
- The article discusses the evolution of CAPTCHA systems from simple text-based challenges to more complex image-based tasks, and now to behavior-based assessments, while also addressing the implications for AI training and privacy [9][19][25].

Group 1: Evolution of CAPTCHA
- CAPTCHA, short for "Completely Automated Public Turing test to tell Computers and Humans Apart," was initially designed to prevent bots from performing automated tasks [9].
- The first version used distorted text that was difficult for machines to read, but advances in AI led to a significant increase in the accuracy of AI models in solving these challenges [15][16].
- reCAPTCHA v2 required users to identify images, such as cars and traffic lights, which inadvertently helped train Google's autonomous-driving AI [19][20].

Group 2: AI and Human Labor
- The article estimates that the collective human effort spent solving CAPTCHAs over the years generated value exceeding $6.1 billion, as users unknowingly transcribed historical documents and trained AI systems [20].
- As AI capabilities improved, traditional CAPTCHA systems lost their effectiveness, leading to reCAPTCHA v3, which relies on behavioral biometrics to assess user authenticity [25][26].

Group 3: Privacy and Ethical Concerns
- The shift to behavior-based assessment in reCAPTCHA v3 raises significant privacy issues, since it involves extensive monitoring of user interactions that some critics liken to spyware [27][28].
- The article highlights a paradox: privacy-protecting measures such as using a VPN or clearing cookies can lower a user's trust score, making the user appear more like a bot [28].
- Future CAPTCHA systems may focus on identifying errors that AI would make rather than on traditional human problem-solving tasks, indicating a shift in the nature of these verification systems [30][31].
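The trust-score mechanism both summaries describe can be sketched server-side. Google's documented reCAPTCHA v3 flow has the site post the client token to a `siteverify` endpoint and receive JSON containing `success`, a `score` between 0.0 (likely bot) and 1.0 (likely human), and the `action` name; everything else here — the `assess` helper, the sample response values, and the threshold choices — is an illustrative assumption, not part of any official SDK:

```python
# Minimal sketch of acting on a reCAPTCHA v3 siteverify response.
# The JSON shape (success/score/action) follows Google's docs; the
# sample values and thresholds below are our own assumptions.

SAMPLE_RESPONSE = {
    "success": True,   # token was valid and not previously used
    "score": 0.3,      # 1.0 = very likely human, 0.0 = very likely bot
    "action": "login", # action name the client tagged the request with
}

def assess(resp: dict, expected_action: str,
           allow_at: float = 0.7, challenge_at: float = 0.3) -> str:
    """Map a siteverify response to a decision; thresholds are site policy."""
    if not resp.get("success") or resp.get("action") != expected_action:
        return "block"          # invalid, expired, or replayed token
    score = resp.get("score", 0.0)
    if score >= allow_at:
        return "allow"          # likely human: let the request through
    if score >= challenge_at:
        return "challenge"      # ambiguous: e.g. fall back to a v2 puzzle
    return "block"              # likely bot

print(assess(SAMPLE_RESPONSE, "login"))  # → challenge
```

This is where the privacy paradox noted above shows up concretely: the score is computed from behavioral signals the site never sees, so a user behind a VPN or with cookies cleared may simply arrive with a lower score and fall into the "challenge" or "block" band through no fault of their own.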