Core Viewpoint
- The rise of AI-generated content (AIGC) has driven growing demand for AI content detectors, particularly in fields where authenticity is crucial, such as academic writing [1]

Group 1: Performance of AI Detectors
- Author Adam Kay recently shared that an AI detector flagged 29.7% of his content as machine-generated, even though the work was published nearly a decade ago, when AI text generation was far less capable [2][3]
- The incident went viral and sparked a widespread "lie detector challenge," with many people testing texts they believe could not possibly have been generated by AI, often with humorous and absurd results [5]

Group 2: Impact on Academia
- Academic articles have been hit especially hard: one professor's article was flagged as 90% AI-generated, and another professor's paper was marked at 77% [6][7]
- A user tested a 2008 paper on AI and received a verdict of 100% AI-generated, humorously dubbing it the work of "GPT negative 6" [9]

Group 3: Broader Implications
- Journalistic content is not immune either: a 2,000-word article on local history, based on original and unpublished sources, was judged 91% likely to be AI-written [10]
- The absurdity extends to classic literature, with Shakespeare's "Romeo and Juliet" scored as 41% AI-generated [15]

Group 4: Underlying Issues with AI Detectors
- The flawed results stem from the fact that AI models are trained on human-created content, producing a paradox in which human writing is misidentified as AI-generated [18]
- Higher writing quality, marked by richer vocabulary and more precise grammar, paradoxically increases the likelihood of being flagged as AI-generated [19]
- Critics argue that AI detectors are fundamentally flawed: they rely on human knowledge for training yet cast doubt on human originality, a logical inconsistency [20][22]
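The failure mode described in Group 4 can be illustrated with a toy sketch. Many detectors score text by how statistically predictable it looks to a language model (low "perplexity" reads as machine-like), so polished, conventional prose scores as more "AI". The function names, the unigram model, and the threshold below are all hypothetical stand-ins for illustration, not any real detector's method:

```python
import math
from collections import Counter

def pseudo_perplexity(text: str, corpus: str) -> float:
    """Toy stand-in for a detector's perplexity score.

    Uses a Laplace-smoothed unigram model built from `corpus`;
    lower values mean the text looks more 'predictable'.
    """
    counts = Counter(corpus.lower().split())
    total = sum(counts.values())
    vocab = len(counts)
    words = text.lower().split()
    log_prob = sum(
        math.log((counts[w] + 1) / (total + vocab)) for w in words
    )
    # Perplexity = exp of the average negative log-likelihood per word.
    return math.exp(-log_prob / max(len(words), 1))

def flag_as_ai(text: str, corpus: str, threshold: float = 50.0) -> bool:
    # Hypothetical decision rule: sufficiently predictable text is
    # flagged as AI-generated, mirroring the paradox in the article.
    return pseudo_perplexity(text, corpus) < threshold
```

Because the score rewards predictability, fluent writing that matches the reference corpus closely is the most likely to be flagged, which is exactly the inversion the critics point out: the cleaner the human prose, the more "AI" it appears.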
Oops: a leading researcher's 45-year-old paper gets judged AI-generated
机器之心 · 2026-03-26 11:41