AI-Generated Content Detection
Utterly absurd: 21% of ICLR 2026 review comments were AI-generated? An official response has arrived
机器之心· 2025-11-17 03:19
Core Insights
- The article discusses the significant presence of AI-generated content in the review process for ICLR 2026, highlighting a trend where a substantial portion of review comments are created by AI [2][11]

Group 1: AI Usage in Paper Reviews
- A systematic analysis of 75,800 review comments revealed that 21% were fully generated by AI, 4% were heavily edited by AI, 9% moderately edited, and 22% lightly edited, with only 43% fully human-written [2][11]
- AI-generated reviews tend to be 26% longer and score higher on average, with fully AI-generated reviews averaging a score of 4.43 compared to 4.13 for fully human-written reviews [11]
- The average confidence level of fully AI-generated reviews is also slightly higher, indicating a tendency toward more confident evaluations [12]

Group 2: Implications and Responses
- The ICLR 2026 organizing committee acknowledged the issue of low-quality AI-generated reviews and is considering appropriate measures, including flagging and reporting such reviews [18]
- Suggestions for handling AI-generated reviews include discarding the poor evaluations and deeming the reviewers to have failed their responsibilities, which could lead to automatic rejection of their own submissions [18]
- Pangram Labs' analysis indicates that 39% of submitted papers used AI in some capacity, with higher AI usage correlating with lower average scores [8]
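The reported category fractions can be turned into approximate review counts out of the 75,800 analyzed; the rounding below is mine, not the article's:

```python
# Convert the reported category fractions of ICLR 2026 reviews into
# approximate counts (rounded; exact counts are not given in the article).
total_reviews = 75_800
fractions = {
    "fully AI-generated": 0.21,
    "heavily AI-edited": 0.04,
    "moderately AI-edited": 0.09,
    "lightly AI-edited": 0.22,
    "fully human-written": 0.43,
}
counts = {label: round(total_reviews * frac) for label, frac in fractions.items()}
for label, count in counts.items():
    print(f"{label}: ~{count}")
```

The fractions sum to 99%, so the counts cover roughly 75,000 of the analyzed reviews; the remainder is unaccounted for in the reported breakdown.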
How can AI "see through" AI? This study provides an answer
Ke Ji Ri Bao· 2025-08-25 01:32
Core Insights
- The emergence of large models as essential productivity tools has led to significant challenges, including the generation of misleading information and academic integrity issues due to AI-generated content [1][2]
- A new research achievement from Nankai University's Media Computing Lab proposes a Direct Difference Learning (DDL) optimization strategy to enhance AI detection capabilities, which has been accepted for presentation at ACM MM 2025 [1][2]

Group 1: AI Detection Challenges
- Existing AI detection tools often misjudge AI-generated content due to their reliance on fixed patterns, which limits their ability to generalize to new challenges [2]
- The rapid iteration of large models makes it nearly impossible to collect all relevant data for training effective detection tools [2]

Group 2: DDL Methodology
- The DDL method optimizes the difference between model predictions and human-defined target values, enabling the model to learn the intrinsic knowledge necessary for AI text detection [2]
- DDL-trained detectors can accurately identify content generated by the latest models, such as GPT-5, even with limited prior exposure [2]

Group 3: MIRAGE Dataset
- The MIRAGE dataset is the first benchmark focused on detecting commercial large language models, created using 17 powerful models to generate a challenging and representative test set [3]
- Testing results show that existing detectors drop from 90% accuracy on simple datasets to around 60% on MIRAGE, while DDL-trained detectors maintain over 85% accuracy [3]

Group 4: Performance Improvements
- DDL-trained detectors outperform Stanford's DetectGPT by 71.62% and methods from the University of Maryland and Carnegie Mellon University by 68.03% [3]
- The research team aims to continuously upgrade evaluation benchmarks and technologies for faster, more accurate, and cost-effective AI-generated text detection [3]
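The article describes DDL only at a high level: optimizing the difference between model predictions and human-defined target values. The following is a minimal, hypothetical sketch of such a difference-based objective; the loss form, function names, and target values are assumptions for illustration, not the paper's actual formulation:

```python
# Hypothetical sketch of a difference-based detection objective. The paper's
# actual DDL loss is not specified in the article, so this uses a plain
# mean squared difference between predicted scores and human-defined targets.

def difference_loss(pred_scores, target_values):
    """Mean squared difference between detector predictions and targets."""
    assert len(pred_scores) == len(target_values)
    n = len(pred_scores)
    return sum((p - t) ** 2 for p, t in zip(pred_scores, target_values)) / n

# Assumed targets: 1.0 for AI-generated text, 0.0 for human-written text.
predictions = [0.9, 0.2, 0.7]   # detector scores for three sample texts
targets = [1.0, 0.0, 1.0]       # human-defined labels
loss = difference_loss(predictions, targets)
```

In practice the detector would be a trained classifier over text features rather than hand-set scores; the point of the sketch is only that training minimizes a prediction-target difference rather than memorizing fixed surface patterns.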
Letting AI "See Through" AI
Core Insights
- OpenAI has released its next-generation AI model, GPT-5, which has garnered global attention as AI-generated content becomes increasingly integrated into daily productivity tools [1]
- The emergence of AI-generated content has raised concerns regarding misinformation, academic integrity, and the effectiveness of AI detection systems [1]

Group 1: AI Detection Challenges
- Existing AI detection methods often fall short in complex real-world scenarios, leading to misjudgments in identifying AI-generated texts [2]
- Current detection tools are likened to rote learning, lacking the ability to generalize and adapt to new challenges, resulting in a significant drop in accuracy when faced with unfamiliar content [2]

Group 2: Innovative Solutions
- A research team from Nankai University has proposed a novel "direct difference learning" optimization strategy to enhance AI detection capabilities, allowing for better differentiation between human- and AI-generated texts [2]
- The team has developed a comprehensive benchmark dataset named MIRAGE, which includes nearly 100,000 human-AI text pairs, aimed at improving the evaluation of commercial large language models [3]

Group 3: Performance Metrics
- The MIRAGE dataset revealed that existing detection systems' accuracy plummets from approximately 90% on simpler datasets to around 60% on more complex ones, while the new detection system maintains over 85% accuracy [3]
- The new detection system shows a performance improvement of 71.62% over Stanford's DetectGPT and 68.03% over methods proposed by other universities [3]

Group 4: Future Directions
- The research team aims to continuously upgrade evaluation benchmarks and technologies to achieve faster, more accurate, and cost-effective AI-generated text detection [4]
A scholar's three years of fieldwork judged as AI-ghostwritten: how can thesis AI-rate detection avoid "collateral damage"?
Yang Guang Wang· 2025-05-18 00:57
Core Viewpoint
- The rapid development of AI technology has sharply increased AI's ability to generate academic papers, raising concerns in the academic and educational sectors about detecting AI-generated content in student theses [1]

Group 1: AI Detection Issues
- Some universities require students' theses not only to pass plagiarism checks but also to be evaluated for AI-generated content (AIGC), with a threshold of 15% for the AI detection rate [1][4]
- A case was reported where the same thesis was flagged with a significantly higher AI detection rate on a subsequent check, indicating potential inconsistencies in detection systems [1][4]
- AI detection tools have been criticized for misidentifying original content as AI-generated, leading to unnecessary revisions and additional costs for students [4][5]

Group 2: Academic Concerns
- A professor from Renmin University expressed frustration when their original research, developed over three years, was flagged as "highly suspected of being AI-generated" by a detection platform [5][6]
- The general consensus among academics is that while plagiarism detection is reliable, AI detection tools are problematic and should not be used as strict criteria for assessing academic integrity [6][8]
- There is growing concern that the standards for determining whether a paper is AI-generated remain vague, making it difficult to accurately assess the originality of academic work [8][9]

Group 3: Recommendations for Universities
- Experts suggest that universities avoid making AI detection a mandatory graduation requirement and instead focus on guiding students in the appropriate use of AI tools in their research [8][9]
- The transition toward integrating AI in academic research is seen as inevitable, with future evaluations of academic ability likely shifting toward how effectively individuals can collaborate with AI [9]
How can thesis AI-rate detection avoid "collateral damage"
Core Viewpoint
- The increasing reliance on AI detection tools for academic papers raises concerns about their accuracy and potential for misjudgment, fueling debate over whether AI-generated-content rates are an appropriate criterion for academic integrity [1][2][3][5][7]

Group 1: AI Detection Tools and Their Impact
- A significant number of academic papers are being flagged as "highly suspected AI-generated" even when they are original works, causing confusion and frustration among students and faculty [1][2]
- Universities are implementing regulations requiring students to disclose their use of AI tools in thesis work, with some setting thresholds for acceptable AI-generation rates [2][6]
- The effectiveness of current AI detection tools is questioned, as they often misidentify human-written content as AI-generated due to similarities in text features [3][5][7]

Group 2: Academic Integrity and AI Usage
- There is growing concern that using AI detection tools as a strict measure of academic integrity may undermine the educational process and lead to misjudgments [3][5][7]
- Faculty members emphasize teaching students to use AI as a supportive tool rather than a replacement for original thought, advocating a focus on the process of writing rather than just the final product [9]
- Some institutions are exploring ways to integrate AI into the academic process while maintaining standards of originality and critical thinking [8][9]

Group 3: Student Experiences and Reactions
- Students have reported instances of their original work being flagged as AI-generated, prompting them to alter their writing styles to avoid detection [2][3]
- There is a shared sentiment among students that while AI can provide inspiration, the core content must remain their own to uphold academic integrity [3][9]
- The debate continues over how to balance the use of AI in academic settings while ensuring that students develop their own analytical and creative skills [8][9]