Whether Reviewers Can Use AI Is Up to the Authors? ICML 2026 Unveils Its New Review Policy
机器之心·2026-01-19 08:54

Core Viewpoint
- ICML 2026 has introduced a new review type selection mechanism that lets authors decide whether large language models (LLMs) may be used in the review of their paper [3][9].

Group 1: Review Policy Changes
- Two policies have been established: Policy A strictly prohibits any use of LLMs during the review process, while Policy B permits their use under specific restrictions [4].
- Actions allowed under Policy B include using LLMs to help understand the paper, to polish the language of review comments, and to query an LLM for a paper's strengths or weaknesses [7][9].
- The choice of whether to allow LLMs in the review process now rests with the authors, a significant shift from previous practice, in which the decision was largely left to reviewers [9].

Group 2: Implementation Challenges
- There are concerns about how the new rules on LLM usage can be enforced, as past experience has shown AI-generated reviews to be widespread [11][13].
- A study of ICLR 2026 found that 21% of review comments were entirely AI-generated, indicating broad reliance on AI tools in the review process [11].
- The effectiveness of ICML's new rules may be limited because reviewer compliance cannot be guaranteed, raising questions about the integrity of the review process [14][15].

Group 3: Author Control and Options
- Authors can now refuse LLM-assisted review outright, a blanket opt-out that may ease concerns about trust in the review process [16].
