ICLR 2026's Strictest Rule Yet: Undisclosed LLM Use Means Outright Rejection
36Kr · 2025-08-29 03:23

Core Points
- ICLR 2026 has introduced strict regulations on the use of Large Language Models (LLMs) in paper writing and reviewing, requiring explicit acknowledgment of any LLM usage [1][15][16]
- The new policies aim to ensure accountability among authors and reviewers, mandating that they take full responsibility for their contributions [16][20]

Group 1: New Regulations
- The ICLR 2026 committee has established two main policies on LLM usage: all LLM usage must be clearly disclosed, and authors and reviewers must remain accountable for their contributions [15][16]
- The policies are in line with ICLR's ethical guidelines, which emphasize the importance of acknowledging all research contributions [15][16]
- Violations of these policies will result in immediate rejection of submissions, reflecting the committee's commitment to maintaining ethical standards [17]

Group 2: Submission Details
- The submission deadlines for ICLR 2026 are set, with the abstract deadline on September 19, 2025, and the paper deadline on September 24, 2025 [9]
- Submissions to ICLR 2025 totaled 11,565, with a 32.08% acceptance rate, indicating a growing trend in submission volume [3][5]

Group 3: Ethical Concerns
- There have been instances of authors embedding hidden prompts in papers to manipulate LLM-generated reviewer feedback, which is considered a serious ethical violation [21][24]
- The committee has highlighted the risks associated with LLMs, including the generation of false information and breaches of confidentiality [20][24]

Group 4: AI in the Review Process
- The use of LLMs in the review process has been tested: AI suggestions were adopted in 12,222 instances, and 26.6% of reviewers updated their evaluations based on AI feedback [29][32]
- The integration of LLMs has been shown to improve review quality and increase engagement during the rebuttal phase [32][34]