Core Viewpoint
- The article discusses the newly established policies governing the use of large language models (LLMs) in academic research, particularly at the ICLR conference, which aim to safeguard academic integrity and mitigate the risks associated with LLMs [2][4][14].

Group 1: ICLR Conference Policies
- ICLR 2026 has introduced specific policies on LLM use, grounded in the conference's ethical guidelines [2][4].
- The 2025 edition of the conference received 11,565 submissions, with an acceptance rate of 32.08% [2].
- The policies emphasize that any use of LLMs must be disclosed, and that authors and reviewers remain ultimately responsible for their contributions [6][7].

Group 2: Specific Policy Applications
- Authors must disclose any use of LLMs for writing assistance and are responsible for all content, including any errors the LLM generates [9].
- When LLMs are used to develop research ideas or analyze data, authors must verify the validity and accuracy of the LLM's contributions [9].
- Reviewers must likewise disclose any use of LLMs in writing reviews and are responsible for maintaining the confidentiality of submitted papers [11].

Group 3: Prohibited Practices
- The article highlights the ban on "prompt injection," in which authors embed hidden prompts in a submission to manipulate the review process; this is treated as collusion and serious academic misconduct [12].
- Violations of these policies can lead to severe consequences, including desk rejection of submissions [7].

Group 4: Broader Context
- The article notes that ICLR is not alone in adopting such policies; other major conferences, including NeurIPS and ICML, have also established guidelines for LLM usage [13][15].
- The growing reliance on LLMs raises academic-integrity concerns such as fabricated citations and plagiarism, underscoring the need for clear guidelines [14].
Desk-rejection warning: quietly padding out papers with large models is now blocked, as ICLR's strictest rules yet arrive
机器之心 (Synced) · 2025-08-27 08:36