Turing Award giant switches sides: ICML's new rules upend academia, leaving solo researchers "exposed"
36Kr · 2026-01-19 11:41
Core Viewpoint - ICML 2026 has introduced a self-rating mechanism to address the submission crisis overwhelming academic peer review, which is becoming increasingly ineffective under information overload [1][3][8].

Group 1: Peer Review Crisis
- Submissions to NeurIPS surged past 30,000 in 2025, nearly double the previous year, pushing the existing peer review system into crisis [1].
- Traditional peer review is likened to a "cat-and-mouse game": authors try to pass off subpar work as high quality, while reviewers struggle to identify the flaws [9][10].

Group 2: Self-Rating Mechanism
- ICML 2026's new policy allows authors to self-rate their submissions, involving authors in the review process and easing the burden on reviewers [3][5].
- The self-rating system is ranking-based: authors must order their own submissions rather than assign absolute scores, which is intended to improve the accuracy of predictions about a paper's future impact [10][12].

Group 3: Data-Driven Insights
- Data from ICML 2023 indicates that, 16 months after publication, papers their authors ranked first had a citation rate 200% higher than papers they ranked last [14].
- The mechanism is supported by statistical evidence that author rankings predict a paper's success more accurately than traditional reviewer scores [12][19].

Group 4: Philosophical Shift
- Yoshua Bengio, a prominent figure in AI, supports the shift, arguing that the traditional pursuit of objectivity in peer review is inefficient in the face of overwhelming submission volume [15][19].
- Bengio emphasizes embracing subjective signals from authors, who know their own work best, thereby challenging the sanctity of traditional peer review [19][21].
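The ranking-based mechanism above is, at its core, an order-constrained calibration problem. As an illustrative sketch only (not ICML's actual algorithm), isotonic regression via the pool-adjacent-violators algorithm (PAVA) can adjust raw reviewer scores so they respect an author's stated ranking; the `calibrate` helper and all scores below are hypothetical examples.

```python
def pava(values, weights=None):
    """Pool Adjacent Violators: least-squares closest non-decreasing sequence."""
    weights = weights or [1.0] * len(values)
    blocks = []  # each block: [total weight, weighted sum, element count]
    for v, w in zip(values, weights):
        blocks.append([w, w * v, 1])
        # Merge backwards while the previous block's mean exceeds this one's.
        while len(blocks) > 1 and blocks[-2][1] / blocks[-2][0] > blocks[-1][1] / blocks[-1][0]:
            w2, s2, c2 = blocks.pop()
            blocks[-1][0] += w2
            blocks[-1][1] += s2
            blocks[-1][2] += c2
    out = []
    for w, s, c in blocks:
        out.extend([s / w] * c)  # every element in a merged block gets the block mean
    return out


def calibrate(scores_best_first):
    """Hypothetical helper: smallest least-squares adjustment making reviewer
    scores non-increasing in the author's rank order (best paper listed first)."""
    return list(reversed(pava(list(reversed(scores_best_first)))))


# Author ranks papers A > B > C, but the reviewer scored B highest.
# Calibration pools the conflicting pair to their mean.
print(calibrate([6.0, 7.5, 5.0]))  # → [6.75, 6.75, 5.0]
```

Note that for an author with a single submission the ranking constraint is vacuous, which foreshadows the disparity discussed in Group 5 below.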
Group 5: Implications for Authors
- The self-rating system favors authors with multiple submissions, who can play their papers off against each other to calibrate scores, creating a divide between "big players" and ordinary authors [22][23].
- Approximately 75.5% of authors submit only one paper, leaving them exposed in the new system, which may inadvertently reward quantity over quality [23][26].

Group 6: Critique of the System
- Critics argue the self-rating mechanism may incentivize "salami slicing," where researchers split their work into smaller pieces to generate enough submissions to rank against each other [27].
- The shift toward algorithm-driven evaluation raises concerns about the erosion of academic ideals, as the focus moves from genuine quality to strategic manipulation of the submission process [27][28].