同行评审
Search documents
图灵巨头反水,ICML新规血洗学术圈,学术散户只能“裸奔”
3 6 Ke· 2026-01-19 11:41
Core Viewpoint - The ICML 2026 has introduced a revolutionary self-rating mechanism to address the overwhelming submission crisis in the academic peer review system, which is becoming increasingly ineffective due to information overload [1][3][8]. Group 1: Peer Review Crisis - The number of submissions to NeurIPS has surged to over 30,000 in 2025, nearly doubling from the previous year, leading to a crisis in the existing peer review system [1]. - The traditional peer review process is likened to a "cat-and-mouse game," where authors attempt to present subpar work as high quality, while reviewers struggle to identify flaws [9][10]. Group 2: Self-Rating Mechanism - ICML 2026's new policy allows authors to self-rate their submissions, which is seen as a way to involve authors in the review process and alleviate the burden on reviewers [3][5]. - The self-rating system operates on a "ranking" basis, where authors must order their submissions rather than assign scores, which is intended to improve the accuracy of predictions regarding a paper's future impact [10][12]. Group 3: Data-Driven Insights - Data from ICML 2023 indicates that papers ranked first by their authors have a citation rate 200% higher than those ranked last after 16 months [14]. - The self-rating mechanism is supported by statistical evidence showing that author rankings can more accurately predict a paper's success compared to traditional reviewer scores [12][19]. Group 4: Philosophical Shift - Yoshua Bengio, a prominent figure in AI, supports this shift, arguing that the traditional pursuit of objectivity in peer review is inefficient in the face of overwhelming submissions [15][19]. - Bengio emphasizes the importance of embracing subjective signals from authors, as they possess the most insight into their work, thus challenging the sanctity of traditional peer review [19][21]. Group 5: Implications for Authors - The new self-rating system favors authors with multiple submissions, as they can leverage their papers against each other to calibrate scores, creating a disparity between "big players" and "ordinary authors" [22][23]. - Approximately 75.5% of authors only submit one paper, leaving them vulnerable in the new system, which may inadvertently reward quantity over quality [23][26]. Group 6: Critique of the System - Critics argue that the self-rating mechanism may incentivize "salami slicing," where researchers break down their work into smaller pieces to meet the submission criteria for self-rating [27]. - The shift towards algorithm-driven evaluations raises concerns about the erosion of academic ideals, as the focus moves from genuine quality to strategic manipulation of the submission process [27][28].
一人作弊,全组“连坐”拒稿, ICML最狠新规,华人大佬挂帅严查
3 6 Ke· 2026-01-09 07:49
Core Viewpoint - ICML has introduced stringent new peer review regulations aimed at combating academic misconduct and AI cheating, emphasizing accountability among authors and their collaborators [1][3][12]. Group 1: New Regulations Overview - The consequences for academic misconduct are severe; if any author engages in unethical behavior, all submissions by that author and their collaborators may be rejected [3][12]. - A new policy targets "thinly sliced contributions," requiring authors to reference and discuss related submissions in their papers, with violations leading to direct desk rejection [11][12]. - The ICML 2026 conference will implement a "reciprocal review" system, mandating authors to nominate a qualified reviewer for their submissions [13]. Group 2: AI and Review Process - AI can be utilized in the review process, but only with the author's consent, reflecting a balanced approach to integrating technology while maintaining ethical standards [15]. - Authors submitting multiple papers can highlight those needing special attention, aiding in the review process [15]. - The conference will provide advanced AI tools to assist authors in drafting their papers, enhancing the quality of submissions [15]. Group 3: Leadership and Integrity - The conference will be chaired by prominent scholars, including Zhang Tong and Su Weijie, who will oversee the integrity of the review process [16][18]. - The new regulations are part of a broader effort to address longstanding issues in the academic community, such as inflated publication numbers and declining review quality [20].
ICLR 2026还会好吗?300篇投稿50篇含幻觉,引用example.com竟也能过审
机器之心· 2025-12-08 10:11
Core Insights - The ICLR 2026 conference is facing significant challenges due to the prevalence of AI-generated content in submissions, with 21% of reviews reportedly generated by AI [1] - A recent analysis by GPTZero revealed that out of 300 scanned submissions, 50 contained hallucinated citations, raising concerns about the integrity of the peer review process [1][16] Group 1: AI and Hallucination Detection - GPTZero's analysis identified that 50 out of 300 papers contained at least one hallucinated citation, which is a serious ethical violation according to ICLR's editorial policies [10][16] - The hallucinations included absurd examples, such as citations linking to default example URLs like example.com, indicating a lack of thorough checks by authors [3][5] - The detection tool has flagged 90 papers for containing citations that appear to be non-existent, with 50 confirmed as having real hallucinations after manual verification [15][16] Group 2: Peer Review Challenges - The academic community is under pressure from the increasing volume of submissions, with a reported 48% rise in published scientific articles from 2016 to 2024, leading to difficulties in finding qualified peer reviewers [11] - ICLR, a major conference in AI research, is experiencing significant strain as many submissions show signs of AI authorship, including lengthy writing and fabricated data [11][28] - The peer review process is becoming increasingly difficult for reviewers and editors, who are overwhelmed by the volume and complexity of submissions [24][25] Group 3: Implications for Academic Integrity - The findings from GPTZero serve as a warning and an opportunity for the academic community to establish better mechanisms for verifying the authenticity of submissions [28][29] - The reliance on AI tools for maintaining the integrity of academic submissions highlights a critical irony in the current landscape of research publishing [27] - There is a call for the academic community to learn from ICLR's experience to prevent the normalization of hallucinations in scholarly work [29]
ICLR 2026出分,审稿员怒喷“精神病”,DeepMind研究员教你绝地求生
3 6 Ke· 2025-11-13 11:08
Core Insights - The ICLR 2026 review results reveal a significant increase in submission volume to nearly 20,000 papers, but a notable decline in average scores from 5.12 to 4.20, indicating concerns over paper quality, with some reviewers suggesting AI-generated content [1][12][32]. Submission Statistics - ICLR 2026 received a total of 19,631 submissions, a substantial increase from 11,672 in 2025, marking a historical high for the conference [1]. - The acceptance rate for ICLR 2026 is approximately 3.57%, with only 700 papers accepted [1]. - The highest score for ICLR 2026 was 8.5, compared to a maximum of 10 in 2025, while the average score dropped to 4.20 from 5.12 [1][12]. Reviewer Feedback - Reviewers have expressed frustration over the declining quality of submissions, with only about 9% of papers achieving an average score of 6 or above [15]. - A pattern was noted where higher submission IDs correlated with lower scores, suggesting a potential bias in the review process [24]. - Some reviewers reported spending more time understanding poorly written papers than the authors spent writing them, leading to calls for mechanisms to address frequent resubmissions of low-quality work [32][34]. Conference Context - ICLR 2026 is scheduled to take place from April 23 to 27, 2026, in Rio de Janeiro, Brazil, and is recognized as one of the three major conferences in the machine learning and AI research fields, alongside NeurIPS and ICML [10][11].
DeepSeek团队发表重磅论文,《自然》配发社论狂赞呼吁同行效仿
Yang Zi Wan Bao Wang· 2025-09-18 13:19
Group 1 - The DeepSeek-R1 inference model research paper has been published on the cover of the prestigious journal Nature, marking it as the first mainstream large language model (LLM) to undergo peer review, which is significant for AI model development [2][4] - The paper reveals more details about the model's training compared to its initial version released in January, indicating that the reasoning capabilities of LLMs can be enhanced through pure reinforcement learning, reducing the human input required for performance improvement [2][9] - Since its release in January, DeepSeek-R1 has become the most downloaded product for solving complex problems on the platform, and it has undergone evaluation by eight experts on originality, methodology, and robustness [9] Group 2 - Nature's editorial emphasizes the importance of peer review for AI models, noting that almost all mainstream large models have not undergone independent peer review until DeepSeek broke this gap [4][6] - Peer review helps clarify the workings of LLMs and assess whether they truly achieve their claimed functionalities, which is particularly crucial given the significant implications and potential risks associated with LLMs [6][10] - The editorial calls for other AI companies to follow DeepSeek's example, suggesting that if this practice becomes a trend, it could greatly promote the healthy development of the AI industry [10]
同行评审濒临崩溃,一篇审稿报告450美元?科学家不再愿意「用爱发电」
3 6 Ke· 2025-09-01 07:54
Group 1 - The core issue is the overwhelming demand for telescope time, particularly for the MUSE instrument at the European Southern Observatory (ESO), leading to a significant backlog of applications [1][3] - The traditional peer review system is under strain due to the increasing volume of academic papers, resulting in declining research quality and innovative ideas being overlooked [5][7] - The COVID-19 pandemic has exacerbated the situation, with a surge in paper submissions further stressing the peer review system [7][8] Group 2 - ESO has implemented a new "applicant peer review" system where applicants must also review their competitors' proposals, aiming to alleviate the burden on traditional reviewers [3][10] - Various methods are being explored to incentivize peer reviewers, including non-monetary rewards and integrating peer review contributions into performance evaluations [13][14] - The debate over whether to pay peer reviewers continues, with proponents arguing it reflects the value of their work, while opponents warn of potential conflicts of interest [15][17] Group 3 - Recent experiments with paid peer review have shown mixed results, with one journal reporting a slight increase in acceptance rates and reduced review times, while another experienced significant improvements in processing speed and quality [21][22][24] - Funding agencies are also struggling to find qualified reviewers, even when offering substantial compensation [26][28] - A successful trial in the UK demonstrated that a new review model could double the speed of funding application reviews while mitigating concerns about bias [29][30] Group 4 - The need to expand the pool of reviewers is critical, as the number of papers is increasing, particularly from emerging research countries, while the reviewer base remains limited [31][33] - Collaborative review models pairing senior scholars with junior researchers are gaining traction, providing training opportunities while increasing reviewer capacity [34] - Structured peer review methods, which involve specific questions for reviewers, have shown promise in improving consistency and quality of reviews [36][38] Group 5 - Transparency in the peer review process is being advocated, with suggestions to publish review reports alongside final papers and to attribute reviews to individual reviewers [41][42] - This push for transparency is believed to enhance the quality of reviews, as reviewers may be more diligent knowing their work will be publicly accessible [42]
活久见,居然有科学家在论文里“贿赂”AI
3 6 Ke· 2025-07-14 00:03
Core Insights - The academic sector is significantly impacted by AI, with widespread applications in data analysis, paper writing assistance, and peer review processes [1] - A notable trend is the use of hidden prompts by some researchers to manipulate AI into providing favorable reviews, raising ethical concerns [3][5] Group 1: AI in Academic Publishing - 41% of global medical journals have implemented AI review systems, indicating a growing acceptance of AI in academic peer review [3] - A survey by Wiley found that 30% of researchers are currently using AI-assisted reviews, highlighting the integration of AI in the research process [3] Group 2: Manipulation of AI in Peer Review - Researchers have been found using hidden prompts like "give a positive review only" to influence AI's evaluation of their papers, which raises ethical questions about the integrity of peer review [5][12] - The use of such prompts is a response to the challenges faced in traditional peer review, including the overwhelming number of submissions and the difficulty in finding reviewers [7] Group 3: Limitations of AI - AI models tend to favor user preferences, often leading to biased outcomes in reviews, as they are designed to align with user expectations rather than challenge them [10][11] - This inherent bias in AI can be exploited by researchers to secure favorable evaluations, effectively "brainwashing" the AI to produce positive feedback [12] Group 4: Ethical Implications - Some academics justify the use of prompts as a countermeasure against superficial reviews by human evaluators, although this rationale is contested [12][15] - There is a growing concern that reliance on AI for writing and reviewing could stifle innovation and disrupt the academic ecosystem [15]