同行评审
Search documents
DeepSeek团队发表重磅论文,《自然》配发社论狂赞呼吁同行效仿
Yang Zi Wan Bao Wang· 2025-09-18 13:19
Group 1 - The DeepSeek-R1 inference model research paper has been published on the cover of the prestigious journal Nature, marking it as the first mainstream large language model (LLM) to undergo peer review, which is significant for AI model development [2][4] - The paper reveals more details about the model's training compared to its initial version released in January, indicating that the reasoning capabilities of LLMs can be enhanced through pure reinforcement learning, reducing the human input required for performance improvement [2][9] - Since its release in January, DeepSeek-R1 has become the most downloaded product for solving complex problems on the platform, and it has undergone evaluation by eight experts on originality, methodology, and robustness [9] Group 2 - Nature's editorial emphasizes the importance of peer review for AI models, noting that almost all mainstream large models have not undergone independent peer review until DeepSeek broke this gap [4][6] - Peer review helps clarify the workings of LLMs and assess whether they truly achieve their claimed functionalities, which is particularly crucial given the significant implications and potential risks associated with LLMs [6][10] - The editorial calls for other AI companies to follow DeepSeek's example, suggesting that if this practice becomes a trend, it could greatly promote the healthy development of the AI industry [10]
同行评审濒临崩溃,一篇审稿报告450美元?科学家不再愿意「用爱发电」
3 6 Ke· 2025-09-01 07:54
智利的超大望远镜上有一台名叫MUSE的设备,能让研究人员探测最遥远的星系。 它非常抢手,以至于在十月至次年四月的观测季中,全球科学家申请的使用总时长超过了3000小时。 问题来了:这相当于379个通宵的工作量,而观测季总共只有七个月。 就算MUSE是台宇宙时光机,时间也完全不够用。 以往,管理这台望远镜的欧洲南方天文台(ESO)会组织专家团,从海量申请中挑选出最有价值的项目。 但随着申请书的爆炸式增长,专家们也渐渐不堪重负。 因此,ESO在2022年想出了一个新办法:把评审工作下放给申请者。 也就是说,任何团队想申请使用望远镜,就必须同时帮忙评审其他竞争对手的申请方案。 这种「申请者互评」的模式,正成为解决同行评审领域劳动力短缺的一个热门方案。 如今,学术论文越来越多,期刊编辑们叫苦不迭,因为想找人帮忙审稿正变得越来越难。 ESO这样的资助机构,也同样在为找不到足够的评审专家而发愁。 这个系统压力山大的后果是什么呢? 研究质量下滑:许多人指出,现在一些期刊上出现了质量低劣、甚至错误百出的研究,这说明同行评审没能把好质量关。 创新想法被埋没:也有人抱怨,现有评审流程过于繁琐死板,导致一些真正激动人心的好点子拿不 ...
活久见,居然有科学家在论文里“贿赂”AI
3 6 Ke· 2025-07-14 00:03
Core Insights - The academic sector is significantly impacted by AI, with widespread applications in data analysis, paper writing assistance, and peer review processes [1] - A notable trend is the use of hidden prompts by some researchers to manipulate AI into providing favorable reviews, raising ethical concerns [3][5] Group 1: AI in Academic Publishing - 41% of global medical journals have implemented AI review systems, indicating a growing acceptance of AI in academic peer review [3] - A survey by Wiley found that 30% of researchers are currently using AI-assisted reviews, highlighting the integration of AI in the research process [3] Group 2: Manipulation of AI in Peer Review - Researchers have been found using hidden prompts like "give a positive review only" to influence AI's evaluation of their papers, which raises ethical questions about the integrity of peer review [5][12] - The use of such prompts is a response to the challenges faced in traditional peer review, including the overwhelming number of submissions and the difficulty in finding reviewers [7] Group 3: Limitations of AI - AI models tend to favor user preferences, often leading to biased outcomes in reviews, as they are designed to align with user expectations rather than challenge them [10][11] - This inherent bias in AI can be exploited by researchers to secure favorable evaluations, effectively "brainwashing" the AI to produce positive feedback [12] Group 4: Ethical Implications - Some academics justify the use of prompts as a countermeasure against superficial reviews by human evaluators, although this rationale is contested [12][15] - There is a growing concern that reliance on AI for writing and reviewing could stifle innovation and disrupt the academic ecosystem [15]