学术伦理

Search documents
用隐藏指令诱导AI给论文打高分,谢赛宁合著论文被点名:认错,绝不鼓励
机器之心· 2025-07-08 06:54
Core Viewpoint - The article discusses the ethical implications of embedding prompts in academic papers to influence AI reviews, highlighting a recent incident involving a professor and the need for a reevaluation of academic integrity in the AI era [2][4][15]. Group 1: Incident Overview - A recent investigation revealed that at least 14 top universities had research papers containing hidden prompts instructing AI to give positive reviews [3]. - The incident involved a paper co-authored by NYU assistant professor谢赛宁, which was found to contain such a prompt, leading to significant scrutiny [4][6]. Group 2: Professor's Response - Professor谢赛宁 acknowledged his responsibility as a co-author and group leader for not thoroughly reviewing all submission documents [10][11]. - He clarified that a visiting student misunderstood a joke about embedding prompts and applied it to a submitted paper, not realizing the ethical implications [12]. Group 3: Ethical Discussion -谢赛宁 emphasized the need for a deeper discussion on research ethics in the age of AI, advocating for constructive dialogue rather than personal attacks [15][24]. - The incident raised questions about the current academic system's handling of AI in peer review, with some arguing that embedding prompts could be seen as a form of self-protection against AI reviews [20][26]. Group 4: Broader Implications - The article points out that the increase in AI-generated papers has led to a shortage of reviewers, pushing some to rely on AI for evaluations, which could compromise review quality [30]. -谢赛宁's case serves as a catalyst for further discussions on establishing reasonable constraints to improve the peer review environment [31].