Academic Ethics
A "Global Top-2% Scientist" Brought Down by AI
Feng Huang Wang · 2025-12-25 10:52
Core Viewpoint
- The article highlights the growing concern over the use of AI in academic research, particularly following a scandal at the University of Hong Kong where a professor resigned after AI-generated fake references were found in a published paper [1][2].

Group 1: Incident Overview
- A paper on Hong Kong's fertility rate, published in October, included 24 AI-generated references out of 61, leading to an investigation and the paper's retraction [1].
- The University of Hong Kong confirmed the use of unverifiable references and announced disciplinary action against the involved parties, including the resignation of the paper's corresponding author, Ye Zhaohui [1][2].

Group 2: Academic Integrity and AI
- Ye Zhaohui, a prominent scholar, suffered reputational damage from the incident, which raises questions about the ethical limits of AI use in academia [2].
- The article notes a broader global trend of AI-related academic misconduct, with universities reporting incidents of AI cheating and misuse in coursework and research [2][4].

Group 3: Institutional Responses
- The University of Hong Kong has established strict guidelines for the use of AI in research, emphasizing the importance of academic integrity [3].
- Other institutions, including Tsinghua University, have also released guidelines addressing the challenges AI poses in education, highlighting the need for multi-source verification to avoid reliance on potentially misleading AI outputs [4].

Group 4: Pressure on Researchers
- The competitive academic environment, marked by pressure to publish, has led some researchers to lean on AI tools in ways that result in ethical breaches [6][9].
- The article notes a phenomenon of "academic inflation," in which students feel compelled to publish numerous papers to meet peer expectations, contributing to AI misuse [6].

Group 5: Future Considerations
- The article emphasizes the need for researchers to develop a critical understanding of AI tools, balancing efficiency with ethical considerations [9].
- It suggests that while AI can assist research, human creativity and critical thinking remain irreplaceable, urging scholars to go beyond mere reliance on AI [9].
Hidden Prompts Used to Coax AI into Giving Papers High Scores; Paper Co-authored by 谢赛宁 Called Out: He Admits the Error and Says It Is Absolutely Not Encouraged
机器之心 · 2025-07-08 06:54
Core Viewpoint
- The article discusses the ethics of embedding hidden prompts in academic papers to influence AI reviews, centering on a recent incident involving an NYU professor and calling for a reevaluation of academic integrity in the AI era [2][4][15].

Group 1: Incident Overview
- A recent investigation found that papers from at least 14 top universities contained hidden prompts instructing AI to give positive reviews [3].
- One such paper was co-authored by NYU assistant professor 谢赛宁, drawing significant scrutiny [4][6].

Group 2: Professor's Response
- 谢赛宁 acknowledged his responsibility as co-author and group leader for not thoroughly reviewing all submission documents [10][11].
- He explained that a visiting student had taken a joke about embedding prompts literally and applied it to a submitted paper, without realizing the ethical implications [12].

Group 3: Ethical Discussion
- 谢赛宁 called for a deeper discussion of research ethics in the age of AI, advocating constructive dialogue rather than personal attacks [15][24].
- The incident raised questions about how the current academic system handles AI in peer review, with some arguing that embedding prompts can be seen as a form of self-protection against AI reviewers [20][26].

Group 4: Broader Implications
- The article points out that the surge in AI-generated papers has created a shortage of reviewers, pushing some to rely on AI for evaluations, which could compromise review quality [30].
- 谢赛宁's case has become a catalyst for further discussion on establishing reasonable constraints to improve the peer-review environment [31].