AI and Academic Ethics
A "Top 2% Global Scientist" Brought Down by AI
虎嗅APP (Huxiu) · 2025-12-28 11:07
Core Viewpoint
- The article discusses the increasing infiltration of AI into the academic sphere, highlighting a recent incident at the University of Hong Kong where a professor resigned after AI-generated fake references were found in a published paper [4][5].

Group 1: Incident Overview
- A paper on Hong Kong's fertility rate, published in the journal "China Population and Development Studies", was found to have cited 24 AI-generated fictitious references out of a total of 61 [4].
- The University of Hong Kong conducted an investigation, confirmed the use of AI-generated false references, and the paper was retracted; its corresponding author, Ye Zhaohui, resigned [4][5].

Group 2: Academic Pressure and AI Ethics
- The incident reflects a broader pattern of AI-related academic misconduct, with similar cases reported globally, including large-scale AI cheating incidents at top universities in South Korea and the U.S. [5][6].
- The pressure to publish and produce research quickly has led to unethical practices, as seen in the case of a student with an unusually high number of publications in a single year [6][12].

Group 3: Institutional Responses and Guidelines
- Universities worldwide are developing guidelines for AI use in research, with nearly 100 institutions releasing related policies since 2023 [9].
- Tsinghua University issued principles warning against AI "hallucinations" and calling for multi-source verification to prevent over-reliance on AI [9].

Group 4: Personal Experiences and Reflections
- Students express shock at the incident, emphasizing the importance of academic integrity and strict adherence to AI usage guidelines [8][10].
- The article highlights the struggle of researchers to balance the convenience of AI tools with the need to maintain academic standards and integrity [15][16].
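The "multi-source verification" that the Tsinghua guidelines call for can be made partly mechanical: before trusting an AI-suggested citation, check its machine-readable parts first. The sketch below is a hypothetical helper (the function names are mine, not taken from any published guideline) that does a first-pass syntactic screen of DOIs, since fabricated references often fail even this basic test. It assumes references are dicts with a `doi` field.

```python
import re

# Real DOIs start with "10.", a 4-9 digit registrant code, a slash,
# and a non-empty suffix with no whitespace. This is only a syntax
# check; it cannot prove a reference actually exists.
DOI_RE = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """First-pass syntactic screen for an AI-suggested DOI (hypothetical helper)."""
    return bool(DOI_RE.match(doi.strip()))

def screen_references(refs):
    """Partition references into (plausible, suspect) by DOI syntax alone."""
    plausible, suspect = [], []
    for ref in refs:
        bucket = plausible if looks_like_doi(ref.get("doi", "")) else suspect
        bucket.append(ref)
    return plausible, suspect
```

A passing syntactic check proves nothing by itself: the follow-up step would be resolving each surviving DOI against doi.org or the Crossref index and comparing the returned title and authors to the citation, which is exactly the multi-source verification the guidelines describe.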
AI-Written Paper Accepted at Top Conference ACL for the First Time, Scoring in the Top 8.2% of Submissions
Di Yi Cai Jing · 2025-05-29 16:17
Core Insights
- The article discusses the acceptance of a paper by Intology's AI scientist, Zochi, at the prestigious ACL conference, marking a milestone for AI-generated academic research [1][4][9].

Group 1: Company Overview
- Intology is a young startup founded in early 2025, focused on intelligent science research and co-founded by Ron Arel and Andy Zhou, both alumni of the University of Illinois Urbana-Champaign [4][9].
- The company launched Zochi, its AI scientist, in March 2025, and it has since gained recognition for its research capabilities [4][9].

Group 2: Research Achievement
- Zochi's paper, titled "Tempest: Automatic Multi-Turn Jailbreaking of Large Language Models with Tree Search," was accepted at ACL, a top conference in natural language processing with an acceptance rate typically below 20% [4][5].
- The paper received a final score of 4, ranking in the top 8.2% of all submissions, a notable breakthrough in AI's ability to produce high-quality research [4][5].

Group 3: Research Methodology
- The Tempest framework developed by Zochi probes vulnerabilities in large language models through multi-turn dialogue, reporting a 100% success rate on OpenAI's GPT-3.5-turbo and 97% on GPT-4 [8].
- Zochi operates independently, analyzing thousands of research papers to identify promising directions and proposing innovative solutions, mimicking the workflow of a human scientist [8][10].

Group 4: Ethical Considerations
- The emergence of AI-generated research raises ethical questions about accountability and reproducibility in science [10].
- Intology emphasizes that while Zochi operates autonomously, human researchers remain responsible for validating methods and ensuring ethical compliance [10].