Core Insights
- The integration of artificial intelligence (AI) into research has given rise to new forms of academic misconduct, including AI-written papers, data fabrication, and implicit plagiarism, posing significant challenges to academic integrity [1]

Group 1: AI Usage in Research
- The use of generative AI in research is increasing, yet only 7% of authors disclose its use in submissions, contrasting sharply with surveys indicating usage rates above 50% [2]
- "Hallucinated citations," references to papers that do not exist, have been observed, with some submissions containing 10 to 15 of them [2]

Group 2: Editorial Policies and Practices
- Papers that use AI improperly without disclosure will be rejected, and authors may be advised on appropriate AI usage [3]
- The rejection rate for improper AI use is currently low, with only a few instances in the past month [4]
- The journal does not support using AI to replace researchers in generating scientific insights, analyzing data, or drawing conclusions [5]

Group 3: Acceptable AI Applications
- Authors are encouraged to use generative AI to improve grammar and language expression and to summarize existing research, provided they disclose its use [7]

Group 4: Addressing Integrity Challenges
- The journal has established an internal working group on research integrity that monitors ongoing cases and tracks external guidelines [8]
- A research integrity committee is being planned to address the impact of generative AI on academic integrity, with a launch meeting scheduled for February next year [8]

Group 5: Principles for AI Use
- Researchers and editors should treat AI as an auxiliary tool, remaining cautious about its limitations and the risk of de-skilling the next generation of researchers [9]
How do scientific journals view AI-assisted paper writing?
Ke Ji Ri Bao (Science and Technology Daily) · 2025-12-12 01:31