Express | A Global First! ESMO Releases Landmark AI Healthcare Guideline, with 23 Consensus Statements Addressing Key Industry Challenges
GLP1减重宝典·2025-11-10 13:34

Core Viewpoint
- The article emphasizes the urgent need for guidelines on the application of AI in healthcare, particularly in oncology, as AI tools like ChatGPT become more prevalent in the medical field. The European Society for Medical Oncology (ESMO) has released the first systematic framework for the safe integration of large language models (LLMs) into oncology practice, focusing on data protection, clinical oversight, and decision support [2][3].

Group 1: AI Application Categories
- ESMO categorizes AI applications into three types, each with specific safety and governance recommendations [3].

Group 2: Type 1 LLM Applications - Patient-Facing Tools
- Type 1 applications include chatbots and virtual assistants that provide symptom information or treatment advice. A study of a GPT-4-based breast cancer chatbot highlighted the importance of clear communication from patients in order to receive accurate information [7][8].
- Key recommendations for patients include maintaining open dialogue with healthcare providers, not treating LLMs as replacements for in-person consultations, and ensuring privacy when sharing health data [7][8][9].

Group 3: Type 2 LLM Applications - Medical Professionals
- Type 2 applications are designed for oncologists and healthcare teams, focusing on decision support, documentation, and translation. These tools can assist in clinical decision-making but require strict validation and clear accountability mechanisms [10][12].
- Recommendations for healthcare professionals include verifying AI-suggested information, maintaining human oversight, and not delegating final responsibility for patient care to AI [11][12].

Group 4: Type 3 LLM Applications - Backend Systems
- Type 3 applications operate in the background of healthcare institutions, handling tasks such as data extraction from electronic health records (EHRs) and patient screening for clinical trials. The quality of AI outputs depends heavily on the accuracy and completeness of clinical documentation [13][15].
- Continuous performance monitoring and institutional governance are emphasized to ensure the reliability and safety of these AI systems [15].