Tsinghua scholars publish in Nature Medicine: DeepSeek is racing ahead, already deployed in nearly 800 hospitals; regulation should be improved to ensure safety
生物世界 (Bio World) · 2025-07-30 09:10

Core Viewpoint
- The release of DeepSeek-R1, an open-source large language model (LLM) developed by a Chinese startup, has dramatically accelerated the deployment of AI in hospitals, improving efficiency and cutting costs relative to existing models such as ChatGPT [2][12].

Group 1: Deployment and Impact
- DeepSeek-R1 was released in January 2025 and quickly became the most downloaded chatbot in the US Apple App Store, overtaking OpenAI's ChatGPT [2].
- As of May 8, 2025, DeepSeek-R1 had been deployed in more than 755 hospitals across China, from top-tier hospitals to grassroots medical institutions, with more than 500 running fully local deployments [5][8].
- The model supports a wide range of tasks, including clinical services, hospital operations, and personal health management, providing significant assistance in diagnosis, treatment recommendations, and administrative work [13][21].

Group 2: Advantages of DeepSeek-R1
- Deployment costs are far lower than those of traditional AI systems: a complete local deployment costs under $100,000, putting it within reach of many smaller hospitals [21].
- DeepSeek-R1's reasoning capabilities are comparable to those of top international models, which is essential for handling complex medical tasks [22].
- Its open-source nature allows hospitals to customize the model and integrate it into existing systems, enhancing its utility [22].

Group 3: Regulatory Challenges
- The rapid deployment of DeepSeek-R1 has exposed a regulatory "gray area," raising concerns about patient safety and the need for a robust regulatory framework [6][10].
- The lack of clear classification standards for AI applications in healthcare leaves it ambiguous which applications count as high-risk [32].
- The current regulatory regime does not adequately address the unique challenges posed by large language models, making reform urgent [35].
Group 4: Recommendations for Regulation
- The article calls for a risk-based classification system for AI applications in healthcare, distinguishing high-risk from low-risk applications [35].
- High-risk applications should be regulated as medical devices, with stringent approval and monitoring processes [35].
- Continuous real-world monitoring and evaluation of AI applications are essential to ensure safety and effectiveness [38].