Report: The "AI + Healthcare" Industry Enters an Adjustment Period, Shifting from "Wild Growth" to "Refined Cultivation"
Core Insights
- The report by KPMG highlights the rapid growth of the "AI + Healthcare" sector in China from 2020 to 2021, with 280 financing rounds and total financing exceeding 40 billion yuan, indicating significant demand for digitalization and intelligence in healthcare [1]
- From 2023 to 2024, investment and financing in the "AI + Healthcare" sector are expected to decline and stabilize, marking a transition from "wild growth" to "refined cultivation" [1]
- AI has made breakthroughs in fields such as computer vision, natural language processing, and robotics, with significant applications in drug development, enhancing precision medicine by improving gene editing accuracy from 85% to over 98% [1]

Investment Trends
- The report indicates a decrease in investment activity in the "AI + Healthcare" sector, suggesting a shift toward more sustainable and strategic growth approaches [1]
- The integration of AI with technologies such as 5G and big data is creating new research directions and treatment methods, with emerging fields such as AI drug development and traditional Chinese medicine innovation gaining traction [2]

Challenges and Governance
- AI in healthcare faces stringent challenges due to the sensitivity of medical data, irreversible decision outcomes, and complex responsibility attribution, necessitating a focus on "human-machine alignment" [2]
- "Human-machine alignment" involves ensuring that AI's logic aligns with human medical standards and societal values through mechanisms such as algorithm transparency and ethical constraints [2]
- The future development of "AI + Healthcare" will depend not only on computational power and data scale but also on companies' strategic capabilities in compliance design and interdisciplinary integration [2]

Policy and Support
- The Chinese biotechnology sector is receiving systematic, policy-driven support focused on collaborative innovation across the entire value chain, capital ecosystem restructuring, expedited review processes, and payment mechanism reforms [2]
With Cyber Doctors, Is Overdiagnosis No Longer a Worry?
虎嗅APP· 2025-06-03 13:52
Core Viewpoint
- The article discusses the challenges and biases associated with AI in the medical field, highlighting how socioeconomic factors can influence the quality of care patients receive, leading to disparities in medical treatment and outcomes [2][3][4]

Group 1: AI and Bias in Healthcare
- Recent studies indicate that AI models in healthcare may exacerbate existing biases, with high-income patients more likely to receive advanced diagnostic tests like CT scans, while lower-income patients are often directed to basic checks or no checks at all [2][3]
- The research evaluated nine natural language models across 1,000 emergency cases, revealing that patients labeled with socioeconomic indicators, such as "no housing," were more frequently directed to emergency care or invasive interventions [3] (see the audit sketch after this summary)
- AI's ability to predict patient demographics from X-rays alone raises concerns about biased treatment recommendations, which could widen health disparities among different populations [3][4]

Group 2: Data Quality and Its Implications
- The quality of medical data is critical, with issues such as poor representation of low-income groups and biases in data labeling contributing to the challenges faced by AI in healthcare [8][9]
- Studies have shown that biases in AI can lead to significant drops in diagnostic accuracy, with one study indicating an 11.3% decrease when clinicians used biased AI models [6][8]
- Unconscious biases in medical practice, such as the perception of women's pain as exaggerated, further complicate equitable healthcare delivery [9][10]

Group 3: Overdiagnosis and Its Trends
- Research from Fudan University indicates that the overdiagnosis rate for female lung cancer patients in China has more than doubled, from 22% (2011-2015) to 50% (2016-2020), with nearly 90% of lung adenocarcinoma patients being overdiagnosed [11]
- The article suggests that simply providing unbiased data may not eliminate biases in AI, as the complexity of medical biases requires a more nuanced approach [11][12]

Group 4: The Need for Medical Advancement
- The article emphasizes that addressing overdiagnosis and bias in healthcare is linked to the advancement of medical knowledge and practices, advocating for a shift toward precision medicine [19][20]
- It highlights the importance of continuous medical innovation and the need for sufficient data to clarify the boundary between overdiagnosis and precision medicine [19][20]
- The integration of AI in healthcare should take a holistic approach, considering the interconnectedness of various medical fields to improve patient outcomes [21][22]
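The evaluation described above (otherwise identical emergency cases carrying different socioeconomic labels, compared by the level of care recommended) can be illustrated with a short counterfactual audit sketch. This is a hypothetical illustration, not the study's protocol or code: the `triage` stub, the care-level ordering, and the label strings are assumptions standing in for a real model call and a real case set.

```python
# Hypothetical counterfactual bias audit for a clinical triage LLM.
# Sketch only: triage(), CARE_LEVELS, and the labels are illustrative
# assumptions, not the methodology of the study summarized above.
from collections import defaultdict

# Care levels ordered from least to most intensive, so recommendations
# can be compared numerically across labels.
CARE_LEVELS = ["no testing", "basic tests", "advanced imaging (CT/MRI)"]

def triage(prompt: str) -> str:
    """Stand-in for a real LLM call; replace with an actual model query."""
    return "basic tests"  # fixed output keeps the sketch runnable end to end

def audit(cases: list[str], labels: list[str]) -> dict[str, float]:
    """Run each case once per socioeconomic label and average the care level."""
    scores = defaultdict(list)
    for case in cases:
        for label in labels:
            prompt = f"Patient background: {label}.\n{case}\nRecommend a care level."
            recommendation = triage(prompt)
            scores[label].append(CARE_LEVELS.index(recommendation))
    # A persistent gap between labels on otherwise identical cases signals bias.
    return {label: sum(vals) / len(vals) for label, vals in scores.items()}

if __name__ == "__main__":
    synthetic_cases = ["55-year-old with acute chest pain and shortness of breath."]
    print(audit(synthetic_cases, ["high income", "low income", "no housing"]))
```

In a real audit, the care-level ordering and label sets would come from the study design, and the fixed stub would be replaced by calls to each model under test.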
With Cyber Doctors, Is Overdiagnosis No Longer a Worry?
Hu Xiu· 2025-06-03 01:03
Imagine a cutting-edge medical technology that could cure your illness, but because your doctor was not aware of it, you were steered toward a conventional treatment and recovered far less well than patients who received the new therapy. Would you be angry once you learned the truth?

Now imagine the same situation with a cyber doctor, except the cause is no longer an information lag but an AI making that choice based on your gender or your income level.

A recent series of international studies shows that ever-smarter large models are also amplifying healthcare's habit of treating patients differently depending on who they are.

Researchers at the Icahn School of Medicine at Mount Sinai and the Mount Sinai Health System, in findings published in a Nature sub-journal, report that patients labeled "high income" were more likely to be offered CT and MRI scans, while low- and middle-income cases were usually assigned basic tests or no tests at all. Patients tagged with information such as "no housing" were more frequently directed toward urgent care, invasive interventions, or mental health evaluations. The study evaluated nine natural language models across 1.7 million visit outcomes from 1,000 emergency cases (500 real and 500 synthetic).

Those hoping that "cyber doctors" would straighten out healthcare have been disappointed once again.

As for why, data is indeed a critical factor. According to research by Tong Yuanyuan and colleagues at the Institute of Information on Traditional Chinese Medicine, China Academy of Chinese Medical Sciences, beyond the frequently criticized poor quality of medical data caused by low levels of informatization, there are many other data problems.

Earlier research has shown that, from X-rays alone, AI can predict a patient's ...
Medical AI Must Take "Human-Machine Alignment" as a Prerequisite
Jing Ji Wang· 2025-04-30 02:21
Core Viewpoint
- The article discusses the importance of AI ethics, particularly in the medical field, emphasizing the need for "human-machine alignment" to ensure AI technologies align with human values and societal norms [2][3]

Group 1: Human-Machine Alignment
- Human-machine alignment is defined as the process of ensuring AI's goals, behaviors, and outputs are consistent with human values and social norms, representing a systematic approach to addressing AI ethical issues [3]
- The concept of human-machine alignment has historical roots, with its principles validated through practical applications of AI technology [3][6]

Group 2: Importance in Medical AI
- In the medical field, human-machine alignment serves three core functions: explainability, trustworthiness, and human harmony [4][5]
- Explainability allows AI to present clear decision-making logic, helping alleviate concerns from both doctors and patients [4]
- Trust is built when AI recommendations adhere to medical ethics, enabling humans to rely on AI for health-related decisions [5]
- Human harmony ensures that AI applications do not deviate from genuine human needs, incorporating emotional and ethical considerations into algorithm design [5]

Group 3: Ethical Compliance in Medical AI
- Medical AI applications face unique challenges, including data sensitivity, irreversible outcomes, and complex responsibility structures [7]
- A collaborative approach across five key areas, namely technical architecture, data set construction, hospital management, patient awareness, and industry regulation, is essential for ensuring ethical compliance in medical AI [7][9]

Group 4: Data Mechanisms
- Establishing a "data flywheel" mechanism is crucial for continuous model optimization, creating a closed-loop system that integrates user feedback into AI development [11] (see the sketch after this summary)
- A dual mechanism for data access and incentives is necessary to ensure data quality and encourage hospitals and doctors to participate in the alignment process [12]

Group 5: Regulatory Framework
- A unified national certification standard for medical AI alignment should be established, with third-party evaluations to ensure compliance and robustness [10]
- Regular assessments by multidisciplinary ethical committees can help maintain alignment and prevent technological biases [10]
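As a rough illustration of the "data flywheel" idea mentioned under Group 4, the sketch below shows one way such a closed loop might be structured: clinician feedback on deployed-model outputs is collected, gated by human review, and only then added to the pool used for the next fine-tuning round. The class names, fields, and batch threshold are assumptions made for illustration, not a description of any specific system discussed in the article.

```python
# Hypothetical "data flywheel" loop for a medical AI model: feedback from
# deployment flows through human review before it can influence retraining.
# All names, fields, and thresholds below are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class FeedbackRecord:
    case_id: str
    model_output: str
    clinician_correction: str         # what the clinician says the output should be
    approved_by_review: bool = False  # ethics/quality review gate

@dataclass
class DataFlywheel:
    pending: list[FeedbackRecord] = field(default_factory=list)
    training_pool: list[FeedbackRecord] = field(default_factory=list)

    def collect(self, record: FeedbackRecord) -> None:
        """Step 1: capture clinician feedback on a deployed model's output."""
        self.pending.append(record)

    def review(self) -> None:
        """Step 2: only records passing human review enter the training pool."""
        self.training_pool += [r for r in self.pending if r.approved_by_review]
        self.pending.clear()

    def retrain_if_ready(self, batch_size: int = 1000) -> bool:
        """Step 3: trigger a fine-tuning round once enough vetted data accrues."""
        if len(self.training_pool) < batch_size:
            return False
        # fine_tune(self.training_pool) would run here in a real system.
        self.training_pool.clear()
        return True
```

The review gate is what ties the flywheel back to the alignment theme: feedback only drives retraining after it has passed the kind of ethical and quality checks the article calls for.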