Nature sub-journal: Wang Shanshan and Zhang Kang collaborate on a new AI model that finds lesions autonomously, without manual annotation by doctors
生物世界· 2026-01-10 03:06
Core Viewpoint
- The research introduces AFLoc (Annotation-Free pathology Localization), a multimodal vision-language model that localizes pathologies in medical images automatically, without prior annotations from doctors, and demonstrates strong generalization that surpasses human benchmarks on a range of pathology imaging tasks [4][9].

Group 1
- AFLoc performs pathology localization without requiring annotations, reducing dependence on expert input [4][10].
- The model uses contrastive learning over a multi-level semantic structure, aligning diverse medical concepts with rich image features so it can adapt to the varied manifestations of pathologies [7][9].
- Initial experiments on a dataset of 220,000 chest X-ray image-report pairs show that, without any annotations, AFLoc outperforms current state-of-the-art methods on both localization and classification tasks [9].

Group 2
- The study validates AFLoc's generalization across modalities, including histopathology and retinal fundus images, indicating robustness in diverse clinical environments [9][10].
- The findings highlight AFLoc's potential to lower annotation requirements and adapt to complex clinical applications, a significant advance in medical imaging [10].
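The multi-level contrastive alignment described in Group 1 can be illustrated with a minimal sketch. This is not the authors' code: the symmetric InfoNCE loss shown here is a common choice for image-text contrastive training, and the function names, shapes, and the idea of summing the loss over several semantic levels (e.g. word-, sentence-, and report-level text features paired with image features of matching granularity) are illustrative assumptions.

```python
# Hypothetical sketch of multi-level image-text contrastive alignment,
# in the spirit of AFLoc's annotation-free training. All names and
# shapes are assumptions for illustration, not the published method.
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings."""
    # L2-normalize so dot products are cosine similarities
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature       # (B, B) similarity matrix
    idx = np.arange(len(logits))             # matched pairs lie on the diagonal

    def xent(l):
        # cross-entropy pulling each diagonal (matched) pair together
        l = l - l.max(axis=1, keepdims=True)         # numerical stability
        p = np.exp(l) / np.exp(l).sum(axis=1, keepdims=True)
        return -np.log(p[idx, idx]).mean()

    # average the image->text and text->image directions
    return 0.5 * (xent(logits) + xent(logits.T))

def multilevel_loss(levels):
    """Sum the contrastive loss over several semantic levels, each a
    (image_features, text_features) pair of matching granularity."""
    return sum(info_nce(img, txt) for img, txt in levels)
```

Because matched image-report pairs serve as free supervision, no lesion-level labels are needed; localization then falls out of the learned alignment between text concepts and image regions.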