Nature subjournal: Wang Shanshan (王珊珊) and Zhang Kang (张康) jointly develop a novel AI model that finds lesions autonomously, with no manual annotation by doctors
生物世界 · 2026-01-10 03:06
Core Viewpoint
- The research introduces AFLoc (Annotation-Free pathology Localization), a multimodal vision-language model that automatically localizes pathologies in medical images without prior annotations from doctors, showing strong generalization that surpasses human benchmarks on a range of pathology imaging tasks [4][9].

Group 1
- AFLoc performs pathology localization without requiring annotations, reducing dependence on expert input [4][10].
- The model uses a contrastive learning approach built on a multi-level semantic structure, aligning diverse medical concepts with rich image features to adapt to the varied manifestations of pathologies [7][9].
- Initial experiments on a dataset of 220,000 chest X-ray image-report pairs show that AFLoc outperforms current state-of-the-art methods on both annotation-free localization and classification tasks [9].

Group 2
- The research validates AFLoc's generalization across modalities, including histopathology and retinal fundus images, indicating robustness in diverse clinical environments [9][10].
- The findings highlight AFLoc's potential to lower annotation requirements and adapt to complex clinical applications, a significant advance for medical imaging [10].
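The summary names contrastive learning over paired images and reports as AFLoc's core training signal. AFLoc's actual multi-level objective is not reproduced here; the sketch below shows the generic symmetric image-text contrastive (InfoNCE) loss that such models build on, where matched image/report pairs in a batch are pulled together and all other pairings serve as negatives. All function names, shapes, and the temperature value are illustrative assumptions, not AFLoc's API.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Unit-normalize embeddings so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/report embeddings.

    Matched (image_i, report_i) pairs sit on the diagonal of the similarity
    matrix; off-diagonal pairings act as in-batch negatives. Per the article,
    AFLoc applies this kind of alignment at several semantic levels
    (word, sentence, report), which is not modeled here.
    """
    img = l2_normalize(img_emb)
    txt = l2_normalize(txt_emb)
    logits = img @ txt.T / temperature      # (B, B) cosine-similarity logits
    idx = np.arange(len(logits))            # diagonal indices = matched pairs

    def ce(lg):
        # Row-wise cross-entropy with the diagonal as the target class.
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (ce(logits) + ce(logits.T))
```

With perfectly aligned pairs the loss approaches zero; mismatched pairings drive it up, which is what pushes the encoders to agree on where report findings appear in the image.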
Medical imaging diagnosis may be leaving the "manual annotation era" behind
Huan Qiu Wang Zi Xun · 2026-01-07 01:18
Core Insights
- The article discusses the development of AFLoc, an AI model that can autonomously identify lesions in medical images without prior annotation by doctors [1][3].

Group 1: AI Model Development
- AFLoc learns from two types of information: medical images (such as chest X-rays, fundus photos, and pathology slides) and clinical reports written by doctors [3].
- Through repeated contrastive learning, AFLoc comes to accurately identify the most likely lesion locations in images, even without manual annotations [3].

Group 2: Performance Validation
- The research team systematically validated AFLoc on three typical medical imaging modalities: chest X-rays, fundus images, and tissue pathology images, with excellent performance on all three [3].
- In chest X-ray experiments, AFLoc outperformed existing methods on multiple lesion-localization metrics across 34 common chest diseases and 8 mainstream public datasets, matching or exceeding human expert levels [3].
- AFLoc also demonstrated strong diagnostic capability, outperforming current methods on zero-shot classification of chest X-ray, fundus, and tissue pathology images, and excelling particularly at diagnosing retinal diseases [3].

Group 3: Implications for Clinical Use
- The model avoids traditional deep learning's reliance on large-scale manually annotated data, significantly improving the efficiency of medical image data utilization and the model's generalization ability [5].
- AFLoc offers a feasible path for clinical imaging AI to move from "manual annotation dependence" to "self-supervised learning", providing a new technical paradigm for smarter, more versatile medical AI systems [5].
- The research team plans further validation and application of AFLoc in real clinical settings, accelerating its translation into a clinical decision support system [5].
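The zero-shot classification results above rely on a recipe common to vision-language models: instead of training a classifier on labeled images, each candidate disease is described as a text prompt, and the image is assigned to whichever prompt embedding it most resembles. The article does not detail AFLoc's exact procedure; this is a minimal sketch assuming pre-computed embeddings, with all names and shapes hypothetical.

```python
import numpy as np

def zero_shot_classify(image_emb, prompt_embs, class_names):
    """Pick the class whose text-prompt embedding has the highest cosine
    similarity to the image embedding.

    No labeled training images are needed: the text encoder effectively
    supplies the classifier weights, which is why models trained only on
    image-report pairs can still diagnose unseen categories.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = prompt_embs / np.linalg.norm(prompt_embs, axis=1, keepdims=True)
    scores = txt @ img                       # one similarity score per class
    return class_names[int(np.argmax(scores))], scores
```

Swapping in a different prompt set changes the label space without any retraining, which is the practical appeal for clinical settings where annotated examples of every disease are scarce.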