Core Insights
- The article discusses the development of an AI model named AFLoc, which can autonomously identify lesions in medical images without prior annotation by doctors [1][3]

Group 1: AI Model Development
- The AFLoc model learns from two types of information: medical images (such as chest X-rays, fundus photos, and pathology slides) and the clinical reports written by doctors [3]
- Through repeated "contrastive learning", AFLoc gradually learns to pinpoint the most likely lesion locations in images, even without manual annotations [3]

Group 2: Performance Validation
- The research team systematically validated AFLoc on three typical medical imaging modalities: chest X-rays, fundus images, and tissue pathology images, with strong performance across all three [3]
- In chest X-ray experiments, AFLoc outperformed existing methods on multiple lesion-localization metrics across 34 common chest diseases and 8 mainstream public datasets, achieving results that meet or exceed human expert levels [3]
- AFLoc also demonstrated strong diagnostic capability, outperforming current methods in zero-shot classification tasks for chest X-ray, fundus, and tissue pathology images, and excelling in particular at diagnosing retinal diseases [3]

Group 3: Implications for Clinical Use
- The model avoids traditional deep learning's reliance on large-scale manually annotated data, significantly improving how efficiently medical image data is used and how well the model generalizes [5]
- AFLoc offers a feasible path for moving clinical imaging AI from "manual annotation dependence" to "self-supervised learning", providing a new technical paradigm for building smarter, more versatile medical AI systems [5]
- The research team plans to further validate and apply AFLoc in real clinical settings, accelerating its translation into a clinical decision-support system [5]
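The "contrastive learning" between images and reports, and the zero-shot classification built on it, can be sketched roughly as below. The article does not publish AFLoc's code or architecture, so this is a generic CLIP-style formulation under stated assumptions; every name here (`contrastive_loss`, `temperature`, `zero_shot_classify`) is illustrative, not AFLoc's actual API.

```python
import numpy as np

# Illustrative sketch only: generic image-report contrastive learning.
# Paired image/report embeddings are pulled together; mismatched pairs
# in the same batch are pushed apart.

def normalize(x):
    """L2-normalize embeddings along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss: the i-th image should be most similar
    to the i-th report embedding in the batch."""
    logits = normalize(img_emb) @ normalize(txt_emb).T / temperature
    labels = np.arange(len(logits))  # i-th image matches i-th report

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # average the image-to-text and text-to-image directions
    return (cross_entropy(logits, labels) + cross_entropy(logits.T, labels)) / 2

def zero_shot_classify(img_emb, disease_text_emb):
    """Zero-shot diagnosis: pick the disease description whose text
    embedding is most similar to each image embedding."""
    sims = normalize(img_emb) @ normalize(disease_text_emb).T
    return sims.argmax(axis=-1)

# Toy usage: perfectly aligned embeddings give near-zero loss, and each
# image is assigned its own report's class.
emb = np.eye(3)
loss = contrastive_loss(emb, emb)     # near zero: pairs already align
preds = zero_shot_classify(emb, emb)  # → array([0, 1, 2])
```

In this formulation, lesion localization would come from comparing report text against local image regions rather than the whole image, which is one plausible reading of how annotation-free localization emerges from image-report alignment.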
Medical imaging diagnosis may bid farewell to the "manual annotation era"
Huan Qiu Wang Zi Xun·2026-01-07 01:18