A Bombshell in Vector Retrieval! Fu Cong and Zhejiang University Release the IceBerg Benchmark: HNSW Is Not Optimal, and the Evaluation System Is Seriously Biased
量子位 (QbitAI) · 2025-12-25 11:51

Core Insights
- The integration of multimodal data into RAG and agent frameworks is a hot topic in LLM applications, and vector retrieval is the most natural recall mechanism for multimodal data [1]
- A common misconception holds that vector retrieval is a solved, standardized component, typically defaulting to HNSW, yet HNSW performs poorly on many downstream tasks [1]
- A new benchmark, IceBerg, evaluates vector retrieval algorithms on downstream semantic tasks rather than traditional metrics such as Recall-QPS, challenging long-held industry assumptions [1]

Group 1: Misconceptions in Vector Retrieval
- Many practitioners treat vector retrieval as standardized and rely on HNSW without checking how it performs on their actual tasks [1]
- The evaluation systems used in the past only scratch the surface of the complexities involved in vector retrieval [1]
- There is a significant gap between the perceived effectiveness of vector retrieval methods and their actual performance on downstream tasks [7]

Group 2: Case Studies and Findings
- On the large-scale face verification dataset Glint360K, recognition accuracy saturated before retrieval Recall reached 99%, revealing a disconnect between distance-metric recall and actual task performance (the first sketch below illustrates this decoupling on toy data) [5]
- NSG, a state-of-the-art graph-based retrieval algorithm, holds a clear lead in distance-metric recall yet underperforms the quantization-based RaBitQ on downstream semantic tasks [5]
- Different metric spaces can produce vastly different downstream outcomes, underscoring the importance of metric selection in vector retrieval [6]

Group 3: Information Loss and Model Limitations
- The authors propose an information-loss funnel model: information is shed at every stage of the embedding pipeline, which explains the gap between expected and observed outcomes (the quantization sketch below shows one such stage) [7]
- The capacity of the representation model directly bounds embedding quality, with generalization error and the training objective both affecting performance [10][11]
- Many models are not trained to produce a good metric space in the first place, so substantial information is lost during embedding [13]

Group 4: Metric and Algorithm Selection
- The choice of metric (Euclidean vs. inner product) can substantially change results, especially with generative representation models (see the two-vector example below) [15]
- Vector retrieval methods fall into two families, space partitioning and graph-based indexing, and their relative performance depends on the data distribution (the IVF-vs-HNSW sketch below builds one of each) [17]
- IceBerg reshuffles the rankings of vector retrieval algorithms, demonstrating that HNSW is not always the top performer on downstream tasks [18]

Group 5: Automation and Future Directions
- IceBerg ships an automated algorithm-selection tool that helps users pick a suitable method without deep background knowledge [21]
- Statistical indicators over the embeddings can reveal their affinity to particular metrics and index families, enabling automated decisions (the last sketch computes two such illustrative statistics) [23]
- The team calls for future vector retrieval research to focus on task-metric compatibility and on unified vector retrieval algorithms [25]
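The sketches below make a few of these findings concrete. They are minimal illustrations built on assumptions stated in each lead-in, not reproductions of the IceBerg experiments.

First, the Group 2 decoupling between distance-metric recall and downstream accuracy. This sketch uses synthetic Gaussian-cluster data and a deliberately crude "approximate" search (probing a random 30% of the database) as a stand-in for a real ANN index; both the data and the search procedure are invented for illustration.

```python
"""Minimal sketch: distance-metric recall vs. downstream task accuracy.
Synthetic clustered data and a toy 'approximate' search, both assumptions."""
import numpy as np

rng = np.random.default_rng(0)

# Synthetic database: 5 classes, 2000 vectors of dimension 32.
n_classes, n_db, dim = 5, 2000, 32
centers = rng.normal(size=(n_classes, dim)) * 3.0
db_labels = rng.integers(0, n_classes, size=n_db)
db = centers[db_labels] + rng.normal(size=(n_db, dim))

# Queries drawn from the same clusters.
n_q = 200
q_labels = rng.integers(0, n_classes, size=n_q)
queries = centers[q_labels] + rng.normal(size=(n_q, dim))

def exact_nn(q):
    # Brute-force exact nearest neighbor under squared L2.
    return np.argmin(((db - q) ** 2).sum(axis=1))

def approx_nn(q, probe=0.3):
    # Toy ANN stand-in: search only a random 30% subset of the database.
    idx = rng.choice(n_db, size=int(n_db * probe), replace=False)
    sub = db[idx]
    return idx[np.argmin(((sub - q) ** 2).sum(axis=1))]

recall_hits, task_hits = 0, 0
for q, lbl in zip(queries, q_labels):
    truth, found = exact_nn(q), approx_nn(q)
    recall_hits += (truth == found)          # recall@1 against exact search
    task_hits += (db_labels[found] == lbl)   # downstream label accuracy

print(f"recall@1 = {recall_hits / n_q:.2f}")  # low: many true NNs missed
print(f"task acc = {task_hits / n_q:.2f}")    # high: labels still correct
```

Recall@1 lands near the 0.3 probe rate while label accuracy stays near 1.0: a retrieved neighbor that is not the exact nearest neighbor can still carry the right label, which is exactly why pushing Recall toward 99% can stop paying off downstream.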
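Next, one stage of the Group 3 information-loss funnel: compressing float embeddings to one bit per dimension. The sign-based code below is a generic binary quantizer used only to show the loss; it is not the actual RaBitQ algorithm, and the data is random.

```python
"""Minimal sketch of one funnel stage: 1-bit-per-dimension quantization
(generic sign quantizer, NOT RaBitQ) and how often the true NN survives."""
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 64
db = rng.normal(size=(n, d)).astype("float32")
queries = rng.normal(size=(100, d)).astype("float32")

db_bits = db > 0       # one bit per dimension
q_bits = queries > 0

hits = 0
for q, qb in zip(queries, q_bits):
    truth = np.argmin(((db - q) ** 2).sum(axis=1))  # exact float-space NN
    hamming = (db_bits != qb).sum(axis=1)           # distance on the codes
    hits += (np.argmin(hamming) == truth)

print(f"NN preserved after 1-bit quantization: {hits}/100")
```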
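For the Group 4 metric question, a two-vector toy example (values invented for illustration) shows how Euclidean distance and inner product can rank the same candidates differently once vector norms vary, a situation the article flags as common with generative representation models.

```python
"""Minimal sketch: the same query retrieves different neighbors under
Euclidean (L2) distance vs. inner product (IP) once norms vary."""
import numpy as np

db = np.array([
    [1.0, 0.0],   # unit vector aligned with the query
    [3.0, 1.0],   # larger-norm vector, slightly off-axis
])
q = np.array([1.0, 0.0])

l2 = ((db - q) ** 2).sum(axis=1)   # smaller is better
ip = db @ q                        # larger is better

print("L2 picks:", np.argmin(l2))  # -> 0 (the nearby unit vector)
print("IP picks:", np.argmax(ip))  # -> 1 (the large-norm vector)
```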
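Still on Group 4: the two index families can be built side by side. This sketch assumes the faiss library is installed and uses random data; it shows only the construction and search of one space-partitioning index (IVF) and one graph-based index (HNSW), not a verdict on which is better, since that depends on the data distribution.

```python
"""Minimal sketch (assumes faiss is installed): one space-partitioning
index (IVF) and one graph-based index (HNSW) over the same vectors."""
import numpy as np
import faiss

d, n = 64, 10000
xb = np.random.default_rng(0).normal(size=(n, d)).astype("float32")
xq = xb[:5]  # reuse a few database rows as queries

# Space partitioning: inverted file with flat (uncompressed) lists.
quantizer = faiss.IndexFlatL2(d)
ivf = faiss.IndexIVFFlat(quantizer, d, 100)  # 100 Voronoi cells
ivf.train(xb)
ivf.add(xb)
ivf.nprobe = 8                               # cells probed per query

# Graph-based: HNSW with 32 neighbors per node.
hnsw = faiss.IndexHNSWFlat(d, 32)
hnsw.add(xb)
hnsw.hnsw.efSearch = 64                      # search beam width

for name, index in [("IVF", ivf), ("HNSW", hnsw)]:
    dist, ids = index.search(xq, 3)
    print(name, ids[0])  # query 0 should typically find itself (row 0) first
```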
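Finally, for the Group 5 automated-selection idea: the two statistics below (norm spread and an effective-dimensionality proxy) are hypothetical examples of cheap indicators one could compute over an embedding matrix. They illustrate the concept only and are not the indicators IceBerg actually uses.

```python
"""Hypothetical sketch: cheap statistics over an embedding matrix that
could inform metric/index choice. Illustrative only, not IceBerg's own."""
import numpy as np

def embedding_report(X: np.ndarray) -> dict:
    norms = np.linalg.norm(X, axis=1)
    # If norms barely vary, L2 and inner-product rankings mostly agree;
    # a large norm spread means the metric choice genuinely matters.
    norm_cv = norms.std() / norms.mean()
    # Participation ratio of the covariance spectrum: a rough proxy for
    # effective dimensionality of the embedding distribution.
    eig = np.linalg.eigvalsh(np.cov(X, rowvar=False))
    eff_dim = eig.sum() ** 2 / (eig ** 2).sum()
    return {"norm_cv": float(norm_cv), "effective_dim": float(eff_dim)}

X = np.random.default_rng(0).normal(size=(5000, 128))
print(embedding_report(X))
```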