Center-Reassigned Hashing with Dynamic Center Reallocation: BUPT Team Proposes and Open-Sources the CRH Project | AAAI 2026
AI Frontline (AI前线) · 2025-12-05 01:29

Core Viewpoint

- The article introduces Center-Reassigned Hashing (CRH), an end-to-end framework for large-scale image retrieval that dynamically updates hash centers while the hash function is being trained, improving retrieval accuracy and semantic consistency without the complexity of pre-training or offline center optimization [2][5][36].

Group 1: Background and Existing Methods

- Traditional deep hashing methods fall into pairwise, triplet, and point-based categories; point-based methods achieve linear complexity but limit performance by reducing hashing to a classification problem [3].
- Existing point-based methods such as CSQ, OrthoHash, and MDS initialize hash centers randomly, neglecting inter-class semantic relationships and often yielding suboptimal performance [4][5].

Group 2: CRH Framework

- CRH integrates dynamic reassignment of hash centers with hash function training, enabling end-to-end joint learning and avoiding the pitfalls of two-stage methods [5][36].
- The framework consists of three key components: hash codebook initialization, hash function optimization, and hash center reassignment, which together fold semantic relationships into the learning process [6][10].

Group 3: Performance Evaluation

- CRH outperforms existing state-of-the-art methods across all datasets and hash lengths, with relative improvements of 2.1%–2.6% on Stanford Cars, 4.8%–6.6% on NABirds, and 0.4%–4.5% on MS COCO [25].
- It achieves the highest mean Average Precision (mAP) scores against baselines such as DTSH, HashNet, and SHC [24][25].

Group 4: Ablation Studies and Robustness

- Ablation studies confirm the effectiveness of the center-reassignment and multi-head mechanisms; removing either causes significant performance drops [26][33].
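The point-based setup and the center-reassignment step summarized above can be sketched roughly as follows. This is a minimal illustration, not CRH itself: the summary does not specify the actual reassignment rule, so a greedy one-to-one matching between class feature prototypes and a CSQ-style Hadamard codebook stands in for it, and all function names are hypothetical.

```python
import numpy as np

def hadamard_codebook(n_bits):
    """Build a Hadamard-based binary codebook with rows in {-1, +1},
    the initialization popularized by CSQ-style point-based methods."""
    h = np.array([[1.0]])
    while h.shape[0] < n_bits:
        h = np.block([[h, h], [h, -h]])  # Sylvester construction
    return h[:n_bits]

def reassign_centers(prototypes, codebook):
    """Greedily reassign each class to the unused codebook row most
    similar to its current feature prototype, so that no two classes
    collapse onto the same center. (Illustrative stand-in; the summary
    does not describe CRH's actual reassignment procedure.)"""
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    c = codebook / np.linalg.norm(codebook, axis=1, keepdims=True)
    sim = p @ c.T  # cosine similarity: classes x codebook rows
    used, centers = set(), []
    for i in range(len(prototypes)):
        j = next(k for k in np.argsort(-sim[i]) if k not in used)
        used.add(j)
        centers.append(codebook[j])
    return np.array(centers)
```

For example, with four class prototypes and an 8-bit codebook, `reassign_centers(protos, hadamard_codebook(8))` returns one distinct ±1 center per class; in an end-to-end scheme like CRH, such a reassignment step would alternate with ordinary hash-function training rather than run once offline.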
- CRH is robust to initialization randomness, showing low standard deviation in mAP across multiple runs [30].

Group 5: Semantic Quality Analysis

- The learned hash centers exhibit a significantly higher Pearson correlation coefficient (PCC) with reference semantic similarity than random or non-semantic baselines, indicating effective semantic alignment [34].
- A positive correlation between mAP and PCC suggests that better semantic alignment generally translates into better retrieval performance [35].

Group 6: Future Directions

- The work highlights the importance of dynamic center optimization in deep hashing and suggests extensions to multimodal retrieval and long-tail distribution scenarios [37].
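The PCC diagnostic from Group 5 can be sketched as follows: flatten the pairwise cosine similarities of the learned centers and of a reference semantic similarity matrix, then take their Pearson correlation. This is a minimal sketch with synthetic inputs; the paper's exact choice of reference similarity is not given in the summary.

```python
import numpy as np

def center_semantic_pcc(centers, semantic_sim):
    """Pearson correlation between the pairwise cosine similarity of
    hash centers and a reference semantic similarity matrix, computed
    over off-diagonal entries only (self-similarity is uninformative)."""
    c = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    center_sim = c @ c.T
    mask = ~np.eye(len(centers), dtype=bool)  # drop the diagonal
    return np.corrcoef(center_sim[mask], semantic_sim[mask])[0, 1]
```

A PCC near 1 means the geometry of the centers mirrors the reference semantics; per Group 5, higher PCC tends to co-occur with higher mAP.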
