ICCV 2025 | Crossing the Boundary Between Vision and Language, Opening a New Chapter in Human-Object Interaction Perception: A Peking University Team Proposes INP-CC to Reshape Open-Vocabulary HOI Detection
机器之心 · 2025-08-20 00:15
Core Viewpoint
- The article introduces Interaction-aware Prompt and Concept Calibration (INP-CC), a novel open-vocabulary human-object interaction (HOI) detection method that improves interaction understanding in open-world scenarios by dynamically generating interaction-aware prompts and calibrating interaction concepts [2][4][5].

Summary by Sections

Introduction to HOI Detection
- Current HOI detection methods are confined to closed-set settings and struggle to recognize novel interaction types, which limits their practical applicability [6].
- The rise of multimodal large models shows significant promise in open environments, making their application to HOI detection a research focus [6].

Innovations of INP-CC
- INP-CC introduces two core innovations, interaction-aware prompt generation and concept calibration, which help the model grasp complex interaction semantics [7][16].
- A routing mechanism lets semantically similar interactions selectively share prompts, improving learning efficiency [7]; a minimal sketch of this idea appears after this summary.

Model Architecture
- An interaction-adaptive prompt generator dynamically constructs relevant prompts from the input image's characteristics, sharpening the model's focus on key interaction regions [14].
- The model generates detailed visual descriptions of interactions and clusters them into a fine-grained concept structure, aiding the understanding of complex interactions [14][20]; see the clustering sketch below.

Experimental Performance
- INP-CC outperforms existing methods on the HICO-DET and SWIG-HOI benchmarks, reaching 16.74% mean Average Precision (mAP) on the SWIG-HOI full test set, a relative improvement of nearly 10% over the previous method CMD-SE [18][22].
- Attention visualizations confirm that the model focuses effectively on the critical interaction regions [23].

Conclusion
- INP-CC overcomes the limitations of pre-trained vision-language models in region-level perception and concept understanding, demonstrating the potential of injecting language-model knowledge into computer vision tasks [25].
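The following is a minimal sketch of what interaction-adaptive prompt generation with selective sharing could look like; it is not the authors' released code, and all names, dimensions, and the routing design are assumptions. The idea it illustrates: a shared bank of learnable prompt vectors is softly mixed per image, so images containing similar interactions route to overlapping prompts and thereby share them.

```python
# A minimal sketch of interaction-adaptive prompt generation (hypothetical
# module and parameter names; dimensions are assumed, not from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

class InteractionAdaptivePromptGenerator(nn.Module):
    def __init__(self, feat_dim=512, n_prompts=16, prompt_len=8):
        super().__init__()
        # Shared bank of learnable prompts: (n_prompts, prompt_len, feat_dim)
        self.prompt_bank = nn.Parameter(
            torch.randn(n_prompts, prompt_len, feat_dim) * 0.02)
        # Maps a global image feature to routing logits over the bank
        self.router = nn.Linear(feat_dim, n_prompts)

    def forward(self, img_feat):
        # img_feat: (B, feat_dim), e.g. a global CLIP image feature
        weights = F.softmax(self.router(img_feat), dim=-1)  # (B, n_prompts)
        # Soft mixture over the bank: similar inputs select overlapping
        # prompts, which realizes selective prompt sharing across
        # semantically related interactions
        prompts = torch.einsum('bn,nld->bld', weights, self.prompt_bank)
        return prompts                                      # (B, prompt_len, feat_dim)

# Usage: generate prompts for a batch of four image features
gen = InteractionAdaptivePromptGenerator()
prompts = gen(torch.randn(4, 512))
print(prompts.shape)  # torch.Size([4, 8, 512])
```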
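And here is a sketch of the concept-calibration idea as described above: fine-grained interaction descriptions (generated offline by a language model) are embedded and clustered, and the cluster centroids serve as calibrated concept anchors for scoring region features. The descriptions are illustrative, and the embeddings are mocked with random vectors so the sketch runs standalone; in practice they would come from a text encoder such as CLIP's.

```python
# A minimal sketch of concept calibration via description clustering
# (hypothetical data; embeddings are random stand-ins for text-encoder output).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical LLM-generated visual descriptions of interactions
descriptions = [
    "a person gripping the handlebars of a bicycle",
    "a person pedaling a bicycle on a road",
    "a person swinging a tennis racket overhead",
    "a person tossing a ball before a serve",
]
# Stand-in for text-encoder embeddings, one per description
embeds = rng.normal(size=(len(descriptions), 512)).astype(np.float32)
embeds /= np.linalg.norm(embeds, axis=1, keepdims=True)  # cosine-space norm

# Cluster descriptions into fine-grained concepts; centroids become anchors
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(embeds)
concepts = kmeans.cluster_centers_ / np.linalg.norm(
    kmeans.cluster_centers_, axis=1, keepdims=True)

# At inference, a region feature is scored against concept anchors rather
# than raw category names alone, calibrating the similarity estimates
region_feat = rng.normal(size=(512,)).astype(np.float32)
region_feat /= np.linalg.norm(region_feat)
print("concept similarities:", concepts @ region_feat)
```

Clustering in embedding space groups near-duplicate phrasings of the same interaction, which is one plausible way a fine-grained concept structure could reduce noise from individual generated descriptions.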