Citation
Han, Zhichao; Azman, Azreen; Khalid, Fatimah; Mustaffa, Mas Rina (2024). Multi-granularity semantic information integration graph for cross-modal hash retrieval. IEEE Access, 12, pp. 44682-44694. ISSN 2169-3536.
Abstract
With the development of intelligent collection technology and the popularization of intelligent terminals, multi-source heterogeneous data are growing rapidly. Effectively exploiting the rich semantic information in these massive, heterogeneous data to provide users with high-quality cross-modal retrieval has become a pressing problem in information retrieval. In this paper, we propose a novel cross-modal retrieval method, named MGSGH, which deeply explores the internal correlation between data of different granularities: coarse-grained global semantic information and fine-grained scene graph information are integrated to model, respectively, global semantic concepts and local semantic relationship graphs within a modality. By enforcing cross-modal consistency constraints and intra-modal similarity preservation, we effectively align the visual features of image data with the semantic information of text data, overcoming the heterogeneity between the two. Furthermore, we propose a new method for learning hash codes directly, thereby reducing the impact of quantization loss. Comprehensive experimental evaluation demonstrates the effectiveness and superiority of the proposed model in achieving accurate and efficient cross-modal retrieval.
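The abstract names three objectives (cross-modal consistency, intra-modal similarity preservation, and direct hash-code learning that limits quantization loss) but gives no formulas. The sketch below shows one common way such a combined hashing objective is written in PyTorch; it is an illustration only, not the paper's actual method. The function name, the specific loss forms, the similarity convention (S with entries in {-1, +1}), and the weights alpha and beta are all assumptions.

```python
# Minimal sketch of the loss structure described in the abstract.
# Hypothetical: the paper's real architecture and objectives may differ.
import torch
import torch.nn.functional as F

def hashing_loss(img_code, txt_code, sim, alpha=1.0, beta=1.0):
    """img_code, txt_code: (n, k) real-valued relaxed codes from each
    modality's encoder; sim: (n, n) similarity matrix in {-1, +1}."""
    k = img_code.size(1)

    # Cross-modal consistency: matched image/text pairs should map to
    # similar codes.
    consistency = F.mse_loss(img_code, txt_code)

    # Intra-modal similarity preservation: normalized inner products of
    # codes should track the semantic similarity matrix (a standard
    # surrogate in hashing methods).
    preserve = F.mse_loss(img_code @ img_code.t() / k, sim) \
             + F.mse_loss(txt_code @ txt_code.t() / k, sim)

    # Quantization term: push relaxed codes toward {-1, +1} so the final
    # sign() binarization loses little information.
    quantization = (img_code.abs() - 1.0).pow(2).mean() \
                 + (txt_code.abs() - 1.0).pow(2).mean()

    return consistency + alpha * preserve + beta * quantization

# Binary codes for retrieval are then B = torch.sign(code).
```

Under this formulation, driving the quantization term to zero makes sign() a near-identity on the learned codes, which is one interpretation of "learning hash codes directly" to reduce quantization loss.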