UPM Institutional Repository

Visual and semantic context modeling for scene-centric image annotation


Citation

Zand, Mohsen and Doraisamy, Shyamala and Abdul Halin, Alfian and Mustaffa, Mas Rina (2015) Visual and semantic context modeling for scene-centric image annotation. Multimedia Tools and Applications, 76 (6). pp. 8547-8571. ISSN 1380-7501; ESSN: 1573-7721

Abstract

Automatic image annotation enables efficient indexing and retrieval of images in large-scale collections, where manual labeling is an expensive and labor-intensive task. This paper proposes a novel approach to automatically annotate images with coherent semantic concepts learned from image contents. It exploits sub-visual distributions within each visually complex semantic class, disambiguates visual descriptors in a visual context space, and assigns image annotations by modeling image semantic context. The sub-visual distributions are discovered through a clustering algorithm and probabilistically associated with semantic classes using mixture models. The clustering algorithm handles the intra-category visual diversity of the semantic concepts while coping with the curse of dimensionality of the image descriptors. Hence, mixture models that formulate the sub-visual distributions assign relevant semantic classes to local descriptors. To capture unambiguous and visually consistent local descriptors, the visual context is learned by a probabilistic Latent Semantic Analysis (pLSA) model that ties images to their visual contents. To maximize annotation consistency for each image, another context model characterizes the contextual relationships between semantic concepts using a concept graph. Image labels are thus finalized for each image in a scene-centric view, where images are treated as unified entities. In this way, highly consistent annotations are probabilistically assigned to images, closely correlated with both the visual contents and the true semantics of the images. Experimental validation on several datasets shows that this method outperforms state-of-the-art annotation algorithms while effectively capturing consistent labels for each image.
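The repository hosts only the abstract, so no code accompanies the record. As a rough, hedged illustration of the first stage described above (mixture models over sub-visual distributions of each class), the Python sketch below fits one Gaussian mixture per semantic class over its local descriptors and scores a new descriptor against all classes. All names, the diagonal-covariance choice, the component count, and the equal-class-prior assumption are illustrative, not taken from the paper.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_class_mixtures(descriptors_by_class, n_components=8, seed=0):
        """Fit one GMM per semantic class; each component plays the role of
        one 'sub-visual distribution' of that class (illustrative only)."""
        models = {}
        for label, X in descriptors_by_class.items():  # X: (n_descriptors, dim)
            gmm = GaussianMixture(n_components=n_components,
                                  covariance_type="diag", random_state=seed)
            models[label] = gmm.fit(X)
        return models

    def class_posteriors(models, x):
        """P(class | descriptor) under equal class priors (an assumption)."""
        labels = sorted(models)
        log_lik = np.array([models[c].score_samples(x[None, :])[0] for c in labels])
        w = np.exp(log_lik - log_lik.max())  # numerically stabilized softmax
        return dict(zip(labels, w / w.sum()))

    # Usage on synthetic descriptors:
    rng = np.random.default_rng(0)
    data = {"sky": rng.normal(0, 1, (500, 64)), "grass": rng.normal(2, 1, (500, 64))}
    models = fit_class_mixtures(data)
    print(class_posteriors(models, rng.normal(2, 1, 64)))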
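For the visual-context stage, the abstract names pLSA over images and their visual contents. A minimal EM implementation on an image-by-visual-word count matrix might look like the following; the topic count, iteration count, and the dense responsibility tensor (fine for small vocabularies, not for scale) are simplifications for illustration.

    import numpy as np

    def plsa(counts, n_topics=20, n_iter=50, seed=0):
        """Minimal pLSA via EM on an (images x visual words) count matrix.
        Returns P(topic | image) and P(word | topic)."""
        rng = np.random.default_rng(seed)
        n_docs, n_words = counts.shape
        p_z_d = rng.random((n_docs, n_topics))
        p_z_d /= p_z_d.sum(1, keepdims=True)
        p_w_z = rng.random((n_topics, n_words))
        p_w_z /= p_w_z.sum(1, keepdims=True)
        for _ in range(n_iter):
            # E-step: responsibilities P(z | d, w); dense for clarity.
            joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
            resp = joint / (joint.sum(-1, keepdims=True) + 1e-12)
            # M-step: re-estimate both distributions from expected counts.
            expected = counts[:, :, None] * resp
            p_z_d = expected.sum(1)
            p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
            p_w_z = expected.sum(0).T
            p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
        return p_z_d, p_w_z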
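The final stage enforces annotation consistency through a concept graph over semantic concepts. The abstract does not specify the graph model; as one plausible reading, per-image label scores can be smoothed over label co-occurrence statistics from the training annotations, random-walk-with-restart style. The restart weight alpha and this propagation scheme are assumptions, not the paper's exact formulation.

    import numpy as np

    def build_concept_graph(training_label_sets, n_labels):
        """Row-normalized label co-occurrence matrix from training annotations."""
        G = np.zeros((n_labels, n_labels))
        for labels in training_label_sets:
            for i in labels:
                for j in labels:
                    if i != j:
                        G[i, j] += 1.0
        return G / (G.sum(1, keepdims=True) + 1e-12)

    def rerank(visual_scores, G, alpha=0.7, n_iter=20):
        """Mix each label's visual score with support propagated from
        co-occurring labels, so the kept labels form a coherent scene."""
        v = visual_scores / visual_scores.sum()
        s = v.copy()
        for _ in range(n_iter):
            s = alpha * v + (1.0 - alpha) * G.T @ s
        return s / s.sum()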


Download File

PDF (Abstract): Visual and semantic context modeling for scene-centric image annotation.pdf (6kB)

Additional Metadata

Item Type: Article
Divisions: Faculty of Computer Science and Information Technology
DOI Number: https://doi.org/10.1007/s11042-016-3500-5
Publisher: Springer New York LLC
Keywords: Automatic image annotation; Visual diversity; Mixture model; Visual context; Semantic context
Depositing User: Ms. Nida Hidayati Ghazali
Date Deposited: 30 Jan 2018 07:53
Last Modified: 30 Jan 2018 07:53
Altmetrics: http://www.altmetric.com/details.php?domain=psasir.upm.edu.my&doi=10.1007/s11042-016-3500-5
URI: http://psasir.upm.edu.my/id/eprint/46866
