A probabilistic semantic model for image annotation and multi-modal image retrieval |
| |
Authors: | Ruofei Zhang, Zhongfei Zhang, Mingjing Li, Wei-Ying Ma, Hong-Jiang Zhang |
| |
Affiliation: | (1) Department of Computer Science, SUNY at Binghamton, Binghamton, NY 13902, USA; (2) Microsoft Research Asia, Beijing, 100080, People's Republic of China |
| |
Abstract: | This paper addresses the automatic image annotation problem and its application to multi-modal image retrieval. The contribution of our work is three-fold. (1) We propose a probabilistic semantic model in which the visual features and the textual words are connected via a hidden layer of semantic concepts to be discovered, explicitly exploiting the synergy between the modalities. (2) The association of visual features and textual words is determined in a Bayesian framework, so that the confidence of the association can be quantified. (3) An extensive evaluation on a large-scale, visually and semantically diverse image collection crawled from the Web is reported for the prototype system based on the model. In the proposed probabilistic model, a hidden concept layer connecting the visual-feature layer and the word layer is discovered by fitting a generative model to the training images and their annotation words through an Expectation-Maximization (EM) based iterative learning procedure. Evaluation of the prototype system for multi-modal image retrieval, on 17,000 images with 7,736 annotation words automatically extracted from crawled Web pages, indicates that the proposed semantic model and the developed Bayesian framework are superior to a state-of-the-art peer system in the literature. |
| |
Keywords: | Image annotation · Multi-modal image retrieval · Probabilistic semantic model · Evaluation |
This document is indexed in SpringerLink and other databases.
|