Encoding Spatial Context for Large-Scale Partial-Duplicate Web Image Retrieval
Authors: Wen-Gang Zhou, Hou-Qiang Li, Yijuan Lu, Qi Tian
Affiliation:1. Chinese Academy of Sciences Key Laboratory of Technology in Geo-Spatial Information Processing and Application System, University of Science and Technology of China, Hefei, 230027, China
2. Department of Electronic Engineering and Information Science, University of Science and Technology of China, Hefei, 230027, China
3. Department of Computer Science, Texas State University, San Marcos, TX, 78666, U.S.A.
4. Department of Computer Science, University of Texas at San Antonio, San Antonio, TX, 78249, U.S.A.
Abstract: Many recent state-of-the-art image retrieval approaches are based on the Bag-of-Visual-Words model, representing an image as a set of visual words obtained by quantizing local SIFT (scale-invariant feature transform) features. Feature quantization reduces the discriminative power of local features and unavoidably causes many false local matches between images, which degrades retrieval accuracy. To filter out these false matches, geometric context among visual words has been widely explored for verifying geometric consistency. However, existing global or local geometric-verification methods are either computationally expensive or achieve only limited accuracy. To address this issue, this paper focuses on partial-duplicate Web image retrieval and proposes a scheme that encodes the spatial context of local features for visual matching verification; an efficient affine-enhancement scheme is further proposed to refine the verification results. Experiments on partial-duplicate Web image search over a database of one million images demonstrate the effectiveness and efficiency of the proposed approach, and an evaluation on a ten-million-image database further shows its scalability.
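The abstract only summarizes the approach, so as a rough illustration of the general idea of spatial-context verification in a Bag-of-Visual-Words pipeline, the sketch below checks whether tentatively matched features preserve their relative left/right and above/below layout across two images, and iteratively discards the most inconsistent match. This is a minimal sketch under assumed inputs; the function and variable names are hypothetical, and the paper's actual encoding and its affine-enhancement step are not specified in this abstract.

```python
# Hypothetical illustration of spatial-context (relative-position) verification.
# Not the paper's exact encoding: the abstract does not give those details.
import numpy as np

def spatial_consistency_mask(pts_q, pts_d, max_iters=50):
    """pts_q, pts_d: (N, 2) arrays of (x, y) positions of N tentatively
    matched local features in the query and the database image.
    Returns a boolean mask over the N matches judged geometrically consistent."""
    keep = np.ones(len(pts_q), dtype=bool)
    for _ in range(max_iters):
        q, d = pts_q[keep], pts_d[keep]
        if len(q) < 2:
            break
        # Binary relative-position maps: entry [i, j] answers
        # "is feature j to the right of / above feature i?" in each image.
        xq = q[:, 0][None, :] > q[:, 0][:, None]
        yq = q[:, 1][None, :] > q[:, 1][:, None]
        xd = d[:, 0][None, :] > d[:, 0][:, None]
        yd = d[:, 1][None, :] > d[:, 1][:, None]
        # Per match, count how many pairwise relations disagree across images.
        violations = (xq != xd).sum(1) + (yq != yd).sum(1)
        if violations.max() == 0:
            break  # all remaining matches are mutually consistent
        # Drop the single most inconsistent match and re-check the rest.
        worst = np.flatnonzero(keep)[violations.argmax()]
        keep[worst] = False
    return keep
```

In this simplified form, each pass costs O(N^2) binary comparisons for N tentative matches, which hints at why relative-position encodings can be much cheaper than RANSAC-style estimation of a full global transformation while still rejecting spatially inconsistent matches.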
This article is indexed in databases including SpringerLink.