Similar Literature
20 similar documents found (search time: 31 ms)
1.
2.
3.
4.
We propose EMD-L1: a fast and exact algorithm for computing the Earth Mover's Distance (EMD) between a pair of histograms. The efficiency of the new algorithm enables its application to problems that were previously prohibitive due to high time complexities. The proposed EMD-L1 significantly simplifies the original linear programming formulation of EMD. Exploiting the L1 metric structure, the number of unknown variables in EMD-L1 is reduced to O(N) from the O(N²) of the original EMD for a histogram with N bins. In addition, the number of constraints is reduced by half and the objective function of the linear program is simplified. Formally, without any approximation, we prove that the EMD-L1 formulation is equivalent to the original EMD with an L1 ground distance. To perform the EMD-L1 computation, we propose an efficient tree-based algorithm, Tree-EMD. Tree-EMD exploits the fact that a basic feasible solution of the simplex-based solver forms a spanning tree when EMD-L1 is interpreted as a network flow optimization problem. We empirically show that this new algorithm has an average time complexity of O(N²), which significantly improves on the best reported supercubic complexity of the original EMD. The accuracy of the proposed methods is evaluated by experiments on two computation-intensive problems: shape recognition and interest point matching using multidimensional histogram-based local features. For shape recognition, EMD-L1 is applied to compare shape contexts on the widely tested MPEG7 shape data set, as well as an articulated shape data set. For interest point matching, SIFT, shape context, and spin image are tested on both synthetic and real image pairs with large geometric deformation, illumination change, and heavy intensity noise. The results demonstrate that our EMD-L1-based solutions outperform previously reported state-of-the-art features and distance measures on both tasks.
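For reference, a minimal sketch of the computation this work simplifies: exact EMD between two equal-mass 2-D histograms with an L1 ground distance, posed as the classical transportation linear program and solved with SciPy. This is the O(N²)-variable baseline formulation, not the paper's EMD-L1/Tree-EMD solver, and the 4x4 histograms are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def emd_l1(h1, h2):
    """Exact EMD between two normalized histograms of the same shape,
    using the L1 (Manhattan) distance between bin coordinates."""
    coords = np.argwhere(np.ones_like(h1))                 # all bin coordinates
    a, b = h1.ravel(), h2.ravel()
    n = a.size
    # ground-distance vector: L1 distance between every pair of bins
    d = np.abs(coords[:, None, :] - coords[None, :, :]).sum(-1).ravel()
    # flow conservation: row sums equal a, column sums equal b
    A_eq = np.zeros((2 * n, n * n))
    for i in range(n):
        A_eq[i, i * n:(i + 1) * n] = 1      # total outflow of bin i in h1
        A_eq[n + i, i::n] = 1               # total inflow into bin i in h2
    res = linprog(d, A_eq=A_eq, b_eq=np.concatenate([a, b]),
                  bounds=(0, None), method="highs")
    return res.fun

h1 = np.random.rand(4, 4); h1 /= h1.sum()
h2 = np.random.rand(4, 4); h2 /= h2.sum()
print(emd_l1(h1, h2))
```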

5.
Subspace and similarity metric learning are important issues for image and video analysis in both the computer vision and multimedia fields. Many real-world applications, such as image clustering/labeling and video indexing/retrieval, involve feature space dimensionality reduction as well as feature matching metric learning. However, the loss of information from dimensionality reduction may degrade the accuracy of similarity matching. In practice, these conflicting requirements of feature representation efficiency and similarity matching accuracy need to be appropriately balanced. In the style of "Thinking Globally and Fitting Locally", we develop Locally Embedded Analysis (LEA) based solutions for visual data clustering and retrieval. LEA reveals the essential low-dimensional manifold structure of the data by preserving the local nearest-neighbor affinity, and allows a linear subspace embedding obtained by solving a graph-embedded eigenvalue decomposition problem. A visual data clustering algorithm, called Locally Embedded Clustering (LEC), and a local similarity metric learning algorithm for robust video retrieval, called Locally Adaptive Retrieval (LAR), are both designed upon the LEA approach, with variations in local affinity graph modeling. For large-scale database applications, instead of learning a global metric, we localize the metric learning space with a kd-tree partition to localities identified by the indexing process. Simulation results demonstrate the effectiveness of the proposed solutions in terms of both accuracy and speed.
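To make the embedding step concrete, below is a minimal sketch of a linear neighbor-preserving projection obtained from a graph-based generalized eigenproblem; the k-NN connectivity graph, Laplacian penalty, and scale constraint follow the common locality-preserving-projection recipe and are assumptions for illustration, not LEA's exact affinity model.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.linalg import eigh

def neighbor_preserving_projection(X, n_components=2, k=8):
    W = kneighbors_graph(X, k, mode="connectivity").toarray()
    W = np.maximum(W, W.T)                      # symmetric affinity graph
    L = np.diag(W.sum(1)) - W                   # graph Laplacian
    A = X.T @ L @ X                             # penalizes separating neighbors
    B = X.T @ X + 1e-6 * np.eye(X.shape[1])     # scale constraint
    vals, vecs = eigh(A, B)                     # generalized eigenproblem
    return vecs[:, :n_components]               # smallest eigenvectors = projection

X = np.random.rand(200, 10)
P = neighbor_preserving_projection(X)
print((X @ P).shape)                            # embedded coordinates
```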

6.
In retrieval from image databases, evaluation of similarity based both on the appearance of spatial entities and on their mutual relationships depends on content representation with attributed relational graphs. This kind of modeling entails complex matching and indexing, which presently prevents its use in comprehensive applications. In this paper, we provide a graph-theoretical formulation for the problem of retrieval based on the joint similarity of individual entities and of their mutual relationships, and we expound its implications for indexing and matching. In particular, we propose the use of metric indexing to organize large archives of graph models, and we propose an original look-ahead method that provides an efficient solution to the (sub)graph error-correcting isomorphism problem needed to compute object distances. Analytic comparison and experimental results show that the proposed look-ahead improves the state of the art in state-space search methods, and that the combined use of the proposed matching and indexing scheme makes it possible to manage the complexity of a typical application of retrieval by spatial arrangement.

7.
Research Advances in Compressed-Domain Image Retrieval Techniques
With the emergence of compression standards, image data are now commonly stored in compressed formats, which has driven the rapid development of image retrieval techniques in the compressed domain. To give readers an overview of compressed-domain image retrieval, this paper reviews and discusses current compressed-domain retrieval techniques. First, the basic concepts of image retrieval systems are introduced. Then, retrieval techniques in different compressed domains are analyzed by category, including transform-domain methods (such as the Fourier transform, the discrete cosine transform (DCT), and subband and wavelet transforms) and spatial-domain methods (such as vector quantization and fractal coding). These retrieval methods are then discussed and compared, and some useful conclusions are drawn. Practical applications of compressed-domain image retrieval are illustrated with examples. Finally, possible directions for future research and application of compressed-domain image retrieval are pointed out.
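As one concrete example of a transform-domain feature mentioned in the survey, the sketch below builds a simple descriptor from low-frequency 8x8 block DCT coefficients (the kind already present in JPEG streams); the block size and the number of kept coefficients are illustrative choices, not values from any specific method in the survey.

```python
import numpy as np
from scipy.fft import dctn

def block_dct_feature(img, block=8, keep=4):
    """Average of the low-frequency DCT coefficients of each 8x8 block."""
    h, w = (np.array(img.shape) // block) * block
    feats = []
    for i in range(0, h, block):
        for j in range(0, w, block):
            c = dctn(img[i:i + block, j:j + block], norm="ortho")
            feats.append(c[:keep, :keep].ravel())   # keep low frequencies only
    return np.mean(feats, axis=0)                    # simple global descriptor

img = np.random.rand(64, 64)                         # stands in for a grayscale image
print(block_dct_feature(img).shape)
```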

8.

Dictionaries play an important role in multi-instance data representation: they map bags of instances to histograms. Earth mover's distance (EMD) is the most effective histogram distance metric for multi-instance retrieval. However, up to now there have been no multi-instance dictionary learning methods designed for EMD-based histogram comparison. To fill this gap, we develop the first EMD-optimal dictionary learning method, using a stochastic optimization approach. In the stochastic learning framework, each training example is a triplet of bags: a basic bag, a positive bag, and a negative bag. These bags are mapped to histograms using a multi-instance dictionary. We argue that the EMD between the basic histogram and the positive histogram should be smaller than that between the basic histogram and the negative histogram. Based on this condition, we design a hinge loss. By minimizing this hinge loss together with regularization terms on the dictionary, we update the dictionary instances. Experiments on multi-instance retrieval applications, namely medical image retrieval and natural language relation classification, show its effectiveness compared with other dictionary learning methods.
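A minimal sketch of the triplet hinge loss over EMD described above, using SciPy's one-dimensional Wasserstein distance as the EMD between histograms; the margin value and the 1-D histogram setting are simplifying assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def emd_hist(h1, h2, bin_centers):
    # 1-D EMD between two histograms with the same bin centers
    return wasserstein_distance(bin_centers, bin_centers, h1, h2)

def triplet_hinge(h_basic, h_pos, h_neg, bin_centers, margin=0.1):
    d_pos = emd_hist(h_basic, h_pos, bin_centers)
    d_neg = emd_hist(h_basic, h_neg, bin_centers)
    return max(0.0, margin + d_pos - d_neg)   # zero once positives are closer by the margin

bins = np.arange(16, dtype=float)
rng = np.random.default_rng(0)
h_basic, h_pos, h_neg = rng.dirichlet(np.ones(16), size=3)
print(triplet_hinge(h_basic, h_pos, h_neg, bins))
```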


9.
Toward improved ranking metrics
In many computer vision algorithms, a metric or similarity measure is used to determine the distance between two features. The Euclidean or SSD (sum of the squared differences) metric is prevalent and justified from a maximum likelihood perspective when the additive noise distribution is Gaussian. Based on real noise distributions measured from international test sets, we have found that the Gaussian noise distribution assumption is often invalid. This implies that other metrics, which have distributions closer to the real noise distribution, should be used. In this paper, we consider three different applications: content-based retrieval in image databases, stereo matching, and motion tracking. In each of them, we experiment with different modeling functions for the noise distribution and compute the accuracy of the methods using the corresponding distance measures. In our experiments, we compared the SSD metric, the SAD (sum of the absolute differences) metric, the Cauchy metric, and the Kullback relative information. For several algorithms from the research literature which used the SSD or SAD, we showed that greater accuracy could be obtained by using the Cauchy metric instead.
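A minimal sketch of the distance measures compared in this work; the Cauchy scale parameter a is a free choice here, not a value from the paper.

```python
import numpy as np

def ssd(x, y):            # sum of squared differences (Euclidean-type)
    return np.sum((x - y) ** 2)

def sad(x, y):            # sum of absolute differences (L1-type)
    return np.sum(np.abs(x - y))

def cauchy(x, y, a=1.0):  # Cauchy metric, heavier-tailed than SSD
    return np.sum(np.log(1.0 + ((x - y) / a) ** 2))

x, y = np.random.rand(128), np.random.rand(128)
print(ssd(x, y), sad(x, y), cauchy(x, y))
```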

10.
Yu, Tan; Meng, Jingjing; Fang, Chen; Jin, Hailin; Yuan, Junsong. International Journal of Computer Vision, 2020, 128(8-9): 2325-2343.

Product quantization has been widely used in fast image retrieval due to its effectiveness in coding high-dimensional visual features. By constructing an approximation function, we extend hard-assignment quantization to soft-assignment quantization. Thanks to the differentiability of soft-assignment quantization, the product quantization operation can be integrated as a layer in a convolutional neural network, yielding the proposed product quantization network (PQN). Meanwhile, by extending the triplet loss to an asymmetric triplet loss, we directly optimize the retrieval accuracy of the learned representation based on asymmetric similarity measurement. Using PQN, we can learn a discriminative and compact image representation in an end-to-end manner, which further enables fast and accurate image retrieval. By revisiting residual quantization, we further extend the proposed PQN to a residual product quantization network (RPQN). Benefiting from the residual learning triggered by residual quantization, RPQN achieves higher accuracy than PQN at the same computation cost. Moreover, we extend PQN to a temporal product quantization network (TPQN) by exploiting temporal consistency in videos to speed up video retrieval. It integrates frame-wise feature learning, frame-wise feature aggregation, and video-level feature quantization in a single neural network. Comprehensive experiments conducted on multiple public benchmark datasets demonstrate the state-of-the-art performance of the proposed PQN, RPQN and TPQN in fast image and video retrieval.
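A minimal sketch of soft-assignment quantization for a single subspace, assuming a softmax over negative squared distances to the codewords; the temperature alpha and the array shapes are illustrative choices, not values from PQN.

```python
import numpy as np

def soft_quantize(x, codebook, alpha=10.0):
    """x: (d,) feature slice; codebook: (K, d) codewords.
    Returns a differentiable convex combination of codewords."""
    d2 = np.sum((codebook - x) ** 2, axis=1)     # distance to each codeword
    w = np.exp(-alpha * (d2 - d2.min()))         # shifted for numerical stability
    w /= w.sum()                                 # soft assignment weights
    return w @ codebook                          # soft-quantized vector

codebook = np.random.randn(256, 32)
x = np.random.randn(32)
print(np.linalg.norm(x - soft_quantize(x, codebook)))   # quantization error
```

As alpha grows the weights concentrate on the nearest codeword, recovering hard assignment, which is why the soft version can be trained end-to-end and then used like ordinary product quantization.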


11.
Color quantization is an important technique in color-feature-based image retrieval. By introducing fuzzy set theory into color quantization, a fuzzy color quantization method based on human subjective visual perception is proposed to reduce quantization error and accommodate the fuzziness of human perception; fuzzy color-spatial features are then extracted from this quantization for image retrieval. Experimental results show the effectiveness of the fuzzy quantization method and the high retrieval accuracy of the retrieval algorithm.
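A minimal sketch of the fuzzy-quantization idea: a pixel near a bin boundary contributes membership to both neighboring bins rather than being hard-assigned to one. The triangular membership and the 16-bin hue split are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def fuzzy_hue_histogram(hues, n_bins=16):
    """hues in [0, 1); each value spreads its vote over the two nearest bins."""
    hist = np.zeros(n_bins)
    pos = hues * n_bins - 0.5                  # position relative to bin centers
    lo = np.floor(pos).astype(int) % n_bins
    frac = pos - np.floor(pos)
    np.add.at(hist, lo, 1.0 - frac)            # membership of the lower bin
    np.add.at(hist, (lo + 1) % n_bins, frac)   # membership of the upper (wrapped) bin
    return hist / hist.sum()

print(fuzzy_hue_histogram(np.random.rand(1000)))
```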

12.
Objective: Large-scale image retrieval is one of the research hotspots in computer vision. A basic approach is to extract features from all images in the database, define a feature similarity measure, and perform nearest-neighbor search. The key to large-scale image retrieval is designing a nearest-neighbor search algorithm that satisfies both storage and efficiency requirements. To improve the approximation accuracy of visual features and reduce their storage cost, a multi-index additive quantization method is proposed. Method: Linear search has high complexity, and keeping image descriptors in memory to ensure real-time retrieval cannot meet the demands of large-scale retrieval systems. Given the advantages of non-exhaustive search, this paper studies a multi-index structure with non-exhaustive search and quantized encoding. The multi-index structure partitions the original data space into several subspaces, and the data items of each subspace are assigned to different inverted lists; additive quantization with compressed encoding is then used to encode the residual data items in the inverted lists, further reducing the quantization loss with respect to the original space. During nearest-neighbor search, a non-exhaustive strategy searches only a few inverted lists, which greatly reduces retrieval time. Moreover, the original data need not be stored during retrieval; only the codeword indices of each data item in the additive quantization codebook are stored, which greatly reduces memory consumption. Results: To verify the effectiveness of the algorithm, tests were conducted on three datasets: SIFT, GIST, and MNIST. Compared with recent algorithms, recall is improved by 4%-15%, mean precision by about 12%, and retrieval time is on par with the fastest algorithms. Conclusion: The proposed multi-index additive quantization encoding algorithm effectively improves the approximation accuracy and storage requirements of visual image features, and raises retrieval precision and recall on large-scale datasets. The algorithm performs nearest-neighbor search on features and is applicable to nearest-neighbor retrieval of massive image collections and other multimedia data.
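A minimal sketch of the core idea above: a coarse quantizer whose inverted lists store only compact codes of the residuals. Scikit-learn k-means stands in for both the multi-index structure and the additive quantization stage, and the codebook sizes are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

X = np.random.randn(2000, 32)                                      # toy descriptors

coarse = KMeans(n_clusters=16, n_init=4, random_state=0).fit(X)    # inverted lists
residuals = X - coarse.cluster_centers_[coarse.labels_]            # what remains to encode
fine = KMeans(n_clusters=64, n_init=4, random_state=0).fit(residuals)

# each vector is stored only as (coarse list id, fine code id)
codes = np.stack([coarse.labels_, fine.labels_], axis=1)
recon = coarse.cluster_centers_[codes[:, 0]] + fine.cluster_centers_[codes[:, 1]]
print("mean quantization error:", np.mean(np.sum((X - recon) ** 2, axis=1)))
```

At query time only the few inverted lists whose coarse centroids are closest to the query would be scanned, which is what makes the search non-exhaustive.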

13.
This paper presents a multi-level matching method for document retrieval (DR) using a hybrid document similarity. Documents are represented by a multi-level structure comprising a document level and a paragraph level. This multi-level-structured representation is designed to model the underlying semantics in a more flexible and accurate way than conventional flat term histograms can. The matching between documents is then cast as an optimization problem using the Earth Mover's Distance (EMD). A hybrid similarity is used to synthesize the global and local semantics in documents to improve retrieval accuracy. We performed an extensive experimental study and verification. The results suggest that the proposed method works well for lengthy documents with evident spatial distributions of terms.

14.
15.
An Improved Image Retrieval Method Based on Color-Spatial Features
Color quantization is a hot topic in color-based image retrieval. Because colors near quantization boundaries are continuous and similar, this paper proposes an improved color quantization method based on fuzzy quantization, which reduces quantization error and brings quantization closer to human subjective visual perception. Based on this quantization, a retrieval algorithm using color-spatial features is proposed, with a similarity measure that exploits the correlation between pixel statistics and spatial information within the same histogram bin. Experimental results show that the method achieves high retrieval effectiveness.

16.
This paper presents a unified annotation and retrieval framework, which integrates region annotation with image retrieval for performance reinforcement. To integrate semantic annotation with region-based image retrieval, visual and textual fusion is proposed for both soft matching and Bayesian probabilistic formulations. To address sample insufficiency and sample asymmetry in the annotation classifier training phase, we present a region-level multi-label image annotation scheme based on pair-wise coupling support vector machine (SVM) learning. In the retrieval phase, to achieve semantic-level region matching we present a novel retrieval scheme that differs from previous work: the query example uploaded by the user is automatically annotated online, and the user can judge its annotation quality. Based on the user's judgment, two novel schemes are deployed for semantic retrieval: (1) if the user judges the photo to be well annotated, Semantically supervised Integrated Region Matching is adopted, which is a keyword-integrated soft region matching method; (2) if the user judges the photo to be poorly annotated, Keyword Integrated Bayesian Reasoning is adopted, which is a natural integration of a Visual Dictionary into online content-based search. In the relevance feedback phase, we conduct both visual and textual learning to capture the user's retrieval target. Better annotation and retrieval performance than current methods was reported on both the COREL 10,000 and the Flickr web image database (25,000 images), demonstrating the effectiveness of the proposed framework.

17.
Image Retrieval Based on Spatial Features
Shi Tingting, Li Yan. Journal of Computer Applications, 2008, 28(9): 2292-2296.
A new spatial-feature-based image descriptor, SCH, is proposed. The MCVAE algorithm, which combines the color vector angle and the Euclidean distance, is used to detect edges in the original color image, while a new "max-min component color invariant model" quantizes the original image. An edge correlation matrix is built for edge pixels, and a color histogram describes the local color distribution of non-edge pixels; a new sin-based similarity measure then evaluates the similarity between image features. A content-based image retrieval prototype system, "SttImageRetrieval", was developed in VC++ 6.0, and a comprehensive image database, "IMAGEDB", was built on an Oracle 9i database. Experimental results show that retrieval with the SCH descriptor is markedly more accurate than retrieval based only on color statistics.

18.
A Dynamic Similarity Measure for Image Retrieval
Duan Lijuan, Gao Wen, Lin Shouxun, Ma Jiyong. Chinese Journal of Computers, 2001, 24(11): 1156-1162.
To improve the efficiency of image retrieval, relevance feedback mechanisms have been introduced into content-based image retrieval in recent years. This paper proposes a new relevance feedback method: a dynamic similarity measure. Built on widely used image similarity measures and combined with the temporal characteristics of relevance-feedback retrieval systems, the method captures the user's interaction information to dynamically revise the image similarity formula, thereby embedding a user model into the retrieval system and bringing retrieval results closer to human subjective perception. Experimental results show that the method clearly outperforms the methods adopted by other image retrieval systems.
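A minimal sketch of the general mechanism (a similarity measure whose weights are re-estimated from user feedback); the inverse-variance update rule is a common relevance-feedback heuristic used here for illustration, not the formula from the paper.

```python
import numpy as np

def update_weights(relevant_feats):
    """Give more weight to feature dimensions that vary little
    across the images the user marked as relevant."""
    var = np.var(relevant_feats, axis=0) + 1e-6
    w = 1.0 / var
    return w / w.sum()

def weighted_distance(x, y, w):
    return np.sqrt(np.sum(w * (x - y) ** 2))

feats = np.random.rand(5, 64)          # features of images marked relevant
w = update_weights(feats)
print(weighted_distance(np.random.rand(64), np.random.rand(64), w))
```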

19.
Many recent state-of-the-art image retrieval approaches are based on the Bag-of-Visual-Words model and represent an image with a set of visual words by quantizing local SIFT (scale-invariant feature transform) features. Feature quantization reduces the discriminative power of local features and unavoidably causes many false local matches between images, which degrades retrieval accuracy. To filter out those false matches, geometric context among visual words has been widely explored for verifying geometric consistency. However, existing approaches with global or local geometric verification are either computationally expensive or achieve only limited accuracy. To address this issue, in this paper we focus on partial-duplicate Web image retrieval and propose a scheme to encode the spatial context for visual matching verification. An efficient affine enhancement scheme is proposed to refine the verification results. Experiments on partial-duplicate Web image search, using a database of one million images, demonstrate the effectiveness and efficiency of the proposed approach. Evaluation on a 10-million-image database further shows the scalability of our approach.

20.
Objective: Hashing-based retrieval is a classic approach in image retrieval. Its principle is that, after projection by hash functions and quantization, similar images in the original space obtain similar hash codes in Hamming space. Such methods generally consist of two stages: projection and quantization. The projection stage usually applies principal component analysis to reduce the dimensionality of the original data, but the quantization stage differs greatly across methods. For data with unevenly distributed information, traditional hashing methods quantize with a fixed, equal number of code bits, which leads to low coding efficiency and low quantization accuracy. To address this, this paper proposes a product quantization method based on Huffman coding. Method: First, product quantization is applied to the dimensionality-reduced data to better preserve the distribution of the data in the original space. Then, subspace variance is taken as the measure of information content and used as the basis for allocating code bits. Finally, with the help of a Huffman tree, more code bits are allocated to subspaces with larger variance. Results: Experiments on the widely used public datasets MNIST, NUS-WIDE, and 22K LabelMe show that, compared with original product quantization, the proposed method reduces quantization error by 49% on average and improves mean precision by 19%. On MNIST, compared with the related transform coding (TC) method for code lengths from 32 to 256 bits, the training time of the proposed method is shorter by 22.5 s on average. Conclusion: This paper proposes a hashing method based on multi-bit-allocation product quantization, which improves coding efficiency and quantization accuracy, outperforms other comparable algorithms in mean precision and recall, and can be effectively applied to image retrieval.
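A minimal sketch of variance-driven bit allocation across product quantization subspaces; this greedy allocation is a simplified stand-in for the Huffman-tree construction described above, and the subspace count and bit budget are illustrative assumptions.

```python
import numpy as np

def allocate_bits(X, n_subspaces=8, total_bits=64):
    """Split feature dimensions into subspaces and hand out bits one at a
    time to the subspace whose (variance / codebook size) is largest."""
    subs = np.array_split(np.arange(X.shape[1]), n_subspaces)
    var = np.array([X[:, idx].var(axis=0).sum() for idx in subs])
    bits = np.zeros(n_subspaces, dtype=int)
    for _ in range(total_bits):
        score = var / (2.0 ** bits)          # crude proxy for remaining distortion
        bits[np.argmax(score)] += 1
    return bits

X = np.random.randn(1000, 128) * np.linspace(0.5, 3.0, 128)   # uneven variances
print(allocate_bits(X))                      # high-variance subspaces get more bits
```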
