Similar Articles
20 similar articles found (search time: 0 ms)
1.
In content-based image retrieval (CBIR), relevant images are identified based on their similarity to query images. Most CBIR algorithms are hindered by the semantic gap between the low-level image features used to compute image similarity and the high-level semantic concepts conveyed in images. One way to narrow the semantic gap is to exploit the log data of user feedback that CBIR systems have accumulated over time, an approach also called "collaborative image retrieval." In this paper, we present a novel metric learning approach, named "regularized metric learning," for collaborative image retrieval, which learns a distance metric by exploring the correlation between low-level image features and the log data of users' relevance judgments. Compared to previous research, our algorithm uses a regularization mechanism to effectively prevent overfitting. Moreover, we formulate the proposed learning algorithm as a semidefinite programming problem, which can be solved very efficiently by existing software packages and scales with the size of the log data. An extensive set of experiments shows that the new algorithm can substantially improve the retrieval accuracy of a baseline CBIR system that uses the Euclidean distance metric, even with a modest amount of log data. The experiments also indicate that the new algorithm is more effective and more efficient than two alternative algorithms that exploit log data for image retrieval.
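The core idea above, learning a Mahalanobis-style distance from relevance-feedback pairs while regularizing the metric toward a default, can be sketched as follows. This is an illustrative gradient version with projection onto the PSD cone, not the paper's semidefinite programming formulation; all function names and parameters are hypothetical.

```python
import numpy as np

def mahalanobis(x, y, M):
    """Distance under a learned positive semidefinite matrix M."""
    d = x - y
    return float(np.sqrt(d @ M @ d))

def learn_metric(similar, dissimilar, dim, lam=0.1, lr=0.05, steps=200):
    """Toy regularized metric learning: shrink distances between similar
    pairs, grow distances between dissimilar pairs, and regularize M
    toward the identity (i.e., toward the Euclidean baseline)."""
    M = np.eye(dim)
    for _ in range(steps):
        G = lam * (M - np.eye(dim))        # gradient of the regularizer
        for x, y in similar:               # pull similar pairs together
            d = (x - y)[:, None]
            G += d @ d.T
        for x, y in dissimilar:            # push dissimilar pairs apart
            d = (x - y)[:, None]
            G -= d @ d.T
        M -= lr * G
        # project back onto the PSD cone so M stays a valid metric
        w, V = np.linalg.eigh(M)
        M = (V * np.clip(w, 0, None)) @ V.T
    return M
```

The regularization term is what keeps the learned metric close to the Euclidean default when feedback data is scarce, which is the overfitting safeguard the abstract emphasizes.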

2.
Ding  Yijie  Yang  Chao  Tang  Jijun  Guo  Fei 《Applied Intelligence》2022,52(6):6598-6612
Applied Intelligence - Accurate identification of protein-nucleotide binding residues is crucial for the study of drug structure and protein functional annotation. The study of protein-nucleotide...

3.
Traditional classification methods assume that the training and test data arise from the same underlying distribution. However, in several adversarial settings, the test set is deliberately constructed to increase the error rate of the classifier. A prominent example is spam email, where words are transformed to get around word-based features embedded in a spam filter.

4.
Matching visual appearances of the target object over consecutive frames is a critical step in visual tracking. The accuracy performance of a practical tracking system highly depends on the similarity metric used for visual matching. Recent attempts to integrate discriminative metric learned by sequential visual data (instead of a predefined metric) in visual tracking have demonstrated more robust and accurate results. However, a global similarity metric is often suboptimal for visual matching when the target object experiences large appearance variation or occlusion. To address this issue, we propose in this paper a spatially weighted similarity fusion (SWSF) method for robust visual tracking. In our SWSF, a part-based model is employed as the object representation, and the local similarity metric and spatially regularized weights are jointly learned in a coherent process, such that the total matching accuracy between visual target and candidates can be effectively enhanced. Empirically, we evaluate our proposed tracker on various challenging sequences against several state-of-the-art methods, and the results demonstrate that our method can achieve competitive or better tracking performance in various challenging tracking scenarios.

5.
In this paper, a regularized correntropy criterion (RCC) for the extreme learning machine (ELM) is proposed to handle training sets with noise or outliers. In RCC, a Gaussian kernel function replaces the Euclidean norm of the mean square error (MSE) criterion. Replacing MSE with RCC enhances the anti-noise ability of ELM. Moreover, the optimal weights connecting the hidden and output layers, together with the optimal bias terms, can be obtained promptly by the half-quadratic (HQ) optimization technique in an iterative manner. Experimental results on four synthetic data sets and fourteen benchmark data sets demonstrate that the proposed method is superior to both the traditional ELM and the regularized ELM trained with the MSE criterion.
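The half-quadratic training loop described above can be sketched as iteratively reweighted ridge regression over random hidden-layer features, where each sample's weight comes from a Gaussian kernel of its residual, so outliers are downweighted. The function names, defaults, and the tanh activation are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def elm_rcc(X, y, n_hidden=20, sigma=1.0, lam=1e-2, iters=10, seed=0):
    """Sketch of an ELM trained with a correntropy-style criterion via
    half-quadratic optimization (iteratively reweighted ridge)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    # start from the ordinary regularized least-squares solution
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)
    for _ in range(iters):                        # half-quadratic loop
        r = y - H @ beta                          # residuals
        w = np.exp(-r**2 / (2 * sigma**2))        # Gaussian-kernel weights
        Hw = H * w[:, None]
        beta = np.linalg.solve(H.T @ Hw + lam * np.eye(n_hidden), Hw.T @ y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

On clean data the weights stay near one and the loop reduces to ordinary regularized ELM; samples with large residuals (outliers) receive exponentially small weights.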

6.
The aim of this paper is to learn a linear principal component using the machinery of support vector machines (SVMs). To this end, a complete SVM-like framework of linear PCA (SVPCA) for determining the projection direction is constructed, in which a new expected risk and margin are introduced. Within this framework, a new semidefinite programming problem for maximizing the margin is formulated, and a new definition of support vectors is established. As a weighted case of regular PCA, SVPCA coincides with regular PCA when all samples play the same part in data compression. Theoretical analysis indicates that SVPCA rests on a margin-based generalization bound, so good prediction ability is ensured. Furthermore, a robust form of SVPCA with an interpretable parameter is obtained using the soft-margin idea from SVMs. A great advantage is that SVPCA is a learning algorithm without local minima, owing to the convexity of the semidefinite optimization problems. To validate the performance of SVPCA, several experiments are conducted, and the numerical results demonstrate that its generalization ability is better than that of regular PCA. Finally, some open problems are also discussed.

7.
A C-V Active Contour Model with a Regularized Level Set Function
The C-V active contour model proposed by Chan and Vese is implemented with the traditional level set method. To keep the evolution of the level set function stable, a contour-length term must be added to regularize the level set function, and the function must be periodically re-initialized to a signed distance function during evolution, which greatly increases the computational cost and implementation complexity. This paper proposes a new method for regularizing the level set function that keeps the evolution stable while avoiding re-initialization. Experimental results show that the method is robust and fast.

8.
9.
Jiang  Haiyan  Xiong  Haoyi  Wu  Dongrui  Liu  Ji  Dou  Dejing 《Machine Learning》2021,110(8):2131-2150
Machine Learning - Principal component analysis (PCA) has been widely used as an effective technique for feature extraction and dimension reduction. In the High Dimension Low Sample Size setting,...

10.
In this letter, we show a direct relation between spectral embedding methods and kernel principal component analysis, and how both are special cases of a more general learning problem: learning the principal eigenfunctions of an operator defined from a kernel and the unknown data-generating density. Whereas spectral embedding methods provide coordinates only for the training points, the analysis justifies a simple extension to out-of-sample examples (the Nyström formula) for multidimensional scaling (MDS), spectral clustering, Laplacian eigenmaps, locally linear embedding (LLE), and Isomap. The analysis provides, for all such spectral embedding methods, the definition of a loss function whose empirical average is minimized by the traditional algorithms. The asymptotic expected value of that loss defines a generalization performance and clarifies what these algorithms are trying to learn. Experiments with LLE, Isomap, spectral clustering, and MDS show that this out-of-sample embedding formula generalizes well, with a level of error comparable to the effect of small perturbations of the training set on the embedding.
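The Nyström out-of-sample extension mentioned above can be illustrated for kernel PCA: embed a new point by projecting its vector of kernel values onto the training eigenvectors and rescaling by the eigenvalues. Kernel centering is omitted for brevity, and all names and defaults are illustrative.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
    return np.exp(-gamma * d2)

def kpca_fit(X, n_comp=2, gamma=1.0):
    """Kernel PCA on the training set (uncentered kernel for brevity):
    top eigenvalues and eigenvectors of the kernel matrix."""
    K = rbf(X, X, gamma)
    lam, V = np.linalg.eigh(K)
    idx = np.argsort(lam)[::-1][:n_comp]
    return lam[idx], V[:, idx]

def nystrom_embed(X_train, x_new, lam, V, gamma=1.0):
    """Nystrom out-of-sample formula:
    f_k(x) = (1/lam_k) * sum_i V[i, k] * K(x_i, x)."""
    k = rbf(X_train, x_new[None, :], gamma).ravel()
    return (V.T @ k) / lam
```

A useful self-consistency check, which the test below exercises: embedding a training point with the Nyström formula reproduces exactly its row of the eigenvector matrix, so the extension agrees with the training embedding.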

11.
This paper presents a new approach to estimating mixture models based on a recent inference principle we have proposed: the latent maximum entropy principle (LME). LME is different from Jaynes' maximum entropy principle, standard maximum likelihood, and maximum a posteriori probability estimation. We demonstrate the LME principle by deriving new algorithms for mixture model estimation, and show how robust new variants of the expectation maximization (EM) algorithm can be developed. We show that a regularized version of LME (RLME) is effective at estimating mixture models. It generally yields better results than plain LME, which in turn is often better than maximum likelihood and maximum a posteriori estimation, particularly when inferring latent-variable models from small amounts of data.
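For reference, the standard EM baseline that the LME/RLME variants modify can be sketched for a one-dimensional Gaussian mixture. The quantile initialization and variance floor are illustrative choices; the LME-specific regularization is not implemented here.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Plain EM for a 1-D Gaussian mixture model."""
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # spread initial means
    var = np.full(k, np.var(x))
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[i, j] = P(component j | x_i)
        p = pi * np.exp(-(x[:, None] - mu)**2 / (2 * var)) \
            / np.sqrt(2 * np.pi * var)
        r = p / p.sum(1, keepdims=True)
        # M-step: re-estimate weights, means, variances
        n = r.sum(0)
        pi = n / len(x)
        mu = (r * x[:, None]).sum(0) / n
        var = np.maximum((r * (x[:, None] - mu)**2).sum(0) / n, 1e-6)
    return pi, mu, var
```

The robust EM variants the abstract refers to change the objective being ascended, not this alternating structure.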

12.
Learning Linear and Nonlinear PCA with Linear Programming
An SVM-like framework provides a novel way to learn linear principal component analysis (PCA). In effect it is a weighted PCA, and it leads to a semidefinite programming (SDP) problem. In this paper, we learn linear and nonlinear PCA with linear programming problems, which are easy to solve and yield a unique global solution. Moreover, two algorithms for learning linear and nonlinear PCA are constructed, and all principal components can be obtained. To verify the performance of the proposed method, a series of experiments on artificial data sets and UCI benchmark data sets is carried out. Simulation results demonstrate that the proposed method can compete with or outperform standard PCA and kernel PCA (KPCA) in generalization ability, but with much lower memory and time consumption.

13.
Dynamics of Generalized PCA and MCA Learning Algorithms
Principal component analysis (PCA) and minor component analysis (MCA) are two important statistical tools with many applications in signal processing and data analysis. PCA and MCA neural networks (NNs) can be used to extract principal and minor components from input data online. It is therefore of interest to develop generalized learning algorithms for PCA and MCA NNs, and some novel generalized PCA and MCA learning algorithms are proposed in this paper. Convergence of PCA and MCA learning algorithms is an essential issue in practical applications. Traditionally, convergence is studied via the deterministic continuous-time (DCT) method, which requires the learning rate of the algorithms to approach zero, an unrealistic assumption in many practical applications. In this paper, the deterministic discrete-time (DDT) method is used to study the dynamical behavior of the proposed algorithms. The DDT method is more suitable for the convergence analysis since it does not impose the constraints of the DCT method. It is proven that, under some mild conditions, the weight vector in the proposed algorithms converges exponentially to the principal or minor component. Simulation results further illustrate the theoretical results.
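A classical member of the algorithm family analyzed above is Oja's PCA rule, whose deterministic discrete-time (DDT) form iterates with a constant learning rate rather than one that decays to zero. The sketch below is illustrative of that setting; the paper's generalized algorithms differ in detail.

```python
import numpy as np

def oja_ddt(C, eta=0.1, steps=2000, seed=0):
    """DDT form of Oja's PCA rule with a constant learning rate:
        w <- w + eta * (C w - (w' C w) w)
    For a covariance matrix C and small enough eta, w converges to a
    unit-norm principal eigenvector of C."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=C.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(steps):
        w = w + eta * (C @ w - (w @ C @ w) * w)
    return w
```

The (w' C w) w term self-normalizes the weight vector, which is why the iteration can run with a fixed eta, the scenario the DDT analysis addresses.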

14.
A Fast l2-Norm Face Recognition Method Based on Sparse Representation
Most sparse representation methods require a large-scale redundant dictionary whose number of atoms far exceeds the atom dimension, and compute the sparse coefficients by l1-norm minimization. To reduce algorithmic complexity, this paper proposes a fast l2-norm face recognition method based on sparse representation. By extracting fused features and shrinking the dictionary, the dictionary structure is improved and the sparsity of the l2-norm solution is enhanced, so the running speed is greatly increased while recognition performance is preserved. Experiments show that, compared with other sparse representation methods, the proposed method significantly reduces algorithmic complexity while maintaining a good face recognition rate and the ability to reject impostor faces.
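The l2-norm coding idea described above amounts to ridge-regression coding over the dictionary followed by class-wise residual comparison, in the spirit of collaborative representation classification. A minimal sketch under that assumption; the fused-feature extraction and dictionary-shrinking steps are omitted, and all names are hypothetical.

```python
import numpy as np

def l2_code(D, y, lam=0.01):
    """Ridge (l2-regularized) coding of y over dictionary D -- a closed-form
    solve instead of iterative l1 minimization."""
    return np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)

def crc_classify(D, labels, y, lam=0.01):
    """Assign y to the class whose atoms best reconstruct it."""
    a = l2_code(D, y, lam)
    best, best_r = None, np.inf
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        r = np.linalg.norm(y - D[:, mask] @ a[mask])   # class-wise residual
        if r < best_r:
            best, best_r = c, r
    return best
```

The speed advantage is that the coding step is a single linear solve whose system matrix can be prefactored once per dictionary.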

15.
Recognition of a Limited Chinese Character Set Based on the PCA Learning Subspace Algorithm
A PCA learning subspace method is used to recognize characters in gray-scale images, which not only avoids the main difficulties of traditional feature extraction and recognition based on binarized characters, but also preserves as much character information as possible. Building on the PCA subspace, the algorithm rotates and adjusts the subspace through supervised feedback learning, yielding better classification. In particular, when the number of character classes is not large, the subspace training time remains acceptable. Application results also show that the PCA learning subspace algorithm achieves good recognition of license-plate Chinese characters, a limited character set, and has high practical value.
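The basic per-class PCA subspace classifier underlying the method can be sketched as follows; the supervised rotation (feedback learning) step the abstract describes is omitted, and all names and parameters are illustrative.

```python
import numpy as np

def fit_subspaces(X_by_class, n_comp=2):
    """Per-class PCA subspaces: for each class, store the mean and the
    top principal directions of its training samples."""
    models = {}
    for c, X in X_by_class.items():
        mu = X.mean(0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        models[c] = (mu, Vt[:n_comp])
    return models

def subspace_classify(models, x):
    """Pick the class whose subspace reconstructs x with least error."""
    best, best_e = None, np.inf
    for c, (mu, B) in models.items():
        z = x - mu
        e = np.linalg.norm(z - B.T @ (B @ z))   # residual off the subspace
        if e < best_e:
            best, best_e = c, e
    return best
```

Working directly on gray-scale feature vectors, as the abstract advocates, means the subspaces capture stroke intensity variation that binarization would discard.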

16.
Recently, manifold learning has attracted extensive interest in machine learning and related communities. This paper investigates manifold learning in the presence of noise, a key issue in applying manifold learning algorithms to practical problems. We propose a robust version of the LTSA algorithm called RLTSA. The proposed RLTSA algorithm makes LTSA more robust in three ways: first, a robust PCA algorithm based on iteratively weighted PCA is employed instead of the standard SVD to reduce the influence of noise on the local tangent space coordinates; second, RLTSA chooses neighborhoods that are well approximated by the local coordinates to align with the global coordinates; third, in the alignment step, the influence of noise on the embedding result is further reduced by assigning different weights to clean and noisy data points in the local alignment errors. Experiments on both synthetic and real data sets demonstrate the effectiveness of RLTSA when dealing with noisy manifolds.
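The iteratively weighted PCA mentioned in the first point can be sketched as alternating between a weighted principal-direction estimate and residual-based downweighting. The Cauchy-style weight function, the constants, and the single-direction restriction are illustrative assumptions, not RLTSA's exact scheme.

```python
import numpy as np

def robust_pca_direction(X, iters=20, c=1.0):
    """Iteratively weighted PCA for one direction: points with large
    reconstruction residuals are downweighted so noise and outliers
    influence the fitted direction less."""
    w = np.ones(len(X))
    for _ in range(iters):
        mu = (w[:, None] * X).sum(0) / w.sum()      # weighted mean
        Z = X - mu
        C = Z.T @ (Z * w[:, None]) / w.sum()        # weighted covariance
        _, vecs = np.linalg.eigh(C)
        v = vecs[:, -1]                             # principal direction
        r = np.linalg.norm(Z - np.outer(Z @ v, v), axis=1)
        w = 1.0 / (1.0 + (r / c)**2)                # downweight outliers
    return mu, v
```

In RLTSA this kind of estimate replaces the plain SVD when computing each neighborhood's local tangent space.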

17.
Multimedia Tools and Applications - Principal component analysis is a widely used technique. However, it is sensitive to noise and considers data samples to be linearly distributed globally. To...

18.
Non-convex regularizers usually improve the performance of sparse estimation in practice. To substantiate this, we study the conditions for sparse estimation under sharp concave regularizers, a general family of non-convex regularizers that includes many existing ones. For the global solutions of the regularized regression, our sparse-eigenvalue-based conditions are weaker than those of L1-regularization for both parameter estimation and sparseness estimation. For approximate global and approximate stationary (AGAS) solutions, almost the same conditions suffice. We show that the desired AGAS solutions can be obtained by coordinate descent (CD) methods. Finally, we perform experiments to show the performance of CD methods at producing AGAS solutions and how much weaker the estimation conditions required by sharp concave regularizers are.
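A concrete instance of coordinate descent with a non-convex regularizer is CD for least squares under the MCP penalty, one member of the sharp concave family: its univariate update is a firm-thresholding rule that is sparse for small inputs and unbiased for large ones. The sketch assumes the columns of X are standardized to mean-square one; names and defaults are illustrative, and this is not the paper's exact algorithm.

```python
import numpy as np

def mcp_threshold(z, lam, gamma=3.0):
    """Univariate minimizer of 0.5*(b - z)^2 + MCP(b; lam, gamma):
    soft-threshold-and-rescale for small |z|, identity for large |z|."""
    if abs(z) <= gamma * lam:
        return np.sign(z) * max(abs(z) - lam, 0.0) / (1.0 - 1.0 / gamma)
    return z

def cd_regression(X, y, lam=0.05, gamma=3.0, iters=100):
    """Coordinate descent for MCP-regularized least squares.
    Assumes each column of X has mean-square one."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y.copy()                                  # current residual
    for _ in range(iters):
        for j in range(p):
            z = X[:, j] @ r / n + beta[j]         # partial residual corr.
            new = mcp_threshold(z, lam, gamma)
            r -= X[:, j] * (new - beta[j])        # keep residual in sync
            beta[j] = new
    return beta
```

Unlike the lasso's soft threshold, the MCP update leaves large coefficients untouched, which is the unbiasedness property motivating sharp concave regularizers.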

19.
20.
原泉  王艳  李玉先 《计算机应用》2005,40(9):2743-2747
To address the poor convergence of gradient descent and its sensitivity to local minima, an improved NAG algorithm is proposed and used to replace the gradient descent algorithm in the distance regularized level set evolution (DRLSE) model, yielding a fast NAG-based image segmentation algorithm. First, the initial level set evolution equation is given; second, the gradient is computed with the improved NAG algorithm; finally, the level set function is updated continuously, preventing it from getting trapped in local minima. Experimental results show that, compared with the original algorithm in the DRLSE model, the proposed algorithm reduces the number of iterations by about 30% and CPU time by more than 30%. The algorithm is simple to implement and can quickly and effectively segment infrared and medical images with demanding real-time requirements.
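The plain Nesterov accelerated gradient (NAG) update that the improved algorithm builds on evaluates the gradient at a look-ahead point rather than the current iterate. Below is a generic sketch on an arbitrary smooth objective; the level-set specifics and the paper's improvements are not reproduced, and all names and constants are illustrative.

```python
import numpy as np

def nag_minimize(grad, x0, lr=0.01, mu=0.9, steps=2000):
    """Plain NAG: momentum with the gradient taken at the
    look-ahead position x + mu * v."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    for _ in range(steps):
        g = grad(x + mu * v)      # gradient at the look-ahead point
        v = mu * v - lr * g       # update velocity
        x = x + v                 # take the step
    return x
```

On ill-conditioned objectives this momentum scheme typically needs far fewer iterations than plain gradient descent at the same step size, which is the speedup the abstract reports for the level set evolution.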

