Similar Literature
11 similar records found.
1.
The performance of many supervised and unsupervised learning algorithms is very sensitive to the choice of an appropriate distance metric. Previous work in metric learning and adaptation has mostly focused on classification tasks by making use of class label information. In standard clustering tasks, however, class label information is not available. In order to adapt the metric to improve the clustering results, some background knowledge or side information is needed. One useful type of side information is in the form of pairwise similarity or dissimilarity information. Recently, some novel methods (e.g., the parametric method proposed by Xing et al.) for learning global metrics based on pairwise side information have demonstrated promising results. In this paper, we propose a nonparametric method, called relaxational metric adaptation (RMA), for the same metric adaptation problem. While RMA is local in the sense that it allows locally adaptive metrics, it is also global because even patterns not in the vicinity can have long-range effects on the metric adaptation process. Experimental results for semi-supervised clustering based on both simulated and real-world data sets show that RMA outperforms Xing et al.'s method under most situations. Besides applying RMA to semi-supervised learning, we have also used it to improve the performance of content-based image retrieval systems through metric adaptation. Experimental results based on two real-world image databases show that RMA significantly outperforms other methods in improving image retrieval performance.
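As a rough illustration of the kind of pairwise side information such methods consume, the sketch below learns a diagonal Mahalanobis metric from similar/dissimilar pairs in the spirit of Xing et al.'s global formulation. It is not the RMA algorithm; the function name `learn_diag_metric` and the hinge-style loss are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def learn_diag_metric(X, similar_pairs, dissimilar_pairs, reg=1e-3):
    """Learn a diagonal Mahalanobis metric from pairwise side information.

    Pulls 'similar' pairs together and pushes 'dissimilar' pairs at least
    unit distance apart (hinge), roughly in the spirit of Xing et al.'s
    global metric learning -- not the RMA algorithm itself.
    """
    d = X.shape[1]

    def loss(log_w):
        w = np.exp(log_w)                       # keep per-feature weights positive
        def dist2(i, j):
            diff = X[i] - X[j]
            return np.sum(w * diff * diff)
        pull = sum(dist2(i, j) for i, j in similar_pairs)
        push = sum(max(0.0, 1.0 - dist2(i, j)) for i, j in dissimilar_pairs)
        return pull + push + reg * np.sum(w)

    res = minimize(loss, np.zeros(d), method="L-BFGS-B")
    return np.exp(res.x)                        # diagonal of the learned metric

# toy usage
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 4))
w = learn_diag_metric(X, similar_pairs=[(0, 1), (2, 3)], dissimilar_pairs=[(0, 4)])
```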

2.
Subspace and similarity metric learning are important issues for image and video analysis in both the computer vision and multimedia fields. Many real-world applications, such as image clustering/labeling and video indexing/retrieval, involve feature space dimensionality reduction as well as feature matching metric learning. However, the loss of information from dimensionality reduction may degrade the accuracy of similarity matching. In practice, these conflicting requirements for feature representation efficiency and similarity matching accuracy need to be appropriately balanced. In the style of "Thinking Globally and Fitting Locally", we develop Locally Embedded Analysis (LEA) based solutions for visual data clustering and retrieval. LEA reveals the essential low-dimensional manifold structure of the data by preserving the local nearest-neighbor affinity, and allows a linear subspace embedding by solving a graph-embedded eigenvalue decomposition problem. A visual data clustering algorithm, called Locally Embedded Clustering (LEC), and a local similarity metric learning algorithm for robust video retrieval, called Locally Adaptive Retrieval (LAR), are both designed upon the LEA approach, with variations in local affinity graph modeling. For large-scale database applications, instead of learning a global metric, we localize the metric learning space with a kd-tree partition to the localities identified by the indexing process. Simulation results demonstrate the effectiveness of the proposed solutions in terms of both accuracy and speed.
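The "graph-embedded eigenvalue decomposition" step can be pictured with a generic locality-preserving linear projection, sketched below. The affinity weights and constraints actually used by LEA/LEC/LAR may differ; `local_linear_embedding` is an illustrative name.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.neighbors import kneighbors_graph

def local_linear_embedding(X, n_neighbors=5, n_components=2):
    """Linear embedding that preserves local nearest-neighbor affinity.

    A generic LPP-style sketch of a graph-embedded eigenvalue
    decomposition, not the exact LEA formulation.
    """
    W = kneighbors_graph(X, n_neighbors, mode="connectivity").toarray()
    W = np.maximum(W, W.T)                      # symmetrize the affinity graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                   # graph Laplacian

    A = X.T @ L @ X                             # objective: keep neighbors close
    B = X.T @ D @ X + 1e-6 * np.eye(X.shape[1])  # scale constraint (regularized)
    vals, vecs = eigh(A, B)                     # generalized eigenproblem
    return vecs[:, :n_components]               # projection onto the embedding

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
P = local_linear_embedding(X)
Y = X @ P                                        # low-dimensional representation
```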

3.
In content-based image retrieval (CBIR), relevant images are identified based on their similarities to query images. Most CBIR algorithms are hindered by the semantic gap between the low-level image features used for computing image similarity and the high-level semantic concepts conveyed in images. One way to reduce the semantic gap is to utilize the log data of users' feedback that CBIR systems have collected over time, an approach also called "collaborative image retrieval." In this paper, we present a novel metric learning approach, named "regularized metric learning," for collaborative image retrieval, which learns a distance metric by exploring the correlation between low-level image features and the log data of users' relevance judgments. Compared to previous research, a regularization mechanism is used in our algorithm to effectively prevent overfitting. Meanwhile, we formulate the proposed learning algorithm as a semidefinite programming problem, which can be solved very efficiently by existing software packages and is scalable to the size of the log data. An extensive set of experiments shows that the new algorithm can substantially improve the retrieval accuracy of a baseline CBIR system using the Euclidean distance metric, even with a modest amount of log data. The experiments also indicate that the new algorithm is more effective and more efficient than two alternative algorithms that exploit log data for image retrieval.
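A hedged sketch of a regularized, SDP-style metric learning objective is given below using cvxpy. It illustrates only the pull/push-plus-regularizer idea and is not the paper's exact formulation; `regularized_metric`, the unit hinge margin, and the Frobenius penalty are assumptions.

```python
import numpy as np
import cvxpy as cp

def regularized_metric(X, relevant_pairs, irrelevant_pairs, lam=1.0):
    """Learn a PSD Mahalanobis matrix M from relevance-log pairs.

    Pull pairs judged relevant together, keep irrelevant pairs at least
    unit distance apart (hinge), and penalize ||M||_F to limit
    overfitting.  Illustrative only; not the paper's exact SDP.
    """
    d = X.shape[1]
    M = cp.Variable((d, d), PSD=True)

    def dist2(i, j):
        diff = X[i] - X[j]
        return cp.trace(M @ np.outer(diff, diff))   # (xi - xj)^T M (xi - xj)

    pull = sum(dist2(i, j) for i, j in relevant_pairs)
    push = sum(cp.pos(1 - dist2(i, j)) for i, j in irrelevant_pairs)
    objective = cp.Minimize(pull + push + lam * cp.norm(M, "fro"))
    cp.Problem(objective).solve()
    return M.value

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
M = regularized_metric(X, relevant_pairs=[(0, 1), (2, 3)], irrelevant_pairs=[(0, 4)])
```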

4.
Semi-supervised clustering exploits the auxiliary information carried by a small number of labeled samples to guide the partitioning of a large amount of unlabeled data. The semi-supervised FCM (sFCM) algorithm proposed by Pedrycz uses the class membership information of labeled samples to assist clustering, but when the labeled points are too sparse it degenerates to the unsupervised FCM algorithm and converges slowly, which makes it difficult to apply to most practical problems. Building on semi-supervised FCM, this paper proposes an improved algorithm that counters this degeneration (dsFCM): by setting the weight of the supervised component during the sFCM iterations, it strengthens the influence of the labeled samples on the cluster centers. The algorithm improves on semi-supervised FCM in clustering accuracy, speed, and robustness, resolves the degeneration problem when labeled points are sparse, and has been applied successfully to medical image segmentation.
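The role of the supervision weight can be pictured with a much-simplified semi-supervised FCM loop, shown below. `semi_supervised_fcm` and the blending rule for labeled memberships are illustrative assumptions and do not reproduce the exact update equations of sFCM or dsFCM.

```python
import numpy as np

def semi_supervised_fcm(X, c, labels, alpha=2.0, m=2.0, n_iter=100):
    """Simplified semi-supervised fuzzy C-means.

    labels[i] = k marks sample i as belonging to cluster k; -1 means
    unlabeled.  alpha scales how strongly labeled memberships are pulled
    toward their given class, mimicking the supervised-component weight
    in sFCM/dsFCM.  Illustrative variant only.
    """
    n, d = X.shape
    rng = np.random.default_rng(0)
    U = rng.dirichlet(np.ones(c), size=n)            # fuzzy memberships (n x c)

    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        U = 1.0 / np.sum((dist[:, :, None] / dist[:, None, :]) ** (2 / (m - 1)), axis=2)
        # pull labeled samples toward their known class, weighted by alpha
        for i, k in enumerate(labels):
            if k >= 0:
                U[i] = (U[i] + alpha * np.eye(c)[k]) / (1.0 + alpha)
    return U, centers

# toy usage: two Gaussian blobs, one labeled point per cluster
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
labels = np.full(100, -1)
labels[0], labels[50] = 0, 1
U, centers = semi_supervised_fcm(X, c=2, labels=labels)
```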

5.
Hidden annotation (HA) is an important research issue in content-based image retrieval (CBIR). We propose to incorporate long-term relevance feedback (LRF) with HA to increase both the efficiency and the retrieval accuracy of CBIR systems. The work consists of two parts. (1) Through LRF, a multi-layer semantic representation is built to automatically extract the hidden semantic concepts underlying images. HA with these concepts alleviates the burden of manual annotation and avoids the ambiguity problem of keyword-based annotation. (2) For each learned concept, semi-supervised learning is incorporated to automatically select a small number of candidate images for annotators to annotate, which improves the efficiency of HA.
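The candidate-selection step can be pictured as uncertainty sampling, sketched below with a plain logistic-regression scorer. The paper's semi-supervised criterion is not reproduced here; `select_annotation_candidates` and the distance-from-0.5 uncertainty measure are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_annotation_candidates(features, labeled_idx, labels, n_candidates=10):
    """Pick the unlabeled images whose concept prediction is least certain,
    so annotators label only the most informative few images per concept."""
    clf = LogisticRegression(max_iter=1000).fit(features[labeled_idx], labels)
    unlabeled = np.setdiff1d(np.arange(len(features)), labeled_idx)
    prob = clf.predict_proba(features[unlabeled])[:, 1]   # P(image has the concept)
    uncertainty = -np.abs(prob - 0.5)                     # closest to 0.5 = most uncertain
    order = np.argsort(uncertainty)[::-1]
    return unlabeled[order[:n_candidates]]
```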

6.
With the continuing development of satellite remote sensing technology, content-based remote sensing image retrieval has attracted increasing attention. Current research in this direction focuses mainly on extracting and fusing different features of remote sensing images, and generally overlooks the fact that the appropriate features should differ for different types of retrieval targets. In addition, the small-sample problem is particularly prominent in remote sensing image retrieval. With these two issues in mind, this paper proposes a new remote sensing image retrieval method based on feature selection and semi-supervised learning, which consists of four parts: 1) the minimum description length (MDL) criterion is used to determine the number of clusters automatically; 2) clustering is combined with an appropriate cluster validity index to select the features that best represent the retrieval target, and when computing the validity index the original Davies-Bouldin index is modified to suit the characteristics of remote sensing image retrieval; 3) the weight between the optimal color features and the optimal texture features is determined dynamically; 4) the semi-supervised learning method is chosen automatically according to these weights, and retrieval is then performed. Experimental results show that, compared with relevance feedback methods, the algorithm achieves comparable retrieval performance for soil erosion regions as well as other general land cover targets, but without requiring repeated user feedback.
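For the validity-index-driven feature weighting (steps 2 and 3), a minimal sketch using the standard Davies-Bouldin index is shown below. The paper modifies the DB index and selects the cluster number with an MDL criterion, neither of which is reproduced; `score_feature_group`, `select_weights`, and the inverse-score weighting are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

def score_feature_group(features, n_clusters):
    """Cluster one feature group (e.g., color or texture) and return its
    Davies-Bouldin index; lower means better-separated clusters."""
    cluster_labels = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=0).fit_predict(features)
    return davies_bouldin_score(features, cluster_labels)

def select_weights(color_feats, texture_feats, n_clusters=5):
    """Turn the two validity scores into color/texture fusion weights:
    the feature group with the better (lower) score gets more weight."""
    s_color = score_feature_group(color_feats, n_clusters)
    s_texture = score_feature_group(texture_feats, n_clusters)
    inv = np.array([1.0 / s_color, 1.0 / s_texture])
    return inv / inv.sum()
```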

7.
Efficient Web image retrieval matters greatly to users, and image meta-search engines are an effective technique for improving the quality and precision of Web image retrieval. This paper proposes an image meta-search engine model based on an improved HACM (hierarchical agglomerative clustering method) algorithm and a genetic algorithm. After Web images are represented as vectors, HACM clustering is applied to group them; a specially designed genetic algorithm then optimizes the ranking of the retrieval results; finally, the re-ranked, more precise image set is presented to the user. Experimental results show that the system achieves high retrieval precision within a fairly short time.
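A bare-bones sketch of the clustering-then-ranking pipeline is given below using SciPy's standard agglomerative clustering. The paper's improved HACM and its genetic-algorithm re-ranking are not reproduced here (simple distance-to-query ordering stands in for the GA step), and `cluster_and_rank` is an illustrative name.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def cluster_and_rank(image_vecs, query_vec, n_clusters=5):
    """Group meta-search results with hierarchical agglomerative clustering,
    then order images by distance to the query within the cluster ordering."""
    Z = linkage(image_vecs, method="average")            # agglomerative merge tree
    cluster_ids = fcluster(Z, t=n_clusters, criterion="maxclust")

    dist = np.linalg.norm(image_vecs - query_vec, axis=1)
    # rank clusters by their closest member, then images by distance to the query
    cluster_order = sorted(set(cluster_ids), key=lambda c: dist[cluster_ids == c].min())
    ranking = [i for c in cluster_order
               for i in np.argsort(dist) if cluster_ids[i] == c]
    return ranking
```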

8.
In the last few years, we have seen an upsurge of interest in content-based image retrieval (CBIR)—the selection of images from a collection via features extracted from the images themselves. Often, a single image attribute does not carry enough discriminative information for successful retrieval. On the other hand, when multiple features are used, it is hard to determine the suitable weighting factors for the various features for optimal retrieval. In this paper, we present a relevance feedback framework with an Integrated Probability Function (IPF) which combines multiple features for optimal retrieval. The IPF is based on a new posterior probability estimator and a novel weight updating approach. Experiments were performed on 1400 monochromatic trademark images. The proposed IPF is shown to be more effective and efficient at retrieving deformed trademark images than the commonly used integrated dissimilarity function. The new posterior probability estimator is shown to be generally better than the existing one. The proposed weight updating approach driven by relevance feedback is shown to be better than both the existing scoring approach and the existing ratio approach. In the experiments, 95% of the targets are ranked within the top five positions. With two iterations of relevance feedback, retrieval performance improves from 75% to over 95%. The IPF and its relevance feedback framework can be used effectively and efficiently in content-based image retrieval.
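The weighted fusion and feedback-driven weight update can be pictured with the toy sketch below. It is not the paper's IPF posterior estimator or its update rule; `combined_score`, `update_weights`, and the mean-evidence update are illustrative assumptions.

```python
import numpy as np

def combined_score(per_feature_sims, weights):
    """Fuse similarity scores from several features into one retrieval score
    (a simple weighted sum standing in for the paper's IPF)."""
    return per_feature_sims @ weights

def update_weights(per_feature_sims, relevant_idx, weights, lr=0.1):
    """Shift weight toward features that scored the user-marked relevant
    images highly -- an illustrative relevance-feedback update."""
    evidence = per_feature_sims[relevant_idx].mean(axis=0)   # per-feature support
    weights = weights + lr * evidence
    return weights / weights.sum()

# toy usage: 100 images, 3 features, user marks images 3 and 7 as relevant
rng = np.random.default_rng(0)
sims = rng.random((100, 3))
w = np.ones(3) / 3
w = update_weights(sims, [3, 7], w)
top10 = np.argsort(combined_score(sims, w))[::-1][:10]
```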

9.
王少华, 狄岚, 梁久祯. 《计算机应用》, 2015, 35(11): 3227-3231
In image segmentation algorithms built on cluster analysis, local information is introduced so that noise is suppressed as far as possible while image detail is preserved. Building on the fuzzy C-means algorithm, this paper proposes a multi-dimensional fuzzy clustering method based on kernels and local information to balance noise and detail in an image. The algorithm introduces two local-information-based variants of the image, namely smoothed and sharpened versions, which together with the original image form a multi-dimensional gray-value vector that replaces the original one-dimensional gray value; a kernel method is then applied to improve robustness; finally, a neighborhood membership-difference penalty term is added, which corrects and enhances the final segmentation result. In denoising experiments on synthetic images, the proposed method achieves nearly 99% segmentation accuracy, outperforming Nyström normalized cut (NNcut) and the fuzzy local information C-means (FLICM) algorithm; comparative experiments on natural and medical images, together with parameter-tuning experiments, show that it is flexible, stable, robust, and easy to tune when handling image noise and detail.
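The multi-dimensional gray-value vector can be assembled as below. The 3x3 mean filter and unsharp masking are plausible stand-ins for the paper's smoothing and sharpening operators, and `multidim_pixel_features` is an illustrative name; the kernel FCM step itself is not shown.

```python
import numpy as np
from scipy import ndimage

def multidim_pixel_features(img):
    """Build the 3-D gray-value vector [original, smoothed, sharpened]
    for every pixel, ready to feed a (kernel) fuzzy C-means clustering."""
    img = img.astype(float)
    smoothed = ndimage.uniform_filter(img, size=3)                       # local mean
    sharpened = img + (img - ndimage.gaussian_filter(img, sigma=1.0))    # unsharp mask
    feats = np.stack([img, smoothed, sharpened], axis=-1)
    return feats.reshape(-1, 3)        # one 3-D feature vector per pixel
```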

10.
This paper studies a new maximum entropy clustering technique based on knowledge transfer. It addresses two challenging problems: 1) how to select appropriate knowledge from the source domain and transfer it to the target domain so as to strengthen the clustering performance in the target domain; 2) how to perform transfer clustering when the number of clusters in the source domain differs from that in the target domain. To this end, a new transfer clustering mechanism is proposed: a center-matching transfer mechanism based on cluster centers. This mechanism is then fused with the classical maximum entropy clustering algorithm to obtain the knowledge-transfer-based maximum entropy clustering algorithm (KT-MEC). Experiments on texture image segmentation under different transfer scenarios show that KT-MEC achieves higher accuracy and better noise robustness than many existing clustering algorithms.
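A minimal sketch of the center-matching idea is shown below. Nearest-center matching and the quadratic transfer penalty are illustrative assumptions, not the exact KT-MEC formulation.

```python
import numpy as np

def match_source_centers(target_centers, source_centers):
    """For every target-domain cluster center, pick the nearest source-domain
    center to transfer from.  Nearest matching still works when the two
    domains have different numbers of clusters."""
    d = np.linalg.norm(target_centers[:, None, :] - source_centers[None, :, :], axis=2)
    return source_centers[d.argmin(axis=1)]      # matched source center per target center

def transfer_penalty(target_centers, matched_centers, lam=0.5):
    """Knowledge-transfer term to add to a maximum entropy clustering
    objective: penalize target centers that drift far from their matched
    source centers; lam controls how much knowledge is transferred."""
    return lam * np.sum((target_centers - matched_centers) ** 2)
```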

11.
Zhong, Chuandong, Xiaofeng. 《Neurocomputing》, 2007, 70(16-18): 2980
Some delay-dependent robust asymptotic stability criteria for uncertain linear time-variant systems with multiple delays are established by means of a parameterized first-order model transformation and the transformation of the interval uncertainty into a norm-bounded uncertainty. The stable regions with respect to the delay parameters are also formulated. Based on these results, we investigate the stability of a class of delayed neural networks that can be transformed into linear time-variant systems, and several new global asymptotic stability criteria are obtained. Numerical examples are presented to illustrate the effectiveness of our results.
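For reference, the class of uncertain multi-delay systems treated by such criteria can be written in the generic form below; the paper's exact notation and uncertainty structure may differ.

```latex
\dot{x}(t) = \bigl(A + \Delta A(t)\bigr)\,x(t)
           + \sum_{i=1}^{m} \bigl(A_i + \Delta A_i(t)\bigr)\,x(t - \tau_i),
\qquad x(\theta) = \phi(\theta), \quad \theta \in \bigl[-\max_i \tau_i,\; 0\bigr],
```

where the interval uncertainties are first recast as norm-bounded uncertainties (e.g., $\Delta A(t) = D\,F(t)\,E$ with $F^{\top}(t)F(t) \le I$) before the delay-dependent criteria are derived.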
