Similar Documents
20 similar documents found (search time: 46 ms)
1.
Semi-supervised learning methods improve learning performance by using a small amount of labeled data together with a large amount of unlabeled data. Tri-training is a classic disagreement-based semi-supervised learning method, but it can introduce label noise during training. To reduce the prediction bias that label noise in Tri-training causes on unlabeled data, and to learn a better semi-supervised classification model, cross entropy is used in place of error rate to better reflect the gap between the model's predictions and the true distribution, combined with convex optimization to reduce label noise and preserve model quality. On this basis, three algorithms are proposed: a cross-entropy-based Tri-training algorithm, a safe Tri-training algorithm, and a cross-entropy-based safe Tri-training algorithm. The effectiveness of the proposed methods is validated on benchmark data sets such as those from the UCI (University of California Irvine) Machine Learning Repository, and significance tests further confirm their performance from a statistical perspective. Experimental results show that the proposed semi-supervised methods outperform the traditional Tri-training algorithm in classification performance, with the cross-entropy-based safe Tri-training algorithm achieving the highest classification performance and generalization ability.
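The substitution described in this abstract can be illustrated with a toy sketch (the function names and the tiny example below are mine, not from the paper): error rate only counts wrong argmax decisions, while cross entropy also measures how far the predicted distribution sits from the true labels, so it reacts to poorly calibrated, noise-prone predictions that error rate cannot distinguish.

```python
import math

def error_rate(probs, labels):
    # Fraction of examples whose argmax prediction disagrees with the true label.
    preds = [max(range(len(p)), key=lambda k: p[k]) for p in probs]
    return sum(int(pred != y) for pred, y in zip(preds, labels)) / len(labels)

def cross_entropy(probs, labels):
    # Mean negative log-probability assigned to the true label.
    return -sum(math.log(p[y]) for p, y in zip(probs, labels)) / len(labels)

# Two hypothetical classifiers: identical error rate, different calibration.
labels = [0, 1]
model_a = [[0.55, 0.45], [0.45, 0.55]]  # barely right on both examples
model_b = [[0.99, 0.01], [0.01, 0.99]]  # confidently right on both
```

Both classifiers score a zero error rate here, yet `model_a`'s cross entropy is much higher, which is the kind of gap the abstract exploits to detect noisy pseudo-labels.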

2.
Insufficiency of labeled training data is a major obstacle for automatic video annotation. Semi-supervised learning is an effective approach to this problem because it leverages a large amount of unlabeled data. However, existing semi-supervised learning algorithms have not demonstrated promising results in large-scale video annotation due to several difficulties, such as the large variation of video content and intractable computational cost. In this paper, we propose a novel semi-supervised learning algorithm named semi-supervised kernel density estimation (SSKDE), which is developed from the kernel density estimation (KDE) approach. While only labeled data are utilized in classical KDE, in SSKDE both labeled and unlabeled data are leveraged to estimate class-conditional probability densities based on an extended form of KDE. It is a non-parametric method, and it thus naturally avoids the model-assumption problem that exists in many parametric semi-supervised methods. Meanwhile, it can be implemented with an efficient iterative solution process, which makes it appropriate for video annotation. Furthermore, motivated by the existing adaptive KDE approach, we propose an improved algorithm named semi-supervised adaptive kernel density estimation (SSAKDE). It employs local adaptive kernels rather than a fixed kernel, so that broader kernels can be applied in regions with low density. In this way, more accurate density estimates can be obtained. Extensive experiments have demonstrated the effectiveness of the proposed methods.
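A toy one-dimensional sketch of the SSKDE idea, under my own simplifying assumptions (the paper's exact extended KDE form and its iterative solution are not reproduced here): unlabeled points contribute to each class-conditional density weighted by their current soft labels, and the soft labels are then refreshed from the resulting densities.

```python
import math

def kern(x, xi, h=0.5):
    # 1-D Gaussian kernel (the normalization constant cancels in the responsibilities).
    return math.exp(-((x - xi) ** 2) / (2 * h * h))

def sskde(labeled, unlabeled, iters=20):
    """Toy 1-D sketch: unlabeled points enter each class-conditional KDE
    weighted by their current soft labels; soft labels are re-estimated
    from the resulting densities."""
    classes = sorted({y for _, y in labeled})
    resp = [{c: 1.0 / len(classes) for c in classes} for _ in unlabeled]
    for _ in range(iters):
        def density(x, c):
            num = sum(kern(x, xi) for xi, y in labeled if y == c)
            num += sum(r[c] * kern(x, u) for u, r in zip(unlabeled, resp))
            den = sum(1 for _, y in labeled if y == c) + sum(r[c] for r in resp)
            return num / den
        for j, u in enumerate(unlabeled):
            d = {c: density(u, c) for c in classes}
            z = sum(d.values())
            resp[j] = {c: d[c] / z for c in classes}
    return resp

labeled = [(-2.0, 0), (-1.5, 0), (1.5, 1), (2.0, 1)]
unlabeled = [-1.8, 1.7]
resp = sskde(labeled, unlabeled)
```

On this toy data the unlabeled point near the class-0 cluster converges to a near-certain class-0 soft label, and symmetrically for the other point.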

3.
In open environments, data streams are generated at high speed, are unbounded in volume, and exhibit concept drift. In data stream classification tasks, producing large amounts of training data through manual labeling is expensive and impractical. Data streams that contain a small number of labeled samples, a large number of unlabeled samples, and concept drift pose a great challenge to machine learning. However, existing research focuses mainly on supervised data stream classification, and semi-supervised classification of data streams with concept drift has not yet received sufficient attention…

4.
Many data mining applications have a large amount of data, but labeling data is often difficult, expensive, or time consuming, as it requires human experts for annotation. Semi-supervised learning addresses this problem by using unlabeled data together with labeled data to improve the performance. Co-Training is a popular semi-supervised learning algorithm that assumes each example is represented by two or more redundantly sufficient sets of features (views) and, additionally, that these views are independent given the class. However, these assumptions are not satisfied in many real-world application domains. In this paper, a framework called Co-Training by Committee (CoBC) is proposed, in which an ensemble of diverse classifiers is used for semi-supervised learning that requires neither redundant and independent views nor different base learning algorithms. The framework is a general single-view semi-supervised learner that can be applied on any ensemble learner to build diverse committees. Experimental results of CoBC using Bagging, AdaBoost and the Random Subspace Method (RSM) as ensemble learners demonstrate that error diversity among classifiers leads to an effective Co-Training style algorithm that maintains the diversity of the underlying ensemble.

5.
Semi-supervised document clustering, which makes use of limited supervised data to group unlabeled documents into clusters, has received significant interest recently. Because obtaining supervised data may be expensive, it is important to acquire the most informative knowledge to improve the clustering performance. This paper presents a semi-supervised document clustering algorithm and a new method for actively selecting informative instance-level constraints to obtain improved clustering performance. The semi-supervised document clustering algorithm is a Constrained DBSCAN (Cons-DBSCAN) algorithm, which incorporates instance-level constraints to guide the clustering process in DBSCAN. An active learning approach is proposed to select informative document pairs for obtaining user feedback. Experimental results show that Cons-DBSCAN with the proposed active learning approach can improve clustering performance significantly when given a relatively small amount of constraints.

6.
This paper proposes a semi-supervised Bayesian ARTMAP (SSBA) which integrates the advantages of both Bayesian ARTMAP (BA) and the Expectation Maximization (EM) algorithm. SSBA adopts the training framework of BA, which allows SSBA to adaptively generate categories to represent the distribution of both labeled and unlabeled training samples without any user intervention. In addition, SSBA employs the EM algorithm to adjust its parameters, which realizes a soft assignment of training samples to categories instead of a hard assignment such as winner-takes-all. Experimental results on benchmark and real-world data sets indicate that the proposed SSBA achieves significantly improved performance compared with BA and an EM-based semi-supervised learning method; SSBA is appropriate for semi-supervised classification tasks with a large amount of unlabeled samples or with strict demands on classification accuracy.
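The soft-versus-hard assignment contrast in this abstract can be sketched generically (this is an illustration of the concept, not the actual SSBA/EM update rule; the `beta` temperature is my own parameter):

```python
import math

def soft_assign(dists, beta=1.0):
    # EM-style soft assignment: responsibility proportional to exp(-beta * distance).
    w = [math.exp(-beta * d) for d in dists]
    z = sum(w)
    return [v / z for v in w]

def hard_assign(dists):
    # Winner-takes-all: all mass goes to the single nearest category.
    best = min(range(len(dists)), key=lambda i: dists[i])
    return [1.0 if i == best else 0.0 for i in range(len(dists))]

# Distances from one sample to three categories; the first two are a near tie.
dists = [0.4, 0.5, 3.0]
soft = soft_assign(dists)
hard = hard_assign(dists)
```

The hard assignment discards the near tie between the first two categories, while the soft assignment preserves it, which is what lets EM-style updates use ambiguous unlabeled samples gracefully.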

7.
Image retrieval based on augmented relational graph representation   Cited by: 1 (self-citations: 1, other citations: 0)
The “semantic gap” problem is one of the main difficulties in image retrieval tasks. Semi-supervised learning, typically integrated with relevance feedback techniques, is an effective method to narrow down the semantic gap. However, in semi-supervised learning, the amount of unlabeled data is usually much greater than that of labeled data. Therefore, the performance of a semi-supervised learning algorithm relies heavily on how effectively it uses the relationships between the labeled and unlabeled data. This paper proposes a novel algorithm to better explore those relationships by augmenting the relational graph representation built on the entire data set, which is expected to increase the intra-class weights while decreasing the inter-class weights and linking the potential intra-class data. The augmented relational matrix can be directly used in any semi-supervised learning algorithm. The experimental results in a range of feedback-based image retrieval tasks show that the proposed algorithm not only achieves good generality, but also outperforms other algorithms in the same semi-supervised learning framework.
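A minimal sketch of the augmentation idea, with an invented `augment_graph` helper and made-up weights (the paper's actual augmentation rule is more involved): for pairs where both endpoints are labeled, intra-class weights are raised and inter-class weights are cut, while pairs involving an unlabeled node keep their original similarity.

```python
def augment_graph(W, labels):
    """Toy sketch: where both endpoints are labeled, raise intra-class weights
    and zero out inter-class weights; pairs with an unlabeled endpoint keep
    their original similarity."""
    n = len(W)
    A = [row[:] for row in W]
    for i in range(n):
        for j in range(n):
            if i != j and labels[i] is not None and labels[j] is not None:
                A[i][j] = max(A[i][j], 1.0) if labels[i] == labels[j] else 0.0
    return A

W = [
    [0.0, 0.2, 0.5, 0.4],
    [0.2, 0.0, 0.3, 0.1],
    [0.5, 0.3, 0.0, 0.6],
    [0.4, 0.1, 0.6, 0.0],
]
labels = [0, 0, 1, None]  # two labeled classes plus one unlabeled node
A = augment_graph(W, labels)
```

The resulting matrix stays symmetric and can be dropped into any graph-based semi-supervised learner in place of the original affinity matrix, which is the portability point the abstract makes.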

8.
A text classification method based on a semi-supervised locally linear embedding algorithm*   Cited by: 3 (self-citations: 0, other citations: 3)
To address the limitations of the locally linear embedding (LLE) algorithm in unsupervised machine learning, this paper combines LLE with semi-supervised ideas and proposes a text classification method based on a semi-supervised LLE algorithm. Using the manifold structure of the text data and a small number of labeled samples, the distance matrix in LLE is adjusted in a piecewise fashion; the adjusted matrix is then used for linear reconstruction to achieve dimensionality reduction. To overcome the drawbacks of using Euclidean distance in semi-supervised LLE, a Gaussian kernel function is applied to transform the Euclidean distance, and the new kernel distance replaces the Euclidean distance, yielding a kernel-based semi-supervised locally linear embedding algorithm. Finally, simulation experiments verify the effectiveness of the improved algorithm.
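The kernel distance mentioned in this abstract has a standard closed form for a Gaussian kernel: since K(x, x) = 1, the kernel-induced distance is d(x, y) = sqrt(K(x,x) + K(y,y) − 2K(x,y)) = sqrt(2 − 2K(x,y)). A small sketch (the σ value and sample points are arbitrary):

```python
import math

def euclidean(x, y):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)))

def kernel_distance(x, y, sigma=1.0):
    # Distance induced by the Gaussian kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2)):
    # d(x, y) = sqrt(K(x, x) + K(y, y) - 2 K(x, y)) = sqrt(2 - 2 K(x, y)).
    k = math.exp(-euclidean(x, y) ** 2 / (2 * sigma ** 2))
    return math.sqrt(2.0 - 2.0 * k)

d_near = kernel_distance((0.0, 0.0), (0.1, 0.0))
d_far = kernel_distance((0.0, 0.0), (100.0, 0.0))  # saturates near sqrt(2)
```

Unlike Euclidean distance, this distance is bounded above by sqrt(2), so far outliers are compressed rather than dominating the reconstruction, which is the practical motivation for the substitution.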

9.
Most existing semi-supervised clustering algorithms are not designed for handling high-dimensional data. On the other hand, semi-supervised dimensionality reduction methods may not necessarily improve the clustering performance, because the inherent relationship between subspace selection and clustering is ignored. To mitigate these problems, we present a semi-supervised clustering algorithm using adaptive distance metric learning (SCADM), which performs semi-supervised clustering and distance metric learning simultaneously. SCADM applies the clustering results to learn a distance metric and then projects the data onto a low-dimensional space where the separability of the data is maximized. Experimental results on real-world data sets show that the proposed method can effectively deal with high-dimensional data and provides appealing clustering performance.

10.
Due to its wide applicability, semi-supervised learning is an attractive method for using unlabeled data in classification. In this work, we present a semi-supervised support vector classifier that is designed using a quasi-Newton method for nonsmooth convex functions. The proposed algorithm is suitable for dealing with a very large number of examples and features. Numerical experiments on various benchmark datasets show that the proposed algorithm is fast and gives improved generalization performance over existing methods. Further, a non-linear semi-supervised SVM is proposed based on a multiple label switching scheme. This non-linear semi-supervised SVM converges faster and improves generalization performance on several benchmark datasets.

11.
Semi-supervised learning has attracted a significant amount of attention in pattern recognition and machine learning. Most previous studies have focused on designing special algorithms to effectively exploit the unlabeled data in conjunction with labeled data. Our goal is to improve the classification accuracy of any given supervised learning algorithm by using the available unlabeled examples. We call this the semi-supervised improvement problem, to distinguish the proposed approach from the existing approaches. We design a meta semi-supervised learning algorithm that wraps around the underlying supervised algorithm and improves its performance using unlabeled data. This problem is particularly important when we need to train a supervised learning algorithm with a limited number of labeled examples and a multitude of unlabeled examples. We present a boosting framework for semi-supervised learning, termed SemiBoost. The key advantages of the proposed semi-supervised learning approach are: 1) performance improvement of any supervised learning algorithm with a multitude of unlabeled data, 2) efficient computation by the iterative boosting algorithm, and 3) exploiting both the manifold and cluster assumptions in training classification models. An empirical study on 16 different data sets and text categorization demonstrates that the proposed framework improves the performance of several commonly used supervised learning algorithms, given a large number of unlabeled examples. We also show that the performance of the proposed algorithm, SemiBoost, is comparable to state-of-the-art semi-supervised learning algorithms.

12.
Consistency-based semi-supervised learning methods usually apply simple data augmentation to obtain consistent predictions on the original and perturbed inputs. When the proportion of labeled data is low, the effectiveness of this approach is hard to guarantee. Extending advanced data augmentation methods from supervised learning to the semi-supervised setting is one way to address this problem. Building on MixMatch, a consistency-based semi-supervised learning method, this work proposes a semi-supervised …

13.
The distance metric has a decisive influence on the clustering results of the fuzzy clustering algorithm FCM. In practice, a clustering data set may come with auxiliary information in the form of a certain number of labeled pairwise constraints. To make full use of this auxiliary information, a hybrid distance learning method is first proposed that learns a distance metric for the data set from such auxiliary information. Then, a robust fuzzy C-means clustering algorithm based on hybrid distance learning (HR-FCM), a semi-supervised clustering algorithm, is proposed. HR-FCM retains the robustness and other properties of the GIFP-FCM (Generalized FCM algorithm with improved fuzzy partitions) algorithm while achieving better clustering performance through the more suitable distance metric. Experimental results demonstrate the effectiveness of the proposed algorithm.

14.
How to organize and retrieve images is now a great challenge in various domains. Image clustering is a key tool in practical applications including image retrieval and understanding. Traditional image clustering algorithms consider a single set of features and use ad hoc distance functions, such as Euclidean distance, to measure the similarity between samples. However, multi-modal features can be extracted from images, and the dimension of multi-modal data is very high. In addition, we usually have several, but not many, labeled images, which leads to semi-supervised learning. In this paper, we propose a framework for image clustering based on semi-supervised distance learning and multi-modal information. First we fuse multiple features and utilize a small amount of labeled images for semi-supervised metric learning. Then we compute similarity with the Gaussian similarity function and the learned metric. Finally, we construct a semi-supervised Laplace matrix for spectral clustering and propose an effective clustering method. Extensive experiments on several image data sets show the competent performance of the proposed algorithm.
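The second step above, computing Gaussian similarity under a learned metric, can be sketched as follows. The diagonal matrix `M` below is an invented stand-in for whatever metric the semi-supervised learning step would produce, and the helper names are mine:

```python
import math

def mahalanobis_sq(x, y, M):
    # Squared distance under a learned PSD matrix M: (x - y)^T M (x - y).
    d = [a - b for a, b in zip(x, y)]
    return sum(d[i] * M[i][j] * d[j] for i in range(len(d)) for j in range(len(d)))

def gaussian_similarity(x, y, M, sigma=1.0):
    # Gaussian similarity function evaluated with the learned metric.
    return math.exp(-mahalanobis_sq(x, y, M) / (2 * sigma ** 2))

# Invented diagonal metric that down-weights a noisy second feature.
M = [[1.0, 0.0], [0.0, 0.01]]
a, b = (0.0, 0.0), (0.0, 5.0)  # differ only in the down-weighted feature
c = (2.0, 0.0)                  # differs in the informative feature
```

With the learned metric, a large difference in the noisy feature still yields high similarity, while a smaller difference in the informative feature lowers it; these similarities would then populate the Laplacian used for spectral clustering.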

15.
Radio-frequency fingerprint localization based on supervised learning is an active research topic in high-precision indoor wireless positioning. To address the high cost of collecting training data sets for supervised methods, this paper proposes an indoor wireless localization algorithm based on semi-supervised learning. The algorithm uses spectral decomposition of the Laplacian matrix to obtain a representation of the training data in the eigenvector space, and then labels the unlabeled data by aligning the labels of the labeled data in that eigenvector space. Experimental results show that with only a small amount of labeled data (about 20%), the unlabeled data can be labeled with relatively high accuracy (about 80%), effectively reducing the training cost.

16.
K-Hub is an effective clustering algorithm for high-dimensional data, but it is very sensitive to the choice of initial cluster centers and often fails to correctly cluster instances near class boundaries. To address these problems, a K-Hub clustering algorithm combining active learning and semi-supervised clustering is proposed. An active learning strategy is used to learn pairwise constraints over a subset of instances, and these constraints then guide the K-Hub clustering process. Experimental results show that the active-learning-based K-Hub clustering algorithm effectively improves K-Hub's clustering accuracy.

17.
The support vector machine (SVM) is a general and powerful learning machine that operates in a supervised manner. However, in many practical machine learning and data mining applications, unlabeled training examples are readily available while labeled ones are expensive to obtain; semi-supervised learning has emerged to meet this need. At present, the combination of SVMs with semi-supervised learning principles such as transductive learning has attracted increasing attention. The transductive support vector machine (TSVM) learns a large-margin hyperplane classifier using labeled training data while simultaneously forcing this hyperplane to be far away from the unlabeled data. TSVM might seem to be the perfect semi-supervised algorithm, since it combines the powerful regularization of SVMs with a direct implementation of the clustering assumption; nevertheless, its objective function is non-convex and therefore difficult to optimize. This paper aims to solve this difficult problem. We apply the least squares support vector machine to implement TSVM, which ensures that the objective function is convex and that the optimization solution can be found by solving a set of linear equations. Simulation results demonstrate that the proposed method can effectively exploit unlabeled data to yield good performance.
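The "set of linear equations" point can be made concrete with a plain (non-transductive) linear-kernel least squares SVM, which is the building block the abstract reuses; the transductive label-switching part is omitted here, and the tiny data set is invented. The LS-SVM dual KKT conditions reduce to the single linear system [[0, 1ᵀ], [1, K + I/γ]] · [b; α] = [0; y]:

```python
def solve(A, rhs):
    # Gauss-Jordan elimination with partial pivoting (no external dependencies).
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def lssvm_train(X, y, gamma=10.0):
    # Linear-kernel LS-SVM: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    n = len(X)
    K = [[sum(a * b for a, b in zip(X[i], X[j])) for j in range(n)] for i in range(n)]
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        A.append([1.0] + [K[i][j] + (1.0 / gamma if i == j else 0.0) for j in range(n)])
    sol = solve(A, [0.0] + list(y))
    return sol[0], sol[1:]  # bias b, dual weights alpha

def lssvm_predict(X, alpha, b, x):
    # Decision value f(x) = sum_i alpha_i <x_i, x> + b; sign gives the class.
    return sum(a * sum(u * v for u, v in zip(xi, x)) for a, xi in zip(alpha, X)) + b

X = [(-2.0,), (-1.0,), (1.0,), (2.0,)]
y = [-1.0, -1.0, 1.0, 1.0]
b, alpha = lssvm_train(X, y)
```

Because every KKT condition is linear, training is one linear solve rather than a quadratic program, which is exactly the convexity-and-tractability argument the abstract makes.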

18.
杜阳, 姜震, 冯路捷. 《计算机应用》, 2019, 39(12): 3462-3466
Semi-supervised learning combines a small number of labeled samples with a large number of unlabeled samples and can effectively improve an algorithm's generalization performance. Traditional semi-supervised support vector machine (SVM) algorithms introduce a dependence on unlabeled samples into the objective function to push the decision boundary through low-density regions, but this often brings high computational complexity and local optima. Meanwhile, semi-supervised K-means faces the problem of how to effectively use the supervision for centroid initialization and updating. To address these problems, a new learning algorithm combining SVM and semi-supervised K-means (SKAS) is proposed. First, an improved semi-supervised K-means algorithm is presented, with improvements to both the distance measure and the centroid iteration. Then, a fusion algorithm is designed to combine the semi-supervised K-means algorithm with SVM to further improve performance. Experimental results on six UCI data sets show that the proposed algorithm outperforms state-of-the-art semi-supervised SVM and semi-supervised K-means algorithms on five of the data sets and achieves the highest average accuracy.

19.
Co-training is a popular paradigm of semi-supervised learning which requires the data set to be described by two views of features. A notable characteristic shared by many co-training algorithms is that the selected unlabeled instances should be predicted with high confidence, since a high confidence score usually implies that the corresponding prediction is correct. Unfortunately, these high-confidence unlabeled instances do not always improve classification performance. In this paper, a new semi-supervised learning algorithm is proposed that combines the benefits of both co-training and active learning. The algorithm applies co-training to select the most reliable instances according to the two criteria of high confidence and nearest neighbors for boosting the classifier, and also exploits the most informative instances through human annotation to improve classification performance. Experiments on several UCI data sets and a natural language processing task demonstrate that our method achieves more significant improvement for the same amount of human effort.

20.
Recent advances in semi-supervised clustering   Cited by: 6 (self-citations: 0, other citations: 6)
Semi-supervised clustering methods use a small amount of labeled data to improve the performance of clustering algorithms and have gradually become a research focus in pattern recognition and related fields. This paper first surveys recent advances in semi-supervised clustering algorithms, including constraint-based methods, distance-based methods, and methods that fuse distance and constraints. It then proposes a constraint-based semi-supervised fuzzy C-means clustering algorithm. Experiments show that, compared with traditional fuzzy C-means and semi-supervised K-means methods, the algorithm achieves better clustering accuracy.
