Found 20 similar documents (search time: 15 ms)
2.
Multilabel Neural Networks with Applications to Functional Genomics and Text Categorization (cited 4 times: 0 self-citations, 4 by others)
Min-Ling Zhang, Zhi-Hua Zhou. IEEE Transactions on Knowledge and Data Engineering, 2006, 18(10): 1338-1351
In multilabel learning, each instance in the training set is associated with a set of labels, and the task is to output a label set, whose size is unknown a priori, for each unseen instance. This paper addresses the problem with a neural network algorithm named BP-MLL (Backpropagation for Multilabel Learning). It is derived from the popular Backpropagation algorithm by employing a novel error function that captures a key characteristic of multilabel learning: labels belonging to an instance should be ranked higher than those not belonging to it. Applications to two real-world multilabel learning problems, functional genomics and text categorization, show that BP-MLL outperforms several well-established multilabel learning algorithms.
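The pairwise ranking error function at the core of BP-MLL can be sketched directly: for one instance it averages an exponential penalty over every (relevant, irrelevant) label pair, pushing relevant labels above irrelevant ones. A minimal sketch (variable names are illustrative; the actual algorithm backpropagates this loss through the network):

```python
import math

def bp_mll_loss(scores, relevant):
    """Pairwise exponential ranking loss for one instance.

    scores:   dict mapping each label to its real-valued network output
    relevant: set of labels that belong to the instance
    """
    irrelevant = set(scores) - relevant
    if not relevant or not irrelevant:
        return 0.0  # the pairwise loss is undefined when one side is empty
    # penalize every (relevant, irrelevant) pair where the margin is small
    total = sum(math.exp(-(scores[k] - scores[l]))
                for k in relevant for l in irrelevant)
    return total / (len(relevant) * len(irrelevant))

# A relevant label ranked above all irrelevant ones yields a small loss;
# ranking it below them yields a large one.
good = bp_mll_loss({"a": 2.0, "b": -1.0, "c": -1.5}, {"a"})
bad  = bp_mll_loss({"a": -2.0, "b": 1.0, "c": 1.5}, {"a"})
```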
3.
P. K. Mallapragada, Rong Jin, A. K. Jain, Yi Liu. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2009, 31(11): 2000-2014
Semi-supervised learning has attracted a significant amount of attention in pattern recognition and machine learning. Most previous studies have focused on designing special algorithms to effectively exploit the unlabeled data in conjunction with labeled data. Our goal is to improve the classification accuracy of any given supervised learning algorithm by using the available unlabeled examples. We call this the semi-supervised improvement problem, to distinguish the proposed approach from existing approaches. We design a meta-semi-supervised learning algorithm that wraps around the underlying supervised algorithm and improves its performance using unlabeled data. This problem is particularly important when we need to train a supervised learning algorithm with a limited number of labeled examples and a multitude of unlabeled examples. We present a boosting framework for semi-supervised learning, termed SemiBoost. The key advantages of the proposed semi-supervised learning approach are: 1) performance improvement of any supervised learning algorithm with a multitude of unlabeled data, 2) efficient computation via the iterative boosting algorithm, and 3) exploitation of both the manifold and the cluster assumption in training classification models. An empirical study on 16 different data sets and text categorization demonstrates that the proposed framework improves the performance of several commonly used supervised learning algorithms, given a large number of unlabeled examples. We also show that the performance of the proposed algorithm, SemiBoost, is comparable to that of state-of-the-art semi-supervised learning algorithms.
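SemiBoost itself combines pairwise similarity information with classifier predictions inside a boosting loop; as a minimal illustration of the general "semi-supervised improvement" wrapper idea (not of SemiBoost's actual update rules), the sketch below wraps an arbitrary supervised learner in a pseudo-labeling loop, with a toy nearest-centroid learner for demonstration:

```python
def self_training(fit, predict_proba, X_lab, y_lab, X_unlab,
                  threshold=0.9, rounds=5):
    """Wrap a supervised learner: iteratively promote confidently
    pseudo-labeled examples from the unlabeled pool to the training set."""
    X, y = list(X_lab), list(y_lab)
    pool = list(X_unlab)
    for _ in range(rounds):
        model = fit(X, y)
        keep = []
        for x in pool:
            label, conf = predict_proba(model, x)
            if conf >= threshold:
                X.append(x)
                y.append(label)
            else:
                keep.append(x)
        if len(keep) == len(pool):
            break  # nothing confident enough; stop early
        pool = keep
    return fit(X, y)

def fit(X, y):
    # toy nearest-centroid "learner" on 1-D points
    centroids = {}
    for c in set(y):
        pts = [x for x, yi in zip(X, y) if yi == c]
        centroids[c] = sum(pts) / len(pts)
    return centroids

def predict_proba(centroids, x):
    dists = {c: abs(x - m) for c, m in centroids.items()}
    label = min(dists, key=dists.get)
    d = sorted(dists.values())
    # confidence from the margin between the two closest centroids
    conf = 1.0 if len(d) < 2 else d[1] / (d[0] + d[1] + 1e-12)
    return label, conf

model = self_training(fit, predict_proba,
                      [0.0, 10.0], ["a", "b"],
                      [0.5, 1.0, 9.0, 9.5])
```

After pseudo-labeling, the final centroids reflect both the labeled seeds and the absorbed unlabeled points.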
4.
Many data mining applications have large amounts of data, but labeling the data is usually difficult, expensive, or time consuming, as it requires human experts for annotation. Semi-supervised learning addresses this problem by using unlabeled data together with labeled data in the training process. Co-Training is a popular semi-supervised learning algorithm that assumes each example is represented by multiple sets of features (views) and that these views are sufficient for learning and independent given the class. However, these assumptions are strong and are not satisfied in many real-world domains. In this paper, a single-view variant of Co-Training, called Co-Training by Committee (CoBC), is proposed, in which an ensemble of diverse classifiers is used instead of redundant and independent views. We introduce a new labeling confidence measure for unlabeled examples based on estimating the local accuracy of the committee members in the example's neighborhood. We then introduce two new learning algorithms, QBC-then-CoBC and QBC-with-CoBC, which combine the merits of committee-based semi-supervised learning and active learning. The random subspace method is applied to both C4.5 decision trees and 1-nearest-neighbor classifiers to construct the diverse ensembles used for semi-supervised learning and active learning. Experiments show that these two combinations outperform other non-committee-based ones.
6.
Some recent successful semi-supervised learning methods construct more than one learner from both labeled and unlabeled data for inductive learning. This paper proposes a novel multiple-view multiple-learner (MVML) framework for semi-supervised learning, which differs from previous methods in possessing both multiple views and multiple learners. The method adopts a co-training-style learning paradigm to enlarge the labeled data from a much larger set of unlabeled data. To the best of our knowledge, it is the first attempt to combine the advantages of multiple-view learning and ensemble learning for semi-supervised learning. The use of multiple views promises better performance than single-view learning because information is exploited more effectively. At the same time, because an ensemble of classifiers is learned from each view, higher accuracies can be obtained than by adopting a single classifier from the same view. Experiments on applications involving both multiple-view and single-view data sets show encouraging results for the proposed MVML method.
7.
Multilabel learning addresses problems where a single example may belong to multiple classes simultaneously. Traditional multilabel learning usually assumes that the training set contains a large number of labeled examples; in many real-world problems, however, only a small fraction of the abundant training examples are labeled. To better exploit the plentiful unlabeled examples and improve classification performance, this paper proposes MASS, a regularization-based inductive semi-supervised multilabel learning method. Specifically, on top of empirical risk minimization, MASS introduces two regularization terms: one constrains the complexity of the classifier, and the other requires similar examples to have similar structured multilabel outputs. A fast solution is then obtained via alternating optimization. Experimental results on web page classification and gene function analysis validate the effectiveness of MASS.
9.
Network representation learning is an important research topic; its goal is to represent a high-dimensional attributed network as low-dimensional dense vectors that serve as effective features for downstream tasks. The recently proposed attributed network representation learning model SNE (Social Network Embedding) learns node representations from both network structure and attribute information, but it is unsupervised and cannot exploit easily obtained prior information to improve the quality of the learned representations. Motivated by this, we propose a semi-supervised attributed network representation learning method, SSNE (Semi-supervised Social Network Embedding). It takes the attributed network and a small number of node priors as input to a feed-forward neural network and, after nonlinear transformations through multiple hidden layers, learns optimized node representations at the output layer by preserving the network link structure and the node priors. Comparisons with mainstream methods on four real attributed networks and two synthetic attributed networks show that the learned representations perform well on clustering and classification tasks.
12.
Semi-Supervised Learning on Riemannian Manifolds (cited once: 0 self-citations, 1 by others)
We consider the general problem of utilizing both labeled and unlabeled data to improve classification accuracy. Under the assumption that the data lie on a submanifold in a high-dimensional space, we develop an algorithmic framework to classify a partially labeled data set in a principled manner. The central idea of our approach is that classification functions are naturally defined only on the submanifold in question rather than on the total ambient space. Using the Laplace-Beltrami operator, one produces a basis (the Laplacian Eigenmaps) for a Hilbert space of square-integrable functions on the submanifold. To recover such a basis, only unlabeled examples are required. Once such a basis is obtained, training can be performed using the labeled data set. Our algorithm models the manifold using the adjacency graph of the data and approximates the Laplace-Beltrami operator by the graph Laplacian. We provide details of the algorithm, its theoretical justification, and several practical applications to image, speech, and text classification.
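The discrete approximation step can be illustrated concretely: the algorithm replaces the Laplace-Beltrami operator with the graph Laplacian L = D − W of the data adjacency graph, whose eigenvectors play the role of the Laplacian Eigenmaps basis. A minimal sketch of the construction:

```python
def graph_laplacian(W):
    """Unnormalized graph Laplacian L = D - W for a symmetric
    adjacency/weight matrix W, given as a list of rows."""
    n = len(W)
    return [[(sum(W[i]) if i == j else 0) - W[i][j]
             for j in range(n)] for i in range(n)]

# 3-node path graph: 0 -- 1 -- 2
W = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
L = graph_laplacian(W)
```

Each row of L sums to zero, so constant functions lie in its kernel, mirroring the behavior of the Laplace-Beltrami operator on a connected manifold.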
13.
This paper introduces the application of HowNet to text mining: HowNet is used to compute the similarity between Chinese words from a semantic perspective and to measure the relatedness between words, laying the groundwork for deeper information processing.
14.
Most existing data stream classification algorithms use supervised learning and require large amounts of labeled data to train the classifier; since labeled data are costly to obtain, such algorithms are impractical. To address this problem, this paper proposes SEClass, an ensemble classification algorithm based on semi-supervised learning. SEClass uses a small amount of labeled data and a large amount of unlabeled data to train and update an ensemble classifier, and classifies test data by majority voting. Experimental results show that, with the same amount of labeled training data, SEClass is on average 5.33% more accurate than a state-of-the-art supervised ensemble classification algorithm. Its running time grows linearly with the number of attribute dimensions and the number of class labels, making it suitable for classifying high-dimensional, high-speed data streams.
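The majority-voting step that SEClass uses to classify test data can be sketched as follows (the ensemble construction and update logic are omitted; the toy threshold classifiers are illustrative):

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Classify x by majority vote over an ensemble; ties are broken
    by the order in which labels first appear among the votes."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

# toy ensemble: three threshold classifiers with different cutoffs
ensemble = [lambda x: "spam" if x > 0.5 else "ham",
            lambda x: "spam" if x > 0.3 else "ham",
            lambda x: "spam" if x > 0.8 else "ham"]
```

For an input of 0.6, two of the three members vote "spam", so the ensemble outputs "spam" even though one member disagrees.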
16.
In e-Learning systems, evaluating student learning is difficult. After analyzing the characteristics of learning evaluation in an e-Learning environment, this paper introduces electronic portfolio assessment and proposes applying text mining to learning evaluation, implementing evaluation of the student learning process according to a learning-evaluation rubric.
17.
Low-rank structures play important roles in recent advances on many problems in image science and data science. As a natural extension of low-rank structures to data with nonlinear structure, the concept of a low-dimensional manifold structure has been considered in many data processing problems. Inspired by this concept, we consider a manifold-based low-rank regularization as a linear approximation of manifold dimension. This regularization is less restrictive than global low-rank regularization and thus enjoys more flexibility in handling data with nonlinear structure. As applications, we apply the proposed regularization to classical inverse problems in image science and data science, including image inpainting, image super-resolution, X-ray computed tomography image reconstruction, and semi-supervised learning. We conduct intensive numerical experiments on several image restoration problems and on a semi-supervised learning problem of classifying handwritten digits using the MNIST data. Our numerical tests demonstrate the effectiveness of the proposed methods and show that the new regularization methods produce outstanding results in comparison with many existing methods.
18.
Federated learning allows edge devices or clients to keep data local while collaboratively training a shared global model. Mainstream federated learning systems usually assume that client-side local data are labeled; in practice, however, client data generally lack ground-truth labels, and data availability and data heterogeneity are the main challenges facing federated learning systems. For the scenario where clients' local data are unlabeled, this paper designs a robust semi-supervised federated learning system. The FedMix method is used to analyze the implicit relationship between global model iterations, learning the supervised model (trained on labeled data) and the unsupervised model (trained on unlabeled data) separately. A FedLoss aggregation method mitigates the impact of non-IID (non-independent and identically distributed) client data on the convergence speed and stability of the global model by dynamically adjusting each local model's weight in the global model according to the client's loss value. Experimental results on the CIFAR-10 dataset show that the system improves classification accuracy by about 3 percentage points compared with mainstream federated learning systems and is more robust to client data at different non-IID levels.
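The abstract does not spell out the exact FedLoss weighting formula; the sketch below illustrates the general idea of dynamically weighting client models in the global average according to their loss values, here with weights inversely proportional to loss (an assumed scheme, not the paper's):

```python
def fedloss_aggregate(client_params, client_losses, eps=1e-8):
    """Weighted average of client parameter vectors, with weights
    inversely proportional to each client's loss (illustrative scheme)."""
    inv = [1.0 / (loss + eps) for loss in client_losses]
    total = sum(inv)
    weights = [w / total for w in inv]
    dim = len(client_params[0])
    # elementwise weighted average of the parameter vectors
    return [sum(w * p[i] for w, p in zip(weights, client_params))
            for i in range(dim)]

params = [[1.0, 0.0], [3.0, 2.0]]
losses = [0.1, 0.3]  # the first client fits better, so it gets more weight
global_params = fedloss_aggregate(params, losses)
```

With losses 0.1 and 0.3 the weights are roughly 0.75 and 0.25, so the aggregate is pulled toward the lower-loss client's parameters.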
20.
Text mining is a relatively new research area associated with the creation of novel information resources from electronic text repositories. An expert-witness database based on text from legal, medical, and news documents demonstrates the successful application of text-mining techniques.