20 similar documents found; search took 171 ms
1.
This paper proposes an algorithm based on semi-supervised active learning to address a difficulty in building dynamic Bayesian network (DBN) classification models: obtaining a large set of class-labeled samples. Semi-supervised learning can exploit unlabeled samples effectively when learning the DBN classifier, but erroneous class information is easily introduced during its iterations, degrading the model's accuracy. By borrowing from active learning, the method autonomously selects useful unlabeled samples and asks the user to label them; once these samples are added to the training set, they maximally improve the accuracy with which semi-supervised learning classifies the remaining unlabeled samples. Experimental results show that the algorithm significantly improves the efficiency and performance of the DBN learner and converges quickly to a preset classification accuracy.
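The query-selection step described in this abstract, picking the unlabeled samples the current model is least certain about and asking the user to label them, can be sketched as follows. This is a minimal illustration using entropy as the uncertainty measure; `toy_proba` and the sample pool are hypothetical stand-ins, not the paper's DBN model.

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability vector (higher = less certain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_queries(unlabeled, predict_proba, k):
    """Pick the k unlabeled samples the current model is least certain about."""
    ranked = sorted(unlabeled, key=lambda x: entropy(predict_proba(x)), reverse=True)
    return ranked[:k]

# Toy stand-in for a classifier's posterior: x near 0.5 is ambiguous.
def toy_proba(x):
    return (x, 1 - x)

pool = [0.05, 0.48, 0.90, 0.55, 0.99]
queries = select_queries(pool, toy_proba, 2)
print(queries)  # → [0.48, 0.55]
```

In a real loop, the queried samples would be labeled by the user, added to the training set, and the model retrained before the next query round.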
2.
3.
4.
5.
To address the difficulty of obtaining class-labeled sample sets when building a dynamic Bayesian network (DBN) classification model, a semi-supervised active DBN learning algorithm based on EM and classification loss is proposed. The EM algorithm in semi-supervised learning can exploit unlabeled data to learn the DBN classifier, but erroneous class information easily enters during iteration and harms model accuracy. Active learning based on classification loss is incorporated into the EM procedure: it autonomously selects useful unlabeled samples for the user to label, and adding them to the training set maximally reduces the model's uncertainty in classifying the remaining unlabeled samples. Experiments show that the algorithm significantly improves the efficiency and performance of the DBN learner and converges quickly to a preset classification accuracy.
6.
When classifying real-world data, one often encounters many samples without class labels. Classic classification methods typically label the missing samples first and then classify, which is time-consuming and yields poor accuracy. To address this, a Bayesian classification method based on active learning, RANB, is proposed. Active learning is introduced to reduce the cost of evaluating samples and to improve classifier performance. On top of the active-learning strategy, RANB incorporates the ideas of conditional entropy and classification loss, effectively suppressing the error introduced by uncertain samples. Experiments show that, compared with classic methods such as the naive Bayes classifier, it maintains classification performance while markedly reducing the number of samples needed for learning, and it is especially advantageous for samples without class labels.
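The classification-loss idea mentioned above, preferring samples whose predicted label is most likely to be wrong, can be sketched as a simple ranking rule. The `posterior` function here is a hypothetical stand-in for a Bayesian classifier's class posterior, not the RANB implementation.

```python
def expected_loss(probs):
    """Bayes risk under 0-1 loss: probability the most likely label is wrong."""
    return 1.0 - max(probs)

def rank_by_loss(pool, posterior):
    """Order unlabeled samples so that labeling the first ones removes
    the most classification uncertainty."""
    return sorted(pool, key=lambda x: expected_loss(posterior(x)), reverse=True)

# Toy binary posterior: x is the probability of the positive class.
def posterior(x):
    return (x, 1 - x)

pool = [0.1, 0.5, 0.8]
ranked = rank_by_loss(pool, posterior)
print(ranked)  # → [0.5, 0.8, 0.1]
```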
7.
To address the difficulty of obtaining labeled samples and the high annotation cost that limit the accuracy of sea-ice classification from remote-sensing imagery, a method combining active learning and semi-supervised learning is proposed. First, active learning driven by uncertainty and diversity criteria selects a batch of the most informative samples to build a labeled set. Second, the abundant unlabeled samples are exploited: guided by the active-learning sampling idea, representative semi-labeled samples distributed around the support vectors are selected to build a semi-supervised classification model. Finally, active learning is combined with a transductive support vector machine to construct the classifier for sea-ice image classification. Experiments show that, compared with other methods, this approach achieves higher classification accuracy with only a small number of labeled samples and can effectively solve the remote-sensing sea-ice classification problem.
8.
To address sample labeling and concept drift in data-stream classification, a data-stream classification and mining model based on instance transfer is proposed. First, the model uses a support vector machine as the learner, takes the support vectors of the resulting classification model as the source domain, and treats the current data block awaiting classification as the target domain. Then, using the idea of mutual nearest neighbors, true neighbors of the target-domain samples are selected from the source domain for instance transfer, avoiding negative transfer. Finally, the target domain and the transferred samples are merged into a training set, increasing the number of labeled samples and strengthening the model's generalization ability. Theoretical analysis and experimental results show that the method is feasible and outperforms other learning methods in classification accuracy.
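The mutual-nearest-neighbor filter used above to pick "true neighbors" and avoid negative transfer can be sketched in one dimension: a source sample is transferred only if it and a target sample are each among the other's k nearest neighbors across domains. The data and the absolute-difference distance are toy assumptions; the paper's source domain of support vectors is not modeled here.

```python
def nearest(xs, q, k):
    """Indices of the k points in xs closest to query q (1-D toy distance)."""
    return [i for _, i in sorted((abs(x - q), i) for i, x in enumerate(xs))[:k]]

def mutual_pairs(source, target, k):
    """(source_idx, target_idx) pairs that are reciprocal k-nearest
    neighbors across the two domains."""
    pairs = []
    for ti, t in enumerate(target):
        for si in nearest(source, t, k):
            if ti in nearest(target, source[si], k):
                pairs.append((si, ti))
    return pairs

src = [0.0, 1.0, 5.0]   # labeled source-domain samples
tgt = [0.9, 4.8, 9.0]   # current data block (target domain)
print(mutual_pairs(src, tgt, 1))  # → [(1, 0), (2, 1)]; the outlier 9.0 matches nothing
```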
9.
10.
The spy technique in positive-unlabeled (PU) learning is highly susceptible to noise and outliers, so the reliable positives it extracts are impure, and choosing spy samples at random from the initial positives makes the extraction of reliable negatives inefficient. To address these problems, a PU learning framework combining a new spy technique with semi-supervised self-training is proposed. First, the framework clusters the initial labeled samples and selects samples close to the cluster centers to replace the spies; these samples effectively reflect the distribution of the unlabeled data and thus better assist in selecting reliable negatives. Then the reliable positives produced by the spy step are purified by self-training, and a second round of training recovers reliable negatives that were misclassified as positives. The framework effectively resolves the traditional spy technique's sensitivity to the data distribution and to random spy selection in PU learning. Simulation experiments on nine standard data sets show that the framework's average classification accuracy and F-measure exceed those of the basic PU learning algorithm (Basic_PU), the spy-based PU algorithm (SPY), the naive-Bayes self-training PU algorithm (NBST), and the iterative-pruning PU algorithm (Pruning).
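The spy step this abstract improves upon works by hiding known positives among the unlabeled data: after training a classifier, any unlabeled sample that scores below the worst-scoring spy is taken as a reliable negative. A minimal sketch, with hypothetical classifier scores standing in for a trained model:

```python
def reliable_negatives(spy_scores, unlabeled_scores):
    """Spy step of PU learning: unlabeled samples scoring below every
    spy (a known positive hidden in the unlabeled set) are treated as
    reliable negatives."""
    threshold = min(spy_scores)
    return [i for i, s in enumerate(unlabeled_scores) if s < threshold]

spies = [0.62, 0.71, 0.55]            # scores of hidden known positives
unlabeled = [0.10, 0.58, 0.30, 0.80]  # scores of genuinely unlabeled samples
print(reliable_negatives(spies, unlabeled))  # → [0, 2]
```

A noisy spy with an unusually low score drags the threshold down and lets impure samples through, which is exactly the weakness the cluster-center spies above are meant to fix.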
11.
Mickael Daubie, Philippe Levecq & Nadine Meskens 《International Transactions in Operational Research》2002,9(5):681-694
Credit scoring is the term used to describe methods for classifying applicants for credit into classes of risk. This paper evaluates two induction approaches, rough sets and decision trees, as techniques for classifying (business) credit applicants. Inductive learning methods such as rough sets and decision trees have a better knowledge-representation structure than neural networks or statistical procedures because they can be used to derive production rules. While decision trees have already been used for credit granting, the rough sets approach is rarely utilized in this domain. In this paper, we use production rules obtained on a sample of 1102 business loans to compare the classification abilities of the two techniques. We show that decision trees obtain better results, with 87.5% good classifications for a pruned tree against 76.7% for rough sets. However, decision trees make more type-II errors than rough sets, but fewer type-I errors.
12.
Huimin Zhao 《Knowledge and Information Systems》2008,15(3):321-334
In real-world classification problems, different types of misclassification errors often have asymmetric costs, demanding cost-sensitive learning methods that attempt to minimize average misclassification cost rather than plain error rate. Instance weighting and post hoc threshold adjusting are two major approaches to cost-sensitive classifier learning. This paper compares the effects of these two approaches on several standard, off-the-shelf classification methods. The comparison indicates that the two approaches lead to similar results for some classification methods, such as naïve Bayes, logistic regression, and backpropagation neural networks, but very different results for other methods, such as decision tree, decision table, and decision rule learners. The findings have important implications for the selection of a cost-sensitive classifier learning approach, as well as for the interpretation of a recently published finding about the relative performance of naïve Bayes and decision trees.
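Threshold adjusting, the second approach compared above, follows directly from Bayes decision theory for binary problems: with cost c_fp for a false positive and c_fn for a false negative, predicting positive whenever P(pos|x) > c_fp/(c_fp + c_fn) minimizes expected cost. A small sketch with hypothetical posterior values:

```python
def cost_threshold(c_fp, c_fn):
    """Posterior threshold minimizing expected cost for a binary task:
    predicting positive costs (1-p)*c_fp, negative costs p*c_fn, so
    positive wins when p > c_fp / (c_fp + c_fn)."""
    return c_fp / (c_fp + c_fn)

def predict(posteriors, c_fp, c_fn):
    """Apply the cost-adjusted threshold to a list of P(pos|x) values."""
    t = cost_threshold(c_fp, c_fn)
    return [int(p > t) for p in posteriors]

# False negatives 4x as costly as false positives: threshold drops from 0.5 to 0.2.
print(cost_threshold(1, 4))             # → 0.2
print(predict([0.15, 0.3, 0.6], 1, 4))  # → [0, 1, 1]
```

Instance weighting, by contrast, changes the training distribution itself, which is why the two approaches can diverge for learners whose decision boundaries depend nonlinearly on the training data.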
13.
Remote sensing is the main means of extracting land cover types, which is important for monitoring land-use change and developing national policies. Object-based classification methods can provide higher accuracy than pixel-based methods by using spectral, shape, and texture information. In this study, we chose GF-1 satellite imagery and proposed a method that automatically calculates the optimal segmentation scale. Object-based methods for classifying four typical land cover types are compared using multi-scale segmentation and three supervised machine learning algorithms. The relationship between classification accuracy and training-sample proportion is analyzed; the results show that object-based methods achieve high accuracy even with a small training-sample ratio, with overall accuracies above 94%. Overall, the support vector machine was more accurate than the neural network and the decision tree in the object-oriented classification process.
14.
A. V. Zukhba 《Pattern Recognition and Image Analysis》2010,20(4):484-494
The prototype selection problem is to select, within the learning sample, a subset of minimum cardinality that optimizes a given learning quality functional. In this article the problem is considered for two-class classification with the nearest-neighbor rule and three quality functionals: the frequency of errors on the entire sample, cross-validation with one held-out object (leave-one-out), and complete cross-validation with k held-out objects. It is shown that prototype selection is NP-complete in all three cases, which justifies the use of well-known heuristic methods for the prototype search.
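One of the well-known heuristics that this NP-completeness result justifies is Hart's condensed nearest neighbor rule, which greedily keeps only the prototypes needed for 1-NN to classify the remaining sample correctly. A 1-D toy sketch (the data are invented for illustration):

```python
def condensed_nn(points, labels):
    """Hart's condensed-nearest-neighbor heuristic: grow a prototype set
    until 1-NN over the kept prototypes classifies every remaining
    sample correctly. A greedy stand-in for the NP-complete optimum."""
    keep = [0]
    changed = True
    while changed:
        changed = False
        for i in range(len(points)):
            if i in keep:
                continue
            nn = min(keep, key=lambda j: abs(points[j] - points[i]))
            if labels[nn] != labels[i]:  # misclassified: promote to prototype
                keep.append(i)
                changed = True
    return sorted(keep)

pts = [0.0, 0.1, 0.2, 5.0, 5.1]
labs = ['a', 'a', 'a', 'b', 'b']
print(condensed_nn(pts, labs))  # → [0, 3]: one prototype per cluster suffices
```

The heuristic gives no minimality guarantee, which is precisely what the NP-completeness result says cannot be obtained efficiently in general.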
15.
KNN, as a simple classification method, is widely used in text classification, but it suffers from heavy computation and from reduced accuracy caused by unevenly distributed training documents. To address these problems, based on the incremental idea of minimizing learning error, this paper combines learning vector quantization (LVQ) and the growing neural gas (GNG) into a new incremental learning vector quantization method and applies it to text classification. The proposed algorithm selectively trains on all training samples in a single pass to generate an effective set of representative samples, giving it strong learning ability. Experimental results show that the method not only reduces the testing time of the KNN method but also maintains or even improves classification accuracy.
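The LVQ half of the combination above adjusts representative samples (prototypes) incrementally: the nearest prototype is pulled toward a training sample when their labels agree and pushed away otherwise. A minimal 1-D sketch of the classic LVQ1 update, not the paper's LVQ+GNG hybrid:

```python
def lvq_step(prototypes, x, x_label, lr=0.1):
    """One LVQ1 update on a list of (position, label) prototypes:
    move the winner toward x if labels match, away if they differ."""
    w = min(range(len(prototypes)), key=lambda i: abs(prototypes[i][0] - x))
    pos, label = prototypes[w]
    sign = 1.0 if label == x_label else -1.0
    prototypes[w] = (pos + sign * lr * (x - pos), label)
    return w

protos = [(0.0, 'a'), (10.0, 'b')]
winner = lvq_step(protos, 2.0, 'a')  # prototype 0 wins; same label, so it moves toward 2.0
print(winner, protos[0])             # → 0 (0.2, 'a')
```

Replacing the raw training documents with such learned prototypes is what cuts KNN's test-time cost: the query is compared against a few representatives instead of the whole corpus.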
16.
In the distribution-independent model of concept learning due to Valiant, Angluin and Laird introduced a formal model of a noise process, called the classification noise process, to study how to compensate for randomly introduced errors, or noise, in classifying the example data. In this article, we investigate the problem of designing efficient learning algorithms in the presence of classification noise. First, we develop a technique for building efficient robust learning algorithms, called noise-tolerant Occam algorithms, and show that with them one can construct a polynomial-time algorithm for learning a class of Boolean functions in the presence of classification noise. Next, as an instance of such problems, we focus on learning Boolean functions represented by decision trees. We present a noise-tolerant Occam algorithm for k-DL (the class of decision lists with conjunctive clauses of size at most k at each decision, introduced by Rivest) and hence conclude that k-DL is polynomially learnable in the presence of classification noise. Further, we extend the noise-tolerant Occam algorithm for k-DL to one for r-DT (the class of decision trees of rank at most r, introduced by Ehrenfeucht and Haussler) and conclude that r-DT is also polynomially learnable in the presence of classification noise.
17.
On dimensionality, sample size, classification error, and complexity of classification algorithm in pattern recognition
This paper compares four classification algorithms (discriminant functions) for classifying individuals into two multivariate populations. The discriminant functions (DFs) compared are derived according to the Bayes rule for normal populations and differ in their assumptions on the structure of the covariance matrices. Analytical formulas for the expected probability of misclassification EP_N are derived and show that the classification error EP_N depends on the structure of the classification algorithm, the asymptotic probability of misclassification P_∞, and the ratio of learning sample size N to dimensionality p: N/p for all linear DFs discussed and N²/p for quadratic DFs. Tables of the learning quantity H = EP_N/P_∞ as a function of the parameters P_∞, N, and p for the four classification algorithms analyzed are presented; they may be used to estimate the necessary learning sample size, determine the optimal number of features, and choose the type of classification algorithm when the learning sample size is limited.
18.
《Pattern recognition letters》2002,23(1-3):227-233
The problem studied is the behavior of a discrete classifier on a finite learning sample. Under a naive Bayes approach, the misclassification probability is represented as a random function, for which the first two moments are analytically derived. For arbitrary distributions, this allows evaluation of the learning sample size sufficient for classification with a given admissible misclassification probability and confidence level. Comparison with statistical learning theory shows that the suggested approach frequently recommends a significantly smaller learning sample size.
19.
Huang Dong 《Computer Engineering &amp; Science》2011,33(12):130
Discretization of continuous data is an important preprocessing step in classification methods for data mining. This paper proposes a balanced discretization method based on the minimum description length (MDL) principle: a balanced discretization function built on MDL theory that properly weighs the trade-off between discretization intervals and classification error. An effective heuristic algorithm based on this balanced function is then proposed to find the best cut-point sequence. Simulation results show that, with the C5.0 decision tree and the naive Bayes classifier, the proposed algorithm exhibits good classification and learning ability.
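The heart of such entropy/MDL-based discretization is choosing cut points that minimize the class entropy of the resulting intervals. The sketch below shows only the cut-point search; the MDL stopping rule and the paper's balance function are omitted, and the data are invented:

```python
import math

def entropy(labels):
    """Class entropy in bits of a label sequence."""
    n = len(labels)
    counts = {l: labels.count(l) for l in set(labels)}
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def best_cut(values, labels):
    """Return the boundary value that maximizes information gain, the
    inner step of entropy/MDL-based discretization."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    v = [values[i] for i in order]
    y = [labels[i] for i in order]
    n = len(v)
    best = None
    for i in range(1, n):
        gain = entropy(y) - (i / n) * entropy(y[:i]) - ((n - i) / n) * entropy(y[i:])
        cut = (v[i - 1] + v[i]) / 2
        if best is None or gain > best[0]:
            best = (gain, cut)
    return best[1]

vals = [1, 2, 3, 10, 11, 12]
labs = ['a', 'a', 'a', 'b', 'b', 'b']
print(best_cut(vals, labs))  # → 6.5, the boundary separating the two classes
```

A full discretizer would recurse on each side of the cut and stop when the MDL criterion says further splitting costs more bits than it saves.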
20.
Land cover classification in the arid region of Northwest China based on the C5.0 decision tree algorithm: a case study of Wuwei City, Gansu Province
The arid region of Northwest China is vast. Its diverse land-use types, complex genesis, sensitivity to environmental change, rapid and large-amplitude change processes, and distinct landscape differences produce a pronounced "same object, different spectra" phenomenon in imagery. Conventional approaches, such as visual interpretation, supervised and unsupervised classification, and manually built decision-tree classification, each have drawbacks in efficiency or accuracy. This study applies the machine-learning C5.0 decision tree algorithm, integrating spectral, NDVI, tasseled-cap (TC), and texture information, to mine classification rules automatically from sample data and classify the whole study area. A machine-learned decision tree can mine more classification rules; the C5.0 algorithm places no requirements on the distribution of the sampled data, handles both discrete and continuous data, produces rules that are easy to understand, and achieves high classification accuracy, meeting the needs of large-area land use/cover change mapping in the arid Northwest.