Similar Literature
20 similar documents found
1.
Research on traditional classification algorithms has focused mainly on batch learning. In practice, however, labeled samples are rarely available all at once, and the large storage overhead of batch learning exposes its limitations, so incremental learning is needed. The naive Bayes classifier is simple, efficient, and robust, and Bayesian estimation theory provides a foundation for applying it to incremental tasks. However, existing incremental Bayesian models do not describe how to adapt to new classes, and experiments show that class imbalance in sample counts severely degrades their classification performance. To address these two problems, this paper improves the incremental Bayesian model by adding a parameter-correction formula so that it can accommodate newly appearing classes, and introduces minimum-risk decision making to mitigate the impact of data imbalance. Simulations on UCI datasets show that the improved model progressively raises classification performance and can adapt to new classes.
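The key property such a model relies on is that a naive Bayes classifier is just a table of counts, so a new sample, even one of a previously unseen class, can be folded in without retraining. A minimal sketch in Python (class and method names are illustrative, not from the paper, and the minimum-risk decision step is omitted; binary-valued features are assumed for the smoothing denominator):

```python
import math
from collections import defaultdict

class IncrementalNB:
    """Count-based naive Bayes over discrete features: learning is just
    count updates, so samples -- even of previously unseen classes --
    can be absorbed one at a time without retraining."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha                     # Laplace smoothing
        self.class_counts = defaultdict(int)   # N(c)
        self.feat_counts = defaultdict(lambda: defaultdict(int))  # N(i=v | c)
        self.n = 0

    def learn(self, x, y):
        # Incremental step: a new label y simply creates new count
        # entries, which is how a novel class enters the model.
        self.class_counts[y] += 1
        self.n += 1
        for i, v in enumerate(x):
            self.feat_counts[y][(i, v)] += 1

    def predict(self, x):
        def log_posterior(c):
            lp = math.log((self.class_counts[c] + self.alpha) /
                          (self.n + self.alpha * len(self.class_counts)))
            for i, v in enumerate(x):
                # denominator assumes binary feature values
                lp += math.log((self.feat_counts[c][(i, v)] + self.alpha) /
                               (self.class_counts[c] + 2 * self.alpha))
            return lp
        return max(list(self.class_counts), key=log_posterior)
```

Because `learn` only increments counters, interleaving training and prediction is cheap, which is exactly the appeal of naive Bayes for incremental tasks.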

2.
In incremental learning over samples of unknown class, a well-chosen learning sequence can optimize classifier performance and improve classification accuracy. Starting from the probability that samples learned earlier are correctly classified, this paper proposes a naive Bayes incremental learning sequencing algorithm based on the confidence of each sample's classification result: samples classified with higher confidence are learned first. On top of this algorithm, a virus-reporting analysis system is implemented for the automated analysis and detection of suspicious samples. Experimental results show that a classifier trained incrementally in this order detects better than one trained in a random incremental order.

3.
An Incremental Bayesian Learning Algorithm Based on Class Support
丁厉华, 张小刚. 《计算机工程》(Computer Engineering), 2008, 34(22): 218-219
This paper reviews the principle of the incremental Bayesian classifier and proposes an optimized incremental Bayesian learning algorithm based on class support. For the sample-selection problem during incremental learning, the algorithm introduces a class-support factor λ and, according to its value, iteratively selects samples from the test set to add to the classifier. Experiments show that with small training sets the algorithm achieves higher accuracy than the original incremental Bayesian algorithm and greatly reduces the time spent selecting incremental samples.

4.
The support vector machine (SVM) is widely used for classification because of its advantages on small training sets and its robustness, but traditional SVM classification cannot cope with incremental or large-scale data; incremental learning is one effective remedy. Based on a structural description of the data distribution, this paper proposes an adaptive SVM incremental learning algorithm. Using the geometric distances of the original and newly added samples to the current separating hyperplane, it builds an adaptive sample-selection model that effectively screens out the boundary samples to take part in incremental training. To balance the speed and performance of incremental learning, the model assigns the new samples and the original model's samples separate adjustment coefficients based on the similarity of their spatial distributions. Experimental results show that the algorithm speeds up classification while improving model performance.
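The boundary-sample idea can be illustrated with a small helper that keeps only the samples lying within a band around the current separating hyperplane. The function name and the fixed band width are assumptions for illustration, not the paper's adaptive selection model:

```python
import math

def select_boundary_samples(samples, w, b, band=1.0):
    """Keep only the samples whose geometric distance to the hyperplane
    w.x + b = 0 lies within the margin band -- the candidates most likely
    to become support vectors in the next incremental training round."""
    norm = math.sqrt(sum(wi * wi for wi in w))
    kept = []
    for x, y in samples:
        score = sum(wi * xi for wi, xi in zip(w, x)) + b
        if abs(score) / norm <= band:
            kept.append((x, y))
    return kept
```

Retraining on `kept` instead of the full set is what makes the incremental round cheap: points far from the hyperplane are unlikely to change the solution.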

5.
Incremental learning updates a classification model by revising its parameters with the useful information in incremental data; since naive Bayes naturally exploits both prior information and incremental information, it is a natural basis for designing incremental learning algorithms. Three-way decision making is a decision theory consistent with human cognition and has a subjective character. Integrating three-way decisions into naive Bayes incremental learning, this paper proposes a three-way-decision-based naive Bayes incremental learning algorithm. A notion of classification confidence is constructed from naive Bayes and combined with a cost function to determine the positive, negative, and boundary regions of three-way decision theory; the useful information in the three regions is then used to build the incremental learning algorithm. Experimental results show that with suitably chosen thresholds α and β, both classification accuracy and recall improve markedly.
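The three-way split driven by the thresholds α and β can be sketched as follows. The concrete threshold values and the function name are illustrative; in the paper, α and β are derived from a cost function rather than fixed by hand:

```python
def three_way_decide(posterior, alpha=0.75, beta=0.4):
    """Three-way decision on a posterior P(target | x):
    accept into the positive region when confident enough,
    reject into the negative region when confidently not the target,
    and defer to the boundary region otherwise."""
    assert 0 <= beta < alpha <= 1
    if posterior >= alpha:
        return "positive"
    if posterior <= beta:
        return "negative"
    return "boundary"
```

Only the confidently classified regions feed the incremental update directly, while boundary-region samples are deferred, which is how the scheme keeps uncertain (and possibly mislabeled) data from corrupting the model.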

6.
An Incremental Learning Algorithm for Naive Bayes Classification
Naive Bayes (NB) is a simple and effective probabilistic classification method, but it suffers when the training set is incomplete. When new training samples arrive, traditional Bayesian classification must relearn the samples it has already learned, wasting considerable time. This paper therefore introduces an incremental learning algorithm that, starting from the existing classifier, autonomously selects new texts to learn and revises the classifier accordingly. The idea and the concrete algorithm for term-frequency-weighted incremental naive Bayes learning are given, together with a proof. Analysis shows that, compared with non-incremental Bayesian classification, the extra space and time complexity of the algorithm are both acceptable.

7.
Naive Bayes classifiers struggle to obtain large labeled training sets, and traditional Bayesian classification must relearn already-learned samples whenever new training samples arrive, at great cost in time. This paper therefore adopts incremental learning and, on that basis, proposes an attribute-weighted naive Bayes algorithm that improves classifier performance through attribute weighting, with the weighting parameters learned directly from the training data. Experiments on UCI datasets recommended by Weka show that the algorithm is feasible and effective.

8.
To address the difficulty of obtaining large labeled training sets, incremental Bayesian learning is applied to small-scale training sets, and a new sequencing algorithm is proposed to remedy a weakness of the existing learning sequence: failure to fully exploit prior knowledge allows noisy data to keep propagating. For sample selection during incremental learning, the algorithm brings in paired-sample testing and class support, exploiting prior knowledge from the horizontal and vertical perspectives respectively to pick the optimal incremental subset and refine the classifier, so that its parameters are reinforced during dynamic learning. Experimental results show that the algorithm effectively weakens the negative influence of noisy data, improves classification accuracy, and greatly reduces incremental learning time.

9.
梁爽, 孙正兴. 《软件学报》(Journal of Software), 2009, 20(5): 1301-1312
To tackle three difficulties in relevance feedback for sketch retrieval, namely small training samples, asymmetric data, and real-time requirements, a small-sample incremental biased learning algorithm is proposed. It combines active learning, biased classification, and incremental learning to model the small-sample biased learning problem in relevance feedback. Active learning selects the best samples for user annotation via uncertainty sampling, maximizing the classifier's generalization ability under limited training samples; biased classification constructs a hypersphere to treat positive and negative examples differently and accurately mine the user's target class; and the samples newly added in each feedback round are used for incremental training of the classifier, accumulating sample information while reducing training time and further easing the small-sample problem. Experimental results show that the algorithm effectively improves sketch retrieval performance and also suits applications such as image retrieval and 3D model retrieval.

10.
李曼. 《微型机与应用》(Microcomputer & Its Applications), 2011, 30(18): 65-68
Existing incremental classification algorithms work only on small datasets or in centralized environments. This paper proposes an incremental classification model on the Hadoop cloud-computing platform to handle incremental classification of large-scale datasets. So that the platform can process incremental training samples automatically, a Map function is designed, following the idea of modular ensemble learning, to train on the incremental sample blocks arriving at different times, and a Reduce function ensembles the classifiers trained at different times, realizing incremental learning on the cloud platform. Simulation experiments verify the correctness and feasibility of the method.

11.
For the problem that labeled sample data are hard to obtain when building a dynamic Bayesian network (DBN) classification model, a semi-supervised active DBN learning algorithm based on EM and classification loss is proposed. The EM algorithm in semi-supervised learning can effectively use unlabeled sample data to learn a DBN classification model, but erroneous classification information easily enters during iteration and harms model accuracy. Active learning based on classification loss is therefore borrowed into EM learning: the algorithm autonomously selects useful unlabeled samples and asks the user to label them, so that adding these samples to the training set maximally reduces the model's uncertainty about classifying the unlabeled samples. Experiments show that the algorithm significantly improves the efficiency and performance of the DBN learner and converges quickly to a preset classification accuracy.

12.
A New Incremental Decision Tree Algorithm
For online classification systems with rapidly growing data, such as customer behavior analysis, Web log analysis, and network intrusion detection, adapting quickly to new samples is key to classifying correctly and running continuously. This paper proposes a new decision tree algorithm that accommodates incremental data: combined with a Bayesian method, it rapidly trains a new decision tree from the newly added samples on top of the existing tree. Experimental results show that the proposed algorithm solves the problem well; compared with rebuilding the decision tree from scratch, it costs less time and achieves higher classification accuracy, making it better suited to online classification systems.

13.
In this paper, we investigate a comprehensive learning algorithm for text classification without a pre-labeled training set, based on incremental learning. In order to overcome the high cost of getting labeled training examples, this approach reforms fuzzy partition clustering to obtain a small quantity of labeled training data. Then the incremental learning of a Bayesian classifier is applied. The model of the proposed classifier is composed of a Naïve-Bayes-based incremental learning algorithm and a modified fuzzy partition clustering method. For improved efficiency, a feature reduction is designed based on the Quadratic Entropy in Mutual Information. We perform experiments to demonstrate the performance of the approach, and the results show that our approach is feasible and effective.

14.
An incremental online semi-supervised active learning algorithm, based on a self-organizing incremental neural network (SOINN), is proposed. This paper describes an improvement of the two-layer SOINN into a single-layer SOINN that represents the topological structure of input data and separates the generated nodes into different groups and subclusters. We then actively label some teacher nodes and use such teacher nodes to label all unlabeled nodes. The proposed method can learn from both labeled and unlabeled samples. It can query the labels of important samples rather than selecting labeled samples randomly. It requires no prior knowledge, such as the number of nodes or the number of classes; it automatically learns the number of nodes and teacher vectors required for the current task. Moreover, it realizes online incremental learning. Experiments using artificial data and real-world data show that the proposed method performs effectively and efficiently.

15.
Text Classification from Labeled and Unlabeled Documents using EM
This paper shows that the accuracy of learned text classifiers can be improved by augmenting a small number of labeled training documents with a large pool of unlabeled documents. This is important because in many text classification problems obtaining training labels is expensive, while large quantities of unlabeled documents are readily available. We introduce an algorithm for learning from labeled and unlabeled documents based on the combination of Expectation-Maximization (EM) and a naive Bayes classifier. The algorithm first trains a classifier using the available labeled documents, and probabilistically labels the unlabeled documents. It then trains a new classifier using the labels for all the documents, and iterates to convergence. This basic EM procedure works well when the data conform to the generative assumptions of the model. However, these assumptions are often violated in practice, and poor performance can result. We present two extensions to the algorithm that improve classification accuracy under these conditions: (1) a weighting factor to modulate the contribution of the unlabeled data, and (2) the use of multiple mixture components per class. Experimental results, obtained using text from three different real-world tasks, show that the use of unlabeled data reduces classification error by up to 30%.
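The basic EM procedure described above can be sketched as follows, assuming a multinomial naive Bayes model over tokenized documents. Function names and the uniform initialization of the unlabeled pool are choices of this sketch; the paper's weighting factor and multiple mixture components are omitted:

```python
import math
from collections import Counter

def nb_train(docs, soft_labels, classes, smooth=1.0):
    # soft_labels: one dict {class: weight} per doc; hard labels have weight 1
    prior = {c: smooth for c in classes}
    wcount = {c: Counter() for c in classes}
    for doc, soft in zip(docs, soft_labels):
        for c, wgt in soft.items():
            prior[c] += wgt
            for w in doc:
                wcount[c][w] += wgt
    return prior, wcount

def nb_posterior(doc, prior, wcount, vocab, smooth=1.0):
    # class posterior P(c | doc) under multinomial naive Bayes
    logp = {}
    for c in prior:
        total = sum(wcount[c].values())
        lp = math.log(prior[c])
        for w in doc:
            lp += math.log((wcount[c][w] + smooth) / (total + smooth * len(vocab)))
        logp[c] = lp
    m = max(logp.values())
    z = sum(math.exp(v - m) for v in logp.values())
    return {c: math.exp(v - m) / z for c, v in logp.items()}

def em_nb(labeled, unlabeled, classes, iters=5):
    """Train on labeled docs, then iterate: E-step soft-labels the
    unlabeled pool with the current model, M-step retrains on all docs."""
    docs = [d for d, _ in labeled] + list(unlabeled)
    vocab = {w for d in docs for w in d}
    soft = [{y: 1.0} for _, y in labeled] + \
           [{c: 1.0 / len(classes) for c in classes} for _ in unlabeled]
    prior, wcount = nb_train(docs, soft, classes)
    for _ in range(iters):
        for i, d in enumerate(unlabeled, start=len(labeled)):
            soft[i] = nb_posterior(d, prior, wcount, vocab)   # E-step
        prior, wcount = nb_train(docs, soft, classes)          # M-step
    return prior, wcount, vocab
```

With each iteration the soft labels of the unlabeled documents sharpen, so words that co-occur with labeled vocabulary pull their documents toward the right class.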

16.
The traditional setting of supervised learning requires a large amount of labeled training examples in order to achieve good generalization. However, in many practical applications, unlabeled training examples are readily available, but labeled ones are fairly expensive to obtain. Therefore, semisupervised learning has attracted much attention. Previous research on semisupervised learning mainly focuses on semisupervised classification. Although regression is almost as important as classification, semisupervised regression is largely understudied. In particular, although cotraining is a main paradigm in semisupervised learning, little work has been devoted to cotraining-style semisupervised regression algorithms. In this paper, a cotraining-style semisupervised regression algorithm, COREG, is proposed. This algorithm uses two regressors, each of which labels the unlabeled data for the other, where the confidence in labeling an unlabeled example is estimated through the amount of reduction in mean squared error over the labeled neighborhood of that example. Analysis and experiments show that COREG can effectively exploit unlabeled data to improve regression estimates.
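A heavily simplified sketch of the cotraining idea, assuming 1-D inputs and two kNN regressors differing only in k; the confidence estimate here is a crude distance proxy rather than COREG's MSE-reduction criterion, and all names are illustrative:

```python
def knn_predict(x, data, k):
    # 1-D kNN regression: average the y of the k nearest training points
    nearest = sorted(data, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / len(nearest)

def coreg_sketch(labeled, unlabeled, k1=1, k2=3, rounds=5):
    """Two kNN regressors with different k take turns pseudo-labeling
    the unlabeled point each is most confident about and handing it
    to the other's training set."""
    train1, train2 = list(labeled), list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        if not pool:
            break
        for src, dst, k in ((train1, train2, k1), (train2, train1, k2)):
            if not pool:
                break
            # crude confidence proxy: proximity to src's training points
            x = min(pool, key=lambda u: min(abs(u - xi) for xi, _ in src))
            pool.remove(x)
            dst.append((x, knn_predict(x, src, k)))
    # final regressor: average of the two co-trained models
    return lambda x: 0.5 * (knn_predict(x, train1, k1) +
                            knn_predict(x, train2, k2))
```

The two regressors must disagree somewhere (here via different k) for the exchange of pseudo-labels to add information; identical learners would just teach each other what they already know.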

17.
18.
邓丽, 金立左, 费敏锐. 《计算机工程》(Computer Engineering), 2011, 37(22): 281-283
The small-sample problem limits the learning ability of Bayesian relevance-feedback algorithms. This paper therefore proposes a Bayesian relevance-feedback algorithm for video retrieval based on semi-supervised learning: one classifier estimates the probability that each shot in the video library belongs to the target shots, while a second, semi-supervised classifier judges whether shots the user has not labeled are relevant to the target, thereby enlarging the Bayesian learner's training set and strengthening its classification ability. Experimental results show that the algorithm improves the retrieval performance of the Bayesian method.

19.
When a Bayesian classifier trained on an incomplete sample set learns incrementally from new training data of unknown class, misclassified training samples may be added to the classifier too early and degrade its performance; moreover, using a fixed confidence-evaluation parameter makes incremental learning inefficient and its generalization unstable. To solve these problems, a sequence-selection incremental learning method with dynamic confidence is proposed. First, texts correctly classified by the existing classifier are selected to form a new incremental training subset. Second, confidence is used to dynamically monitor classifier performance and perform batch instance selection on that subset. Finally, a sensible learning sequence is chosen to reinforce the positive influence of complete data and weaken the negative influence of noisy data, and the test texts are classified. Experimental results show that the proposed method clearly improves both classification accuracy and incremental learning efficiency.

20.
Large, unlabeled datasets are abundant nowadays, but getting labels for those datasets can be expensive and time-consuming. Crowd labeling is a crowdsourcing approach for gathering such labels from workers whose suggestions are not always accurate. While a variety of algorithms exist for this purpose, we present crowd labeling latent Dirichlet allocation (CL-LDA), a generalization of latent Dirichlet allocation that can solve a more general set of crowd labeling problems. We show that it performs as well as other methods and at times better on a variety of simulated and actual datasets while treating each label as compositional rather than indicating a discrete class. In addition, prior knowledge of workers’ abilities can be incorporated into the model through a structured Bayesian framework. We then apply CL-LDA to the EEG independent component labeling dataset, using its generalizations to further explore the utility of the algorithm. We discuss prospects for creating classifiers from the generated labels.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号