Similar Articles
18 similar articles found (search time: 328 ms)
1.
The naive Bayes classifier is simple and efficient, but its attribute independence assumption limits its application to real-world data. This paper proposes a new algorithm that avoids the direct impact of attribute reduction during data preprocessing on classification performance: several attribute subsets are generated by random attribute selection on the training set, a naive Bayes classifier is built on each subset, and a simulated-annealing genetic algorithm is used to select the best among them. Experiments show that the method performs better than the traditional naive Bayes approach.
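The random-subset scheme described above can be sketched as follows. This is a minimal illustration only: it replaces the paper's simulated-annealing genetic algorithm with plain random search scored by training-set accuracy, and all function names are hypothetical.

```python
import math
import random

def nb_fit_predict(X, y, subset, x_new, alpha=1.0):
    """Naive Bayes restricted to the attribute indices in `subset` (Laplace-smoothed)."""
    classes = sorted(set(y))
    best, best_score = None, -math.inf
    for c in classes:
        rows = [r for r, lab in zip(X, y) if lab == c]
        score = math.log(len(rows) / len(y))  # log prior
        for i in subset:
            n_vals = len(set(r[i] for r in X))
            count = sum(1 for r in rows if r[i] == x_new[i])
            score += math.log((count + alpha) / (len(rows) + alpha * n_vals))
        if score > best_score:
            best, best_score = c, score
    return best

def select_best_subset(X, y, n_subsets=20, seed=0):
    """Generate random attribute subsets and keep the one whose naive Bayes
    classifier scores highest on the training set (the paper optimizes this
    choice with a simulated-annealing genetic algorithm; random search stands
    in here)."""
    rng = random.Random(seed)
    n_attrs = len(X[0])
    best_subset, best_acc = list(range(n_attrs)), -1.0
    for _ in range(n_subsets):
        k = rng.randint(1, n_attrs)
        subset = rng.sample(range(n_attrs), k)
        acc = sum(nb_fit_predict(X, y, subset, x) == c
                  for x, c in zip(X, y)) / len(y)
        if acc > best_acc:
            best_subset, best_acc = subset, acc
    return best_subset, best_acc
```

On a toy data set with one noise attribute, subsets that include an informative attribute win the selection.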

2.
Factor-analysis-based NBC and its application to slope stability recognition (Cited: 1; self-citations: 0; by others: 1)
Gao Yan. Computer Engineering and Design, 2011, 32(11): 3828-3831
When the conditional independence assumption holds, the naive Bayes classifier theoretically achieves higher classification accuracy than other methods, but the assumption fails in many practical situations. To address this problem, a factor-analysis-based naive Bayes classification model, FA-NBC, is proposed and applied to slope stability recognition. To preserve the structural simplicity of the naive Bayes classifier, the FA-NBC model constructs a new attribute set according to variance contribution; the new attributes retain most of the information of the original attributes while satisfying the conditional independence assumption. Experimental results on UCI data sets demonstrate the effectiveness of the FA-NBC model.

3.
Genetic-algorithm-based naive Bayes classification (Cited: 1; self-citations: 0; by others: 1)
The naive Bayes classifier is simple and efficient, but its attribute independence assumption limits its application to real-world data. A new algorithm is proposed: because noise in the training set and the scale of the data can make attribute reduction during preprocessing unsatisfactory and thus degrade classification, several attribute subsets are instead generated by random attribute selection on the training set, a Bayes classifier is built on each subset, and a genetic algorithm is used to select the best. Experiments show that the method achieves better classification accuracy than traditional naive Bayes.

5.
An autonomous naive Bayes classification algorithm based on conditional information entropy (Cited: 9; self-citations: 0; by others: 9)
Naive Bayes is a simple and efficient classification algorithm, but its assumptions of conditional independence and equal attribute importance do not match reality, which degrades its classification performance to some extent. How to remove these prior assumptions and learn autonomously from the characteristics of the data itself is a difficult problem in machine learning. Based on rough set theory, an autonomous naive Bayes classification method driven by conditional information entropy is proposed, combining the advantages of selective naive Bayes and weighted naive Bayes. Simulation experiments on UCI data sets verify the effectiveness of the method.
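The conditional information entropy named in the title has a simple empirical form. The sketch below computes H(C|A) from counts; how the paper maps this measure onto attribute selection or weights is not specified in the abstract, so only the measure itself is shown.

```python
import math
from collections import Counter

def conditional_entropy(cls, attr):
    """Empirical conditional entropy H(C | A) in bits: how uncertain the class
    labels `cls` remain once the attribute values `attr` are known.
    Lower values mean a more informative attribute."""
    n = len(cls)
    attr_counts = Counter(attr)                  # counts of each attribute value
    joint_counts = Counter(zip(attr, cls))       # counts of (value, class) pairs
    # H(C|A) = -sum_{a,c} p(a,c) * log2 p(c|a), with p(c|a) = n(a,c) / n(a)
    return -sum(c / n * math.log2(c / attr_counts[a])
                for (a, _), c in joint_counts.items())
```

A perfectly predictive attribute yields H(C|A) = 0; an attribute independent of the class leaves the full class entropy.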

6.
Application of a complete-graph-based Bayes classifier to intrusion detection (Cited: 2; self-citations: 0; by others: 2)
Because of its strong independence assumption, the naive Bayes classifier does not consider relationships among attributes, and intrusion detection data sets do not satisfy this assumption well. A Bayes classifier based on a complete directed graph is therefore proposed: relationships among attributes are incorporated into the construction of the classifier, relaxing the strong independence assumption of naive Bayes, and the classifier is applied to intrusion detection. Experiments on the MIT intrusion detection data set show that the algorithm improves detection accuracy and performs well.

7.
Based on rough set attribute-significance theory, a mutual-information-based measure of attribute-subset significance is constructed, and a weighted naive Bayes classification algorithm based on attribute relevance is proposed; the algorithm relaxes both the attribute independence assumption and the equal-attribute-importance assumption of naive Bayes. Simulation experiments on several UCI data sets, compared against correlation-based Bayes (CB) and weighted naive Bayes (WNB), demonstrate the effectiveness of the algorithm.
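A hedged sketch of the general idea: weight each attribute by its empirical mutual information with the class, and use the weights as exponents on the per-attribute likelihoods. The paper's exact significance measure and weighting formula may differ; the function names below are hypothetical.

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X; Y) in bits between two discrete sequences."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def weighted_nb_predict(X, y, x_new, weights, alpha=1.0):
    """Weighted naive Bayes: score(c) = P(c) * prod_i P(x_i | c)^w_i,
    with Laplace-smoothed conditional probabilities."""
    classes = sorted(set(y))
    best, best_score = None, -math.inf
    for c in classes:
        rows = [r for r, lab in zip(X, y) if lab == c]
        score = math.log(len(rows) / len(y))
        for i in range(len(x_new)):
            n_vals = len(set(r[i] for r in X))
            count = sum(1 for r in rows if r[i] == x_new[i])
            p = (count + alpha) / (len(rows) + alpha * n_vals)
            score += weights[i] * math.log(p)
        if score > best_score:
            best, best_score = c, score
    return best
```

The weights can be computed once from the training data, e.g. `weights = [mutual_information([r[i] for r in X], y) for i in range(n_attrs)]`.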

8.
Peng Xingyuan, Liu Qiongsun. Journal of Computer Applications, 2011, 31(11): 3072-3074
Although the naive Bayes (NB) classification algorithm is simple and effective, its conditional attribute independence assumption ignores the correlations that exist among attribute variables. Considering the effect of this assumption on classification, a new grouping technique that clusters the conditional attributes is proposed; it avoids the traditional algorithm's flawed assumption that the conditional attributes are independent, and it reflects the different degrees of dependence among conditional attributes under different classes. Simulation experiments on several UCI data sets show the effectiveness of the new algorithm.

9.
An attribute-weighted naive Bayes classification algorithm (Cited: 3; self-citations: 0; by others: 3)
Naive Bayes classification is simple and efficient, but its attribute independence assumption hurts its classification performance. Relaxing the assumption can improve classification, but usually at a greatly increased computational cost. An attribute-weighted naive Bayes algorithm is proposed that improves classifier performance by weighting attributes, with the weighting parameters learned directly from the training data. A weight can be interpreted as the degree to which an attribute value influences the posterior probability of a class. Experimental results show that the algorithm is feasible and effective.

10.
The naive Bayes classification algorithm is simple and efficient, but its assumption of strong independence among attributes limits its range of application. To address this problem, an improved attribute-selection-based weighted naive Bayes classification algorithm (ASWNBC) is proposed, combining correlation-based feature selection (CFS) with weighted naive Bayes classification (WNBC). First, the CFS algorithm selects an attribute subset so that the reduced attribute set satisfies conditional independence as far as possible; then new weights, designed according to how different attribute values affect the classification result, serve as the algorithm's weighting coefficients; finally ASWNBC performs the classification. Experimental results show that the algorithm reduces classification time while improving classification accuracy, effectively improving the performance of naive Bayes classification.

11.
Numerous models have been proposed to reduce the classification error of naïve Bayes by weakening its attribute independence assumption, and some have demonstrated remarkable error performance. Since ensemble learning is an effective method of reducing classifier error, this paper proposes a double-layer Bayesian classifier ensemble (DLBCE) algorithm based on frequent itemsets. DLBCE constructs a double-layer Bayesian classifier (DLBC) for each frequent itemset contained in the new instance and finally ensembles all the classifiers, assigning each a weight according to conditional mutual information. The experimental results show that the proposed algorithm outperforms other outstanding algorithms.

12.
The prior distribution of an attribute in a naïve Bayesian classifier is typically assumed to be a Dirichlet distribution, and this is called the Dirichlet assumption. The variables in a Dirichlet random vector can never be positively correlated and must have the same confidence level as measured by normalized variance. Both the generalized Dirichlet and the Liouville distributions include the Dirichlet distribution as a special case. These two multivariate distributions, also defined on the unit simplex, are employed to investigate the impact of the Dirichlet assumption in naïve Bayesian classifiers. We propose methods to construct appropriate generalized Dirichlet and Liouville priors for naïve Bayesian classifiers. Our experimental results on 18 data sets reveal that the generalized Dirichlet distribution has the best performance among the three distribution families. Not only is the Dirichlet assumption inappropriate, but also forcing the variables in a prior to be all positively correlated can deteriorate the performance of the naïve Bayesian classifier.
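The Dirichlet assumption discussed above has a familiar concrete form: under a Dirichlet prior, the posterior mean of the multinomial parameters is additive smoothing of the observed counts, and the symmetric Dirichlet(1, ..., 1) case is exactly Laplace smoothing. A minimal sketch of that baseline (the paper's generalized Dirichlet and Liouville constructions are not reproduced here):

```python
def dirichlet_posterior_mean(counts, alphas):
    """Posterior mean of multinomial parameters under a Dirichlet(alphas) prior:
    E[theta_k | data] = (n_k + alpha_k) / (n + sum(alphas)).
    With alphas = (1, ..., 1) this is the Laplace smoothing commonly used to
    estimate P(attribute value | class) in naive Bayes."""
    total = sum(counts) + sum(alphas)
    return [(n + a) / total for n, a in zip(counts, alphas)]
```

For example, counts (3, 1, 0) under a uniform Dirichlet(1, 1, 1) prior give estimates (4/7, 2/7, 1/7), so an unseen value still receives nonzero probability.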

13.
Lazy Learning of Bayesian Rules (Cited: 19; self-citations: 0; by others: 19)
The naive Bayesian classifier provides a simple and effective approach to classifier learning, but its attribute independence assumption is often violated in the real world. A number of approaches have sought to alleviate this problem. A Bayesian tree learning algorithm builds a decision tree, and generates a local naive Bayesian classifier at each leaf. The tests leading to a leaf can alleviate attribute inter-dependencies for the local naive Bayesian classifier. However, Bayesian tree learning still suffers from the small disjunct problem of tree learning. While inferred Bayesian trees demonstrate low average prediction error rates, there is reason to believe that error rates will be higher for those leaves with few training examples. This paper proposes the application of lazy learning techniques to Bayesian tree induction and presents the resulting lazy Bayesian rule learning algorithm, called LBR. This algorithm can be justified by a variant of Bayes theorem which supports a weaker conditional attribute independence assumption than is required by naive Bayes. For each test example, it builds a most appropriate rule with a local naive Bayesian classifier as its consequent. It is demonstrated that the computational requirements of LBR are reasonable in a wide cross-section of natural domains. Experiments with these domains show that, on average, this new algorithm obtains lower error rates significantly more often than the reverse in comparison to a naive Bayesian classifier, C4.5, a Bayesian tree learning algorithm, a constructive Bayesian classifier that eliminates attributes and constructs new attributes using Cartesian products of existing nominal attributes, and a lazy decision tree learning algorithm. It also outperforms, although the result is not statistically significant, a selective naive Bayesian classifier.

14.
Hui Bo, Wu Yue, Chen Jia. Computer Science, 2006, 33(5): 110-112
Classifying e-mail with the naive Bayes (NB) model is a current focus of research on content-based spam filtering. Naive Bayes classifies simply and effectively when the relationships among parameters are weak, but its conditional independence assumption over the feature parameters cannot express the semantic relationships among them, which hurts classification performance. Building on the naive Bayes model, we propose a double-level Bayes classification model (DLB) that accounts for the interactions among parameters while retaining the advantages of naive Bayes, and we compare the performance of DLB with the naive Bayes model. Simulation experiments show that DLB outperforms naive Bayes for spam filtering under most conditions.

15.
A restrictive double-level Bayes classification model (Cited: 29; self-citations: 1; by others: 28)
The naive Bayes classification model is simple and effective, but its attribute independence assumption prevents it from expressing the dependencies among attribute variables, hurting its classification performance. By analyzing the classification principle of Bayes classification models and a variant form of Bayes' theorem, a new classification model based on Bayes' theorem, DLBAN (double-level Bayesian network augmented naive Bayes), is proposed; the model establishes dependencies among attributes by selecting key attributes. The method is compared experimentally with the naive Bayes classifier and the TAN (tree-augmented naive Bayes) classifier. The results show that DLBAN achieves higher classification accuracy on most data sets.

16.
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss (Cited: 67; self-citations: 0; by others: 67)
Domingos, Pedro; Pazzani, Michael. Machine Learning, 1997, 29(2-3): 103-130
The simple Bayesian classifier is known to be optimal when attributes are independent given the class, but the question of whether other sufficient conditions for its optimality exist has so far not been explored. Empirical results showing that it performs surprisingly well in many domains containing clear attribute dependences suggest that the answer to this question may be positive. This article shows that, although the Bayesian classifier's probability estimates are only optimal under quadratic loss if the independence assumption holds, the classifier itself can be optimal under zero-one loss (misclassification rate) even when this assumption is violated by a wide margin. The region of quadratic-loss optimality of the Bayesian classifier is in fact a second-order infinitesimal fraction of the region of zero-one optimality. This implies that the Bayesian classifier has a much greater range of applicability than previously thought. For example, in this article it is shown to be optimal for learning conjunctions and disjunctions, even though they violate the independence assumption. Further, studies in artificial domains show that it will often outperform more powerful classifiers for common training set sizes and numbers of attributes, even if its bias is a priori much less appropriate to the domain. This article's results also imply that detecting attribute dependence is not necessarily the best way to extend the Bayesian classifier, and this is also verified empirically.
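The conjunction result can be checked directly: train an unsmoothed naive Bayes classifier on the full truth table of a three-variable AND, whose attributes plainly violate the independence assumption, and verify that every point is still classified correctly under zero-one loss. This is a small numerical illustration, not the article's proof, and the function name is ours.

```python
from itertools import product

def nb_mle_predict(X, y, x_new):
    """Naive Bayes with plain maximum-likelihood (unsmoothed) frequency estimates."""
    best, best_score = None, -1.0
    for c in sorted(set(y)):
        rows = [r for r, lab in zip(X, y) if lab == c]
        score = len(rows) / len(y)  # class prior
        for i, v in enumerate(x_new):
            score *= sum(1 for r in rows if r[i] == v) / len(rows)
        if score > best_score:
            best, best_score = c, score
    return best

# Full truth table of the conjunction y = x1 AND x2 AND x3.
X = list(product([0, 1], repeat=3))
y = [int(all(r)) for r in X]
# Zero-one loss is zero on every point despite the dependent attributes.
assert all(nb_mle_predict(X, y, x) == t for x, t in zip(X, y))
```

For the all-ones point, the class-1 score is 1/8 * 1 * 1 * 1 = 0.125 versus 7/8 * (3/7)^3 ≈ 0.069 for class 0, so the prediction is correct even though the probability estimates themselves are poor.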

17.
Because of its rigorous mathematical foundation, Bayes classification is a simple and effective data-mining method. However, the Bayes classifier requires the conditional independence assumption and that every attribute has a weight of 1, which greatly reduces its performance. To address these limitations, this paper proposes an optimized Bayes classification algorithm (OBCA). First, rough set theory is used to perform attribute reduction on the data set to be classified, removing redundant attributes; then a method and formula for computing attribute weights are given, in order to describe the importance and relevance of the data set more accurately. Comparative tests were run with the Weka 3.6.2 tool on data sets from the UCI machine learning repository. The experimental results show that OBCA achieves higher classification accuracy.

18.
Because the conditional independence assumption, the defining feature of the naive Bayes classifier, is too strong and behaves differently across data sets, it has become the starting point for many improved algorithms; however, some studies have pointed out that violating the assumption does not affect the classifier as much as expected. Starting instead from reducing the estimation error of the posterior probabilities, a semi-naive Bayes classifier based on conditional-entropy matching is proposed. Experiments show that the method effectively improves the performance of the naive Bayes classifier.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号