Similar Documents
19 similar documents found.
1.
Learning and Optimization of Full Bayesian Classifiers with Continuous Attributes (cited 2 times: 0 self, 2 other)
To address the problems that the naive Bayes classifier with continuous attributes cannot effectively exploit the conditional dependence information among attributes, while its dependency extensions make it hard to jointly optimize the estimation of attribute conditional joint densities and structure learning, this paper builds a full Bayesian classifier for continuous attributes with multiple smoothing parameters, using multivariate Gaussian kernel functions to estimate the attribute conditional joint densities. A smoothing-parameter optimization method is given that combines a classification-accuracy criterion with an exhaustive search over unequal-length interval partitions, and a dynamic full Bayesian classifier is then constructed by temporal extension. Experiments on continuous-attribute classification data from the UCI machine learning repository and on macroeconomic data show that both optimized classifiers achieve good classification accuracy.
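The following is a minimal sketch (not the paper's implementation) of the two ingredients this abstract describes: a class-conditional joint density estimated with a multivariate Gaussian kernel whose per-attribute bandwidths play the role of the smoothing parameters, and a smoothing-parameter search driven by classification accuracy. The accuracy-driven grid over a shared bandwidth scale is only a stand-in for the paper's interval-partition search; all names are illustrative.

```python
import numpy as np

def kernel_joint_density(x, X_class, h):
    """Diagonal-bandwidth multivariate Gaussian kernel estimate of p(x | class).
    X_class: (n, d) training vectors of one class; h: (d,) smoothing parameters."""
    z = (x - X_class) / h
    k = np.exp(-0.5 * np.sum(z ** 2, axis=1))
    return np.mean(k) / (np.prod(h) * (2.0 * np.pi) ** (len(h) / 2.0))

def predict(x, X, y, h):
    """Maximum-posterior decision: prior(c) times the kernel density of x under class c."""
    classes = np.unique(y)
    scores = [np.mean(y == c) * kernel_joint_density(x, X[y == c], h) for c in classes]
    return classes[int(np.argmax(scores))]

def loo_accuracy(X, y, h):
    """Leave-one-out classification accuracy, used here as the tuning criterion."""
    idx = np.arange(len(y))
    return np.mean([predict(X[i], X[idx != i], y[idx != i], h) == y[i] for i in idx])

def tune_bandwidths(X, y, scales=(0.2, 0.4, 0.6, 1.0, 1.5)):
    """Pick a shared scale for the per-attribute bandwidths by LOO accuracy
    (a crude stand-in for the paper's interval-partition search)."""
    candidates = [s * X.std(axis=0) + 1e-12 for s in scales]
    return max(candidates, key=lambda h: loo_accuracy(X, y, h))
```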

2.
To address the problems that the naive Bayes classifier with continuous attributes cannot effectively exploit the conditional dependence information among attributes, while the inverse and determinant computations on high-order covariance matrices required by its dependency extensions are very difficult, this paper combines tridiagonal matrices with multivariate Gaussian functions to build a full Bayesian classifier for continuous attributes. A smoothing parameter is introduced into the tridiagonal matrix, and the classifier is optimized by adjusting this parameter. Experimental results on UCI data show that the optimized full Bayesian classifier with continuous attributes achieves good classification accuracy.
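As an illustration of the computational point made here, the sketch below (an assumed stand-in, not the paper's method) restricts the class-conditional covariance to a tridiagonal matrix, so the determinant and inverse that are costly for a full high-order covariance reduce to banded Cholesky operations; `smooth` stands in for the paper's smoothing parameter.

```python
import numpy as np
from scipy.linalg import cholesky_banded, cho_solve_banded

def fit_tridiag_gaussian(X_class, smooth=0.5):
    """Class-conditional Gaussian whose covariance keeps only the attribute
    variances and the covariances of adjacent attributes (a tridiagonal matrix);
    `smooth` scales the off-diagonal band (shrinking it also helps keep the
    matrix positive definite)."""
    mean = X_class.mean(axis=0)
    full = np.cov(X_class, rowvar=False)
    d = len(mean)
    ab = np.zeros((2, d))                    # banded (upper) storage
    ab[1, :] = np.diag(full) + 1e-6          # main diagonal, with a little jitter
    ab[0, 1:] = smooth * np.diag(full, 1)    # adjacent-attribute covariances
    return mean, ab

def tridiag_gaussian_logpdf(x, mean, ab):
    """log N(x; mean, Sigma) for a tridiagonal Sigma stored in banded form:
    the banded Cholesky factor gives both the determinant and the solve in O(d),
    avoiding the inverse and determinant of a full high-order matrix."""
    c = cholesky_banded(ab, lower=False)     # Sigma = U^T U, diag(U) in c[1, :]
    diff = x - mean
    maha = diff @ cho_solve_banded((c, False), diff)
    logdet = 2.0 * np.sum(np.log(c[1, :]))
    return -0.5 * (maha + logdet + len(mean) * np.log(2.0 * np.pi))
```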

3.
Research on Constrained Gaussian Classification Networks (cited 1 time: 0 self, 1 other)
王双成, 高瑞, 杜瑞杰. 《自动化学报》(Acta Automatica Sinica), 2015, 41(12): 2164-2176
The naive Bayes classifier, which estimates attribute marginal densities with univariate Gaussian functions, cannot effectively exploit dependence information among attributes, while the full Bayesian classifier, which estimates the attribute joint density with a multivariate Gaussian function, tends to overfit the data and requires difficult computations on high-order covariance matrices. To address these issues, this paper establishes theorems on the decomposition and combination of attribute joint densities and on the computation of attribute conditional densities, and on that basis combines attribute selection for the naive Bayes classifier, a classification-accuracy criterion, and greedy selection of attribute parent nodes to learn and optimize constrained Gaussian classification networks. Based on Bayesian network theory, the composition of the information that attributes provide for the class in the derived Bayesian classifiers is also analyzed. Experiments on continuous-attribute classification data from the UCI repository show that the optimized constrained Gaussian classification networks achieve good classification accuracy.
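A compressed sketch of two of the ingredients named above, under simplifying assumptions (each attribute gets at most one lower-indexed attribute parent, and candidates are scored on a validation split rather than by the paper's criteria); the conditional densities use the standard linear-Gaussian formulas that follow from the class-conditional means and covariances. All function names are illustrative.

```python
import numpy as np

def _norm_logpdf(x, mean, var):
    return -0.5 * (np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def class_stats(X, y):
    """Per-class mean vector, covariance matrix and prior (the sufficient statistics)."""
    classes = np.unique(y)
    stats = {c: (X[y == c].mean(axis=0),
                 np.cov(X[y == c], rowvar=False) + 1e-6 * np.eye(X.shape[1]),
                 np.mean(y == c)) for c in classes}
    return classes, stats

def log_posteriors(x, parents, classes, stats):
    """Each attribute i is linear-Gaussian given its single attribute parent
    (or none) and the class; the network factorizes the joint density."""
    scores = []
    for c in classes:
        mu, cov, prior = stats[c]
        lp = np.log(prior)
        for i in range(len(x)):
            j = parents[i]
            if j is None:
                lp += _norm_logpdf(x[i], mu[i], cov[i, i])
            else:
                cond_mean = mu[i] + cov[i, j] / cov[j, j] * (x[j] - mu[j])
                cond_var = max(cov[i, i] - cov[i, j] ** 2 / cov[j, j], 1e-9)
                lp += _norm_logpdf(x[i], cond_mean, cond_var)
        scores.append(lp)
    return np.array(scores)

def accuracy(X, y, parents, classes, stats):
    pred = np.array([classes[np.argmax(log_posteriors(x, parents, classes, stats))] for x in X])
    return float(np.mean(pred == y))

def greedy_parents(X_tr, y_tr, X_val, y_val):
    """Greedily give each attribute at most one lower-indexed parent, keeping a
    candidate only when validation accuracy improves (the acyclicity constraint
    comes for free from the index ordering)."""
    classes, stats = class_stats(X_tr, y_tr)
    d = X_tr.shape[1]
    parents = [None] * d
    best = accuracy(X_val, y_val, parents, classes, stats)
    for i in range(1, d):
        for j in range(i):
            trial = list(parents)
            trial[i] = j
            acc = accuracy(X_val, y_val, trial, classes, stats)
            if acc > best:
                parents, best = trial, acc
    return parents, best
```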

4.
The naive Bayes classifier is simple and efficient, but its conditional independence assumption prevents it from representing dependencies among attributes. The TAN classifier extends the structure of the naive Bayes classifier by adding augmenting arcs under certain structural constraints. In a TAN classifier the class variable is a parent of every attribute variable, but the presence of some attributes lowers its classification accuracy. This paper proposes a selectively augmented Bayesian classifier based on the MDL metric (SANC), which uses the MDL metric to remove attribute variables and augmenting arcs that degrade classification performance. Experimental results show that SANC achieves higher classification accuracy than NBC and TANC.

5.
王峻, 周孟然. 《微机发展》, 2007, 17(7): 35-37
The naive Bayes classifier is simple and efficient, but its conditional independence assumption prevents it from representing dependencies among attributes. The TAN classifier extends the structure of the naive Bayes classifier by adding augmenting arcs under certain structural constraints. In a TAN classifier the class variable is a parent of every attribute variable, but the presence of some attributes lowers its classification accuracy. This paper proposes a selectively augmented Bayesian classifier based on the MDL metric (SANC), which uses the MDL metric to remove attribute variables and augmenting arcs that degrade classification performance. Experimental results show that SANC achieves higher classification accuracy than NBC and TANC.
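Since SANC's code is not given here, the following is an illustrative sketch under the textbook MDL definition for discrete Bayesian networks ((log N)/2 per free parameter minus the log-likelihood): it scores a TAN-style structure and greedily drops an attribute's augmenting arc, or its link to the class altogether, whenever the score improves. The dictionary representation and the Laplace smoothing are assumptions, not details from the paper.

```python
import numpy as np

def mdl_score(data, structure, n_states):
    """MDL of a discrete Bayesian-network classifier structure.

    data      : (n, m) integer matrix, one column per variable (class included)
    structure : dict  variable -> tuple of parent columns
    n_states  : dict  variable -> number of discrete states
    Score = (log n)/2 * #free parameters - log-likelihood; smaller is better."""
    n = len(data)
    n_params, loglik = 0, 0.0
    for v, parents in structure.items():
        q = int(np.prod([n_states[p] for p in parents])) if parents else 1
        n_params += (n_states[v] - 1) * q
        counts = np.ones((q, n_states[v]))             # Laplace smoothing
        pa_index = np.zeros(n, dtype=int)
        for p in parents:                              # mixed-radix parent configuration
            pa_index = pa_index * n_states[p] + data[:, p]
        np.add.at(counts, (pa_index, data[:, v]), 1)
        probs = counts / counts.sum(axis=1, keepdims=True)
        loglik += np.log(probs[pa_index, data[:, v]]).sum()
    return 0.5 * np.log(n) * n_params - loglik

def prune_by_mdl(data, structure, n_states, class_col):
    """Greedy SANC-style pruning sketch: replace an attribute's parent set by
    (class_col,) to drop its augmenting arc, or by () to disconnect it from the
    class entirely, whenever doing so lowers the MDL score."""
    best = mdl_score(data, structure, n_states)
    improved = True
    while improved:
        improved = False
        for v in [u for u in structure if u != class_col]:
            for trial_parents in [(class_col,), ()]:
                if structure[v] == trial_parents:
                    continue
                trial = {**structure, v: trial_parents}
                s = mdl_score(data, trial, n_states)
                if s < best:
                    structure, best, improved = trial, s, True
    return structure, best
```

Here the class variable stays in `structure` with an empty parent set so that its marginal is always encoded, and "removing" an attribute means cutting its link to the class rather than deleting the column.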

6.
The naive Bayes classifier has high learning and classification efficiency but cannot fully exploit the dependence information among attribute variables. Bayesian network classifiers have strong classification ability, but learning them is comparatively complex. This paper builds a generalized naive Bayes classifier that offers flexible choices of classification ability, efficiency, and learning method, remedying the shortcomings of both the naive Bayes classifier and Bayesian network classifiers while inheriting their advantages.

7.
A TAN Classifier Constructed with a Directed-Tree Algorithm (cited 1 time: 0 self, 1 other)
The tree-augmented naive Bayes (TAN) classifier relaxes the attribute independence assumption of naive Bayes and is an effective improvement over the naive Bayes classifier. In the traditional TAN construction algorithm, however, the root of the tree is chosen arbitrarily, so the dependencies among attributes cannot be expressed precisely. By assigning directions to the dependencies and introducing a directed-tree algorithm into TAN construction, a new TAN construction method, DTAN, is proposed. Experimental results show that DTAN achieves notably good classification performance on data sets with a relatively large number of instances.
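For orientation, here is a sketch of the standard TAN construction that DTAN modifies: pairwise conditional mutual information given the class, a maximum-weight spanning tree over the attributes (Prim's algorithm), and edges directed away from a root. In the traditional algorithm the root is arbitrary (index 0 below); choosing it and the edge directions in a principled way is exactly the part DTAN replaces, which this sketch does not reproduce.

```python
import numpy as np

def cond_mutual_info(xi, xj, y):
    """Empirical I(Xi; Xj | C) for discrete attributes."""
    cmi = 0.0
    for c in np.unique(y):
        sel = (y == c)
        pc = sel.mean()
        for a in np.unique(xi[sel]):
            for b in np.unique(xj[sel]):
                p_abc = np.mean(sel & (xi == a) & (xj == b))
                if p_abc == 0.0:
                    continue
                p_ac = np.mean(sel & (xi == a))
                p_bc = np.mean(sel & (xj == b))
                cmi += p_abc * np.log(p_abc * pc / (p_ac * p_bc))
    return cmi

def tan_structure(X, y, root=0):
    """Maximum-weight spanning tree over the attributes (Prim's algorithm) with
    conditional-mutual-information weights, directed away from `root`.
    Returns parent[i] = attribute parent of attribute i (None for the root);
    in the full classifier every attribute also has the class as a parent."""
    d = X.shape[1]
    W = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            W[i, j] = W[j, i] = cond_mutual_info(X[:, i], X[:, j], y)
    parent = {root: None}               # the traditional algorithm picks `root` arbitrarily
    while len(parent) < d:
        i, j = max(((i, j) for i in parent for j in range(d) if j not in parent),
                   key=lambda e: W[e])
        parent[j] = i
    return parent
```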

8.
Classification accuracy is the most important performance measure of a classifier, and feature subset selection is an effective way to improve it. Existing feature subset selection methods mainly target static classifiers, and research on feature subset selection for dynamic classifiers is lacking. This paper first presents a dynamic naive Bayesian network classifier with continuous attributes and a criterion for evaluating dynamic classification accuracy, then builds a feature subset selection method for the dynamic naive Bayesian network classifier on this basis, and conducts experiments and analysis with real macroeconomic time-series data.
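A small wrapper-style sketch of the idea, with assumed stand-ins: scikit-learn's GaussianNB in place of the paper's dynamic naive Bayesian network classifier, and TimeSeriesSplit accuracy in place of its dynamic classification-accuracy criterion, so that the evaluation at least respects the temporal order of the macroeconomic series.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import TimeSeriesSplit, cross_val_score

def forward_select(X, y, max_features=None):
    """Greedy forward wrapper selection: add the feature that most improves
    time-ordered cross-validated accuracy of a Gaussian naive Bayes classifier."""
    cv = TimeSeriesSplit(n_splits=5)          # respects the temporal order of the series
    remaining = list(range(X.shape[1]))
    selected, best_acc = [], -np.inf
    limit = max_features or X.shape[1]
    while remaining and len(selected) < limit:
        scores = {f: cross_val_score(GaussianNB(), X[:, selected + [f]], y,
                                     cv=cv, scoring="accuracy").mean()
                  for f in remaining}
        f_best = max(scores, key=scores.get)
        if scores[f_best] <= best_acc:        # stop when no candidate helps
            break
        selected.append(f_best)
        remaining.remove(f_best)
        best_acc = scores[f_best]
    return selected, best_acc
```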

9.
By analyzing the naive Bayes classifier and the tree-augmented naive Bayes (TAN) classifier, a new measure of attribute dependence is proposed and used to improve the construction of the TAN classifier. The resulting classification method (XINTAN) is compared experimentally with the naive Bayes classifier and the TAN classifier. The results show that it combines the advantages of the naive Bayes and TAN classifiers and outperforms the TAN classifier.

10.
Error-Correcting Output Codes Based on Feature-Space Transformation (cited 1 time: 0 self, 1 other)

To guarantee diversity among the base classifiers in error-correcting output coding for multi-class classification, a coding method based on feature-space transformation is proposed. The method introduces the feature space and extends the coding matrix into a three-dimensional matrix; then, for each binary partition, feature transformation is used to obtain a different feature subspace, so that base classifiers with large diversity are trained. Experimental results on public data sets show that the method achieves better classification performance than the original coding matrix while increasing the diversity of the base classifiers; it applies to any coding matrix and offers a new approach to classifying large-scale data.
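A rough sketch of the scheme under stated assumptions: the code matrix is given (entries ±1, no constant columns), the per-column feature transformation is a PCA fitted on that column's positive super-class (one simple way to make the subspace depend on the binary partition; the paper's actual transformation is not specified here), and logistic regression serves as the base classifier.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def train_ecoc(X, y, code, n_components=2):
    """ECOC with a per-column feature-space transformation.

    code : (n_classes, n_columns) matrix of +/-1; row k is the codeword of class k
           (assumed to contain no constant column).
    For each column a PCA is fitted on that column's positive super-class, so
    every base learner works in its own feature subspace."""
    classes = np.unique(y)
    row = {c: k for k, c in enumerate(classes)}
    models = []
    for col in range(code.shape[1]):
        z = np.array([code[row[c], col] for c in y])          # +/-1 super-labels
        pca = PCA(n_components=n_components).fit(X[z == 1])   # column-specific subspace
        clf = LogisticRegression(max_iter=1000).fit(pca.transform(X), z)
        models.append((pca, clf))
    return classes, models

def predict_ecoc(X, code, classes, models):
    """Decode by Hamming distance between the base-learner outputs and the codewords."""
    outputs = np.column_stack([clf.predict(pca.transform(X)) for pca, clf in models])
    dists = (outputs[:, None, :] != code[None, :, :]).sum(axis=2)
    return classes[np.argmin(dists, axis=1)]
```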


11.
The two-way decision TAN classifier has a relatively high error rate on uncertain data. To address this, a new three-way extended TAN Bayesian classifier (3WD-TAN) is proposed. First, a TAN Bayesian classification model is built, and prior probabilities and class-conditional probabilities are used to estimate the conditional probabilities in the three-way decisions. Next, the 3WD-TAN classifier is constructed, with three-way classification rules for its positive, negative, and boundary regions; by exploiting the boundary region's ability to handle uncertain data, it corrects, to some extent, the classification errors made by the traditional TAN Bayesian classifier. Finally, comparison experiments against the NB, TAN, and SETAN algorithms on five UCI data sets show that 3WD-TAN achieves relatively high precision and recall and is suitable for classification problems on data sets of different sizes.
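The three-way part can be sketched independently of the TAN model: given the posterior produced by the classifier, an instance is accepted into the positive region, rejected into the negative region, or deferred to the boundary region according to a pair of thresholds. The thresholds below either are supplied directly or are derived from the usual three-way-decision loss formulas; the concrete default values are illustrative, not the paper's.

```python
def thresholds_from_losses(l_pp, l_bp, l_np, l_nn, l_bn, l_pn):
    """Standard three-way-decision thresholds from the six losses
    lambda_{action,state}: action in {P(accept), B(boundary), N(reject)},
    state in {P(positive), N(negative)}."""
    alpha = (l_pn - l_bn) / ((l_pn - l_bn) + (l_bp - l_pp))
    beta = (l_bn - l_nn) / ((l_bn - l_nn) + (l_np - l_bp))
    return alpha, beta

def three_way_decide(posterior, alpha=0.7, beta=0.4):
    """Three-way decision on the most probable class.

    posterior : dict class -> P(class | x), e.g. produced by a TAN classifier.
    Returns ('positive', c) if the top posterior clears alpha, ('negative', c)
    if it falls below beta, and otherwise ('boundary', c): the instance is
    deferred for further treatment instead of being forced into a class."""
    c = max(posterior, key=posterior.get)
    p = posterior[c]
    if p >= alpha:
        return "positive", c
    if p <= beta:
        return "negative", c
    return "boundary", c
```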

12.

Although successful in medical diagnostic problems, inductive learning systems were not widely accepted in medical practice. In this paper two different approaches to machine learning in medical applications are compared: the system for inductive learning of decision trees Assistant, and the naive Bayesian classifier. Both methodologies were tested in four medical diagnostic problems: localization of primary tumor, prognostics of recurrence of breast cancer, diagnosis of thyroid diseases, and rheumatology. The accuracy of automatically acquired diagnostic knowledge from stored data records is compared, and the interpretation of the knowledge and the explanation ability of the classification process of each system are discussed. Surprisingly, the naive Bayesian classifier is superior to Assistant in classification accuracy and explanation ability, while the interpretation of the acquired knowledge seems to be equally valuable. In addition, two extensions to the naive Bayesian classifier are briefly described: dealing with continuous attributes, and discovering the dependencies among attributes.

13.
Traditional traffic classification techniques based on transport-layer ports or on payload signatures suffer from low accuracy and a limited range of application. To address this, a method using the tree-augmented naive Bayes classifier is proposed: a classification model is built from the statistical attributes of network flows using the statistically grounded Bayesian approach, and the model is then used to classify unknown traffic. Experiments analyze the effect of different weights and data set sizes on its performance and compare it with the NB and C4.5 algorithms. The results show that the method achieves good classification performance and high classification accuracy.

14.
This article investigates boosting naive Bayesian classification. It first shows that boosting does not improve the accuracy of the naive Bayesian classifier as much as we expected in a set of natural domains. By analyzing the reason for boosting's weakness, we propose to introduce tree structures into naive Bayesian classification to improve the performance of boosting when working with naive Bayesian classification. The experimental results show that although introducing tree structures into naive Bayesian classification increases the average error of the naive Bayesian classification for individual models, boosting naive Bayesian classifiers with tree structures can achieve significantly lower average error than both the naive Bayesian classifier and boosting the naive Bayesian classifier, providing a method of successfully applying the boosting technique to naive Bayesian classification. A bias and variance analysis confirms our expectation that the naive Bayesian classifier is a stable classifier with low variance and high bias. We show that the boosted naive Bayesian classifier has a strong bias on a linear form, exactly the same as its base learner. Introducing tree structures reduces the bias and increases the variance, and this allows boosting to gain advantage.
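A quick way to observe the first finding is to boost a plain naive Bayesian classifier and compare it with the unboosted one; the sketch below does this with scikit-learn (the parameter name `estimator` assumes scikit-learn ≥ 1.2, and the breast-cancer data set is just a convenient example, not one of the article's domains). The article's remedy, boosting naive Bayes models with tree structures, has no off-the-shelf counterpart here and is not reproduced.

```python
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_breast_cancer

# Illustrative comparison only (not the article's experiments or data).
X, y = load_breast_cancer(return_X_y=True)

plain_nb = GaussianNB()
boosted_nb = AdaBoostClassifier(estimator=GaussianNB(), n_estimators=50)

for name, model in [("naive Bayes", plain_nb), ("boosted naive Bayes", boosted_nb)]:
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```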

15.
A Naive Bayes Classification Model Based on Improved Attribute Weighting (cited 1 time: 0 self, 1 other)
A new method for measuring the correlation between attributes is constructed, and a naive Bayes classification model with improved attribute weighting is proposed. Experiments show that the proposed naive Bayes classification model clearly outperforms the classification model proposed by 张舜仲 et al.
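Since the paper's correlation measure is not reproduced here, the sketch below uses mutual information between each attribute and the class as a stand-in weight, and shows the weighted-naive-Bayes decision rule it plugs into: each attribute's class-conditional log-likelihood is scaled by its weight before the posteriors are compared. The class name and normalization are illustrative.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

class WeightedNB:
    """Attribute-weighted naive Bayes: each attribute's log-likelihood term is
    scaled by a weight; mutual information with the class is used here as a
    stand-in for the paper's correlation measure."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = np.array([np.mean(y == c) for c in self.classes_])
        self.means_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.vars_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        w = mutual_info_classif(X, y)
        self.weights_ = w / (w.sum() + 1e-12) * X.shape[1]   # normalize to mean weight 1
        return self

    def predict(self, X):
        preds = []
        for x in X:
            # per-class, per-attribute Gaussian log-likelihoods, shape (n_classes, d)
            log_lik = -0.5 * (np.log(2 * np.pi * self.vars_)
                              + (x - self.means_) ** 2 / self.vars_)
            scores = np.log(self.priors_) + (self.weights_ * log_lik).sum(axis=1)
            preds.append(self.classes_[np.argmax(scores)])
        return np.array(preds)
```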

16.
Lazy Learning of Bayesian Rules (cited 19 times: 0 self, 19 other)
The naive Bayesian classifier provides a simple and effective approach to classifier learning, but its attribute independence assumption is often violated in the real world. A number of approaches have sought to alleviate this problem. A Bayesian tree learning algorithm builds a decision tree, and generates a local naive Bayesian classifier at each leaf. The tests leading to a leaf can alleviate attribute inter-dependencies for the local naive Bayesian classifier. However, Bayesian tree learning still suffers from the small disjunct problem of tree learning. While inferred Bayesian trees demonstrate low average prediction error rates, there is reason to believe that error rates will be higher for those leaves with few training examples. This paper proposes the application of lazy learning techniques to Bayesian tree induction and presents the resulting lazy Bayesian rule learning algorithm, called LBR. This algorithm can be justified by a variant of Bayes theorem which supports a weaker conditional attribute independence assumption than is required by naive Bayes. For each test example, it builds a most appropriate rule with a local naive Bayesian classifier as its consequent. It is demonstrated that the computational requirements of LBR are reasonable in a wide cross-section of natural domains. Experiments with these domains show that, on average, this new algorithm obtains lower error rates significantly more often than the reverse in comparison to a naive Bayesian classifier, C4.5, a Bayesian tree learning algorithm, a constructive Bayesian classifier that eliminates attributes and constructs new attributes using Cartesian products of existing nominal attributes, and a lazy decision tree learning algorithm. It also outperforms, although the result is not statistically significant, a selective naive Bayesian classifier.

17.
A Restricted Double-Level Bayesian Classification Model (cited 29 times: 1 self, 28 other)
The naive Bayes classification model is a simple and effective classification method, but its attribute independence assumption prevents it from expressing the dependencies among attribute variables, which limits its classification performance. By analyzing the classification principle of Bayesian classification models and a variant form of Bayes' theorem, a new classification model based on Bayes' theorem, DLBAN (double-level Bayesian network augmented naive Bayes), is proposed. The model establishes dependencies among attributes by selecting key attributes. The method is compared experimentally with the naive Bayes classifier and the TAN (tree augmented naive Bayes) classifier. Experimental results show that DLBAN achieves relatively high classification accuracy on most data sets.

18.
For learning a Bayesian network classifier, continuous attributes usually need to be discretized. But discretizing continuous attributes can cause information loss, introduce noise, and make the model less sensitive to changes of the attributes with respect to the class variable. In this paper, we use the Gaussian kernel function with a smoothing parameter to estimate the density of attributes. A Bayesian network classifier with continuous attributes is established by dependency extension of the naive Bayes classifier. We also analyze the information that each attribute provides to the class as a basis for the dependency extension of naive Bayes classifiers. Experimental studies on UCI data sets show that Bayesian network classifiers using the Gaussian kernel function provide good classification accuracy compared with other approaches when dealing with continuous attributes.
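A minimal sketch of the kernel-density part only (the dependency extension of the structure is not shown): each attribute's class-conditional density is a univariate Gaussian kernel estimate whose bandwidth is the smoothing parameter, so no discretization of the continuous attributes is needed. Class and parameter names are illustrative.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

class KDENaiveBayes:
    """Naive Bayes whose class-conditional attribute densities are estimated with
    univariate Gaussian kernels (bandwidth = smoothing parameter) instead of
    discretizing the continuous attributes."""

    def __init__(self, bandwidth=0.5):
        self.bandwidth = bandwidth

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.priors_ = np.array([np.mean(y == c) for c in self.classes_])
        # one kernel density estimator per (class, attribute) pair
        self.kdes_ = [[KernelDensity(kernel="gaussian", bandwidth=self.bandwidth)
                       .fit(X[y == c, j:j + 1]) for j in range(X.shape[1])]
                      for c in self.classes_]
        return self

    def predict(self, X):
        log_post = np.log(self.priors_) + np.column_stack([
            sum(kde.score_samples(X[:, j:j + 1]) for j, kde in enumerate(kdes))
            for kdes in self.kdes_])
        return self.classes_[np.argmax(log_post, axis=1)]
```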

19.
A Naive Bayes Classifier Based on Multiple Discriminant Analysis (cited 4 times: 1 self, 4 other)
By analyzing the classification principle of the naive Bayes classifier and combining it with the advantages of multiple discriminant analysis, a naive Bayes classifier based on multiple discriminant analysis, DANB (Discriminant Analysis Naive Bayesian classifier), is proposed. The method is compared experimentally with the naive Bayes classifier (NB) and the TAN classifier (Tree Augmented Naive Bayesian classifier). Experimental results show that DANB achieves relatively high classification accuracy on most data sets.
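A rough stand-in for the DANB idea using off-the-shelf components (an assumed approximation, not the paper's algorithm): project the attributes with multiple (linear) discriminant analysis and run naive Bayes in the projected space, where the discriminant directions are closer to independent; the iris data set is only a placeholder.

```python
from sklearn.pipeline import make_pipeline
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score
from sklearn.datasets import load_iris

# Project with (multi-class) linear discriminant analysis, then apply naive Bayes.
X, y = load_iris(return_X_y=True)
danb_like = make_pipeline(LinearDiscriminantAnalysis(), GaussianNB())
print(cross_val_score(danb_like, X, y, cv=5).mean())
```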
