Similar Documents
 Found 19 similar documents (search time: 109 ms)
1.
A TAN Classifier Constructed with a Directed-Tree Algorithm   Cited: 1 (self-citations: 0, other: 1)
The tree-augmented naive Bayes (TAN) classifier relaxes the attribute-independence assumption of naive Bayes and is an effective improvement on the naive Bayes classifier. In the traditional TAN construction algorithm, however, the root node of the tree is chosen arbitrarily, so the model cannot accurately express the dependence relations among attributes. By assigning directions to the dependence relations and introducing a directed-tree algorithm into TAN construction, this paper proposes a new TAN construction method, DTAN. Experimental results show that DTAN achieves markedly better classification performance on datasets with a large number of instances.

2.
By analyzing the naive Bayes classifier and the tree-augmented naive Bayes (TAN) classifier, this paper proposes a new measure of attribute dependence and uses it to improve the construction of the TAN classifier. The resulting method (XINTAN) is compared experimentally with the naive Bayes and TAN classifiers. The results show that it combines the strengths of both and outperforms the TAN classifier.

3.
The naive Bayes classifier is simple and efficient, but its conditional-independence assumption prevents it from representing dependence relations among attributes. The TAN classifier extends the naive Bayes structure by adding augmenting arcs subject to certain structural constraints. In a TAN classifier the class variable is a parent of every attribute variable, yet the presence of some attributes lowers classification accuracy. This paper proposes a selective augmented Bayes classifier (SANC) based on the MDL metric, which removes the attribute variables and augmenting arcs that harm classification performance. Experimental results show that SANC achieves higher classification accuracy than NBC and TANC.

4.
Multiple Strategies for Improving the Naive Bayes Classifier   Cited: 7 (self-citations: 1, other: 7)
张璠. 微机发展, 2005, 15(4): 35-36, 39
The naive Bayes classifier is simple and efficient, but its attribute-independence assumption prevents it from representing the dependence relations among real-world attributes, which hurts its classification performance. Through an extensive survey, this paper systematically analyzes and categorizes the various strategies for improving the naive Bayes classifier, laying a solid foundation for further research.

5.
The naive Bayes classifier is simple and efficient, but its attribute-independence assumption prevents it from representing the dependence relations among real-world attributes, which hurts its classification performance. Through an extensive survey, this paper systematically analyzes and categorizes the various strategies for improving the naive Bayes classifier, laying a solid foundation for further research.

6.
王峻, 周孟然. 微机发展, 2007, 17(7): 35-37
The naive Bayes classifier is simple and efficient, but its conditional-independence assumption prevents it from representing dependence relations among attributes. The TAN classifier extends the naive Bayes structure by adding augmenting arcs subject to certain structural constraints. In a TAN classifier the class variable is a parent of every attribute variable, yet the presence of some attributes lowers classification accuracy. This paper proposes a selective augmented Bayes classifier (SANC) based on the MDL metric, which removes the attribute variables and augmenting arcs that harm classification performance. Experimental results show that SANC achieves higher classification accuracy than NBC and TANC.

7.
A Feature-Weighted Naive Bayes Classifier   Cited: 13 (self-citations: 0, other: 13)
程克非, 张聪. 计算机仿真, 2006, 23(10): 92-94, 150
The naive Bayes classifier is a widely used classification algorithm with excellent computational efficiency and classification performance. Because its underlying "naive Bayes assumption" deviates from reality, however, it may give poor results on some data. Many methods attempt to strengthen the Bayesian classifier by relaxing this assumption, but usually at a sharply increased computational cost. This paper instead enhances the naive Bayes classifier with feature weighting. The weight parameters are derived directly from the data and can be interpreted as the degree to which an attribute influences the computation of a class's posterior probability. Numerical experiments show that the feature-weighted naive Bayes classifier (FWNB) performs on par with common alternatives such as tree-augmented naive Bayes (TAN) and naive Bayes trees (NBTree), with average error rates around 17%; in speed, FWNB is close to NB and at least an order of magnitude faster than TAN and NBTree.
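The abstract does not give the paper's exact weight-derivation formula, so the following is only a sketch of the general feature-weighted naive Bayes scheme it describes: each attribute's log-likelihood contribution is scaled by a per-attribute weight. The function name, weights, and toy data below are illustrative, not taken from the paper.

```python
from collections import Counter, defaultdict
import math

def train_weighted_nb(X, y, weights, alpha=1.0):
    """Train a feature-weighted naive Bayes model on categorical data.

    weights[i] scales the log-likelihood contribution of attribute i
    (weights of 1.0 recover plain naive Bayes)."""
    n = len(y)
    classes = sorted(set(y))
    class_count = Counter(y)
    prior = {c: math.log(class_count[c] / n) for c in classes}
    # cond[(i, c)] maps attribute value -> count for attribute i given class c
    cond = defaultdict(Counter)
    values = defaultdict(set)
    for xs, c in zip(X, y):
        for i, v in enumerate(xs):
            cond[(i, c)][v] += 1
            values[i].add(v)

    def predict(xs):
        best, best_score = None, -math.inf
        for c in classes:
            score = prior[c]
            for i, v in enumerate(xs):
                # Laplace-smoothed P(x_i | c), weighted in log space
                p = (cond[(i, c)][v] + alpha) / (class_count[c] + alpha * len(values[i]))
                score += weights[i] * math.log(p)
            if score > best_score:
                best, best_score = c, score
        return best

    return predict

# Tiny example: attribute 0 is informative, attribute 1 is noise,
# so it is down-weighted.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = ['a', 'a', 'b', 'b']
predict = train_weighted_nb(X, y, weights=[1.0, 0.2])
print(predict((0, 1)))  # -> 'a'
```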

8.
To address the shortcomings of the attribute-independence assumption in naive Bayes classification, this paper discusses the concepts of correlation and multivariate correlation and defines a measure of inter-word correlation. Building on the inter-word correlation analysis of the TAN classifier, it proposes a formula for estimating the correlation of document feature words and an algorithm that applies it in an improved naive Bayes classification model. Experiments on the Reuters-21578 text dataset show that the improved algorithm is simple to implement and effectively improves Bayesian classification performance.

9.
The naive Bayes classifier is a simple and effective method for automatic text classification, but its independence assumption does not hold in practice. The TAN-structured Bayesian classification algorithm takes pairwise dependencies between attributes into account, weakening the independence assumption to a certain degree.

10.
11.
The tree-augmented naive Bayes model improves on the naive Bayes model by relaxing conditional attribute independence; its structure learning is efficient and simple. In some practical tests, however, the classification accuracy and error rate of the tree-augmented naive Bayes model are unsatisfactory. This paper therefore designs an averaged tree-augmented naive Bayes classification algorithm to improve classification, uses conditional log-likelihood to evaluate the classification estimates, and runs experiments on a large number of UCI datasets published via the Weka platform. The results show that the averaged tree-augmented naive Bayes model clearly outperforms the standard tree-augmented naive Bayes model.

12.
A Naive Bayes Classifier Based on Multiple Discriminant Analysis   Cited: 4 (self-citations: 1, other: 4)
By analyzing the classification principle of the naive Bayes classifier and drawing on the strengths of multiple discriminant analysis, this paper proposes DANB (Discriminant Analysis Naive Bayesian classifier), a naive Bayes classifier based on multiple discriminant analysis. The method is compared experimentally with the naive Bayes classifier (NB) and the TAN (Tree Augmented Naive Bayesian) classifier. The results show that DANB achieves higher classification accuracy on most datasets.

13.
Bayesian Network Classifiers   Cited: 154 (self-citations: 0, other: 154)
Friedman, Nir; Geiger, Dan; Goldszmidt, Moises. Machine Learning, 1997, 29(2-3): 131-163
Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.
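Friedman et al.'s TAN procedure is well documented: weight each attribute pair by its conditional mutual information given the class, build a maximum weighted spanning tree over the attributes, orient it away from an arbitrary root, and additionally make the class a parent of every attribute. A minimal sketch (function names and the toy data are illustrative):

```python
from collections import Counter
import math

def cond_mutual_info(X, y, i, j):
    """Empirical conditional mutual information I(X_i; X_j | C)."""
    n = len(y)
    c_xy = Counter((y[k], X[k][i], X[k][j]) for k in range(n))
    c_x = Counter((y[k], X[k][i]) for k in range(n))
    c_y = Counter((y[k], X[k][j]) for k in range(n))
    c_c = Counter(y)
    mi = 0.0
    for (c, xi, xj), nxy in c_xy.items():
        # p(xi,xj,c) * log( p(xi,xj|c) / (p(xi|c) * p(xj|c)) )
        mi += (nxy / n) * math.log((nxy * c_c[c]) / (c_x[(c, xi)] * c_y[(c, xj)]))
    return mi

def tan_tree(X, y, n_attrs):
    """Maximum spanning tree over attributes weighted by I(X_i;X_j|C),
    grown with Prim's algorithm and oriented away from attribute 0."""
    in_tree = {0}
    parent = {}
    while len(in_tree) < n_attrs:
        _, i, j = max((cond_mutual_info(X, y, i, j), i, j)
                      for i in in_tree for j in range(n_attrs)
                      if j not in in_tree)
        parent[j] = i  # attribute j's extra parent is attribute i
        in_tree.add(j)
    return parent      # the class C is additionally a parent of every attribute

# Toy data: within each class, attribute 1 is a function of attribute 0,
# while attribute 2 is independent noise.
X = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1),
     (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1)]
y = ['a'] * 4 + ['b'] * 4
print(tan_tree(X, y, 3))  # -> {1: 0, 2: 1}
```

Note that no search is involved: the tree is determined directly by the pairwise conditional mutual information scores, which is what keeps TAN close to naive Bayes in training cost.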

14.
A Restrictive Double-Level Bayesian Classification Model   Cited: 28 (self-citations: 1, other: 28)
The naive Bayes classification model is simple and effective, but its attribute-independence assumption prevents it from expressing the dependence relations among attribute variables, which hurts its classification performance. By analyzing the classification principle of Bayesian classification models and a variant form of Bayes' theorem, this paper proposes a new classification model based on Bayes' theorem, DLBAN (double-level Bayesian network augmented naive Bayes), which establishes dependence relations among attributes by selecting key attributes. The method is compared experimentally with the naive Bayes classifier and the TAN (tree augmented naive Bayes) classifier. The results show that DLBAN achieves higher classification accuracy on most datasets.

15.
In this paper we present several Bayesian algorithms for learning Tree Augmented Naive Bayes (TAN) models. We extend the results in Meila & Jaakkola (2000a) to TANs by proving that, accepting a prior decomposable distribution over TANs, we can compute the exact Bayesian model averaging over TAN structures and parameters in polynomial time. Furthermore, we prove that the k-maximum a posteriori (MAP) TAN structures can also be computed in polynomial time. We use these results to correct minor errors in Meila & Jaakkola (2000a) and to construct several TAN based classifiers. We show that these classifiers provide consistently better predictions over Irvine datasets and artificially generated data than TAN based classifiers proposed in the literature.

16.
Bayesian networks are important knowledge representation tools for handling uncertain pieces of information. The success of these models is strongly related to their capacity to represent and handle dependence relations. Some forms of Bayesian networks have been successfully applied in many classification tasks. In particular, naive Bayes classifiers have been used for intrusion detection and alerts correlation. This paper analyses the advantage of adding expert knowledge to probabilistic classifiers in the context of intrusion detection and alerts correlation. As examples of probabilistic classifiers, we will consider the well-known Naive Bayes, Tree Augmented Naïve Bayes (TAN), Hidden Naive Bayes (HNB) and decision tree classifiers. Our approach can be applied for any classifier where the outcome is a probability distribution over a set of classes (or decisions). In particular, we study how additional expert knowledge such as “it is expected that 80 % of traffic will be normal” can be integrated in classification tasks. Our aim is to revise probabilistic classifiers’ outputs in order to fit expert knowledge. Experimental results show that our approach improves existing results on different benchmarks from intrusion detection and alert correlation areas.
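The abstract does not spell out the paper's revision scheme, so the following is only a sketch of one simple way to fit classifier outputs to expert knowledge such as "about 80% of traffic will be normal": rescale each predicted distribution by the ratio of the expert prior to the model's average predicted class proportion, then renormalize. The function name and example data are hypothetical.

```python
def revise_with_prior(probs, expert_prior):
    """Revise predicted distributions so that, on average, the class
    proportions match an expert-supplied prior.

    probs: list of dicts mapping class -> predicted probability.
    expert_prior: dict mapping class -> expert-expected proportion."""
    n = len(probs)
    classes = expert_prior.keys()
    # Average predicted probability per class across all instances
    avg = {c: sum(p[c] for p in probs) / n for c in classes}
    revised = []
    for p in probs:
        # Boost classes the expert expects more often than the model predicts
        scaled = {c: p[c] * expert_prior[c] / avg[c] for c in classes}
        z = sum(scaled.values())
        revised.append({c: v / z for c, v in scaled.items()})
    return revised

# Expert expects 80% "normal" traffic, but the classifier averages 50/50.
preds = [{'normal': 0.6, 'attack': 0.4},
         {'normal': 0.4, 'attack': 0.6}]
out = revise_with_prior(preds, {'normal': 0.8, 'attack': 0.2})
```

After revision both instances lean more toward "normal", and the average predicted proportion of "normal" moves close to the expert's 80%.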

17.
This paper evaluates the effect on the predictive accuracy of different models of two recently proposed imputation methods, namely missForest (MF) and Multiple Imputation based on Expectation-Maximization (MIEM), along with two other imputation methods: Sequential Hot-deck and Multiple Imputation based on Logistic Regression (MILR). Their effect is assessed over the classification accuracy of four different models, namely Tree Augmented Naive Bayes (TAN), which has received little attention, Naive Bayes (NB), Logistic Regression (LR), and Support Vector Machine (SVM) with Radial Basis Function (RBF) kernel. Experiments are conducted over fourteen binary datasets with large feature sets, and across a wide range of missing data rates (between 5 and 50%). The results from 10-fold cross-validations show that the performance of the imputation methods varies substantially between different classifiers and at different rates of missing values. The MIEM method is shown to generally give the best results for all the classifiers across all rates of missing data. While the NB model does not benefit much from imputation compared to a no-imputation baseline, LR and TAN are highly susceptible to gain from the imputation methods at higher rates of missing values. The results also show that MF works best with TAN, and Hot-deck degrades the predictive performance of SVM and NB models at high rates of missing values (over 30%). Detailed analysis of the imputation methods over the different datasets is reported. Implications of these findings on the choice of an imputation method are discussed.

18.
A Novel Hybrid Bayesian Classification Model   Cited: 2 (self-citations: 0, other: 2)
The naive Bayes classifier (NB) is a simple and effective classification model, but it makes insufficient use of the information in the training set, which limits its classification performance. By analyzing the classification principle of NB and combining the strengths of linear discriminant analysis (LDA) and kernel discriminant analysis (KDA), this paper proposes a hybrid Bayesian classification model, DANB (Discriminant Analysis Naive Bayesian classifier). The method is compared experimentally with NB and TAN (Tree Augmented Naive Bayesian classifier); the results show that DANB achieves higher classification accuracy on most datasets.

19.
The Naive Bayes classifier is a popular classification technique for data mining and machine learning. It has been shown to be very effective on a variety of data classification problems. However, the strong assumption that all attributes are conditionally independent given the class is often violated in real-world applications. Numerous methods have been proposed in order to improve the performance of the Naive Bayes classifier by alleviating the attribute independence assumption. However, violation of the independence assumption can increase the expected error. Another alternative is assigning weights to the attributes. In this paper, we propose a novel attribute-weighted Naive Bayes classifier that assigns weights to the conditional probabilities. An objective function is modeled and taken into account, which is based on the structure of the Naive Bayes classifier and the attribute weights. The optimal weights are determined by a local optimization method using the quasisecant method. In the proposed approach, the Naive Bayes classifier is taken as a starting point. We report the results of numerical experiments on several real-world data sets in binary classification, which show the efficiency of the proposed method.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号